
HK1158862A - Imaging apparatus - Google Patents


Info

Publication number
HK1158862A
HK1158862A
Authority
HK
Hong Kong
Prior art keywords
image
unit
image pickup
imaging
captured
Prior art date
Application number
HK11113207.9A
Other languages
Chinese (zh)
Inventor
松本康佑
宫本直知
Original Assignee
卡西欧计算机株式会社 (Casio Computer Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 卡西欧计算机株式会社 (Casio Computer Co., Ltd.)
Publication of HK1158862A

Abstract

An imaging apparatus including a display section, a capturing section which captures an image at a first viewing angle, a capturing control section which causes the capturing section to perform a plurality of capturing operations, a generation section which generates a composite image reproducing an image captured at a second viewing angle that is wider than the first viewing angle by combining a plurality of images acquired through the plurality of capturing operations performed by the capturing control section, and a display control section which displays the composite image generated by the generation section on the display section.

Description

Image pickup apparatus
Technical Field
The present invention relates to an imaging apparatus.
Background
In a mobile phone or other device having digital photographing and image pickup functions, the limit of the imaging angle of view depends on hardware specifications of the device main body, such as the focal length of the lens and the size of the image pickup element.
To overcome this limitation of the imaging angle of view, there are techniques such as mounting a conversion lens for wide-angle imaging on the existing lens (for example, Japanese Laid-Open Patent Publication Nos. 2004-191897, 2005-027142, and 2005-057548). There is also a technique of providing a plurality of lenses in advance and switching between them according to the purpose of imaging (for example, Japanese Laid-Open Patent Publication No. 2007-081473).
However, with the above conventional techniques, wide-angle imaging requires mounting a conversion lens each time or switching lenses according to the purpose of imaging, which poses problems in terms of operability and cost. Further, even when a conversion lens or a switchable lens for wide-angle imaging is used, it remains difficult to obtain the wide-angle image the photographer desires.
Disclosure of Invention
Accordingly, an object of the present invention is to provide an imaging device capable of easily obtaining a wide-angle image.
An image pickup apparatus according to an embodiment of the present invention includes: a display unit; an image pickup unit that captures an image at a 1 st angle of view; an image pickup control unit that causes the image pickup unit to perform image pickup a plurality of times; a generation unit that generates a composite image by combining the plurality of images captured under control of the image pickup control unit, the composite image reproducing an image captured at a 2 nd angle of view that is wider than the 1 st angle of view; and a display control unit that displays the composite image generated by the generation unit on the display unit.
Drawings
Fig. 1 is a block diagram showing the configuration of a digital camera of embodiment 1 of the present invention;
fig. 2 is a conceptual diagram for explaining a wide-angle image capturing mode of the digital camera 1 according to embodiment 1;
fig. 3 is a conceptual diagram showing the relationship between the angle of view of the lens and a composite image obtained in the wide-angle image capturing mode of the digital camera 1 according to embodiment 1;
fig. 4 is a schematic diagram for explaining a user operation in the wide-angle image capturing mode of the digital camera 1 according to embodiment 1;
fig. 5 is a flowchart for explaining the operation of the digital camera according to embodiment 1;
fig. 6 is a schematic diagram for explaining image composition in the wide-angle imaging mode of the digital camera according to embodiment 1;
fig. 7 is a flowchart for explaining the operation of the digital camera according to embodiment 2;
fig. 8 is a schematic diagram showing a display example of an image display section of the digital camera according to embodiment 2;
fig. 9 is a schematic diagram showing an operation of the digital camera and a display example of an image display unit according to embodiment 2;
fig. 10 is a flowchart for explaining the operation of the digital camera according to embodiment 3;
fig. 11 is a schematic diagram showing an operation of the digital camera and a display example of an image display unit according to embodiment 3;
fig. 12 is a flowchart for explaining the operation of the digital camera according to embodiment 4;
fig. 13 is a schematic diagram showing an operation of the digital camera and a display example of an image display unit according to embodiment 4;
fig. 14 is a schematic diagram showing a modification of embodiment 4.
Detailed Description
Embodiments of the present invention will be described below with reference to the drawings. However, the scope of the invention is not limited to the illustrated examples.
A. Embodiment 1
A-1. Structure of embodiment 1
Fig. 1 is a block diagram showing the configuration of a digital camera according to embodiment 1 of the present invention. In the figure, a digital camera 1 includes an imaging lens 2, a lens driving section 3, a shutter 4 that also serves as a diaphragm, a CCD5, a TG (Timing Generator) 6, a unit circuit 7, an image processing section 8, a CPU11, a DRAM12, a memory 13, a flash memory 14, an image display section 15, a key input section 16, a card I/F17, and a memory card 18.
The imaging lens 2 includes a focus lens, a zoom lens, and the like, and is connected to the lens driving section 3. The lens driving section 3 includes a focus motor and a zoom motor that drive the focus lens and the zoom lens constituting the imaging lens 2 in the optical axis direction, and a focus motor driver and a zoom motor driver that drive these motors in accordance with control signals from the CPU 11.
The diaphragm 4 includes a drive circuit, not shown, and the drive circuit operates the diaphragm 4 in accordance with a control signal from the CPU 11. The diaphragm 4 controls the amount of light entering from the imaging lens 2. The CCD (image pickup device) 5 converts light of the subject projected through the image pickup lens 2 and the diaphragm 4 into an electric signal, and outputs the electric signal to the unit circuit 7 as an image pickup signal. The CCD5 is driven by a timing signal of a predetermined frequency generated by the TG 6.
The unit circuit 7 includes a correlated double sampling (CDS) circuit that samples and holds the image pickup signal output from the CCD5, an automatic gain control (AGC) circuit that performs automatic gain adjustment of the sampled signal, and an A/D converter that converts the gain-adjusted analog image pickup signal into a digital signal. The image pickup signal of the CCD5 is transmitted as a digital signal to the image processing unit 8 via the unit circuit 7. The unit circuit 7 is driven based on a timing signal of a predetermined frequency generated by the TG 6.
The image processing unit 8 performs image processing (pixel interpolation processing, γ correction, generation of a luminance/color difference signal, white balance processing, exposure correction processing, and the like) of image data from the unit circuit 7, processing of compression and expansion (for example, compression and expansion in JPEG format, M-JPEG format, or MPEG format) of image data, processing of synthesizing a plurality of captured images, and the like. The image processing unit 8 is driven based on a timing signal of a predetermined frequency generated by the TG 6.
The CPU11 is a single-chip microcomputer that controls each part of the digital camera 1. In particular, in embodiment 1, the CPU11 controls the respective sections so that a plurality of images are captured continuously at a predetermined cycle (time interval), and the captured images are synthesized so as to partially overlap each other (for example, using α blending), thereby generating a single synthesized image equivalent to one captured at a wide angle. The details of image synthesis will be described later.
The DRAM12 serves as a buffer that temporarily stores image data captured by the CCD5, and also as a work memory for the CPU11. The memory 13 stores the programs the CPU11 needs to control each section of the digital camera 1 and the data needed for that control, and the CPU11 performs processing according to these programs. The flash memory 14 and the memory card 18 are recording media for storing image data captured by the CCD5, and the like.
The image display unit 15 includes a color LCD and its drive circuit. In the imaging standby state, it displays the subject imaged by the CCD5 as a through image; when reproducing a recorded image, it reads the image from the flash memory 14 or the memory card 18, expands it, and displays it. In embodiment 1, a synthesized image obtained by sequentially combining a plurality of images captured continuously in the wide-angle imaging mode is displayed. The key input unit 16 includes a plurality of operation keys such as a shutter SW, a zoom SW, a mode key, a SET key, and a cross key, and outputs an operation signal corresponding to the user's key operation to the CPU11. The memory card 18 is removably mounted to the card I/F17 via a card slot, not shown, in the main body of the digital camera 1.
Fig. 2 is a conceptual diagram for explaining the wide-angle image capturing mode of the digital camera 1 according to embodiment 1. For example, assume a case where a scene as shown in fig. 2 is photographed with the digital camera 1. Capturing the desired range of the scene requires an angle of view larger than the angle of view S of the imaging system of the digital camera 1; the entire desired scene therefore cannot be captured in a single shot.
Thus, in embodiment 1, while the user moves the imaging direction of the digital camera 1 so as to cover the desired scene, a plurality of images are captured continuously for a predetermined time, at intervals of a predetermined number of frames, or at a predetermined cycle (time interval). A wide-angle imaging mode is thereby provided that makes it easy to obtain a wide-angle image by synthesizing the captured images in a partially overlapping manner.
In the following description, the scene in fig. 2 is illustrated as shown in fig. 3 in order to clarify the imaging range, the imaging angle, and the like. In fig. 3, the angle of view S1 corresponds to the size (angle of view) of the finally generated image; anything outside it does not appear in the final image even if captured.
In embodiment 1, an array for writing images is allocated in the memory (DRAM12). For convenience, this is referred to as a canvas in embodiment 1. The canvas represents the imaging range reproduced by the generated wide-angle synthesized image. That is, a plurality of captured images are aligned (registered) so as to partially overlap, synthesized, and written onto the canvas. A wide-angle image is then generated by extracting the area of the canvas where the image has been written. In embodiment 1, the first image captured in the wide-angle imaging mode is used as a reference image (the image corresponding to the angle of view S in fig. 3), a region twice the reference image in both the vertical and horizontal directions is used as the canvas (the imaging region S1 in fig. 3), and the reference image is pasted at the center of the canvas. The canvas may also be more than twice the reference image in the vertical and horizontal directions.
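As a sketch only — the patent specifies no code, and NumPy plus every name below is an assumption made for illustration — the canvas bookkeeping just described (a buffer twice the reference image in each direction, the reference pasted at its center, plus a mask recording which pixels have been written) might look like:

```python
import numpy as np

def make_canvas(reference, scale=2):
    """Allocate a canvas `scale` times the reference image in height and
    width (embodiment 1 uses 2x) and paste the reference at its center.
    Also returns a boolean mask marking which canvas pixels are written."""
    h, w = reference.shape[:2]
    canvas = np.zeros((h * scale, w * scale) + reference.shape[2:],
                      dtype=reference.dtype)
    mask = np.zeros(canvas.shape[:2], dtype=bool)
    top = (canvas.shape[0] - h) // 2
    left = (canvas.shape[1] - w) // 2
    canvas[top:top + h, left:left + w] = reference
    mask[top:top + h, left:left + w] = True
    return canvas, mask

# hypothetical 4x6 grayscale reference image
ref = np.full((4, 6), 100, dtype=np.uint8)
canvas, mask = make_canvas(ref)
```

The mask is what later steps would consult to distinguish the blank (not-yet-acquired) portions of the canvas from portions already written.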
As the alignment method, a method such as block matching can be used, for example. When writing an image onto the canvas, the image may be projectively transformed and then superimposed by a method such as α blending.
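A minimal sketch of the block-matching idea mentioned above, assuming integer pixel shifts and a brute-force search over a small window (all names are hypothetical; a real implementation would add subpixel refinement and the projective transformation the text mentions):

```python
import numpy as np

def estimate_offset(prev_img, curr_img, search=4):
    """Return the integer shift (dy, dx) of curr_img relative to
    prev_img that minimizes the mean absolute difference over the
    overlapping region (brute-force block matching).
    Convention: curr_img[y + dy, x + dx] ~= prev_img[y, x]."""
    best, best_score = (0, 0), float("inf")
    h, w = prev_img.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # overlapping crops of the two frames under this shift
            a = prev_img[max(-dy, 0):h + min(-dy, 0),
                         max(-dx, 0):w + min(-dx, 0)]
            b = curr_img[max(dy, 0):h + min(dy, 0),
                         max(dx, 0):w + min(dx, 0)]
            score = np.abs(a.astype(int) - b.astype(int)).mean()
            if score < best_score:
                best, best_score = (dy, dx), score
    return best

# synthetic pan: the current frame is the previous frame moved 2 px right
prev = np.zeros((16, 16), dtype=np.uint8)
prev[6:10, 6:10] = 255
curr = np.roll(prev, 2, axis=1)
```

The estimated shift is what lets each new reduced image be placed on the canvas relative to the images already written there.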
Fig. 4 is a schematic diagram for explaining the user operation in the wide-angle image capturing mode of the digital camera 1 according to embodiment 1. The user moves the digital camera 1 over the desired scene, for example from the center along the circular path indicated by the arrows, while keeping the shutter SW pressed (half-press → full-press). However, it is difficult for the user to know how to move the digital camera 1, or whether the necessary images have actually been obtained.
Thus, in embodiment 1, when the user presses the shutter SW (half-press → full-press), a plurality of images are captured continuously at predetermined time intervals, at intervals of a predetermined number of frames, or at a predetermined cycle (time interval), as described above. A reduced (low-resolution) image is generated in real time for each shot, synthesized with the reference image (or the current synthesized image) so as to partially overlap it, and the synthesized image is displayed on the image display unit 15. At this time, the original (high-quality) image from which each reduced image used for synthesis was generated is stored.
Then, when shooting for the predetermined time or the predetermined number of frames is finished, the stored original (high-quality) images are synthesized in a partially overlapping manner in the same way as the synthesis using the reduced images. Through this series of processing, a wide-angle image that cannot be obtained in a single shot is finally generated. During continuous shooting, the synthesized reduced images are displayed on the image display unit 15, so the user can easily confirm in which direction the digital camera is pointed.
A-2. actions of embodiment 1
The following describes the operation of embodiment 1.
Fig. 5 is a flowchart for explaining the operation of the digital camera according to embodiment 1. Figs. 6A and 6B are schematic diagrams for explaining image synthesis in the wide-angle imaging mode of the digital camera according to embodiment 1.
First, the CPU11 determines whether the shutter SW is half-pressed (step S10), and repeatedly executes step S10 if it is not. On the other hand, if the shutter SW is half-pressed, AF (autofocus) processing is performed (step S12), and it is determined whether the shutter SW is fully pressed (step S14). While the shutter SW is not fully pressed, steps S10 and S12 are repeatedly executed.
On the other hand, if the shutter SW is fully pressed, the captured image is read, and reduction (pixel clipping) processing is executed to generate a reduced image (step S16). Next, the position of the image for superimposition is calculated using the reduced image (step S18). This calculation determines, for example, the center position (coordinates) of the reduced image and, when a reference image (or synthesized image) already exists, aligns the reduced image of the current frame with the reference image (or synthesized image) so that they partially overlap, thereby determining the position of the current frame's reduced image within the canvas. Next, it is determined from the center position of the reduced image and its position within the canvas whether the center position lies in the processing area (within the canvas) (step S20).
Then, when the center position of the reduced image lies within the processing area, the read captured image (high definition) is saved as an effective image (step S22), and the reduced image is written into the blank (not-yet-acquired) portion of the canvas (step S24). That is, when the center position of the current frame's reduced image lies within the processing area, the reduced image of the current frame and the reference image (or synthesized image) are synthesized so as to partially overlap and written onto the canvas 40 (the 1 st captured image is written into the center portion of the canvas 40 as the reference image). In the example shown in fig. 6A, since the center position of the reduced image 31 of the current frame lies within the processing area 40, the reduced image 31 of the current frame and the reference image 30 are combined so as to partially overlap and written onto the canvas 40. The image display unit 15 then displays the composite image 32 (step S26).
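Steps S20 to S24 — accept a frame only when its center lies inside the canvas, write it into blank portions, and blend where it overlaps what is already there — can be sketched as follows. This is a hypothetical illustration: the function name is invented, and the fixed 50/50 α blend merely stands in for whatever blending the actual implementation uses:

```python
import numpy as np

def paste_if_inside(canvas, mask, frame, top, left):
    """Write `frame` onto the canvas at (top, left) only when its center
    lies inside the canvas (step S20), filling previously unwritten
    pixels and alpha-blending where it overlaps earlier frames.
    Returns True when the frame was composited, False when skipped."""
    fh, fw = frame.shape[:2]
    cy, cx = top + fh // 2, left + fw // 2
    if not (0 <= cy < canvas.shape[0] and 0 <= cx < canvas.shape[1]):
        return False          # center outside the processing area: skip
    # clip the paste rectangle to the canvas bounds
    t, l = max(top, 0), max(left, 0)
    b = min(top + fh, canvas.shape[0])
    r = min(left + fw, canvas.shape[1])
    sub = frame[t - top:b - top, l - left:r - left]
    region = canvas[t:b, l:r]
    written = mask[t:b, l:r]
    # blank pixels take the new value; overlaps get a 50/50 alpha blend
    region[~written] = sub[~written]
    region[written] = (region[written].astype(np.uint16) + sub[written]) // 2
    mask[t:b, l:r] = True
    return True
```

A frame whose center falls outside the canvas is rejected without touching the canvas, matching the "image synthesis is not performed" branch of the flowchart.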
Next, it is determined whether or not all of the images necessary for generating the wide-angle image are acquired (for example, whether or not the images are acquired for a predetermined time or for a predetermined number of frames) (step S28). When all necessary images are not acquired, the process returns to step S16, and the same process is repeated for the captured image of the next frame. As a result, if the center position of the captured image is located within the processing area every time the image is captured, the images are sequentially combined with the reference image (or the combined image), and the combined image is displayed on the image display unit 15 every time.
On the other hand, when the center position of the reduced image of the current frame is not within the processing region, the process returns to step S16, and the same process is repeated for the next captured image. For example, as shown in fig. 6B, when the center position of the reduced image 31 of the current frame is not within the processing area 40, image synthesis is not performed.
Then, if all necessary images are acquired, effective images stored as original images of reduced images for synthesis are aligned and synthesized in a partially overlapping manner in the same manner as in the synthesis using reduced images, and finally, a wide-angle image as shown in fig. 2 is generated (step S30).
According to embodiment 1 described above, the synthesized reduced image is displayed on the image display unit 15 in real time each time a shot is taken during continuous shooting. The user can thus easily see which directions have and have not been photographed. As a result, the user knows in which direction to point the digital camera next, and a wide-angle image can be obtained easily and efficiently.
B. Embodiment 2
Next, embodiment 2 of the present invention will be explained.
In embodiment 2, the user is notified when imaging occurs outside the processing area (outside the canvas), when the digital camera is moved (its imaging direction changed) too quickly, and so on. The user can thereby tell how fast to move the digital camera and in which direction to point it. Since the configuration of the digital camera 1 is the same as in fig. 1, its description is omitted.
Fig. 7 is a flowchart for explaining the operation of the digital camera 1 according to embodiment 2. Fig. 8 and 9 are schematic diagrams showing an example of display of the image display unit 15 of the digital camera 1 according to embodiment 2. First, the CPU11 determines whether the shutter SW is half-pressed (step S30), and repeatedly executes step S30 when the shutter SW is not half-pressed. On the other hand, if the shutter SW is half pressed, AF (auto focus) processing is performed (step S32), and it is determined whether the shutter SW is fully pressed (step S34). When the shutter SW is not fully pressed, steps S30 and S32 are repeatedly executed.
On the other hand, if the shutter SW is fully pressed, the captured image is read, and reduction (pixel clipping) processing is executed to generate a reduced image (step S36). Next, the position of the image for superimposition is calculated using the reduced image (step S38). This calculation determines, for example, the center position (coordinates) of the reduced image and, when a reference image (or composite image) already exists, aligns the reduced image of the current frame with the reference image (or composite image) so that they partially overlap, thereby determining the position of the current frame's reduced image within the canvas, the distance from the center position (coordinates) of the reduced image captured immediately before, and the like. Next, it is determined from the center position of the reduced image and its position within the canvas whether the center position lies in the processing area (within the canvas) (step S40).
When the center position of the reduced image is not within the processing area, an out-of-area mark is displayed on the image display unit 15 (step S42). For example, as shown in fig. 8A, when the center of the reduced image 31 of the current frame lies outside the canvas 40 serving as the processing area, an arrow 50 indicating the direction back into the processing area is displayed on the image display portion 15 as the out-of-area mark, as shown in fig. 8B. The user thus knows that the imaging direction of the digital camera 1 has strayed from the processing area, and can return the imaging angle of view to the processing area by moving the digital camera 1 in the direction of the arrow 50. Thereafter, the process returns to step S36, and the above-described processing is performed on the next captured image. That is, image synthesis is not performed in this case.
On the other hand, when the center position of the reduced image is located within the processing area, it is determined whether the distance between the center position of the reduced image of the previous frame and the center position of the reduced image of the current frame is smaller than a predetermined threshold value (step S44). When the distance is equal to or greater than the predetermined threshold value, the overspeed flag is displayed on the image display unit 15 (step S46). Thereafter, the process returns to step S36, and the above-described process is performed on the captured image of the next frame. That is, image synthesis is not performed in this case.
In the present embodiment, alignment is realized by a method such as block matching; if the digital camera 1 is moved quickly, however, the area in which the same portion of the scene appears in the 2 images used for alignment (the reduced image of the previous frame and the reduced image of the current frame) becomes small, and alignment fails. For this reason, the moving speed of the digital camera 1 must be kept below a certain speed, and the user therefore needs a clear indication of the moving speed of the digital camera 1 (the speed at which its imaging direction changes).
For example, as shown in fig. 9A, when the distance between the center position of the reduced image 31a of the previous frame and that of the reduced image 31b of the current frame is equal to or greater than the predetermined threshold, the overlapping area between the two reduced images is small and alignment is highly likely to fail, so the overspeed mark 60 is displayed on the image display unit 15 as shown in fig. 9B.
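The overspeed test of steps S44 and S46 reduces to comparing the distance between consecutive frame centers against a threshold. A brief sketch — the Euclidean metric and all names are assumptions; the patent does not specify how the distance is measured:

```python
import math

def camera_too_fast(prev_center, curr_center, threshold):
    """True when consecutive frame centers have moved `threshold`
    pixels or more, i.e. the overlap left for block matching may be
    too small for reliable alignment (step S44)."""
    dy = curr_center[0] - prev_center[0]
    dx = curr_center[1] - prev_center[1]
    return math.hypot(dy, dx) >= threshold
```

When this returns True the frame would be dropped and the overspeed mark shown; otherwise the frame proceeds to compositing.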
As shown in fig. 9C, the overspeed mark 60 may be a tachometer-like mark 61 in which the position of the pointer and the color of the meter change with the distance the image moves per unit time, a mark 62 in which the area and color of an arc change with that distance, or the like. Alternatively, a bar whose length and color change with the distance the image moves per unit time may be used (not shown).
On the other hand, when the reduced image of the current frame is within the processing area and the movement distance is smaller than the predetermined threshold, sufficiently accurate registration is possible, so the read captured image (high definition) is stored as an effective image (step S48), and the reduced image is written into the blank (not-yet-acquired) portion (step S50). That is, when the center position of the current frame's reduced image lies within the processing area, the reduced image of the current frame and the reference image (or synthesized image) are synthesized so as to partially overlap and written onto the canvas 40 (the 1 st captured image is written into the central portion of the canvas 40 as the reference image). The image display unit 15 then displays the composite image (step S52).
Next, it is determined whether or not all necessary images are acquired (for example, whether or not images of a predetermined time or a predetermined number of image capturing frames are acquired) (step S54). When all necessary images are not acquired, the process returns to step S36, and the same process is repeated for the captured image of the next frame. As a result, if the center position of the captured image is located within the processing area every time the image is captured, the images are sequentially combined with the reference image (or the combined image), and the combined image is displayed on the image display unit 15 every time.
Then, if all necessary images are acquired, effective images stored as original images of reduced images for synthesis are aligned and synthesized in a partially overlapping manner in the same manner as in the synthesis using reduced images, and finally, a wide-angle image as shown in fig. 2 is generated (step S56).
According to embodiment 2 described above, when shooting is performed outside the processing area, or when the movement (change in the imaging direction) of the digital camera 1 is too fast, or the like, this is notified to the user. Thus, the user can know how fast the digital camera 1 is moved and in what direction, and can easily and efficiently obtain a wide-angle image.
C. Embodiment 3
Next, embodiment 3 of the present invention will be explained.
In embodiment 3, (reduced images of) continuously shot images are not simply displayed in a combined manner, but a predetermined range is cut out from the combined image and displayed on the image display unit 15 so that the reduced image of the current frame is positioned at the center of the image display unit 15. Since the configuration of the digital camera 1 is the same as that of fig. 1, the description thereof is omitted.
Fig. 10 is a flowchart for explaining the operation of the digital camera according to embodiment 3. First, the CPU11 determines whether the shutter SW is half pressed (step S60), and repeatedly executes step S60 when it is not. On the other hand, if the shutter SW is half pressed, AF (autofocus) processing is executed (step S62), and it is determined whether or not the shutter SW is fully pressed (step S64). When the shutter SW is not fully pressed, steps S60 and S62 are repeatedly executed.
On the other hand, if the shutter SW is fully pressed, the captured image is read, and a reduction (pixel clipping) process is executed to generate a reduced image (step S66). Next, the reduced image is used to calculate the image position for superimposition (step S68). The calculation of the position of the superimposed image means, for example, that when the center position (coordinates) of the reduced image is calculated and the reference image (or the synthesized image) already exists, the reduced image of the current frame and the reference image (or the synthesized image) are aligned so as to be partially overlapped with each other, and the position of the reduced image of the current frame in the canvas is calculated. Next, it is determined whether or not the center position of the reduced image is in the processing area (within the canvas) based on the center position of the reduced image and the position within the canvas (step S70), and when the center position of the reduced image is not within the processing area, the process returns to step S66, and the process is performed on the captured image of the next frame. At this time, image synthesis is not performed.
On the other hand, when the center position of the reduced image lies within the processing area, the read captured image (high definition) is saved as an effective image (step S72), and the reduced image is written into the blank (not-yet-acquired) portion (step S74). That is, when the center position of the reduced image lies within the processing area, the reduced image of the current frame and the reference image (or synthesized image) are aligned and synthesized so as to partially overlap, and written onto the canvas (the 1 st captured image is written into the center portion of the canvas as the reference image).
Next, the composite image is cut out in accordance with the display size of the image display unit 15 with the reduced image of the current frame as the center (step S76), and the cut-out composite image is displayed on the image display unit 15 (step S78). Next, it is determined whether or not all necessary images are acquired (for example, whether or not images of a predetermined time or a predetermined number of image capturing frames are acquired) (step S80).
When all necessary images are not acquired, the process returns to step S66, and the same process is repeated for the next captured image. As a result, if the center position of the captured image is located within the processing area every time the image is captured, the captured image is sequentially combined with the reference image (or the combined image). Then, the composite image is cut out from the composite images sequentially combined, centering on the reduced image of the current frame, in accordance with the display size of the image display unit 15, and the cut-out composite image is displayed on the image display unit 15.
Then, if all necessary images are acquired, effective images stored as original images of reduced images for synthesis are aligned and synthesized in a partially overlapping manner in the same manner as in the synthesis using reduced images, and finally, a wide-angle image as shown in fig. 2 is generated (step S82).
Fig. 11 is a schematic diagram showing the operation of the digital camera according to embodiment 3 and a display example of the image display unit. First, after the reference image 30 is acquired as the 1 st captured image, the 2 nd captured image (reduced image) 31 is acquired. If the center position of the 2 nd reduced image 31 lies within the processing area, the 2 nd reduced image 31 and the reference image 30 are partially superimposed and synthesized. Next, the composite image 32 is cut out in accordance with the display size of the image display unit 15, centered on the reduced image 31 of the current frame, and the cut-out composite image 32a is displayed on the image display unit 15.
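The cut-out of step S76 — a display-sized window centered on the current frame, kept inside the canvas — can be sketched as follows (the names and the clamping behavior at the canvas edge are assumptions for illustration):

```python
def crop_centered(canvas_shape, frame_center, view):
    """Return the (top, left) of a crop of size `view` so that
    `frame_center` sits at the middle of the display (embodiment 3),
    clamped so the window never leaves the canvas."""
    ch, cw = canvas_shape
    vh, vw = view
    top = min(max(frame_center[0] - vh // 2, 0), ch - vh)
    left = min(max(frame_center[1] - vw // 2, 0), cw - vw)
    return top, left
```

The clamped rectangle is what would be handed to the image display unit, so the current frame stays centered except near the canvas borders.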
According to embodiment 3 described above, each time captured images are combined, the reduced image of the current frame is displayed so as to lie at the center of the image display unit 15. That is, a user shooting while looking at the image display unit 15 sees, in real time, an image centered on the direction in which the digital camera is pointed. Thus, when filling in blank portions that have not yet been photographed, the user can intuitively and easily tell in which direction to point the digital camera next. As a result, a wide-angle image can be obtained easily and efficiently.
In the series of processing described above, the reference image moves around on the screen of the image display unit 15; it is therefore preferable to surround the reference image with a frame of a predetermined color so that the user can tell which part of the composite image is the reference image.
D. Embodiment 4
Next, embodiment 4 of the present invention will be explained.
In embodiment 4, during continuous shooting, the directions that have not yet been captured and those that have already been captured are clearly indicated to the user, and a guide (guidance) is presented that shows the user how to move the digital camera, or in which direction to point it. Since the configuration of the digital camera 1 is the same as that of fig. 1, its description is omitted.
Fig. 12 is a flowchart for explaining the operation of the digital camera according to embodiment 4. Fig. 13A to E are schematic diagrams showing an operation of the digital camera and a display example of the image display unit according to embodiment 4.
First, the CPU 11 determines whether the shutter SW is half-pressed (step S90), and repeats step S90 while it is not. When the shutter SW is half-pressed, AF (autofocus) processing is performed (step S92), and it is determined whether the shutter SW is fully pressed (step S94). While the shutter SW is not fully pressed, steps S90 and S92 are repeated.
When the shutter SW is fully pressed, a spiral guide 70 is displayed at the lower right of the image display unit 15, for example as shown in fig. 13A (step S96). Next, the captured image is read and a reduction (pixel decimation) process is executed to generate a reduced image (step S98). Next, the superimposition position of the image is calculated using the reduced image (step S100). Calculating the superimposition position means, for example, that the center position (coordinates) of the reduced image is calculated and, when a reference image (or composite image) already exists, the reduced image of the current frame is aligned with the reference image (or composite image) so that the two partially overlap, and the position of the current frame's reduced image within the canvas is calculated.
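The patent does not prescribe how the alignment in step S100 is computed. One common way to realize "aligning so as to partially overlap" is an exhaustive search for the shift that minimizes the mean absolute difference over the overlap (SAD matching). A hypothetical sketch, not the patent's method:

```python
def best_offset(ref, cur, max_shift):
    """Find the (dx, dy) shift of cur relative to ref that minimizes the
    mean absolute difference over the overlapping region (SAD matching).
    ref and cur are equal-size grayscale frames given as lists of rows."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost, n = 0, 0
            for y in range(max(0, dy), min(h, h + dy)):
                for x in range(max(0, dx), min(w, w + dx)):
                    cost += abs(ref[y][x] - cur[y - dy][x - dx])
                    n += 1
            if n and cost / n < best_cost:
                best_cost, best = cost / n, (dx, dy)
    return best
```

This search is quadratic in the shift range, which is one practical reason the coarse matching is performed on reduced images rather than on the full-resolution frames.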
Next, it is determined, from the center position of the reduced image and its position within the canvas, whether the center position of the reduced image lies within the processing area (within the canvas) (step S102). If it does not, the process returns to step S96 and the captured image of the next frame is processed; in this case, no image synthesis is performed.
If the center position of the reduced image does lie within the processing area, the read captured image (high-definition) is saved as an effective image (step S104), and the reduced image is written onto a blank, not-yet-acquired portion (step S106). That is, the reduced image of the current frame is aligned and combined with the reference image (or composite image) so that the two partially overlap, and is written onto the canvas 40 (the first captured image is written at the center of the canvas 40 as the reference image). Next, the composite image is displayed on the image display unit 15 (step S108).
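Steps S102 to S106 reduce to a bounds test on the frame's center and an overwrite paste onto the canvas. The sketch below is illustrative (the canvas is modeled as a 2-D list; all names are hypothetical, not from the patent):

```python
def center_in_canvas(cx, cy, canvas_w, canvas_h):
    # Sketch of step S102: composite only when the frame's center
    # lies within the processing area (the canvas).
    return 0 <= cx < canvas_w and 0 <= cy < canvas_h

def paste(canvas, img, left, top):
    # Sketch of step S106: write the reduced image onto the canvas at its
    # computed position; overlapping pixels are simply overwritten.
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            canvas[top + y][left + x] = v
```

The first frame would be pasted at the canvas center as the reference image; subsequent frames at the positions computed in step S100.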
Next, to indicate the portion where the reduced image of the current frame has been combined, the color of the part of the guide 70 corresponding to the combined portion is changed (shown as a different line type in the figure), and the guide 70 displayed on the image display unit 15 is updated (step S110). For example, fig. 13B shows the display of the image display unit 15 after the 1st reference image 30 has been captured. At this point, the color of the part of the guide 70 corresponding to the position of the 1st reference image changes, starting from the center of the spiral (in the illustrated example, the line type of the guide 70 changes). Looking at the guide 70, the user need only move the digital camera 1 along the spiral.
After the 2nd reduced image has been combined, as shown in fig. 13C, the color of the part of the guide 70 corresponding to the position of the 2nd image changes, continuing from the center of the spiral (in the illustrated example, the line type changes). Looking at the guide 70, the user simply moves the digital camera 1 further along the spiral.
Next, it is determined whether all necessary images have been acquired (for example, whether images for a predetermined time or a predetermined number of frames have been acquired) (step S112). If not, the process returns to step S96 and the same processing is repeated for the captured image of the next frame. As a result, each time a reduced image is combined, the color of the corresponding part of the guide 70 changes, passing through the state of fig. 13D; finally, as shown in fig. 13E, the color of the entire guide 70 has changed and a composite image covering the entire screen is displayed.
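Updating the guide in step S110 only requires mapping compositing progress onto the guide's segments, recoloring them from the spiral's center outward. A minimal sketch of that bookkeeping (illustrative; the patent does not specify how segments are counted):

```python
def guide_segments_done(acquired, required, segments):
    """Number of guide segments to recolor, proportional to the frames
    composited so far; recoloring proceeds from the spiral's center."""
    return min(segments, segments * acquired // required)
```

When all required frames are combined, every segment is recolored, matching the fully changed guide of fig. 13E.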
Once all necessary images have been acquired, the effective images saved as the originals of the reduced images used for synthesis are combined with partial overlap, in the same manner as the synthesis using the reduced images, and a wide-angle image as shown in fig. 2 is finally generated (step S114).
Figs. 14A to C are schematic diagrams showing modifications of embodiment 4. In embodiment 4 described above, the guide 70 is spiral-shaped and displayed at the lower right corner of the image display unit 15, but the present invention is not limited to this. For example, as shown in fig. 14A, a guide 71 may be superimposed on the composite image over the entire screen of the image display unit 15, running in a zigzag from the upper left of the screen; or, as shown in fig. 14B, a zigzag guide 72 may be displayed, in which case the 1st reference image corresponds to the upper left corner, the starting point of the guide 72. Alternatively, as shown in fig. 14C, circular guides 73 may be placed at the positions to be captured and their color changed once each position is captured, so that the user can tell whether it has been captured. In any case, the guide may be given a shape corresponding to the composite image to be generated, that is, to how many times the angle of view (area) of the reference image the composite image covers.
According to embodiment 4 described above, during continuous shooting the directions that have not yet been captured and those that have already been captured are clearly indicated, and guidance is presented showing how to move the digital camera or in which direction to point it. A user shooting while viewing the image display unit 15 can therefore intuitively and easily tell how to move the digital camera, or where to point it. As a result, a wide-angle image can be obtained easily and efficiently.
Embodiments 1 to 4 described above may also be combined; for example, the cutting process of embodiment 3 and/or the guide display of embodiment 4 may be added to the display of the out-of-range mark or the overspeed mark of embodiment 2. Furthermore, in embodiments 1 to 4, an acceleration sensor that detects the movement of the digital camera may be provided, and when the captured images sequentially obtained by continuous shooting are superimposed, the superimposition position may be calculated taking into account the movement detected by the acceleration sensor.
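Using the acceleration sensor to assist superimposition could, for instance, mean double-integrating the acceleration over one frame interval to predict the pixel shift, then seeding or narrowing the image-matching search with that prediction. A one-dimensional sketch under those assumptions (the patent does not fix the method; names and the pixels-per-meter scaling are illustrative):

```python
def predict_shift_px(accel_samples, dt, pixels_per_meter):
    """Integrate acceleration samples (m/s^2) twice over one frame
    interval to predict camera displacement, converted to pixels (1-D)."""
    v = 0.0  # velocity, m/s
    d = 0.0  # displacement, m
    for a in accel_samples:
        v += a * dt  # first integration: acceleration -> velocity
        d += v * dt  # second integration: velocity -> displacement
    return d * pixels_per_meter
```

The predicted shift would only seed the image-based search, since double integration drifts quickly.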
Although a digital camera has been described as the imaging device in embodiments 1 to 4, the present invention is not limited to this and can be applied to any electronic device having an imaging function, such as a mobile phone. The operations described above may be realized by the CPU 11 executing a predetermined program stored in a program memory (not shown).

Claims (13)

1. An image pickup apparatus comprising:
a display unit;
an image pickup unit for capturing an image at a 1st angle of view;
an image pickup control unit for causing the image pickup unit to perform image capture a plurality of times;
a generation unit that generates a composite image by combining the plurality of images captured through the multiple image captures by the image pickup control unit, the composite image reproducing an image captured at a 2nd angle of view wider than the 1st angle of view; and
a display control unit for displaying the composite image generated by the generation unit on the display unit.
2. The image pickup apparatus according to claim 1, further comprising:
a 1st determination unit configured to determine, during the image capture operations of the multiple image captures, whether the imaging direction of the image being captured has changed significantly; and
a 1st notification unit configured to give notification to that effect when the 1st determination unit determines that the imaging direction has changed significantly.
3. The image pickup apparatus according to claim 2,
wherein, when the 1st determination unit determines that the imaging direction has changed significantly, the 1st notification unit displays a notification to that effect on the display unit.
4. The image pickup apparatus according to claim 2,
wherein the 1st determination unit determines whether the imaging direction has changed significantly based on a change in the center coordinates of the images sequentially captured through the multiple image captures by the imaging control unit.
5. The image pickup apparatus according to claim 2,
wherein the 1st determination unit determines whether the imaging direction has changed significantly based on whether, among the images sequentially captured through the multiple image captures by the imaging control unit, the area shared with the previously captured image is equal to or smaller than a predetermined area.
6. The image pickup apparatus according to claim 1, further comprising:
a 2nd determination unit configured to determine, during the image capture operations of the multiple image captures by the imaging control unit, whether the shooting direction of the image being captured is outside the processing area used by the generation unit for image synthesis; and
a 2nd notification unit configured to give notification to that effect when the 2nd determination unit determines that the shooting direction is outside the processing area.
7. The image pickup apparatus according to claim 6,
wherein, when the 2nd determination unit determines that the shooting direction is outside the processing area, the 2nd notification unit displays a notification to that effect on the display unit.
8. The image pickup apparatus according to claim 6,
wherein the 2nd determination unit determines whether the shooting direction is outside the processing area based on whether each of the images sequentially captured through the multiple image captures by the image pickup control unit shares any area with the previously captured image.
9. The image pickup apparatus according to claim 1,
wherein the generation unit sequentially combines the plurality of images sequentially captured by the imaging control unit to generate the composite image at the 2nd angle of view wider than the 1st angle of view; and
the display control unit displays, on the display unit, the composite image at the 2nd angle of view as it is sequentially combined by the generation unit.
10. The image pickup apparatus according to claim 1,
further comprising a cutting unit for cutting out, from the composite image generated by the generation unit, a predetermined region centered on the most recently combined image,
wherein the display control unit displays the image of the predetermined region cut out by the cutting unit on the entire screen of the display unit.
11. The image pickup apparatus according to claim 1,
wherein the display control unit further displays, on the display unit, direction information indicating the direction in which the image pickup unit should capture an image.
12. The image pickup apparatus according to claim 11,
wherein the display control unit displays, on the display unit, position information indicating the positions of images that have been captured and of images that have not yet been captured, for the generation unit to generate the composite image at the 2nd angle of view.
13. The image pickup apparatus according to claim 1,
wherein the image pickup control unit controls the image pickup unit so as to continuously perform the multiple image captures at a predetermined cycle.
HK11113207.9A 2010-03-19 2011-12-07 Imaging apparatus HK1158862A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010-063763 2010-03-19

Publications (1)

Publication Number Publication Date
HK1158862A true HK1158862A (en) 2012-07-20

Family

ID=

Similar Documents

Publication Publication Date Title
JP4985808B2 (en) Imaging apparatus and program
JP5163676B2 (en) Imaging apparatus, imaging method, and program
TWI514847B (en) Image processing device, image processing method, and recording medium
JP5665013B2 (en) Image processing apparatus, image processing method, and program
JP2008311938A (en) Imaging device, lens unit, imaging method, and control program
JP2009225072A (en) Imaging apparatus
CN104919789A (en) Image processing device, imaging device, program, and image processing method
JP2006162991A (en) Stereoscopic image photographing apparatus
JP5655804B2 (en) Imaging apparatus and program
JP4957825B2 (en) Imaging apparatus and program
JP5100410B2 (en) Imaging apparatus and control method thereof
CN102209199B (en) Imaging apparatus
JP5892211B2 (en) Imaging apparatus and program
JP4925168B2 (en) Imaging method and apparatus
HK1158862A (en) Imaging apparatus
JP5641352B2 (en) Image processing apparatus, image processing method, and program
JP5648563B2 (en) Image processing apparatus, image processing method, and program
JP5637400B2 (en) Imaging apparatus and program
JP5168369B2 (en) Imaging apparatus and program thereof
HK1159906A (en) Imaging apparatus and imaging method
HK1177083A (en) Image processing device capable of generating wide-range image
HK1159905B (en) Imaging apparatus
HK1177076A (en) Image processing device for generating composite image having predetermined aspect ratio
HK1177082B (en) Image processing device capable of generating wide-range image
HK1177076B (en) Image processing device for generating composite image having predetermined aspect ratio