HK1177082B - Image processing device capable of generating wide-range image - Google Patents
Description
Technical Field
The present invention relates to an image processing apparatus such as a digital camera or a mobile phone having an imaging function, an image processing method, and a recording medium.
Background
In a digital camera, a mobile phone having an imaging function, and the like, the limit of the imaging angle of view depends on hardware specifications of the device main body, such as the focal length of the lens and the size of the imaging means. One known solution to this limit of the imaging angle of view is to mount a conversion lens for wide-angle imaging or the like in front of the existing lens of the imaging device (see, for example, Japanese Patent Laid-Open Nos. 2004-).
However, in the above-described conventional technique, in order to perform wide-angle shooting, it is necessary to mount conversion lenses for wide-angle shooting one by one or to switch lenses according to a shooting target, which has problems in terms of operability and cost. Further, even if a conversion lens or a switchable lens for wide-angle shooting is used, there is a problem that it is difficult to obtain a wide-angle image desired by a photographer.
Disclosure of Invention
Accordingly, an object of the present invention is to provide an image processing apparatus, an image processing method, and a recording medium capable of easily and efficiently obtaining an image necessary for generating a wide-angle image without performing lens replacement.
The present invention is an image processing apparatus comprising: an imaging unit; a display unit; an imaging control unit that controls the imaging unit to perform continuous shooting; a wide-angle image generation unit configured to generate a wide-angle image from the plurality of images continuously captured by the imaging unit; a detection unit configured to detect a predetermined trigger indicating the end of continuous shooting by the imaging unit in a predetermined direction; and a display control unit that, every time the predetermined trigger is detected, changes and displays on the display unit information indicating the range to be continuously captured by the imaging unit.
Drawings
Fig. 1 is a block diagram showing a configuration of a digital camera according to embodiment 1 of the present invention.
Fig. 2 is a conceptual diagram for explaining a normal shooting mode.
Fig. 3 is a conceptual diagram for explaining a panoramic shooting mode of the digital camera 1 according to embodiment 1.
Fig. 4 is a conceptual diagram illustrating the movement (the method of movement by the user) of the digital camera 1 in the panoramic shooting mode of the digital camera 1 according to embodiment 1.
Fig. 5 is a flowchart for explaining the operation of the digital camera 1 according to embodiment 1.
Fig. 6 is a flowchart for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 1.
Fig. 7 is a flowchart for explaining the operation of the synthesizing process of the digital camera 1 according to embodiment 1.
Fig. 8 is a conceptual diagram for explaining the operation of the synthesis process of the digital camera 1 according to embodiment 1.
Fig. 9 is a flowchart for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 2.
Fig. 10 is a conceptual diagram for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 2.
Fig. 11 is a flowchart for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 3.
Fig. 12 is a conceptual diagram for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 3.
Fig. 13 is a conceptual diagram showing another example of the moving method of the digital camera 1 in the panoramic shooting.
Fig. 14 is a conceptual diagram showing another display example of the shooting frame and the moving direction of the shooting range.
Detailed Description
Embodiments of the present invention will be described below with reference to the drawings.
A. Embodiment 1
A-1. Structure of embodiment 1
Fig. 1 is a block diagram showing a configuration of a digital camera according to embodiment 1 of the present invention. In the figure, the digital camera 1 includes an imaging lens 2, a lens driving unit 3, a diaphragm/shutter 4, a CCD5 (imaging means), a TG (timing generator) 6, a unit circuit 7, an image processing unit 8 (wide-angle image generating means, panorama generating means, and preliminary synthesized image generating means), a CPU11 (imaging control means, display control means, and detection means), a DRAM 12, a memory 13, a flash memory 14, an image display unit 15 (display means), a key input unit 16, a card I/F17, and a memory card 18.
The imaging lens 2 includes a focus lens, a zoom lens, and the like, and is connected to the lens driving unit 3. The lens driving unit 3 includes motors that drive the focus lens and the zoom lens constituting the imaging lens 2 in the optical-axis direction, and a focus motor driver and a zoom motor driver that drive the focus motor and the zoom motor in accordance with control signals from the CPU 11.
The diaphragm 4 includes a drive circuit, not shown, which operates the diaphragm 4 in accordance with a control signal sent from the CPU 11. The diaphragm 4 controls the amount of light entering through the imaging lens 2. The CCD (image pickup device) 5 converts the light of the subject projected through the imaging lens 2 and the diaphragm 4 into an electric signal, and outputs it to the unit circuit 7 as an imaging signal. The CCD5 is driven by a timing signal of a predetermined frequency generated by the TG 6.
The unit circuit 7 includes: a CDS (Correlated Double Sampling) circuit that performs correlated double sampling on the imaging signal output from the CCD5 and holds the sampled signal; an AGC (Automatic Gain Control) circuit that performs automatic gain adjustment on the sampled imaging signal; and an A/D converter that converts the gain-adjusted analog imaging signal into a digital signal. The imaging signal of the CCD5 is sent as a digital signal to the image processing unit 8 via the unit circuit 7. The unit circuit 7 is driven based on a timing signal of a predetermined frequency generated by the TG 6.
The image processing unit 8 performs the following processes: image processing (pixel interpolation processing, γ correction, generation of a luminance color difference signal, white balance processing, exposure correction processing, and the like) of the image data sent from the unit circuit 7, compression/expansion processing of the image data (for example, compression/expansion of JPEG format, M-JPEG format, or MPEG format), and processing of combining a plurality of captured images. The image processing unit 8 is driven based on a timing signal of a predetermined frequency generated by the TG 6.
The CPU11 is a single-chip microcomputer that controls each part of the digital camera 1. In particular, in embodiment 1, while the user moves the digital camera 1, the CPU11 controls each unit so that a plurality of images are continuously captured at a predetermined cycle (time interval), and the captured images are combined so that parts of them overlap (for example, using α blending) to generate one combined image as if it had been captured at a wide angle. The details of the image synthesis will be described later.
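The overlap compositing described above can be illustrated with a small sketch. This is a hypothetical minimal example (grayscale NumPy arrays and a linear alpha ramp standing in for the patent's α blending), not the actual implementation:

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Join two equally sized image strips whose last/first `overlap`
    columns show the same scene, cross-fading (alpha blending) the
    overlapping columns. Arrays are float H x W grayscale for brevity."""
    h, w = left.shape
    # Linear alpha ramp across the overlapping columns: 1 -> 0 for `left`.
    alpha = np.linspace(1.0, 0.0, overlap)[None, :]
    blended = alpha * left[:, w - overlap:] + (1.0 - alpha) * right[:, :overlap]
    return np.hstack([left[:, :w - overlap], blended, right[:, overlap:]])

a = np.full((4, 6), 10.0)
b = np.full((4, 6), 30.0)
out = blend_overlap(a, b, overlap=2)  # width: 6 + 6 - 2 = 10 columns
```

The same idea extends to more than two frames by folding each new frame into the running composite.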
The DRAM 12 is used as a buffer memory for temporarily storing image data that is captured by the CCD5 and then sent to the CPU11, and also as a work memory for the CPU 11. The memory 13 stores a program necessary for controlling each unit of the digital camera 1 and data necessary for controlling each unit, which are executed by the CPU11, and the CPU11 performs processing in accordance with the program. The flash memory 14 and the memory card 18 are recording media for storing image data and the like captured by the CCD 5.
The image display unit 15 includes a color LCD and its driving circuit; in the shooting standby state it displays the subject imaged by the CCD5 as a real-time image, and during playback it displays a recorded image read from the flash memory 14 or the memory card 18 and expanded. The key input unit 16 includes a plurality of operation keys such as a shutter SW, a zoom SW, a mode key, a SET key, and a cross key, and outputs an operation signal corresponding to a key operation by the user to the CPU 11. The memory card 18 is detachably mounted to the card I/F17 via a card slot, not shown, of the digital camera 1 main body.
Fig. 2 is a conceptual diagram for explaining a normal shooting mode. When the digital camera 1 performs imaging in the normal imaging mode, imaging can be performed only at the angle of view S of the imaging system of the digital camera 1 as shown in fig. 2.
Fig. 3 is a conceptual diagram for explaining a panoramic shooting mode of the digital camera 1 according to embodiment 1. Fig. 4 is a conceptual diagram illustrating the movement (the method of movement by the user) of the digital camera 1 in the panoramic shooting mode of the digital camera 1 according to embodiment 1.
In this mode, the user holds the digital camera 1 vertically so that the long side of the angle of view is vertical with respect to the desired scene, presses the shutter SW at the upper-left end (half press → full press), and then, as indicated by the arrows in fig. 3, first moves the camera rightward from the left end where the shutter SW was pressed (state #1: see fig. 4), then downward at a predetermined position (state #2: see fig. 4), and then leftward at another predetermined position (state #3: see fig. 4). While the user performs this movement, the digital camera 1 continuously captures images at a predetermined timing.
The digital camera 1 synthesizes a 1st panoramic image from the plurality of images shot in state #1 (moving rightward from the left end), synthesizes a 2nd panoramic image from the plurality of images shot in state #3 (moving leftward from the right end), and then synthesizes the 1st and 2nd panoramic images to finally generate the desired wide-angle image (lower side of fig. 3). The images shot in state #2, while moving downward, are not needed for generating the panoramic image and are therefore not saved.
A-2. Operation of embodiment 1
Next, the operation of embodiment 1 will be described.
Fig. 5 is a flowchart for explaining the operation of the digital camera 1 according to embodiment 1. First, when the shutter SW is half-pressed (step S10), the CPU11 executes AF (auto focus) processing (step S12), and when the shutter SW is fully pressed (step S14), the CPU11 continuously captures a plurality of images at a predetermined cycle (time interval) (step S16). The details of the continuous shooting process will be described later.
At this time, as shown in fig. 3, the user holds the digital camera 1 vertically so that the long side of the angle of view is vertical with respect to the desired scene, presses the shutter SW at the upper left (half press → full press), and moves the digital camera 1, as indicated by the arrows in fig. 3, first rightward from the left end where the shutter SW was pressed (state #1: see fig. 4), then downward at a predetermined position (state #2: see fig. 4), and then leftward (state #3: see fig. 4). Then, it is determined whether or not the continuous shooting, that is, the panoramic shooting, is completed (step S18), and if not, the process returns to step S16 to continue the continuous shooting process.
On the other hand, when the continuous shooting, that is, the panoramic shooting, is finished (yes in step S18), the 1st panoramic image is synthesized from the plurality of images shot in state #1 (moving rightward from the left end), the 2nd panoramic image is synthesized from the plurality of images shot in state #3 (moving leftward from the right end), and the 1st and 2nd panoramic images are synthesized to finally generate the desired wide-angle image (step S20). The details of the synthesis process will be described later.
Fig. 6 is a flowchart for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 1. First, the CPU11 performs position matching between the previous captured image and the current captured image (step S30), and determines which of the states #1, #2, and #3 the current state is (step S32).
In a state #1 where the user moves the digital camera 1 from the left end to the right direction (state #1 in step S32), the current captured image is saved as an image for creating a panoramic image (step S34). Next, it is determined whether or not the digital camera 1 has reached a predetermined position (in this case, the right end, which is the end position of state #1) (step S36). If the predetermined position is not reached (no in step S36), the process ends without changing the current state #1, and the routine returns to the main routine shown in fig. 5.
Thereafter, until the digital camera 1 reaches a predetermined position (in this case, the right end which is the end position of the state #1), step S34 is repeated to store the captured image as an image for creating a panoramic image. When the digital camera 1 reaches the predetermined position (yes in step S36), the state is changed from state #1 to state #2, and the current state is changed to state #2 (step S38).
Next, when the state is changed to the state #2 in which the user moves the digital camera 1 from the top to the bottom (the state #2 in step S32), the process proceeds to step S42 without storing the captured image as an image for creating a panoramic image (step S40), and it is determined whether the digital camera 1 has reached a predetermined position (in this case, the lower right end which is the end position of the state #2) (step S42). If the predetermined position is not reached (no in step S42), the process ends without changing the current state #2, and the routine returns to the main routine shown in fig. 5.
Thereafter, until the digital camera 1 reaches the predetermined position (in this case, the lower right end, which is the end position of state #2), shooting continues without saving the shot images. When the digital camera 1 reaches the predetermined position (yes in step S42), the state shifts from state #2 to state #3, and the current state is changed to state #3 (step S44).
Next, when the state is shifted to the state #3 in which the user moves the digital camera 1 from the lower right to the left direction (the state #3 in step S32), the captured image is saved as an image for creating a panoramic image (step S46). Then, it is determined whether or not the digital camera 1 has reached a predetermined position (in this case, the left end which is the end position of the state #3) (step S48). If the predetermined position is not reached (no in step S48), the process ends without changing the current state #3, and the routine returns to the main routine shown in fig. 5.
Thereafter, until the digital camera 1 reaches a predetermined position (in this case, the left end which is the end position of the state #3), step S46 is repeated to store the captured image as an image for creating a panoramic image. When the digital camera 1 reaches the predetermined position (yes in step S48), the continuous shooting is terminated (step S50).
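The save/skip/advance logic of the fig. 6 loop amounts to a small state machine. The sketch below is a hypothetical paraphrase (function and variable names are invented), with the reached-end-position check of steps S36/S42/S48 abstracted into a boolean:

```python
def continuous_shot_step(state, frame, at_state_end, saved):
    """One pass of the fig. 6 loop: frames shot in states #1 and #3 are
    kept for panorama creation, frames shot in state #2 are discarded;
    reaching a state's end position advances to the next state, and the
    end of state #3 ends continuous shooting. Returns (state, done)."""
    if state in (1, 3):
        saved.setdefault(state, []).append(frame)
    if at_state_end:
        if state == 3:
            return state, True      # continuous shooting finished
        return state + 1, False     # state #1 -> #2, or #2 -> #3
    return state, False

# Simulate two frames per state, the second at the state's end position.
saved, state, done = {}, 1, False
for frame, end in [("a", False), ("b", True),
                   ("c", False), ("d", True),
                   ("e", False), ("f", True)]:
    state, done = continuous_shot_step(state, frame, end, saved)
```

After the simulated run, only the state #1 and state #3 frames remain saved, mirroring the description above.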
By the above-described operation, a plurality of images captured in state #1, in which the user moves the digital camera 1 rightward from the left end, and a plurality of images captured in state #3, in which the user moves the digital camera 1 leftward from the lower right, are obtained. Next, a method of obtaining the final wide-angle image from these captured images will be described.
Fig. 7 is a flowchart for explaining the operation of the synthesis process of the digital camera 1 according to embodiment 1. Fig. 8 is a conceptual diagram for explaining the operation of the synthesis process of the digital camera 1 according to embodiment 1. First, the CPU11 acquires an image for creating a panoramic image (step S60), and determines the state in which the image was captured (step S62). When the image was captured in state #1, it is subjected to the synthesis processing for creating panoramic image #1 (step S64).
Thereafter, it is determined whether or not panorama synthesis is completed (step S68), and if not, the process returns to step S60. Then, as shown in fig. 8A, a panoramic image #1 is created by synthesizing a plurality of images FR1 to FR6 captured in the state #1 so that a part of the images is overlapped (for example, using α blending).
On the other hand, if the acquired image for panoramic preparation is an image captured in state #3, the image is subjected to synthesis processing for preparing panoramic image #2 (step S66). Thereafter, it is determined whether or not the panorama synthesis is completed (step S68), and if not, the process returns to step S60. Thereafter, as shown in fig. 8B, a plurality of images FL1 to FL6 captured in state #3 are synthesized so as to partially overlap (for example, using α blending), thereby creating a panoramic image # 2.
Next, when the panorama synthesis is completed (yes in step S68), as shown in fig. 8C, a predetermined area on the lower side of panoramic image #1 and a predetermined area on the upper side of panoramic image #2 are synthesized so as to partially overlap each other (for example, using α blending) to create one wide-angle image (step S70).
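The final joining step of fig. 8C can be sketched the same way as the horizontal compositing: panorama #1 is placed above panorama #2 with a blended band between them. A hypothetical grayscale example (the patent does not specify the blend function; a linear ramp stands in for its α blending):

```python
import numpy as np

def stack_panoramas(pano1, pano2, overlap):
    """Place panorama #1 above panorama #2, cross-fading the bottom
    `overlap` rows of #1 with the top `overlap` rows of #2."""
    alpha = np.linspace(1.0, 0.0, overlap)[:, None]  # 1 -> 0 down the band
    band = alpha * pano1[-overlap:] + (1.0 - alpha) * pano2[:overlap]
    return np.vstack([pano1[:-overlap], band, pano2[overlap:]])

p1 = np.zeros((4, 5))          # stands in for panoramic image #1
p2 = np.full((4, 5), 100.0)    # stands in for panoramic image #2
wide = stack_panoramas(p1, p2, overlap=2)  # 4 + 4 - 2 = 6 rows
```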
In embodiment 1, when the shooting range of the wide-angle image is specified in advance from the key input unit 16, a panoramic image of a size based on the specified shooting range may be generated. In addition, the image processing unit 8 may compare the shooting ranges of the panoramic images to be combined with each other, and combine the panoramic images having a small shooting range with the shooting ranges of the other panoramic images to generate the final wide-angle image.
According to embodiment 1 described above, an image necessary for generating a wide-angle image can be easily and efficiently obtained without exchanging lenses.
B. Embodiment 2
Next, embodiment 2 of the present invention will be explained.
In embodiment 2, the change in moving direction from state #1 to state #2 and from state #2 to state #3 while the user moves the digital camera 1 during panoramic shooting is detected using a trigger, which may be a camera factor (the amount of movement, information from an orientation sensor or acceleration sensor, and the like) or a user factor (a moving-direction instruction operation, a key operation such as the shutter key, the user's posture, a voice, and the like).
The configuration of the digital camera 1 according to embodiment 2 is the same as that of fig. 1, and therefore, the description thereof is omitted. Note that the main routine of the panoramic shooting mode is the same as fig. 5, and the combining process is the same as fig. 7, and therefore, the description thereof is omitted.
Fig. 9 is a flowchart for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 2. Fig. 10 is a conceptual diagram for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 2.
In embodiment 2, in order to determine, in conjunction with trigger detection, which of states #1, #2, and #3 the digital camera 1 is currently in, a flag that is inverted every time a trigger is detected and a state counter N that is incremented every time a trigger is detected are prepared. The flag has an initial value of "0" and is inverted each time a trigger is detected. The state counter N has an initial value of "1" and indicates which of states #1, #2, and #3 the digital camera 1 is currently in: it is "1" (state #1) before the 1st trigger is detected, becomes "2" (state #2) when the 1st trigger is detected, and becomes "3" (state #3) when the next trigger is detected.
First, the CPU11 performs position matching between the previous captured image and the current captured image (step S80), and determines whether or not a trigger is detected (step S82). When no trigger is detected, as shown in fig. 10, it is determined that the state has not shifted from state #1 to state #2, that is, the camera is still moving rightward from the left end, and it is determined whether or not the flag is "1" (step S88). In this case, since no trigger has been detected, the flag is "0" and the state counter N is "1".
Since the flag is "0" (no in step S88), the captured image is saved as an image for creating a panoramic image in state #N (=1) (step S90). After that, the process is terminated, and the routine returns to the main routine shown in fig. 5. Thereafter, until a trigger is detected, step S90 is repeated, and the captured image is saved as an image for creating a panoramic image in state #N (=1).
Next, as shown in fig. 10, when the state shifts from state #1 to state #2, that is, when the digital camera 1 is detected to have reached the end point of state #1 (when the end of continuous shooting in the predetermined direction is detected), this is detected as a trigger (yes in step S82); the detection may be based on a camera factor such as the movement amount or orientation-sensor information, or a user factor such as a moving-direction instruction, a key operation of the shutter key, the user's posture, or a voice. When the trigger is detected, the flag is inverted (step S84), and the state counter N is incremented (step S86). In this case, the flag becomes "1", and N becomes "2".
Next, it is determined whether or not the flag is "1" (step S88). In this case, since the flag is "1", the process ends without saving the captured image as an image for creating a panoramic image (step S92), and the routine returns to the main routine shown in fig. 5. Thereafter, until the next trigger is detected (until the moving direction of the digital camera 1 becomes state #3), this operation is repeated: the process ends without saving the captured image and returns to the main routine shown in fig. 5. Therefore, the images captured in state #2 are not saved.
Next, as shown in fig. 10, when the state shifts from state #2 to state #3, that is, when the digital camera 1 is detected to have reached the end point of state #2 (when the end of continuous shooting in the predetermined direction is detected) based on a camera factor such as the movement amount or orientation-sensor information, or a user factor such as a moving-direction instruction, a key operation of the shutter key, the user's posture, or a voice, this is detected as a trigger (yes in step S82). When the trigger is detected, the flag is inverted (step S84), and the state counter N is incremented (step S86). In this case, the flag becomes "0", and N becomes "3".
Next, it is determined whether or not the flag is "1" (step S88). In this case, since the flag is "0", the captured image is saved as an image for creating a panoramic image in state #N (=3) (step S90). Thereafter, the process is terminated, and the routine returns to the main routine shown in fig. 5. Thereafter, until the next trigger is detected, step S90 is repeated, and the captured image is saved as an image for creating a panoramic image in state #N (=3).
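The flag-and-counter bookkeeping described above can be paraphrased in a few lines. This is a hypothetical sketch (class and method names are invented); trigger detection itself is outside its scope:

```python
class TriggerTracker:
    """Embodiment 2's bookkeeping: a flag inverted on every trigger and
    a state counter N incremented on every trigger. Frames are saved
    only while the flag is 0 (states #1 and #3)."""
    def __init__(self):
        self.flag = 0    # initial value "0"
        self.n = 1       # state counter N, initial value "1"

    def on_trigger(self):
        self.flag ^= 1   # invert the flag (step S84)
        self.n += 1      # N = N + 1 (step S86)

    def should_save(self):
        return self.flag == 0

t = TriggerTracker()
# Before the 1st trigger: state #1, frames are saved.
# After the 1st trigger: state #2, frames are skipped.
# After the 2nd trigger: state #3, frames are saved again.
```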
The above-described operations are performed, thereby obtaining a plurality of images captured in a state #1 in which the user moves the digital camera 1 from the left end to the right direction, and a plurality of images captured in a state #3 in which the user moves the digital camera 1 from the right bottom to the left direction.
Next, in the same manner as in embodiment 1 (see fig. 7), a plurality of images shot in state #1 are synthesized to create a panoramic image #1, a plurality of images shot in state #3 are synthesized to create a panoramic image #2, and a predetermined area on the lower side of the panoramic image #1 and a predetermined area on the upper side of the panoramic image #2 are synthesized so as to partially overlap each other (for example, using α blending) to create one wide-angle image.
According to embodiment 2 described above, an image necessary for generating a wide-angle image can be easily and efficiently obtained without performing lens replacement.
C. Embodiment 3
Next, embodiment 3 of the present invention will be explained.
In embodiments 1 and 2 described above, in order to capture all the images necessary for obtaining a wide-angle composite image, the user, as indicated by the arrows in fig. 3, holds the camera so that the long side of the angle of view is vertical with respect to the desired scene, presses the shutter SW at the upper left (half press → full press), first moves rightward from the left end where the shutter SW was pressed (state #1), then downward at a predetermined position (state #2), and then leftward at another predetermined position (state #3). However, it is difficult for the user to know how best to move the digital camera 1, or whether the desired images have been reliably obtained.
Therefore, in embodiment 3, when the user presses the shutter SW in the panoramic shooting mode, the image display unit 15 displays a shooting frame indicating the range to be shot by the digital camera 1 and a moving direction indicating which way the user should move the camera, thereby guiding the user. Further, in the panoramic shooting mode, the image currently formed on the CCD5 of the digital camera 1 is displayed in real time as a preview image (low resolution) on the image display unit 15, and a composite image synthesized using the preview images is displayed semi-transparently (transmittance 50%) on the image display unit 15.
In this way, in the panoramic shooting mode, the shooting frame indicating the area to be shot next, the moving direction indicating which way to move the camera, and a reduced image obtained by combining the shot images are displayed on the image display unit 15, so the user can easily recognize in which direction to move the digital camera 1.
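The 50%-transmittance guide display reduces to a per-pixel mix of the live preview and the reduced composite. A hypothetical sketch with float images (the function name is invented):

```python
import numpy as np

def overlay_preview(live, composite, transmittance=0.5):
    """Show the composite image semi-transparently over the live view:
    at 50% transmittance each pixel is an equal mix of the two images."""
    return transmittance * live + (1.0 - transmittance) * composite

live = np.full((2, 2), 200.0)   # live preview (stand-in values)
comp = np.full((2, 2), 100.0)   # reduced composite image
shown = overlay_preview(live, comp)
```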
The configuration of the digital camera 1 according to embodiment 3 is the same as that of fig. 1, and therefore, the description thereof is omitted. Note that the main routine of the panoramic shooting mode is the same as fig. 5, and the combining process is the same as fig. 7, and therefore, the description thereof is omitted.
Fig. 11 is a flowchart for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 3. Fig. 12 is a conceptual diagram for explaining the continuous shooting processing operation of the digital camera 1 according to embodiment 3.
First, the CPU11 performs position matching between the previous captured image and the current captured image (step S100), and determines which of the states #1, #2, and #3 the current state is (step S102). When the user is in the state #1 in which the user moves the digital camera 1 from the left end to the right direction (the state #1 in step S102), the photographing frame FR1 and the moving direction M1 are displayed as shown in fig. 12 (step S104). Then, the current captured image is saved as an image for creating a panoramic image in the state #1 (step S106), and a simplified composite image IMG is created from a reduced image of the captured image saved up to that point and is displayed in a semi-transparent manner (transmittance 50%) (step S108).
Next, it is determined whether or not the digital camera 1 has reached a predetermined position P1 (in this case, the end position of state # 1: refer to fig. 12) (step S110). If the predetermined position P1 is not reached (no in step S110), the process ends without changing the current state #1, and the routine returns to the main routine.
Thereafter, steps S106 and S108 are repeated until the digital camera 1 reaches the predetermined position P1, and each time the captured image is stored as the image for creating a panoramic image in state #1, the composite image at that time is displayed semi-transparently (with a transmittance of 50%). When the digital camera 1 reaches the predetermined position P1 (yes in step S110), the state is shifted from state #1 to state #2, and the current state is changed to state #2 (step S112).
Next, when the current state shifts to the state #2 in which the digital camera 1 is moved downward from the end position of the state #1 (state #2 in step S102), the shooting frame FR2 and the moving direction M2 are displayed as shown in fig. 12 (step S114). Then, the process proceeds to step S118 without storing the captured image as an image for creating a panoramic image (step S116), and it is determined whether or not the digital camera 1 has reached a predetermined position P2 (in this case, the end position of state # 2: refer to fig. 12) (step S118). If the predetermined position P2 is not reached (no in step S118), the process ends without changing the current state #2, and the routine returns to the main routine shown in fig. 5.
Thereafter, the shooting is continued without saving the shot image until the digital camera 1 reaches the predetermined position P2, and when the digital camera 1 reaches the predetermined position P2 (yes in step S118), the state is shifted from the state #2 to the state #3, and thus the current state is changed to the state #3 (step S120).
Next, when the current state shifts to the state #3 in which the digital camera 1 is moved in the left direction from the state #2 (the state #3 in step S102), the photographing frame FR3 and the moving direction M3 are displayed as shown in fig. 12 (step S122). Then, the captured image is stored as the image for creating the panoramic image in the state #3 (step S124), and a simple composite image is created from the reduced image of the captured image stored before that time, and displayed in a semi-transparent state (transmittance 50%) (step S126).
Thereafter, steps S124 and S126 are repeated until the digital camera 1 reaches a predetermined position P3 (end point), and each time the captured image is stored as the image for creating a panoramic image in state #3, the composite image at that time is displayed semi-transparently (with a transmittance of 50%). When the digital camera 1 reaches the predetermined position P3 (yes in step S128), the continuous shooting is terminated (step S130).
The above operation yields a plurality of images captured in state #1, in which the user moves the digital camera 1 rightward from the left end, and a plurality of images captured in state #3, in which the user moves it leftward from the lower right.
Next, in the same manner as in embodiments 1 and 2 described above (see fig. 7), the images captured in state #1 are synthesized into a panoramic image #1 and the images captured in state #3 into a panoramic image #2. A predetermined area along the lower edge of panoramic image #1 and a predetermined area along the upper edge of panoramic image #2 are then synthesized so as to partially overlap each other (for example, using α blending), producing a single wide-angle image.
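The α-blend step above can be sketched as a cross-fade over the rows the two panoramas share: the weight of panorama #1 ramps down across the overlap while that of panorama #2 ramps up, hiding the seam. This is a minimal illustration using NumPy; the array shapes and the linear weight ramp are assumptions, not patent text.

```python
import numpy as np

def blend_panoramas(pano1, pano2, overlap):
    """Vertically join pano1 (top) and pano2 (bottom), alpha-blending
    the `overlap` rows they share. Both are HxWx3 float arrays."""
    top = pano1[:-overlap]       # rows unique to panorama #1
    bottom = pano2[overlap:]     # rows unique to panorama #2
    # weight goes 1 -> 0 down the overlap for pano1, 0 -> 1 for pano2
    w = np.linspace(1.0, 0.0, overlap)[:, None, None]
    seam = w * pano1[-overlap:] + (1.0 - w) * pano2[:overlap]
    return np.concatenate([top, seam, bottom], axis=0)
```

A real implementation would first align the two panoramas (e.g. by feature matching) before blending; only the blend itself is shown here.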
In embodiment 3 described above, the shooting frames FR1 to FR3 displayed on the image display unit 15 are preferably provided with a margin MG, that is, made slightly larger than the image range actually to be captured, as shown in fig. 12. The margin MG leaves some leeway in the range over which the digital camera 1 must be moved, reducing the burden on the user moving the camera.
In embodiment 3, the changes in moving direction from state #1 to state #2 and from state #2 to state #3 are determined by whether a predetermined position has been reached, but the present invention is not limited to this. As in embodiment 2, the trigger may instead be a camera-side factor, such as the movement amount or information from an orientation sensor or acceleration sensor, or a user-side factor, such as a moving-direction instruction operation, a shutter-key operation, the user's posture, or the user's voice.
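Treating either class of event as the state-change trigger could look like the following. This is purely illustrative: the threshold values and parameter names are hypothetical, as the patent only lists the trigger categories.

```python
def is_state_change_trigger(movement_px, accel_g, shutter_pressed, voice_command):
    """Return True if any listed trigger condition indicates that the
    moving direction should change."""
    MOVE_THRESHOLD_PX = 500   # camera factor: accumulated movement amount
    ACCEL_THRESHOLD_G = 1.5   # camera factor: acceleration-sensor spike
    if movement_px >= MOVE_THRESHOLD_PX:
        return True
    if abs(accel_g) >= ACCEL_THRESHOLD_G:
        return True
    if shutter_pressed:               # user factor: shutter-key operation
        return True
    return voice_command == "next"    # user factor: voice instruction
```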
According to embodiment 3 described above, the composite image is displayed on the image display unit 15 in real time together with the shooting frame and the direction in which the digital camera 1 should be moved, so the user can move the camera while watching both. The plurality of images needed to generate a wide-angle image, which cannot be obtained in a single shot, can therefore be captured easily and efficiently, and a wide-angle image can be generated easily.
In embodiments 1 to 3 described above, the digital camera 1 is moved from left to right, then downward, then from right to left during panoramic imaging, but the present invention is not limited to this. For example, as shown in fig. 13A, a plurality of images may be captured while moving in a single direction and then combined into one wide-angle image. Alternatively, as shown in fig. 13B, the digital camera 1 may make three or more lateral passes in a serpentine pattern: left to right, down, right to left, down, left to right, and so on.
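The serpentine pattern of fig. 13B alternates the lateral shooting direction on successive rows, with a downward move between rows. A small sketch of generating such a movement sequence, with illustrative direction labels:

```python
def serpentine_directions(rows):
    """Return the sequence of movement directions for `rows` lateral passes,
    alternating right/left with a downward move between passes."""
    dirs = []
    for r in range(rows):
        dirs.append("right" if r % 2 == 0 else "left")
        if r < rows - 1:
            dirs.append("down")
    return dirs
```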
In embodiment 3, the shooting frame indicating the image capturing range and the moving direction are displayed, but the present invention is not limited to this; as shown in fig. 14, the already-captured portion 30 may instead be filled in within the whole frame 20 of the final wide-angle image.
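The fig. 14 alternative amounts to progressively marking captured regions inside the frame of the final wide-angle image. A hypothetical sketch, with a boolean grid standing in for the display:

```python
def mark_captured(grid, top, left, height, width):
    """Mark a captured rectangle as filled within the whole-frame grid
    (a list of lists of booleans)."""
    for r in range(top, top + height):
        for c in range(left, left + width):
            grid[r][c] = True
    return grid

def coverage(grid):
    """Fraction of the final wide-angle frame captured so far."""
    cells = [cell for row in grid for cell in row]
    return sum(cells) / len(cells)
```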
The present invention is not limited to the above embodiments, but includes the inventions described in the scope of claims and their equivalents.
Claims (7)
1. An image processing apparatus characterized by comprising:
a shooting means;
a display means;
a shooting control means which controls the shooting means to shoot continuously;
a wide-angle image generation means for generating a wide-angle image from a plurality of images continuously shot by the shooting means in each of a plurality of shooting directions under control of the shooting control means;
a 1st display control means for causing the display means to display 1st information, the 1st information indicating: a range in which the shooting means should continuously shoot, under control of the shooting control means, in one of the plurality of shooting directions;
a detection means for detecting a predetermined trigger indicating an end of continuous shooting by the shooting means in the one shooting direction; and
a 2nd display control means for causing the display means to display 2nd information, the 2nd information indicating: a range in which continuous shooting is to be performed in a shooting direction different from the one shooting direction among the plurality of shooting directions, when the detection means detects the predetermined trigger.
2. The image processing apparatus according to claim 1, characterized in that:
the image processing apparatus further comprises a panoramic image generation means for generating a panoramic image by synthesizing a plurality of images continuously shot by the shooting means;
the wide-angle image generation means generates the wide-angle image by synthesizing the panoramic images generated by the panoramic image generation means with each other.
3. The image processing apparatus according to claim 2, characterized in that:
when the detection means detects the predetermined trigger, the 2nd display control means displays, on the display means, 2nd information indicating a range in which the shooting means should continuously shoot in order to generate a next panoramic image.
4. The image processing apparatus according to claim 1, characterized in that:
the 2nd display control means displays a frame indicating the range in which the shooting means should continuously shoot under control of the shooting control means.
5. The image processing apparatus according to claim 1, characterized in that:
the image processing apparatus further comprises a preliminary composite image generation means for generating an image by sequentially synthesizing a plurality of images continuously shot by the shooting means;
the 1st display control means displays, on the display means, the 1st information indicating the range to be continuously shot together with the image generated by the preliminary composite image generation means.
6. The image processing apparatus according to claim 1, characterized in that:
the detection means detects, as the predetermined trigger, a movement amount, information from an orientation sensor, a change in moving direction, a predetermined instruction operation by the user, an instruction based on the user's voice, or an instruction based on an operation by the user.
7. An image processing method characterized by comprising the following steps:
a shooting control step of causing a shooting means to shoot continuously;
a wide-angle image generation step of generating a wide-angle image from a plurality of images continuously shot in each of a plurality of shooting directions by the shooting control step;
a 1st display control step of causing a display means to display 1st information, the 1st information indicating: a range to be continuously shot in one of the plurality of shooting directions by the shooting control step;
a detection step of detecting a predetermined trigger indicating an end of continuous shooting by the shooting means in the one shooting direction; and
a 2nd display control step of causing the display means to display 2nd information, the 2nd information indicating: a range to be continuously shot in a shooting direction different from the one shooting direction among the plurality of shooting directions, in response to detection of the predetermined trigger in the detection step.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2011084383A JP5665013B2 (en) | 2011-04-06 | 2011-04-06 | Image processing apparatus, image processing method, and program |
| JP084383/2011 | 2011-04-06 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1177082A1 (en) | 2013-08-09 |
| HK1177082B (en) | 2016-02-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR101391042B1 (en) | Image processing device capable of generating wide-range image | |
| KR101346426B1 (en) | Image processing device capable of generating wide-range image | |
| JP5054583B2 (en) | Imaging device | |
| JP4985808B2 (en) | Imaging apparatus and program | |
| JP2008141518A (en) | Imaging device | |
| JP4533735B2 (en) | Stereo imaging device | |
| KR101433121B1 (en) | Image processing device for generating composite image having predetermined aspect ratio | |
| JP2006162991A (en) | Stereoscopic image photographing apparatus | |
| JP4957825B2 (en) | Imaging apparatus and program | |
| KR20110055241A (en) | Digital photographing apparatus having image stabilization module and control method thereof | |
| JP2012165405A (en) | Imaging apparatus and program | |
| CN102209199B (en) | Imaging apparatus | |
| JP5648563B2 (en) | Image processing apparatus, image processing method, and program | |
| JP5641352B2 (en) | Image processing apparatus, image processing method, and program | |
| JP2007225897A (en) | In-focus position determining apparatus and method | |
| HK1177082B (en) | Image processing device capable of generating wide-range image | |
| JP5370662B2 (en) | Imaging device | |
| HK1177083A (en) | Image processing device capable of generating wide-range image | |
| JP5637400B2 (en) | Imaging apparatus and program | |
| JP2007226141A (en) | Imaging apparatus and method | |
| HK1159905B (en) | Imaging apparatus | |
| HK1158862A (en) | Imaging apparatus | |
| HK1177076A (en) | Image processing device for generating composite image having predetermined aspect ratio | |
| HK1177076B (en) | Image processing device for generating composite image having predetermined aspect ratio | |
| HK1159906A (en) | Imaging apparatus and imaging method |