WO2011043249A1 - Image processing apparatus and method, and program - Google Patents
Image processing apparatus and method, and program
- Publication number
- WO2011043249A1 PCT/JP2010/067199 JP2010067199W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- captured
- images
- strip
- panoramic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/02—Stereoscopic photography by sequential recording
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B37/00—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
- G03B37/04—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/139—Format conversion, e.g. of frame-rate or size
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/207—Image signal generators using stereoscopic image cameras using a single 2D image sensor
- H04N13/221—Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
Definitions
- The present invention relates to an image processing apparatus, method, and program, and more particularly to an image processing apparatus, method, and program capable of presenting stereoscopic images having different parallaxes.
- A so-called panoramic image is known as an effective way of presenting captured photographs.
- A panoramic image is a single still image obtained by arranging a plurality of still images, obtained by imaging while panning the imaging device in a predetermined direction, so that the same subject in the still images overlaps (for example, see Patent Document 1).
- In a panoramic image, a wider region of space can be displayed as the subject than the imaging range (angle of view) of a single still image captured by an ordinary imaging device, so the imaged subject can be presented more effectively.
- When a plurality of still images are captured while panning the imaging device to produce a panoramic image, the same subject may appear in several of the still images. In such a case, since the same subject on different still images is captured from different positions, parallax arises. If two images having mutual parallax (hereinafter referred to as a stereoscopic image) are generated from the plurality of still images by exploiting this parallax, and these images are displayed simultaneously by a lenticular method or the like, the imaged subject can be displayed three-dimensionally.
- The present invention has been made in view of such a situation, and makes it possible to present stereoscopic images with different parallaxes according to a user's request.
- An image processing apparatus of one aspect of the present invention includes: position information generating means for generating, based on a plurality of captured images captured by an imaging unit while the imaging unit is moving, position information indicating the relative positional relationship of the captured images when the captured images are arranged on a predetermined plane so that the same subject included in different captured images overlaps; strip image generating means for cutting out, based on the position information, first to third regions on each captured image extending from first to third reference positions on the captured image to the corresponding first to third reference positions of another captured image arranged to overlap that captured image, thereby generating first to third strip images from each of the plurality of captured images; panorama image generating means for combining the first to third strip images obtained from the plurality of captured images side by side to generate first to third panoramic images, each of which displays the same region of the imaging space and which have parallax with respect to one another; and selecting means for selecting two of the first to third panoramic images.
- Here, the first reference position is located between the second reference position and the third reference position on the captured image, and the distance from the first reference position to the second reference position differs from the distance from the first reference position to the third reference position.
- The image processing apparatus can further include display control means for simultaneously displaying the two of the first to third panoramic images selected by the selecting means, thereby displaying the same region of the imaging space three-dimensionally.
- The strip image generating means can generate a plurality of the first to third strip images from each of the captured images while shifting the first to third regions on the captured image in a predetermined direction, and the panorama image generating means can generate the first to third panoramic images for each position of the first to third regions.
- The position information generating means can generate the position information by using a plurality of predetermined block areas on the captured image and searching a captured image captured at an earlier time for block corresponding areas that correspond to the plurality of block areas.
- The position information generating means can detect a block area that includes a moving subject based on the relative positional relationship among the plurality of block areas and the relative positional relationship among the plurality of block corresponding areas, and, when such a block area is detected, can generate the position information by searching for the block corresponding areas using the block areas other than the detected block area.
- An image processing method or program of one aspect of the present invention includes the steps of: generating, based on a plurality of captured images captured by an imaging unit while the imaging unit is moving, position information indicating the relative positional relationship of the captured images when the captured images are arranged on a predetermined plane so that the same subject included in different captured images overlaps; cutting out, based on the position information, first to third regions on each captured image extending from first to third reference positions on the captured image to the corresponding first to third reference positions of another captured image arranged to overlap that captured image, thereby generating first to third strip images from each of the plurality of captured images; combining the first to third strip images obtained from the plurality of captured images side by side to generate first to third panoramic images, each of which displays the same region of the imaging space that was the imaging target when the captured images were captured and which have parallax with respect to one another; and selecting two of the first to third panoramic images.
- In one aspect of the present invention, position information indicating the relative positional relationship of the captured images is thus generated, the first to third strip images are cut out from each captured image based on that information, the first to third panoramic images having mutual parallax are generated by combining the strip images side by side, and two of them are selected. Here, the first reference position is located between the second and third reference positions on the captured image, and the distance from the first reference position to the second reference position differs from the distance from the first reference position to the third reference position.
- According to one aspect of the present invention, stereoscopic images with different parallaxes can be presented according to a user's request.
- The imaging apparatus to which the present invention is applied consists of, for example, a camera, and generates a stereoscopic panoramic moving image from a plurality of captured images continuously captured while the imaging apparatus is moving.
- The stereoscopic panoramic moving image is composed of two panoramic moving images having parallax with each other.
- A panoramic moving image is an image group composed of a plurality of panoramic images in which a wider region of real space is displayed as the subject than the imaging range (angle of view) that the imaging device can capture at once. Thus, a panoramic moving image can be regarded as a single moving image if each panoramic image constituting it is considered one frame, or as a group of still images if each panoramic image is considered a single still image.
- In the following, for simplicity, the description assumes that the panoramic moving image is a moving image.
- When the user wants the imaging device to generate a stereoscopic panoramic moving image, the user operates the imaging device to capture the images used to generate it.
- When capturing images, the user points the optical lens of the imaging device 11 toward the front side in the drawing and continuously images the subject while rotating (panning) the imaging device 11 from right to left around the rotation center C11.
- At this time, the user adjusts the rotation speed of the imaging device 11 so that the same stationary subject is included in a plurality of continuously captured images.
- In this way, N captured images P(1) to P(N) are obtained.
- The captured image P(1) is the image with the oldest imaging time among the N captured images, that is, the first captured image, and the captured image P(N) is the last captured image, with the latest imaging time.
- Hereinafter, the captured image captured nth (where 1 ≤ n ≤ N) is also referred to as the captured image P(n).
- Each captured image may be a continuously shot still image or an image for one frame of a captured moving image.
- As shown in FIG. 1, when the imaging device 11 itself is rotated by 90 degrees, that is, when imaging is performed with the imaging device 11 held sideways, a captured image that is longer in the vertical direction of the figure can be obtained, so the images may be captured with the imaging device 11 held sideways. In such a case, each captured image is rotated by 90 degrees in the same direction as the imaging device 11, and the stereoscopic panoramic moving image is then generated.
- When N captured images are obtained in this way, the imaging device 11 generates a plurality of panoramic moving images using these captured images.
- the panoramic moving image is a moving image in which the entire area in the imaging space that is the imaging target when N captured images are captured is displayed as the subject.
- a plurality of panoramic moving images having parallax are generated.
- Panoramic moving images having parallax with each other can be obtained from the captured images because the plurality of captured images are captured while the imaging device 11 is moving, so that the subjects on those captured images have parallax.
- The captured images captured when the imaging device 11 is at the positions PT1 and PT2 include the same subject H11, but because these captured images have different imaging positions, that is, different observation positions of the subject H11, parallax occurs between them.
- When the imaging device 11 rotates at a constant rotation speed, the parallax increases as the distance from the rotation center C11 to the imaging device 11, for example the distance from the rotation center C11 to the position PT1, increases.
- By exploiting this parallax, a stereoscopic panoramic moving image can be presented to the user.
- a panoramic moving image that is displayed so as to be observed by the right eye of the user among two panoramic moving images constituting the stereoscopic panoramic moving image is referred to as a panoramic moving image for the right eye.
- a panoramic video image that is displayed so as to be observed by the left eye of the user is referred to as a panoramic video image for the left eye.
- FIG. 3 is a diagram illustrating a configuration example of an embodiment of the imaging apparatus 11 to which the present invention is applied.
- The imaging device 11 includes an operation input unit 21, an imaging unit 22, an imaging control unit 23, a signal processing unit 24, a bus 25, a buffer memory 26, a compression/decompression unit 27, a drive 28, a recording medium 29, a display control unit 30, and a display unit 31.
- the operation input unit 21 includes buttons and the like, and receives a user operation and supplies a signal corresponding to the operation to the signal processing unit 24.
- the imaging unit 22 includes an optical lens, an imaging element, and the like, captures a captured image by photoelectrically converting light from a subject, and supplies the captured image to the imaging control unit 23.
- the imaging control unit 23 controls imaging by the imaging unit 22 and supplies the captured image acquired from the imaging unit 22 to the signal processing unit 24.
- the signal processing unit 24 is connected to the buffer memory 26 to the drive 28 and the display control unit 30 via the bus 25, and controls the entire imaging apparatus 11 according to a signal from the operation input unit 21.
- the signal processing unit 24 supplies the captured image from the imaging control unit 23 to the buffer memory 26 via the bus 25, or generates a panoramic video from the captured image acquired from the buffer memory 26.
- the buffer memory 26 is composed of SDRAM (Synchronous Dynamic Random Access Memory) or the like, and temporarily records data such as captured images supplied via the bus 25.
- the compression / decompression unit 27 encodes and decodes the panoramic video supplied via the bus 25 by a predetermined method.
- the drive 28 records the panoramic moving image supplied from the bus 25 on the recording medium 29, reads out the panoramic moving image recorded on the recording medium 29, and outputs the panoramic moving image to the bus 25.
- the recording medium 29 includes a non-volatile memory that can be attached to and detached from the imaging apparatus 11, and records a panoramic moving image according to the control of the drive 28.
- the display control unit 30 supplies the stereoscopic panorama moving image supplied via the bus 25 to the display unit 31 for display.
- the display unit 31 includes, for example, an LCD (Liquid Crystal Display) or a lenticular lens, and displays a three-dimensional image by the lenticular method according to control of the display control unit 30.
- In more detail, the signal processing unit 24 of FIG. 3 is configured as shown in FIG. 4.
- the signal processing unit 24 includes a motion estimation unit 61, a strip image generation unit 62, a panoramic video generation unit 63, and a selection unit 64.
- the motion estimation unit 61 performs motion estimation (Motion Estimation) using two captured images with different imaging times supplied via the bus 25.
- the motion estimation unit 61 includes a coordinate calculation unit 71.
- The coordinate calculation unit 71 generates information indicating the relative positional relationship of the captured images when the captured images are arranged side by side on a predetermined plane so that the same subject on two captured images overlaps. Specifically, the coordinates of the center position of each captured image (hereinafter referred to as the center coordinates) when a two-dimensional xy coordinate system is taken on the predetermined plane are calculated as the information indicating the relative positional relationship of the captured images.
- the strip image generation unit 62 uses the captured image and the center coordinates supplied via the bus 25 to cut out a predetermined area on the captured image to form a strip image, which is supplied to the panoramic video generation unit 63.
- The panoramic video generation unit 63 generates a plurality of panoramic images by synthesizing the strip images from the strip image generation unit 62, thereby generating a panoramic moving image as a group of panoramic images.
- the panoramic video generation unit 63 generates a plurality of panoramic video images having parallax with each other.
- A panoramic moving image for one frame, that is, one panoramic image, is an image in which the entire range (region) of the imaging space that was the imaging target when the captured images were captured is displayed as the subject.
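The strip-to-panorama synthesis performed by the panoramic video generation unit can be sketched as follows. This is a minimal illustration, not the patent's implementation: it simply places the strips cut from consecutive captured images side by side, and the helper name and the assumption that all strips share the same height are illustrative.

```python
import numpy as np

def assemble_panorama(strips):
    """Combine the strip images cut from consecutive captured images
    side by side into one panoramic image (all strips must share the
    same height)."""
    return np.concatenate(strips, axis=1)
```

Generating one such panorama per strip position (the TM, TL, and TR regions described later) yields the mutually parallactic panoramic images.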
- The selection unit 64 selects, according to the parallax (viewpoint interval) designated by the user, two of the plurality of mutually parallactic panoramic moving images as the right-eye and left-eye panoramic moving images constituting the stereoscopic panoramic moving image, and outputs them to the display control unit 30.
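The pair selection by the selection unit 64 can be sketched as follows. This is a hypothetical illustration: each panorama is tagged with the horizontal offset of the strip region it was built from, used here as a proxy for its viewpoint position, and the pair whose separation is closest to the designated parallax is chosen.

```python
def select_pair(panoramas, offsets, desired_parallax):
    """Pick the two panoramas whose viewpoint separation is closest to
    the user-designated parallax; return them as (right-eye, left-eye),
    ordered by offset."""
    best = None  # (error, i, j)
    for i in range(len(panoramas)):
        for j in range(i + 1, len(panoramas)):
            err = abs(abs(offsets[i] - offsets[j]) - desired_parallax)
            if best is None or err < best[0]:
                best = (err, i, j)
    _, i, j = best
    if offsets[i] >= offsets[j]:
        return panoramas[i], panoramas[j]
    return panoramas[j], panoramas[i]
```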
- In step S11, the imaging unit 22 images the subject while the imaging device 11 is moving as shown in FIG. 1. As a result, a captured image for one frame is obtained.
- the captured image captured by the imaging unit 22 is supplied from the imaging unit 22 to the signal processing unit 24 via the imaging control unit 23.
- In step S12, the signal processing unit 24 supplies the captured image supplied from the imaging unit 22 to the buffer memory 26 via the bus 25 and temporarily records it. At this time, the signal processing unit 24 records the captured image with a frame number so that the order of the recorded captured images can be identified.
- Hereinafter, the nth captured image is also referred to as the captured image P(n) of frame n.
- In step S13, the motion estimation unit 61 acquires the captured images of the current frame n and the immediately preceding frame (n−1) from the buffer memory 26 via the bus 25, and aligns the captured images by motion estimation.
- That is, the motion estimation unit 61 acquires the captured image P(n) of the current frame n and the captured image P(n−1) of the immediately preceding frame (n−1).
- Then, the motion estimation unit 61 performs alignment by searching the captured image P(n−1) of the immediately preceding frame for the positions of the same images as nine blocks BL(n)-1 to BR(n)-3 on the captured image P(n).
- The blocks BC(n)-1 to BC(n)-3 are rectangular regions arranged in the vertical direction in the figure on the boundary CL-n, a virtual vertical straight line located at the approximate center of the captured image P(n).
- The blocks BL(n)-1 to BL(n)-3 are rectangular regions arranged in the vertical direction in the figure on the boundary LL-n, a virtual vertical straight line located on the left side of the boundary CL-n in the captured image P(n).
- Similarly, the blocks BR(n)-1 to BR(n)-3 are rectangular regions arranged in the vertical direction in the figure on the boundary RL-n, a virtual vertical straight line located on the right side of the boundary CL-n in the captured image P(n). The positions of these nine blocks BL(n)-1 to BR(n)-3 are determined in advance.
- For each of the nine blocks on the captured image P(n), the motion estimation unit 61 searches the captured image P(n−1) for the area having the same shape and size as the block and the smallest difference from it (hereinafter referred to as the block corresponding area).
- The difference from a block is taken to be, for example, the sum of absolute differences between the pixel values of pixels at the same positions in the block to be processed, for example the block BL(n)-1, and in a candidate area.
- The block corresponding area on the captured image P(n−1) for a processing-target block on the captured image P(n) is the area on the captured image P(n−1) with the smallest difference from that block. Therefore, it is estimated that the same image as the processing-target block is displayed in the block corresponding area.
- Accordingly, if the captured image P(n) and the captured image P(n−1) are arranged side by side on a predetermined plane so that the blocks BL(n)-1 to BR(n)-3 overlap with their corresponding block corresponding areas, the same subject on those captured images should overlap.
- The motion estimation unit 61 therefore arranges the captured image P(n) and the captured image P(n−1) on the plane so that all the blocks substantially overlap their block corresponding areas, and takes the result as the alignment result of the captured images.
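The block matching described above can be sketched as a minimal SAD (sum of absolute differences) search. This is an illustrative sketch, not the patent's implementation: it assumes grayscale NumPy images, and the function name and fixed search window are assumptions.

```python
import numpy as np

def find_block_match(prev_img, block, search_top_left, search_radius):
    """Search prev_img around search_top_left for the area with the
    smallest sum of absolute differences (SAD) from `block`.
    Returns the best top-left position and its SAD."""
    bh, bw = block.shape
    best_sad, best_pos = float("inf"), search_top_left
    y0, x0 = search_top_left
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            # skip candidate windows that fall outside the previous frame
            if y < 0 or x < 0 or y + bh > prev_img.shape[0] or x + bw > prev_img.shape[1]:
                continue
            cand = prev_img[y:y + bh, x:x + bw]
            sad = np.abs(cand.astype(np.int32) - block.astype(np.int32)).sum()
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos, best_sad
```

Running this search for each of the nine blocks yields the block corresponding areas; the common displacement then gives the alignment between P(n) and P(n−1).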
- If the captured image includes a moving subject, the motion estimation unit 61 excludes the blocks estimated to include the moving subject and performs alignment by motion estimation again. That is, a block corresponding area whose relative positional relationship differs from that of the other block corresponding areas is detected, the block on the captured image P(n) corresponding to the detected area is excluded from the processing targets, and motion estimation is performed again using only the remaining blocks.
- Specifically, the blocks BL(n)-1 to BR(n)-3 are arranged at equal intervals of the distance QL in FIG. 6. For example, the distance between the adjacent blocks BL(n)-1 and BL(n)-2 and the distance between the block BL(n)-1 and the block BC(n)-1 are both QL.
- The motion estimation unit 61 detects a block with motion on the captured image P(n) based on the relative positional relationship of the block corresponding areas corresponding to the respective blocks.
- That is, the motion estimation unit 61 obtains the distance QM between adjacent block corresponding areas, such as the block corresponding area of the block BR(n)-3 and that of the block BC(n)-3.
- Suppose that, for the block corresponding area of the block BR(n)-3, the absolute value of the difference between the distance QM to its adjacent block corresponding areas and the distance QL is equal to or greater than a predetermined threshold, whereas for the block corresponding areas of the blocks BR(n)-2 and BC(n)-3, the absolute value of the difference between the distance QM to the other adjacent block corresponding areas (excluding the block corresponding area of the block BR(n)-3) and the distance QL is less than the threshold.
- In this case, the block corresponding areas of the blocks other than the block BR(n)-3 are arranged in the same positional relationship as the relative positional relationship of the blocks themselves, whereas only the block corresponding area of the block BR(n)-3 is positioned relative to the other block corresponding areas differently from how the blocks are positioned relative to one another.
- In such a case, the motion estimation unit 61 determines that a moving subject is included in the block BR(n)-3.
- Note that not only the distance between adjacent block corresponding areas but also the rotation angle of a block corresponding area with respect to another adjacent block corresponding area may be used to detect a block with motion. That is, for example, if there is a block corresponding area inclined at a predetermined angle or more with respect to the other block corresponding areas, it is determined that a moving subject is included in the block corresponding to that area.
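The moving-block detection can be sketched as an outlier test on the per-block displacements. This hypothetical illustration uses the median displacement as the consensus camera motion rather than the patent's QM-versus-QL distance comparison, but the idea is the same: blocks whose matches disagree with the global motion are assumed to contain a moving subject.

```python
import numpy as np

def detect_moving_blocks(block_pos, match_pos, threshold=4.0):
    """Flag blocks whose displacement (match position minus block
    position) deviates from the consensus displacement by more than
    `threshold` pixels; such blocks are assumed to contain a moving
    subject and are excluded from re-alignment."""
    disp = np.asarray(match_pos, float) - np.asarray(block_pos, float)
    consensus = np.median(disp, axis=0)  # robust estimate of camera motion
    dev = np.linalg.norm(disp - consensus, axis=1)
    return dev > threshold
```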
- When a block with motion is detected in this way, the motion estimation unit 61 again aligns the captured image P(n) and the captured image P(n−1) by motion estimation using the remaining blocks, excluding the block with motion.
- If the captured image P(n) and the captured image P(n−1) are arranged according to the result of this alignment, the captured images can be overlapped so that the motionless subjects coincide.
- Next, the coordinate calculation unit 71 calculates the center coordinates of the captured image P(n) when the captured images P(1) to P(n) captured so far are arranged on a predetermined plane, that is, on the xy coordinate system, according to the alignment results of the respective frames.
- That is, the captured images are arranged so that the center of the captured image P(1) is at the origin of the xy coordinate system and the same subject included in the captured images overlaps; in the figure, the horizontal direction indicates the x direction and the vertical direction indicates the y direction.
- the points O (1) to O (n) on the captured images P (1) to P (n) indicate the positions of the centers of the captured images.
- Here, the center coordinates of the center points O(1) to O(n−1) of the captured images P(1) to P(n−1) have already been obtained and recorded in the buffer memory 26.
- The coordinate calculation unit 71 reads the center coordinates of the captured image P(n−1) from the buffer memory 26, and obtains the center coordinates of the captured image P(n), that is, the x and y coordinates of the point O(n), from the read center coordinates and the alignment result of the captured images P(n) and P(n−1).
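The center-coordinate bookkeeping can be sketched as a running sum of per-frame alignment offsets, with the center of P(1) fixed at the origin of the xy coordinate system. This is a minimal sketch; representing each alignment result as a single (dx, dy) offset of the new frame's center relative to the previous one is an assumption.

```python
def accumulate_centers(frame_offsets):
    """Given per-frame offsets (dx, dy) of each frame's center relative
    to the previous frame (from alignment), return the absolute center
    coordinates, with the center of P(1) at the origin."""
    centers = [(0.0, 0.0)]  # center O(1) of the first captured image
    for dx, dy in frame_offsets:
        px, py = centers[-1]
        centers.append((px + dx, py + dy))
    return centers
```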
- When the alignment in step S13 is performed and the center coordinates of the captured image P(n) are obtained, the process proceeds to step S14.
- In step S14, the motion estimation unit 61 supplies the obtained center coordinates of the captured image P(n) to the buffer memory 26 and records them in association with the captured image P(n).
- In step S15, the signal processing unit 24 determines whether or not a predetermined number of captured images have been captured. For example, as shown in FIG. 1, when a region of a predetermined space is imaged over N captures, it is determined that the predetermined number of captured images have been captured when N images are obtained.
- Note that when the imaging device 11 is provided with a device that can detect the angle through which it has rotated, such as a gyro sensor, it may be determined whether the imaging device 11 has rotated by a predetermined angle after starting to capture images, instead of counting the number of captured images. Even in this case, it is possible to determine whether the entire specific region of the predetermined space has been captured as the subject.
- If it is determined in step S15 that the predetermined number of captured images have not yet been captured, the process returns to step S11, and the captured image of the next frame is captured.
- On the other hand, if it is determined in step S15 that the predetermined number of captured images have been captured, the process proceeds to step S16.
- In step S16, the strip image generation unit 62 acquires the N captured images and their center coordinates from the buffer memory 26, and generates strip images by cutting out predetermined regions of each captured image based on the acquired captured images and center coordinates.
- For example, the strip image generation unit 62 cuts out each of the regions TM(n), TL(n), and TR(n), which are defined with reference to the boundaries ML-n, LL-n, and RL-n on the captured image P(n), to form strip images.
- portions corresponding to those in FIG. 6 are denoted by the same reference numerals, and description thereof is omitted.
- The continuously captured images P(n) and P(n+1) are arranged based on their center coordinates so that the same subject overlaps, and the horizontal direction in the figure corresponds to the x direction.
- The boundary ML-n on the captured image P(n) is a virtual vertical straight line located on the left side of the boundary CL-n, and the boundary ML-(n+1) on the captured image P(n+1) is the boundary corresponding to the boundary ML-n of the captured image P(n). That is, the boundary ML-n and the boundary ML-(n+1) are virtual vertical straight lines at the same position on the captured image P(n) and the captured image P(n+1), respectively.
- Similarly, the boundary LL-(n+1) on the captured image P(n+1) is the boundary corresponding to the boundary LL-n of the captured image P(n), and the boundary RL-(n+1) on the captured image P(n+1) is the boundary corresponding to the boundary RL-n of the captured image P(n).
- The boundaries ML(M)-n and MR(M)-n, which are vertical straight lines, are straight lines in the vicinity of the boundary ML-n on the captured image P(n), located a predetermined distance to the left and right of the boundary ML-n, respectively.
- Similarly, the boundaries ML(M)-(n+1) and MR(M)-(n+1), which are vertical straight lines, are straight lines in the vicinity of the boundary ML-(n+1) on the captured image P(n+1), located a predetermined distance to the left and right of the boundary ML-(n+1), respectively.
- The boundaries ML(L)-n and MR(L)-n and the boundaries ML(L)-(n+1) and MR(L)-(n+1), which are vertical straight lines, are straight lines in the vicinity of the boundary LL-n on the captured image P(n) and of the boundary LL-(n+1) on the captured image P(n+1), located a predetermined distance to the left and right of those boundaries, respectively.
- Likewise, the boundaries ML(R)-n and MR(R)-n and the boundaries ML(R)-(n+1) and MR(R)-(n+1), which are vertical straight lines, are straight lines in the vicinity of the boundary RL-n on the captured image P(n) and of the boundary RL-(n+1) on the captured image P(n+1), located a predetermined distance to the left and right of those boundaries, respectively.
- The strip image generation unit 62 cuts out each of the three regions TM(n), TL(n), and TR(n) from the captured image P(n) to form strip images.
- That is, the region TM(n) from the boundary ML(M)-n to the position of the boundary MR(M)-(n+1) on the captured image P(n) is cut out as one strip image (hereinafter referred to as the strip image TM(n)).
- Here, the position of the boundary MR(M)-(n+1) on the captured image P(n) is the position on the captured image P(n) that overlaps the boundary MR(M)-(n+1) when the captured image P(n) and the captured image P(n+1) are arranged.
- In the strip image TM(n), the subject in the region from the boundary ML(M)-n to the position of the boundary MR(M)-n is basically the same as the subject in the region from the boundary ML(M)-n to the position of the boundary MR(M)-n in the strip image TM(n-1).
- However, since the strip image TM(n) and the strip image TM(n-1) are images cut out from the captured image P(n) and the captured image P(n-1), respectively, the times at which the same subject was imaged are different.
- Similarly, in the strip image TM(n), the subject in the region from the boundary ML(M)-(n+1) to the position of the boundary MR(M)-(n+1) is basically the same as the subject in the region from the boundary ML(M)-(n+1) to the position of the boundary MR(M)-(n+1) in the strip image TM(n+1).
- Further, the region TL(n) from the boundary ML(L)-n to the position of the boundary MR(L)-(n+1) on the captured image P(n) is cut out as one strip image (hereinafter referred to as the strip image TL(n)).
- Likewise, the region TR(n) from the boundary ML(R)-n to the position of the boundary MR(R)-(n+1) on the captured image P(n) is cut out as one strip image (hereinafter referred to as the strip image TR(n)).
- Here, the positions of the boundary MR(L)-(n+1) and the boundary MR(R)-(n+1) on the captured image P(n) are the positions on the captured image P(n) that overlap those boundaries when the captured image P(n) and the captured image P(n+1) are arranged.
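The three cut-out regions described above can be sketched as follows. This is an illustrative sketch only, not the apparatus's actual implementation: it assumes each captured image is a NumPy array, that the camera motion between frames is purely horizontal, and that `cut_strip_images`, its parameters, and the toy boundary positions are all hypothetical names chosen for the example.

```python
import numpy as np

def cut_strip_images(image, center_x, next_center_x, offsets, margin):
    """Cut the three strip regions TM, TL, TR out of one captured image.

    `offsets` gives, for each strip, the x position of its reference
    boundary (ML, LL, RL); each strip extends to where the same boundary
    falls on the next frame, widened by `margin` pixels on each side so
    that neighbouring strips overlap for blending.
    """
    shift = next_center_x - center_x          # camera motion between frames
    strips = []
    for b in offsets:                         # b = boundary x for TM, TL, TR
        left = b - margin                     # corresponds to ML(*)-n
        right = b + shift + margin            # corresponds to MR(*)-(n+1)
        strips.append(image[:, left:right].copy())
    return strips

# Toy example: a 40x100 image, boundaries at x = 30 (center), 10 (left), 50 (right).
img = np.arange(40 * 100).reshape(40, 100)
tm, tl, tr = cut_strip_images(img, center_x=0, next_center_x=20,
                              offsets=[30, 10, 50], margin=2)
```

Each strip is `margin + shift + margin` columns wide, so consecutive strips share a `2 * margin` overlap for the weighted addition described below.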
- When the strip images TM(n) obtained from the N captured images are arranged side by side and combined, one panoramic image is obtained.
- Similarly, when the strip images TL(n) obtained from the N captured images are arranged and combined, one panoramic image is obtained, and when the strip images TR(n) obtained from the N captured images are arranged and combined, one panoramic image is obtained.
- These panoramic images display the entire range (region) of the imaging space that was the imaging target when the N captured images were captured, and they have parallax with respect to one another.
- The strip image generation unit 62 supplies the obtained strip images and the center coordinates of each captured image to the panoramic video generation unit 63. Thereafter, the process proceeds from step S16 to step S17.
- In step S17, the panoramic video generation unit 63 arranges and combines the strip images of each frame based on the strip images from the strip image generation unit 62 and the center coordinates of the captured images, and generates image data of one frame of the panoramic moving image.
- That is, the panoramic video generation unit 63 arranges and combines the N strip images TM(n) cut out from the substantially central regions of the N captured images P(n), and generates image data of one frame of the panoramic moving image, that is, one panoramic image.
- Similarly, the panoramic video generation unit 63 arranges and combines the N strip images TL(n) cut out from the N captured images P(n) to obtain image data of one frame of the panoramic moving image. Further, the panoramic video generation unit 63 arranges and combines the N strip images TR(n) cut out from the N captured images P(n), and generates image data of one frame of the panoramic moving image.
- Hereinafter, the panoramic images generated from the strip images TM(n), TL(n), and TR(n) are also referred to as the panoramic image PM, the panoramic image PL, and the panoramic image PR, respectively.
- Likewise, the panoramic moving images composed of the panoramic images PM, PL, and PR are also referred to as the panoramic moving image PMM, the panoramic moving image PML, and the panoramic moving image PMR, respectively.
- When combining, for example, the strip image TM(n) and the strip image TM(n-1), the panoramic video generation unit 63 obtains the pixel values of the pixels of the panoramic image by weighted addition for the region of those strip images from the boundary ML(M)-n to the position of the boundary MR(M)-n.
- That is, the panoramic video generation unit 63 performs weighted addition of the pixel values of the overlapping pixels of the strip image TM(n) and the strip image TM(n-1), and uses the resulting values as the pixel values of the pixels of the panoramic image at the corresponding positions.
- The weights used in the weighted addition of the pixels in the region from the boundary ML(M)-n to the position of the boundary MR(M)-n are determined to have the following characteristics: as the pixel position approaches the boundary MR(M)-n, the contribution ratio of the pixels of the strip image TM(n) to the generation of the panoramic image is made higher; conversely, as the pixel position approaches the boundary ML(M)-n, the contribution ratio of the pixels of the strip image TM(n-1) to the generation of the panoramic image is made higher.
- In addition, the region of the strip image TM(n) from the boundary MR(M)-n to the boundary ML(M)-(n+1) is used as the panoramic image as it is.
- Further, when the strip image TM(n) and the strip image TM(n+1) are combined, the pixel values of the pixels of the panoramic image are obtained by weighted addition for the region of those strip images from the boundary ML(M)-(n+1) to the position of the boundary MR(M)-(n+1).
- That is, as the pixel position approaches the boundary MR(M)-(n+1), the contribution ratio of the pixels of the strip image TM(n+1) to the generation of the panoramic image is made higher.
- Conversely, as the pixel position approaches the boundary ML(M)-(n+1), the contribution ratio of the pixels of the strip image TM(n) to the generation of the panoramic image is made higher.
- By thus weighting and adding the regions near the ends of the strip images of successive frames to obtain the pixel values of the pixels of the panoramic image, a more natural image can be obtained than by simply arranging the strip images into one image.
- That is, by combining the regions near the edges of the strip images by weighted addition, the panoramic video generation unit 63 can prevent distortion of the contours of the subject and unevenness in brightness, and a more natural panoramic image can be obtained.
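The weighted addition near the strip edges amounts to a cross-fade over the overlap region. The following is a minimal sketch under the assumption of a linear weight ramp, which is one possible realization of the weighting characteristic described above; the function name `blend_strips` and its arguments are hypothetical.

```python
import numpy as np

def blend_strips(prev_strip, cur_strip, overlap):
    """Join two consecutive strip images, cross-fading over `overlap` columns.

    Within the overlap the contribution of the current strip rises linearly
    from 0 to 1, and that of the previous strip falls accordingly, which
    hides the seam that simple abutment would leave visible.
    """
    w = np.linspace(0.0, 1.0, overlap)                # per-column weights
    left = prev_strip[:, :-overlap]                   # unshared part of prev strip
    right = cur_strip[:, overlap:]                    # unshared part of current strip
    mixed = (prev_strip[:, -overlap:] * (1.0 - w) +
             cur_strip[:, :overlap] * w)              # weighted addition
    return np.hstack([left, mixed, right])

# Two flat strips of different brightness: the joint ramps smoothly.
a = np.full((10, 30), 100.0)
b = np.full((10, 30), 200.0)
pano = blend_strips(a, b, overlap=10)
```

The patent does not fix the exact weight curve; any monotone ramp from one strip's full contribution to the other's satisfies the stated characteristic.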
- Note that the motion estimation unit 61 may detect lens distortion caused by the optical lens constituting the imaging unit 22 based on the captured images, and, when the strip images are combined, the strip image generation unit 62 may correct the strip images using the lens distortion detection result. That is, the distortion generated in the strip images is corrected by image processing based on the detection result of the lens distortion.
- When one frame of the panoramic moving image is generated in this manner, the panoramic video generation unit 63 supplies the panoramic image to the compression/decompression unit 27 via the bus 25.
- In step S18, the compression/decompression unit 27 encodes the image data of the panoramic moving image supplied from the panoramic video generation unit 63 using, for example, the JPEG (Joint Photographic Experts Group) method, and supplies the encoded data to the drive 28 via the bus 25.
- The drive 28 supplies the image data of the panoramic moving image from the compression/decompression unit 27 to the recording medium 29 and records it. When the image data is recorded, a frame number is assigned to each piece of image data by the panoramic video generation unit 63.
- In step S19, the signal processing unit 24 determines whether image data of the panoramic moving image has been generated for a predetermined number of frames. For example, when a panoramic moving image composed of M frames of image data is to be generated, it is determined that the panoramic moving image for the predetermined number of frames has been generated when M frames of image data have been obtained.
- If it is determined in step S19 that the panoramic moving image for the predetermined number of frames has not yet been generated, the process returns to step S16, and image data of the next frame of the panoramic moving image is generated.
- For example, when the image data of the first frame of the panoramic moving image is generated, the region TM(n) from the boundary ML(M)-n up to the position of the boundary MR(M)-(n+1) on the captured image P(n) is cut out to form the strip image.
- Hereinafter, the strip image used for generating the m-th frame of the panoramic moving image PMM is also referred to as the strip image TM(n)-m (where 1 ≤ m ≤ M).
- The cutout position of the strip image TM(n)-m of the m-th frame is set at a position shifted from the region TM(n), which is the cutout position of the strip image TM(n)-1, by a distance of (m-1) times the width CW.
- Therefore, the region from which the strip image TM(n)-2 of the second frame is cut out is a region on the captured image P(n) that has the same shape and size as the region TM(n) in FIG. 8 and whose right end is located at the boundary MR(M)-n.
- Here, the direction in which the strip image cutout region is shifted is determined in advance according to the direction in which the imaging device 11 is turned when the captured images are captured.
- For example, in the example of FIG. 8, it is assumed that the imaging device 11 is turned (moved) so that the center position of the captured image of the next frame is always located on the right side in the figure with respect to the center position of the captured image of the preceding frame. That is, in the example of FIG. 8, the moving direction of the imaging device 11 is assumed to be the right direction in the figure.
- This is because, if the cutout region is shifted in correspondence with the movement of the imaging device, the same motionless subject is displayed at the same position in each panoramic image constituting the panoramic moving image.
- Similarly to the region TM(n), the region TL(n) and the region TR(n) of the captured image P(n) from which the strip images are cut out are shifted to the left in FIG. 8 for each frame, by the width from the boundary LL-n to the boundary LL-(n+1) and by the width from the boundary RL-n to the boundary RL-(n+1), respectively.
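The per-frame shift of the cutout region described above can be written as a one-line computation. The helper below is hypothetical and assumes a constant shift width CW for illustration, whereas the text notes that the width CW can change for each frame.

```python
def strip_cut_range(base_left, base_right, cw, m, direction=1):
    """x range of the strip cut for panorama frame m (1-based).

    The frame-1 region [base_left, base_right) is shifted by
    CW * (m - 1) pixels; with a rightward camera sweep the region
    moves left on the captured image (direction = 1 shifts against
    the sweep direction).
    """
    shift = cw * (m - 1) * direction
    return base_left - shift, base_right - shift

# Frame 1 cuts [40, 70); frame 3 with CW = 5 is shifted left by 10.
l1, r1 = strip_cut_range(40, 70, cw=5, m=1)
l3, r3 = strip_cut_range(40, 70, cw=5, m=3)
```

Because the shift opposes the camera sweep, a motionless subject stays at the same x position across the generated panorama frames.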
- In FIG. 9, the horizontal direction corresponds to the horizontal direction in FIG. 8; for example, the horizontal direction in FIG. 9 corresponds to the x direction of the xy coordinate system.
- For example, strip images TR(1)-1 to TR(N)-1 are generated from the N captured images P(1) to P(N), respectively, and these strip images are combined to obtain the panoramic image PR-1.
- Then, strip images TR(1)-2 to TR(N)-2 are generated from the N captured images P(1) to P(N), respectively, and these strip images are combined to obtain the panoramic image PR-2.
- Here, the panoramic image PR-1 and the panoramic image PR-2 are the images constituting the first frame and the second frame of the panoramic moving image PMR, respectively.
- Similarly, strip images TL(1)-1 to TL(N)-1 are generated from the N captured images P(1) to P(N), and these strip images are combined to obtain the panoramic image PL-1.
- Then, strip images TL(1)-2 to TL(N)-2 are generated from the N captured images P(1) to P(N), respectively, and these strip images are combined to obtain the panoramic image PL-2.
- The panoramic image PL-1 and the panoramic image PL-2 are the images constituting the first frame and the second frame of the panoramic moving image PML, respectively.
- Likewise, strip images TM(n) are cut out from the captured images P(1) to P(N), and the panoramic images constituting each frame of the panoramic moving image PMM are also generated.
- For example, the cutout region of the strip image TR(2)-2 in the captured image P(2) is the region obtained by shifting the cutout region of the strip image TR(2)-1 to the left in the figure by the width CW.
- the size of the width CW changes for each frame of the captured image.
- Therefore, the same subject at different times is displayed in the strip image TR(1)-1 and the strip image TR(2)-2. Similarly, the same subject at different times is also displayed in the strip image TR(1)-1 and the strip image TL(m)-1.
- Likewise, the same subject at different times is displayed in each of the panoramic images PR-1 to PL-2. That is, the panoramic images have parallax with respect to one another. Furthermore, since a panoramic image is generated by combining strip images obtained from captured images of a plurality of different frames, even within one panoramic image, the times at which the subjects displayed in the respective regions were captured differ.
- Note that the end portions of each panoramic image are generated using the captured image P(1) and the captured image P(N). For example, the left end portion of the panoramic image PR-1 is the image from the left end of the captured image P(1) to the right end of the strip image TR(1)-1.
- When it is determined in step S19 that the panoramic moving image for the predetermined number of frames has been generated, the signal processing unit 24 determines the magnitude of the parallax of the stereoscopic panoramic moving image to be displayed, and the process proceeds to step S20.
- By the processing described above, three panoramic moving images, the panoramic moving image PMM, the panoramic moving image PML, and the panoramic moving image PMR, are generated and recorded on the recording medium 29.
- The panoramic moving image PMM is a moving image generated by cutting out the substantially central region TM(n) in FIG. 8 on the captured image. The panoramic moving image PML and the panoramic moving image PMR are moving images generated by cutting out the region TL(n) to the left of the center and the region TR(n) to the right of the center in FIG. 8 on the captured image, respectively.
- Here, the region TM(n) is located slightly to the left of the center of the captured image P(n) in the figure, and the distance from the region TM(n) to the region TL(n) differs from the distance from the region TM(n) to the region TR(n).
- Therefore, the magnitude of the parallax between the panoramic images PM and PL, the magnitude of the parallax between the panoramic images PM and PR, and the magnitude of the parallax between the panoramic images PL and PR all differ from one another.
- Hereinafter, the stereoscopic panoramic moving image composed of the panoramic moving image PMM and the panoramic moving image PML is referred to as the stereoscopic panoramic moving image ML,
- the stereoscopic panoramic moving image composed of the panoramic moving image PMM and the panoramic moving image PMR is referred to as the stereoscopic panoramic moving image MR, and
- the stereoscopic panoramic moving image composed of the panoramic moving image PML and the panoramic moving image PMR is referred to as the stereoscopic panoramic moving image LR.
- In the stereoscopic panoramic moving image ML, the panoramic moving image PML and the panoramic moving image PMM serve as the panoramic moving images for the right eye and the left eye, respectively.
- In the stereoscopic panoramic moving image MR, the panoramic moving image PMM and the panoramic moving image PMR serve as the panoramic moving images for the right eye and the left eye, respectively.
- In the stereoscopic panoramic moving image LR, the panoramic moving image PML and the panoramic moving image PMR serve as the panoramic moving images for the right eye and the left eye, respectively.
- Among these, the stereoscopic panoramic moving image LR has the largest parallax (viewpoint interval), the stereoscopic panoramic moving image MR has the next largest parallax, and the stereoscopic panoramic moving image ML has the smallest parallax. Accordingly, stereoscopic panoramic moving images with different parallax can be displayed depending on which of these three stereoscopic panoramic moving images is displayed on the display unit 31.
- The imaging device 11 allows the user to designate "large parallax", "medium parallax", or "small parallax" as the magnitude of the parallax, and displays the stereoscopic panoramic moving image having the parallax corresponding to the user's designation. That is, the stereoscopic panoramic moving image LR, the stereoscopic panoramic moving image MR, and the stereoscopic panoramic moving image ML are reproduced in response to the designation of "large parallax", "medium parallax", and "small parallax", respectively.
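The correspondence between the user's parallax designation and the pair of panoramic moving images can be written as a simple lookup table. The video names mirror those in the text (PML from the left region, PMM from the center region, PMR from the right region); the dictionary and function below are illustrative, not the apparatus's actual interface.

```python
# Map a parallax designation to the (right-eye, left-eye) pair of videos.
# The farther apart the cut regions, the larger the stereo baseline.
PARALLAX_PAIRS = {
    "large": ("PML", "PMR"),   # stereoscopic panoramic moving image LR
    "medium": ("PMM", "PMR"),  # stereoscopic panoramic moving image MR
    "small": ("PML", "PMM"),   # stereoscopic panoramic moving image ML
}

def select_pair(designation):
    """Return the (right-eye, left-eye) video pair for a designation."""
    return PARALLAX_PAIRS[designation]

pair = select_pair("large")
```
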
- In step S20, the selection unit 64 selects two panoramic moving images from the three panoramic moving images recorded on the recording medium 29 based on the signal from the operation input unit 21. For example, when "large parallax" is designated by the user, the selection unit 64 selects the panoramic moving image PML and the panoramic moving image PMR, for which the parallax of the stereoscopic panoramic moving image is largest.
- When two panoramic moving images, that is, a stereoscopic panoramic moving image having the designated parallax, have been selected, the selection unit 64 reads the two selected panoramic moving images from the recording medium 29 via the drive 28. Then, the selection unit 64 supplies the read image data of the panoramic moving images to the compression/decompression unit 27, instructs it to decode the data, and the process proceeds to step S21.
- In step S21, the compression/decompression unit 27 decodes the image data of the two panoramic moving images supplied from the selection unit 64, that is, the panoramic images, using, for example, the JPEG method, and supplies the decoded data to the signal processing unit 24.
- In step S22, the signal processing unit 24 reduces the panoramic image of each frame constituting the panoramic moving images to a predetermined size. For example, the reduction processing is performed so that the entire panoramic image can be displayed on the display screen of the display unit 31.
- The signal processing unit 24 then supplies the display control unit 30 with a stereoscopic panoramic moving image composed of the two reduced panoramic moving images.
- In this case, for example, the panoramic moving image PML is used for the right eye, and the panoramic moving image PMR is used for the left eye.
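The reduction so that the entire panoramic image fits on the display screen is a uniform scale bounded by both screen dimensions. This is an illustrative helper only; the actual resampling step is omitted and the function name is hypothetical.

```python
def fit_scale(pano_w, pano_h, screen_w, screen_h):
    """Largest uniform scale at which a pano_w x pano_h panoramic image
    still fits entirely on a screen_w x screen_h display."""
    return min(screen_w / pano_w, screen_h / pano_h)

# A 4000x400 panorama on an 800x600 screen must be reduced 5x.
s = fit_scale(4000, 400, 800, 600)
new_size = (round(4000 * s), round(400 * s))
```

Because panoramas are very wide, the horizontal ratio almost always determines the scale.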
- In step S23, the display control unit 30 supplies the stereoscopic panoramic moving image from the signal processing unit 24 to the display unit 31 and causes it to display the stereoscopic panoramic moving image. That is, the display control unit 30 supplies the frames of the right-eye and left-eye panoramic moving images to the display unit 31 in order at predetermined time intervals, and the display unit 31 displays them stereoscopically by the lenticular method.
- Specifically, the display unit 31 divides the right-eye and left-eye panoramic images of each frame into several strip-shaped images, and displays the stereoscopic panoramic moving image by alternately arranging the divided right-eye images and left-eye images in a predetermined direction.
- The light from the right-eye and left-eye panoramic images displayed in this divided manner is guided to the right eye and the left eye of the user viewing the display unit 31 by the lenticular lens constituting the display unit 31. As a result, the stereoscopic panoramic moving image is observed by the user's eyes.
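The division and alternate arrangement of the two views described above can be sketched as a column interleave. This is an illustrative sketch only: the function name is hypothetical, and a real panel's band width is fixed by the pitch of its lenticular sheet.

```python
import numpy as np

def interleave_lenticular(right_img, left_img, band=1):
    """Interleave right- and left-eye images in vertical bands.

    Columns alternate between the two views every `band` pixels; a
    lenticular sheet over the panel then steers each band of light
    toward the corresponding eye.
    """
    assert right_img.shape == left_img.shape
    out = right_img.copy()
    cols = np.arange(right_img.shape[1])
    left_cols = (cols // band) % 2 == 1       # every other band from the left view
    out[:, left_cols] = left_img[:, left_cols]
    return out

# Mark the right view 0 and the left view 1 to make the pattern visible.
r = np.zeros((4, 8), dtype=int)
l = np.ones((4, 8), dtype=int)
screen = interleave_lenticular(r, l)
```
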
- As described above, the imaging device 11 generates a plurality of strip images from each of a plurality of captured images captured at different times while shifting the cutout region, and combines the strip images to generate each frame of the panoramic moving images.
- In addition, the imaging device 11 generates a plurality of panoramic moving images, selects two of them according to the magnitude of the parallax designated by the user, and displays a stereoscopic panoramic moving image composed of the two selected panoramic moving images.
- According to the panoramic moving images generated in this way, the captured subject can be expressed with motion and displayed stereoscopically, so that the image of the subject can be displayed more effectively.
- Moreover, a stereoscopic panoramic moving image with different parallax can be presented according to a user request. That is, the user can designate a desired magnitude of parallax and view a stereoscopic panoramic moving image having the designated parallax.
- The imaging device 11 has been described with respect to an example in which three panoramic moving images are generated and one of three stereoscopic panoramic moving images having different parallaxes is displayed according to the user's parallax designation; however, any of a larger number of stereoscopic panoramic moving images having different parallaxes may be displayed.
- In such a case, as many mutually parallactic panoramic moving images as are needed for the number of displayable stereoscopic panoramic moving images are generated and recorded on the recording medium 29.
- Furthermore, instead of recording the three panoramic moving images on the recording medium 29, the three stereoscopic panoramic moving images, that is, the stereoscopic panoramic moving image LR, the stereoscopic panoramic moving image MR, and the stereoscopic panoramic moving image ML, may be generated in advance and recorded on the recording medium 29. In such a case, the stereoscopic panoramic moving image having the parallax designated by the user is read from the recording medium 29 and displayed.
- In the above description, N captured images are captured and all of them are once recorded in the buffer memory 26, and the panoramic moving images are then generated using those captured images; however, the panoramic moving images may also be generated simultaneously while the captured images are being captured.
- In addition, a reduced panoramic moving image may be generated directly from the captured images.
- In this way, since the amount of processing until the stereoscopic panoramic moving image is reproduced can be reduced, the stereoscopic panoramic moving image can be displayed more quickly.
- Furthermore, a device such as a personal computer may be provided with the function of generating panoramic moving images from captured images, so that the panoramic moving images are generated from captured images captured by a camera.
- In addition, a stereoscopic panoramic image composed of right-eye and left-eye panoramic images having the designated parallax may be displayed. In such a case, two panoramic images determined by the designated parallax are selected from the panoramic image PM, the panoramic image PL, and the panoramic image PR, and a stereoscopic panoramic image composed of the selected panoramic images is displayed.
- The series of processes described above can be executed by hardware or by software.
- When the series of processes is executed by software, a program constituting the software is installed from a program recording medium into a computer incorporated in dedicated hardware, or into, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
- FIG. 10 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
- In the computer, a CPU (Central Processing Unit) 301, a ROM (Read Only Memory) 302, and a RAM (Random Access Memory) 303 are interconnected by a bus 304.
- An input/output interface 305 is further connected to the bus 304.
- The input/output interface 305 is connected to an input unit 306 including a keyboard, a mouse, and a microphone, an output unit 307 including a display and a speaker, a recording unit 308 including a hard disk and a nonvolatile memory, a communication unit 309 including a network interface, and a drive 310 that drives a removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer configured as described above, the CPU 301, for example, loads the program recorded in the recording unit 308 into the RAM 303 via the input/output interface 305 and the bus 304 and executes it, whereby the above-described series of processes is performed.
- The program executed by the computer (CPU 301) is provided recorded on the removable medium 311, which is a package medium including, for example, a magnetic disk (including a flexible disk), an optical disk (such as a CD-ROM (Compact Disc-Read Only Memory) or a DVD (Digital Versatile Disc)), a magneto-optical disk, or a semiconductor memory, or is provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- The program can be installed in the recording unit 308 via the input/output interface 305 by mounting the removable medium 311 on the drive 310. The program can also be received by the communication unit 309 via a wired or wireless transmission medium and installed in the recording unit 308. Alternatively, the program can be installed in advance in the ROM 302 or the recording unit 308.
- The program executed by the computer may be a program whose processing is performed in time series in the order described in this specification, or a program whose processing is performed in parallel or at necessary timing, such as when a call is made.
Description
An imaging device to which the present invention is applied is composed of, for example, a camera, and generates a stereoscopic panoramic moving image from a plurality of captured images continuously captured by the imaging device while the imaging device is moving. The stereoscopic panoramic moving image is composed of two panoramic moving images having parallax.
FIG. 3 is a diagram showing a configuration example of an embodiment of the imaging device 11 to which the present invention is applied.
In more detail, the signal processing unit 24 of FIG. 3 is configured as shown in FIG. 4.
Next, with reference to the flowchart of FIG. 5, the stereoscopic panoramic moving image reproduction processing in which the imaging device 11 captures the captured images, generates a stereoscopic panoramic moving image, and reproduces the stereoscopic panoramic moving image will be described. This stereoscopic panoramic moving image reproduction processing is started when the user operates the operation input unit 21 and instructs generation of a stereoscopic panoramic moving image.
Claims (7)
- An image processing apparatus comprising:
position information generating means for generating, on the basis of a plurality of captured images obtained by imaging with an imaging means while moving the imaging means, position information indicating the relative positional relationship of each of the captured images when the plurality of captured images are arranged on a predetermined plane so that the same subject included in different ones of the captured images overlaps;
strip image generating means for generating first to third strip images from each of the plurality of captured images by cutting out, when the plurality of captured images are arranged on the plane on the basis of the position information, first to third regions on the captured image extending from first to third reference positions on the captured image to the first to third reference positions of another captured image arranged so as to overlap the captured image;
panoramic image generating means for generating first to third panoramic images, in which the same region of the imaging space that was the imaging target when the plurality of captured images were captured is displayed and which have parallax with respect to one another, by arranging and combining each of the first to third strip images obtained from the plurality of captured images; and
selecting means for selecting two of the first to third panoramic images,
wherein the first reference position is located between the second reference position and the third reference position on the captured image, and a distance from the first reference position to the second reference position differs from a distance from the first reference position to the third reference position.
- The image processing apparatus according to claim 1, further comprising display control means for stereoscopically displaying the same region of the imaging space by simultaneously displaying the two panoramic images selected by the selecting means from among the first to third panoramic images.
- The image processing apparatus according to claim 1, wherein the strip image generating means generates, for the plurality of captured images, a plurality of the first to third strip images from each captured image while shifting the first to third regions on the captured image in a predetermined direction, and
the panoramic image generating means generates the first to third panoramic images for each position of the first to third regions, thereby generating image groups each consisting of a plurality of the first to third panoramic images in which the same region of the imaging space is displayed.
- The image processing apparatus according to claim 1, wherein the position information generating means generates the position information by using a plurality of predetermined block regions on the captured image and searching, in a captured image captured at a time earlier than the captured image, for block corresponding regions corresponding to each of the plurality of block regions.
- The image processing apparatus according to claim 4, wherein the position information generating means detects a block region containing a moving subject on the basis of the relative positional relationship of the plurality of block regions and the relative positional relationship of the plurality of block corresponding regions, and, when a block region containing a moving subject is detected, generates the position information by searching for the block corresponding regions using block regions other than the detected block region among the plurality of block regions.
- An image processing method for an image processing apparatus comprising:
position information generating means for generating, on the basis of a plurality of captured images obtained by imaging with an imaging means while moving the imaging means, position information indicating the relative positional relationship of each of the captured images when the plurality of captured images are arranged on a predetermined plane so that the same subject included in different ones of the captured images overlaps;
strip image generating means for generating first to third strip images from each of the plurality of captured images by cutting out, when the plurality of captured images are arranged on the plane on the basis of the position information, first to third regions on the captured image extending from first to third reference positions on the captured image to the first to third reference positions of another captured image arranged so as to overlap the captured image;
panoramic image generating means for generating first to third panoramic images, in which the same region of the imaging space that was the imaging target when the plurality of captured images were captured is displayed and which have parallax with respect to one another, by arranging and combining each of the first to third strip images obtained from the plurality of captured images; and
selecting means for selecting two of the first to third panoramic images,
the method comprising the steps of:
the position information generating means generating the position information from the plurality of captured images;
the strip image generating means generating the first to third strip images from the plurality of captured images;
the panoramic image generating means generating each of the first to third panoramic images from each of the first to third strip images; and
the selecting means selecting two of the first to third panoramic images,
wherein the first reference position is located between the second reference position and the third reference position on the captured image, and a distance from the first reference position to the second reference position differs from a distance from the first reference position to the third reference position.
- A program for causing a computer to execute processing comprising the steps of:
generating, on the basis of a plurality of captured images obtained by imaging with an imaging means while moving the imaging means, position information indicating the relative positional relationship of each of the captured images when the plurality of captured images are arranged on a predetermined plane so that the same subject included in different ones of the captured images overlaps;
cutting out, when the plurality of captured images are arranged on the plane on the basis of the position information, first to third regions on each captured image extending from first to third reference positions on the captured image to the first to third reference positions of another captured image arranged so as to overlap the captured image, thereby generating first to third strip images from each of the plurality of captured images;
generating first to third panoramic images, in which the same region of the imaging space that was the imaging target when the plurality of captured images were captured is displayed and which have parallax with respect to one another, by arranging and combining each of the first to third strip images obtained from the plurality of captured images; and
selecting two of the first to third panoramic images,
wherein the first reference position is located between the second reference position and the third reference position on the captured image, and a distance from the first reference position to the second reference position differs from a distance from the first reference position to the third reference position.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| BRPI1006013A BRPI1006013A2 (pt) | 2009-10-09 | 2010-10-01 | aparelho e método de processamento de imagem, e, programa para fazer com que um computador para executar um processo. |
| US13/131,922 US20120242780A1 (en) | 2009-10-09 | 2010-10-01 | Image processing apparatus and method, and program |
| CN2010800034135A CN102239698A (zh) | 2009-10-09 | 2010-10-01 | 图像处理设备、方法以及程序 |
| EP10821920.5A EP2355532A4 (en) | 2009-10-09 | 2010-10-01 | IMAGE PROCESSING DEVICE AND METHOD AND PROGRAM |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2009235403A JP2011082919A (ja) | 2009-10-09 | 2009-10-09 | 画像処理装置および方法、並びにプログラム |
| JP2009-235403 | 2009-10-09 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2011043249A1 true WO2011043249A1 (ja) | 2011-04-14 |
Family
ID=43856707
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2010/067199 Ceased WO2011043249A1 (ja) | 2009-10-09 | 2010-10-01 | 画像処理装置および方法、並びにプログラム |
Country Status (6)
| Country | Link |
|---|---|
| US (1) | US20120242780A1 (ja) |
| EP (1) | EP2355532A4 (ja) |
| JP (1) | JP2011082919A (ja) |
| CN (1) | CN102239698A (ja) |
| BR (1) | BRPI1006013A2 (ja) |
| WO (1) | WO2011043249A1 (ja) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103051916A (zh) * | 2011-10-12 | 2013-04-17 | 三星电子株式会社 | 产生三维(3d)全景图像的设备和方法 |
Families Citing this family (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5347890B2 (ja) * | 2009-10-09 | 2013-11-20 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
| JP5406813B2 (ja) * | 2010-10-05 | 2014-02-05 | 株式会社ソニー・コンピュータエンタテインメント | パノラマ画像表示装置およびパノラマ画像表示方法 |
| CN103576438B (zh) * | 2012-08-08 | 2017-03-01 | 西蒙·R·杰马耶勒 | 制作三维照片的方法 |
| US20140152765A1 (en) * | 2012-12-05 | 2014-06-05 | Samsung Electronics Co., Ltd. | Imaging device and method |
| RU2621488C2 (ru) | 2013-02-14 | 2017-06-06 | Seiko Epson Corporation | Head-mounted display and control method for a head-mounted display |
| JP6161461B2 (ja) * | 2013-08-01 | 2017-07-12 | Canon Inc. | Image processing apparatus, control method therefor, and control program |
| US10477179B2 (en) * | 2014-08-13 | 2019-11-12 | Telefonaktiebolaget Lm Ericsson (Publ) | Immersive video |
| US9420177B2 (en) * | 2014-10-10 | 2016-08-16 | IEC Infrared Systems LLC | Panoramic view imaging system with laser range finding and blind spot detection |
| JP6726931B2 (ja) * | 2015-03-20 | 2020-07-22 | Canon Inc. | Image processing apparatus and method, and image processing system |
| WO2017182789A1 (en) | 2016-04-18 | 2017-10-26 | Argon Design Ltd | Blending images |
| JP2017212698A (ja) * | 2016-05-27 | 2017-11-30 | Canon Inc. | Imaging apparatus, control method for imaging apparatus, and program |
| JP6869841B2 (ja) | 2017-07-20 | 2021-05-12 | Canon Inc. | Image processing apparatus, control method for image processing apparatus, and program |
| CN107580175A (zh) * | 2017-07-26 | 2018-01-12 | Jinan Zhongwei Century Technology Co., Ltd. | Method for single-lens panoramic stitching |
| DE102018202707A1 (de) * | 2018-02-22 | 2019-08-22 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Generation of panoramic images |
| CN110830704B (zh) * | 2018-08-07 | 2021-10-22 | Naver Corporation | Rotating image generation method and device therefor |
| CN117278733B (zh) * | 2023-11-22 | 2024-03-19 | Weifang Weilong E-commerce Technology Co., Ltd. | Display method and system for panoramic camera footage in a VR head-mounted display |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2003524927A (ja) * | 1998-09-17 | 2003-08-19 | Yissum Research Development Company of the Hebrew University of Jerusalem | System and method for generating and displaying panoramic images and movies |
| JP2005529559A (ja) * | 2002-06-10 | 2005-09-29 | Rafael-Armament Development Authority Ltd. | Method for generating a stereoscopic image from a monoscopic image |
| JP2009103980A (ja) * | 2007-10-24 | 2009-05-14 | Fujifilm Corp | Imaging device, image processing device, and imaging system |
Family Cites Families (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6831677B2 (en) * | 2000-02-24 | 2004-12-14 | Yissum Research Development Company Of The Hebrew University Of Jerusalem | System and method for facilitating the adjustment of disparity in a stereoscopic panoramic image pair |
| US7477284B2 (en) * | 1999-09-16 | 2009-01-13 | Yissum Research Development Company Of The Hebrew University Of Jerusalem | System and method for capturing and viewing stereoscopic panoramic images |
| EP1586204A1 (en) * | 2003-01-24 | 2005-10-19 | Micoy Corporation | Stereoscopic panoramic image capture device |
| US7606441B2 (en) * | 2003-11-27 | 2009-10-20 | Seiko Epson Corporation | Image processing device and a method for the same |
| JP2005295004A (ja) * | 2004-03-31 | 2005-10-20 | Sanyo Electric Co Ltd | Stereoscopic image processing method and stereoscopic image processing apparatus |
| JP2006146067A (ja) * | 2004-11-24 | 2006-06-08 | Canon Inc | Camera, lens device, and camera system |
| US20070081081A1 (en) * | 2005-10-07 | 2007-04-12 | Cheng Brett A | Automated multi-frame image capture for panorama stitching using motion sensor |
| JP2007201566A (ja) * | 2006-01-24 | 2007-08-09 | Nikon Corp | Image reproduction device and image reproduction program |
| US8107769B2 (en) * | 2006-12-28 | 2012-01-31 | Casio Computer Co., Ltd. | Image synthesis device, image synthesis method and memory medium storage image synthesis program |
| JP2009124340A (ja) * | 2007-11-13 | 2009-06-04 | Fujifilm Corp | Imaging device, shooting support method, and shooting support program |
| US8103134B2 (en) * | 2008-02-20 | 2012-01-24 | Samsung Electronics Co., Ltd. | Method and a handheld device for capturing motion |
| US20100265313A1 (en) * | 2009-04-17 | 2010-10-21 | Sony Corporation | In-camera generation of high quality composite panoramic images |
- 2009
  - 2009-10-09 JP: application JP2009235403A, publication JP2011082919A (not active, withdrawn)
- 2010
  - 2010-10-01 US: application US13/131,922, publication US20120242780A1 (not active, abandoned)
  - 2010-10-01 CN: application CN2010800034135A, publication CN102239698A (active, pending)
  - 2010-10-01 EP: application EP10821920.5A, publication EP2355532A4 (not active, withdrawn)
  - 2010-10-01 WO: application PCT/JP2010/067199, publication WO2011043249A1 (not active, ceased)
  - 2010-10-01 BR: application BRPI1006013A, publication BRPI1006013A2 (not active, IP right cessation)
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP2355532A4 * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103051916A (zh) * | 2011-10-12 | 2013-04-17 | Samsung Electronics Co., Ltd. | Apparatus and method for generating three-dimensional (3D) panoramic images |
| CN103051916B (zh) * | 2011-10-12 | 2016-08-03 | Samsung Electronics Co., Ltd. | Apparatus and method for generating three-dimensional (3D) panoramic images |
Also Published As
| Publication number | Publication date |
|---|---|
| EP2355532A4 (en) | 2013-06-05 |
| JP2011082919A (ja) | 2011-04-21 |
| US20120242780A1 (en) | 2012-09-27 |
| BRPI1006013A2 (pt) | 2016-04-05 |
| CN102239698A (zh) | 2011-11-09 |
| EP2355532A1 (en) | 2011-08-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5418127B2 (ja) | Image processing apparatus and method, and program | |
| WO2011043249A1 (ja) | Image processing apparatus and method, and program | |
| WO2011043248A1 (ja) | Image processing apparatus and method, and program | |
| JP5347890B2 (ja) | Image processing apparatus and method, and program | |
| JP5267396B2 (ja) | Image processing apparatus and method, and program | |
| JP5390707B2 (ja) | Stereoscopic panoramic image synthesis device, imaging device, stereoscopic panoramic image synthesis method, recording medium, and computer program | |
| JP5287702B2 (ja) | Image processing apparatus and method, and program | |
| JP5387905B2 (ja) | Image processing apparatus and method, and program | |
| CN103051916B (zh) | Apparatus and method for generating three-dimensional (3D) panoramic images | |
| CN102547327A (zh) | Image processing device, image processing method, and program | |
| WO2012039306A1 (ja) | Image processing device, imaging device, image processing method, and program | |
| JP2011160347A (ja) | Recording device and recording method, image processing device and image processing method, and program | |
| KR20250113969A (ko) | Techniques for displaying and capturing images | |
| JP5566196B2 (ja) | Image processing apparatus and control method thereof | |
| WO2012086298A1 (ja) | Imaging device, method, and program | |
| JP2012215980A (ja) | Image processing device, image processing method, and program | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WWE | Wipo information: entry into national phase |
Ref document number: 201080003413.5 Country of ref document: CN |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2010821920 Country of ref document: EP |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10821920 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 3800/CHENP/2011 Country of ref document: IN |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 13131922 Country of ref document: US |
|
| REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: PI1006013 Country of ref document: BR |
|
| ENP | Entry into the national phase |
Ref document number: PI1006013 Country of ref document: BR Kind code of ref document: A2 Effective date: 20110602 |