WO2011148595A1 - Image Processing Device, Image Processing Method, and Image Processing Program - Google Patents
Image processing device, image processing method, and image processing program
- Publication number
- WO2011148595A1 (application PCT/JP2011/002765)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- reference point
- image
- conversion target
- candidate
- information
- Prior art date
- Legal status
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
Definitions
- The present invention relates to an image processing device, an image processing method, and an image processing program, and in particular to an image processing device, image processing method, and image processing program for generating an image for three-dimensional display of a target object from a plurality of conversion target images that include the target object.
- Patent Documents 1 to 3 each describe a method for capturing images in which the object of interest is located at the center of the image.
- In the imaging device described in Patent Document 1, the object of interest is photographed from various angles using a plurality of cameras and a support unit that holds them.
- the imaging device described in Patent Document 2 captures an object of interest from various angles using a plurality of cameras and a turntable connected to a computer.
- In the method described in Patent Document 3, the object of interest is photographed from various angles with a camera that is not fixed, the center of the object of interest in each image is detected from camera posture information obtained by a posture detector, a two-dimensional geometric transformation matrix for aligning those positions is calculated, and the images are transformed.
- However, the methods of Patent Documents 1 to 3 have the problem that, in order to position the target object at the center of the image, many devices are required in addition to a single camera, such as a plurality of cameras whose installation locations are fixed, a turntable on which the target object is placed, and an acceleration sensor.
- The support unit of Patent Document 1 requires a plurality of devices, such as pan heads for fixing the cameras, a support frame for holding the pan heads, a support arm, and a support column. Moreover, the photographer must arrange the cameras along the same straight line, the same arc, the same plane, or the same geometric shape, so preparation for shooting is very laborious.
- In Patent Document 2, the operation of the turntable is controlled by a computer, and the cameras are installed at equal intervals on an arc-shaped installation base. Since the photographer only has to place the object of interest on the turntable, no special knowledge of cameras or setup work is required; however, as in Patent Document 1, many devices other than the cameras are needed.
- Patent Document 3 does not require a plurality of cameras or an installation base for fixing the cameras. Therefore, the photographer only needs to photograph with the camera.
- However, a device other than the camera, namely an attitude detection device such as an acceleration sensor or a magnetic sensor, is still required.
- An object of the present invention is therefore to provide an image processing apparatus, an image processing method, and an image processing program capable of generating an image for three-dimensional display from images photographed by a single camera, without using other devices such as a turntable on which the object of interest is placed or an acceleration sensor.
- An image processing apparatus according to the present invention generates an image for three-dimensional display of a target object from a plurality of conversion target images including the target object, and comprises: image display means for displaying at least one conversion target image; first reference point reception determination means for receiving, in response to a user operation, information on a first reference point candidate that is a candidate for a first reference point serving as a reference for the input of geometric transformation, displaying the first reference point candidate on the conversion target image based on that information, receiving a determination signal for the displayed first reference point candidate, and determining the first reference point based on the information on the first reference point candidate targeted by the received determination signal; second reference point reception determination means for receiving information on a second reference point serving as a reference for the output of the geometric transformation and determining the second reference point; and geometric transformation means for geometrically transforming each conversion target image based on the first reference point determined by the first reference point reception determination means and the second reference point determined by the second reference point reception determination means, and outputting the converted image.
- An image processing method according to the present invention generates an image for three-dimensional display by geometrically transforming each of a plurality of conversion target images including a target object. The method displays at least one conversion target image via image display means; receives, in response to a user operation, information on a first reference point candidate that is a candidate for a first reference point serving as a reference for the input of geometric transformation; displays the first reference point candidate on the conversion target image based on the received information; receives, in response to a user operation, a determination signal for the first reference point candidate displayed on the conversion target image; determines the first reference point based on the information on the first reference point candidate targeted by the received determination signal; receives information on a second reference point serving as a reference for the output of the geometric transformation and determines the second reference point; and, for each conversion target image, geometrically transforms the image based on the determined first and second reference points and outputs the converted image.
- An image processing program according to the present invention causes a computer to generate an image for three-dimensional display by geometrically transforming each of a plurality of conversion target images including a target object. The program causes the computer to execute: a process of displaying at least one conversion target image via image display means; a process of receiving, in response to a user operation, information on a first reference point candidate that is a candidate for a first reference point serving as a reference for the input of geometric transformation, and displaying the first reference point candidate on the conversion target image; a process of determining the first reference point based on the information on the first reference point candidate targeted by a determination signal input in response to a user operation for the displayed first reference point candidate; a process of determining a second reference point when information on the second reference point, which serves as a reference for the output of the geometric transformation, is received; and a process of geometrically transforming each conversion target image based on the first reference point and the second reference point.
- According to the present invention, an image for three-dimensional display can be generated from images captured by a single camera, without using other devices such as a turntable on which the object of interest is placed or an acceleration sensor.
- FIG. 1 is a block diagram illustrating a configuration example of an image processing apparatus according to the first embodiment of the present invention.
- the image processing apparatus shown in FIG. 1 includes a first reference point accepting unit 11, an image display unit 12, a second reference point accepting unit 13, a geometric transformation method determining unit 14, and a geometric transformation unit 15.
- The first reference point accepting unit 11 accepts, as input from the user, a first reference point candidate, which is a candidate for the first reference point, for each of a plurality of images including the target object (hereinafter referred to as conversion target images).
- the first reference point receiving means 11 receives a determination signal for the received first reference point candidate and determines the first reference point.
- the first reference point is a reference point on the input side among points serving as input and output references for geometric transformation. That is, the first reference point is a reference point represented by position information (image coordinates) in the coordinate system of the conversion target image.
- the first reference point is set for each conversion target image.
- The reference points may be, for example, two or more points that constitute the rotation axis of the camera in the three-dimensional image (for example, a 360-degree rotating view of the object of interest) that the converted images are intended to represent.
- the rotation axis of the camera is not the rotation axis of the camera at the time of actual shooting, but the axis that the user wants to use as the center of camera movement in the target three-dimensional image.
- the number of points necessary as reference points can be set according to the geometric transformation method, but is 2 or more in any method.
- the second reference point is a reference point on the output side, and is a point to which the first reference point is converted. That is, the second reference point is a reference point represented by position information in the coordinate system of the converted image.
- Each point of the first reference point and the corresponding point of the second reference point are related in that they are the image coordinates onto which the same three-dimensional point is projected. In generating a three-dimensional image, it is desirable to use the same three-dimensional points, such as two or more points constituting the rotation axis of the camera, as reference points across the conversion target images.
- The first reference point accepting unit 11 may accept the first reference point candidate by receiving its information (position information and the like) input in response to a user operation performed on each conversion target image displayed on the image display unit 12.
- the first reference point receiving unit 11 may display the received first reference point candidate on the conversion target image via the image display unit 12.
- The first reference point candidate can then be moved on the conversion target image based on further input information. Information on the first reference point candidate is input, for example, by pointing with a mouse or by using a keyboard.
- When the first reference point receiving means 11 receives a first reference point determination signal (information instructing that the first reference point candidate targeted by the signal be determined as the first reference point), it may determine the first reference point candidate at that time as the first reference point.
- the first reference point receiving means 11 determines at least two points as the first reference point.
- For example, two points set in advance as initial values of the first reference point candidates may be displayed in the drawing area of the operation screen, and the user may then be allowed to move their positions.
- When the first reference point is determined, the first reference point receiving unit 11 outputs the determined first reference point to the geometric transformation method determining unit 14.
- the image display means 12 displays an image.
- the image display unit 12 displays the conversion target image or displays an image in which the first reference point is superimposed on the conversion target image. Further, the image display unit 12 may display the second reference point or display the converted image within the frame of the converted image that is the image after conversion. For example, the image display unit 12 may display image information of an operation screen including these displays.
- The second reference point receiving unit 13 receives the second reference point, which is the conversion destination of the first reference point, and outputs it to the geometric conversion method determining unit 14. For example, when the first reference point consists of a plurality of image coordinates indicating the positions of the points (two or more) constituting the rotation axis of the camera in the conversion target image, the second reference point may be set as the image coordinates in the converted image onto which each of those points is to be projected. As with the first reference point, the second reference point receiving means 13 may have the user set (input) the second reference point each time, or the second reference point may be set in advance. Each point of the second reference point is set in association with a point of the first reference point.
- When the second reference point is set in advance, reading the setting information corresponds to inputting the second reference point. It is also possible to set the second reference point as a value common to all converted images; for example, the position (image coordinates) of the rotation axis of the camera may be set so that the object of interest is positioned approximately at the center of each converted image.
- The geometric transformation method determining means 14 determines the geometric transformation method to be applied to each conversion target image based on the first reference point output from the first reference point receiving means 11 and the second reference point output from the second reference point receiving means 13. More specifically, it determines the parameters of the transformation formula (coordinate transformation formula) for the geometric transformation applied to each conversion target image. For example, in accordance with a preset transformation technique, the geometric transformation method determination means 14 may determine the parameters of the predetermined conversion formula (coordinate conversion formula) used by that technique, based on the required number of first and second reference points set for each conversion target image.
- the geometric transformation method determination unit 14 outputs the determined parameter or information on the coordinate conversion formula including the determined parameter to the geometric transformation unit 15 as information indicating the geometric transformation method.
- the geometric transformation means 15 performs geometric transformation on each transformation target image based on the geometric transformation technique determined by the geometric transformation technique decision means 14 and outputs a converted image.
- the geometric conversion unit 15 obtains a converted image by converting pixel information of the conversion target image using, for example, a coordinate conversion formula input as information indicating a geometric conversion method. This conversion process includes a process of interpolating the converted image coordinates to compensate for pixel loss.
- the geometric conversion unit 15 may be configured to include the function of the geometric conversion method determination unit 14. That is, the geometric transformation method determining unit 14 may be mounted on the geometric transformation unit 15.
- the first reference point receiving means 11 and the second reference point receiving means 13 are realized by, for example, an information input device such as a mouse, a keyboard, and a touch panel, and a CPU that operates according to a program.
- The image display means 12 is implemented by, for example, an image display device such as a monitor or a projector.
- the geometric transformation method determination unit 14 and the geometric transformation unit 15 are realized by, for example, hardware designed to perform specific arithmetic processing or the like, or a CPU that operates according to a program.
- FIG. 2 is a flowchart showing an example of the operation of the image processing apparatus according to the present embodiment.
- the image display means 12 displays a conversion target image (step S11).
- the conversion target image is displayed in order to receive the input of the first reference point candidate.
- For example, the image display unit 12 displays an operation screen that includes a drawing area (with one conversion target image selected from the input conversion target image group as its background) and an input area in which points, marks, and lines connecting them are drawn according to mouse operations.
- the image display unit 12 may sequentially switch the conversion target images to be displayed, or may arbitrarily switch in accordance with a user operation.
- the first reference point accepting means 11 inputs a first reference point candidate for the conversion target image being displayed in response to a user operation (step S12).
- the first reference point receiving means 11 inputs information on the first reference point candidate for one conversion target image.
- For example, the first reference point receiving unit 11 prompts the user to input the first reference point candidate by displaying a message on the operation screen showing the conversion target image, or by displaying on the conversion target image two points set as initial values of the first reference point candidate together with a message prompting the user to adjust those points.
- the image display means 12 displays the first reference point candidate on the conversion target image based on the input information (step S13). That is, the image display unit 12 displays a composite image of the conversion target image and the first reference point candidate. In the process of step S13, the image display means 12 may use a user interface such that the first reference point candidate is drawn based on information input in the drawing area of the operation screen.
- The first reference point receiving means 11 determines the first reference point according to the user operation (step S14). For example, when it receives a first reference point determination signal input in response to a user operation, the first reference point receiving unit 11 determines the first reference point candidate displayed at that time as the first reference point. Alternatively, it may treat the first reference point candidate that is in a selected state at that time as the target of the determination signal; for example, when more first reference point candidates are displayed than the required number of reference points, an interface may be provided for inputting the first reference point determination signal while some of the candidates are selected.
- the first reference point determination signal may be input by the user pressing a determination button.
- the user may confirm the position of the first reference point candidate displayed in step S13 and press the determination button when determining that the reference point condition is satisfied. Note that the user may adjust, delete, add, etc., if necessary.
- Suppose that when the first reference point receiving means 11 receives the first reference point determination signal, the number of first reference point candidates targeted by the signal (for example, the candidates currently displayed or selected) is less than the required number of reference points. In this case, the first reference point accepting unit 11 may notify the user of this and have the user redo the selection operation or input information on a new first reference point candidate.
- the first reference point receiving unit 11 determines a first reference point including two or more points for each of the conversion target images in accordance with such a user operation. It is assumed that identification information for associating with the second reference point is given to each point set as the first reference point.
- The identification information may be any information that can maintain the correspondence between a point of the first reference point and the corresponding point of the second reference point.
- the identification information may be, for example, an array subscript.
- the second reference point receiving means 13 inputs the second reference point (step S15).
- the second reference point accepting unit 13 may input the second reference point by, for example, reading preset second reference point information from the storage unit or the like.
- the second reference point may be input by receiving information on the second reference point from the user.
- the second reference point receiving unit 13 may input, for example, information on the second reference point common to all the conversion target images.
- Next, the geometric conversion method determination means 14 determines, from the first reference point and the second reference point, the geometric transformation method to be applied to each conversion target image (step S16). For example, the geometric transformation method determination means 14 generates the transformation formula by determining the parameters of the geometric transformation formula to be applied to each conversion target image based on the number of reference points and their positions (image coordinates) in the images.
- the geometric conversion means 15 performs geometric conversion on each conversion target image using the conversion method (conversion formula) determined in step S16. Then, the geometric conversion means 15 outputs the image generated as a result as a converted image (step S17).
- FIG. 3 is an explanatory diagram showing an outline of geometric transformation performed on the transformation target image in the present embodiment.
- In FIG. 3, the conversion target images are denoted image 1, ..., image m, ..., image n. In this example, the first reference point consists of the image coordinates of two points that constitute a line segment serving as the rotation axis of the camera in three-dimensional space, namely a line segment that is perpendicular to the ground and passes through the center of gravity of the object of interest. The two points are the upper end and the lower end of the object of interest.
- The ○ mark and the × mark drawn on the upper conversion target images represent the first reference points set for each conversion target image: the ○ mark is the first point (upper-end reference point) of the first reference point, and the × mark is the second point (lower-end reference point). Similarly, the ○ mark and the × mark drawn on the lower converted images 1, m, and n represent the second reference points, which are the conversion destinations of the first reference points set for conversion target images 1, m, and n.
- In FIG. 3, the first reference point is denoted "P1" and the second reference point "P2", written as P1[A][B] and P2[A][B], where A is the image index and B is the reference point index. Each index starts from 1.
- P1[1][1] represents the image coordinates of the first point (upper-end reference point) of the first reference point in the first conversion target image (image 1).
- P1[1][2] represents the image coordinates of the second point (lower-end reference point) of the first reference point in image 1.
- P2[n][1] represents the image coordinates of the first point (upper-end reference point) of the second reference point in the converted image n generated from the n-th conversion target image (image n).
- P2[n][2] represents the image coordinates of the second point (lower-end reference point) of the second reference point in the converted image n generated from image n.
- the second reference point is common to all images.
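- As an illustrative sketch (not part of the patent text), the P1/P2 bookkeeping of FIG. 3 could be held in plain arrays as below; the coordinate values are placeholders, and the 0-based Python indexing differs from the 1-based indices in the figure.

```python
# Illustrative sketch of the P1/P2 notation of FIG. 3 (coordinate values are placeholders).
# P1[m][b]: first reference point b (0: upper end, 1: lower end) of conversion target image m.
P1 = [
    [(120.0, 40.0), (118.0, 300.0)],   # image 1: upper end, lower end
    [(200.0, 55.0), (205.0, 310.0)],   # image m
    [(310.0, 60.0), (300.0, 295.0)],   # image n
]

# In this example the second reference point is common to all converted images.
P2_common = [(160.0, 32.0), (160.0, 288.0)]
P2 = [P2_common for _ in P1]           # P2[m][b] mirrors P1[m][b]
```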
- the image display unit 12 displays an image including the target object input as the conversion target image via an image display device such as a monitor or a projector included in the image display unit 12.
- the image display unit 12 displays an operation screen including the conversion target image and an input interface such as a toolbar or a cursor.
- the user inputs the first reference point candidate using a mouse, a keyboard, or the like while viewing the conversion target image displayed on the image display unit 12.
- Information on the first reference point candidate (the image coordinates of the ○ mark and the × mark) is input by pointing to the positions of the two points that constitute the line segment perpendicular to the ground and passing through the center of gravity of the object of interest.
- The first reference point receiving means 11 receives, for example, this information on the first reference point candidate (the image coordinates of the ○ mark and the × mark) as input from the user for the currently displayed image 1.
- the image display means 12 displays the input first reference point candidate so as to overlap the conversion target image so that the user can confirm the position of the first reference point candidate in the conversion target image.
- If the user is satisfied with the positional relationship between the displayed first reference point candidates and the object of interest, the user inputs the first reference point determination signal. If the user is not satisfied with the positional relationship, the user can switch the validity/invalidity of a first reference point candidate or adjust its position by operating an input interface such as a toolbar or a cursor with a mouse or keyboard.
- the first reference point receiving means 11 sets these first reference point candidates as the first reference points and outputs them to the geometric transformation method determining means 14.
- the second reference point receiving means 13 receives the second reference point that is the conversion destination of the first reference point.
- For example, the user inputs a second reference point candidate using a mouse or keyboard and, if satisfied, inputs a second reference point determination signal; the candidate is then accepted as the second reference point.
- the user matches the horizontal coordinates of the first reference point and the second reference point.
- For example, where w and h denote the width and height of the converted image, the image coordinates of the first point (upper-end reference point) are set to (0.5w, 0.9h), and those of the second point (lower-end reference point) to (0.5w, 0.1h).
- the above setting may be determined in advance, and the information on the second reference point may be read from storage means such as an external recording medium.
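- A minimal sketch of setting this common second reference point from the converted-image size, using the example placement above; the function name is illustrative and not taken from the patent.

```python
def default_second_reference_points(w, h):
    """Common second reference point, following the example in the text.

    Upper-end point at (0.5w, 0.9h), lower-end point at (0.5w, 0.1h),
    where w and h are the width and height of the converted image.
    """
    return [(0.5 * w, 0.9 * h), (0.5 * w, 0.1 * h)]

# Example: a 640 x 480 converted image.
print(default_second_reference_points(640, 480))  # [(320.0, 432.0), (320.0, 48.0)]
```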
- When it receives the second reference point, the second reference point receiving unit 13 outputs the second reference point to the geometric transformation method determining unit 14.
- the geometric transformation method determining means 14 determines the geometric transformation method from the first reference point and the second reference point, and outputs it to the geometric transformation means 15.
- In this example, a similarity transformation is used as the geometric transformation. The similarity transformation is given by equation (1).
- The similarity transformation parameters (a, b, c, d) to be obtained form the vector that minimizes equation (2), and can be calculated by scaling, so that its fifth component equals 1, the eigenvector corresponding to the smallest-magnitude eigenvalue of the 4 × 5 matrix on the right-hand side of equation (2).
- the geometric transformation means 15 performs geometric transformation on the transformation target image to be processed using the similarity transformation parameters (a, b, c, d) obtained in this way, and outputs the transformation image.
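- Equations (1) and (2) appear as figures in the published patent and are not reproduced here. The sketch below therefore assumes the common 4-parameter similarity model x' = a·x − b·y + c, y' = b·x + a·y + d, estimates (a, b, c, d) from the point correspondences by least squares rather than by the eigenvector scaling described above, and applies the result with OpenCV's warpAffine; it is an illustration, not the patent's exact formulation.

```python
import numpy as np
import cv2

def estimate_similarity(p1, p2):
    """Estimate similarity parameters (a, b, c, d) mapping points p1 -> p2.

    Assumes x' = a*x - b*y + c, y' = b*x + a*y + d; at least two correspondences needed.
    """
    A, rhs = [], []
    for (x, y), (u, v) in zip(p1, p2):
        A.append([x, -y, 1.0, 0.0]); rhs.append(u)
        A.append([y,  x, 0.0, 1.0]); rhs.append(v)
    (a, b, c, d), *_ = np.linalg.lstsq(np.asarray(A), np.asarray(rhs), rcond=None)
    return a, b, c, d

def apply_similarity(image, params, out_size):
    """Warp a conversion target image with the estimated similarity parameters."""
    a, b, c, d = params
    M = np.float32([[a, -b, c],
                    [b,  a, d]])
    return cv2.warpAffine(image, M, out_size)  # out_size = (width, height) of the converted image

# Example for one conversion target image m:
# params = estimate_similarity(P1[m], P2[m])
# converted = apply_similarity(img, params, (w, h))
```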
- the geometric conversion means 15 performs interpolation to compensate for pixel loss.
- Various interpolation methods can be used, such as the nearest-neighbor method, which uses the pixel value of the observation point closest to the interpolation point as the pixel value of the interpolation point, and the bilinear method, which sets the pixel value of the interpolation point by linear interpolation of the pixel values of the four observation points surrounding it.
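- A minimal sketch of the bilinear method described above: the pixel value at a non-integer interpolation point is obtained by linear interpolation of the four surrounding observation points (a grayscale image is assumed for brevity).

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Bilinear interpolation of a grayscale image at real-valued coordinates (x, y)."""
    h, w = image.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]        # interpolate along x (upper row)
    bottom = (1 - fx) * image[y1, x0] + fx * image[y1, x1]     # interpolate along x (lower row)
    return (1 - fy) * top + fy * bottom                        # interpolate along y
```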
- the geometric transformation means 15 outputs the image subjected to the similarity transformation as a transformed image.
- The reason is that, for each of the plurality of images including the target object, the user can specify the image coordinates that constitute the rotation axis of the camera, and geometric transformation is performed so that the specified image coordinates coincide; as a result, the size and inclination of the object of interest are aligned across the images. Furthermore, the first reference point for determining the geometric transformation method is obtained from the images themselves, so the geometric transformation parameters are calculated using only image information and no equipment other than the camera is required at the time of shooting.
- The first reference point receiving means 11 and the second reference point receiving means 13 may each receive two or more image coordinates and output them to the geometric transformation method determining means 14; this can be handled by changing the calculation method of the geometric transformation method determining means 14 according to the chosen geometric transformation parameters and transformation formula.
- For example, when an affine transformation is used, the required number of corresponding points is at least three.
- In this case, the first reference point receiving unit 11 and the second reference point receiving unit 13 may each receive three or more image coordinates and output them to the geometric transformation method determining unit 14.
- The geometric transformation method determination means 14 may then obtain the transformation parameters (a, b, c, d, e, f) from equation (3) based on the output information on the first and second reference points.
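- Equation (3) is likewise given as a figure in the patent. Assuming it describes the usual 6-parameter affine model x' = a·x + b·y + c, y' = d·x + e·y + f, the parameters can be obtained from three or more correspondences by least squares, as sketched below.

```python
import numpy as np

def estimate_affine(p1, p2):
    """Least-squares affine parameters (a, b, c, d, e, f) mapping p1 -> p2 (>= 3 correspondences)."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(p1, p2):
        A.append([x, y, 1.0, 0.0, 0.0, 0.0]); rhs.append(u)
        A.append([0.0, 0.0, 0.0, x, y, 1.0]); rhs.append(v)
    params, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(rhs), rcond=None)
    return params  # [a, b, c, d, e, f]
```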
- When a projective transformation is used, the required number of corresponding points is at least four.
- In this case, the first reference point receiving unit 11 and the second reference point receiving unit 13 may each receive four or more image coordinates and output them to the geometric transformation method determining unit 14.
- The geometric transformation method determining means 14 may then obtain the transformation parameters (a1 to a8) from equation (4) based on the output information on the first and second reference points.
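- Equation (4) is also a figure in the patent. Assuming it describes the usual 8-parameter projective (homography) model, the parameters can be obtained from four correspondences, for example with OpenCV as sketched below.

```python
import numpy as np
import cv2

def estimate_projective(p1, p2):
    """Projective (homography) matrix from exactly four correspondences p1 -> p2."""
    src = np.float32(p1)   # shape (4, 2): first reference points
    dst = np.float32(p2)   # shape (4, 2): second reference points
    H = cv2.getPerspectiveTransform(src, dst)  # 3x3 matrix, last element normalized to 1
    return H  # its eight free entries correspond to the parameters a1..a8 up to this normalization

# The converted image can then be produced with, for example:
# converted = cv2.warpPerspective(img, H, (w, h))
```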
- FIG. 4 is a block diagram illustrating a configuration example of the image processing apparatus according to the present embodiment.
- the image processing apparatus shown in FIG. 4 is different from the first embodiment shown in FIG. 1 in that it further includes a camera position / posture estimation unit 21 and a first reference point candidate projection unit 22.
- the user is allowed to input a plurality of three-dimensional coordinates constituting a candidate for the rotation axis of the camera in the image including the target object (conversion target image) as the first reference point candidate.
- the camera position / orientation estimation means 21 estimates the position and orientation of the camera that captured the conversion target image for each of the conversion target images, and outputs the estimated position and orientation to the first reference point candidate projection means 22.
- For estimating the camera position and orientation, various methods can be used, such as a method using a predetermined marker and a method using correspondences between known three-dimensional coordinates and the image coordinates at which those coordinates are observed.
- The first reference point candidate projecting means 22 projects the first reference point candidate (or the line segment connecting the candidate points) onto the conversion target image based on the input first reference point candidate information and the camera position and orientation of that image, and outputs the result to the image display means 12 as a projected first reference point candidate.
- the projected first reference point candidate can also be said to be the first reference point candidate represented by the coordinate information in the conversion target image.
- When the first reference point is determined in one conversion target image, the projected first reference point candidate generated by the first reference point candidate projection unit 22 for another conversion target image based on the same information is determined as the first reference point in that other image.
- At that time, the projected first reference point candidate may be displayed temporarily so that the user can approve or reject it, or adjust it individually.
- The operation for adjusting a candidate individually is the same as in the first embodiment. Whether an adjustment to a projected first reference point candidate is reflected in the other conversion target images can be controlled by, for example, letting the user specify this explicitly or by switching to an individual adjustment page on the operation screen.
- the first reference point candidate projection unit 22 appropriately outputs information on the first reference point in each determined conversion target image to the geometric conversion method determination unit 14.
- Alternatively, the first reference point candidate projecting means 22 may output information on the projected first reference point candidates of each conversion target image to the first reference point accepting means 11, and the first reference point accepting means 11 may output the first reference point of each conversion target image based on that information.
- the image display means 12 displays an image in which the projection first reference point candidates are superimposed on the conversion target image.
- the camera position / orientation estimation means 21 and the first reference point candidate projection means 22 are realized by, for example, hardware designed to perform specific arithmetic processing or the like, or a CPU that operates according to a program.
- FIG. 5 is a flowchart showing an example of the operation of the image processing apparatus according to the present embodiment.
- the camera position / orientation estimation means 21 estimates the camera position / orientation of the conversion target image (step S21).
- the camera position and orientation estimation means 21 estimates the position and orientation of the camera for each input conversion target image.
- For example, the camera position/orientation estimation means 21 may estimate, from the captured image information, the three-dimensional coordinates (X, Y, Z) in the world coordinate system as the camera position, and a 3 × 3 matrix representing the rotation with respect to the world coordinate system as the camera attitude.
- Next, the first reference point accepting means 11 inputs, as information on the first reference point candidate, a plurality of three-dimensional coordinates constituting a candidate for the rotation axis of the camera, for at least one image (step S22).
- For example, the first reference point receiving means 11 may display an operation screen including input fields for as many first reference point candidates as the number of corresponding points required by the geometric transformation formula to be used, and have the user input the first reference point candidate information.
- the first reference point receiving means 11 may obtain the value by reading a preset initial value. Then, the input information on the first reference point candidate is output to the first reference point candidate projection means 22.
- the first reference point candidate projecting unit 22 projects the input first reference point candidate on the image based on the camera position and orientation (step S23).
- the image display means 12 displays the projected first reference point candidate and the image in an overlapping manner (step S24).
- the image display unit 12 displays an operation screen for adjusting the position or the like of the projected first reference point candidate on the conversion target image.
- the image display means 12 may display an operation screen including a rotation axis (line segment) projected on the image by connecting a plurality of projected first reference point candidates.
- the user confirms the position of the projected first reference point candidate displayed in step S23 in the conversion target image, and inputs the first reference point determination signal if satisfied.
- If not satisfied, the user adjusts the positions of the projected first reference point candidates so that the length, inclination, and position of the line segment connecting them are appropriate as the rotation axis of the camera.
- Through this adjustment, the information on the first reference point candidate is changed.
- the user may directly change the value (three-dimensional coordinates) input as information on the first reference point candidate.
- the first reference point determination signal is input to the first reference point receiving unit 11 and the first reference point candidate projecting unit 22.
- the first reference point receiving means 11 or the first reference point candidate projecting means 22 determines the projected first reference point candidate as the first reference point (step S25).
- For example, the first reference point accepting means 11 or the first reference point candidate projecting means 22 may determine as the first reference point the projected first reference point candidate targeted by the first reference point determination signal (that is, the currently displayed projected first reference point candidate, or more specifically the projected first reference point candidates constituting the displayed line segment).
- When the first reference point is determined from the projected first reference point candidates for one conversion target image, the three-dimensional coordinates that are the projection sources of those candidates are also determined. Therefore, based on those three-dimensional coordinates and the camera position and orientation of each image, the first reference point can be calculated for the other conversion target images.
- the second reference point receiving means 13 inputs the second reference point (step S26).
- the geometric transformation method determination means 14 determines a geometric transformation method from the first reference point and the second reference point (step S27).
- the geometric transformation means 15 performs geometric transformation on the image and outputs a transformed image (step S28).
- the operations in steps S26 to S28 may be the same as those in steps S15 to S17 in the first embodiment.
- Steps S21 to S28 may be repeated for each of the conversion target images.
- In steps S22 to S24, if any one of the images in the conversion target image group has already been processed, the processing for the other conversion target images can be omitted. This is because the first reference point in each remaining conversion target image can be obtained in step S25 without requiring the user to input the first reference point candidate again; more specifically, once the first reference point is determined for one conversion target image, the first reference point for each image is uniquely determined based on the camera position and orientation of that image.
- The camera position/orientation estimation means 21 estimates the camera position and orientation of each conversion target image from the image itself.
- For example, the camera position/orientation estimation means 21 estimates the camera position t and orientation R from the appearance of a marker registered in advance, such as one printed on paper.
- Alternatively, the camera position/orientation estimation means 21 estimates the camera position t and orientation R from a plurality of correspondences between known three-dimensional coordinates and the image coordinates at which those coordinates are observed.
- As the camera position t, for example, the three-dimensional coordinates (X, Y, Z) in the world coordinate system are obtained.
- As the camera attitude R, the camera position/orientation estimation means 21 obtains, for example, a 3 × 3 matrix representing the rotation with respect to the world coordinate system.
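- As a sketch of the second estimation method mentioned above (correspondences between known three-dimensional coordinates and their observed image coordinates), OpenCV's solvePnP can recover R and t when the camera internal parameter matrix K is known; the function and variable names below are illustrative, not taken from the patent.

```python
import numpy as np
import cv2

def estimate_pose(points_3d, points_2d, K):
    """Estimate the camera rotation R (3x3) and translation t from known 3D-2D correspondences.

    points_3d: (N, 3) world coordinates, points_2d: (N, 2) observed image coordinates,
    K: 3x3 internal parameter matrix; N >= 4 non-degenerate points assumed.
    """
    dist = np.zeros(5)  # lens distortion is ignored in this sketch
    ok, rvec, tvec = cv2.solvePnP(np.float32(points_3d), np.float32(points_2d), K, dist)
    if not ok:
        raise RuntimeError("camera pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)      # world-to-camera rotation matrix
    t = tvec.reshape(3)             # world-to-camera translation
    camera_position = -R.T @ t      # camera position in world coordinates, if needed
    return R, t, camera_position
```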
- When represented using roll, pitch, and yaw, the camera attitude R is expressed by equation (5); when represented by a quaternion, it is expressed by equation (6).
- Here, roll denotes the rotation around the Z axis, pitch the rotation around the new Y axis, and yaw the rotation around the new X axis.
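- Equations (5) and (6) are given as figures in the patent and are not reproduced here. As an assumption, the roll-pitch-yaw description above is taken to correspond to the standard Z-Y-X composition of elementary rotations, sketched below; the exact convention used in the patent may differ.

```python
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Rotation matrix from roll (about Z), pitch (about the new Y), yaw (about the new X), in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cy, -sy], [0.0, sy, cy]])
    return Rz @ Ry @ Rx  # Z-Y-X composition assumed
```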
- the first reference point accepting means 11 accepts the first reference point candidate as an input from the user and outputs it to the first reference point candidate projecting means 22.
- the user inputs the first reference point candidate using a mouse or a keyboard while viewing the image displayed on the image display unit 12.
- For example, the user may input the three-dimensional coordinates (0, 0, 0) as the initial value of the first point (upper-end reference point) of the first reference point candidate and the three-dimensional coordinates (0, 1, 0) as the initial value of the second point (lower-end reference point).
- The first reference point candidate projecting means 22 projects the input first reference point candidate onto the conversion target image based on the input three-dimensional coordinate values of the candidate and the estimated camera position and orientation for that image. Note that the initial value of the first reference point candidate need not be a value input by the user; it may be a preset value.
- The projected first reference point candidate is represented by equation (7), where t is the camera position, R is the attitude, X1 is the three-dimensional coordinate of the first point of the first reference point candidate, and X2 is that of the second point.
- Here, K is the camera's internal parameter matrix. K may be determined in advance using, for example, a method that estimates the camera's internal parameters using projective transformation (for example, the method of Zhang et al.). Alternatively, K may be obtained at the time of shooting using a method that estimates the camera's internal parameters from three or more images (for example, the method of Pollefeys et al.).
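- A sketch of the projection step corresponding to equation (7), whose exact form is a figure in the patent: under the standard pinhole model a three-dimensional candidate point X is mapped to image coordinates through K, R, and t and then dehomogenized. The form x ~ K(R·X + t) is assumed here; if t denotes the camera position (center) rather than the world-to-camera translation, K·R·(X − t) would be used instead.

```python
import numpy as np

def project_point(X, K, R, t):
    """Project a 3D first reference point candidate X onto the conversion target image.

    Assumes the pinhole form x ~ K (R X + t); equation (7) itself is not reproduced here.
    """
    X = np.asarray(X, dtype=float).reshape(3)
    p = K @ (R @ X + np.asarray(t, dtype=float).reshape(3))
    return p[:2] / p[2]  # dehomogenized image coordinates (u, v)

# Example: projecting both points of the rotation-axis candidate.
# u1 = project_point(X1, K, R, t)
# u2 = project_point(X2, K, R, t)
```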
- the camera position / posture estimation means 21 outputs the camera internal parameters to the first reference point candidate projection means 22 in addition to the camera position / posture.
- The user compares the displayed projected first reference point candidates with the object of interest, adjusts the candidates so that the length, inclination, and position of the line segment connecting them are appropriate as the rotation axis of the camera, and, if satisfied, inputs the first reference point determination signal.
- the first reference point receiving means 11 does not operate even when a first reference point determination signal is input.
- Instead, the first reference point candidate projection means 22 determines the currently displayed projected first reference point candidate as the first reference point in the conversion target image and outputs it to the geometric conversion method determining means 14. Furthermore, for the other conversion target images, the first reference point candidate projecting means 22 obtains the first reference point of each image based on the projection-source first reference point candidate information at this time and the camera position and orientation of each image. The subsequent operations are the same as in the first embodiment.
- In the present embodiment, it is not necessary for the user to input information on the first reference point candidate for each image or to determine it as the first reference point for each image. The reason is that once the first reference point is determined for one image, the first reference point for every image is uniquely determined based on the camera position and orientation of each image.
- FIG. 6 is a block diagram illustrating a configuration example of the image processing apparatus according to the third embodiment.
- the image processing apparatus shown in FIG. 6 is different from the first embodiment shown in FIG. 1 in that an image feature point detection unit 31 is further provided.
- the first reference point candidate is a plurality of image coordinates that constitute a candidate for the rotation axis of the camera in each of a plurality of images (conversion target images) including the object of interest.
- In the present embodiment, image feature points common to the conversion target images are detected, and the first reference point for each image is determined using them.
- The image feature point detection means 31 detects image feature points in each of the conversion target images. It then matches the detected image feature points between the conversion target images and detects image feature points that are observed, in a plurality of images, at the image coordinates onto which the same three-dimensional point is projected. The image feature point detection unit 31 also causes the image display unit 12 to display the detected image feature points. For detecting image feature points, for example, the Harris corner detection method, which judges whether a pixel is a corner based on the gradient of the surrounding pixel values, may be used, and the portions judged to be corners may be detected as image feature points.
- Alternatively, the image feature point detection means 31 may detect image feature points using various other methods, such as SIFT, an image feature point detection and feature description method that is robust to illumination changes, rotation, and scaling.
- Various methods, such as a method using the normalized cross-correlation of pixel values and the KLT method, can be used for matching the image feature points.
- a method using normalized cross-correlation is a method in which a correlation value is calculated and compared for an image obtained by subtracting an average pixel value and dividing by a standard deviation within a search range.
- the KLT method is a method of searching for corresponding points on the assumption that the luminance gradient in a minute time is unchanged.
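- A sketch of one possible realization of the image feature point detection means 31 using OpenCV: Harris-style corners are detected with goodFeaturesToTrack and tracked into the next conversion target image with the KLT tracker (calcOpticalFlowPyrLK). SIFT or normalized cross-correlation matching could be substituted; this is an illustration, not the implementation prescribed by the patent.

```python
import numpy as np
import cv2

def match_feature_points(img_a, img_b, max_corners=200):
    """Detect corners in img_a and track them into img_b (8-bit grayscale images assumed)."""
    corners = cv2.goodFeaturesToTrack(
        img_a, maxCorners=max_corners, qualityLevel=0.01, minDistance=7,
        useHarrisDetector=True)                                  # Harris corner response
    if corners is None:
        return np.empty((0, 2)), np.empty((0, 2))
    tracked, status, _err = cv2.calcOpticalFlowPyrLK(img_a, img_b, corners, None)  # KLT tracking
    ok = status.ravel() == 1
    return corners.reshape(-1, 2)[ok], tracked.reshape(-1, 2)[ok]

# Feature points that survive tracking across every conversion target image are candidates for
# the common image feature points, from which the user selects two as first reference point candidates.
```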
- the image display means 12 displays an image in which image feature points are superimposed on the conversion target image.
- the image display unit 12 displays an image in which the first reference point candidate selected from the image feature points is superimposed on the conversion target image.
- the image feature point detection means 31 is realized by, for example, hardware designed to perform specific arithmetic processing or the like, or a CPU that operates according to a program.
- FIG. 7 is a flowchart showing an example of the operation of the image processing apparatus according to the present embodiment.
- the image feature point detection unit 31 detects, for each conversion target image, image feature points that are common to a plurality of images including the conversion target image (step S31).
- the image display means 12 displays the image and the image feature point detected from the image in an overlapping manner for each image (step S32).
- Next, the user compares an image displayed by the image display unit 12 with the image feature points of that image and performs an operation of selecting two or more of the displayed image feature points as first reference point candidates (step S33).
- the first reference point accepting unit 11 obtains information on the image feature point selected as the first reference point candidate in response to a user operation.
- the image display unit 12 displays the first reference point candidates in the image in an overlapping manner for each image in accordance with the user's selection operation (step S34).
- At this time, the image display means 12 may display the first reference point candidates for every image. In other words, if the selected image feature point is common to a plurality of images, the first reference point accepting unit 11 automatically selects the corresponding image feature point as a first reference point candidate in the other images that contain it.
- If satisfied, the user inputs a first reference point determination signal, and the candidate is determined as the first reference point (step S35).
- The method for determining the first reference point using the first reference point determination signal is the same as in the first embodiment. For example, when the first reference point accepting unit 11 accepts a first reference point determination signal input in response to a user operation, it determines the image feature points displayed as first reference point candidates at that time as the first reference point.
- the first reference point accepting unit 11 may repeat the processes of steps S32 to S35 for the undetermined image.
- the second reference point receiving means 13 inputs the second reference point (step S36).
- the geometric transformation method determination means 14 determines a geometric transformation method from the first reference point and the second reference point (step S37).
- the geometric transformation means 15 performs geometric transformation on the image and outputs a transformed image (step S38).
- the operations in steps S36 to S38 may be the same as those in steps S15 to S17 in the first embodiment.
- the image feature point detection means 31 detects and matches image feature points in each of the conversion target images, and detects image feature points that can be matched between a plurality of images.
- For example, the image feature point detection unit 31 detects image feature points that are common to the images in a certain section of a group of continuously captured images.
- Suppose that there are n image feature points per image common to all images and the number of images is N; then n × N image feature points are detected in total.
- the user selects two arbitrary points from the n points while comparing the displayed image feature points with the image, and inputs a first reference point determination signal if satisfied. These two points are matched in all images. Therefore, if two points of an image are selected, the first reference point (2 ⁇ N points) of all images can be automatically determined.
- When the first reference point determination signal is input, the first reference point reception unit 11 outputs the determined 2 × N first reference points to the geometric transformation method determination unit 14, associated with their corresponding images.
- the subsequent operations are the same as those in the first embodiment.
- Alternatively, the first reference point candidate may be determined automatically within each range in which the common image feature points are detected.
- For example, the image feature point detection unit 31 may perform matching over the entire image sequence until the image feature points are interrupted. When matching is interrupted, it repeats the matching operation starting from that image until the next interruption, so that, together with the common image feature points, the sections in which they are continuous and the breaks between them are detected. The first reference point receiving unit 11 then calculates geometric transformation parameters for each section in which the feature points are continuous, using the first reference point candidates that the user selected from the common image feature points.
- Alternatively, for each continuous section, the geometric transformation parameters may be calculated by automatically selecting points close to the first reference point candidates already selected. With the latter method, once the user makes a selection, the first reference point can be determined automatically for all other conversion target images.
- In the present embodiment, it is not necessary for the user to determine the first reference point for each image, nor is it necessary to estimate the camera position and orientation. The reason is that image feature points common to a plurality of images are detected and the first reference point is determined using them.
- FIG. 8 is a block diagram showing a case in which the image processing apparatus according to the present invention is implemented in an information processing system.
- the information processing system shown in FIG. 8 is a general information processing system including a processor 400, a program memory 401, and a storage medium 402.
- the storage medium 402 may be a storage area composed of separate storage media, or may be a storage area composed of the same storage medium.
- As the storage medium 402, for example, a RAM or a magnetic storage medium such as a hard disk can be used.
- The program memory 401 stores a program that causes the processor 400 to perform the processing of each of the above-described units, namely the first reference point receiving means 11, the second reference point receiving means 13, the geometric transformation method determining means 14, and the geometric transformation means 15, and the processor 400 operates in accordance with this program.
- the processor 400 may be a processor that operates according to a program such as a CPU, for example.
- As described above, the present invention can be realized by a computer program. Note that it is not necessary for all of the means that can be operated by a program (for example, the first reference point receiving means 11, the second reference point receiving means 13, the geometric transformation method determining means 14, and the geometric transformation means 15) to be operated by the program; a part of them may be configured by hardware.
- FIG. 9 is a block diagram showing an outline of the present invention.
- the image processing apparatus shown in FIG. 9 includes an image display unit 501, a first reference point reception determination unit 502, a second reference point reception determination unit 503, and a geometric conversion unit 504.
- the image display unit 501 displays at least one conversion target image.
- the image display unit 501 is illustrated as the image display unit 12, for example.
- The first reference point reception determination unit 502 receives, in response to a user operation, information on a first reference point candidate that is a candidate for the first reference point serving as a reference for the input of geometric transformation, receives a determination signal for the first reference point candidate displayed on the conversion target image based on that information, and determines the first reference point based on the information on the first reference point candidate targeted by the received determination signal.
- For example, the first reference point reception determination unit 502 may receive, in response to a user operation, information on a first reference point candidate that is a candidate for the first reference point serving as a reference for the input of geometric transformation, display the first reference point candidate on the conversion target image based on that information, then receive, in response to a user operation, a determination signal for the displayed first reference point candidate, and determine the first reference point based on the information on the first reference point candidate targeted by the signal.
- the 1st reference point reception determination means 502 is shown as the 1st reference point reception means 11 in the said embodiment, for example.
- the first reference point reception determination unit 502 is, for example, a camera position / posture estimation unit 21, a first reference point candidate projection unit 22, and an image feature point detection unit, which are other units necessary for determining the first reference point. 31 may be included, or a control unit that controls them may be included.
- the second reference point reception determination unit 503 receives information on the second reference point, which serves as the output reference of the geometric transformation, and determines the second reference point.
- the second reference point reception determination unit 503 corresponds, for example, to the second reference point receiving means 13 in the embodiments described above.
- the geometric conversion unit 504 geometrically converts the conversion target image and outputs the converted image.
- the geometric conversion unit 504 corresponds, for example, to the geometric transformation method determination means 14 and the geometric transformation means 15. A minimal sketch of this overall flow is given below.
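To make the flow through units 501 to 504 concrete, the following Python sketch takes one first reference point per conversion target image and a single second reference point as the output reference, and shifts each image so that its first reference point lands on the second reference point. The translation-only warp and the function name are simplifying assumptions for illustration; as described elsewhere in this document, the actual geometric transformation may instead be affine or projective.

```python
import numpy as np
import cv2

def convert_images(images, first_reference_points, second_reference_point):
    """Shift each conversion target image so that its first reference point
    (input reference) lands on the shared second reference point (output
    reference). A pure translation is used here for simplicity."""
    out = []
    tu, tv = second_reference_point
    for img, (u1, v1) in zip(images, first_reference_points):
        # 2x3 translation matrix; pixels shifted out of view are dropped and
        # uncovered pixels are left black.
        m = np.float32([[1, 0, tu - u1],
                        [0, 1, tv - v1]])
        h, w = img.shape[:2]
        out.append(cv2.warpAffine(img, m, (w, h)))
    return out
```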
- An image processing apparatus that generates an image for three-dimensional display of a target object from a plurality of conversion target images including the target object, the apparatus comprising: image display means for displaying at least one conversion target image; first reference point reception determination means for receiving, in response to a user operation, information on a first reference point candidate, which is a candidate for a first reference point serving as the input reference of geometric transformation, and a determination signal for the first reference point candidate displayed on the conversion target image based on the information on the first reference point candidate, and for determining a first reference point based on the information of the first reference point candidate targeted by the received determination signal; second reference point reception determination means for receiving information on a second reference point serving as the output reference of the geometric transformation and determining a second reference point; and geometric transformation means for geometrically transforming the conversion target image based on the first reference point determined by the first reference point reception determination means and the second reference point determined by the second reference point reception determination means, and for outputting the converted image.
- the first reference point reception determination means receives, for each conversion target image, information on the first reference point candidate and a determination signal for the first reference point candidate displayed on that conversion target image based on the candidate information, and, when the determination signal is received, determines the first reference point candidate targeted by the determination signal as the first reference point in that conversion target image.
- the image processing apparatus further comprises camera position and orientation estimation means for estimating, for each conversion target image, the camera position and orientation at the time of shooting, and first reference point candidate projection means for projecting a first reference point candidate expressed in three-dimensional coordinates onto each conversion target image based on the camera position and orientation estimated by the camera position and orientation estimation means, thereby generating a projected first reference point candidate in each conversion target image; the first reference point reception determination means receives information on the first reference point candidate expressed in three-dimensional coordinates and a determination signal for the projected first reference point candidate displayed on at least one conversion target image based on that candidate information, and, when the determination signal is received, determines each projected first reference point candidate in each conversion target image generated from the first reference point candidate that is the projection source of the projected first reference point candidate targeted by the determination signal as the first reference point in that conversion target image; the image processing apparatus according to any one of Supplementary notes 1 to 3.
- the image processing apparatus further comprises image feature point detection means for detecting image feature points in each conversion target image, comparing the detected image feature points between the conversion target images, and detecting image feature points common to the plurality of conversion target images; the first reference point reception determination means receives information designating an image feature point displayed on the conversion target image as a first reference point candidate and a determination signal for the first reference point candidate that is the designated image feature point, and, when the determination signal is received, determines the designated image feature point targeted by the determination signal as the first reference point in that conversion target image and, based on the correspondence between images of the image feature point targeted by the determination signal, determines the image feature point of each conversion target image identified as the same feature point as the first reference point in each conversion target image.
- the second reference point reception determination means receives, as the information of the second reference point, a plurality of image coordinates constituting the rotation axis of the camera in the converted image, and the first reference point reception determination means receives, as the information of the first reference point candidate, a plurality of image coordinates or three-dimensional coordinates constituting the rotation axis of the camera in the conversion target image; the image processing apparatus according to any one of Supplementary notes 1 to 6.
- an image processing method in which at least one conversion target image is displayed, information on a first reference point candidate serving as the input reference of geometric transformation is received in response to a user operation, the first reference point candidate is displayed on the conversion target image based on that information, a determination signal for the first reference point candidate displayed on the conversion target image is received in response to a user operation, the first reference point is determined based on the information of the first reference point candidate targeted by the received determination signal, information on the second reference point serving as the output reference of the geometric transformation is received and the second reference point is determined, and, for each conversion target image, the conversion target image is geometrically transformed based on the determined first reference point and second reference point and the converted image is output.
- an image processing program that causes a computer to execute: a process of receiving information on a first reference point candidate, input in response to a user operation, and displaying the first reference point candidate on the conversion target image; a process of determining the first reference point, when a determination signal for the first reference point candidate displayed on the conversion target image is received in response to a user operation, based on the information of the first reference point candidate targeted by the received determination signal; a process of determining the second reference point when information on the second reference point serving as the output reference of the geometric transformation is received; and a process of geometrically transforming the conversion target image based on the first reference point and the second reference point.
- the image processing program further causes the computer to execute: a process of detecting image feature points in each conversion target image and comparing the detected image feature points between the conversion target images to detect image feature points common to the plurality of conversion target images; a process of displaying the detected image feature points on at least one conversion target image; a process of receiving information designating an image feature point displayed on the conversion target image as a first reference point candidate and displaying the designated image feature point on at least one conversion target image as the first reference point candidate; and a process of, when a determination signal for the first reference point candidate that is the designated image feature point is received, determining the designated image feature point targeted by the determination signal as the first reference point in that conversion target image and, based on the correspondence between images of the image feature point, determining the image feature point of each conversion target image identified as the same feature point as the first reference point in each conversion target image; the image processing program according to Supplementary note 13.
- the present invention can be suitably applied to an application for generating an image for three-dimensional display of an object of interest.
- Reference signs: 11 First reference point receiving means; 12 Image display means; 13 Second reference point receiving means; 14 Geometric transformation method determination means; 15 Geometric transformation means; 21 Camera position and orientation estimation means; 22 First reference point candidate projection means; 31 Image feature point detection means; 400 Processor; 401 Program memory; 402 Storage medium; 501 Image display means; 502 First reference point reception determination means; 503 Second reference point reception determination means; 504 Geometric transformation means
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram showing a configuration example of an image processing apparatus according to a first embodiment of the present invention. The image processing apparatus shown in FIG. 1 includes first reference point receiving means 11, image display means 12, second reference point receiving means 13, geometric transformation method determination means 14, and geometric transformation means 15.
vi′ = b*ui + a*vi + c … Equation (1)
vi′ = d*ui + e*vi + f … Equation (3)
vi′ = (a4*ui + a5*vi + a6) / (a7*ui + a8*vi + 1) … Equation (4)
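The parameterization in the equations above is only partially legible here, so the following Python sketch assumes the common 6-parameter affine form u′ = a·u + b·v + c, v′ = d·u + e·v + f and shows how the geometric transformation method determination means could estimate such parameters from corresponding reference points and how the geometric transformation means could apply them; the function names are illustrative assumptions, not terms from this document.

```python
import numpy as np
import cv2

def fit_affine(src_pts, dst_pts):
    """Least-squares fit of the 6-parameter affine model
        u' = a*u + b*v + c,   v' = d*u + e*v + f
    from corresponding points (at least three pairs)."""
    src = np.asarray(src_pts, dtype=np.float64)   # shape (N, 2): (u, v)
    dst = np.asarray(dst_pts, dtype=np.float64)   # shape (N, 2): (u', v')
    A = np.hstack([src, np.ones((len(src), 1))])  # rows [u, v, 1]
    params_u, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)  # a, b, c
    params_v, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)  # d, e, f
    return np.vstack([params_u, params_v])        # 2x3 parameter matrix

def warp(image, params_2x3):
    """Apply the fitted transformation to a conversion target image."""
    h, w = image.shape[:2]
    return cv2.warpAffine(image, params_2x3.astype(np.float32), (w, h))
```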
Next, a second embodiment of the present invention will be described with reference to the drawings. FIG. 4 is a block diagram showing a configuration example of the image processing apparatus of this embodiment. Compared with the first embodiment shown in FIG. 1, the image processing apparatus shown in FIG. 4 differs in that it further includes camera position and orientation estimation means 21 and first reference point candidate projection means 22.
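As an illustration of the roles of camera position and orientation estimation means 21 and first reference point candidate projection means 22, the following sketch projects a first reference point candidate given in three-dimensional coordinates onto one conversion target image using an estimated camera pose and a pinhole camera model. The world-to-camera pose convention and the intrinsic matrix are assumptions made here for illustration, not details taken from this document.

```python
import numpy as np

def project_candidate(point_3d, rotation, translation, intrinsics):
    """Project a first reference point candidate given in 3D world
    coordinates into pixel coordinates of one conversion target image,
    using that image's estimated camera pose (world-to-camera R, t) and a
    pinhole intrinsic matrix K. Returns (u, v) or None."""
    p_cam = rotation @ np.asarray(point_3d, dtype=np.float64) + translation
    if p_cam[2] <= 0:
        return None  # behind the camera: no projected candidate in this image
    u, v, w = intrinsics @ p_cam
    return u / w, v / w
```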
Next, a third embodiment of the present invention will be described with reference to the drawings. FIG. 6 is a block diagram showing a configuration example of the image processing apparatus of the third embodiment. Compared with the first embodiment shown in FIG. 1, the image processing apparatus shown in FIG. 6 differs in that it further includes image feature point detection means 31.
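A minimal sketch of how image feature point detection means 31 might detect feature points common to a plurality of conversion target images: it keeps only those features of the first image whose ORB descriptor finds a match in every other image. The library choice, the matching threshold, and the function name are assumptions for illustration.

```python
import cv2

def detect_common_feature_points(images, max_distance=40):
    """Return keypoints of images[0] whose ORB descriptor finds a match in
    every other conversion target image -- a simple notion of an image
    feature point common to all images."""
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp0, des0 = orb.detectAndCompute(images[0], None)

    common = set(range(len(kp0)))
    for img in images[1:]:
        _, des = orb.detectAndCompute(img, None)
        matches = matcher.match(des0, des)
        common &= {m.queryIdx for m in matches if m.distance < max_distance}
    return [kp0[i] for i in sorted(common)]
```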
12 Image display means
13 Second reference point receiving means
14 Geometric transformation method determination means
15 Geometric transformation means
21 Camera position and orientation estimation means
22 First reference point candidate projection means
31 Image feature point detection means
400 Processor
401 Program memory
402 Storage medium
501 Image display means
502 First reference point reception determination means
503 Second reference point reception determination means
504 Geometric transformation means
Claims (10)
- An image processing apparatus that generates an image for three-dimensional display of a target object from a plurality of conversion target images including the target object, comprising: image display means for displaying at least one conversion target image; first reference point reception determination means for receiving, in response to a user operation, information on a first reference point candidate, which is a candidate for a first reference point serving as an input reference of geometric transformation, and a determination signal for the first reference point candidate displayed on the conversion target image based on the information on the first reference point candidate, and for determining a first reference point based on the information of the first reference point candidate targeted by the received determination signal; second reference point reception determination means for receiving information on a second reference point serving as an output reference of the geometric transformation and determining a second reference point; and geometric transformation means for geometrically transforming the conversion target image based on the first reference point determined by the first reference point reception determination means and the second reference point determined by the second reference point reception determination means, and for outputting a converted image.
- The image processing apparatus according to claim 1, further comprising geometric transformation method determination means for calculating, for each conversion target image, parameters of a predetermined geometric transformation formula based on the first reference point and the second reference point, wherein the geometric transformation means geometrically transforms the conversion target image using the geometric transformation formula with the parameters calculated by the geometric transformation method determination means.
- The image processing apparatus according to claim 1 or 2, wherein the first reference point reception determination means receives, for each conversion target image, information on a first reference point candidate and a determination signal for the first reference point candidate displayed on that conversion target image based on the information on the first reference point candidate, and, when the determination signal is received, determines the first reference point candidate targeted by the determination signal as the first reference point in that conversion target image.
- The image processing apparatus according to any one of claims 1 to 3, further comprising: camera position and orientation estimation means for estimating, for each conversion target image, the camera position and orientation at the time of shooting; and first reference point candidate projection means for projecting a first reference point candidate expressed in three-dimensional coordinates onto each conversion target image based on the camera position and orientation estimated by the camera position and orientation estimation means, thereby generating a projected first reference point candidate in each conversion target image, wherein the first reference point reception determination means receives information on the first reference point candidate expressed in three-dimensional coordinates and a determination signal for the projected first reference point candidate displayed on at least one conversion target image based on the information on the first reference point candidate, and, when the determination signal is received, determines each of the projected first reference point candidates in the respective conversion target images generated based on the information of the first reference point candidate that is the projection source of the projected first reference point candidate targeted by the determination signal as the first reference point in the respective conversion target image.
- The image processing apparatus according to any one of claims 1 to 4, further comprising image feature point detection means for detecting image feature points in each conversion target image, comparing the detected image feature points between the conversion target images, and detecting image feature points common to the plurality of conversion target images, wherein the first reference point reception determination means receives information designating an image feature point displayed on the conversion target image as a first reference point candidate and a determination signal for the first reference point candidate that is the designated image feature point, and, when the determination signal is received, determines the designated image feature point targeted by the determination signal as the first reference point in that conversion target image and, based on the correspondence between images of the image feature point targeted by the determination signal, determines the image feature point of each conversion target image identified as the same feature point as the first reference point in each conversion target image.
- The image processing apparatus according to any one of claims 1 to 5, wherein the second reference point reception determination means receives the information on the second reference point as information common to all converted images.
- The image processing apparatus according to any one of claims 1 to 6, wherein the second reference point reception determination means receives, as the information on the second reference point, a plurality of image coordinates constituting the rotation axis of the camera in the converted image, and the first reference point reception determination means receives, as the information on the first reference point candidate, a plurality of image coordinates or three-dimensional coordinates constituting the rotation axis of the camera in the conversion target image.
- The image processing apparatus according to any one of claims 1 to 7, wherein the geometric transformation means performs interpolation to compensate for missing pixels.
- An image processing method for generating a three-dimensional image by applying geometric transformation to each of conversion target images, which are a plurality of images including a target object, the method comprising: displaying at least one conversion target image via image display means; receiving, in response to a user operation, information on a first reference point candidate, which is a candidate for a first reference point serving as an input reference of geometric transformation; displaying the first reference point candidate on the conversion target image based on the received information on the first reference point candidate; receiving, in response to a user operation, a determination signal for the first reference point candidate displayed on the conversion target image; determining a first reference point based on the information of the first reference point candidate targeted by the received determination signal; receiving information on a second reference point serving as an output reference of the geometric transformation and determining a second reference point; and, for each conversion target image, geometrically transforming the conversion target image based on the determined first reference point and second reference point, and outputting a converted image.
- An image processing program for generating a three-dimensional image by applying geometric transformation to each of conversion target images, which are a plurality of images including a target object, the program causing a computer to execute: a process of displaying at least one conversion target image via image display means; a process of receiving information on a first reference point candidate, input in response to a user operation, which is a candidate for a first reference point serving as an input reference of geometric transformation, and displaying the first reference point candidate on the conversion target image; a process of determining, when a determination signal for the first reference point candidate displayed on the conversion target image, input in response to a user operation, is received, a first reference point based on the information of the first reference point candidate targeted by the received determination signal; a process of determining a second reference point when information on a second reference point serving as an output reference of the geometric transformation is received; and a process of geometrically transforming the conversion target image based on the first reference point and the second reference point.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2012517123A JP5825256B2 (ja) | 2010-05-26 | 2011-05-18 | 画像処理装置、画像処理方法および画像処理用プログラム |
| US13/699,216 US9053522B2 (en) | 2010-05-26 | 2011-05-18 | Image processing device, image processing method, and image processing program |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2010-121018 | 2010-05-26 | ||
| JP2010121018 | 2010-05-26 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2011148595A1 true WO2011148595A1 (ja) | 2011-12-01 |
Family
ID=45003596
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2011/002765 Ceased WO2011148595A1 (ja) | 2010-05-26 | 2011-05-18 | 画像処理装置、画像処理方法および画像処理用プログラム |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US9053522B2 (ja) |
| JP (1) | JP5825256B2 (ja) |
| WO (1) | WO2011148595A1 (ja) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2013105205A1 (ja) * | 2012-01-10 | 2013-07-18 | 日本電気株式会社 | 画像処理装置、画像処理方法および画像処理用プログラム |
| US20150244928A1 (en) * | 2012-10-29 | 2015-08-27 | Sk Telecom Co., Ltd. | Camera control method, and camera control device for same |
| JP2016162079A (ja) * | 2015-02-27 | 2016-09-05 | 富士通株式会社 | 表示制御方法、表示制御プログラム、及び情報処理装置 |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102190904B1 (ko) * | 2013-09-25 | 2020-12-14 | 삼성전자 주식회사 | 윈도우 제어 방법 및 이를 지원하는 전자장치 |
| KR101668802B1 (ko) * | 2014-09-03 | 2016-11-09 | 신동윤 | 원거리 식별 이미지 생성 장치 및 그 동작 방법 |
| CN105631454B (zh) * | 2014-10-27 | 2019-02-12 | 浙江大华技术股份有限公司 | 一种球机定位方法、设备及球机 |
| US10157439B2 (en) * | 2015-07-20 | 2018-12-18 | Qualcomm Incorporated | Systems and methods for selecting an image transform |
| CN108269282B (zh) * | 2016-12-30 | 2021-10-22 | 技嘉科技股份有限公司 | 对位装置和对位方法 |
| CN107610042B (zh) * | 2017-08-23 | 2019-06-07 | 维沃移动通信有限公司 | 一种图像美化方法及移动终端 |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10149455A (ja) * | 1996-11-20 | 1998-06-02 | Matsushita Electric Ind Co Ltd | 画像生成表示装置および生成表示画像編集装置 |
| JP2001290585A (ja) * | 2000-01-31 | 2001-10-19 | Canon Inc | 位置情報処理装置及びその方法及びそのプログラム、操作装置及びその方法及びそのプログラム |
| JP2008217243A (ja) * | 2007-03-01 | 2008-09-18 | Mitsubishi Electric Corp | 画像生成装置 |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6912293B1 (en) * | 1998-06-26 | 2005-06-28 | Carl P. Korobkin | Photogrammetry engine for model construction |
| US6081577A (en) * | 1998-07-24 | 2000-06-27 | Wake Forest University | Method and system for creating task-dependent three-dimensional images |
| US6980690B1 (en) * | 2000-01-20 | 2005-12-27 | Canon Kabushiki Kaisha | Image processing apparatus |
| JP4004899B2 (ja) * | 2002-09-02 | 2007-11-07 | ファナック株式会社 | 物品の位置姿勢検出装置及び物品取出し装置 |
| US7228006B2 (en) * | 2002-11-25 | 2007-06-05 | Eastman Kodak Company | Method and system for detecting a geometrically transformed copy of an image |
| JP2004264492A (ja) | 2003-02-28 | 2004-09-24 | Sony Corp | 撮影方法及び撮像装置 |
| JP2005049999A (ja) | 2003-07-30 | 2005-02-24 | Ricoh Co Ltd | 画像入力装置、画像入力方法、この方法を情報処理装置上で実行可能に記述されたプログラム、及びこのプログラムを記憶した記憶媒体 |
| US7409108B2 (en) * | 2003-09-22 | 2008-08-05 | Siemens Medical Solutions Usa, Inc. | Method and system for hybrid rigid registration of 2D/3D medical images |
| JP4508049B2 (ja) | 2005-09-05 | 2010-07-21 | 株式会社日立製作所 | 360°画像撮影装置 |
| US20080253685A1 (en) * | 2007-02-23 | 2008-10-16 | Intellivision Technologies Corporation | Image and video stitching and viewing method and system |
| US8355579B2 (en) * | 2009-05-20 | 2013-01-15 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Automatic extraction of planetary image features |
- 2011
- 2011-05-18 WO PCT/JP2011/002765 patent/WO2011148595A1/ja not_active Ceased
- 2011-05-18 JP JP2012517123A patent/JP5825256B2/ja active Active
- 2011-05-18 US US13/699,216 patent/US9053522B2/en active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10149455A (ja) * | 1996-11-20 | 1998-06-02 | Matsushita Electric Ind Co Ltd | 画像生成表示装置および生成表示画像編集装置 |
| JP2001290585A (ja) * | 2000-01-31 | 2001-10-19 | Canon Inc | 位置情報処理装置及びその方法及びそのプログラム、操作装置及びその方法及びそのプログラム |
| JP2008217243A (ja) * | 2007-03-01 | 2008-09-18 | Mitsubishi Electric Corp | 画像生成装置 |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2013105205A1 (ja) * | 2012-01-10 | 2013-07-18 | 日本電気株式会社 | 画像処理装置、画像処理方法および画像処理用プログラム |
| US20150244928A1 (en) * | 2012-10-29 | 2015-08-27 | Sk Telecom Co., Ltd. | Camera control method, and camera control device for same |
| US9509900B2 (en) * | 2012-10-29 | 2016-11-29 | Sk Telecom Co., Ltd. | Camera control method, and camera control device for same |
| JP2016162079A (ja) * | 2015-02-27 | 2016-09-05 | 富士通株式会社 | 表示制御方法、表示制御プログラム、及び情報処理装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| US20130064430A1 (en) | 2013-03-14 |
| JP5825256B2 (ja) | 2015-12-02 |
| JPWO2011148595A1 (ja) | 2013-07-25 |
| US9053522B2 (en) | 2015-06-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5825256B2 (ja) | 画像処理装置、画像処理方法および画像処理用プログラム | |
| JP4363151B2 (ja) | 撮影装置、その画像処理方法及びプログラム | |
| US9667864B2 (en) | Image conversion apparatus, camera, image conversion method, and storage medium with program stored therein | |
| JP2004318823A (ja) | 情報表示システム、情報処理装置、ポインティング装置および情報表示システムにおけるポインタマーク表示方法 | |
| WO2006004043A1 (ja) | 広視野画像入力方法及び装置 | |
| JP4975679B2 (ja) | ノート型情報処理装置、および、射影変換パラメータ算出方法 | |
| WO2005024723A1 (ja) | 画像合成システム、画像合成方法及びプログラム | |
| JPWO2014069247A1 (ja) | 画像処理装置および画像処理方法、並びにプログラム | |
| WO2018179040A1 (ja) | カメラパラメータ推定装置、方法およびプログラム | |
| JP2014514539A (ja) | 共線変換ワープ関数を用いて第1の画像の少なくとも一部と第2の画像の少なくとも一部を位置合わせする方法 | |
| JP2005123667A (ja) | 複数の画像データからの静止画像データの生成 | |
| WO2019093457A1 (ja) | 情報処理装置、情報処理方法及びプログラム | |
| CN112655194A (zh) | 用于捕获视图的电子装置和方法 | |
| JP2003015218A (ja) | 投影型表示装置 | |
| JP2008217526A (ja) | 画像処理装置、画像処理プログラム及び画像処理方法 | |
| JP2019054369A (ja) | 撮像装置、撮像装置の制御方法及びプログラム | |
| JP4198536B2 (ja) | 物体撮影装置、物体撮影方法及び物体撮影プログラム | |
| CN115412670A (zh) | 信息处理设备及其控制方法和存储介质 | |
| CN106373154B (zh) | 图像处理装置及图像处理方法 | |
| JP5151922B2 (ja) | 画素位置対応関係特定システム、画素位置対応関係特定方法および画素位置対応関係特定プログラム | |
| JPWO2018101135A1 (ja) | 撮影調整情報導出装置、撮影装置、撮影調整情報導出方法、制御プログラム、および記録媒体 | |
| JP2018032991A (ja) | 画像表示装置、画像表示方法及び画像表示用コンピュータプログラム | |
| JPH07160412A (ja) | 指示位置検出方法 | |
| KR20220108683A (ko) | 다중 카메라의 영상 합성 방법 및 다중 카메라의 영상 합성 장치 | |
| JP2017079024A (ja) | 画像処理装置、画像マッチングによる同一部位検出方法および画像処理用プログラム |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11786299; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2012517123; Country of ref document: JP |
| | WWE | Wipo information: entry into national phase | Ref document number: 13699216; Country of ref document: US |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 11786299; Country of ref document: EP; Kind code of ref document: A1 |