
HK1115022A - Image processing apparatus and image processing method - Google Patents


Info

Publication number
HK1115022A
HK1115022A (application number HK08110379.2A)
Authority
HK
Hong Kong
Prior art keywords
image
fading
edge
image processing
photograph
Application number
HK08110379.2A
Other languages
Chinese (zh)
Inventor
Tetsuji Makino (牧野哲司)
Original Assignee
Casio Computer Co., Ltd. (卡西欧计算机株式会社)
Application filed by Casio Computer Co., Ltd.
Publication of HK1115022A

Description

Image processing apparatus, image processing method, and computer readable medium
Technical Field
The present invention relates to an image processing apparatus and an image processing method for extracting a predetermined area (such as a photographic print) from a photographed image.
Background
In recent years, digital cameras have become popular as photographing apparatuses, and the digitization of photographs has advanced.
The digitization of photographs has a number of advantages: photographs can be stored in compressed form, copied easily, and viewed in a variety of ways, such as sequentially or in random order. In view of these advantages, the digitization of conventional photographs obtained using silver halide film is expected to become widespread. Because digital cameras have become popular only recently, many people still have such conventional photographs. Conventional photographs include prints on photographic paper and developed film; hereinafter, they are generically referred to as "photographs".
A photograph may be digitized, for example, by scanning it with a flatbed scanner. As is well known, the original colors of a photograph change (fade) over time due to chemical changes. For this reason, some scanners are equipped with image processing apparatuses that perform fading correction to compensate for the discoloration.
An example of a conventional image processing apparatus that performs fading correction is described in Japanese Patent Application Laid-Open No. 2002-101313. The apparatus described in this document detects the degree of fading of an acquired image and performs fading correction according to the detection result. Because the degree of fading is detected automatically, appropriate fading correction can be performed simply by acquiring an image.
The degree-of-fading detection must be applied only to the image that is the object of the fading correction; otherwise, the fading that has actually occurred in that image cannot be recognized accurately. As is well known, a flatbed scanner scans an object placed directly on the scanning table or fed through a feeder. It is therefore easy to scan only the photograph to which the fading correction is to be applied.
Meanwhile, photographs are usually kept in a photo album or the like (this includes putting a photograph into a holder having at least one transparent side). A photograph fixed in a photo album cannot be placed on a feeder, and it is difficult to position it properly on the scanning table; in general, it is impossible to scan only the target photograph while it remains fixed in the album. For this reason, a photograph is usually taken out of the album before scanning. Consequently, when a scanner is used to scan photographs, the following burdensome work must be performed for each photograph: the photograph is removed from the album, placed on the scanning table, and, after scanning is completed, re-fixed to the album.
The photograph may be damaged when it is removed from the album; for example, when photographs adhere to the transparent sheets covering them, there is a high possibility of damage. It is therefore desirable to be able to scan a photograph without removing it from the photo album.
With the invention described in Japanese Patent Application Laid-Open No. 2002-354331, it is possible to scan a photograph without taking it out of a photo album. However, it is more convenient and preferable for the user to be able to use a portable photographing apparatus such as a digital camera, because a portable apparatus can photograph an object in an arbitrary positional relationship. On the other hand, this freedom means that it is difficult to know under what photographing conditions (the positional relationship between the photographing apparatus and the subject, etc.) a photograph was actually taken, which makes it difficult to identify a particular photograph.
Disclosure of Invention
An object of the present invention is to provide an image processing apparatus and an image processing method for extracting, from a captured image, a predetermined image area in which color fading is considered to have occurred.
According to an embodiment of the present invention, an image processing apparatus includes:
an image acquisition unit that acquires an image of an object including a photograph;
an edge detection unit that detects an edge of the photograph in the image of the object acquired by the image acquisition unit; and
an image processing unit that performs color fading correction on an image area surrounded by the detected edge.
According to another embodiment of the present invention, an image processing method includes:
an acquisition step of acquiring an image of an object including a photograph;
a detection step of detecting an edge of the photograph in the acquired image of the object; and
a color fading correction execution step of executing color fading correction on the image area surrounded by the detected edge.
According to still another embodiment of the present invention, a computer program product stored in a computer-usable medium stores program instructions which, when executed on a computer system, enable the computer system to perform the steps of:
acquiring an image of an object including a photograph;
detecting an edge of the photograph in the acquired image of the object; and
performing fading correction on an image area surrounded by the detected edge.
Drawings
Fig. 1 is a diagram illustrating a photographic apparatus including an image processing device according to a first embodiment;
Fig. 2 is a structural diagram of the photographic apparatus including the image processing device according to the first embodiment;
Figs. 3A, 3B, 3C, 3D, 3E, 3F, and 3G are diagrams illustrating an edge detection method;
Fig. 4 is a diagram illustrating the assignment of labels in the labeling method;
Figs. 5A, 5B, 5C, 5D, 5E, and 5F are diagrams illustrating various images displayed on the liquid crystal display unit when a subject is photographed;
Figs. 6A and 6B are diagrams illustrating the Hough transform;
Fig. 7 is a diagram illustrating the relationship between a photograph and its projected image;
Figs. 8A and 8B are diagrams illustrating distortion due to fading;
Fig. 9 is a flowchart of the camera basic process;
Fig. 10 is a flowchart of the edge detection process;
Fig. 11 is a flowchart of the peripheral edge erasing process by marking;
Fig. 12 is a flowchart of the fading degree detection process;
Fig. 13 is a flowchart of the fading correction guidance display process;
Fig. 14 is a flowchart of the fading correction process;
Fig. 15 is a flowchart of a modification of the fading correction guidance display process; and
Fig. 16 is a flowchart of the peripheral edge erasing process by marking according to the second embodiment.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
< first embodiment >
Fig. 1 is a diagram illustrating a photographic apparatus including an image processing device according to a first embodiment. The photographic apparatus 1 is assumed to be a portable apparatus such as a digital camera or a cellular phone having a photographing function. The object 2 is a photo album in which photographs 3 are kept, a photograph 3 being a print or a developed film. Hereinafter, reference numeral 2 refers to the photo album as the object.
Fig. 2 is a diagram illustrating the structure of the photographic apparatus 1. As shown in fig. 2, the photographic apparatus 1 includes: an image data generation unit 21 for generating image data obtained by digitizing an image obtained by photographing a photographic target object; a data processing unit 22 for performing image processing on the image data; and a user interface unit 23 for exchanging information with a user.
The image data generation unit 21 includes an optical lens apparatus 101 and an image sensor 102. The optical lens apparatus 101 constitutes an optical system that forms an image of an object on the image sensor 102 and is capable of adjusting parameter settings for shooting, such as focus, exposure, and white balance. The image sensor 102 converts an image of the subject focused/imaged by the optical lens apparatus 101 into digitized image data. The image sensor 102 is composed of a CCD or the like.
The image data generation unit 21 performs high-resolution image capturing and low-resolution image capturing (preview capturing). In low-resolution image capturing, the image resolution is, for example, about XGA (extended graphics array: 1024 × 768 dots) which is relatively low. However, it is possible to perform video recording or image reading at a speed of about 30fps (frames/second). On the other hand, in high-resolution image capturing, for example, capturing is performed with the maximum number of pixels (dots) that can be captured. Since the shooting is performed with the maximum number of pixels, the shooting speed and the image reading speed are low as compared with the case of the low-resolution image shooting.
The data processing unit 22 includes: a memory 201 that stores image data output from the image sensor 102; a video output device 202 for displaying the image data stored in the memory 201; an image processing unit 203 for performing image processing on the image data; a CPU 204 that controls the entire photographing apparatus 1; and a program code storage device (hereinafter abbreviated as "code storage device") 205 that stores programs (program codes) run by the CPU 204. The storage device 205 is, for example, a ROM or flash memory. The image processing apparatus according to the present invention is implemented as an image processing device 203.
The user interface unit 23 includes: a liquid crystal display unit 301; an operation unit 302 including various operation members; a PC interface device 303 for inputting/outputting data to/from an external device such as a Personal Computer (PC); and an external storage device 304.
The operation unit 302 includes, as operation members, a power key, a mode changeover switch, a shutter key, a menu key, arrow keys, a focus key, and a zoom key (not shown). The mode changeover switch includes a switch for switching between the recording (shooting) and playback modes, and a switch for switching the sub-mode between normal and fading correction (hereinafter called the "sub-mode changeover switch" for distinction). Although not shown, a sub-CPU is also provided that detects a change in the state of each switch and transmits a corresponding detection signal to the CPU 204. The external storage device 304 is, for example, a portable storage medium (e.g., flash memory) that can be attached to and detached from the apparatus 1, or a hard disk device.
The CPU 204 controls the entire photographing apparatus 1 by loading and executing a program stored in the code storage device 205. The CPU 204 controls the optical lens apparatus 101 or the image sensor 102 of the image data generation unit 21 as necessary in accordance with a command signal input from the operation unit 302, thereby adjusting the focus, white balance, or the like, or changing the exposure time.
When the recording mode is set, the CPU 204 enables the image sensor 102 to output image data even before shooting. The video output device 202 generates RGB signals for display from the image data stored in the memory 201, and outputs the generated signals to the liquid crystal display unit 301. In this way, the liquid crystal display unit 301 can display an image of a photographic subject or a photographed image. The RGB signals from the video output apparatus 202 can be output to an external apparatus through a terminal (not shown) so that the RGB signals can be displayed by a television, a PC, a projector, or the like.
The image data stored in the memory 201 is saved in such a manner that the CPU 204 stores the image data in the external storage device 304 in a file format. The image data stored in the external storage device 304 may be output through the PC interface device 303. The PC interface device 303 conforms to USB, for example.
The CPU 204 instructs the image processing apparatus 203 to perform image processing on the image data as necessary. When the image data is saved in the external storage device 304, the CPU 204 causes the image processing device 203 to execute compression processing. When the playback mode is set, the CPU 204 loads the image data encoded by the compression processing to be saved in the external storage device 304 to the memory 201 in accordance with the operation of the operation unit 302. The CPU 204 then causes the image processing apparatus 203 to perform decompression processing to display the image data on the liquid crystal display unit 301.
As described above, in the recording mode, the normal mode or the fading correction mode may be set as the sub-mode. The normal mode is a mode in which a captured image is loaded as it is. The fading correction mode is a mode in which fading correction is performed by automatically extracting an image area containing an object whose original colors are assumed to have faded. The fading correction mode is provided so that digitization can be performed appropriately while the photograph 3 remains fixed in the photo album 2 or the like (i.e., without removing the photograph from the album). When the fading correction mode is set, the CPU 204 causes the image processing device 203 to execute the following image processing. The object of the image processing is the image data obtained by shooting, hereinafter referred to as "shot image data" or "raw image data" to distinguish it from the image data stored in the memory 201 before shooting; the image represented by this data is referred to as the "captured image" or "original image".
The image processing mainly comprises: (1) edge detection, (2) oblique shooting correction, (3) detection of the degree of fading in the detected edge area, and (4) fading correction. The edge detection process detects the edge (boundary line) of the photograph 3 (the photographic image) included in the taken image of the object 2. The photographic image is usually distorted to a greater or lesser extent by the state of the photograph 3 or its positional relationship to the camera. Unless the photograph surface is perpendicular to the optical axis of the optical system composed of the optical lens apparatus 101 (hereinafter the "ideal positional relationship"), the photographic image is distorted even if the photograph 3 is perfectly flat, that is, when the photograph 3 is taken from an oblique direction. The oblique shooting correction process corrects this distortion.
Depending on the state of preservation (humidity, temperature, ultraviolet radiation, etc.), chemical changes occur in the photograph 3 and its original colors fade. The fading degree detection process detects the degree of fading of the photographic image (the detected edge area) that has undergone the oblique shooting correction process. The fading correction process corrects the photographic image in accordance with the degree of fading specified by the detection process, so that the faded colors approach the original colors.
Next, the above-described various image processes performed by the image processing apparatus 203 will be described in detail. The image processing apparatus 203 performs various image processes using the memory 201 storing captured image data as a work area. A program for executing image processing is stored in a nonvolatile memory included in the image processing apparatus 203 or in the code storage apparatus 205 accessed by the CPU 204. Here, the latter is assumed. In this case, the CPU 204 loads programs for various image processing from the code storage device 205, and transfers the programs to the image processing device 203 as needed.
Fig. 3A to 3G are diagrams illustrating an edge detection method in the edge detection processing.
The preview image shown in fig. 3A is an image obtained by photographing a photograph 3 in the photo album 2; its image data is stored in the memory 201. The binary edge image shown in fig. 3B is created by performing edge detection on the preview image, using an edge detection filter such as the Roberts filter.
In the Roberts filter, two difference values Δ1 and Δ2 are determined by applying two types of weighting to a 2 × 2 neighborhood consisting of one target pixel and three pixels adjacent to it, and the two differences are combined to calculate a pixel value that highlights an edge. Assuming that the coordinates of the target pixel are (x, y) and its pixel value is f(x, y), the pixel value g(x, y) after the filtering process by the Roberts filter (after transformation) is expressed as follows.
Δ1 = 1·f(x,y) + 0·f(x+1,y) + 0·f(x,y−1) − 1·f(x+1,y−1) = f(x,y) − f(x+1,y−1)
Δ2 = 0·f(x,y) + 1·f(x+1,y) − 1·f(x,y−1) + 0·f(x+1,y−1) = f(x+1,y) − f(x,y−1)
g(x,y) = √(Δ1² + Δ2²) …(1)
The pixel value g(x, y) obtained by formula (1) is binarized according to a predetermined threshold value TH. The threshold TH may be a fixed value, or it may be determined as necessary by a method such as a variable threshold method or discriminant analysis. The pixel value g(x, y) is converted into a binary pixel value h(x, y) by the following formula using the threshold TH:

h(x,y) = 1 (when g(x,y) ≥ TH), 0 (otherwise) …(2)

As shown in fig. 3B, a binarized image (binary edge image) h representing the edges extracted from the captured image is created (generated) by this transformation.
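As a concrete illustration of formulas (1) and (2), the following is a minimal NumPy sketch (not taken from the patent; the function name, array conventions, and the example threshold are assumptions):

```python
import numpy as np

def roberts_binary_edges(img: np.ndarray, th: float = 30.0) -> np.ndarray:
    """Roberts-type edge extraction (formula (1)) and binarization (formula (2)).

    img is a 2-D grayscale image indexed as img[y, x]; th plays the role of TH.
    """
    f = img.astype(np.float64)
    g = np.zeros_like(f)
    # delta1 = f(x, y) - f(x+1, y-1);  delta2 = f(x+1, y) - f(x, y-1)
    d1 = f[1:, :-1] - f[:-1, 1:]
    d2 = f[1:, 1:] - f[:-1, :-1]
    g[1:, :-1] = np.sqrt(d1 ** 2 + d2 ** 2)   # edge strength g(x, y)
    return (g >= th).astype(np.uint8)         # binary edge image h(x, y)
```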
The color of the background on which the photograph 3 is placed may be similar to the color of the periphery of the photograph 3; consider, for example, a photograph 3 with a blue-sky background fixed to a blue album page. Even in this case, when the format of the image data is YUV, the difference between the photograph 3 and the blue page can be emphasized by the following formulas. Difference values are determined not only for the luminance component Y but also for the color components U and V; each pair of differences is averaged, the averages for the color components are multiplied by a coefficient n, and the results are added to the average determined for the luminance Y.
ΔY1 = 1·fy(x,y) + 0·fy(x+1,y) + 0·fy(x,y−1) − 1·fy(x+1,y−1) = fy(x,y) − fy(x+1,y−1)
ΔY2 = 0·fy(x,y) + 1·fy(x+1,y) − 1·fy(x,y−1) + 0·fy(x+1,y−1) = fy(x+1,y) − fy(x,y−1)
ΔU1 = 1·fu(x,y) + 0·fu(x+1,y) + 0·fu(x,y−1) − 1·fu(x+1,y−1) = fu(x,y) − fu(x+1,y−1)
ΔU2 = 0·fu(x,y) + 1·fu(x+1,y) − 1·fu(x,y−1) + 0·fu(x+1,y−1) = fu(x+1,y) − fu(x,y−1) …(3)
ΔV1 = 1·fv(x,y) + 0·fv(x+1,y) + 0·fv(x,y−1) − 1·fv(x+1,y−1) = fv(x,y) − fv(x+1,y−1)
ΔV2 = 0·fv(x,y) + 1·fv(x+1,y) − 1·fv(x,y−1) + 0·fv(x+1,y−1) = fv(x+1,y) − fv(x,y−1)
From the differences of formula (3), a sum of squares is obtained for each of the three components, and a transform using the Roberts filter is performed for edge extraction. Since it is sufficient to extract the edge, the pixel value g(x, y) can be calculated by using only the largest of the sums of squares of the three components.
The above emphasis can be performed even if the image data is in another format. When the image data is in a format in which colors are expressed by RGB, the pixel value g(x, y) is determined by formula (1) for each RGB component, and the largest of them is applied, which enables the emphasis. The same applies when the image data is in a format in which colors are expressed by CMY or CMYK.
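A sketch of this per-component variant, under the same assumptions as the previous sketch (the loop applies the single-channel differences to each component and keeps the largest response):

```python
import numpy as np

def channel_max_edges(img: np.ndarray, th: float = 30.0) -> np.ndarray:
    """Binary edge image for multi-channel data (e.g. RGB planes):
    formula (1) is evaluated per component and the largest strength is kept."""
    f = img.astype(np.float64)
    g = np.zeros(f.shape[:2], dtype=np.float64)
    for c in range(f.shape[2]):
        fc = f[..., c]
        gc = np.zeros_like(g)
        gc[1:, :-1] = np.sqrt((fc[1:, :-1] - fc[:-1, 1:]) ** 2 +
                              (fc[1:, 1:] - fc[:-1, :-1]) ** 2)
        g = np.maximum(g, gc)          # keep the largest component response
    return (g >= th).astype(np.uint8)
```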
When photographing a photograph 3 fixed in the photo album 2, it is difficult to capture only the desired photograph 3; the image acquired by the digital camera includes an extra outer area around it. In the worst case, other photographs 3 located near the desired photograph 3 are captured together with it, as shown in fig. 3B. However, a user who wishes to digitize a photograph 3 (hereinafter called the "desired photograph", and its image the "desired photographic image", when it must be distinguished from others) will try to make it appear as large as possible while taking care not to cut off any part of it, so as to digitize it well. Consequently, as shown in fig. 3B, the entire desired photograph 3 appears near the center of the captured image, and even if other photographs 3 appear, in most cases only parts of them appear at the periphery.
As shown in fig. 3B, there may also be edges inside the desired photographic image. To avoid incorrectly detecting such interior edges, it is desirable to detect the outer edges with a higher priority. This is referred to as "outer edge priority".
However, if parts of other photographic images are included at the periphery of the captured image, adopting outer edge priority may cause the edges of those other photographs to be erroneously detected as edges of the desired photographic image, as shown in fig. 3C. To avoid such false detection, in the present embodiment edges are treated according to whether they touch the border of the captured image (hereinafter the "outermost edge"). More specifically, edges that touch the outermost edge are erased from the binary edge image. By this outermost-edge erasure, the edge image shown in fig. 3B is updated as shown in fig. 3D. In this way, even if another photographic image is present, its edges can be erased, making it possible to reliably detect the area where the desired photographic image exists, as shown in fig. 3F.
The erasure of edges contacting the outermost edge may be performed using a labeling (marking) method: pixels that are interconnected and of which at least one touches the outermost edge are erased.
In the case of a binary image such as the binary edge image, labeling is performed in the following manner. As shown in fig. 4, when a target pixel F(x, y) lies on an edge, the eight pixels surrounding it are checked, and the same label as that of F(x, y) is assigned to each pixel connected to it. By assigning labels in this way, the same label is assigned to all mutually connected pixels constituting one edge. An edge contacting the outermost edge is therefore erased by canceling the label assigned to its constituent pixels; the value of each pixel whose label has been cancelled is set to zero.
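The following is a minimal sketch of this erasure, assuming a 0/1 NumPy edge image; instead of materializing explicit labels it floods outward from the border with 8-connectivity, which removes exactly the pixels whose label would touch the outermost edge:

```python
from collections import deque
import numpy as np

def erase_border_edges(edge: np.ndarray) -> np.ndarray:
    """Erase every edge pixel 8-connected to the image border (the 'outermost edge')."""
    h, w = edge.shape
    out = edge.copy()
    # seed the queue with all edge pixels lying on the outermost edge
    queue = deque((y, x) for y in range(h) for x in range(w)
                  if out[y, x] and (y in (0, h - 1) or x in (0, w - 1)))
    while queue:
        y, x = queue.popleft()
        if not out[y, x]:
            continue
        out[y, x] = 0                  # cancel the mark: set the pixel to zero
        for dy in (-1, 0, 1):          # visit the 8 neighbours
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and out[ny, nx]:
                    queue.append((ny, nx))
    return out
```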
The straight lines forming the edges of the desired photographic image are detected on the binary edge image (hereinafter simply "edge image") updated by the above erasing process. The detection is performed by using the Hough transform.
As shown in figs. 6A and 6B, the Hough transform is a technique in which the points constituting a straight line on the X-Y plane are "voted" onto the ρ-θ plane expressed by the following formula and converted into vote counts on the ρ-θ plane.
ρ=x·cosθ+y·sinθ …(4)
When the angle θ is varied from 0° to 360° for the coordinates (x, y) of each point, the same straight line is represented by the same point on the ρ-θ plane. This is because, for a straight line passing at distance ρ from the origin, the votes of all points constituting the line fall on the same point of the ρ-θ plane. It is therefore possible to specify a straight line (its position) from the ρ-θ coordinates where the number of votes is large. The number of votes equals the number of pixels on the straight line and can be regarded as the length of the line; ρ-θ coordinates with only a small number of votes correspond to short straight lines or to curves, and are excluded from the candidates representing the edges of the photographic image.
In the edge image serving as the inspection object, the center of the image is taken as the origin of the coordinate system, so both terms on the right side of formula (4) take positive and negative values. Accordingly, the distance ρ is positive in the range 0° ≤ θ < 180° and negative in the range 180° ≤ θ < 360°.
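A minimal voting sketch for formula (4), assuming the centered origin and the 0° to 360° sweep described above (the accumulator layout and step sizes are illustrative choices, not from the patent):

```python
import numpy as np

def hough_votes(edge: np.ndarray, n_theta: int = 360):
    """Vote each edge pixel into the rho-theta plane of formula (4)."""
    h, w = edge.shape
    thetas = np.deg2rad(np.arange(n_theta))   # theta = 0..359 degrees
    ys, xs = np.nonzero(edge)
    xs = xs - w // 2                          # take the image centre as origin,
    ys = ys - h // 2                          # so rho may be negative
    rho_max = int(np.hypot(w, h) / 2) + 1
    acc = np.zeros((2 * rho_max + 1, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        rhos = np.round(x * cos_t + y * sin_t).astype(int)
        acc[rhos + rho_max, np.arange(n_theta)] += 1
    return acc, rho_max                       # vote count at acc[rho + rho_max, theta]
```

Straight-line candidates are then the (ρ, θ) cells of the accumulator with the largest counts, examined within the θ ranges of formulas (5) and (6) below.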
As described above, the fading correction mode is assumed to be used to digitize existing photographs 3. Under this assumption, in many cases the photograph 3 is shot so that its image appears prominently and lies near the center of the taken image. When shot in this way, the center of the photographic image (a rectangular image area) is positioned near the center (origin) of the taken image, and its edges (sides) lie above, below, left, and right of the origin. Therefore, by dividing the detection range with respect to the angle θ as follows, the votes on the ρ-θ plane can be examined more efficiently than by dividing the range into 0° ≤ θ < 180° and 180° ≤ θ < 360°.
Upper and lower boundaries (sides):
45° ≤ θ < 135° (or 225° ≤ θ < 315°) …(5)
Left and right boundaries (sides):
135° ≤ θ < 225° (or 315° ≤ θ < 405°) …(6)
According to the defined ranges of the angle θ and the sign of the distance ρ, the upper, lower, left, and right boundaries (sides) can be specified, and their intersections can be calculated as the vertices of the photographic image. Under the photographing conditions assumed above, the boundaries of the photographic image should not pass near the origin. Therefore, the distance ρ is also considered when selecting, from the straight-line candidates with many votes, the candidates corresponding to the boundaries.
By specifying the four boundaries, the image area containing the photographic image is specified as the edge-detected area. After this area is specified, the oblique shooting correction process is performed on it: the specified area (photographic image) is extracted from the captured image and its distortion is corrected. The correction is performed using a projective transformation; in the present embodiment, it is achieved with a two-dimensional affine transformation, without using three-dimensional camera parameters. As is well known, affine transformations are widely applied to spatial transformations of images.
In the affine transformation, the coordinates (u, v) before transformation are subjected to operations such as shifting, scaling, and rotation according to the following formula, which yields the intermediate coordinates (x', y', z'):

(x', y', z') = (u, v, 1) · A, where A = [[a11, a12, a13], [a21, a22, a23], [a31, a32, a33]] …(7)

The final coordinates (x, y) are calculated as follows:

x = x'/z', y = y'/z' …(8)

Formula (8) performs the projective transformation: the coordinates (x, y) are scaled according to the value of z'. That is, the parameters contributing to z' have an effect on the projection; these parameters are a13, a23, and a33. Moreover, since the other parameters can be normalized by the parameter a33, the value of a33 may be set to 1. The parameters of the 3 × 3 matrix on the right side of formula (7) can be calculated based on the edges (four vertices) of the edge-detected region and the focal length at the time of shooting.
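As a small sketch of formulas (7) and (8), assuming the row-vector convention suggested by the text (the matrix layout is otherwise an assumption):

```python
import numpy as np

def project_point(u: float, v: float, a: np.ndarray):
    """Map (u, v) through the 3x3 matrix of formula (7), then normalize
    by z' as in formula (8). `a` is the 3x3 parameter matrix with a33 = 1."""
    xp, yp, zp = np.array([u, v, 1.0]) @ a   # (x', y', z')
    return xp / zp, yp / zp                  # (x, y)
```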
Fig. 7 is a diagram illustrating the relationship between the photograph 3 and its projected image in the captured image.
In fig. 7, the U-V-W coordinate system is the three-dimensional coordinate system of the captured image (original image) captured by the image data generation unit 21. The vectors A(Au, Av, Aw) and B(Bu, Bv, Bw) represent the photograph 3 in the three-dimensional U-V-W coordinate system. Moreover, S(Su, Sv, Sw) is a vector indicating the distance between the origin of the U-V-W coordinate system and the photograph 3.
The virtual projection screen (hereinafter "virtual screen") shown in fig. 7 virtually represents the image projected by the photographic apparatus 1 and is used to perform the projection of the photographic image. Taking the virtual screen as an X-Y coordinate system, the image of the photograph 3 projected onto it corresponds to the photographic image to be taken. Here, the virtual screen is assumed to be arranged at a distance d from the plane passing through W = 0 and perpendicular to the W axis.
Assume that an arbitrary point P(u, v, w) on the photograph 3 and the origin are connected by a straight line, and that p(x, y) is the X-Y coordinate of the intersection of this line with the virtual screen. In this case, the coordinates of p are given by the following projective formula:

x = u · d/w, y = v · d/w …(9)
From formula (9), the relational expressions shown below are obtained from the relationship between the four vertices P0, P1, P2, and P3 and their projection points p0, p1, p2, and p3 on the virtual screen shown in fig. 7.
In this case, the projection coefficients α and β are expressed by the following equations.
Next, projective transformation is described. Any point P (x, y) on the photograph 3 can be represented using the vectors S, A and B as follows:
P=S+m·A+n·B …(12)
where m: coefficient of vector A (0 ≤ m ≤ 1),
n: coefficient of vector B (0 ≤ n ≤ 1).
When the formula (10) is substituted into this formula (12), the coordinate values x and y are expressed by the following formula.
When the relationship shown in this formula (13) is applied to formula (7), the coordinates (x ', y ', z ') can be expressed as follows.
With formula (14), the coordinates (x', y', z') are determined by substituting the values of m and n, and the corresponding coordinates (x, y) on the captured image are obtained by a formula analogous to formula (8). Since the corresponding coordinates (x, y) are not necessarily integer values, the pixel values are determined by using an image interpolation technique or the like.
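The patent leaves the interpolation technique open; a common choice is bilinear interpolation, sketched below for a single-channel image (the clamping at the borders is an implementation detail, not from the source):

```python
import numpy as np

def bilinear_sample(img: np.ndarray, x: float, y: float) -> float:
    """Sample img (indexed img[y, x]) at a real-valued position (x, y)."""
    h, w = img.shape[:2]
    x0 = min(max(int(np.floor(x)), 0), w - 1)
    y0 = min(max(int(np.floor(y)), 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    ax, ay = x - x0, y - y0                  # fractional offsets
    top = (1 - ax) * img[y0, x0] + ax * img[y0, x1]
    bottom = (1 - ax) * img[y1, x0] + ax * img[y1, x1]
    return (1 - ay) * top + ay * bottom
```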
The coefficients m and n can be determined by setting the image size (0 ≤ u < umax, 0 ≤ v < vmax) of the corrected image p(u, v) to be output, and adjusting the size of the photographic image to fit this image size. When this method is used, the coefficients m and n can be calculated by the following equations.
However, the aspect ratio of the corrected image p to be created and the aspect ratio of the photograph 3 do not necessarily match under the influence of distortion occurring at the time of photographing or the like. Here, according to the formulas (9) and (10), the relationship between the corrected image p and the coefficients m and n is expressed as follows.
When the focal length f of the lens, which is one of the camera parameters, is known, the aspect ratio k can be obtained by formula (16). Therefore, assuming that the image size of the corrected image p is (0 ≤ u < umax, 0 ≤ v < vmax), the same aspect ratio k as that of the photograph 3 can be obtained by determining the coefficients m and n using the following formulas.
(1) when vmax/umax ≤ k
(2) when vmax/umax > k …(17)
When the photographic apparatus 1 has a fixed focal length, the focal length value of the lens may be set in advance. When it does not have a fixed focal length, that is, when a zoom lens or the like is present, the focal length changes with the zoom magnification. In this case, a table showing the relationship between zoom magnification and focal length is prepared in advance, and the focal length corresponding to the zoom magnification at the time of shooting is determined from the table.
The coordinates of the vertices p0, p1, p2, and p3 shown in fig. 7 are specified by the edge detection process. Once they are specified, the projection coefficients α and β can be calculated using formula (11), and the coefficients m and n can be determined according to formulas (16) and (17). Formula (14) can then be specified, and a corrected image (corrected photographic image) p can be produced with it. The image size of the corrected image p corresponds to umax, vmax, and the aspect ratio k, and this size specifies its shape and position. Since the projective (affine) transformation is performed for this image size, the corrected image p is obtained by applying to the original image (photographic image) not only the deformation and scaling operations that correct distortion, but also rotation and shift operations. Those various operations may of course be performed as desired.
It is considered quite difficult to photograph only the desired photograph 3 in a suitable manner with the portable photographic apparatus 1. Many users attempting to digitize a desired photograph 3 must take care not to leave out any part of it. In that case, items other than the desired photograph 3 (the album, other photographs 3, etc.) will in most cases be included in the shot.
Under such an assumption, in the present embodiment only the desired photograph 3 is digitized, by automatically extracting the desired photographic image from the taken image. As described above, it is difficult to know how the shooting was performed, and the shooting need not be performed in an ideal manner: an ideal photograph (corrected image p) is obtained automatically from the actual photographic image. By performing these operations on the taken image, the conditions required of the shooting in order to digitize the photograph 3 appropriately are relaxed. As a result, the user of the photographic apparatus 1 can more easily photograph the photograph 3 for digitization.
A photographic image (edge-detected area) is extracted by detecting its edges (boundary lines). A plurality of edge-detected areas may exist in a captured image: for example, when there are several photographic images, or when there are one or more photographic images together with one or more rectangular items other than the desired photograph. Therefore, in the present embodiment, the detected edges (edge-detected areas) are displayed (figs. 5B and 5C), and the user is prompted to select the edge-detected area from which the corrected image p is to be generated.
Since color fading has occurred on the photograph 3 (subject), only the corrected photograph image produced by the inclination shooting correction process is subjected to color fading correction.
The corrected photographic image is obtained by applying distortion correction or the like to the desired photographic image. Therefore, in the fading degree detection process, the degree of fading is detected on the desired photographic image (edge-detected area). For example, in the present embodiment, the detection is performed by preparing a histogram table for each RGB component over the pixels constituting the desired photographic image. (The detection may equally be applied to the corrected photographic image.) A histogram table HT[f(x, y)] is generated by incrementing the entry for each pixel value f(x, y), where f(x, y) denotes the pixel value at coordinates (x, y).
Histogram (R component): HTr[fr(x,y)] ← HTr[fr(x,y)] + 1
Histogram (G component): HTg[fg(x,y)] ← HTg[fg(x,y)] + 1
Histogram (B component): HTb[fb(x,y)] ← HTb[fb(x,y)] + 1
(outAreaX ≤ x < maxX − outAreaX, outAreaY ≤ y < maxY − outAreaY)
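A minimal sketch of this histogram construction, assuming an 8-bit RGB image as a NumPy array (the function and parameter names are illustrative):

```python
import numpy as np

def fading_histograms(img: np.ndarray, out_area_x: int = 0, out_area_y: int = 0):
    """Per-component 256-bin histogram tables (HTr, HTg, HTb) over the
    inspection range that excludes the blank margins."""
    max_y, max_x = img.shape[:2]
    core = img[out_area_y:max_y - out_area_y, out_area_x:max_x - out_area_x]
    return [np.bincount(core[..., c].ravel(), minlength=256) for c in range(3)]
```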
some of the photographs 3 have blank edges. Since there is no image in the margin, the margin is excluded from the inspection object in the present embodiment. It is assumed that the numbers of pixels in the vertical and horizontal directions of the edge detection area in which the photographic image exists are maxY and maxX, respectively, and the numbers of pixels in the vertical and horizontal directions for ignoring blank edges are outarey and outarex, respectively. In the X-axis direction, the checking range is outAlexaX ≦ X < maxX-outAlexaX, and in the Y-axis direction, the checking range is outAlexaY ≦ Y < maxY-outAlexaY. The respective numbers of pixels maxY, maxX, outAreaY, and outAreaY may be obtained from the edge detection area. Those numbers of pixels are obtained under the assumption that the edge detection region is in a rectangular shape without distortion and its four boundary lines are parallel or perpendicular to the X-axis (i.e., the image is not rotated). The actual inspection range varies depending on whether or not there is distortion or rotation and the degree thereof.
Figs. 8A and 8B are diagrams illustrating distortion due to fading. Fig. 8A shows an example of the histogram tables of the respective components in an image without fading; fig. 8B shows an example in an image with fading. In each histogram, the abscissa shows RGB values and the ordinate shows the number of pixels.
As shown in figs. 8A and 8B, the range of RGB values becomes narrower due to fading. Thus, according to the present embodiment, the RGB values on the maximum and minimum sides at which the number of pixels exceeds a threshold are specified by referring to the histogram tables HTr[j], HTg[j], and HTb[j]. The values specified on the maximum side for each RGB component are called the upper limits maxR, maxG, and maxB, and those on the minimum side the lower limits minR, minG, and minB. They are determined as follows: for example, when the value j is represented by 8 bits, i.e., is in the range 0 to 255, j is incremented from 0 and it is checked, one value at a time, whether the corresponding number of pixels exceeds the threshold. The threshold may be an arbitrarily determined value, or a value obtained by multiplying the total number of pixels by an arbitrarily determined ratio. Hereinafter, the upper and lower limits are collectively referred to as the fading parameters.
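A sketch of this limit search for one component's histogram table (the fallback values for an empty scan are an assumption):

```python
def fading_limits(hist, threshold: int):
    """Scan a 256-bin histogram upward and downward; the first bins whose
    counts exceed the threshold give the lower and upper fading parameters."""
    lower = next((j for j in range(256) if hist[j] > threshold), 0)
    upper = next((j for j in range(255, -1, -1) if hist[j] > threshold), 255)
    return lower, upper
```

Called once per component, this yields (minR, maxR), (minG, maxG), and (minB, maxB).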
Since the range in which the component values exist becomes narrower due to fading, fading correction is performed to expand that range. Therefore, a correction table as shown below is prepared for each RGB component.

Correction table (R component): STr[j] = 255 · (j − minR) / (maxR − minR)
Correction table (G component): STg[j] = 255 · (j − minG) / (maxG − minG)
Correction table (B component): STb[j] = 255 · (j − minB) / (maxB − minB)
where
minR ≤ j ≤ maxR
minG ≤ j ≤ maxG
minB ≤ j ≤ maxB
In the correction tables STr[j], STg[j], and STb[j], the corrected component value is stored in the record designated by the component value j, and fading correction replaces a component value j by the value stored in that record. The RGB pixel values at coordinates (x, y), denoted fr(x, y), fg(x, y), and fb(x, y), are changed by the fading correction as follows, where fr'(x, y), fg'(x, y), and fb'(x, y) denote the corrected RGB components. By this change, the histogram tables of fig. 8B change into those of fig. 8A.
Corrected pixel value (R component):
fr'(x,y) = STr[fr(x,y)]
Corrected pixel value (G component):
fg'(x,y) = STg[fg(x,y)]
Corrected pixel value (B component):
fb'(x,y) = STb[fb(x,y)] …(19)
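A sketch of the table construction and of formula (19), assuming the linear range stretch given above and 8-bit components (clipping outside [min, max] is an implementation choice):

```python
import numpy as np

def build_correction_table(lo: int, hi: int) -> np.ndarray:
    """256-entry lookup table ST[j] that stretches [lo, hi] to [0, 255]."""
    j = np.arange(256, dtype=np.float32)
    return np.clip((j - lo) * 255.0 / max(hi - lo, 1), 0, 255).astype(np.uint8)

def apply_fading_correction(img: np.ndarray, limits) -> np.ndarray:
    """Formula (19): replace each component value by its table entry.
    `limits` is [(minR, maxR), (minG, maxG), (minB, maxB)]."""
    out = img.copy()
    for c, (lo, hi) in enumerate(limits):
        out[..., c] = build_correction_table(lo, hi)[img[..., c]]
    return out
```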
Fading generally occurs on photographs; however, it has not necessarily occurred in the particular photograph 3 being taken. Therefore, in the present embodiment, whether fading has occurred is determined from the upper and lower limits (fading parameters) detected for each component. A fading rate is calculated as information indicating the degree of fading by the following formula, and the determination is made according to whether the calculated rate is less than or equal to a predetermined threshold. When the fading rate is less than or equal to the threshold, it is determined that fading has occurred.
Fading rate:
((maxR-minR)+(maxG-minG)+(maxB-minB))/(256×3) …(20)
note that whether fading has occurred may be determined for each component. This determination may be made, for example, as shown below, depending on whether the difference between the upper and lower limits is less than the threshold value THRange. In this case, whether or not to perform the fading correction may be determined based on the determination result of each component.
Even if fading has occurred on the photographed photograph 3, the colors of the corrected photographic image are automatically restored to the original colors (the colors before fading) or colors close to them by performing fading correction on the corrected photographic image as needed. The corrected photographic image, fading-corrected as needed, is what is stored as the result of digitizing the photograph 3. The user therefore obtains the photograph 3 in its best form simply by photographing it with the apparatus 1; since no additional image processing of the captured and stored image is required, the photograph 3 can easily be digitized optimally.
When the fading correction mode is set, the CPU 204 causes the image processing apparatus 203 to perform various image processes as described above on the captured image data stored in the memory 201 by capturing. Hereinafter, the operation of the photographing apparatus 1 under the control of the CPU 204 will be described in detail with reference to various flowcharts shown in fig. 9 to 13.
Fig. 9 is a flowchart of the camera basic processing, i.e., the flow executed by the CPU 204 from when the fading correction mode is set until the mode is cancelled. First, this basic processing will be described in detail with reference to fig. 9. It is realized by the CPU 204 loading and executing a program stored in the program code storage device 205.
First, in step S11, the peripheral circuit is initialized, and in the next step S12, initialization of data and setting for displaying a preview image are performed. At this time, a program for image processing is loaded from the code storage device 205 to be transferred to the image processing device 203. Subsequently, the image processing apparatus 203 is set so that various image processes can be performed.
In step S13, following step S12, an operation of the shutter key is awaited while the preview image is updated. When notification of a shutter-key operation is given from the operation unit 302, the flow proceeds to step S14. When notification of an operation of the mode changeover switch or the sub-mode changeover switch is given from the operation unit 302, the mode specified by the operated switch is set, processing for that mode starts, and this flow ends.
In step S14, the optical lens apparatus 101 and the image sensor 102 are controlled to perform shooting under the conditions set at that time. These conditions include the camera parameter f (the focal length). In the next step S15, the image processing device 203 is instructed to perform the edge detection process on the captured image data stored in the memory 201 by the shooting.
In step S16 following step S15, the video output apparatus 202 is instructed to display a preview image based on the captured image data on the liquid crystal display unit 301 as shown in fig. 5A. In the next step S17, the edge detected by causing the image processing apparatus 203 to perform the edge detection processing is displayed so as to be superimposed on the preview image. The edge is displayed in such a manner that the CPU 204 changes the image data of the portion detected as the edge in the captured image data.
Fig. 5B shows a display example for a failed edge detection, and fig. 5C a display example for a successful one. Both cases occur when photographs 3 fixed close to each other in the photo album 2 are photographed. In the failure of fig. 5B, the edge of an adjacent photograph is detected as an edge line of the fully visible desired photograph 3. In the success of fig. 5C, the edges of the other photographs 3 are accurately distinguished.
As described above, edge detection is performed on the edge image (fig. 3D) in which the edges contacting the outermost edge have been erased. Also as described above, in most cases other photographs 3 adjacent to the desired photograph 3 are captured such that, if they appear at all, only parts of them appear. Therefore, the failure shown in fig. 5B occurs only when the boundary between another photograph 3 and the album background (backing paper) cannot be detected. Since the edge image is generated so as to emphasize edges, including this boundary, the failure shown in fig. 5B rarely occurs.
In step S18, performed after the edges are displayed, the apparatus waits for the user to select an edge (edge-detected area) by operating the operation unit 302, for example with the arrow keys. When the user's selection is notified from the operation unit 302, the flow proceeds to step S19. Although a detailed description is omitted, since extraction of the edge-detected area may fail, the user may instead instruct at this step that the captured image be discarded, in which case the flow returns to step S13.
In step S19, the user is asked whether the fading that has occurred should be corrected, and it is judged whether the user has instructed fading correction. When the user instructs correction by operating the operation unit 302, the determination is yes and the flow proceeds to step S22; otherwise the determination is no and the flow proceeds to step S20.
In step S20, the finally obtained image data (captured image data, corrected image p without color fading correction, or corrected image p with color fading correction) is stored in the external storage device 304 in a file format. In step S21 performed after step S20, the saved image is displayed on the liquid crystal display unit 301 for a given length of time. Then, the flow returns to step S13 to prepare for the following shooting.
On the other hand, in step S22, the image processing apparatus 203 is instructed to perform the inclination shooting correction process on the edge detection area (photographic image) selected by the user. In the correction processing, the projection coefficients α and β are calculated by using formula (11), and the coefficients m and n are calculated according to formulas (16) and (17). From the result, formula (14) is calculated, and a corrected image p is generated by using formula (14). After the corrected image p (corrected photographic image) is generated, the flow proceeds to step S23. The generated corrected image p is stored in a predetermined area of the memory 201.
In step S23, the image processing apparatus 203 is instructed to execute a color fading degree detection process to detect the degree of color fading on the edge detection area. In the next step S24, the image processing apparatus 203 is caused to execute the fading correction process in accordance with the detection result, and then the flow proceeds to step S20.
As described above, when the fading rate calculated by formula (20) is less than or equal to the threshold, fading is considered to have occurred. Therefore, in the fading correction process, when fading is considered not to have occurred, no fading correction operation is performed on the corrected image p. As a result, when the flow proceeds to step S20, either the corrected image p without fading correction or the corrected image p with fading correction is saved.
In step S23, the image processing apparatus 203 is instructed to execute the fading degree detection process to detect the degree of fading in the edge-detected area. In the next step S24, the fading correction guidance display process is performed to notify the user of the detection result; in this guidance display, as described later, the user is prompted to make the final choice of whether to perform fading correction. In step S25, it is then determined whether the user has instructed fading correction. If so, the determination is yes, the image processing apparatus 203 executes the fading correction process in step S26, and the flow advances to step S20; otherwise the determination is no and the flow proceeds directly to step S20.
In the fading correction guidance display process, the fading rate calculated by formula (20) is displayed as the result of detecting the degree of fading, for example as shown in fig. 5D, prompting the user to judge whether fading correction is necessary. When the user instructs correction, the corrected image p subjected to fading correction is displayed in step S21, for example as shown in fig. 5E. The upper and lower limits detected for each RGB component (the fading parameters; formula (21)) may also be displayed, for example as shown in fig. 5F.
Hereinafter, the image processing performed by the image processing apparatus 203 in the camera basic processing will be described in detail.
Fig. 10 is a flowchart of the edge detection processing to be executed by the image processing apparatus 203 at step S15. Next, the edge detection process will be described in detail with reference to fig. 10. Various image processing performed by the image processing apparatus 203 is realized such that the image processing apparatus 203 executes a program for performing image processing, which is loaded from the code storage apparatus 205 and transferred by the CPU 204.
First, in step S30, a binary edge image h is created from the captured image by using formula (1), formula (2), and the threshold TH. In the next step S31, the peripheral edge erasing process by marking is performed to erase from the edge image the peripheral edges in contact with the outermost edge. By this erasing process, the edge image of fig. 3B is updated as in fig. 3D.
In step S32, the Hough transform is performed: the points constituting straight lines (edge lines) on the X-Y plane of fig. 6A are "voted" onto the ρ-θ plane of fig. 6B by formula (4) and converted into vote counts on that plane. In the next step S33, a plurality of candidates with large vote counts, for positive and negative values of the distance ρ respectively, are acquired (specified) as edge-line candidates in the range 45° ≤ θ < 225° (more precisely, the ranges of formulas (5) and (6)). By performing the specification in this way, edge lines (boundary lines) above, below, left, and right of the origin are respectively specified as candidates. The flow then advances to step S34.
In step S34, a candidate table is prepared in which the specified candidates are arranged in descending order of vote count. In the next step S35, the candidates are sorted starting from the largest distance ρ (absolute value) from the center. In step S36, candidates located above, below, left, and right are selected one by one with reference to the candidate table, and from the four selected candidates the coordinates of the intersections of pairs of candidates are calculated as vertex candidates. When the distance between the ends of two candidates and their intersection is not within a predetermined range, those candidates are judged not to intersect each other and are excluded from the combinations constituting vertices. After the coordinates of four vertices in total have been calculated, the flow advances to step S37, where it is determined whether all the calculated vertex coordinates lie within the captured image. When at least one vertex is not in the captured image, the determination is no and the flow proceeds to step S40: the candidates used to calculate the vertex coordinates lying outside the captured image are each replaced by other candidates located in the same direction as viewed from the origin, and the flow returns to step S36. The replaced candidates are thus no longer considered as candidates constituting the edge-detected area. Otherwise the determination is yes, and the routine proceeds to step S38.
It is assumed that all four vertices lie within the captured image. Therefore, in the present embodiment, as shown in fig. 3G, the case in which the photographed photograph 3 is not fully visible is treated as a prohibited case, and the user is required not to shoot in such a way. This prohibition is imposed in order to require the user to photograph the desired photograph 3 appropriately.
In step S38, it is determined whether the calculation of four-vertex coordinates has been completed for all coordinates (candidates) stored in the candidate table. When no coordinates remain to be considered, the determination is yes and the flow proceeds to step S39: the coordinates of the four vertices that all lie within the captured image are saved and output to the CPU 204, and the series of processes terminates. Otherwise the determination is no and the routine proceeds to step S40: one of the four candidates is changed to another candidate, and the flow returns to step S36.
As described above, in order to digitize the photograph appropriately, it can be assumed that in most cases the user will photograph the photograph 3 to be digitized so that it is clearly shown, while trying not to leave any part of it out of the frame. Thus, priority is given to the outermost candidates. This is why the candidate table is prepared in step S34 and the candidates are sorted in step S35.
There may be various combinations of the four vertices all present in the captured image. For example, when a photograph 3 with a white border is taken, both the outside and the inside of the white border are represented as edges in the binary image. Therefore, the present embodiment is configured such that, when there are a plurality of combinations of four vertices in step S39, the combination of four vertices farthest from the origin is regarded as an edge detection area (fig. 3F and 5C), and those coordinates are notified to the CPU 204. The CPU 204 copies the captured image stored in the memory 201, for example, to another area, overlays an image for displaying an edge specified by the coordinates of the notified four vertices on the copied captured image, and displays the captured image after the overlaying on the video output device 202. As a result, the CPU 204 displays an image as illustrated in fig. 5B or 5C on the liquid crystal display unit 301.
In step S39, the numbers of pixels maxY and maxX in the vertical and horizontal directions of the edge detection area specified by the four vertices are counted or calculated and saved, for example, for use in the fading correction. Further, it is determined whether there is another combination in which all four vertices are located in the vicinity of the four vertices of the specified area. When such a combination is found, it is judged that a white border exists, and the respective pixel counts outAreaY and outAreaX in the vertical and horizontal directions are counted or calculated and saved. Therefore, when there is a white border, the fading correction can be performed only on the inner region.
Fig. 11 is a flowchart of the peripheral edge erasing process performed by marking at step S31. Next, the erasing process will be described in detail with reference to a flowchart shown in fig. 11.
First, in step S60, a mark information area for storing the marks to be assigned to the respective pixels of the edge image (fig. 3B) is set in the memory 201, for example, and the area is initialized. In the following step S61, the connection state between the pixels constituting edges is checked over all the pixels of the edge image, and the same mark is assigned to all connected pixels, that is, to all the pixels constituting the same edge line. When the assignment is completed, that is, when all the marks to be stored in the mark information area have been stored, the flow proceeds to step S62.
In step S62, with reference to the mark information area, the marks assigned to the pixels constituting edges contacting the outermost edge are acquired (specified). In step S63, all pixels to which the acquired marks are assigned are erased. The erasing is performed by erasing the marks from the mark information area and setting the values of the pixels to which the erased marks were assigned to zero. After the edge image is updated as shown in fig. 3D by erasing all the invalid edge pixels in this way, the flow terminates.
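In terms of standard image-processing primitives, steps S60-S63 amount to connected-component labeling followed by removal of every component that touches the image border. The following sketch restates that logic, using scipy's labeling as a stand-in for the patent's own marking procedure:

```python
import numpy as np
from scipy import ndimage

def erase_peripheral_edges(edge_image):
    """Erase every edge line (connected component) that touches the
    outermost edge of the binary edge image (sketch of steps S60-S63)."""
    labels, _ = ndimage.label(edge_image)      # step S61: assign marks
    border = np.concatenate([labels[0, :], labels[-1, :],
                             labels[:, 0], labels[:, -1]])
    touching = np.unique(border[border != 0])  # step S62: marks on the border
    erased = edge_image.copy()
    erased[np.isin(labels, touching)] = 0      # step S63: erase those pixels
    return erased
```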
Fig. 12 is a flowchart of the fading degree detection processing to be executed by the image processing apparatus 203 in step S23 in the camera basic processing shown in fig. 9. Next, the detection process will be described in detail with reference to fig. 12.
First, in step S45, histogram tables HTr, HTg, and HTb are generated for the respective RGB components over the edge detection area (the desired photographic image) that was specified by the edge detection process and selected by the user (formula (15)). This generation is performed by using the pixel counts maxY, maxX, outAreaY, and outAreaX specified based on the result of the edge detection processing. After the generation, the flow proceeds to step S46.
In step S46, with reference to the generated histogram tables HTr, HTg, and HTb, an upper limit (maximum value) and a lower limit (minimum value) of values are determined (specified) for each RGB component, respectively. In the next step S47, the fading rate is calculated from the formula (20) by using the upper limit (maximum value) and the lower limit (minimum value) determined for each RGB component, and the calculated fading rate is saved together with the upper limit (maximum value) and the lower limit (minimum value) (fading parameter) determined for each RGB component. The flow then terminates.
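A schematic rendering of steps S45-S47 follows, assuming the edge detection area is available as an HxWx3 uint8 numpy array. The simple min/max limits and the range-based fading rate below are illustrative stand-ins for the patent's formulas (15) and (20), whose exact definitions are not reproduced here:

```python
import numpy as np

def detect_fading(region):
    """Estimate per-channel fading parameters for an HxWx3 uint8 region.

    Returns per-channel (lower, upper) limits and a scalar fading rate.
    A faded image uses only a narrow slice of the 0-255 range, so a
    narrower combined range yields a lower rate -- consistent with the
    text, where a rate at or below the threshold means fading occurred.
    """
    lowers, uppers = [], []
    for c in range(3):
        hist = np.bincount(region[..., c].ravel(), minlength=256)  # HTr/HTg/HTb
        nonzero = np.nonzero(hist)[0]
        lowers.append(int(nonzero.min()))      # lower limit (minimum value)
        uppers.append(int(nonzero.max()))      # upper limit (maximum value)
    # illustrative fading rate: fraction of the full range actually used
    rate = sum(u - l for l, u in zip(lowers, uppers)) / (3 * 255.0)
    return lowers, uppers, rate
```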
Fig. 13 is a flowchart of the fading correction guidance display process, which is executed at step S73 in the camera basic process shown in fig. 9. The display processing is subroutine processing executed by the CPU 204 after the image processing apparatus 203 gives a notification of the fading rate or the like.
In the present embodiment, not only is the user prompted to select whether to perform the fading correction, but the user can also arbitrarily specify the degree to which the fading correction is to be performed. The degree is specified by operating the operation unit 302: the upper limit and the lower limit are displayed for each RGB component as shown in fig. 7F, and the user changes them.
First, in step S80, it is determined whether or not fading has occurred by comparing the notified fading rate with a threshold value. As described above, when the fading rate is less than or equal to the threshold value, it is judged that fading has occurred, and therefore, the judgment result is yes, and the flow proceeds to step S81. Otherwise, the determination result is no, and after the fading correction flag is turned OFF in step S88, the flow is terminated. The fading correction flag is a variable used to determine whether or not fading correction is required, and OFF corresponds to substituting a value indicating that correction is not required. When it is determined that correction is not necessary, the determination result in step S25 shown in fig. 9 is no.
In step S81, the fading correction flag is switched ON. In the next step S82, the upper limit and the lower limit, which have been notified from the image processing apparatus 203, are set as color fading parameters for each RGB component. Then, the flow proceeds to step S83, and only the corrected image p (corrected photographic image) is displayed as a preview image. In the next step S84, the image indicating the fading rate is displayed in a superimposed form. Thereby, an image as shown in fig. 5D is displayed on the liquid crystal display unit 301. As described above, the corrected image p is stored in the area set in the memory 201. Thus, for example, display is performed by copying the corrected image p to another area and instructing the video output apparatus 202 to display it. A bar indicating the fading rate is displayed, for example, by overlaying the image on the copied corrected image p.
In step S85 following step S84, it is determined whether the user has instructed to perform the fading correction. When the user instructs to perform the fading correction by operating the operation unit 302, the judgment result is yes, and the flow ends here. In this case, since the fading correction flag is ON, the determination result in step S25 shown in fig. 9 is yes. Otherwise, the determination result is no, and the flow proceeds to step S86.
In step S86, it is determined whether the user has instructed cancellation of the fading correction. When the user instructs cancellation of the fading correction by operating the operation unit 302, the determination result is yes. After the fading correction flag is turned OFF in step S88, the flow ends. In this case, since the fading correction flag is OFF, the determination result is no in step S25 shown in fig. 9. Otherwise, the determination result is no, and the flow proceeds to step S87.
In step S87, processing for changing the fading parameters, i.e., the upper limit and the lower limit of each RGB component, is executed according to the operation of the operation unit 302. The change is performed by operating on the image displayed on the liquid crystal display unit 301. When the user commands the application of the changed contents, the upper and lower limits currently displayed for each RGB component are set as the changed fading parameters, and the flow returns to step S84, where the fading rate calculated from the changed fading parameters is redisplayed. The user can thus confirm the effect of the changed fading parameters through the fading rate.
Fig. 14 is a flowchart of the fading correction process executed by the image processing apparatus 203 in step S24 in the camera basic process shown in fig. 9. The correction process will be described in detail with reference to fig. 14.
As described above, in the color fading correction process, when the color fading rate is larger than the threshold value, that is, when it is considered that no color fading has occurred, the color fading correction operation is not performed on the corrected image p. In fig. 14, the processing related to the determination as to whether or not to perform the fading correction is omitted, and only the processing performed after the determination as to whether or not to perform the fading correction is selectively shown.
First, in step S50, correction tables STr, STg, and STb are prepared for the respective RGB components according to formula (18), using the fading parameters. In the next step S51, as shown in formula (19), each component value of the RGB components of the pixels constituting the corrected image p is replaced by the value stored in the record of the correction table STr, STg, or STb indexed by that component value. After the fading correction is performed for all the pixels in this manner, the flow terminates.
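Steps S50-S51 can be sketched as building one 256-entry look-up table per channel that stretches the detected [lower, upper] range back to the full 0-255 range, then applying it pixel-wise. The linear stretch below stands in for formula (18), whose exact form is not reproduced here:

```python
import numpy as np

def fading_correction(region, lowers, uppers):
    """Apply per-channel look-up tables STr, STg, STb
    (sketch of steps S50-S51)."""
    corrected = region.copy()
    for c in range(3):
        lo, up = lowers[c], uppers[c]
        # correction table for channel c: stretch [lo, up] to [0, 255]
        table = np.clip((np.arange(256) - lo) * 255.0 / max(up - lo, 1),
                        0, 255).astype(np.uint8)
        corrected[..., c] = table[region[..., c]]  # replace values via the table
    return corrected
```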
Note that, in the present embodiment, the user is prompted to select an object of fading correction from the extracted edge detection areas, and to select whether to perform fading correction. However, at least one of these selections may be omitted; that is, at least one of the selection of the edge detection area to be subjected to fading correction and the selection of whether to perform fading correction may be made automatically. Likewise, although the present embodiment allows the user to change the fading parameters arbitrarily, this capability may also be omitted.
The extraction of the edge detection area is based on the assumption that the edges of the photograph 3 are straight lines. Many photographs (e.g., gravure prints) printed in newspapers and books (including magazines, etc.) have such straight edges, and the blank spaces and pages of newspapers and books generally share the same characteristic. On this assumption, the photograph 3 may also be such a printed photograph, blank space, or page. When such objects are photographed as the photograph 3, they can be digitized for storage in a database. Even if fading occurs with the lapse of time, they can be preserved in a more appropriate state by correcting them. Therefore, the present invention can be effectively used for storing various digitized printed matter.
The corrected image p subjected to fading correction is saved. However, it is difficult to estimate how the result of the fading correction will vary according to the contents of the fading parameters. As a result, even though the fading parameters can be changed arbitrarily, the user may not obtain the optimum fading correction in the saved corrected image p, and may repeat the photographing of the photograph 3 in order to save a corrected image p to which the optimum fading correction is considered to have been applied. Such cumbersome repeated shooting of the same photograph 3 should be avoided. It can be avoided by changing the fading correction guidance display process shown in fig. 13 to the process shown in fig. 15, for example.
In fig. 15, the processing of steps S90 to S92 and steps S95 to S97 is the same as the processing of steps S80 to S82 and steps S85 to S87. Therefore, only the portion different from fig. 13 will be described.
In fig. 15, color fading correction according to the current color fading parameter is performed on the corrected image p at step S93. In the next step S94, the corrected image p after the color fading correction is displayed as a preview image on the liquid crystal display unit 301. After the user changes the color fading parameters at step S97, the flow returns to step S93, where the corrected image p, which has undergone color fading correction according to the changed color fading parameters, is redisplayed. As a result, it is possible for the user to confirm whether or not the fading correction can be appropriately performed in the case where the corrected image p is displayed as a preview image. This makes it possible for the user to save the corrected image p subjected to the optimum fading correction without repeatedly taking the photograph 3.
< second embodiment >
In the first embodiment, all edges touching the outermost edge are erased. However, when all of these edges are erased and an edge of the desired photographic image contacts an edge extending from the outermost edge, such as the edge of another photographic image as shown in fig. 3E, the edge of the desired photographic image, which should not be erased, may also be erased. If the edge of the desired photographic image is erased, it becomes impossible to accurately detect the edge detection area in which the photographic image exists. Such a situation is likely to occur when, as shown in fig. 3E, a plurality of photographs 3 are fixed to the photo film 2. The second embodiment aims at avoiding the defects that may arise when an edge that should not be erased contacts an edge to be erased. According to the second embodiment, it is possible to detect the edge detection area more accurately.
The configuration of the photographing device including the image processing apparatus according to the second embodiment is basically the same as that in the first embodiment, and its operation is also largely the same. Therefore, only the portions different from the first embodiment will be described, using the reference numerals designated in the first embodiment.
In the second embodiment, in order to avoid the above-described defects, an edge contacting the outermost edge is erased on the basis of the x-y coordinate values of the pixels constituting it, for example in a checkerboard-like (Bayer-pattern-like) manner. That is, the coordinates subject to erasure are determined by decimation, and only the pixels having the predetermined coordinates among those constituting an edge contacting the outermost edge are erased. In this way, weighting is performed by partially erasing the edges that touch the outermost edge. Partial erasing may also be achieved by other methods.
By partially erasing the edges that touch the outermost edge, the number of votes for a partially erased edge line is reduced, and its importance as a candidate is accordingly reduced. However, an edge line of the desired photographic image is specified in combination with the other three edge lines, through the coordinates of the four vertices. Even if one or more edge lines of the desired photographic image are partially erased, the coordinates of the four vertices determined by combining them with the other edge lines remain more appropriate, as coordinates indicating the image area of the desired photographic image, than any other coordinates. Therefore, even if all the boundary lines of the desired photographic image are partially erased, the possibility that they are all specified as the edge lines of the desired photographic image is still high. As a result, even when partial erasing is performed, the edge lines of the desired photographic image can be detected more reliably than in the first embodiment. As shown in fig. 3F, the desired photographic image can be detected reliably and accurately even from the edge image shown in fig. 3E.
In the second embodiment, the peripheral edge erasing process by marking shown in fig. 11 is different from the first embodiment. Therefore, the erasing process in the second embodiment will be described in detail with reference to the flowchart shown in fig. 16.
The processing of steps S70 to S72 in fig. 16 is basically the same as the processing of steps S60 to S62 in fig. 11. Therefore, only steps S73 to S77 will be described.
In step S73, with reference to the mark information area, for example, the coordinates of a pixel to which one of the marks acquired in step S72 is assigned are obtained. In the next step S74, it is determined whether those coordinates are coordinates of an erasing (decimation) target. The coordinates of erasing targets are, for example, coordinates whose x-coordinate is an even number and whose y-coordinate is an odd number. When the coordinates are those of an erasing target, the judgment result is yes, and after the pixel at those coordinates is erased in step S76, the flow proceeds to step S75. Otherwise, the determination result is no, and the flow proceeds directly to step S75.
In step S75, it is determined whether all the pixels to which the marks acquired in step S72 are assigned have been processed. When no pixel remains to be processed, the determination result is yes, and the flow ends here. Otherwise, the determination result is no, the coordinates of the next target pixel are obtained in step S77 in the same manner as in step S73, and the flow returns to step S74.
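Folded into the labeling sketch given for the first embodiment, steps S70-S77 clear only those marked border-touching pixels whose coordinates match the erasure pattern, e.g. even x and odd y as in step S74. A sketch under the same assumptions as before:

```python
import numpy as np
from scipy import ndimage

def partially_erase_peripheral_edges(edge_image):
    """Second-embodiment variant: thin out, rather than fully erase,
    the edges touching the outermost edge (sketch of steps S70-S77)."""
    labels, _ = ndimage.label(edge_image)
    border = np.concatenate([labels[0, :], labels[-1, :],
                             labels[:, 0], labels[:, -1]])
    touching = np.isin(labels, np.unique(border[border != 0]))
    ys, xs = np.mgrid[0:edge_image.shape[0], 0:edge_image.shape[1]]
    pattern = (xs % 2 == 0) & (ys % 2 == 1)  # erasure-target coordinates
    erased = edge_image.copy()
    erased[touching & pattern] = 0           # partial (checkerboard-like) erase
    return erased
```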
Note that, in the second embodiment, partial erasing of an edge contacting the outermost edge is performed before detecting a candidate for an edge. However, after a candidate is detected, erasure may be performed in consideration of the detection result thereof.
In some cases, all or some of the four boundary lines on the upper, lower, left, and right sides of the desired photographic image are connected to each other in the edge image. When some of the interconnected boundary lines contact an edge extending from the outermost edge, all of those boundary lines are partially erased. However, by taking the result of candidate detection into consideration, the partial erasing can be limited to the boundary lines that actually contact an edge extending from the outermost edge. By limiting the boundary lines subject to erasure in this way, a reduction in the number of votes for the other boundary lines can be avoided. This makes it possible to detect the edges of the desired photographic image more reliably and accurately.
The image processing apparatus according to the above-described embodiments is applied to the image processing device 203 mounted on the portable photographing device 1. Since the photographing device 1 has such an image processing device 203 mounted on it, the photograph 3 can be digitized appropriately, quickly, and with fewer devices. However, the image processing apparatus may also be implemented on a data processing device, such as a personal computer, different from the photographing device. To implement such an image processing apparatus, all or some of the programs for image processing stored in the code storage device 205 may be stored on a storage medium for distribution. Alternatively, the distribution may be performed through a communication medium constituting a network. To realize photographing devices including the image processing apparatus according to the present embodiments, the programs may be distributed together with the program executed by the CPU 204.
Although the embodiment is described for an example in which the photo film 2 includes the photograph 3 and the photographic image taken by the digital camera includes the photograph 3 as shown in fig. 3A, the present invention is not limited to the above-described embodiment. The present invention can be applied to an example in which the image acquired by the digital camera includes a desired photograph 3 and an outer area superimposed around the desired photograph 3.

Claims (15)

1. An image processing apparatus comprising:
an image acquisition unit (21) that acquires an image of an object including a photograph;
an edge detection unit (203) that detects an edge of the photograph in the image of the subject acquired by the image acquisition unit; and
an image processing unit (203) that performs color fading correction on an image area surrounded by the detected edge.
2. The image processing apparatus according to claim 1, wherein said edge detection unit (203) performs at least one of a rotation operation and a transformation operation on an image area surrounded by the detected edge.
3. The image processing apparatus according to claim 1, wherein the image processing unit detects a degree of color fading in an image area surrounded by the detected edge, displays the detected degree of color fading, and performs color fading correction based on the detected degree.
4. The image processing apparatus according to claim 3, wherein the image processing unit includes a user operation section that changes the displayed degree and performs color fading correction based on the changed degree.
5. The image processing apparatus according to claim 3, wherein said image processing apparatus is equipped in a digital camera having a display unit (301), and said image processing unit (203) displays an image of a fading correction area on said display unit.
6. An image processing method comprising:
an acquisition step (S14) for acquiring an image of an object including a photograph;
a detection step (S15) for detecting an edge of the photograph in the acquired image of the object; and
a color fading correction execution step (S26) for executing color fading correction on an image area surrounded by the detected edge.
7. The image processing method according to claim 6, wherein said detecting step (S15) comprises the step (S22) of performing at least one of a rotation operation and a transformation operation on an image area enclosed by the detected edge.
8. The image processing method according to claim 6, wherein said fading correction performing step (S26) comprises: the method includes the steps of detecting a degree of fading in an image area surrounded by the detected edge (S23), displaying the detected degree of fading (S24), and performing fading correction based on the detected degree.
9. The image processing method according to claim 8, wherein the fading correction performing step (S26) includes: a step (S25) of causing the user to change the displayed degree, and a step of performing fading correction based on the changed degree.
10. The image processing method according to claim 6, wherein said fading correction performing step (S26) comprises: and a step (S94) of displaying an image of the fading correction area.
11. A computer program product stored in a computer usable medium for storing program instructions for execution on a computer system to enable the computer system to perform the steps of:
acquiring an image of an object including a photograph;
detecting an edge of the photograph in the acquired image of the object; and
the fading correction is performed on the image area surrounded by the detected edge.
12. The computer program product of claim 11, wherein the detecting comprises performing at least one of a rotation operation and a transformation operation on an image region enclosed by the detected edge.
13. The computer program product of claim 11, wherein performing fading correction comprises detecting a level of fading in an image area enclosed by the detected edge, displaying the detected level of fading, and performing fading correction based on the detected level.
14. The computer program product of claim 13, wherein performing a fade correction comprises causing a user to change a degree of display, and performing a fade correction based on the changed degree.
15. The computer program product of claim 11, wherein performing fade correction comprises displaying an image of a fade corrected region.