
WO2006033234A1 - Image processing method, image processing device, imaging device, and image processing program - Google Patents


Info

Publication number
WO2006033234A1
WO2006033234A1
Authority
WO
WIPO (PCT)
Prior art keywords
brightness
value
image data
calculated
hue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2005/016382
Other languages
English (en)
Japanese (ja)
Inventor
Hiroaki Takano
Tsukasa Ito
Takeshi Nakajima
Kenji Kuwae
Takeshi Saito
Misae Tasaki
Daisuke Sato
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Konica Minolta Photo Imaging Inc
Original Assignee
Konica Minolta Photo Imaging Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta Photo Imaging Inc
Publication of WO2006033234A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40 Picture signal circuits
    • H04N1/407 Control or modification of tonal gradation or of extreme levels, e.g. background level
    • H04N1/4072 Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H04N1/56 Processing of colour picture signals
    • H04N1/60 Colour correction or control
    • H04N1/62 Retouching, i.e. modification of isolated colours only or in isolated picture areas only
    • H04N1/628 Memory colours, e.g. skin or sky

Definitions

  • Image processing method, image processing apparatus, imaging apparatus, and image processing program
  • The present invention relates to an image processing method, an image processing apparatus, an imaging apparatus, and an image processing program.
  • Patent Document 1 discloses a method for calculating an additional correction value in place of the discriminant and regression analysis methods.
  • The method described in Patent Document 1 deletes the high-luminance and low-luminance regions from a luminance histogram indicating the cumulative number of pixels (frequency) at each luminance, and further limits the luminance frequency.
  • An average luminance value is then calculated, and the difference between that average value and a reference luminance is taken as the correction value, as sketched below.
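  • The Patent Document 1 procedure just described reduces to a few histogram operations. The following is a minimal Python sketch, assuming 8-bit luminance; the cut thresholds, frequency-limit ratio, and reference luminance are placeholder assumptions, not values from the patent.

```python
import numpy as np

def histogram_correction_value(luminance, low_cut=25, high_cut=230,
                               freq_limit_ratio=0.05, reference=128):
    """Sketch of the Patent Document 1 approach: delete the low- and
    high-luminance regions of the histogram, limit the luminance
    frequency, average what remains, and return the difference from a
    reference luminance as the correction value. All thresholds here
    are assumptions for illustration."""
    hist, _ = np.histogram(luminance, bins=256, range=(0, 256))
    hist[:low_cut] = 0                      # delete the low-luminance region
    hist[high_cut:] = 0                     # delete the high-luminance region
    limit = int(hist.sum() * freq_limit_ratio)
    hist = np.minimum(hist, limit)          # limit the luminance frequency
    levels = np.arange(256)
    average = (hist * levels).sum() / max(hist.sum(), 1)
    return reference - average              # correction value
```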
  • Patent Document 2 describes a method of determining the light source state at the time of photographing in order to compensate for the extraction accuracy of the face region.
  • In that method, a face candidate region is extracted, the average brightness of the extracted face candidate region is calculated relative to that of the entire image, and from the difference the light source state at the time of shooting (for example, backlight or close-up flash) is determined; the tolerance of the judgment criteria for the face region is adjusted accordingly.
  • In Patent Document 2, as methods for extracting a face candidate region, the methods using a two-dimensional histogram of hue and saturation described in JP-A-6-67320, JP-A-8-122944, and JP-A-8-184925, and the pattern matching and pattern search methods described in JP-A-9-138471, are cited.
  • Patent Document 2 also cites, as methods for removing background regions other than the face, the methods described in JP-A-8-122944 and JP-A-8-184925, which discriminate the background using the ratio of straight-line portions, line symmetry, the contact ratio with the outer edge of the screen, the density contrast, and the pattern and periodicity of density changes. To determine the shooting conditions, a method using a one-dimensional density histogram is described. This method is based on the empirical rule that in backlight the face region is dark and the background region is bright, whereas in close-up flash photography the face region is bright and the background region is dark, as sketched below.
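  • That empirical rule amounts to comparing two averages. The toy sketch below assumes a boolean face-region mask and a tolerance margin; both the mask and the margin value are hypothetical stand-ins, since the cited documents define their own extraction methods and criteria.

```python
import numpy as np

def judge_light_source(luminance, face_mask, margin=30.0):
    """Empirical rule from the text: face dark and background bright
    suggests backlight; face bright and background dark suggests
    close-up flash photography."""
    face_mean = float(luminance[face_mask].mean())
    background_mean = float(luminance[~face_mask].mean())
    if face_mean + margin < background_mean:
        return "backlight"
    if background_mean + margin < face_mean:
        return "close-up flash"
    return "normal"
```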
  • Patent Document 1 JP 2002-247393 A
  • Patent Document 2 JP 2000-148980 A
  • The conventional technology described above determines whether the average luminance value of the skin color region (face region) can be adopted based on whether the scene corresponds to typical backlight or close-up strobe photography.
  • As a result, density discontinuities could arise at the boundary between typical and atypical shooting scenes, or the shooting scene could be misclassified, so the brightness of the skin color region of the captured image data was not always corrected appropriately.
  • An object of the present invention is to enable image processing that continuously and appropriately corrects the brightness of the skin color area of photographed image data.
  • The method includes an index calculating step of calculating an index representing the shooting condition of the captured image data, and a correction value calculation step of calculating, according to the calculated index, at least one of a correction value of the reproduction target value, a correction value of the brightness of the skin color area, and a correction value of the difference value obtained when the difference between the value indicating the brightness of the skin color area and the reproduction target value is calculated.
  • FIG. 1 is a perspective view showing an external configuration of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing an internal configuration of the image processing apparatus according to the present embodiment.
  • FIG. 3 is a block diagram showing a main part configuration of the image processing unit in FIG.
  • FIG. 5 is a flowchart showing a flow of processing executed in an image adjustment processing unit.
  • FIG. 8 is a diagram showing an example of a program for converting from the RGB to the HSV color system (a minimal sketch of such a conversion appears after this figure list).
  • FIG. 9 is a diagram showing the brightness (V)-hue (H) plane and regions r1 and r2 on the V-H plane.
  • FIG. 10 is a diagram showing the lightness (V)-hue (H) plane and regions r3 and r4 on the V-H plane.
  • FIG. 11 is a diagram showing a curve representing the first coefficient by which the first occupancy ratio is multiplied to calculate index 1.
  • FIG. 12 is a diagram showing a curve representing the second coefficient by which the first occupancy ratio is multiplied to calculate index 2.
  • FIG. 13 is a flowchart showing a second occupancy ratio calculation process for calculating a second occupancy ratio based on the composition of captured image data.
  • FIG. 14 is a diagram showing areas n1 to n4 determined according to the distance from the outer edge of the screen of captured image data.
  • FIG. 15 is a diagram showing, for each area (n1 to n4), a curve representing the third coefficient by which the second occupancy ratio is multiplied to calculate index 3.
  • FIG. 19 is a diagram showing a discrimination map for discriminating shooting conditions.
  • FIG. 20 is a diagram showing the relationship between the index for specifying shooting conditions, parameters A to C, and gradation adjustment methods A to C.
  • FIG. 21 is a diagram showing a gradation conversion curve corresponding to each gradation adjustment method.
  • FIG. 22 is a diagram showing a luminance frequency distribution (histogram) (a), a normalized histogram (b), and a block-divided histogram (c).
  • FIG. 23 is a diagram ((a) and (b)) explaining deletion of the low-luminance and high-luminance regions from the luminance histogram, and a diagram ((c) and (d)) explaining limitation of the luminance frequency.
  • FIG. 24 is a flowchart showing tone adjustment amount calculation processing according to the first embodiment.
  • FIG. 25 is a flowchart showing tone adjustment amount calculation processing according to the second embodiment.
  • FIG. 26 is a flowchart showing tone adjustment amount calculation processing according to the third embodiment.
  • FIG. 27 is a flowchart showing tone adjustment amount calculation processing according to the fourth embodiment.
  • FIG. 28 is a diagram showing a relationship between an index and a correction value ⁇ of a parameter (reproduction target value, skin color average brightness value, etc.) used in the gradation adjustment amount calculation processing.
  • FIG. 29 is a diagram showing a gradation conversion curve representing gradation processing conditions when the photographing condition is backlight or under.
  • FIG. 30 is a block diagram showing the configuration of a digital camera to which the imaging apparatus of the present invention is applied.
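  • As a companion to the FIG. 8 reference above, here is a minimal RGB-to-HSV sketch in Python. The output scaling (hue 0 to 359, saturation and brightness 0 to 255) is an assumption chosen to match the hue and brightness ranges cited later in this text; the patent's actual program is the one shown in FIG. 8.

```python
import colorsys

def rgb_to_hsv_bytes(r, g, b):
    """Convert 8-bit RGB to hue in degrees (0-359) and 8-bit
    saturation/brightness, matching the ranges used in this document."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return int(h * 360) % 360, int(s * 255), int(v * 255)

print(rgb_to_hsv_bytes(200, 150, 120))  # a skin-like tone: hue falls in 0-39
```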
  • The form according to Item 2 is the image processing method according to Item 1, wherein in the correction value calculation step, a correction value of the reproduction target value is calculated according to the calculated index, and in the gradation conversion step, gradation conversion processing is performed on the captured image data based on the calculated correction value of the reproduction target value.
  • The form described in Item 3 is the image processing method described in Item 1, wherein in the correction value calculation step, the correction value of the brightness of the skin color area is calculated according to the calculated index, and in the gradation conversion step, gradation conversion processing is performed on the captured image data based on the calculated brightness correction value.
  • In another form, in the correction value calculation step, a correction value of the reproduction target value and a correction value of the brightness of the skin color region are calculated according to the calculated index, and in the gradation conversion step, gradation conversion processing is performed on the captured image data based on the calculated correction value of the reproduction target value and the calculated correction value of the brightness of the skin color area.
  • In another form, in the correction value calculation step, a correction value of the difference value between the value indicating the brightness of the skin color area and the reproduction target value is calculated according to the calculated index, and gradation conversion processing is performed on the captured image data based on the calculated correction value.
  • The form according to Item 6 is the image processing method according to any one of Items 1, 2, and 4, wherein the minimum value and the maximum value of the correction value of the reproduction target value are set in advance according to the index representing the photographing condition.
  • The form according to Item 7 is the image processing method according to any one of Items 1, 3, and 4, wherein the minimum value and the maximum value of the brightness correction value of the skin color area are set in advance according to the index representing the photographing condition.
  • In these forms, the difference between the maximum value and the minimum value of the correction value is at least 35 as an 8-bit value.
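  • One plausible reading of the preset minimum and maximum is a clamped mapping from the index to the correction value. This sketch assumes a linear mapping and an illustrative index range; the document itself only requires that the difference between the maximum and the minimum be at least 35 as an 8-bit value.

```python
def correction_from_index(index, index_min=0.0, index_max=6.0,
                          corr_min=0.0, corr_max=35.0):
    """Map an index representing the photographing condition to a
    correction value between preset minimum and maximum values.
    The linearity and the index range are assumptions."""
    t = (index - index_min) / (index_max - index_min)
    t = min(max(t, 0.0), 1.0)   # clamp to the preset [min, max] span
    return corr_min + t * (corr_max - corr_min)
```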
  • The form according to Item 9 is the image processing method according to any one of Items 1 to 8, comprising a determination step of determining the photographing condition using a discrimination map divided in advance into regions according to the calculated index and the certainty of the photographing condition; the correction value is calculated based on the determination result in the determination step.
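  • A discrimination map of this kind can be pictured as a partition of the index plane. The region boundaries below are hypothetical; the patent's actual map is the one shown in FIG. 19.

```python
def discriminate_condition(light_source_index, exposure_index):
    """Toy discrimination map: classify the shooting condition from a
    light source index and an exposure index. Boundary values are
    illustrative only."""
    if light_source_index > 0.5:
        return "backlight" if exposure_index < 0.0 else "strobe (close-up flash)"
    return "under" if exposure_index < 0.0 else "normal"
```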
  • The form according to Item 10 is the image processing method according to any one of Items 1 to 9, including an occupancy ratio calculation step of dividing the captured image data into regions each being a combination of predetermined brightness and hue, or into predetermined regions each being a combination of the distance from the outer edge of the screen and brightness, and calculating, for each divided region, an occupancy ratio indicating the proportion it occupies in the entire captured image data. In the index calculation step, an index representing the light source condition is calculated as the index representing the shooting condition by multiplying the occupancy ratio of each region calculated in the occupancy ratio calculation step by a coefficient set in advance according to the shooting condition.
  • In the occupancy ratio calculation step, the captured image data is divided into regions that are combinations of predetermined brightness and hue, and for each divided region, an occupancy ratio indicating its proportion of the entire captured image data is calculated.
  • Alternatively, the captured image data is divided into predetermined regions that are combinations of the distance from the outer edge of the screen of the captured image data and brightness, and for each divided region, an occupancy ratio indicating its proportion of the entire captured image data is calculated.
  • In another form, the captured image data is divided into regions that are combinations of predetermined brightness and hue, and for each divided region, a first occupancy ratio indicating its proportion of the entire captured image data is calculated; the captured image data is also divided into predetermined regions that are combinations of the distance from the outer edge of the captured image data and brightness, and for each divided region, a second occupancy ratio indicating its proportion of the entire captured image data is calculated.
  • In the index calculation step, an index representing the light source condition is calculated as the index representing the imaging condition by multiplying the first occupancy ratio and the second occupancy ratio calculated in the occupancy ratio calculation step by coefficients set in advance according to the imaging condition, as sketched below.
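  • In other words, each index is a weighted sum of occupancy ratios. A minimal sketch, assuming the coefficient tables are given as flat arrays (the patent presets them per shooting condition; see the curves of FIGS. 11, 12, and 15 for their shapes):

```python
import numpy as np

def light_source_index(first_occupancy, first_coeffs,
                       second_occupancy=None, second_coeffs=None):
    """Weighted sum of occupancy ratios: one coefficient per
    (hue, brightness) region, optionally plus one per
    (edge distance, brightness) region. Coefficient values are
    not reproduced here."""
    index = float(np.dot(first_occupancy, first_coeffs))
    if second_occupancy is not None:
        index += float(np.dot(second_occupancy, second_coeffs))
    return index
```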
  • Another form includes a deviation amount calculating step of calculating a deviation amount indicating a deviation in the gradation distribution of the photographed image data.
  • In the index calculation step, an index representing the exposure condition is calculated as the index representing the photographing condition by multiplying the deviation amount calculated in the deviation amount calculating step by a coefficient set in advance according to the photographing condition.
  • The deviation amount includes at least one of a deviation amount of the brightness of the photographed image data, an average brightness value at the center of the screen of the photographed image data, and a brightness difference value calculated under different conditions.
  • the brightness deviation amount of the photographed image data is, for example, standard deviation, variance, etc. of the brightness of the photographed image data.
  • the average brightness value of the captured image data at the center of the screen is, for example, the average brightness value at the center of the screen, the average brightness value of the skin color area at the center of the screen, or the like.
  • the brightness difference value calculated under different conditions is, for example, a difference value between the maximum brightness value and the minimum brightness value of the captured image data.
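  • Combining these deviation amounts with preset coefficients gives the exposure index. The sketch below assumes a 2D luminance array and placeholder coefficients; taking the middle half of the screen as the center region is also an assumption.

```python
import numpy as np

def exposure_index(luminance, coeffs=(1.0, 1.0, 1.0), bias=0.0):
    """Weighted sum of the deviation amounts listed in the text:
    brightness deviation (standard deviation), average brightness at
    the screen center, and a brightness difference value (max - min)."""
    h, w = luminance.shape
    center = luminance[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    deviations = (float(luminance.std()),
                  float(center.mean()),
                  float(luminance.max()) - float(luminance.min()))
    return float(np.dot(deviations, coeffs)) + bias
```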
  • Item 16 is the image processing method according to any one of Items 10, 12, 14, and 15, including a step of creating a two-dimensional histogram by calculating the cumulative number of pixels for each combination of distance from the outer edge of the screen of the captured image data and brightness; the occupancy ratio is calculated based on the created two-dimensional histogram.
  • The form according to Item 17 is the image processing method according to any one of Items 13 to 15, including a step of creating a two-dimensional histogram by calculating the cumulative number of pixels for each combination of distance from the outer edge of the screen of the captured image data and brightness; the second occupancy ratio is calculated based on the created two-dimensional histogram.
  • In another form, the occupancy ratio is calculated based on a two-dimensional histogram created by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data.
  • The form according to Item 19 includes a step of creating a two-dimensional histogram by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data; the first occupancy ratio is calculated based on the created two-dimensional histogram, as sketched below.
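  • Both kinds of histogram are straightforward to build. A minimal sketch with illustrative bin counts (the patent uses the specific hue/brightness regions and edge-distance areas n1 to n4 described elsewhere in this text):

```python
import numpy as np

def first_occupancy_ratios(hue, value, hue_bins=6, value_bins=7):
    """Two-dimensional histogram of cumulative pixel counts per
    (hue, brightness) bin, normalized by the total pixel count."""
    hist, _, _ = np.histogram2d(hue.ravel(), value.ravel(),
                                bins=(hue_bins, value_bins),
                                range=((0, 360), (0, 256)))
    return hist / hue.size

def edge_distance_labels(h, w, n=4):
    """Label each pixel with one of n bands by distance from the
    outer edge of the screen (cf. areas n1 to n4 in FIG. 14)."""
    y, x = np.mgrid[0:h, 0:w]
    d = np.minimum(np.minimum(x, w - 1 - x), np.minimum(y, h - 1 - y))
    return np.minimum(d * n // (min(h, w) // 2 + 1), n - 1)
```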
  • The form described in Item 21 uses, in the index calculation step, coefficients of different signs for the intermediate brightness region of the skin color hue region and for brightness regions other than the intermediate brightness region.
  • the lightness region of the hue region other than the high-brightness skin-colored hue region is a predetermined high-lightness region.
  • the brightness area other than the intermediate brightness area is a brightness area in the flesh-color hue area.
  • The high-brightness skin color hue region includes the range of 170 to 224 in terms of the brightness value of the HSV color system.
  • The intermediate brightness region includes the range of 85 to 169 in terms of the brightness value of the HSV color system.
  • The form described in Item 26 is the image processing method according to any one of Items 20, 22, and 24, wherein the hue regions other than the high-brightness skin color hue region include at least one of a blue hue region and a green hue region.
  • the lightness region other than the intermediate lightness region is a shadow region.
  • the hue value of the blue hue region is in the range of 161 to 250 in terms of the hue value of the HSV color system.
  • the hue value of the green hue region is in the range of 40 to 160 as the hue value of the HSV color system.
  • the brightness value of the shadow area is in the range of 26 to 84 as the brightness value of the HSV color system.
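  • The numeric ranges above can be collected into one classification helper. A sketch, assuming hue 0 to 359 and brightness 0 to 255; the wrap-around upper bound of the skin hue region (330 to 359) is inferred from a truncated passage later in this text.

```python
def classify_region(hue, value):
    """Classify a pixel by the HSV ranges cited in the text:
    skin hue 0-39 (and 330-359), green 40-160, blue 161-250;
    shadow brightness 26-84, intermediate 85-169, high 170-224."""
    if hue <= 39 or hue >= 330:
        hue_region = "skin"
    elif hue <= 160:
        hue_region = "green"
    elif hue <= 250:
        hue_region = "blue"
    else:
        hue_region = "other"
    if 26 <= value <= 84:
        brightness_band = "shadow"
    elif 85 <= value <= 169:
        brightness_band = "intermediate"
    elif 170 <= value <= 224:
        brightness_band = "high"
    else:
        brightness_band = "other"
    return hue_region, brightness_band
```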
  • The forms according to Items 32 and 33 are the image processing method according to any one of Items 20 to 31, wherein the skin color hue region is divided into two regions according to a predetermined conditional expression based on lightness and saturation.
  • The form described in Item 34 is an image processing apparatus that calculates a value indicating the brightness of the skin color area of photographed image data and corrects the calculated value to a predetermined reproduction target value.
  • The apparatus includes an index calculation unit that calculates an index representing the shooting condition of the captured image data; a correction value calculation unit that calculates, according to the calculated index, at least one of a correction value of the reproduction target value, a correction value of the brightness of the skin color region, and a correction value of the difference value obtained when the difference between the value indicating the brightness of the skin color area and the reproduction target value is calculated;
  • and a gradation conversion unit that performs gradation conversion processing on the captured image data based on any of the calculated correction values.
  • the correction value calculation unit calculates a correction value of the reproduction target value according to the calculated index
  • the gradation conversion unit performs gradation conversion processing on the captured image data based on the calculated correction value of the reproduction target value.
  • the correction value calculation unit calculates a correction value of the brightness of the skin color area according to the calculated index
  • the gradation conversion unit performs gradation conversion processing on the captured image data based on the calculated brightness correction value.
  • the correction value calculation unit calculates a correction value of the reproduction target value according to the calculated index, calculates a correction value of the brightness of the skin color region,
  • the gradation conversion unit performs gradation conversion processing on the captured image data based on the calculated correction value of the reproduction target value and the correction value of the brightness of the skin color area.
  • the form described in Item 38 is the image processing device described in Item 34,
  • the correction value calculation unit calculates a correction value of the difference value when calculating a difference value between the value indicating the brightness of the skin color area and the reproduction target value according to the calculated index.
  • the gradation conversion unit performs gradation conversion processing on the captured image data based on the calculated correction value.
  • The minimum value and the maximum value of the correction value of the reproduction target value are set in advance according to the index representing the imaging condition.
  • The difference between the maximum value and the minimum value of the correction value is at least 35 as an 8-bit value.
  • the correction value calculation unit calculates the correction value based on the determination result by the determination unit.
  • The apparatus includes an occupancy ratio calculation unit that divides the captured image data into regions each being a combination of predetermined brightness and hue, or into predetermined regions each being a combination of the distance from the outer edge of the screen and brightness, and calculates, for each divided region, an occupancy ratio indicating the proportion it occupies in the entire captured image data.
  • The index calculation unit calculates an index representing the light source condition as the index representing the photographic condition by multiplying the occupancy ratio of each region calculated by the occupancy ratio calculation unit by a coefficient set in advance according to the photographic condition.
  • The occupancy ratio calculation unit divides the captured image data into regions that are combinations of predetermined brightness and hue, and calculates, for each divided region, an occupancy ratio indicating its proportion of the entire captured image data.
  • The occupancy ratio calculation unit divides the captured image data into predetermined regions that are combinations of the distance from the outer edge of the screen of the captured image data and brightness, and calculates, for each divided region, an occupancy ratio indicating its proportion of the entire captured image data.
  • The occupancy ratio calculation unit may divide the captured image data into regions that are combinations of predetermined brightness and hue and calculate, for each divided region, a first occupancy ratio indicating its proportion of the entire captured image data, and may also divide the captured image data into predetermined regions that are combinations of the distance from the outer edge of the captured image data and brightness and calculate, for each divided region, a second occupancy ratio indicating its proportion of the entire captured image data; the index calculation unit then calculates an index representing the light source condition as the index representing the imaging condition by multiplying the first occupancy ratio and the second occupancy ratio calculated by the occupancy ratio calculation unit by coefficients set in advance according to the imaging condition.
  • A deviation amount calculation unit that calculates a deviation amount indicating a deviation in the gradation distribution of the photographed image data is provided.
  • The index calculation unit calculates an index representing the exposure condition as the index representing the shooting condition by multiplying the deviation amount calculated by the deviation amount calculation unit by a coefficient set in advance according to the shooting condition.
  • The deviation amount includes at least one of a deviation amount of the brightness of the photographed image data, an average brightness value at the center of the screen of the photographed image data, and a brightness difference value calculated under different conditions.
  • The apparatus has a unit that creates a two-dimensional histogram by calculating the cumulative number of pixels for each combination of distance from the outer edge of the screen of the captured image data and brightness; the occupancy ratio calculation unit calculates the occupancy ratio based on the created two-dimensional histogram.
  • A unit creates a two-dimensional histogram by calculating the cumulative number of pixels for each combination of distance from the outer edge of the screen of the captured image data and brightness; the occupancy ratio calculation unit calculates the second occupancy ratio based on the created two-dimensional histogram.
  • A unit creates a two-dimensional histogram by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data; the occupancy ratio calculation unit calculates the occupancy ratio based on the created two-dimensional histogram.
  • A unit creates a two-dimensional histogram by calculating the cumulative number of pixels for each predetermined hue and lightness of the captured image data; the occupancy ratio calculation unit calculates the first occupancy ratio based on the created two-dimensional histogram.
  • The form described in Item 53 uses, in the index calculation unit, coefficients of different signs for the skin color hue region having a predetermined high brightness and for hue regions other than the high-brightness skin color hue region.
  • The index calculation unit uses coefficients of different signs for the intermediate brightness region of the skin color hue region and for brightness regions other than the intermediate brightness region.
  • the brightness region of the hue region other than the high-brightness skin color hue region is a predetermined high brightness region.
  • the lightness region other than the intermediate lightness region is a lightness region in the flesh-color hue region.
  • The high-brightness skin color hue region includes the range of 170 to 224 as the brightness value of the HSV color system.
  • the intermediate brightness region includes a region in the range of 85 to 169 in terms of the brightness value of the HSV color system.
  • In the form according to Item 59, the hue regions other than the high-brightness skin color hue region include at least one of a blue hue region and a green hue region.
  • the lightness region other than the intermediate lightness region is a shadow region.
  • the hue value of the blue hue region is in the range of 161 to 250 in terms of the hue value of the HSV color system, and the green color The hue value in the hue area is in the range of 40 to 160 in the HSV color system.
  • the brightness value of the shadow area is in the range of 26 to 84 as the brightness value of the HSV color system.
  • The skin color hue region is divided into two regions according to a predetermined conditional expression based on lightness and saturation.
  • The form described in Item 67 is an imaging device that includes an imaging unit for obtaining captured image data by photographing a subject, calculates a value indicating the brightness of the skin color area of the captured image data, and corrects the calculated value to a predetermined reproduction target value.
  • The device includes an index calculation unit that calculates an index representing the shooting condition of the captured image data; a correction value calculation unit that calculates, according to the calculated index, at least one of a correction value of the reproduction target value, a correction value of the brightness of the skin color area, and a correction value of the difference value obtained when the difference between the value indicating the brightness of the skin color area and the reproduction target value is calculated;
  • and a gradation conversion unit that performs gradation conversion processing on the captured image data based on the calculated correction value.
  • The form according to Item 68 is the imaging device according to Item 67,
  • the correction value calculation unit calculates a correction value of the reproduction target value according to the calculated index
  • the gradation conversion unit performs gradation conversion processing on the captured image data based on the calculated correction value of the reproduction target value.
  • the correction value calculation unit calculates a correction value of the brightness of the skin color area according to the calculated index
  • the gradation conversion unit performs gradation conversion processing on the captured image data based on the calculated brightness correction value.
  • The form according to Item 70 is the imaging device according to Item 67,
  • the correction value calculation unit calculates a correction value of the reproduction target value according to the calculated index, calculates a correction value of the brightness of the skin color region,
  • the gradation conversion unit performs gradation conversion processing on the captured image data based on the calculated correction value of the reproduction target value and the correction value of the brightness of the skin color area.
  • the correction value calculation unit calculates a correction value of the difference value when calculating a difference value between the value indicating the brightness of the skin color area and the reproduction target value according to the calculated index.
  • the gradation conversion unit performs gradation conversion processing on the captured image data based on the calculated correction value.
  • In the form according to Item 72, in the imaging device according to any one of Items 67, 68, and 70, the minimum value and the maximum value of the correction value of the reproduction target value are set in advance according to the index representing the imaging condition.
  • The minimum value and the maximum value of the brightness correction value of the skin color region are set in advance according to the index representing the shooting condition.
  • The difference between the maximum value and the minimum value of the correction value is at least 35 as an 8-bit value.
  • the correction value calculation unit calculates the correction value based on a determination result by the determination unit.
  • The device includes an occupancy ratio calculation unit that divides the captured image data into regions each being a combination of predetermined brightness and hue, or into predetermined regions each being a combination of the distance from the outer edge of the screen and brightness, and calculates, for each divided region, an occupancy ratio indicating its proportion of the entire captured image data.
  • The index calculation unit calculates an index representing the light source condition as the index representing the shooting condition by multiplying the occupancy ratio of each region calculated by the occupancy ratio calculation unit by a coefficient set in advance according to the shooting condition.
  • The occupancy ratio calculation unit divides the captured image data into regions that are combinations of predetermined brightness and hue, and calculates, for each divided region, an occupancy ratio indicating its proportion of the entire captured image data.
  • The occupancy ratio calculation unit divides the captured image data into predetermined regions based on combinations of the distance from the outer edge of the screen of the captured image data and brightness, and calculates, for each divided region, an occupancy ratio indicating its proportion of the entire captured image data.
  • The occupancy ratio calculation unit may divide the captured image data into regions that are combinations of predetermined brightness and hue and calculate, for each divided region, a first occupancy ratio indicating its proportion of the entire captured image data, and may also divide the captured image data into predetermined regions defined by combinations of the distance from the outer edge of the screen of the captured image data and brightness and calculate, for each divided region, a second occupancy ratio indicating its proportion of the entire captured image data;
  • the index calculation unit then calculates an index representing the light source condition as the index representing the shooting condition by multiplying the first occupancy ratio and the second occupancy ratio calculated by the occupancy ratio calculation unit by coefficients set in advance according to the shooting condition.
  • A deviation amount calculation unit calculates a deviation amount indicating a deviation in the gradation distribution of the photographed image data.
  • The index calculation unit calculates an index representing the exposure condition as the index representing the photographing condition by multiplying the deviation amount calculated by the deviation amount calculation unit by a coefficient set in advance according to the photographing condition.
  • The deviation amount includes at least one of a deviation amount of the brightness of the captured image data, an average brightness value at the center of the screen of the captured image data, and a brightness difference value calculated under different conditions.
  • the occupancy ratio calculation unit calculates the occupancy ratio based on the created two-dimensional histogram.
  • the cumulative number of pixels is calculated for each distance and brightness from the outer edge of the screen of the captured image data.
  • the occupation ratio calculation unit calculates the second occupation ratio based on the created two-dimensional histogram.
  • the occupancy ratio calculation unit calculates the occupancy ratio based on the created two-dimensional histogram.
  • A unit creates a two-dimensional histogram by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data;
  • the occupancy ratio calculation unit calculates the first occupancy ratio based on the created two-dimensional histogram.
  • The index calculation unit uses coefficients of different signs for the skin color hue region having a predetermined high brightness and for hue regions other than the high-brightness skin color hue region.
  • The form described in Item 87 uses, in the index calculation unit, coefficients of different signs for the intermediate brightness region of the skin color hue region and for brightness regions other than the intermediate brightness region.
  • the brightness region of the hue region other than the high-brightness skin color hue region is a predetermined high brightness region.
  • the brightness region other than the intermediate brightness region is a brightness region in the flesh-color hue region.
  • The high-brightness skin color hue region includes the range of 170 to 224 in terms of the brightness value of the HSV color system.
  • the intermediate brightness region includes a region in the range of 85 to 169 in terms of the brightness value of the HSV color system.
  • In the form according to Item 92, the hue regions other than the high-brightness skin color hue region include at least one of a blue hue region and a green hue region.
  • the lightness region other than the intermediate lightness region is a shadow region.
  • the hue value of the blue hue region is in a range of 161 to 250 as a hue value of the HSV color system, and the green hue region The hue value of is in the range of 40 to 160 in the HSV color system.
  • the brightness value of the shadow area is in the range of 26 to 84 in terms of the brightness value of the HSV color system.
  • The forms described in Items 96 and 97 are configured such that the hue value of the flesh color hue region is in the ranges of 0 to 39 and 330 to 359 as the hue value of the HSV color system.
  • In the imaging device according to any one of Items 86 to 97, the skin color hue region is divided into two regions according to a predetermined conditional expression based on lightness and saturation.
  • The form according to Item 100 is an image processing program for causing a computer that executes image processing to realize: an index calculating step of calculating an index indicating the shooting condition of the captured image data; a brightness calculating step of calculating a value indicating the brightness of the skin color region of the captured image data; and, when correcting the calculated brightness value to a predetermined reproduction target value, a correction value calculation step of calculating, according to the calculated index, at least one of a correction value of the reproduction target value, a correction value of the brightness of the skin color area, and a correction value of the difference value between the value indicating the brightness of the skin color area and the reproduction target value.
  • In the correction value calculation step, when correcting the calculated brightness value to a predetermined reproduction target value, a correction value of the reproduction target value is calculated according to the calculated index, and in the gradation conversion step, gradation conversion processing is performed on the captured image data based on the calculated correction value of the reproduction target value.
  • Gradation conversion processing is applied to the photographed image data based on the calculated brightness correction value.
  • In the correction value calculation step, when correcting the calculated brightness value to a predetermined reproduction target value, a correction value of the reproduction target value and a correction value of the brightness of the skin color area are calculated according to the calculated index, and gradation conversion processing is performed on the photographed image data based on the calculated correction value of the reproduction target value and the correction value of the brightness of the skin color area.
  • In the correction value calculating step, when correcting the calculated brightness value to a predetermined reproduction target value, a correction value of the difference value between the value indicating the brightness of the skin color region and the reproduction target value is calculated according to the calculated index, and gradation conversion processing is performed on the captured image data based on the calculated correction value.
  • The minimum value and the maximum value of the correction value of the reproduction target value are set in advance according to the index representing the imaging condition.
  • In the form described in Item 106, the minimum value and the maximum value of the brightness correction value of the skin color area are set in advance according to the index representing the shooting condition.
  • The minimum value and the maximum value of the correction value of the difference value between the value indicating the brightness of the skin color region and the reproduction target value are set in advance according to the index indicating the photographing condition.
  • The difference between the maximum value and the minimum value of the correction values is at least 35 as an 8-bit value.
  • An occupancy ratio calculation step divides the captured image data into regions each being a combination of predetermined brightness and hue, or into predetermined regions each being a combination of the distance from the outer edge of the screen and brightness, and calculates, for each divided region, an occupancy ratio indicating its proportion of the entire captured image data.
  • An index representing the light source condition is calculated as the index representing the shooting condition by multiplying the occupancy ratio of each region calculated in the occupancy ratio calculation step by a coefficient set in advance according to the shooting condition.
  • In the occupancy ratio calculation step, the captured image data is divided into regions that are combinations of predetermined brightness and hue, and for each divided region, an occupancy ratio indicating its proportion of the entire captured image data is calculated.
  • The captured image data is divided into predetermined regions based on combinations of the distance from the outer edge of the screen of the captured image data and brightness, and for each divided region, an occupancy ratio indicating its proportion of the entire captured image data is calculated.
  • The captured image data may be divided into regions that are combinations of predetermined lightness and hue, with a first occupancy ratio indicating its proportion of the entire captured image data calculated for each divided region, and also divided into predetermined regions based on combinations of the distance from the outer edge of the captured image data and brightness, with a second occupancy ratio indicating its proportion of the entire captured image data calculated for each divided region.
  • An index representing the light source condition is calculated as the index representing the photographing condition by multiplying the first occupancy ratio and the second occupancy ratio calculated in the occupancy ratio calculation step by coefficients set in advance according to the photographing condition.
  • An index representing the exposure condition is calculated as the index representing the shooting condition by multiplying the deviation amount calculated in the deviation amount calculating step by a coefficient set in advance according to the shooting condition.
  • The deviation amount includes at least one of a deviation amount of the brightness of the photographed image data, an average brightness value at the center of the screen of the photographed image data, and a brightness difference value calculated under different conditions.
  • the occupancy rate is calculated based on the created two-dimensional histogram.
  • the second occupation rate is calculated based on the created two-dimensional histogram.
  • the cumulative number of pixels is calculated for each predetermined hue and brightness of the captured image data.
  • the occupancy rate is calculated based on the created two-dimensional histogram.
  • the first occupation rate is calculated based on the created two-dimensional histogram.
  • When the index calculation step is realized, coefficients of different signs are used for the intermediate brightness region of the skin color hue region and for brightness regions other than the intermediate brightness region.
  • the brightness area of the hue area other than the high brightness skin color hue area is a predetermined high brightness area.
  • the lightness region other than the intermediate lightness region is a lightness region in the flesh-color hue region.
  • The high-brightness flesh color hue region includes the range of 170 to 224 in terms of the brightness value of the HSV color system.
  • the intermediate brightness region includes a region in the range of 85 to 169 in terms of the brightness value of the HSV color system.
  • In the form described in Item 125, the hue regions other than the high-brightness skin color hue region include at least one of a blue hue region and a green hue region.
  • the lightness region other than the intermediate lightness region is a shadow region.
  • the hue value of the blue hue region is in the range of 161 to 250 in terms of the hue value of the HSV color system, and the green color The hue value in the hue region is in the range of 40 to 160 in the HSV color system.
  • the brightness value of the shadow area is in the range of 26 to 84 as the brightness value of the HSV color system.
  • The forms described in Items 129 and 130 concern the hue value of the HSV color system in the image processing program described in any one of Items 119 to 128.
  • The flesh color hue region is divided into two regions according to a predetermined conditional expression based on lightness and saturation.
  • FIG. 1 is a perspective view showing an external configuration of an image processing apparatus 1 according to an embodiment of the present invention.
  • the image processing apparatus 1 is provided with a magazine loading section 3 for loading a photosensitive material on one side surface of a housing 2.
  • Inside the housing 2, an exposure processing unit 4 for exposing the photosensitive material and a print creating unit 5 for developing and drying the exposed photosensitive material to create a print are provided.
  • A tray 6 for discharging the prints produced by the print creating unit 5 is also provided.
  • a CRT (Cathode Ray Tube) 8 as a display device, a film scanner unit 9 which is a device for reading a transparent document, a reflective document input device 10, and an operation unit 11 are provided at the upper part of the housing 2.
  • The CRT 8 constitutes display means for displaying on the screen the image of the image information from which a print is to be created.
  • the housing 2 is provided with an image reading unit 14 that can read image information recorded on various digital recording media, and an image writing unit 15 that can write (output) image signals to various digital recording media.
  • a control unit 7 that centrally controls each of these units is provided inside the housing 2.
  • the image reading unit 14 includes a PC card adapter 14a and a floppy (registered trademark) disk adapter 14b, and a PC card 13a and a floppy (registered trademark) disk 13b can be inserted therein.
  • the PC card 13a has a memory in which a plurality of frame image data captured by a digital camera is recorded.
  • a plurality of frame image data captured by a digital camera is recorded on the floppy (registered trademark) disk 13b.
  • Recording media that record frame image data in addition to the PC card 13a and floppy disk 13b include, for example, a multimedia card (registered trademark), a memory stick (registered trademark), MD data, and a CD-ROM. Etc.
  • the image writing unit 15 is provided with a floppy (registered trademark) disk adapter 15a, an MO adapter 15b, and an optical disk adapter 15c.
  • Examples of the optical disk 16c include CD-R, DVD-R, and the like.
  • Although the operation unit 11, the CRT 8, the film scanner unit 9, the reflective document input device 10, and the image reading unit 14 are provided integrally with the housing 2, any one or more of them may be provided as separate bodies.
  • In this embodiment, a print creation method is exemplified in which a photosensitive material is exposed and developed to create a print, but the print creation method is not limited to this; for example, an inkjet method, an electrophotographic method, a thermal method, or a sublimation method may be used.
  • FIG. 2 shows a main part configuration of the image processing apparatus 1.
  • The image processing apparatus 1 includes a control unit 7, an exposure processing unit 4, a print creation unit 5, a film scanner unit 9, a reflective original input device 10, an image reading unit 14, communication means (input) 32, an image writing unit 15, data storage means 71, template storage means 72, an operation unit 11, a CRT 8, and communication means (output) 33.
  • The control unit 7 is configured by a microcomputer, and controls the operation of each part constituting the image processing apparatus 1 through the cooperation of various control programs stored in a storage unit (not shown) such as a ROM (Read Only Memory) with a CPU (Central Processing Unit) (not shown).
  • The control unit 7 has an image processing unit 70 according to the image processing apparatus of the present invention. Based on an input signal (command information) from the operation unit 11, it applies image processing to the image signals read by the film scanner unit 9 and the reflective original input device 10, the image signal read from the image reading unit 14, and the image signal input from an external device via the communication means 32, forms image information for exposure, and outputs it to the exposure processing unit 4.
  • the image processing unit 70 performs a conversion process corresponding to the output form on the image signal subjected to the image processing, and outputs it.
  • Output destinations of the image processing unit 70 include the CRT 8, the image writing unit 15, the communication means (output) 33, and the like.
  • the exposure processing unit 4 performs image exposure on the photosensitive material and outputs the photosensitive material to the print creating unit 5.
  • the print creating unit 5 develops the exposed photosensitive material and dries it to create prints Pl, P2, and P3.
  • Print P1 is a service size, high-definition size, panorama size, etc.
  • print P2 is an A4 size print
  • print P3 is a business card size print.
  • the film scanner unit 9 reads a frame image recorded on a transparent original such as a developed negative film N or a reversal film imaged by an analog camera, and acquires a digital image signal of the frame image.
  • the reflective original input device 10 reads an image on the print P (photo print, document, various printed materials) by a flat bed scanner, and obtains a digital image signal.
  • the image reading unit 14 reads frame image information recorded on the PC card 13a or the floppy (registered trademark) disk 13b and transfers the frame image information to the control unit 7.
  • the image reading unit 14 includes, as image transfer means 30, a PC card adapter 14a, a floppy (registered trademark) disk adapter 14b, and the like.
  • the image reading unit 14 reads frame image information recorded on the PC card 13a inserted into the PC card adapter 14a or the floppy disk 13b inserted into the floppy disk adapter 14b. And transfer to the control unit 7.
  • a PC card reader or a PC card slot is used as the PC card adapter 14a.
  • the communication means (input) 32 receives an image signal representing a captured image and a print command signal from another computer in the facility where the image processing apparatus 1 is installed or a distant computer via the Internet or the like.
  • the image writing unit 15 includes a floppy (registered trademark) disk adapter 15a, an MO adapter 15b, and an optical disk adapter 15c as the image transport unit 31.
  • The image writing unit 15 writes the image signal generated by the image processing method of the present invention to the floppy (registered trademark) disk 16a inserted into the floppy disk adapter 15a, the MO 16b inserted into the MO adapter 15b, or the optical disk 16c inserted into the optical disk adapter 15c.
  • The data storage means 71 stores image information and the corresponding order information (information on how many prints are to be created from which image data, information on the print size, and so on) and accumulates them sequentially.
  • The template storage means 72 stores sample image data (background images, illustration images, and the like) corresponding to the sample identification information D1, D2, and D3, together with at least one item of template data for setting a synthesis region.
  • When a predetermined template is selected by the operator's operation from a plurality of templates stored in advance in the template storage means 72, the frame image information is synthesized with the selected template, and the sample image data selected based on the designated sample identification information D1, D2, and D3 is combined with the image data and/or character data based on the order to create a print based on the designated sample.
  • The synthesis using this template is performed by the well-known chroma key method.
  • The sample identification information D1, D2, and D3 for designating a print sample is configured to be input from the operation unit 11. Since this sample identification information is recorded on the print sample or the order sheet, it can also be read by reading means such as OCR, or input by an operator's keyboard operation.
  • Sample image data is recorded in correspondence with the sample identification information D1 that designates a print sample; when sample identification information D1 is input, sample image data is selected based on the input sample identification information D1, and the selected sample image data is combined with the image data and/or character data based on the order to create a print based on the designated sample. Users can thus order prints based on actual samples, which meets the diverse requirements of a wide range of users.
  • In addition, first sample identification information D2 designating a first sample and image data of the first sample are stored, and second sample identification information D3 designating a second sample and image data of the second sample are stored; the sample image data selected based on the designated first and second sample identification information D2 and D3 is combined with the image data and/or character data based on the order to create a print based on the designated samples, which makes it possible to create prints that meet the diverse requirements of an even wider range of users.
  • the operation unit 11 includes information input means 12.
  • the information input means 12 is composed of, for example, a touch panel and outputs a pressing signal from the information input means 12 to the control unit 7 as an input signal.
  • the operation unit 11 may be configured with a keyboard, a mouse, and the like.
  • the CRT 8 displays image information and the like according to the display control signal input from the control unit 7.
  • The communication means (output) 33 transmits an image signal representing a photographed image that has undergone the image processing of the present invention, together with the attached order information, to another computer in the facility where the image processing apparatus 1 is installed, or to a distant computer via the Internet or the like.
  • As described above, the image processing apparatus 1 includes image input means for capturing image information obtained by dividing and photometrically measuring images of various digital media and image originals, image processing means, image output means for displaying processed images, printing them out, and writing them to image recording media, and means for transmitting image data and the attached order information to a distant computer via a communication line.
  • FIG. 3 shows the internal configuration of the image processing unit 70.
  • The image processing unit 70 includes an image adjustment processing unit 701, a film scan data processing unit 702, a reflective original scan data processing unit 703, an image data format decoding processing unit 704, a template processing unit 705, a CRT specific processing unit 706, a printer specific processing unit A 707, a printer specific processing unit B 708, and an image data format creation processing unit 709.
  • The film scan data processing unit 702 performs, on the image data input from the film scanner unit 9, processes such as calibration operations specific to the film scanner unit 9, negative/positive reversal (in the case of a negative original), dust and scratch removal, contrast adjustment, granular noise removal, and sharpening enhancement, and outputs the processed image data to the image adjustment processing unit 701. It also outputs to the image adjustment processing unit 701 the film size, the negative/positive type, information on the main subject recorded optically or magnetically on the film, information on the shooting conditions (for example, the information content described in APS), and so on.
• The reflective document scan data processing unit 703 performs, on the image data input from the reflective document input device 10, processing such as calibration operations specific to the reflective document input device 10, negative/positive reversal (in the case of a negative original), dust and scratch removal, contrast adjustment, noise removal, and sharpening enhancement, and outputs the processed image data to the image adjustment processing unit 701.
• The image data format decoding processing unit 704 performs, as necessary, processing such as decompression of compression codes and conversion of the color data representation method on the image data input from the image transfer means 30 and/or the communication means (input) 32 according to the data format of that image data, converts it into a data format suitable for computation in the image processing unit 70, and outputs it to the image adjustment processing unit 701. In addition, when the size of the output image is specified from any of the operation unit 11, the communication means (input) 32, and the image transfer means 30, the image data format decoding processing unit 704 detects the specified information and outputs it to the image adjustment processing unit 701. Information about the size of the output image specified via the image transfer means 30 is embedded in the header information and tag information of the image data acquired by the image transfer means 30.
• Based on a command from the operation unit 11 or the control unit 7, the image adjustment processing unit 701 applies the image processing described later (see FIGS. 6, 7, 13, and 17) to the image data received from the film scanner unit 9, the reflective document input device 10, the image transfer means 30, the communication means (input) 32, and the template processing unit 705 so as to form an image optimized for viewing on the output medium, generates the digital image data, and outputs it to the CRT specific processing unit 706, the printer specific processing unit A707, the printer specific processing unit B708, the image data format creation processing unit 709, and the data storage unit 71.
• In the optimization processing, if output to a CRT display monitor is assumed, processing is performed to obtain the optimal color reproduction within the color gamut of the sRGB standard; if output to silver halide photographic paper is assumed, processing is performed to obtain the optimal color reproduction within the color gamut of silver halide photographic paper. The processing also includes gradation compression from 16 bits to 8 bits, reduction of the number of output pixels, and handling of the output characteristics (LUT) of the output device. It goes without saying that noise suppression, sharpening, gray balance adjustment, saturation adjustment, and gradation compression processing such as dodging and burning are also performed.
• The image adjustment processing unit 701 includes a scene discrimination unit 710 that determines the shooting condition of the captured image data and thereby determines the gradation processing conditions (gradation adjustment method and gradation adjustment amount), and a gradation conversion unit 711 that performs gradation conversion processing according to the determined gradation processing conditions.
  • the photographing conditions are classified into light source conditions and exposure conditions.
• The light source condition is derived from the light source at the time of shooting and the positional relationship between the main subject (mainly a person) and the photographer. In the broader sense, it also includes the type of light source (sunlight, strobe light, tungsten lighting, and fluorescent lamps).
  • Backlit scenes occur when the sun is located behind the main subject.
• A strobe (close-up) scene occurs when the main subject is strongly irradiated with strobe light. Both scenes have a similar contrast (light/dark ratio); the relationship between the brightness of the foreground and background of the main subject is merely reversed.
• The exposure conditions are derived from camera settings such as the shutter speed and aperture value: underexposure is classified as under, proper exposure as normal, and overexposure as over. In a broad sense, so-called "blown-out highlights" and "blocked-up shadows" are also included. Under or over exposure can occur under any light source condition. Especially in a DSC (digital still camera) with a narrow dynamic range, even when the automatic exposure adjustment function is used, the frequency of underexposure is high because the setting conditions aim at suppressing blown-out highlights.
  • FIG. 4 (a) shows the internal configuration of the scene discrimination unit 710.
  • the scene discriminating unit 710 includes a ratio calculating unit 712, a bias calculating unit 722, an index calculating unit 713, and a gradation processing condition calculating unit 714.
  • the ratio calculation unit 712 includes a color system conversion unit 715, a histogram creation unit 716, and an occupation rate calculation unit 717.
  • the color system conversion unit 715 converts the RGB (Red, Green, Blue) value of the captured image data into the HSV color system.
• The HSV color system represents image data with three elements, hue, saturation, and lightness (value or brightness), and was devised based on a previously proposed color system.
• In the present embodiment, "brightness" means the generally used "lightness" unless otherwise noted. Although V (0 to 255) of the HSV color system is used as "brightness" below, a unit system expressing the brightness of any other color system may be used instead; in that case, numerical values such as the various coefficients described in the present embodiment are recalculated.
  • the photographed image data in the present embodiment is assumed to be image data having a person as a main subject.
• The histogram creation unit 716 divides the captured image data into regions defined by predetermined combinations of hue and lightness, and creates a two-dimensional histogram by calculating the cumulative number of pixels for each divided region. In addition, the histogram creation unit 716 divides the captured image data into predetermined regions defined by combinations of lightness and distance from the outer edge of the screen of the captured image data, and creates a two-dimensional histogram by calculating the cumulative number of pixels for each divided region. Alternatively, the captured image data may be divided into regions defined by combinations of distance from the outer edge of the screen, lightness, and hue, and a three-dimensional histogram may be created by calculating the cumulative number of pixels for each divided region. In the following, the method of creating two-dimensional histograms is adopted.
• The occupancy rate calculation unit 717 calculates, for each area divided by the combination of lightness and hue, a first occupancy ratio indicating the ratio of the cumulative number of pixels calculated by the histogram creation unit 716 to the total number of pixels (the entire captured image data) (see Table 1). The occupancy rate calculation unit 717 also calculates, for each area divided by the combination of lightness and distance from the outer edge of the screen of the captured image data, a second occupancy ratio indicating the ratio of the cumulative number of pixels calculated by the histogram creation unit 716 to the total number of pixels (see Table 4).
  • the bias calculation unit 722 calculates a bias amount indicating the bias of the gradation distribution of the captured image data.
• The deviation amounts are the standard deviation of the luminance values of the captured image data, a luminance difference value, the skin color average luminance value at the center of the screen, the average luminance value at the center of the screen, and the skin color luminance distribution value.
• The processing for calculating these deviation amounts will be described in detail later with reference to FIG. 16.
• The index calculation unit 713 calculates index 1 for specifying the shooting conditions by multiplying the first occupancy ratio calculated for each area by the occupancy rate calculation unit 717 by a first coefficient set in advance (for example, by discriminant analysis) according to the shooting conditions (see Table 2) and taking the sum. Index 1 indicates characteristics of flash photography, such as indoor photography, close-up photography, and high brightness of the face color, and is used to separate images that should be identified as flash from other shooting conditions.
  • the index calculation unit 713 uses coefficients of different signs for a predetermined high-lightness skin color hue region and a hue region other than the high-lightness skin color hue region.
  • the skin color hue region of a predetermined high lightness includes a region of 170 to 224 in the lightness value of the HSV color system.
  • the hue area other than the predetermined high brightness skin color hue area includes at least one of the high brightness areas of the blue hue area (hue values 161 to 250) and the green hue area (hue values 40 to 160).
• The index calculation unit 713 calculates index 2 for specifying the shooting conditions by multiplying the first occupancy ratio calculated for each area by the occupancy rate calculation unit 717 by a second coefficient set in advance (for example, by discriminant analysis) according to the shooting conditions (see Table 3) and taking the sum.
• Index 2 is a composite indication of characteristics at the time of backlighting, such as outdoor shooting, high brightness of the sky-blue region, and low brightness of the face color, and is used to separate images that should be identified as backlight from other shooting conditions.
• When calculating index 2, the index calculation unit 713 uses coefficients of different signs for the intermediate lightness area of the flesh color hue region (hue values 0 to 39, 330 to 359) and for the lightness areas other than that intermediate lightness area. The intermediate lightness area of the flesh color hue region includes the areas with lightness values of 85 to 169, and the other lightness areas include, for example, the shadow area (lightness values 26 to 84).
• The index calculation unit 713 calculates index 3 for specifying the shooting conditions by multiplying the second occupancy ratio calculated for each area by the occupancy rate calculation unit 717 by a third coefficient set in advance (for example, by discriminant analysis) according to the shooting conditions (see Table 5) and taking the sum. Index 3 captures the contrast difference between the center and the periphery of the screen of the captured image data that distinguishes backlight from strobe, and quantitatively indicates images that should be identified as backlight or strobe. When calculating index 3, the index calculation unit 713 uses different coefficient values depending on the distance from the outer edge of the screen of the captured image data.
• The index calculation unit 713 calculates index 4 by multiplying index 1, index 3, and the average luminance value of the skin color area at the center of the screen of the captured image data by coefficients set in advance (for example, by discriminant analysis) according to the shooting conditions and taking the sum. Similarly, the index calculation unit 713 calculates index 5 by multiplying index 2, index 3, and the average luminance value of the skin color area at the center of the screen by coefficients set in advance (for example, by discriminant analysis) according to the shooting conditions and taking the sum. The index calculation unit 713 also calculates index 6 by multiplying the deviation amounts calculated by the bias calculation unit 722 by a fourth coefficient set in advance (for example, by discriminant analysis) according to the shooting conditions (see Table 6) and taking the sum.
  • a specific calculation method of the indices 1 to 6 in the index calculation unit 713 will be described in detail in the operation description of the present embodiment described later.
  • FIG. 4 (c) shows the internal configuration of the gradation processing condition calculation unit 714.
• The gradation processing condition calculation unit 714 includes a scene discrimination unit 718, a gradation adjustment method determination unit 719, a gradation adjustment parameter calculation unit 720, and a gradation adjustment amount calculation unit 721.
• The scene discrimination unit 718 determines the shooting condition of the captured image data based on the values of index 4, index 5, and index 6 calculated by the index calculation unit 713 and on a discrimination map (see FIG. 19), which is divided in advance into regions according to the shooting conditions and the accuracy (reliability) of the indices.
• The gradation adjustment method determination unit 719 determines a gradation adjustment method for the captured image data according to the shooting condition determined by the scene discrimination unit 718. For example, when the shooting condition is forward light or strobe, a method of correcting the pixel values of the input captured image data by translation (offset) (gradation adjustment method A) is applied, as shown in Fig. 21(a). When the shooting condition is backlight or under, a method of gamma-correcting the pixel values of the input captured image data (gradation adjustment method B) is applied, as shown in FIG. 21(b). When the shooting condition lies between backlight and forward light (low accuracy region (1)) or between strobe and under (low accuracy region (2)), both gamma correction and translation (offset) correction are applied to the pixel values of the input captured image data (gradation adjustment method C), as shown in Fig. 21(c).
• The gradation adjustment parameter calculation unit 720 uses the values of index 4, index 5, and index 6 calculated by the index calculation unit 713 to calculate the parameters necessary for gradation adjustment (the average luminance value of the skin color region (skin color average luminance value), the luminance correction value, and so on).
  • FIG. 4D shows the internal configuration of the gradation adjustment parameter calculation unit 720.
• As shown in FIG. 4(d), the gradation adjustment parameter calculation unit 720 includes a correction value calculation unit 722.
  • the correction value calculation unit 722 calculates a correction value used for performing the gradation conversion process based on the index value calculated by the index calculation unit 713.
• As the correction value used for performing the gradation conversion processing, a correction value for the reproduction target value of the skin color average luminance, a correction value for the skin color average luminance value (a value indicating the brightness of the skin color area), a correction value for the difference between the skin color average luminance value and the reproduction target value, or a combination of these is used.
• The gradation adjustment amount calculation unit 721 calculates the gradation adjustment amount for the captured image data based on the index calculated by the index calculation unit 713, the gradation adjustment parameter calculated by the gradation adjustment parameter calculation unit 720, and the correction value calculated by the correction value calculation unit 722.
• The gradation adjustment amount calculation unit 721 calculates a gradation adjustment amount with respect to the average luminance value of the skin color area of the captured image data (hereinafter, "skin color average luminance value") when the shooting condition is backlight or under. When the shooting condition is forward light, strobe, or one of the low accuracy regions, it calculates a gradation adjustment amount corresponding to a parallel shift of the gradation conversion curve that does not use the skin color average luminance value.
• The method for determining the shooting condition in the scene discrimination unit 718, the method for calculating the gradation adjustment parameters in the gradation adjustment parameter calculation unit 720, and the method for calculating the gradation adjustment amount in the gradation adjustment amount calculation unit 721 will be described in detail later in the description of the operation of the present embodiment.
• The gradation conversion unit 711 performs gradation conversion processing on the captured image data with the gradation adjustment amount calculated by the gradation adjustment amount calculation unit 721.
  • the template processing unit 705 reads predetermined image data (template) from the template storage unit 72 based on a command from the image adjustment processing unit 701, and synthesizes the image data to be processed and the template. The template processing is performed, and the image data after the template processing is output to the image adjustment processing unit 701.
• The CRT specific processing unit 706 performs processing such as changing the number of pixels and color matching on the image data input from the image adjustment processing unit 701 as necessary, and outputs display image data combined with information that needs to be displayed, such as control information, to the CRT 8.
  • the printer-specific processing unit A707 performs printer-specific calibration processing, color matching, pixel number change processing, and the like as necessary, and outputs processed image data to the exposure processing unit 4.
  • a printer specific processing unit B708 is provided for each printer apparatus to be connected.
  • the printer-specific processing unit B708 performs printer-specific calibration processing, color matching, pixel number change, and the like, and outputs processed image data to the external printer 51.
  • the image data format creation processing unit 709 converts the image data input from the image adjustment processing unit 701 to various general-purpose image formats represented by JPEG, TIFF, Exif, and the like as necessary.
  • the processed image data is output to the image transport unit 31 and the communication means (output) 33.
• These divisions (the printer specific processing unit A707, the printer specific processing unit B708, the image data format creation processing unit 709, and so on) are provided to help understand the functions of the image processing unit 70; they need not be realized as physically independent devices and may, for example, be realized as types of software processing performed by a single CPU.
• First, the size of the captured image data is reduced (step T1). A known method (for example, the bilinear method, the bicubic method, or the nearest neighbor method) is used for the reduction. The reduction ratio is not particularly limited, but is preferably about 1/2 to 1/10 of the original image from the viewpoint of processing speed and the accuracy of determining the shooting condition.
• Next, correction processing for the DSC white balance adjustment is performed on the reduced captured image data (step T2), and index calculation processing for calculating the indices (indices 1 to 6) for specifying the shooting conditions is performed based on the captured image data after the correction processing (step T3).
• The index calculation processing of step T3 will be described in detail later with reference to FIG. 6.
• In step T4, based on the indices calculated in step T3 and the discrimination map, the shooting condition of the captured image data is determined, and gradation processing condition determination processing for determining the gradation processing conditions (gradation adjustment method and gradation adjustment amount) for the captured image data is performed (step T4). The gradation processing condition determination processing in step T4 will be described in detail later with reference to FIG. 17.
• In step T5, gradation conversion processing is performed on the original captured image data in accordance with the gradation processing conditions determined in step T4 (step T5). Sharpness adjustment processing is then performed on the captured image data after the gradation conversion processing (step T6); in step T6, it is preferable to adjust the processing amount according to the shooting conditions and the output print size. In step T7, processing for removing the noise enhanced by the gradation adjustment and the sharpness enhancement is performed. Finally, color correction processing is performed to convert the color space according to the type of medium to which the captured image data is output (step T8), and the captured image data after image processing is output to the designated medium.
• Next, the index calculation processing (step T3 in FIG. 5) executed in the scene discrimination unit 710 will be described with reference to the flowchart in FIG. 6. The captured image data here is the image data reduced in step T1 of FIG. 5.
• First, the captured image data is divided into predetermined image areas, and occupation ratio calculation processing is performed to calculate occupation ratios (the first occupancy ratio and the second occupancy ratio) indicating the proportion of each divided area in the entire captured image data (step S1). Details of the occupation ratio calculation processing will be described later with reference to FIGS. 7 and 13.
  • step S2 a bias amount calculation process for calculating a bias amount indicating a bias in the gradation distribution of the captured image data is performed (step S2).
• The bias amount calculation processing in step S2 will be described in detail later with reference to FIG. 16.
• Next, an index for specifying the light source condition is calculated based on the occupation ratios calculated in the ratio calculation unit 712 and coefficients set in advance according to the shooting conditions (step S3). Then, an index for specifying the exposure condition is calculated based on the occupation ratios calculated by the ratio calculation unit 712 and coefficients set in advance according to the shooting conditions (step S4), and this index calculation processing ends.
  • the method for calculating the indices in steps S3 and S4 will be described in detail later.
  • the RGB values of the photographed image data are converted into the HSV color system (step S10).
• Figure 8 shows an example of a conversion program (HSV conversion program) for obtaining hue values, saturation values, and lightness values by converting from the RGB color system to the HSV color system, written in program code (C language).
• In the HSV conversion program shown in Fig. 8, the digital image data values of the input image data are defined as InR, InG, and InB; the calculated hue value is defined as OutH, with a scale of 0 to 360; the saturation value is OutS and the lightness value is OutV, each with units of 0 to 255.
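• For illustration, the following is a minimal C sketch consistent with the definitions above (InR/InG/InB in, OutH on a 0 to 360 scale, OutS and OutV on a 0 to 255 scale); the actual program of Fig. 8 is not reproduced in the source, so details such as rounding and the handling of achromatic pixels are assumptions.

#include <stdio.h>

/* Minimal sketch of the Fig. 8 conversion: inputs InR/InG/InB (0-255),
   outputs OutH on a 0-360 scale, OutS and OutV on a 0-255 scale. */
static void rgb_to_hsv(int InR, int InG, int InB,
                       double *OutH, double *OutS, double *OutV)
{
    int max = InR, min = InR;
    if (InG > max) max = InG;
    if (InB > max) max = InB;
    if (InG < min) min = InG;
    if (InB < min) min = InB;

    *OutV = (double)max;                             /* lightness: 0-255 */
    *OutS = (max == 0) ? 0.0
                       : 255.0 * (max - min) / max;  /* saturation: 0-255 */

    if (max == min) {                 /* achromatic: hue undefined, use 0 */
        *OutH = 0.0;
        return;
    }
    double h;
    if (max == InR)      h = 60.0 * (InG - InB) / (max - min);
    else if (max == InG) h = 60.0 * (InB - InR) / (max - min) + 120.0;
    else                 h = 60.0 * (InR - InG) / (max - min) + 240.0;
    if (h < 0.0) h += 360.0;
    *OutH = h;                                       /* hue: 0-360 */
}

int main(void)
{
    double h, s, v;
    rgb_to_hsv(200, 150, 120, &h, &s, &v);
    printf("H=%.1f S=%.1f V=%.1f\n", h, s, v);
    return 0;
}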
• Next, the captured image data is divided into regions defined by predetermined combinations of lightness and hue, and a two-dimensional histogram is created by calculating the cumulative number of pixels for each divided region (step S11).
  • the area division of the captured image data will be described in detail.
• Lightness (V) is divided into seven areas by lightness value: 0 to 25 (v1), 26 to 50 (v2), 51 to 84 (v3), 85 to 169 (v4), 170 to 199 (v5), 200 to 224 (v6), and 225 to 255 (v7).
• Hue (H) is divided into four areas: the flesh color hue area (H1 and H2) with hue values of 0 to 39 and 330 to 359, the green hue area (H3) with hue values of 40 to 160, the blue hue area (H4) with hue values of 161 to 250, and the red hue area (H5) with hue values of 251 to 329.
• The flesh color hue area is further divided into a flesh color area (H1) and the other area (H2). A pixel whose Hue′(H) satisfies the following formula (1) is assigned to the flesh color area (H1), and a pixel that does not satisfy formula (1) is assigned to (H2):

Hue′(H) = Hue(H) + 60 (when 0 ≤ Hue(H) < 300)
Hue′(H) = Hue(H) − 300 (when 300 ≤ Hue(H) < 360)
Luminance(Y) = InR × 0.30 + InG × 0.59 + InB × 0.11

Hue′(H) / Luminance(Y) > 3.0 × (Saturation(S) / 255) + 0.7 … (1)

Accordingly, the number of divided areas of the captured image data is 4 × 7 = 28. It is also possible to use lightness (V) instead of luminance (Y) in formula (1).
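• A minimal sketch of the formula (1) test follows; the ">" comparison and the constant 3.0 multiplying the saturation term are assumptions recovered from the formula as reconstructed above.

/* Sketch of the flesh color area (H1) test of formula (1). hue is 0-360,
   sat is 0-255, InR/InG/InB are 0-255. The threshold form
   3.0 x (S/255) + 0.7 and the ">" comparison are assumptions. */
static int is_flesh_area_h1(double hue, double sat, int InR, int InG, int InB)
{
    /* Hue'(H): shift so the flesh range 330-359 / 0-39 becomes contiguous */
    double hue_p = (hue < 300.0) ? hue + 60.0 : hue - 300.0;
    double y = InR * 0.30 + InG * 0.59 + InB * 0.11;   /* Luminance (Y) */
    if (y <= 0.0)
        return 0;                                      /* avoid division by zero */
    return hue_p / y > 3.0 * (sat / 255.0) + 0.7;      /* formula (1) */
}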
• Next, a first occupancy ratio indicating the ratio of the cumulative number of pixels calculated for each divided region to the total number of pixels (the entire captured image) is calculated (step S12), and this occupation ratio calculation processing ends. Letting Rij be the first occupancy ratio calculated for the divided area defined by the combination of the lightness area vi and the hue area Hj, the first occupancy ratio of each divided area is expressed as shown in Table 1.
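• A sketch of steps S11 and S12 under the area boundaries listed above is shown below; the H1/H2 split additionally requires the per-pixel RGB values for the formula (1) test, so this sketch counts every flesh-hue pixel as H1 for brevity.

#include <stddef.h>

/* Sketch of steps S11-S12: accumulate the 7x4 two-dimensional histogram over
   the lightness/hue areas, then divide by the total pixel count to obtain
   the first occupancy ratios Rij. The red hue area (H5) is not used. */
static int v_index(double v)            /* lightness areas v1..v7 -> 0..6 */
{
    if (v <= 25)  return 0;
    if (v <= 50)  return 1;
    if (v <= 84)  return 2;
    if (v <= 169) return 3;
    if (v <= 199) return 4;
    if (v <= 224) return 5;
    return 6;
}

static int h_index(double h)            /* hue areas -> 0..3, red -> -1 */
{
    if (h <= 39 || h >= 330) return 0;  /* flesh color (H1; H2 split omitted) */
    if (h <= 160)            return 2;  /* green hue area (H3) */
    if (h <= 250)            return 3;  /* blue hue area (H4) */
    return -1;                          /* red hue area (H5) */
}

void first_occupancy(const double *hue, const double *val, size_t n,
                     double R[7][4])
{
    long hist[7][4] = {{0}};
    for (size_t k = 0; k < n; k++) {
        int j = h_index(hue[k]);
        if (j >= 0)
            hist[v_index(val[k])][j]++;             /* cumulative pixel count */
    }
    for (int i = 0; i < 7; i++)
        for (int j = 0; j < 4; j++)
            R[i][j] = (double)hist[i][j] / (double)n;   /* occupancy Rij */
}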
  • Table 2 shows, for each divided area, the first coefficient necessary for calculating the index 1 that quantitatively indicates the accuracy of flash photography, that is, the brightness state of the face area during flash photography.
  • the coefficient of each divided area shown in Table 2 is a weighting coefficient by which the first occupancy Rij of each divided area shown in Table 1 is multiplied, and is set in advance according to the photographing conditions.
• Figure 9 shows the lightness (V)-hue (H) plane. According to Table 2, a positive (+) coefficient is used for the occupancy calculated from the high-lightness flesh color region in Figure 9, and a negative (−) coefficient is used for the occupancy calculated from the blue hue region (r2), which belongs to the other hues.
• Figure 11 shows the first coefficient in the flesh color area (H1) and the first coefficient in another area (the green hue area (H3)) as curves (coefficient curves) that change continuously over the entire lightness range. According to Table 2 and Figure 11, the sign of the first coefficient in the flesh color area (H1) is positive (+), while the sign of the first coefficient in the other area (for example, the green hue area (H3)) is negative (−), which shows that the signs of the two differ.
• Using the first occupancy ratios Rij of Table 1 and the first coefficients of Table 2, the sum for each hue region is calculated as in formulas (2-1) to (2-4); for example, the H1 region sum is the sum over the lightness areas v1 to v7 of R11, R21, …, R71, each multiplied by the corresponding first coefficient. Index 1 is then defined as formula (3) using the sums of the H1 to H4 regions shown in formulas (2-1) to (2-4).
• Table 3 shows, for each divided area, the second coefficient necessary for calculating index 2, which quantitatively indicates the accuracy of backlighting, that is, the brightness state of the face area during backlighting. The coefficient of each divided area shown in Table 3 is a weighting coefficient by which the first occupancy ratio Rij of each divided area shown in Table 1 is multiplied, and is set in advance according to the shooting conditions.
• Figure 10 shows the lightness (V)-hue (H) plane. A negative (−) coefficient is used for the occupancy calculated from the area (r4) distributed in the intermediate lightness of the flesh color hue area in Figure 10, and a positive (+) coefficient is used for the occupancy calculated from the low-lightness (shadow) area (r3) of the flesh color hue area.
• Figure 12 shows the second coefficient in the flesh color area (H1) as a curve (coefficient curve) that changes continuously over the entire lightness range. According to Table 3 and Figure 12, the sign of the second coefficient in the intermediate lightness area (v4, lightness values 85 to 169) of the flesh color hue area is negative (−), while the sign of the second coefficient in the low-lightness (shadow) area (v2 and v3, lightness values 26 to 84) is positive (+), which shows that the signs of the coefficients in the two areas differ.
• H4 region sum = R14 × 0.0 + R24 × (−5.1) + … (omitted)
• Index 2 is defined as formula (5) using the sums of the H1 to H4 regions shown in formulas (4-1) to (4-4):

Index 2 = (H1 region sum) + (H2 region sum) + (H3 region sum) + (H4 region sum) + 1.554 … (5)
• Since index 1 and index 2 are calculated based on the distributions of lightness and hue of the captured image data, they are effective in determining the shooting condition when the captured image data is a color image.
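• Index 1 and index 2 share one computational pattern, sketched below; since the coefficient values of Tables 2 and 3 are not reproduced in the text, the coefficient table is passed in as a parameter.

/* Sketch of the index-1/index-2 pattern: the sum over all lightness x hue
   regions of the first occupancy Rij times the corresponding coefficient
   (Table 2 for index 1, Table 3 for index 2), plus a constant term such as
   the +1.554 of formula (5). */
double weighted_index(const double R[7][4], const double coef[7][4],
                      double constant)
{
    double sum = constant;
    for (int i = 0; i < 7; i++)
        for (int j = 0; j < 4; j++)
            sum += R[i][j] * coef[i][j];
    return sum;
}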
  • the RGB values of the photographed image data are converted into the HSV color system (step S20).
• Next, the captured image data is divided into regions defined by predetermined combinations of lightness and distance from the outer edge of the screen of the captured image, and a two-dimensional histogram is created by calculating the cumulative number of pixels for each divided region (step S21).
  • the area division of the captured image data will be described in detail.
• Figures 14(a) to 14(d) show the four regions n1 to n4 divided according to the distance from the outer edge of the screen of the captured image data. The region n1 shown in Fig. 14(a) is the outer frame, the region n2 shown in Fig. 14(b) is the region inside the outer frame, the region n3 shown in Fig. 14(c) is the region inside the region n2, and the region n4 shown in Fig. 14(d) is the region at the center of the captured image screen.
• Next, a second occupancy ratio indicating the ratio of the cumulative number of pixels calculated for each divided region to the total number of pixels (the entire captured image) is calculated (step S22), and this occupation ratio calculation processing ends. Letting Qij be the second occupancy ratio calculated for the divided area defined by the combination of the lightness area vi and the screen area nj, the second occupancy ratio of each divided area is expressed as shown in Table 4.
  • Table 5 shows the third coefficient necessary for calculating the index 3 for each divided region.
  • the coefficient of each divided area shown in Table 5 is a weighting coefficient by which the second occupancy Qij of each divided area shown in Table 4 is multiplied, and is set in advance according to the photographing conditions.
  • FIG. 15 shows the third coefficient in the screen areas nl to n4 as a curve (coefficient curve) that continuously changes over the entire brightness.
• n1 region sum = Q11 × 40.1 + Q21 × 37.0 + … (omitted) … (6-1)
• n4 region sum = Q14 × 1.5 + Q24 × (−32.9) + … (omitted) … (6-4)
• Index 3 is defined as formula (7) using the sums of the n1 to n4 regions shown in formulas (6-1) to (6-4):

Index 3 = (n1 region sum) + (n2 region sum) + (n3 region sum) + (n4 region sum) − 12.6201 … (7)
• Since index 3 is calculated from the compositional characteristics of the captured image data (the distance from the outer edge of the screen) based on the distribution position of lightness, it is effective for determining the shooting conditions not only of color images but also of monochrome images.
• Next, the bias amount calculation processing (step S2 of FIG. 6) executed in the bias calculation unit 722 will be described with reference to the flowchart in FIG. 16.
• First, the luminance Y (brightness) of each pixel is calculated from the RGB (Red, Green, Blue) values of the captured image data using equation (A), and the standard deviation (x1) of the luminance is calculated (step S23). The standard deviation (x1) of the luminance is expressed as equation (8):

Standard deviation of luminance (x1) = √( Σ (pixel luminance value − average luminance value)² / total number of pixels ) … (8)

Here, the pixel luminance value is the luminance of each pixel of the captured image data, the average luminance value is the average of the luminance over the captured image data, and the total number of pixels is the number of pixels of the entire captured image data.
• Next, the luminance difference value (x2) is calculated (step S24):

Luminance difference value (x2) = (maximum luminance value − average luminance value) / 255 … (9)

Here, the maximum luminance value is the maximum value of the luminance of the captured image data.
• Next, the average luminance value (x3) of the skin color area at the center of the screen of the captured image data is calculated (step S25), and then the average luminance value (x4) at the center of the screen is calculated (step S26). Here, the center of the screen is, for example, the area composed of the region n3 and the region n4 in Fig. 14.
• Finally, the skin color luminance distribution value (x5) is calculated (step S27), and this bias amount calculation processing ends. Letting Yskin_max be the maximum luminance value of the skin color area of the captured image data, Yskin_min the minimum luminance value of the skin color area, and Yskin_ave the average luminance value of the skin color area, the skin color luminance distribution value (x5) is expressed as equation (10).
• In addition, let x6 be the average luminance value of the skin color area in the central portion of the screen of the captured image data. Here, the central portion of the screen is, for example, the area composed of the region n2, the region n3, and the region n4 in Fig. 14.
• Index 4 is defined as equation (11) using index 1, index 3, and x6, and index 5 is defined as equation (12) using index 2, index 3, and x6:

Index 4 = 0.46 × Index 1 + 0.61 × Index 3 + 0.01 × x6 − 0.79 … (11)
Index 5 = 0.58 × Index 2 + … … (12)

The weighting coefficients by which the indices are multiplied in equations (11) and (12) are set in advance according to the shooting conditions.
• Index 6 is obtained by multiplying each of the deviation amounts (x1) to (x5) calculated in the deviation amount calculation processing by a fourth coefficient set in advance according to the shooting conditions and taking the sum. Table 6 shows the fourth coefficient, which is the weighting coefficient by which each deviation amount is multiplied. Index 6 is expressed as equation (13):

Index 6 = x1 × 0.02 + x2 × 1.13 + x3 × 0.06 + x4 × (−0.01) + x5 × 0.03 − 6.49 … (13)
• This index 6 has not only the compositional features of the screen of the captured image data but also luminance histogram distribution information, and is particularly useful for distinguishing between strobe shooting scenes and under shooting scenes.
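• Putting the deviation amounts together, the following sketch computes index 6 under the stated definitions: x1 follows equation (8) and x2 follows equation (9), while x3 to x5 depend on the region masks described above and are taken as precomputed inputs.

#include <math.h>
#include <stddef.h>

/* Sketch of index 6: x1 per equation (8), x2 per equation (9); the
   skin-color / screen-center statistics x3-x5 are passed in precomputed.
   Weights per equation (13) / Table 6. */
double index6(const double *lum, size_t n, double x3, double x4, double x5)
{
    if (n == 0)
        return 0.0;
    double mean = 0.0, var = 0.0, max = 0.0;
    for (size_t k = 0; k < n; k++)
        mean += lum[k];
    mean /= (double)n;                            /* average luminance value */
    for (size_t k = 0; k < n; k++) {
        double d = lum[k] - mean;
        var += d * d;
        if (lum[k] > max)
            max = lum[k];
    }
    double x1 = sqrt(var / (double)n);            /* equation (8) */
    double x2 = (max - mean) / 255.0;             /* equation (9) */
    return x1 * 0.02 + x2 * 1.13 + x3 * 0.06
         + x4 * (-0.01) + x5 * 0.03 - 6.49;       /* equation (13) */
}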
• Next, the gradation processing condition determination processing (step T4 in FIG. 5) executed by the gradation processing condition calculation unit 714 will be described with reference to the flowchart in FIG. 17.
• First, the average luminance value (skin color average luminance value) of the skin color area (H1) of the captured image data is calculated (step S30).
• Next, the shooting condition of the captured image data is determined based on the indices (indices 4 to 6) calculated by the index calculation unit 713 and the discrimination map divided in advance according to the shooting conditions (step S31). The method of determining the shooting conditions is explained below.
• Figure 18(a) plots the values of index 4 and index 5 calculated for a total of 180 digital image data items, consisting of 60 images taken under each of the forward light, backlight, and strobe conditions. Figure 18(b) plots the values of index 4 and index 6 for the images whose index 4 is greater than 0.5 among 60 images taken under each of the strobe and under conditions.
• The discrimination map evaluates the reliability of the indices. As shown in Fig. 19, it consists of the basic regions of forward light, backlight, strobe, and under, together with the low accuracy region (1) between backlight and forward light and the low accuracy region (2) between strobe and under. Other low accuracy regions, such as one between backlight and strobe, also exist on the discrimination map, but they are omitted in this embodiment.
• Table 7 shows the shooting conditions discriminated from each index value plotted in Fig. 18 based on the discrimination map of Fig. 19.
• As shown in Table 7, the light source condition can be quantitatively determined based on the values of index 4 and index 5, and the exposure condition can be quantitatively determined based on the values of index 4 and index 6. Likewise, the low accuracy region (1) between forward light and backlight can be determined from the values of indices 4 and 5, and the low accuracy region (2) between strobe and under can be determined from the values of indices 4 and 6.
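• The lookup on the discrimination map can be sketched as follows; the actual region boundaries exist only graphically in Fig. 19, so every numeric threshold below is a placeholder rather than a value from the source.

/* Hypothetical sketch of the Fig. 19 lookup. Per the text, index 4 with
   index 5 separates forward light / backlight (low accuracy region (1)
   between them), and index 4 with index 6 separates strobe / under (low
   accuracy region (2) between them). All thresholds are placeholders. */
typedef enum { FORWARD_LIGHT, BACKLIGHT, STROBE, UNDER,
               LOW_ACCURACY_1, LOW_ACCURACY_2 } Scene;

Scene discriminate(double idx4, double idx5, double idx6)
{
    if (idx4 > 0.5) {                        /* strobe/under side, cf. Fig. 18(b) */
        if (idx6 > 1.0)  return STROBE;      /* placeholder boundary */
        if (idx6 < -1.0) return UNDER;       /* placeholder boundary */
        return LOW_ACCURACY_2;
    }
    if (idx5 > 1.0)  return FORWARD_LIGHT;   /* placeholder boundary */
    if (idx5 < -1.0) return BACKLIGHT;       /* placeholder boundary */
    return LOW_ACCURACY_1;
}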
  • a gradation adjustment method for the shot image data is selected (determined) according to the determined shooting conditions (step S32).
• Gradation adjustment method A: Fig. 21(a)
• Gradation adjustment method B: Fig. 21(b)
• Gradation adjustment method C: Fig. 21(c)
• When the shooting condition is forward light or strobe, the correction amount is relatively small, so gradation adjustment method A, which corrects the pixel values of the captured image data by translation (offset), can be applied; it is also preferable from the viewpoint of suppressing gamma fluctuation.
• When the shooting condition is backlight or under, the correction amount is relatively large, so applying gradation adjustment method A would greatly increase the gradations in which no image data exists, causing image quality degradation such as muddy blacks or washed-out whites. Therefore, when the shooting condition is backlight or under, it is preferable to apply gradation adjustment method B, in which the pixel values of the captured image data are gamma corrected.
• In any low accuracy region, the gradation adjustment method for one of the adjacent shooting conditions is A and for the other is B, so it is preferable to apply gradation adjustment method C, in which both gradation adjustment methods are mixed. By providing the low accuracy regions in this way, the processing result transitions smoothly even when different gradation adjustment methods are used, and variations in density among multiple photographic prints of the same subject can be reduced.
  • the tone conversion curve shown in FIG. 21 (b) is convex upward, but may be convex downward.
  • the tone conversion curve shown in FIG. 21 (c) is convex downward, but may be convex upward.
• Next, gradation adjustment parameter calculation processing for calculating the parameters (gradation adjustment parameters) necessary for gradation adjustment based on the indices calculated by the index calculation unit 713, and gradation adjustment amount calculation processing for calculating the gradation adjustment amount of the captured image data based on the gradation adjustment parameters, are performed (step S33), and this gradation processing condition determination processing ends.
• The methods of calculating the gradation adjustment parameters and the gradation adjustment amount in step S33 will now be described. In the following, it is assumed that the 8-bit captured image data has been converted to 16-bit in advance, so the captured image data values are in 16-bit units.
  • step S33 the following parameters P1 to P5 are calculated as tone adjustment parameters.
• Reproduction target correction value = luminance reproduction target value (30360) − P3
• Gradation adjustment amounts 1 to 5 are calculated as follows according to the determined shooting condition (the methods for calculating gradation adjustment amounts 3 to 5 will be described in detail later). For shooting conditions in the low accuracy regions, gradation adjustment amount 3 is used.
• To calculate these parameters, a CDF (cumulative density function) of the captured image data is first created. The maximum and minimum values are then obtained from the CDF; they are obtained for each of R, G, and B. Let the maximum and minimum values obtained for each of R, G, and B be Rmax, Rmin, Gmax, Gmin, Bmax, and Bmin, respectively, and let Rx, Gx, and Bx be the values of an arbitrary pixel in the R, G, and B planes. The normalized data R′ in the R plane, G′ in the G plane, and B′ in the B plane, and the combined value N, are calculated as follows:

R′ = (Rx − Rmin) / (Rmax − Rmin) × 65535 … (14)
G′ = (Gx − Gmin) / (Gmax − Gmin) × 65535 … (15)
B′ = (Bx − Bmin) / (Bmax − Bmin) × 65535 … (16)
N = (B′ + G′ + R′) / 3 … (17)
• Figure 22(a) shows the frequency distribution (histogram) of the luminance of the RGB pixels before normalization; the horizontal axis represents luminance and the vertical axis represents pixel frequency. This histogram is created for each of R, G, and B.
• Next, normalization is applied to the captured image data for each plane according to equations (14) to (16).
• Fig. 22(b) shows a histogram of the luminance N calculated by equation (17). Since the captured image data is normalized to 65535, each pixel takes an arbitrary value between the maximum value of 65535 and the minimum value of 0.
• When the luminance histogram shown in Fig. 22(b) is divided into blocks of a predetermined range, a frequency distribution as shown in Fig. 22(c) is obtained; the horizontal axis is the block number (luminance) and the vertical axis is the frequency.
• Next, areas whose frequency exceeds a predetermined threshold are limited in the luminance histogram. This is because, if a part with an extremely high frequency exists, its data strongly influences the average luminance of the entire captured image, so that erroneous correction is likely to occur. Therefore, as shown in Fig. 23(c), the number of pixels above the threshold is limited in the luminance histogram. Figure 23(d) shows the luminance histogram after this pixel-number limiting processing.
• The high luminance region and the low luminance region are also deleted from the normalized luminance histogram. Parameter P2 is then obtained by calculating the average luminance based on each block number and frequency of the luminance histogram (Fig. 23(d)) obtained by limiting the cumulative number of pixels.
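• A sketch of this parameter P2 pipeline, assuming 16-bit data, is shown below: each plane is normalized with its minimum and maximum per equations (14) to (16), combined per equation (17), accumulated into a block histogram, clipped at the threshold, trimmed of its extreme blocks, and averaged. The block count, clip threshold, and number of trimmed end blocks are placeholders.

#include <stddef.h>

#define NBLOCK 16                 /* number of histogram blocks: placeholder */

/* Sketch of the parameter-P2 pipeline for 16-bit data. Per-plane min/max
   stand in for the CDF-derived values; the clip threshold and the trimming
   of one block at each end are placeholders. */
double parameter_p2(const double *r, const double *g, const double *b,
                    size_t n, double clip_threshold)
{
    double mn[3] = {65535, 65535, 65535}, mx[3] = {0, 0, 0};
    for (size_t k = 0; k < n; k++) {
        double px[3] = {r[k], g[k], b[k]};
        for (int c = 0; c < 3; c++) {
            if (px[c] < mn[c]) mn[c] = px[c];
            if (px[c] > mx[c]) mx[c] = px[c];
        }
    }
    double hist[NBLOCK] = {0};
    for (size_t k = 0; k < n; k++) {
        double px[3] = {r[k], g[k], b[k]}, np = 0.0;
        for (int c = 0; c < 3; c++) {               /* equations (14)-(16) */
            double den = mx[c] - mn[c];
            np += den > 0.0 ? (px[c] - mn[c]) / den * 65535.0 : 0.0;
        }
        np /= 3.0;                                  /* equation (17) */
        hist[(int)(np / 65536.0 * NBLOCK)]++;
    }
    double sum = 0.0, cnt = 0.0;
    for (int i = 1; i < NBLOCK - 1; i++) {          /* drop extreme blocks */
        double f = hist[i] > clip_threshold ? clip_threshold : hist[i];
        sum += f * (i + 0.5) * (65536.0 / NBLOCK);  /* block-center luminance */
        cnt += f;
    }
    return cnt > 0.0 ? sum / cnt : 0.0;             /* parameter P2 */
}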
  • a reference index among the corresponding indices in the low accuracy region is determined. For example, in the low accuracy region (1), the index 5 is determined as the reference index, and in the low accuracy region (2), the index 6 is determined as the reference index. Then, by normalizing the value of the reference index in the range of 0 to 1, the reference index is converted into a normalized index.
• The normalized index is defined as equation (18):

Normalized index = (reference index − minimum index value) / (maximum index value − minimum index value) … (18)

Here, the maximum index value and the minimum index value are the maximum and minimum values of the reference index within the corresponding low accuracy region.
• Let α and β be the correction amounts at the boundaries between the corresponding low accuracy region and the two regions adjacent to it. The correction amounts α and β are fixed values calculated in advance using the reproduction target values defined at the boundaries of each region on the discrimination map. Gradation adjustment amount 3 is expressed as equation (19) using the normalized index of equation (18) and the correction amounts α and β. Here the correlation between the normalized index and the correction amount is a linear relationship, but a curved relationship in which the correction amount shifts more gradually may also be used.
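• A sketch of equations (18) and (19) follows; the exact interpolation form of equation (19) is an assumption based on the stated linear relationship between the normalized index and the correction amount.

/* Sketch of equations (18)-(19): the reference index is normalized over its
   range inside the low accuracy region, then the correction amount is
   linearly interpolated between the boundary values alpha and beta. */
double adjustment3(double ref_index, double idx_min, double idx_max,
                   double alpha, double beta)
{
    double t = (ref_index - idx_min) / (idx_max - idx_min); /* equation (18) */
    return alpha + (beta - alpha) * t;                      /* equation (19) */
}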
• Next, the index used in each of the following gradation adjustment amount calculation processes, together with its minimum value Imin and maximum value Imax, is set in advance according to the shooting conditions (see FIG. 28): index 5 is used when the shooting condition is backlight, and index 6 is used when the shooting condition is under. Furthermore, the minimum value Δmin and maximum value Δmax of the correction value Δ of the parameter used in each gradation adjustment amount calculation process (the reproduction target value of the skin color average luminance, the skin color average luminance value, the difference between the reproduction target value and the skin color average luminance value, etc.) are also set in advance according to the shooting conditions. As shown in FIG. 28, the minimum value Δmin of the correction value Δ is the correction value corresponding to the minimum value Imin of the corresponding index, and the maximum value Δmax of the correction value Δ is the correction value corresponding to the maximum value Imax of the corresponding index. The difference (Δmax − Δmin) is preferably at least 35 as an 8-bit value.
• First, the minimum value Δmin and the maximum value Δmax of the correction value Δ of the reproduction target value are determined based on the shooting condition determined in step S31 of FIG. 17 (step S40). Next, the normalized index is calculated, and the correction value Δmod of the reproduction target value is calculated from the normalized index and the minimum value Δmin and maximum value Δmax of the correction value Δ of the reproduction target value (step S41).
• Letting I be the index calculated by the index calculation processing of FIG. 6, the normalized index is expressed by the following equation (20):

Normalized index = (I − Imin) / (Imax − Imin) … (20)

The correction value Δmod of the reproduction target value calculated in step S41 is expressed as the following equation (21):

Correction value Δmod = (Δmax − Δmin) × (normalized index) + Δmin … (21)

This correction value Δmod is the correction value corresponding to the index I calculated by the index calculation processing.
• The gradation adjustment amount is then calculated from the difference between the skin color average luminance value calculated in step S30 of FIG. 17 and the corrected reproduction target value (step S43), and this gradation adjustment amount calculation processing ends.
• As an example, suppose that the reproduction target value of the skin color average luminance is 30360 (16-bit), the skin color average luminance value calculated in step S30 of FIG. 17 is 21500 (16-bit), the determined shooting condition is backlight, and the value of index 5 calculated by the index calculation processing is 2.7.
• In this case, the normalized index, the correction value Δmod, the corrected reproduction target value, and gradation adjustment amount 4 are as follows.
  • the gradation adjustment amount 4 can be calculated by the same method.
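• A worked sketch of the Example 1 path with the values quoted above follows; the preset ranges Imin/Imax and Δmin/Δmax are not given in the source, so placeholder values are used, and the step that applies Δmod to the reproduction target value as well as the sign convention of the final difference are assumptions.

#include <stdio.h>

/* Worked sketch of Example 1 (target 30360, skin color average luminance
   21500, backlight, index 5 = 2.7). Imin/Imax and Dmin/Dmax are
   placeholders for the elided presets. */
int main(void)
{
    const double target = 30360.0, skin_avg = 21500.0, I = 2.7;
    const double Imin = -5.0, Imax = 5.0;         /* placeholder presets */
    const double Dmin = -5000.0, Dmax = 5000.0;   /* placeholder presets */

    double norm = (I - Imin) / (Imax - Imin);          /* equation (20) */
    double dmod = (Dmax - Dmin) * norm + Dmin;         /* equation (21) */
    double corrected_target = target + dmod;           /* assumed correction step */
    double adjust4 = skin_avg - corrected_target;      /* step S43, order assumed */

    printf("norm=%.2f dmod=%.1f target'=%.1f adjust4=%.1f\n",
           norm, dmod, corrected_target, adjust4);
    return 0;
}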
• Next, the gradation adjustment amount calculation processing in the second example will be described: the processing for calculating the gradation adjustment amount when the skin color average luminance value is corrected.
• First, based on the shooting condition determined in step S31 of FIG. 17, the minimum value Δmin and the maximum value Δmax of the correction value Δ of the skin color average luminance value calculated in step S30 of FIG. 17 are determined (step S50). Next, the normalized index is calculated, and from this normalized index and the minimum value Δmin and maximum value Δmax of the correction value Δ of the skin color average luminance value, the correction value Δmod of the skin color average luminance value is calculated as shown in equation (24) (step S51).
• This correction value Δmod is the correction value corresponding to the index I calculated by the index calculation processing.
• Next, the corrected skin color average luminance value is calculated from the skin color average luminance value and its correction value Δmod as shown in equation (25) (step S52). The gradation adjustment amount is then calculated from the difference between the corrected skin color average luminance value and the reproduction target value (step S53), and this gradation adjustment amount calculation processing ends:

Gradation adjustment amount = corrected skin color average luminance value − reproduction target value … (26)
• Next, the gradation adjustment amount calculation processing in the third example will be described: the processing for calculating the gradation adjustment amount when both the skin color average luminance value and the reproduction target value are corrected.
• First, based on the shooting condition determined in step S31 of FIG. 17, the minimum value Δmin and the maximum value Δmax of the correction values of the skin color average luminance value calculated in step S30 of FIG. 17 and of the reproduction target value are determined (step S60). Note that the minimum and maximum values of the correction value of the skin color average luminance value are the same as the minimum and maximum values of the correction value of the reproduction target value, respectively.
• Next, the normalized index is calculated as shown in equation (20) above, and from this normalized index and the minimum value Δmin and maximum value Δmax of the correction value Δ of the skin color average luminance value and the reproduction target value, the correction value Δmod is calculated according to equation (27) (step S61):

Correction value Δmod = (Δmax − Δmin) × (normalized index) + Δmin … (27)

As shown in FIG. 28, this correction value Δmod is the correction value corresponding to the index I calculated by the index calculation processing.
• Next, the corrected skin color average luminance value and the corrected reproduction target value are calculated:

Corrected skin color average luminance value = skin color average luminance value − Δmod × 0.5 … (28-1)
Corrected reproduction target value = reproduction target value + Δmod × 0.5 … (28-2)

Equations (28-1) and (28-2) show the case where the composite ratios of the skin color average luminance value and the reproduction target value are both 0.5.
• Next, the gradation adjustment amount is calculated from the difference between the corrected skin color average luminance value and the corrected reproduction target value (step S63), and this gradation adjustment amount calculation processing ends:

Gradation adjustment amount = corrected skin color average luminance value − corrected reproduction target value … (29)
• Next, with reference to the flowchart in FIG. 27, the gradation adjustment amount calculation processing in the fourth example will be described: the processing for calculating the gradation adjustment amount when the difference between the skin color average luminance value and the reproduction target value is corrected.
• First, based on the shooting condition determined in step S31 of FIG. 17, the minimum value Δmin and the maximum value Δmax of the correction value Δ of the difference value between the skin color average luminance value calculated in step S30 of FIG. 17 and the reproduction target value (skin color average luminance value − reproduction target value) are determined (step S70).
• Next, the normalized index is calculated as shown in equation (20) above, and from this normalized index and the minimum value Δmin and maximum value Δmax of the correction value Δ of the difference value (skin color average luminance value − reproduction target value), the correction value Δmod of the difference value is calculated as shown in equation (30) (step S71). This correction value Δmod is the correction value corresponding to the index I calculated by the index calculation processing.
• The gradation adjustment amount is then calculated as shown in equation (31) from the correction value Δmod calculated by equation (30) and the difference value (skin color average luminance value − reproduction target value) (step S72), and this gradation adjustment amount calculation processing ends:

Gradation adjustment amount = skin color average luminance value − reproduction target value − Δmod … (31)
• When the gradation adjustment amount has been calculated, a gradation conversion curve corresponding to the gradation adjustment amount calculated in the gradation adjustment amount calculation processing is selected (determined) from a plurality of gradation conversion curves set in advance corresponding to the gradation adjustment method determined in step S32 of FIG. 17. Alternatively, the gradation conversion curve may be calculated based on the calculated gradation adjustment amount. The captured image data is then gradation-converted according to the determined gradation conversion curve.
• When the shooting condition is forward light, an offset correction (a parallel shift of the 8-bit values) that matches parameter P1 to P4 is performed using the following equation (32):

RGB value of output image = RGB value of input image + gradation adjustment amount 1 … (32)

Therefore, when the shooting condition is forward light, a gradation conversion curve corresponding to equation (32) is selected from the plurality of gradation conversion curves shown in FIG. 21(a); alternatively, the gradation conversion curve may be calculated (determined) based on equation (32).
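• Gradation adjustment method A amounts to a clamped per-channel offset, as the following sketch of equation (32) shows; the clamp to the 16-bit range is an assumption consistent with the 16-bit processing described above.

/* Sketch of gradation adjustment method A (equation (32)): each RGB value
   is shifted by the gradation adjustment amount and clamped. */
static unsigned short apply_offset(unsigned short in, double adjustment)
{
    double out = (double)in + adjustment;   /* equation (32) */
    if (out < 0.0)     out = 0.0;
    if (out > 65535.0) out = 65535.0;       /* clamp to the 16-bit range */
    return (unsigned short)out;
}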
• Next, the key correction value Q is calculated from gradation adjustment amount 4, calculated in any of the gradation adjustment amount calculation processes of Examples 1 to 4, as shown in the following equation (33), and the gradation conversion curve corresponding to the key correction value Q of equation (33) is selected from the plurality of gradation conversion curves shown in Fig. 21(b):

Key correction value Q = gradation adjustment amount 4 / key correction coefficient … (33)

The value of the key correction coefficient in equation (33) is 24.78.
• A specific example of the gradation conversion curves of Fig. 21(b) is shown in Fig. 29. The correspondence between the key correction value Q and the gradation conversion curve selected in Fig. 29 is shown below; for example, for the corresponding range of Q, the gradation conversion curve selected in FIG. 29 is L1.
• Similarly, the key correction value Q′ is calculated from gradation adjustment amount 5, calculated in any of the gradation adjustment amount calculation processes of Examples 1 to 4, as shown in the following equation (34), and the gradation conversion curve corresponding to the key correction value Q′ of equation (34) is selected from the plurality of gradation conversion curves shown in Fig. 21(b):

Key correction value Q′ = gradation adjustment amount 5 / key correction coefficient … (34)

The value of the key correction coefficient in equation (34) is 24.78. The correspondence between the value of the key correction value Q′ and the gradation conversion curve selected in Fig. 29, which shows a specific example of Fig. 21(b), is as shown below.
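• The computation of Q and Q′ and the subsequent curve selection can be sketched as follows; since the correspondence table between the key correction value and the curves of Fig. 29 is not reproduced in the source, the bucket boundaries are placeholders.

/* Sketch of equations (33)-(34) and the curve selection: the key correction
   value is the gradation adjustment amount divided by the key correction
   coefficient 24.78, then mapped to one of the preset curves of Fig. 29.
   All bucket boundaries are placeholders for the elided correspondence. */
int select_curve(double adjustment)
{
    const double key_correction_coefficient = 24.78;
    double Q = adjustment / key_correction_coefficient;  /* eq. (33)/(34) */
    if (Q < -2.0) return 0;   /* placeholder: strongest darkening curve */
    if (Q <  0.0) return 1;   /* placeholder boundary */
    if (Q <  2.0) return 2;   /* placeholder: e.g. curve L1 of Fig. 29 */
    return 3;                 /* placeholder: strongest brightening curve */
}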
• When the shooting condition is strobe, an offset correction (a parallel shift) is likewise performed using the following equation (35):

RGB value of output image = RGB value of input image + gradation adjustment amount 2 … (35)

A gradation conversion curve corresponding to equation (35) is selected from the plurality of gradation conversion curves shown in FIG. 21(a), or the gradation conversion curve is calculated (determined) based on equation (35). When the shooting condition is a low accuracy region, the correction of the following equation (36) is performed, and the gradation conversion curve corresponding to equation (36) is selected:

RGB value of output image = RGB value of input image + gradation adjustment amount 3 … (36)
• As described above, the brightness of the skin color area of the captured image data is corrected according to the indices representing the shooting conditions (light source condition and exposure condition) of the captured image data, so the brightness of the subject can be corrected appropriately and continuously without depending too heavily on the calculated value of the brightness of the skin color area. In addition, because the shooting condition of the captured image data is determined based on indices (indices 4 to 6) that quantitatively represent the shooting conditions and the brightness of the skin color area is corrected based on the determination result, appropriate gradation conversion processing according to the determination result can be performed. Furthermore, by setting in advance a range over which the brightness of the skin color area is corrected, even more appropriate brightness correction can be performed.
  • FIG. 30 shows a configuration of a digital camera 200 to which the imaging device of the present invention is applied.
• As shown in Fig. 30, the digital camera 200 includes a CPU 201, an optical system 202, an imaging sensor unit 203, an AF calculation unit 204, a WB calculation unit 205, an AE calculation unit 206, a lens control unit 207, an image processing unit 208, a display unit 209, a recording data creation unit 210, a recording medium 211, a scene mode setting key 212, a color space setting key 213, a release button 214, and other operation keys 215.
  • the CPU 201 comprehensively controls the operation of the digital camera 200.
  • the optical system 202 is a zoom lens, and forms a subject image on a charge-coupled device (CCD) image sensor in the imaging sensor unit 203.
• The imaging sensor unit 203 photoelectrically converts the optical image with the CCD image sensor, converts it into a digital signal (A/D conversion), and outputs it.
  • the image data output from the imaging sensor unit 203 is input to the AF calculation unit 204, the WB calculation unit 205, the AE calculation unit 206, and the image processing unit 208.
  • the AF calculation unit 204 calculates and outputs the distances of the AF areas provided at nine places in the screen. The determination of the distance is performed by determining the contrast of the image, and the CPU 201 selects a value at the closest distance among them and sets it as the subject distance.
  • the WB calculation unit 205 calculates and outputs a white balance evaluation value of the image.
• The white balance evaluation value is the gain value required to match the RGB output values of a neutral subject under the light source at the time of shooting, and is calculated as the ratios R/G and B/G with the G channel as the reference. The calculated evaluation value is input to the image processing unit 208 and used to adjust the white balance of the image.
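• A sketch of applying the evaluation values is shown below, assuming the R/G and B/G values are used directly as the R and B gains with G as the unity reference; whether the ratio or its reciprocal serves as the gain is not stated and is an assumption here.

#include <stddef.h>

/* Sketch of white balance application: G is the reference channel (gain
   1.0), and the R/G and B/G evaluation values are assumed to be usable
   directly as the R and B gains. */
void apply_white_balance(double *r, double *b, size_t n,
                         double gain_rg, double gain_bg)
{
    for (size_t k = 0; k < n; k++) {
        r[k] *= gain_rg;     /* R/G gain */
        b[k] *= gain_bg;     /* B/G gain */
    }
}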
• The AE calculation unit 206 also calculates and outputs an appropriate exposure value from the image data, and the CPU 201 calculates an aperture value and a shutter speed value so that the calculated appropriate exposure value matches the current exposure value.
The aperture value is output to the lens control unit 207, and the corresponding aperture diameter is set. The shutter speed value is output to the imaging sensor unit 203, and the corresponding CCD integration time is set.
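As a worked illustration of the aperture/shutter trade-off, the sketch below uses the standard exposure-value relation EV = log2(N²/t); the patent itself does not give this formula, so it is included only as a plausible reading of how the two values can be solved for.

```python
import math

def shutter_time(ev, f_number):
    """From EV = log2(N^2 / t): the CCD integration time t (seconds)
    for a chosen aperture N that realizes the target exposure value."""
    return f_number ** 2 / 2.0 ** ev

def f_number_for(ev, t):
    """Conversely, the aperture N for a chosen integration time t."""
    return math.sqrt(t * 2.0 ** ev)

# e.g. shutter_time(12, 4.0) -> 16/4096 = 1/256 s
```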
The image processing unit 208 performs white balance processing, CCD filter array interpolation processing, color conversion, primary gradation conversion, sharpness correction, and the like on the captured image data; then, as in the embodiment described above, it calculates the indices (index 1 to 6) for specifying the shooting condition, determines the shooting condition based on the calculated indices, and performs the gradation conversion processing determined on the basis of the determination result, thereby converting the data into a preferable image. JPEG compression and other conversions are then performed. The JPEG-compressed image data is output to the display unit 209 and the recording data creation unit 210.
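The order of these operations can be sketched as a simple chain; the helper below is purely illustrative and assumes nothing beyond each stage being an image-to-image function.

```python
def develop(raw_image, steps):
    """Run the processing chain of unit 208 in order. Each step is a
    callable taking and returning an image: white balance, CCD filter array
    interpolation (demosaic), color conversion, primary gradation
    conversion, sharpness correction, then index-based gradation conversion."""
    img = raw_image
    for step in steps:
        img = step(img)
    return img
```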
The display unit 209 displays the captured image data and various information on a liquid crystal display in accordance with instructions from the CPU 201.
The recording data creation unit 210 formats the JPEG-compressed image data and the various shooting information input from the CPU 201 into an Exif (Exchangeable Image File Format) file, and records the data on the recording medium 211. Within the Exif file there is a portion called the maker note, a space in which each manufacturer can record arbitrary information, and the judgment result of the shooting conditions as well as index 4, index 5, and index 6 can also be recorded there.
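As an illustration only: a sketch of recording such values in a maker note using the third-party piexif library. The JSON payload layout is our own invention (real maker notes use proprietary binary formats), and note that piexif.insert rewrites the file's Exif block wholesale.

```python
import json
import piexif  # third-party Exif library (pip install piexif)

def record_maker_note(jpeg_path, judgment, index4, index5, index6):
    """Pack the shooting-condition judgment and indices 4-6 into the Exif
    maker note field (tag piexif.ExifIFD.MakerNote) of a JPEG file."""
    payload = json.dumps({"judgment": judgment, "index4": index4,
                          "index5": index5, "index6": index6}).encode()
    exif_dict = {"0th": {}, "Exif": {piexif.ExifIFD.MakerNote: payload},
                 "GPS": {}, "1st": {}, "thumbnail": None}
    piexif.insert(piexif.dump(exif_dict), jpeg_path)
```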
The shooting scene mode can be switched by a user setting; three modes are selectable: a normal mode, a portrait mode, and a landscape mode. The user operates the scene mode setting key 212 to switch to the portrait mode when the subject is a person, or to the landscape mode when the subject is a landscape, so that primary gradation conversion suitable for the subject is performed.
The digital camera 200 records the selected shooting scene mode information by adding it to the maker note portion of the image data file. The position information of the AF area selected as the subject is likewise recorded in the image file.
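A minimal illustration of tying the selected scene mode to a primary gradation conversion; the curve names are hypothetical placeholders, not values defined by the patent.

```python
# Hypothetical mapping from the selected shooting scene mode to the
# primary gradation conversion applied by the image processing unit 208.
PRIMARY_GRADATION = {
    "normal":    "standard_curve",
    "portrait":  "soft_curve",           # gentler contrast for skin tones
    "landscape": "high_contrast_curve",  # punchier tone for scenery
}

def select_primary_curve(scene_mode):
    return PRIMARY_GRADATION.get(scene_mode, "standard_curve")
```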
The user can set the output color space using the color space setting key 213. As the output color space, sRGB (IEC 61966-2-1) or Raw can be selected. When sRGB is selected, the image processing according to the present embodiment is executed. When Raw is selected, the image processing according to the present embodiment is not performed, and the image is output in a color space unique to the CCD.
As described above, an index that quantitatively indicates the shooting condition of the shot image data is calculated, the shooting condition is determined based on the calculated index, the gradation adjustment method for the captured image data is decided according to the determination result, and the gradation adjustment amount (gradation conversion curve) of the captured image data is determined; accordingly, the brightness of the subject can be corrected appropriately. Because appropriate gradation conversion processing according to the shooting conditions is performed inside the digital camera 200, a preferable image can be output even when the digital camera 200 and a printer are connected directly, without going through a personal computer.

Moreover, the brightness of the skin color area of the photographed image data is corrected according to the index representing the photographing condition, based on the accuracy verification result for the skin color area, so that the brightness of the subject can be corrected appropriately and continuously without depending too much on the adoption feasibility determination or on the calculated value of the brightness of the skin color area.

Furthermore, the shooting condition of the shot image data is determined, and the brightness of the skin color area is corrected based on the determination result, so that appropriate gradation conversion processing can be performed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Color Image Communication Systems (AREA)

Abstract

This invention relates to an image processing device that calculates a value indicating the brightness of the skin color area in acquired image data and corrects the calculated brightness value toward a predetermined reproduction target value. The image processing device calculates an index indicating the imaging condition of the acquired image data and, according to this calculated index, calculates at least one of the following: a corrected value of the reproduction target value, a corrected value of the brightness of the skin color area, and a corrected value of the calculated difference between the value indicating the brightness of the skin color area and the reproduction target value. According to one of the calculated corrected values, the acquired image data is subjected to gradation conversion processing.
PCT/JP2005/016382 2004-09-22 2005-09-07 Procede de traitement d'images, dispositif de traitement d'images, dispositif d'imagerie et programme de traitement d'images Ceased WO2006033234A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004274868A JP2006093946A (ja) 2004-09-22 2004-09-22 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム
JP2004-274868 2004-09-22

Publications (1)

Publication Number Publication Date
WO2006033234A1 true WO2006033234A1 (fr) 2006-03-30

Family

ID=36089997

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/016382 Ceased WO2006033234A1 (fr) 2004-09-22 2005-09-07 Procede de traitement d'images, dispositif de traitement d'images, dispositif d'imagerie et programme de traitement d'images

Country Status (2)

Country Link
JP (1) JP2006093946A (fr)
WO (1) WO2006033234A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114742716A (zh) * 2022-02-18 2022-07-12 阿里巴巴(中国)有限公司 图像处理方法、装置、设备和存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006325015A (ja) * 2005-05-19 2006-11-30 Konica Minolta Photo Imaging Inc 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム
JP7019387B2 (ja) * 2017-11-17 2022-02-15 キヤノン株式会社 画像処理装置、画像処理方法、及びプログラム、並びに画像形成装置


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1117963A (ja) * 1997-06-27 1999-01-22 Fuji Xerox Co Ltd 画像処理装置
JP2001092956A (ja) * 1999-09-22 2001-04-06 Nec Corp 自動色補正装置及び自動色補正方法並びにその制御プログラムを記録した記録媒体
JP2001186323A (ja) * 1999-12-24 2001-07-06 Fuji Photo Film Co Ltd 証明写真システム及び画像処理方法


Also Published As

Publication number Publication date
JP2006093946A (ja) 2006-04-06


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase