
WO2008056777A1 - Authentication system and authentication method - Google Patents

Authentication system and authentication method

Info

Publication number
WO2008056777A1
WO2008056777A1 · PCT/JP2007/071807 · JP2007071807W
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
face
local
shape
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2007/071807
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroshi Yamato
Yuichi Kawakami
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Konica Minolta Inc
Original Assignee
Konica Minolta Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta Inc filed Critical Konica Minolta Inc
Priority to JP2008543143A priority Critical patent/JP4780198B2/en
Publication of WO2008056777A1 publication Critical patent/WO2008056777A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present invention relates to an authentication system and an authentication method for performing face authentication.
  • Among biometric authentication technologies (biometrics), face authentication technology, which performs authentication based on the face, is well known. Face authentication is a non-contact authentication method, and demand for it is very high in offices because of its convenience.
  • Three-dimensional (3D) face authentication, which uses three-dimensional shape information (3D information) of the face, is also known.
  • However, because this 3D face authentication uses the entire face, it suffers from the partial-occlusion problem: when part of the face is concealed, the loss of data at the concealed location lowers the authentication accuracy during the authentication process, and this problem cannot be solved.
  • In addition, because dense 3D information is used, there is the problem that the authentication process takes time.
  • Patent Document 1 discloses the following technique.
  • the reference point of the face is extracted by examining the change in curvature of the face surface.
  • the reference points of the face include a point where the absolute value of curvature is maximum (for example, the tip of the nose) and points where the absolute value of curvature is maximum near the center of the sides of the face (for example, the ear hole points).
  • the face orientation (inclination) is then corrected by calculating a reference posture based on these face reference points.
  • the corrected 3D shape data of the face is approximated by planes of arbitrary size, and the unit normal vector and area of each plane are obtained.
  • a normal distribution, in which each unit normal vector is weighted by its plane area, is used as the feature amount, and authentication is performed.
  • however, the technique disclosed in Patent Document 1 is premised on using the entire three-dimensional shape, that is, so-called global patch information. Because it is therefore necessary to determine the reference direction of the face, this reference direction cannot be determined when the face is partially hidden due to the posture changes mentioned above, and the subsequent authentication process cannot be executed.
  • Patent Document 2 discloses the following technique. First, color information is used to extract 3D shape information and color information of only the face portion of a person, and face data is obtained by combining the 3D shape information and color information. Next, the center of the entire three-dimensional shape of this face data (collation face data) and dictionary face data prepared in advance is obtained, and translated so that the positions of these centroids coincide. At the same time, rotated face data is obtained by slightly rotating around the matched center of gravity. Then, a minimum error is obtained by calculating an error between the rotated face data and the dictionary face data, and determination (authentication) is performed based on the minimum error.
  • Patent Document 1: Japanese Patent Laid-Open No. 5-215531
  • Patent Document 2: Japanese Patent Laid-Open No. 9-259271
  • the present invention has been made in view of the above circumstances, and it is an object of the present invention to provide an authentication system and an authentication method capable of reducing the decrease in authentication accuracy and improving the authentication speed.
  • according to the present invention, a plurality of local three-dimensional regions (three-dimensional local regions) of the face of the person to be authenticated are determined, the three-dimensional feature amount of the face in each of these three-dimensional local regions is calculated as a three-dimensional face feature value, and this value is compared with a comparison face feature value prepared in advance to perform an authentication operation for the person to be authenticated.
  • FIG. 1 is a schematic configuration diagram showing an example of an authentication system according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing an example of the overall configuration of a controller in the authentication system.
  • FIG. 3 is a functional block diagram for explaining a face authentication function provided in the controller.
  • FIG. 4 is a schematic diagram showing an example of coordinates of feature points in each feature part of a face.
  • FIG. 5 is a schematic diagram for explaining calculation of three-dimensional coordinates of each characteristic part.
  • FIG. 6 is a schematic diagram showing an example of a standard model.
  • FIG. 7 is a three-dimensional graph for conceptually explaining the Gabor filter.
  • FIG. 8 is a schematic diagram for explaining a method of setting a rectangular area from each feature point of 3D face part shape data.
  • FIG. 9 is a schematic diagram for explaining a method of extracting (determining) a local patch region from 3D face part shape data using the rectangular region information set in FIG. 8.
  • FIG. 10 is a schematic diagram for explaining a method of setting a rectangular area from each feature point of 3D face part shape data.
  • FIG. 11 is a schematic diagram showing an example of each 3D point and each local patch region in 3D face part shape data.
  • FIG. 12 (A), (B), and (C) are diagrams for explaining the intersection determination.
  • FIG. 13 is a schematic diagram showing an example of a Bezier curved surface in extracting a three-dimensional face feature quantity.
  • FIG. 14 is a flowchart showing an example of face authentication operation according to the present embodiment.
  • FIG. 15 is a flowchart showing an example of the operation in step S9 of FIG.
  • FIG. 16 is a functional block diagram for explaining the face authentication function provided in another controller.
  • FIG. 17 is a flowchart showing an example of the operation of the authentication system shown in FIG.
  • FIG. 1 is a schematic configuration diagram showing an example of an authentication system 1 according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram illustrating an example of the overall configuration of the controller 10.
  • FIG. 3 is a functional block diagram for explaining the face authentication function provided in the controller 10.
  • FIG. 4 is a schematic diagram showing an example of the coordinates of feature points in each feature part of the face.
  • the authentication system 1 performs personal authentication by face (hereinafter referred to as face authentication), and includes a controller 10 and two photographing cameras (two-dimensional cameras; 2D cameras) CA1 and CA2 (hereinafter simply referred to as “cameras”).
  • the cameras CA1 and CA2 are arranged so that the face of the authentication target person HM can be photographed from different positions (angles) with respect to the face position of the authentication target person HM.
  • the appearance information of the person HM to be authenticated obtained by this photographing, that is, two face images, is transmitted to the controller 10 via a communication line.
  • the image data communication method between the cameras CA1 and CA2 and the controller 10 is not limited to the wired method, and may be a wireless method.
  • the face image may be an image including a background as well as the face.
  • as shown in FIG. 2, the controller 10 is embodied by an information processing device such as a personal computer (PC), for example, and includes a CPU 2, a storage unit 3, a media drive 4, a display unit 5 such as a liquid crystal display, an input unit 6 including a keyboard 6a and a mouse 6b as a pointing device, and a communication unit 7 such as a network card.
  • the storage unit 3 includes a plurality of storage media such as a hard disk drive (HDD) 3a and a RAM (semiconductor memory) 3b.
  • the media drive 4 reads information recorded on a portable storage medium 8 such as a CD-ROM (Compact Disc Read Only Memory), DVD (Digital Versatile Disk), flexible disk, or memory card. For this purpose, the controller 10 is equipped with a drive device capable of reading the storage medium used, such as a CD-ROM drive device, DVD drive device, flexible disk drive device, or memory card drive device.
  • the information supplied to the controller 10 is not limited to being supplied via the recording medium 8, and may be supplied via a network such as a LAN (Local Area Network) or the Internet.
  • the controller 10 may be a dedicated controller (main unit control device) manufactured for this system. It has the functions described below.
  • the controller 10 includes an image input unit 11, a face area detection unit 12, a face part detection unit 13, a face part 3D calculation unit 14, a posture / light source correction unit 15, a standard model storage unit 16, a two-dimensional authentication unit 17, a face area 3D calculation unit 18, a three-dimensional authentication unit 19, a similarity calculation unit 20, a registered data storage unit 21, and a determination unit 22.
  • the image input unit 11 inputs a face image of the person HM to be authenticated obtained by photographing with the cameras CA1 and CA2 from the cameras CA1 and CA2 to the controller 10.
  • the image input unit 11 includes a first image input unit 11a and a second image input unit 11b corresponding to the cameras CA1 and CA2, and the face images transmitted from the cameras CA1 and CA2 are input to them respectively. Therefore, a total of two face images are input from the cameras CA1 and CA2.
  • the authentication system 1 of this embodiment is configured to perform two-dimensional authentication (2D authentication) and three-dimensional authentication (3D authentication) using the input face images (this is referred to as multiple authentication), and to make a determination based on these results.
  • for this purpose, a two-dimensional image (2D image) and three-dimensional shape data (3D shape data) are required.
  • as the input device for acquiring the 2D image and the 3D shape data (a 2D image / 3D measurement input device), one approach is to use multiple (2 to N) general 2D cameras (a stereo camera arrangement). In this case, the 3D shape of the face is calculated from two or more 2D images.
  • alternatively, the 3D shape data can be acquired by using a 3D measuring device (3D camera) such as a non-contact 3D digitizer based on the light-section method.
  • in that case, the 3D shape data is acquired directly by the 3D measuring device such as the non-contact 3D digitizer described above, and there is no need to calculate it from 2D images.
  • if the 3D measuring device is of a type that combines a camera for acquiring 3D shape data with a camera for acquiring 2D images, it is not necessary to prepare a separate camera for acquiring 2D images as described above.
  • the face area detection unit 12 detects (identifies and extracts) a face area from the face image input to the image input unit 11.
  • the face area detection unit 12 includes a first face area detection unit 12a and a second face area detection unit 12b corresponding to the first image input unit 11a and the second image input unit 11b of the image input unit 11, and detects a face area (face area image) from the face images transmitted from the first image input unit 11a and the second image input unit 11b, respectively. More specifically, the face area detection unit 12 extracts (cuts out) the area of the face image in which the face exists, for example by performing template matching using a standard face image prepared in advance.
  • alternatively, face area detection may be performed by a so-called neural network that is trained on images of the face regions of a plurality of people, stores the results as a learning dictionary, and compares newly input face images against it.
  • alternatively, a Viola-Jones detector may be used, which stores a cascade of face-area classifiers and applies them in stages, reducing the number of classifiers used as the detection progresses (for example, P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, Kauai, HI, December 2001). Note that this method can be configured by combining a plurality of simple discriminant functions using simple image feature amounts.
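Purely as an illustration of this kind of cascade detection (a minimal sketch, not the patent's implementation), face area detection with OpenCV's pre-trained Viola-Jones cascade might look as follows; the cascade file and parameter values are generic assumptions.

```python
# Hedged sketch: Viola-Jones-style face area detection with OpenCV's
# pre-trained Haar cascade. Cascade file and parameters are generic choices,
# not values from the patent.
import cv2

def detect_face_areas(image_path: str):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # A cascade of simple (Haar-like) feature classifiers applied in stages.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Each detection is (x, y, w, h); later stages reject most non-face
    # windows cheaply, which is what makes the cascade fast.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [img[y:y + h, x:x + w] for (x, y, w, h) in faces]
```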
  • the first face area detection unit 12a and the second face area detection unit 12b may each detect the face area individually, or only one of them may detect the face area.
  • alternatively, the first face area detection unit 12a and the second face area detection unit 12b may individually detect the face areas and the more accurate detection result may be adopted; the face area can also be detected with high accuracy by a corresponding-area search process. The same applies to the face part detection unit 13.
  • the face part detection unit 13 detects (extracts or calculates) characteristic parts of the face (referred to as feature parts) from the image of the face area detected by the face area detection unit 12. Detecting a characteristic part of the face is called “face part detection”.
  • the face part detection unit 13 includes a first face part detection unit 13a and a second face part detection unit 13b, and the positions of the feature parts (coordinates on the image) are detected from the face area images transmitted from the first face area detection unit 12a and the second face area detection unit 12b, respectively.
  • the facial feature parts include the eyes (e.g., the center of the pupil, the corner of the eye, the inner corner of the eye, the upper and lower edges of the pupil), the eyebrows (e.g., both ends and the middle of the eyebrows), the nose (e.g., the edges of the nose, the lower center of the nose, the nostrils), the mouth (e.g., the left and right corners of the mouth, the centers of the upper and lower lips), and the tip of the lower jaw.
  • the face part detection unit 13 calculates the coordinates of the feature points Q1 to Q23 of each feature part, as shown in FIG. 4.
  • Feature points Q1, Q3 and Q2, Q4 are the ends of the left and right eyes; feature points Q7, Q5 and Q8, Q6 are the upper and lower edges of the left and right pupils; feature points Q9, Q13 and Q10, Q14 are both ends of the left and right eyebrows; feature points Q11 and Q12 are the approximate centers of the left and right eyebrows; feature points Q15, Q16 and Q17, Q18 are the edges of the nose; feature point Q19 is the lower center of the nose; feature points Q20 and Q21 are both ends of the mouth; and feature points Q22 and Q23 are the upper and lower centers of the lips. It should be noted that the feature points to be extracted can be increased or decreased as appropriate. Each feature part can be detected by various methods such as template matching using a standard template of the feature part.
  • the coordinates of the calculated feature points Q1 to Q23 are expressed as two-dimensional coordinates on each image input from the cameras CA1 and CA2.
  • for example, the coordinate values of the feature point Q20 are obtained in each of the two images G1 and G2 (see FIG. 5 described later). More specifically, the coordinates (x1, y1) of the feature point Q20 on the image G1 are calculated with an end point of the images G1 and G2 as the origin O, and the coordinates (x2, y2) of the feature point Q20 on the image G2 are calculated in the same way.
  • the face part detection unit 13 calculates the coordinates of each feature point from the image of the face area, and also acquires the luminance value of each pixel in the region having feature points as vertices (referred to as a feature region) as information of that region (referred to as texture information).
  • since two images are input, the face part detection unit 13 takes, for example, the average of corresponding pixels in the corresponding feature regions of the two images (images G1 and G2), and uses the average luminance value of each pixel as the texture information of the feature region.
  • the method of detecting the facial parts is not limited to this.
  • as a face part detection method, a method such as that proposed in Japanese Patent Laid-Open No. 9-102043 (“Detection of element positions in an image”) may be employed.
  • alternatively, a method of detecting a facial part from its shape using auxiliary light, a method using learning by a neural network, or a method using frequency analysis by the Gabor wavelet transform or an ordinary (non-Gabor) wavelet transform may be adopted.
  • the three-dimensional face data that combines the 3D coordinates M(j) of each feature point Qj is called “3D face part shape data”.
  • the symbol “w” in equation (1-1) is a non-zero constant (w ≠ 0), and the symbol “P” represents the perspective projection matrix (camera parameter Pi).
  • equation (1-1) is expressed by the following expression (1-3).
  • the perspective projection matrix P is a 3 × 4 matrix, and if each of its components is written as in the following equation (1-4), then from (1-1) the relationship between the coordinates in space and on the image becomes as shown in the following equations (1-5) and (1-6).
  • FIG. 5 is a schematic diagram for explaining the calculation of the three-dimensional coordinates of each feature part.
  • P_ij represents the (i, j) component of P.
  • equation (1-9) is a set of simultaneous linear equations in X, Y, and Z, so the coordinates (X, Y, Z) of the feature point in three-dimensional space can be obtained from it.
  • in (1-9), four equations are given for the three unknowns X, Y, and Z; this means that the four components (x1, y1) and (x2, y2) are not independent. The coordinates of the other feature points in space are calculated in the same way.
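As a reading aid only, the following is a minimal sketch (my own formulation, not the patent's equations (1-1) to (1-9)) of recovering the 3D coordinates (X, Y, Z) of one feature point from its image coordinates (x1, y1) and (x2, y2) in two calibrated cameras: each camera contributes two linear equations in the three unknowns, and the resulting four equations are solved in the least-squares sense.

```python
# Hedged sketch (my own formulation, not the patent's equations): triangulate
# the 3D coordinates (X, Y, Z) of one feature point from its image
# coordinates in two calibrated cameras. Each camera contributes two linear
# equations in the unknowns, giving four equations for three unknowns.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, pt1, pt2) -> np.ndarray:
    """P1, P2: 3x4 perspective projection matrices; pt1, pt2: (x, y) pixels."""
    rows = []
    for P, (x, y) in ((P1, pt1), (P2, pt2)):
        rows.append(x * P[2] - P[0])   # x * (row3 . M) - (row1 . M) = 0
        rows.append(y * P[2] - P[1])   # y * (row3 . M) - (row2 . M) = 0
    A = np.stack(rows)                 # 4x4 system in homogeneous M = (X, Y, Z, 1)
    # Least-squares solution of A M = 0: the right singular vector belonging
    # to the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    M = vt[-1]
    return M[:3] / M[3]                # (X, Y, Z)
```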
  • the posture / light source correction unit 15 performs posture variation correction and light source variation correction on the texture information calculated by the face part detection unit 13.
  • Posture fluctuation correction corrects the effect on the texture due to the difference in face posture, that is, orientation (tilt).
  • Light source fluctuation correction corrects the effect on the texture due to the difference in the direction (tilt) of the light source relative to the face.
  • the posture / light source correction unit 15 uses a standard model (standard three-dimensional model; see FIG. 6 described later), which is a general (standard) face model prepared in advance, for the posture variation correction and the light source variation correction of this texture information.
  • <Posture variation correction> In the posture variation correction, the 3D face part shape data (the 3D coordinates M(j) of each feature point Qj) is corrected as follows.
  • the posture / light source correction unit 15 corrects the three-dimensional position so that the 3D face part shape data, that is, the 3D shape, most closely matches the 3D shape of the standard model (the shape of the 3D face part shape data itself does not change).
  • for example, when the face represented by the 3D face part shape data is facing sideways, the posture / light source correction unit 15 performs so-called model fitting based on the standard model and corrects the position so that the sideways-facing face is aligned with the orientation of the standard model face (the reference position), for example facing the front. This position correction is based on the posture parameter t (pose parameter) shown in the following equation (2).
  • in equation (2), the posture parameter is t = (s, φ, θ, ψ, tx, ty, tz)^T, where the symbol “s” represents the scale conversion index, the symbols “φ, θ, ψ” represent transformation parameters indicating the rotational displacement (tilt), the symbols “tx, ty, tz” represent transformation parameters indicating the translational displacement along the three orthogonal axes, and the superscript “T” represents transposition.
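As a hedged sketch (the rotation order and axis conventions below are my assumptions; the text above does not fix them), applying such a posture parameter to 3D points amounts to a scaling, three elementary rotations, and a translation:

```python
# Hedged sketch: apply a posture parameter t = (s, phi, theta, psi,
# tx, ty, tz)^T to 3D feature point coordinates. The rotation order (Z-Y-X)
# and axis conventions are assumptions; the quoted text does not fix them.
import numpy as np

def apply_pose(points: np.ndarray, s, phi, theta, psi, tx, ty, tz) -> np.ndarray:
    """points: (N, 3) array of 3D feature point coordinates M(j)."""
    cx, sx = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cz, sz = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                        # rotational displacement (tilt)
    t = np.array([tx, ty, tz])              # translational displacement
    return s * (points @ R.T) + t           # scale, rotate, then translate
```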
  • the two-dimensional texture (2D texture) of each feature region acquired by the face part detection unit 13 is then corrected so that the texture faces the front direction (reference direction).
  • in other words, the texture information corresponding to the case where the face is photographed from the front is reconstructed, and a properly normalized texture image is created.
  • the texture information correction is not limited to the above method.
  • for example, a method may be adopted in which the texture (texture image) of each feature region acquired by the face part detection unit 13 is pasted (mapped) onto the corresponding region (a polygon, described later) of the standard model so that, as above, a frontal textured face image is obtained. This makes it possible to handle the texture information without being affected by differences in posture.
  • the front texture face image obtained by the correction may be projected onto cylindrical coordinates (cylindrical surface) arranged around the standard model so as to be easily compared with each other.
  • the texture information of the projection image obtained by this projection is not affected by posture variations, nor by changes in facial shape due to changes in facial expression; because it is pure facial texture information, it is very useful as information for authentication.
  • in the light source variation correction, the luminance is corrected for each feature region. More specifically, the luminance value of each pixel (node) in each feature region acquired by the face part detection unit 13 is corrected, using the tilt angle (orientation) inside the feature region as a parameter, so that it becomes equal to the luminance of the corresponding pixel of the standard model.
  • the standard model storage unit 16 stores information on the standard model of the face in advance.
  • FIG. 6 is a schematic diagram showing an example of a standard model.
  • this standard model is composed of vertex data and polygon data.
  • the vertex data is a set of coordinates of the vertex U of the feature part in the standard model, and has a one-to-one correspondence with the 3D coordinate of each feature point Qj.
  • Polygon data is obtained by dividing the surface of a standard model into small polygons, for example, polygons such as triangles and quadrangles, and expressing these polygons as numerical data. Each polygon includes pixel luminance information and the like used in the light source fluctuation correction.
  • the standard model may be average face data obtained by averaging the data of a plurality of people's faces.
  • the vertex of each polygon of the standard model may be configured by using an intermediate point other than the feature point Qj together with the feature point Qj. This midpoint is calculated by interpolation.
  • the two-dimensional authentication unit (2D authentication unit) 17 calculates a two-dimensional face feature value (2D face feature value; local 2D face feature value) from the texture information of each feature region that has undergone the posture variation correction and the light source variation correction in the posture / light source correction unit 15.
  • the 2D authentication unit 17 includes a corrected image acquisition unit 17a and a 2D feature quantity extraction unit 17b.
  • the corrected image acquisition unit 17a acquires the corrected image (referred to as a corrected texture image) obtained by the posture / light source correction unit 15, in which the texture image has been subjected to the posture variation correction and the light source variation correction. That is, the corrected image from the posture / light source correction unit 15 is input to the corrected image acquisition unit 17a.
  • the 2D feature amount extraction unit 17b extracts a 2D face feature amount from the corrected texture image acquired by the corrected image acquisition unit 17a.
  • This 2D face feature extraction is performed by a method that uses Gabor wavelet transform, which is a technique that extracts local grayscale information (contour lines in a specific direction, etc.) as a feature.
  • This Gabor wavelet transform can be used to detect the above-mentioned facial part, and can also be used to extract the grayscale information here. More specifically, the grayscale information obtained by applying the Gabor filter to the corrected texture image with the 2D coordinate point of the corrected texture image as a reference is extracted as a 2D face feature amount.
  • FIG. 7 is a three-dimensional graph for conceptually explaining the Gabor filter.
  • the Gabor filter is a spatial filter using a kernel in which a sine function (imaginary part) and a cosine function (real part) are localized by a Gaussian function, and it performs a transformation (the Gabor wavelet transform) that can extract local grayscale information of an image with high contrast.
  • the Gabor wavelet transform fixes the shape of the kernel and creates kernels with various periods by expanding and contracting it, and extracts the features corresponding to each spatial period (Gabor feature values; here, grayscale information).
  • the feature vector (two-dimensional feature vector; 2D feature vector) representing the feature quantity of the spatial period is an array of Gabor wavelet coefficients having different size and direction characteristics.
  • the Gabor wavelet transform is a function that minimizes the uncertainty of position and frequency, and is expressed by the following equation (3).
  • the k vector in the above equation (3) is a constant that determines the wavelength and direction of the wave.
  • the second term in the brackets is added so that the DC component of the function becomes 0 (zero), satisfying the wavelet reconstruction condition, that is, so that the following equation (4) holds for the Fourier transform.
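For illustration, here is a minimal sketch, under my own parameter choices, of a Gabor kernel of the kind described around equations (3), (4), and (9): a plane wave with wave vector k, localized by a Gaussian envelope, with a DC-compensation term so that the kernel integrates to zero.

```python
# Hedged sketch of a Gabor kernel: a plane wave with wave vector (kx, ky),
# localized by a Gaussian envelope, minus a DC-compensation term so the
# kernel integrates to zero. Parameter values are illustrative only.
import numpy as np

def gabor_kernel(kx: float, ky: float, sigma: float = np.pi, size: int = 32):
    k2 = kx * kx + ky * ky
    ys, xs = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    envelope = (k2 / sigma ** 2) * np.exp(-k2 * (xs ** 2 + ys ** 2) / (2 * sigma ** 2))
    # Complex carrier minus its DC component (the "second term in brackets").
    carrier = np.exp(1j * (kx * xs + ky * ys)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

def gabor_features(patch: np.ndarray, kernels) -> np.ndarray:
    """Inner products of an image patch with kernels of several scales and
    orientations; the resulting complex responses form the 2D feature vector.
    The patch and the kernels must have the same size."""
    return np.array([np.sum(patch * np.conj(k)) for k in kernels])
```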
  • the face area 3D calculation unit 18 calculates a high-density 3D shape of the face (referred to as 3D face dense shape data) from the images of the face area detected by the face area detection unit 12, that is, in this embodiment, from the stereo images captured by the stereo cameras.
  • the “high-density data” referred to here is not limited to data of only the facial feature parts, such as the eyes and nose, detected by the face part detection unit 13.
  • each of the dense data acquisition points that make up the 3D face dense shape data is called a “3D point” (or 3D measurement point); the 3D face dense shape data is face shape data composed of a plurality of such 3D points.
  • phase-only correlation method is one of the correlation calculation methods using Fourier transform. Two Fourier images are normalized for each spectrum and then synthesized. In other words, in the phase-only correlation method, when two images are given, the two-dimensional discrete Fourier transform of each image is normalized by the amplitude component, and a composite phase spectrum is obtained by calculating these products. Then, the inverse Fourier transform is performed on this. If the two images are similar, the POC function has a very sharp peak. The height of the correlation peak is useful as a measure of image similarity.
  • the coordinates of the peak correspond to the relative displacement of the two images. Since the phase-only correlation method has these characteristics, corresponding points between images can be obtained with high precision even under luminance fluctuation and noise. In other words, the phase-only correlation method is a process for searching for matching points between different images with high accuracy, that is, matching. Highly accurate 3D face dense shape data is then obtained by performing 3D reconstruction processing on the acquired corresponding points. As described above, a plurality of 2D cameras is assumed in the present embodiment, so the high-density 3D shape is calculated from a plurality of images by the phase-only correlation method; however, when a 3D measurement device is used, a high-density 3D shape can be obtained without calculating it from a plurality of images, and it is not necessary to use such a method.
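As an illustration of the phase-only correlation described above (a minimal sketch, not the patent's implementation): the 2D DFTs of two images are normalized to unit amplitude, multiplied, and inverse-transformed; the peak height serves as a similarity measure and the peak location gives the relative displacement.

```python
# Hedged sketch of phase-only correlation (POC): normalize the 2D DFTs of two
# images to unit amplitude, multiply, inverse-transform. The peak height acts
# as a similarity measure; the peak position gives the relative displacement.
import numpy as np

def phase_only_correlation(img1: np.ndarray, img2: np.ndarray):
    F = np.fft.fft2(img1)
    G = np.fft.fft2(img2)
    cross = F * np.conj(G)
    spectrum = cross / (np.abs(cross) + 1e-12)    # keep phase, drop amplitude
    poc = np.real(np.fft.ifft2(spectrum))
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    # Peaks in the upper half of each axis correspond to negative shifts.
    shift = [p - n if p > n // 2 else p for p, n in zip(peak, poc.shape)]
    return poc[peak], tuple(shift)                # (similarity, (dy, dx))
```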
  • the corresponding point search can be performed in a coarse-to-fine manner as follows. First, reduced images are created as a multi-resolution image. Second, a corresponding point search is executed at the pixel level on the reduced images. Third, the corresponding point candidates are narrowed down and the reduced image is enlarged by a predetermined amount. Fourth, a corresponding point search is executed at the pixel level around the candidates. The third and fourth operations are repeated until the image reaches the same size as the original image before reduction. Finally, a sub-pixel-level corresponding point search is performed at the same size as the original image.
  • the three-dimensional authentication unit (3D authentication unit) 19 calculates three-dimensional face feature values (3D face feature values; local 3D face feature values) based on the 3D face dense shape data calculated by the face area 3D calculation unit 18 and the 3D face part shape data calculated by the face part 3D calculation unit 14.
  • the 3D authentication unit 19 includes a 3D local patch extraction unit (3D local patch extraction unit) 19a and a 3D feature quantity extraction unit (3D feature quantity extraction unit) 19b.
  • the 3D local patch extraction unit 19a extracts (calculates) a 3D local patch region from the 3D face dense shape data and the 3D face part shape data (feature part).
  • the three-dimensional local patch region is simply referred to as “local patch region”.
  • FIG. 8 is a schematic diagram for explaining a method of setting a rectangular area from each feature point of 3D face part shape data.
  • FIG. 9 is a schematic diagram for explaining a method of extracting (determining) a local patch region from 3D face part shape data using the rectangular region information set in FIG.
  • FIG. 10 is a schematic diagram for explaining a method of setting a rectangular area from each feature point of the 3D face part shape data.
  • FIG. 11 is a schematic diagram showing an example of each 3D point and each local patch region in the 3D face part shape data.
  • the 3D coordinates M(j) (referred to as feature point coordinates) of each feature point Qj in each feature part of the 3D face part shape data exist on the high-density 3D shape (the 3D face dense shape data).
  • the local patch area is an area defined by a relative relationship from the feature point coordinates of the 3D face part shape data to the 3D face dense shape data. More specifically, for example, as shown in FIG. 8, on a plane T (local patch extraction plane) defined by three feature points Qj, namely a right-eye point a, a right-eye point b, and a right-nose point c, a rectangular region S whose four corner points are defined as linear sums of the vectors ca and cb is set as, for example, the right-cheek region.
  • FIG. 9 shows conceptual 3D points when the 3D face dense shape data is looked down on from above the face. The local patch extraction plane may also be determined from four or more feature points.
  • each local patch extraction plane is set from the feature point coordinates 201 of the 3D face part shape data, and a predetermined number of rectangular areas, for example a rectangular area 211 (cheek part) and a rectangular area 212 (forehead part), are set on each local patch extraction plane.
  • the rectangular area may be arbitrarily set to an area including a facial feature such as the eyes, nose, mouth, or eyebrows, as shown by rectangular areas 213, 214, and 215, for example. It is preferable that the facial feature part to be set is a part where the facial features appear more prominently.
  • each rectangular area is set in this way, and, as shown in FIG. 11, local patch areas 301, 302, 303, ... corresponding to these rectangular areas are determined. In FIG. 11, the plurality of points (plot points) 311 arranged over the entire face indicate the 3D points of the 3D face dense shape data, and the dark points indicated by reference numeral 312 are the 3D points that make up the local patch areas.
  • the local patch region 302 corresponds to the local patch region P of the cheek portion described above.
  • each extracted local patch region is preferably arranged at a symmetric position on the face.
  • the eye area may be hidden by sunglasses or the like, and the mouth area may not be 3D-measurable due to the influence of wrinkles or the like, so it is desirable that the local patch areas to be extracted include at least parts that are not easily hidden and can be 3D-measured, such as the nose and the cheeks (the forehead is likely to be hidden by the hair).
  • the method of extracting the local patch region is not limited to this.
  • for example, the local patch region can be extracted by preparing in advance a partial model shape (reference model shape) representing, as a reference, the shape that the cheek portion should have, fitting this partial model shape to the 3D face dense shape data, and using the fitted position as the local patch region of the cheek portion.
  • that is, a reference three-dimensional (3D) patch shape (reference patch shape; reference partial model shape) corresponding to the local patch region to be extracted, for example a patch model obtained from average face (standard face) data of the local patch itself, is stored in advance; this patch model is compared with the 3D face dense shape data, for example by comparing the similarity of their shapes, and the region whose shape is most similar (approximate) to the patch model shape is determined as the local patch region.
  • the local patch region extraction method may be a method of determining a region of 3D close-fitting shape data included in a region defined in advance on a two-dimensional image as a local patch region. More specifically, as shown in FIG. 10, a region that can be defined based on the feature point Qj detected by the face part detection unit 13 is defined as a selection region on the two-dimensional image, and this defined two-dimensional The 3D face shape data area of the selected area on the image is determined as the local patch area.
  • in this case, since the region on the 2D image is defined in advance based on the calculation of the face part 3D calculation unit 14, it is not necessary to measure all of the 3D face dense shape data: by searching for corresponding points only within that region and performing 3D reconstruction, the shape of only the local patch region can be measured, and the processing time can be shortened.
  • the local patch region extraction method may also be a method of determining the local patch region by performing an intersection determination with the shape of a standard model calculated from an average face. More specifically, first, a standard model is prepared in advance, the local patch areas to be extracted are defined on this standard model, and the standard model and the local patch areas on the standard model are stored. Next, the 3D position is corrected so that the 3D face part shape data best matches the 3D shape of the standard model. Next, after this position correction, a triangular patch, which is a triangular area on the standard model, is projected onto a triangular patch of the 3D face part shape data about the projection center point of the standard model.
  • this triangular patch of the 3D face part shape data is given as a patch composed of a reference measurement point and its adjacent measurement points. It is then determined whether or not the projected triangular patch of the standard model and the triangular patch of the 3D face part shape data intersect; if they intersect, the triangular patch of the 3D face part shape data is determined as part of the local patch region. There are three cases of this intersection, and if any one of them is satisfied, it is determined that the triangular patches intersect.
  • FIG. 12 is a diagram for explaining the intersection determination.
  • FIG. 12 (A) shows the first case determined to be an intersection, FIG. 12 (B) shows the second case determined to be an intersection, and FIG. 12 (C) shows the third case determined to be an intersection.
  • in FIG. 12, the net pattern represents the standard model, and the hatched pattern represents the measurement data.
  • as another extraction method, the standard model may be projected onto a map image using spherical coordinates: for a point A (r, θ, φ) in spherical coordinates, θ is the angle between OA and the z axis, B is the intersection of the perpendicular dropped from the point A onto the xy plane with the xy plane, and φ is the angle between OB and the x axis; the width and height of the map image give the size of the projected image, and the regions corresponding to the local patch areas are labeled on this map image.
  • the point cloud of 3D face part shape data is processed in the same manner and projected onto the map image. Then, the region on the 3D face part shape data included in the labeled region is determined as the local patch region.
  • the 3D feature quantity extraction unit 19b extracts a 3D face feature quantity from the information of the local patch regions extracted by the 3D local patch extraction unit 19a. More specifically, a curved surface is calculated for each local patch based on the information of the plurality of 3D points in each local patch region. The calculation of the curved surface is executed, for example, by a method using a curvature map. In this case, the local patch area is first normalized. For example, in the case of a rectangular local patch area, the normalization is performed by applying a three-dimensional affine transformation so that the vertices of the rectangular area become the vertices of a predetermined standard rectangular area.
  • by this transformation (three-dimensional affine transformation), the coordinate values indicating the 3D points of the local patch region are matched to the standard coordinate values.
  • the normalized local patch region is then uniformly sampled, and the curvature at each sampling point is used as the shape feature (3D face feature amount) of the local patch region.
  • with this method, the curvature map makes it possible to compare the curvatures of local patch areas on the standard rectangular area.
  • the curvature can be calculated, for example, by using the method disclosed in “Face Identification Using 3D Curvature 3D Shape Feature Extraction 1”, IEICE Transactions Vol. J76-D2 No. 8 (August 1993), pp. 1595-1603.
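As a hedged illustration of the curvature-based feature described above, the following sketch estimates the curvature at one sampling point of a normalized local patch by fitting a local quadric surface; this particular estimator is my own choice, not the method of the cited IEICE paper.

```python
# Hedged sketch of a curvature estimate for one sampling point of a
# normalized local patch: fit a local quadric z = a x^2 + b x y + c y^2 +
# d x + e y + f to the neighbouring 3D points and read off Gaussian and mean
# curvature. This estimator is my own choice, not the cited paper's method.
import numpy as np

def quadric_curvatures(points: np.ndarray):
    """points: (N, 3) 3D points around one sampling point, z along the patch normal."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    denom = 1.0 + d * d + e * e
    K = (4 * a * c - b * b) / denom ** 2                                 # Gaussian curvature
    H = (a * (1 + e * e) - b * d * e + c * (1 + d * d)) / denom ** 1.5   # mean curvature
    return K, H
```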
  • the extraction of the 3D face feature value is not limited to the above method; a method of extracting the 3D face feature value by curved surface approximation may also be used.
  • this curved surface approximation can use various curved surfaces such as a Bezier curved surface, a bicubic curved surface, a rational Bezier curved surface, a B-spline curved surface, or a NURBS (Non-Uniform Rational B-Spline) curved surface. Here, a Bezier curved surface is used as an example.
  • FIG. 13 is a schematic diagram showing an example of a Bezier curved surface in the extraction of the three-dimensional face feature amount.
  • the Bezier curved surface is a curved surface F (Bezier curved surface F) defined by control points P arranged in a grid as P00, P01, ....
  • the control points P define the four corner points and the rough shape of the curved surface F.
  • a Bezier surface is a polynomial surface defined on the parameter region u ∈ [0, 1], v ∈ [0, 1]. A surface of degree n in u and degree m in v is called an n × m-degree surface, and is expressed by (n + 1) × (m + 1) control points.
  • such a Bezier surface is given by the following equation (5):
    F(u, v) = Σ (i = 0 to n) Σ (j = 0 to m) B_i^n(u) B_j^m(v) P_ij ... (5)
    where B_i^n(u) and B_j^m(v) are the Bernstein basis polynomials.
  • the shape information (curved surface information) of the approximated Bezier curved surface F is obtained as the patch shape information of the local patch region.
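As a minimal sketch of equation (5) (only the evaluation is shown; fitting the control points to the 3D points of a local patch, for example by linear least squares, then yields the curved surface information used as patch shape information), the degrees and the use of NumPy are illustrative assumptions.

```python
# Hedged sketch of equation (5): evaluating a Bezier surface from its control
# points with Bernstein basis polynomials.
import numpy as np
from math import comb

def bernstein(n: int, i: int, t: float) -> float:
    return comb(n, i) * t ** i * (1.0 - t) ** (n - i)

def bezier_surface(ctrl: np.ndarray, u: float, v: float) -> np.ndarray:
    """ctrl: (n+1, m+1, 3) control points P_ij; u, v in [0, 1]."""
    n, m = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    point = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            point += bernstein(n, i, u) * bernstein(m, j, v) * ctrl[i, j]
    return point
```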
  • the collection of the patch shape information for each local patch area of the face, that is, a 3D feature vector, constitutes the 3D face feature value.
  • the present invention is not limited to this; information on the relative positional relationship between the local patch regions (or between the pieces of patch shape information), that is, their mutual distances, inclinations, and so on, may further be added to the total 3D face feature information. In this case, since it becomes possible to obtain “global shape information” indicating the overall characteristics of the face, the 3D face feature amount becomes even more suitable for personal authentication.
  • the local patch region from which the 3D face feature value is extracted is preferably a three-dimensional local patch region including at least a part other than a facial feature part (the eyes, eyebrows, nose, mouth, etc.).
  • that is, the 3D face feature amount is preferably extracted from a local patch region including a relatively flat part with little change in surface irregularities, such as the “forehead” or the “cheek”, where features are difficult to capture with the 2D feature quantity (feature parts, 2D image).
  • since the 3D face feature quantity can be handled as a 3D feature vector (vector quantity) in this way, the calculated 3D face feature quantity (3D feature vector), or the comparison feature quantity prepared in advance (described later), that is, the comparison 3D feature vector (comparison vector quantity) corresponding to the 3D feature vector of the 3D face feature amount, can be registered (stored) in the storage unit 3 of the controller 10. Compared with registering, for example, the above-mentioned 3D face dense shape data itself, registering 3D feature vectors requires less registration data; in other words, data handling is improved, and, for example, the memory capacity can be reduced.
  • the similarity calculation unit 20 compares the facial feature values of the comparison target person registered in advance (referred to as comparison feature values) with the facial feature values of the authentication target person HM calculated above, that is, the 2D face feature values (2D feature vectors) and the 3D face feature values (3D feature vectors), and evaluates their similarity. More specifically, the similarity calculation unit 20 performs similarity calculations based on the comparison feature quantity, the 2D face feature quantity, and the 3D face feature quantity, calculates a two-dimensional similarity (2D similarity) and a three-dimensional similarity (3D similarity), and then calculates a multiple similarity using these 2D and 3D similarities. First, the calculation of the 2D similarity is explained.
  • the 2D similarity L between the authentication target person HM and the comparison target person is given as the average of the sum of the similarities S(Ji, Ji') of the feature vectors extracted (generated) by the 2D feature quantity extraction unit 17b.
  • assuming that the calculated feature quantity of a feature vector is G and the registered feature quantity is G', the similarity S(G, G') is expressed by the following equation (8).
  • This equation (8) has a form in which the correlation of amplitude is weighted by the similarity of the phase angle.
  • in equation (8), N is the number of complex Gabor filters, the symbol “a” represents the amplitude, and the symbol “φ” represents the phase.
  • the k vector is the j-th vector having the direction of the two-dimensional wave and a magnitude equal to the frequency, and is given by the following equation (9).
  • the calculation of the 2D similarity can be performed by the Euclidean distance as in the 3D similarity calculation described later.
  • the multiple similarity, which is the overall similarity between the authentication target person HM and the comparison target person, is calculated as a weighted sum of the 2D similarity and the 3D similarity.
  • the multiple similarity is indicated by Re.
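For illustration only, here is a hedged sketch of such a fused score: a Gabor-jet 2D similarity in the spirit of equation (8) (amplitude correlation weighted by phase agreement), a 3D similarity taken as the Euclidean distance between 3D feature vectors, and their weighted sum Re; the weights and the conversion of the jet similarity into a distance are my own assumptions.

```python
# Hedged sketch of a fused score Re: Gabor-jet 2D similarity, 3D Euclidean
# distance, and their weighted sum. Weights and the 1 - similarity conversion
# are assumptions, not values from the patent.
import numpy as np

def jet_similarity(a, phi, a_reg, phi_reg) -> float:
    # Amplitude correlation weighted by the agreement of the phase angles.
    num = np.sum(a * a_reg * np.cos(phi - phi_reg))
    return float(num / (np.sqrt(np.sum(a ** 2) * np.sum(a_reg ** 2)) + 1e-12))

def multiple_similarity(jets, jets_reg, f3d, f3d_reg, w2d=0.5, w3d=0.5) -> float:
    """jets: list of (amplitude, phase) arrays per feature point; f3d: 3D feature vector."""
    sims = [jet_similarity(a, p, ar, pr) for (a, p), (ar, pr) in zip(jets, jets_reg)]
    d2 = 1.0 - float(np.mean(sims))                                     # 2D dissimilarity
    d3 = float(np.linalg.norm(np.asarray(f3d) - np.asarray(f3d_reg)))   # 3D distance
    return w2d * d2 + w3d * d3                                          # smaller Re = more alike
```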
  • the registration data storage unit 21 stores information on face feature amounts (comparison feature amounts and comparison face feature amounts) of a comparison target prepared in advance.
  • the determination unit 22 performs authentication determination based on the multiple similarity Re.
  • the method differs between the case of face verification (Verification) and the case of face identification (Identification) as shown in (a) and (b) below.
  • (a) In face verification, it is determined whether the input face (the face of the person to be authenticated HM) is that of a specific registrant.
  • in face verification, the similarity between the face feature amount of the specific registrant, that is, the comparison target person (the comparison feature amount), and the face feature amount of the authentication target person HM is compared with a predetermined threshold value to determine the identity of the authentication target person HM with the comparison target person. More specifically, when the multiple similarity Re is smaller than a predetermined threshold TH1, it is determined that the authentication target person HM is the same person as the comparison target person.
  • information on the threshold TH1 is stored in the determination unit 22.
  • the information on the threshold TH1 in this case may be stored in the registration data storage unit 21.
  • (b) Face identification is to determine to whom the input face belongs. In this face identification, the similarities between the face feature amounts of all registered persons (comparison target persons) and the face feature amount of the person HM to be authenticated are calculated, and the identity of the person HM to be authenticated with each comparison target person is determined. Then, the comparison target person having the highest identity among the plurality of comparison target persons is determined to be the same person as the authentication target person HM. More specifically, the comparison target person corresponding to the minimum multiple similarity Re (Remin) among the multiple similarities Re between the authentication target person HM and the plurality of comparison target persons is determined to be the same person as the authentication target person HM.
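A minimal sketch of these two decision modes (assuming, as above, that the multiple similarity Re behaves like a distance, smaller meaning more alike, and using a hypothetical threshold TH1):

```python
# Hedged sketch of the two decision modes, assuming Re behaves like a
# distance (smaller = more alike) and a hypothetical threshold TH1.
def verify(re: float, th1: float) -> bool:
    # Face verification: the person is accepted when Re is below TH1.
    return re < th1

def identify(re_per_registrant: dict) -> str:
    # Face identification: the registrant with the minimum Re is judged to be
    # the same person as the person to be authenticated.
    return min(re_per_registrant, key=re_per_registrant.get)
```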
  • FIG. 14 is a flowchart showing an example of the face authentication operation according to the present embodiment.
  • the face image of the person HM to be authenticated is acquired by photographing with the cameras CA1 and CA2 (step Sl).
  • the two face images obtained by the photographing are input to the controller 10 (image input unit 11) (step S2).
  • the face area detection unit 12 detects a face area image from each face image input to the image input unit 11 (step S3).
  • the face part detection unit 13 detects the facial feature part, that is, the coordinates of the feature point and the texture information of the feature area (step S4).
  • the face part 3D calculation unit 14 calculates the three-dimensional coordinates (3D face part shape data) of each feature part from the coordinates (feature point coordinates) of the feature parts of the face detected by the face part detection unit 13 (step S5). Further, the posture / light source correction unit 15 performs the posture variation correction and the light source variation correction on the texture information detected by the face part detection unit 13 (step S6). The 2D authentication unit 17 then calculates a 2D face feature amount from the corrected texture image of each feature region that has undergone the posture variation correction and the light source variation correction (step S7).
  • the face area 3D calculation unit 18 calculates 3D face dense shape data composed of a plurality of 3D points from the face area images (stereo images) detected by the face area detection unit 12 (step S8).
  • the 3D local patch extraction unit 19a calculates three-dimensional local patch regions from the 3D face dense shape data calculated by the face area 3D calculation unit 18 and the 3D face part shape data calculated by the face part 3D calculation unit 14 in step S5 (step S9).
  • the 3D feature quantity extraction unit 19b calculates a 3D face feature quantity from the information of the local patch regions calculated by the 3D local patch extraction unit 19a (step S10).
  • the similarity calculation unit 20 compares the face feature amounts (comparison feature amounts) of the comparison target person registered in advance with the local 2D face feature amounts and 3D face feature amounts calculated in steps S7 and S10: the 2D similarity and the 3D similarity are calculated based on the comparison feature quantity, the 2D face feature quantity, and the 3D face feature quantity, and the multiple similarity is calculated from them (step S11). Then, based on the multiple similarity, the determination unit 22 performs the authentication determination for face verification or face identification (step S12).
  • FIG. 15 is a flowchart showing an example of the operation in step S9 in FIG.
  • the 3D local patch extraction unit 19a first sets (calculates) the local patch extraction plane T from the feature points (3D coordinates; 3D face part shape data) of each feature part calculated by the face part 3D calculation unit 14 (step S21).
  • a rectangular area S (a partial area described later) is set on the set local patch extraction plane T (step S22).
  • next, the local patch area P corresponding to the rectangular area S is set; that is, among the plurality of 3D points constituting the 3D face dense shape data, the 3D points whose perpendiculars, dropped perpendicularly onto the local patch extraction plane T, fall within the rectangular area S are identified, and the region composed of the identified 3D points is set as the local patch region P (step S23).
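A minimal sketch of steps S21 to S23 under stated assumptions: the plane T is spanned by the vectors ca and cb from three feature points a, b, c, the region S is the parallelogram spanned by those vectors (the coefficient range [0, 1] is my assumption), and the dense 3D points whose perpendicular feet on T fall inside S form the local patch region P.

```python
# Hedged sketch of steps S21-S23: plane T spanned by vectors ca and cb from
# three feature points a, b, c; region S is the parallelogram they span
# (the [0, 1] coefficient range is an assumption); dense 3D points whose
# perpendicular foot on T falls inside S form the local patch region P.
import numpy as np

def extract_local_patch(dense_points: np.ndarray, a, b, c) -> np.ndarray:
    """dense_points: (N, 3) 3D points of the dense face shape; a, b, c: feature points."""
    a, b, c = map(np.asarray, (a, b, c))
    ca, cb = a - c, b - c                          # vectors spanning region S
    normal = np.cross(ca, cb)
    normal /= np.linalg.norm(normal)
    rel = dense_points - c
    # Perpendicular foot of each 3D point on plane T, relative to c.
    in_plane = rel - np.outer(rel @ normal, normal)
    basis = np.column_stack([ca, cb])              # 3x2 basis of the plane
    coeffs, *_ = np.linalg.lstsq(basis, in_plane.T, rcond=None)
    s_coef, t_coef = coeffs                        # coefficients of ca and cb
    inside = (s_coef >= 0) & (s_coef <= 1) & (t_coef >= 0) & (t_coef <= 1)
    return dense_points[inside]
```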
  • as described above, in the authentication system 1 of the present embodiment, the three-dimensional shape acquisition unit (the face area detection unit 12 and the face area 3D calculation unit 18) acquires the entire 3D shape information (the overall 3D shape; the 3D face dense shape data) of the face of the authentication target person, and the local region determination unit (the 3D local patch extraction unit 19a) determines, from the 3D shape information acquired by the 3D shape acquisition unit, a plurality of 3D local regions (local patch regions) that are local regions in the overall 3D shape.
  • the 3D feature amount calculation unit (the 3D feature quantity extraction unit 19b) calculates, from the local 3D shape information in each 3D local region determined by the local region determination unit, a 3D face feature value that is local region shape information relating to the shape of the 3D local region and is a three-dimensional feature value of the face. Then, in order to perform the authentication operation for the authentication target person HM, the feature amount comparison unit (the similarity calculation unit 20 and the determination unit 22) compares the 3D face feature amount calculated by the 3D feature amount calculation unit with the comparison face feature amount prepared in advance.
  • similarly, in the authentication method of the present embodiment, in the first step, information on the entire 3D shape, which is the overall 3D shape of the face of the person to be authenticated, is acquired, and in the second step, a plurality of 3D local regions that are local regions in the entire 3D shape are determined from the entire 3D shape information.
  • next, a 3D face feature amount, which is local region shape information related to the shape of the 3D local region and is a three-dimensional feature amount of the face, is calculated.
  • then, the 3D face feature value is compared with the comparison face feature value prepared in advance to perform the authentication operation for the person to be authenticated HM.
  • as described above, a plurality of 3D local regions is determined from the entire 3D shape of the face of the person HM to be authenticated, the 3D face feature amount is calculated from the local 3D shape information in the 3D local regions, and the authentication operation for the person to be authenticated is performed by comparing the 3D face feature quantity with the comparison face feature quantity. Therefore, instead of using the information on the entire 3D shape of the face as it is, multiple local areas (3D local areas) are extracted from the entire 3D shape of the face and the authentication is based on these extracted 3D local areas, so that the decrease in authentication accuracy due to partial hiding of the face can be reduced and the authentication speed can be improved.
  • the 3D shape acquisition unit includes a 2D image acquisition unit (the cameras CA1 and CA2) that acquires 2D images of the face, and the feature part extraction unit (the face part detection unit 13) extracts a feature part, which is a characteristic part of the face, from the 2D image acquired by the two-dimensional image acquisition unit. The 3D coordinate calculation unit (the face part 3D calculation unit 14) calculates the 3D coordinates of the feature part extracted by the feature part extraction unit, and the local region determination unit determines the 3D local area based on the 3D coordinates of the feature part calculated by the 3D coordinate calculation unit.
  • the first step is a step including a fifth step of acquiring a 2D image of a face, and in the sixth step, a feature portion that is a characteristic portion of the face from the 2D image Is extracted.
  • the third step the 3D coordinates of the characteristic part are calculated.
  • a 3D local region is determined based on the 3D coordinates of the feature part.
  • the characteristic part that is a characteristic part of the face is extracted from the 2D image, the 3D coordinate of the characteristic part is calculated, and the 3D local region is calculated based on the 3D coordinate. Therefore, when determining a 3D local area, it can be associated with the information of the two-dimensional feature part, and it is possible to perform high-accuracy authentication using the feature part information together with the 3D local area information. It becomes.
The local region determination unit sets a partial region of a predetermined shape (for example, the rectangular area S) in a plane determined from the 3D coordinates (the local patch extraction plane T), and determines the region corresponding to the partial region in the overall 3D shape as the 3D local region. Since a partial region of a predetermined shape is set in the plane determined from the 3D coordinates of the feature part and the corresponding region of the overall 3D shape is determined as the 3D local region, the 3D local region can easily be determined from the 3D coordinates of the feature part.

Further, the entire 3D shape information is face shape data composed of a plurality of 3D points (Q), and the local region determination unit determines, as the 3D local region (local patch region P), the region composed of the 3D points whose perpendicular lines, dropped virtually and perpendicularly from the 3D points onto the plane, are included in the partial region. With this configuration, the 3D local region corresponding to the partial region can easily be determined.
Alternatively, the local region determination unit may compare the overall 3D shape with a reference three-dimensional partial model shape (reference 3D partial model shape; reference patch) prepared in advance, and determine, as the 3D local region, the portion of the overall 3D shape whose shape is most similar to the reference 3D partial model shape. In this case, the 3D local region in the overall 3D shape can easily be determined without requiring a configuration and operation for acquiring a 2D image and extracting a feature part (2D face feature amount) from the 2D image.
The 3D feature amount calculation unit calculates, as the local region shape information, information obtained by converting the local 3D shape information in the 3D local region into predetermined curved surface information (for example, by a method using a Bezier curved surface). In other words, the 3D shape information is not used as it is but is converted so that it can be treated as curved surface information (for example, curvature); therefore, dimensional compression is possible and the processing speed is increased.
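As one way to picture this conversion to curved surface information, the sketch below fits a simple quadratic surface z = f(x, y) to the points of a local patch and derives the Gaussian and mean curvature at the patch center as a compact feature. The quadratic fit and the function name are illustrative assumptions, not the patent's specific Bezier-surface formulation.

```python
import numpy as np

def patch_curvature_feature(points):
    """Fit z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f to a local patch
    (points given in a patch-local frame) and return (Gaussian K, mean H)
    curvature at the patch origin as a two-dimensional feature."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # First/second derivatives of the graph z = f(x, y) at (0, 0)
    fx, fy = d, e
    fxx, fxy, fyy = 2 * a, b, 2 * c
    denom = 1 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / denom**2                                   # Gaussian curvature
    H = ((1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy) / (2 * denom**1.5)  # mean curvature
    return np.array([K, H])
```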
The 3D feature amount calculation unit calculates, as the 3D face feature amount, a feature amount including information on the relative positional relationship of the 3D local regions. Since the 3D face feature amount includes this relative positional information, it can represent not only the individual features of each 3D local region but also characteristics over the entire face (global shape information of the face can be obtained), so that more accurate authentication can be performed.
The local region determination unit determines the 3D local regions in the overall 3D shape so that the plurality of 3D local regions are arranged at positions that are left-right symmetrical with respect to the face. Since the 3D local regions are placed at symmetrical positions on the face, the 3D local regions (their positions) can be determined efficiently in the overall 3D shape, the processing time is shortened, and data handling is improved.
The local region determination unit determines the plurality of 3D local regions in the overall 3D shape so that at least the nose and cheek regions of the face are included. Since at least the nose and cheek are included, the 3D local regions can be set while avoiding parts that are hidden by hair (for example, the forehead) or that are difficult to measure (for example, the mouth of a person with a mustache), so that the 3D face feature amount can be calculated with high accuracy from these 3D local regions and highly accurate authentication can be performed.
Further, a 2D face feature amount, which is a two-dimensional feature amount of the face, is calculated by the two-dimensional feature amount calculation unit (2D feature amount extraction unit 17b) from the feature part information extracted by the feature part extraction unit. The feature amount comparison unit then compares, with the comparison face feature amount, a total face feature amount obtained by combining the 2D face feature amount calculated by the 2D feature amount calculation unit and the 3D face feature amount calculated by the 3D feature amount calculation unit, for example by a weighted sum (multiple similarity). Since the total face feature amount combining the 2D and 3D face feature amounts is compared with the comparison face feature amount, more accurate authentication using both the 2D and 3D face feature amounts can be performed.
The 3D feature amount calculation unit calculates the 3D face feature amount from the local 3D shape information in 3D local regions that include at least parts other than the facial feature parts. Therefore, in authentication using both the 2D and 3D face feature amounts (multiple authentication), features of parts that are difficult to extract as 2D face feature amounts can be included as 3D face feature amounts; that is, feature amounts that cannot be covered by the 2D face feature amounts can be covered by the 3D face feature amounts, and as a result more accurate authentication can be performed.
The information on the feature parts used to calculate the 2D face feature amount is texture information, and the correction unit performs, on this texture information, posture variation correction, which is correction relating to the posture of the face, and light source variation correction, which is correction relating to the direction of the light source with respect to the face. In this way, both corrections are applied to the texture information of the feature parts from which the 2D face feature amount is calculated.
In the 3D shape acquisition unit, 2D images of the face are captured by at least two imaging devices (cameras CA1 and CA2), and the 3D shape calculation unit calculates the overall 3D shape by performing a corresponding point search on the two 2D images obtained from the imaging devices by computation using the phase-only correlation method and then performing 3D reconstruction. Since the entire 3D shape is calculated from the two 2D images obtained from at least two imaging devices by computation using the phase-only correlation method, the cost can be kept low without using an expensive 3D measuring device or the like, and the entire 3D shape can be calculated with high accuracy.
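For reference, phase-only correlation between two image patches can be sketched as below: the cross-power spectrum is normalized to unit magnitude so that only phase information remains, and the peak of its inverse transform gives the translational displacement used in corresponding point search. This is a generic illustration of the technique named in the text, not the embodiment's actual implementation.

```python
import numpy as np

def phase_only_correlation(patch_a, patch_b):
    """Return the POC surface and the integer displacement of patch_b
    relative to patch_a (both 2D arrays of identical shape)."""
    fa = np.fft.fft2(patch_a)
    fb = np.fft.fft2(patch_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12            # keep only phase information
    poc = np.real(np.fft.ifft2(cross))        # correlation surface; sharp peak at the shift
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    # wrap indices to signed displacements
    shift = tuple(p - s if p > s // 2 else p for p, s in zip(peak, poc.shape))
    return poc, shift
```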
The 3D face feature amount calculated by the three-dimensional feature amount calculation unit is a vector amount (3D feature vector), and the storage unit (storage unit 3) stores a comparison vector amount (comparison 3D feature vector) as the corresponding comparison face feature amount. In other words, the data stored as the comparison face feature amount is not the measured, so-called dense 3D shape data (3D face shape data) but a vector amount; therefore, the amount of data to be stored can be reduced (less memory capacity is required) and the data becomes easy to handle.
In the above embodiment, the multiple similarity is calculated based on the 2D face feature amount and the 3D face feature amount, and face verification or face identification is performed based on the multiple similarity. However, the system may also be configured so that the similarity is calculated based on the local area shape information and the global area shape information, and the authentication determination for face verification or face identification is performed based on that similarity.
FIG. 16 is a functional block diagram for explaining the face authentication function provided in another controller, and FIG. 17 is a flowchart showing an example of the operation of the authentication system shown in FIG. 16. The authentication system of this embodiment differs from the authentication system 1 shown in FIGS. 1 to 3 in that it includes the controller 30 shown in FIG. 16 instead of the controller 10. Therefore, the description of the schematic configuration of the authentication system shown in FIG. 1 and of the overall configuration of the controller shown in FIG. 2 is omitted, and the functional blocks of the controller 30 are described below.
The controller 30 functionally includes an image input unit 31, a face area detection unit 32, a face part detection unit 33, a face part 3D calculation unit 34, a face area 3D calculation unit 35, a 3D local region extraction unit 36, a local region information calculation unit 37, a global region information calculation unit 38, a similarity calculation unit 39, a registration data storage unit 40, and a comprehensive determination unit 41. Of these, the image input unit 31 (first and second image input units 31a and 31b), the face area detection unit 32 (first and second face area detection units 32a and 32b), the face part detection unit 33 (first and second face part detection units 33a and 33b), the face part 3D calculation unit 34, and the face area 3D calculation unit 35 are respectively the same as the image input unit 11 (first and second image input units 11a and 11b), the face area detection unit 12 (first and second face area detection units 12a and 12b), the face part detection unit 13 (first and second face part detection units 13a and 13b), the face part 3D calculation unit 14, and the face area 3D calculation unit 18, and their description is therefore omitted.
The 3D local region extraction unit 36 extracts (calculates) three-dimensional local regions from the 3D face dense shape data calculated by the face area 3D calculation unit 35 and the 3D face part shape data (feature parts) calculated by the face part 3D calculation unit 34. That is, like the 3D local patch extraction unit 19a of the three-dimensional authentication unit 19 shown in FIG. 3, the 3D local region extraction unit 36 extracts (calculates) three-dimensional local patch regions from the 3D face shape data and the 3D face part shape data (feature parts). Various methods can be used to extract a three-dimensional local patch region: for example, a method of extracting, as the local patch region, the region of the 3D face dense shape data whose points have perpendicular feet falling within a partial region of a predetermined shape set in a plane; a method of extracting, as the local patch region, the region of the 3D face dense shape data most similar to a reference model shape; a method of determining, as the local patch region, the region of the 3D face shape data included in a region defined in advance on the two-dimensional image; and a method based on the shape of a standard model calculated from an average face.
The local region information calculation unit 37 extracts (calculates) local region shape information from the information of each single 3D local region (local patch region) extracted by the 3D local region extraction unit 36. That is, the local region information calculation unit 37 uses the information of each 3D local region (local patch region) alone to extract a 3D feature amount unique to the facial feature part (a local 3D face feature amount). As the extraction method of the local 3D face feature amount, for example, a method similar to that of the 3D feature amount extraction unit 19b of the three-dimensional authentication unit 19 shown in FIG. 3 can be applied: for example, a method of extracting, as the local 3D face feature amount, the curvatures at a plurality of points on the curved surface of the local patch region, or a method of extracting, as the local 3D face feature amount, the shape information (curved surface information) of a curved surface approximated to the shape of the local patch region. Alternatively, a method may be used in which, using a standard model, registration is performed for each local patch region and the distances between the standard model and the local patch region are then extracted as the local 3D face feature amount. More specifically, a plurality (N) of definition points h defined in advance on the standard local region model used in the 3D local region extraction unit 36 are used.
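A minimal sketch of this distance-based local feature, assuming the standard local model has already been registered (aligned) to the patch: for each predefined model point h, the distance to its nearest patch point is taken, and the vector of these distances serves as the local 3D face feature. Function and variable names are illustrative.

```python
import numpy as np

def local_distance_feature(model_points_h, patch_points):
    """model_points_h : (N, 3) definition points h on the registered standard local model
    patch_points      : (M, 3) points of the measured local patch region
    Returns an N-dimensional feature of nearest-point distances."""
    diffs = model_points_h[:, None, :] - patch_points[None, :, :]   # (N, M, 3) pairwise differences
    dists = np.linalg.norm(diffs, axis=2)                           # (N, M) pairwise distances
    return dists.min(axis=1)                                        # distance to the nearest patch point
```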
The global region information calculation unit 38 extracts (calculates) global region shape information from the information of the three-dimensional local regions (local patch regions) extracted by the 3D local region extraction unit 36. That is, the global region information calculation unit 38 extracts (calculates), from the extracted 3D local region (local patch region) information, a 3D feature amount characteristic of the face as a whole (a global 3D face feature amount). Here, the global region shape information is a term contrasted with the local region shape information, and is a feature amount of the three-dimensional shape of the entire face of the person to be authenticated. The global region shape information is calculated based on, for example, <1> the three-dimensional local regions of the face, <2> the shape of the entire face, or <3> three-dimensional feature points of the face. These cases <1> to <3> are described more specifically below. Examples of calculation methods for obtaining the global 3D face feature amount based on the local patch regions of the face include the following.
In one method, the global 3D face feature amount is calculated based on the normals of the local patch regions. More specifically, the standard model and the local patch region are first aligned by SRT fitting. Next, RT registration aligns the local patch regions to be compared with each other more accurately, and the corresponding points are obtained by recalculation. A normal is then obtained for each of the N corresponding points obtained in this way.
Here, SRT fitting is a process that aligns the feature points of the measurement data with the feature points of the standard model. Specifically, SRT fitting applies an affine transformation to the standard model data, using Equations 14-1 and 14-2, so that the distance energy between the feature points of the standard model and the feature points of the measurement data is minimized, where M is a feature point of the standard model, C is the corresponding feature point of the measurement data, K is the number of feature points, and f(M, C) is the distance energy between the feature points of the standard model and those of the measurement data.
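Equations 14-1 and 14-2 themselves are not reproduced in this text; written from the definitions above, with the scale s, rotation R, and translation t as the S, R, T parameters, the minimized distance energy plausibly takes a form such as

$$ f(M, C) = \sum_{k=1}^{K} \left\| C_k - \left( s\,R\,M_k + t \right) \right\|^2 $$

where \( s\,R\,M_k + t \) is the affine (similarity) transformation applied to the standard model feature points.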
In this SRT fitting, a transformation matrix that minimizes the distance energy is obtained, for example by the least squares method, and the position of the standard model data after the transformation is determined. In addition, the projection center point of the standard model is relocated accordingly.
In the RT registration, a covariance matrix B is calculated using the corresponding point groups S′ and T′; the covariance matrix B is given by Equation 15-1, and the matrix A is given by Equation 15-2, where s′ = (s′x, s′y, s′z) represents the three-dimensional coordinates of a measurement point and t′ is defined in the same way. Eigenvalue decomposition is then performed, for example by the Jacobi method, and the eigenvalues and eigenvectors are calculated.
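As an illustration of this kind of fitting, the sketch below estimates scale, rotation, and translation between two corresponding point sets by a closed-form least-squares (Umeyama-style) solution built on the cross-covariance matrix. It is a stand-in for the SRT fitting and RT registration described above, not the patent's exact procedure, which uses the referenced Equations 14 and 15 and, for example, the Jacobi eigenvalue method.

```python
import numpy as np

def fit_srt(model_pts, meas_pts):
    """Least-squares similarity transform (s, R, t) mapping model_pts onto meas_pts.
    Both inputs are (K, 3) arrays of corresponding feature points."""
    mu_m, mu_c = model_pts.mean(axis=0), meas_pts.mean(axis=0)
    M0, C0 = model_pts - mu_m, meas_pts - mu_c
    B = C0.T @ M0 / len(model_pts)                 # cross-covariance of the point groups
    U, D, Vt = np.linalg.svd(B)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                  # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt                                 # rotation
    s = np.trace(np.diag(D) @ S) * len(model_pts) / (M0 ** 2).sum()  # scale
    t = mu_c - s * R @ mu_m                        # translation
    return s, R, t

def distance_energy(model_pts, meas_pts, s, R, t):
    """Residual distance energy f(M, C) after fitting."""
    return np.sum(np.linalg.norm(meas_pts - (s * (model_pts @ R.T) + t), axis=1) ** 2)
```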
In another method, the global 3D face feature amount is obtained by performing SRT fitting of the standard model using the extracted local patch regions and using the deformation parameter S among the deformation (S), translation (T), and rotation (R) parameters of the SRT fitting. The deformation parameter S is a parameter for deforming the shape of the standard model so that the definition points on the standard model and the feature points on the local patch region fit the shape of the local patch region. If the feature points used for the SRT fitting match the feature points of the face almost exactly, the deformation parameter S is considered to represent the individual, because the face size of the same person (width, height, depth, and so on) does not change. Moreover, the same value of the deformation parameter S is calculated even if shooting conditions such as magnification and exposure change. The deformation parameter S need not use all the local patch regions; for example, a plurality of local patch regions including the nose, which is obtained stably, may be collected and SRT fitting may be performed on them.
In another calculation method, the global 3D face feature amount is given by obtaining, for each of a plurality of points defined in advance on the standard model, the distance to the corresponding measurement point on the local patch region, and calculating the average of those distances. More specifically, the standard model and the local patch region are first aligned by SRT fitting. Next, for the plurality (N) of points H = (h1, h2, ..., hN) defined in advance on the standard model, the corresponding measurement points on the local patch region are obtained, and the distances d(h, s) between the point group H and the corresponding point group S in the local patch region are calculated. The average value of these distances d(h, s) is then obtained as the global 3D face feature amount dist_b (see Equation 12). As long as the SRT fitting makes the alignment approximately correct, corresponding points S′ are obtained for each subject.
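Equation 12 is not reproduced here; from the description, the feature presumably takes the form of an average point-to-point distance, for example

$$ \mathrm{dist}_b = \frac{1}{N} \sum_{i=1}^{N} d(h_i, s_i) $$

where \( h_i \) are the predefined standard-model points and \( s_i \) their corresponding points on the local patch region.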
In another calculation method, the global 3D face feature amount is given by projecting a plurality of predefined points of the standard model onto the local patch region, processing the registered data in the same way to obtain points corresponding to the projected points, calculating the distance for each projected point, and taking the average of those distances. In this case as well, the standard model and the local patch region are first aligned by SRT fitting.
In another calculation method, the global 3D face feature amount is given, for each local patch region, by obtaining the average value of the distances between corresponding points in the local patch regions to be compared (measurement data and registered data). More specifically, the standard model and the local patch region are first aligned by SRT fitting, and the alignment between the local patch regions to be compared is then performed with higher accuracy by RT registration. The global 3D face feature amount is given by calculating, for each local patch region, the average value of the distances between the mutually corresponding points (the N points after alignment) of the local patch regions to be compared.
In another calculation method, the global 3D face feature amount is given, for each local patch region, by obtaining the variance of the distances between the mutually corresponding points of the local patch regions to be compared. More specifically, the standard model and the local patch region are first aligned by SRT fitting, and the alignment between the local patch regions to be compared is then performed with higher accuracy by RT registration. Using the registered point group T = {t_i | i = 1, ..., N} consisting of the N points after alignment and the measurement point group S consisting of the corresponding N points, the variance of the point-to-point distances is calculated.
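A small sketch of these two distance-statistics features (the average and the variance of corresponding-point distances after alignment), assuming the two point groups are already in index-by-index correspondence; the names are illustrative.

```python
import numpy as np

def distance_statistics(registered_t, measured_s):
    """registered_t, measured_s : (N, 3) aligned, index-corresponding point groups.
    Returns (mean distance, variance of distances) as global 3D face features."""
    d = np.linalg.norm(registered_t - measured_s, axis=1)   # per-point distances
    return d.mean(), d.var()
```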
Further, the global region shape information may be compared by treating the data as a dense data group within the local patch regions and as a coarse (sparse) data group in the other regions.
In another calculation method, the global 3D face feature amount is calculated based on lines (feature extraction lines) defined on the local patch regions. More specifically, a line (feature extraction line) is defined on a predetermined local patch region set in advance. The feature extraction line is defined, for example, from a plurality of feature points of the 3D face part shape data, and it is desirable to define it where the 3D shape features of the face that enhance the authentication accuracy appear strongly; for example, the feature extraction line is a line that includes the undulations of facial irregularities, such as a line crossing the nose. The feature extraction line defined on the local patch region is projected onto the 3D face shape data, and the points of the 3D face shape data corresponding to a plurality of points on the feature extraction line are obtained. The feature extraction line may also be extended outside the local patch region from a local patch region that further increases the authentication accuracy.
Alternatively, a feature extraction line may be defined for each of a plurality of local patch regions that further improve the authentication accuracy, a plurality of point groups on the 3D face shape data may be obtained from each feature extraction line, and these point groups may be used as the global 3D face feature amount. The plurality of points on each feature extraction line may be equally spaced or unequally spaced, as shown in the sketch below.
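A minimal sketch of sampling such a feature extraction line, assuming the line is defined between two 3D feature points (for example, across the nose) and each sample is snapped to the nearest point of the dense 3D face shape data; all names are illustrative.

```python
import numpy as np

def sample_feature_extraction_line(face_points, p_start, p_end, num_samples=20):
    """face_points    : (M, 3) dense 3D face shape data
    p_start, p_end    : 3D feature points defining the feature extraction line
    Returns the (num_samples, 3) face-shape points nearest to the line samples."""
    ts = np.linspace(0.0, 1.0, num_samples)
    line_pts = p_start[None, :] * (1 - ts)[:, None] + p_end[None, :] * ts[:, None]
    dists = np.linalg.norm(face_points[None, :, :] - line_pts[:, None, :], axis=2)
    nearest = dists.argmin(axis=1)
    return face_points[nearest]        # point group usable as a global 3D face feature
```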
The similarity calculation unit 39 evaluates the similarity between the feature amounts (comparison feature amounts) of the comparison target person registered in advance and the feature amounts of the person to be authenticated HM calculated above, by calculating similarities based on the local region shape information and the global region shape information. The similarities are calculated as a local information similarity D_sl for the local region shape information and a global information similarity D_sb for the global region shape information, each of which can be obtained by calculating the total Euclidean distance between the corresponding feature amounts (see Equation 10).
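Equation 10 itself is not reproduced here; from the description, each similarity presumably reduces to a summed Euclidean distance between corresponding feature vectors, for example

$$ D_{s} = \sum_{j} \left\| \mathbf{f}_{j}^{\mathrm{HM}} - \mathbf{f}_{j}^{\mathrm{reg}} \right\| $$

evaluated separately over the local feature vectors (giving D_sl) and the global feature vectors (giving D_sb), where \( \mathbf{f}_{j}^{\mathrm{HM}} \) are the feature vectors of the person to be authenticated and \( \mathbf{f}_{j}^{\mathrm{reg}} \) those registered for the comparison target person.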
The registration data storage unit 40, like the registration data storage unit 21 shown in FIG. 3, stores in advance the information on the feature amounts (comparison face feature amounts) of the comparison target persons used by the similarity calculation unit 39 to calculate the local information similarity D_sl and the global information similarity D_sb.
The comprehensive determination unit 41 performs the authentication determination based on these multiple similarities Re. As described above, the authentication determination may be face verification or face identification. The comprehensive determination unit 41 may, for example, first make a determination based on the local information similarity D_sl; if, in that determination result, the difference in similarity is equal to or greater than a threshold, it determines that the person is another person, and only when the difference is less than the threshold does it make a determination based on the global information similarity D_sb, as sketched below.
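The two-stage decision described above might look like the following sketch; the threshold values and the convention that a larger distance means lower similarity are assumptions for illustration.

```python
def comprehensive_decision(d_sl, d_sb, local_threshold=1.0, global_threshold=1.0):
    """Two-stage authentication decision:
    reject immediately when the local-information distance D_sl is already large,
    otherwise fall back to the global-information distance D_sb."""
    if d_sl >= local_threshold:
        return "reject (other person)"      # local similarity difference at or above threshold
    return "accept" if d_sb < global_threshold else "reject (other person)"

print(comprehensive_decision(0.4, 0.7))     # -> accept
```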
Next, the operation of this authentication system is described with reference to FIG. 17. First, face images of the person to be authenticated HM are acquired by photographing with the cameras CA1 and CA2 (step S31). The two face images thus obtained are input to the controller 30 (image input unit 31) (step S32). Next, face area images are detected by the face area detection unit 32 from the face images input to the image input unit 31 (step S33), and from the detected face area images the face part detection unit 33 detects the feature parts of the face, that is, the coordinates of the feature points (step S34). The face part 3D calculation unit 34 then calculates the 3D coordinates (3D face part shape data) of each feature part from the coordinates of the feature parts (feature point coordinates) detected by the face part detection unit 33 (step S35). The face area 3D calculation unit 35 calculates 3D face shape data composed of a plurality of 3D points from the face area images (stereo images) detected by the face area detection unit 32 (step S36). Next, the 3D local region extraction unit 36 calculates three-dimensional local regions (local patch regions) from the 3D face dense shape data calculated by the face area 3D calculation unit 35 and the 3D face part shape data calculated by the face part 3D calculation unit 34 in step S35 (step S37). The local region information calculation unit 37 calculates the local region shape information, in this embodiment the local 3D face feature amount, from the information of each single three-dimensional local region (local patch region) extracted by the 3D local region extraction unit 36 (step S38). The global region information calculation unit 38 calculates the global region shape information, in this embodiment the global 3D face feature amount, from the information of the three-dimensional local regions (local patch regions) extracted by the 3D local region extraction unit 36 (step S39). Then, the similarity calculation unit 39 evaluates the similarity by comparing the feature amounts (comparison feature amounts) of the comparison target person registered in advance with the local region shape information and the global region shape information calculated in steps S38 and S39 (step S40), and based on the multiple similarities Re, the comprehensive determination unit 41 performs the authentication determination for face verification or face identification (step S41).

[0153] When the degree of coincidence of shapes is compared for each local region by a method such as alignment, only the shapes of the individual local regions are compared with each other.
Consequently, when the shape matching accuracy of the local regions is high, the error becomes small even if the relative positional relationship between the local regions differs greatly; the error for other persons therefore also becomes small, resulting in a decrease in authentication accuracy. In contrast, when a plurality of local regions are treated as one global region and the degree of shape coincidence is compared by a method such as alignment, information on the relative positions between the local regions is included in addition to the shape comparison for each local region, so the authentication accuracy is expected to improve. A three-dimensional face authentication method based on the ICP algorithm, such as that of Japanese Patent Application Laid-Open No. 2007-164670, is an effective technique from this point of view; in practice, however, the ICP algorithm is difficult in terms of processing time and feature quantification. In the present embodiment, the shape information of the global region of the face is divided into local regions and separated into the global region shape information and the local region shape information, so that the data amount can be reduced and the processing time can be shortened; and since the global region shape information is also used, authentication can be performed with higher accuracy.
The present embodiment can also take the following aspects. The area set on the local patch extraction plane T does not have to be rectangular like the rectangular area S; as long as it is a partial area on the local patch extraction plane T, its shape may be arbitrary. Likewise, the region of the feature part need not be a rectangle and may have an arbitrary shape.
The method of determining the local patch region from the rectangular area S is not limited to the method of selecting the 3D points whose perpendicular feet, dropped perpendicularly onto the local patch extraction plane T, fall within the rectangular area S; various methods can be employed. For example, instead of dropping a perpendicular from each 3D point onto the local patch extraction plane T, a line may be dropped at a predetermined angle with respect to the plane T. Alternatively, a method may be used in which virtual lines, for example radial lines, are emitted from the rectangular area S in a predetermined direction, and the range of the 3D shape that intersects (contacts) these lines is used as the local patch region.
Furthermore, the authentication system 1 does not have to be separated into the controller 10 and the cameras CA1 and CA2; a configuration in which each camera is built directly into the controller 10 may be employed. In that case, each camera is built in with such an arrangement that the person to be authenticated HM can be photographed from different angles.
An authentication system according to one aspect includes: a local region determination unit that determines a plurality of three-dimensional local regions, which are local regions of the face of the person to be authenticated; a three-dimensional feature amount calculation unit that calculates, from the local three-dimensional shape information in each three-dimensional local region determined by the local region determination unit, a three-dimensional face feature amount that is local region shape information relating to the shape of each three-dimensional local region and is a three-dimensional feature amount of the face; and a feature amount comparison unit that compares the calculated three-dimensional face feature amount with a comparison face feature amount prepared in advance so as to perform the authentication operation for the person to be authenticated. Preferably, the authentication system further includes a three-dimensional shape acquisition unit that acquires information on the entire three-dimensional shape, which is the overall three-dimensional shape of the face of the person to be authenticated, and the local region determination unit determines the plurality of three-dimensional local regions, which are local regions in the entire three-dimensional shape, from the entire three-dimensional shape information acquired by the three-dimensional shape acquisition unit. With this configuration, the three-dimensional face feature amount is calculated from the local three-dimensional shape information in each determined three-dimensional local region and compared with the comparison face feature amount prepared in advance, whereby the authentication operation for the person to be authenticated is performed.
In the above authentication system, the three-dimensional shape acquisition unit preferably includes a two-dimensional image acquisition unit that acquires a two-dimensional image of the face and a feature part extraction unit that extracts a two-dimensional feature part, which is a characteristic part of the face, from the two-dimensional image acquired by the two-dimensional image acquisition unit, and the three-dimensional local regions are determined based on the result of the feature part extraction unit. Preferably, the three-dimensional shape acquisition unit further includes a three-dimensional coordinate calculation unit that calculates the three-dimensional coordinates of the feature part extracted by the feature part extraction unit, and the local region determination unit determines the three-dimensional local regions based on the three-dimensional coordinates of the feature part calculated by the three-dimensional coordinate calculation unit. According to this authentication system, when determining a three-dimensional local region, the region can be associated with the information of the two-dimensional feature part, and high-accuracy authentication using the feature part information together with the three-dimensional local region information can be performed.
The feature part extraction unit may further include a two-dimensional local region extraction unit that extracts a local region on the two-dimensional image from the extracted two-dimensional feature part, and the local region determination unit may determine the three-dimensional local region based on the two-dimensional local region calculated by the two-dimensional local region extraction unit. In this case, the local region determination unit calculates and extracts, as the three-dimensional local region, only the region corresponding to the two-dimensional local region.
In the above authentication system, the local region determination unit preferably sets a partial region of a predetermined shape in a plane determined from the three-dimensional coordinates, and determines the region corresponding to the partial region in the entire three-dimensional shape as the three-dimensional local region. According to this configuration, a three-dimensional local region can easily be determined from the three-dimensional coordinates of a feature part by a simple method.
In the above authentication system, the entire three-dimensional shape information is preferably face shape data composed of a plurality of three-dimensional points, and the local region determination unit determines, as the three-dimensional local region, the region composed of the three-dimensional points whose perpendicular lines, dropped virtually and perpendicularly from the three-dimensional points onto the plane, are included in the partial region. According to this configuration, the three-dimensional local region corresponding to the partial region can easily be determined by a simple method.
In the above authentication system, the local region determination unit may compare the entire three-dimensional shape with a reference three-dimensional partial model shape prepared in advance, and determine, as the three-dimensional local region, the portion of the entire three-dimensional shape whose shape is most similar to the reference three-dimensional partial model shape. [0177] According to this authentication system, a configuration and operation for acquiring a two-dimensional image and extracting a feature part (two-dimensional face feature amount) from the two-dimensional image are not required, and the three-dimensional local region in the entire three-dimensional shape can easily be determined.
In the above authentication system, the local region determination unit may include a same-space conversion unit that converts the entire three-dimensional shape and the local region information defined on a reference three-dimensional partial model shape prepared in advance into the same space, and may determine the three-dimensional local region by comparing the inclusion relationship between the entire three-dimensional shape and the reference three-dimensional partial model shape in the same space converted by the same-space conversion unit. In this case, the three-dimensional local region determination unit may determine the three-dimensional local region by comparing the inclusion relationship between a three-dimensional surface on the reference three-dimensional model and the three-dimensional surface of the entire three-dimensional shape, between a three-dimensional surface on the reference three-dimensional model and the three-dimensional coordinate points of the entire three-dimensional shape, or between three-dimensional coordinate points on the reference three-dimensional model and the three-dimensional surface of the entire three-dimensional shape. Further, the three-dimensional local region determined by the local region determination unit may be kept as dense data, while the regions other than the three-dimensional local region are kept as sparse data. According to this authentication system, the three-dimensional local region in the entire three-dimensional shape can easily be determined.
In the above authentication system, the three-dimensional feature amount calculation unit may calculate the local three-dimensional shape information itself from the three-dimensional local region as the local region shape information; it may convert the local three-dimensional shape information in the three-dimensional local region into predetermined curved surface information and calculate this as the local region shape information; or it may convert the local three-dimensional shape information in the three-dimensional local region into a vector from the distance information between defined points defined on a standard model and the corresponding points in the three-dimensional local region, and calculate this as the local region shape information. According to these configurations, the three-dimensional shape information is not used as it is; the information calculated from the three-dimensional local region (for example, the local three-dimensional shape information) is converted and handled as, for example, curved surface information (for example, curvature), so dimensional compression is possible and the processing speed is increased.
In the above authentication system, the three-dimensional feature amount calculation unit preferably calculates, as the three-dimensional face feature amount, a feature amount including information on the relative positional relationship of the three-dimensional local regions.
In the above authentication system, the local region determination unit preferably determines the three-dimensional local regions in the entire three-dimensional shape so that the plurality of three-dimensional local regions are arranged at positions that are left-right symmetrical with respect to the face.
In the above authentication system, the local region determination unit preferably determines the three-dimensional local regions in the entire three-dimensional shape so that the plurality of three-dimensional local regions include at least the nose and cheek regions of the face. According to this configuration, the three-dimensional local regions can be set while avoiding parts that are hidden by hair (for example, the forehead) or difficult to measure (for example, the mouth of a person with a mustache), so that the three-dimensional face feature amount can be calculated accurately from the three-dimensional local regions and highly accurate authentication can be performed.
The above authentication system preferably further includes a two-dimensional feature amount calculation unit that calculates a two-dimensional face feature amount, which is a two-dimensional feature amount of the face, from the feature part information extracted by the feature part extraction unit, and the feature amount comparison unit compares, with the comparison face feature amount, a total face feature amount obtained by combining the two-dimensional face feature amount calculated by the two-dimensional feature amount calculation unit and the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit. Further, the three-dimensional feature amount calculation unit preferably calculates the three-dimensional face feature amount from the local three-dimensional shape information in three-dimensional local regions that include at least parts other than the facial feature parts.
In the above authentication system, the feature part information for calculating the two-dimensional face feature amount is preferably texture information, and the system further includes a correction unit that performs, on the texture information, posture variation correction, which is correction relating to the posture of the face, and light source variation correction, which is correction relating to the direction of the light source with respect to the face. According to this authentication system, an appropriate two-dimensional face feature amount can be obtained based on the texture information subjected to the posture variation correction and the light source variation correction, so that more accurate authentication can be performed.
In the above authentication system, the three-dimensional shape acquisition unit preferably includes at least two photographing devices that photograph a two-dimensional image of the face, and a three-dimensional shape calculation unit that calculates the entire three-dimensional shape by performing a high-accuracy corresponding point search on the two two-dimensional images obtained from the photographing devices by computation using the phase-only correlation method and then performing three-dimensional reconstruction. According to this authentication system, the entire three-dimensional shape can be calculated with high accuracy by the phase-only correlation method at low cost, without using an expensive three-dimensional measuring apparatus or the like.
In the above authentication system, the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit is preferably a vector amount, and the system further includes a storage unit that stores a comparison vector amount as the comparison face feature amount corresponding to the vector amount. According to this authentication system, the data stored as the comparison face feature amount by the storage unit is the vector amount rather than the measured, so-called dense three-dimensional shape data, so the amount of data can be reduced (less memory capacity is required) and data handling becomes easier.
An authentication system according to another aspect further includes a global three-dimensional feature amount calculation unit that calculates, based on the three-dimensional local regions determined by the local region determination unit, a global three-dimensional face feature amount that is global region shape information relating to the shape of the three-dimensional global region, which is a global region in the entire three-dimensional shape, and that is a three-dimensional feature amount of the face; the feature amount comparison unit compares the global three-dimensional face feature amount calculated by the global three-dimensional feature amount calculation unit with a comparison global face feature amount prepared in advance so as to perform the authentication operation for the person to be authenticated. In other words, the system further includes a global three-dimensional face feature amount calculation unit that calculates, as global information relating to the entire three-dimensional shape, a global three-dimensional face feature amount that is a global three-dimensional feature amount of the face.
In the above authentication system, the global three-dimensional feature amount calculation unit may extract information on the deformation parameters of a standard model calculated based on three-dimensional feature point information defined on the three-dimensional local regions; it may extract distance information between a three-dimensional local standard model calculated based on that three-dimensional feature point information and the three-dimensional local region; or it may extract distance information between the three-dimensional local regions calculated based on that three-dimensional feature point information. In each case, the feature amount comparison unit compares the global three-dimensional face feature amount calculated by the global three-dimensional feature amount calculation unit with the comparison global face feature amount prepared in advance so as to perform the authentication operation for the person to be authenticated.
In the above authentication system, the three-dimensional local regions determined by the local region determination unit may be extracted in a line shape, and the global three-dimensional feature amount calculation unit may calculate, based on the extracted line-shaped three-dimensional local regions, a global three-dimensional face feature amount as a shape vector of the three-dimensional global region, which is a global region in the entire three-dimensional shape; the feature amount comparison unit then compares the calculated global three-dimensional face feature amount with the comparison global face feature amount prepared in advance so as to perform the authentication operation for the person to be authenticated. Since the global three-dimensional face feature amount is also used for face authentication, the authentication accuracy can be further improved. Moreover, the global region shape information can be compressed by so-called data compression techniques, so the data amount can be reduced. Because the global three-dimensional face feature amounts are calculated based on the three-dimensional local regions, global information unique to the three-dimensional local regions can be calculated, and the amount of data in the three-dimensional local regions is smaller than the three-dimensional shape data of the entire face. Furthermore, when the three-dimensional local regions are determined based on the three-dimensional coordinates of the feature parts, it is possible to select, from the entire three-dimensional shape data, only the global information between the feature points of the face parts.
In the above authentication system, the global three-dimensional feature amount calculation unit may calculate, as the global three-dimensional face feature amount, center-of-gravity information relating to the three-dimensional local regions, or normal information relating to the three-dimensional local regions. The feature amount comparison unit may compare the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit with the comparison face feature amount prepared in advance according to the result of comparing the global three-dimensional face feature amount with the comparison global face feature amount. In this case, when the comparison result shows that the global three-dimensional face feature amount and the comparison global face feature amount differ, the comparison between the three-dimensional face feature amount and the comparison face feature amount can be omitted, so the authentication processing time is shortened and authentication can be performed faster.
Alternatively, the feature amount comparison unit may calculate a total comparison result by integrating a global comparison result, obtained by comparing the global three-dimensional face feature amount calculated by the global three-dimensional feature amount calculation unit with the comparison global face feature amount prepared in advance, and a local comparison result, obtained by comparing the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit with the comparison face feature amount, so as to perform the authentication operation for the person to be authenticated. Since authentication is performed based on the total comparison result obtained by integrating the global and local comparison results, the comparison results can interpolate each other, and authentication can be performed with high accuracy.
An authentication method according to another aspect includes: a first step of acquiring information on the entire three-dimensional shape, which is the entire three-dimensional shape of the face of the person to be authenticated; a second step of determining, from the entire three-dimensional shape information, a plurality of three-dimensional local regions that are local regions in the entire three-dimensional shape; a third step of calculating, from the local three-dimensional shape information in each three-dimensional local region, a three-dimensional face feature amount that is local region shape information relating to the shape of each three-dimensional local region and is a three-dimensional feature amount of the face; and a fourth step of comparing the three-dimensional face feature amount with a comparison face feature amount prepared in advance so as to perform the authentication operation for the person to be authenticated. According to this method, a plurality of three-dimensional local regions are determined from the entire three-dimensional shape of the face, three-dimensional face feature amounts are calculated from the local three-dimensional shape information in each three-dimensional local region, and these are compared with the comparison face feature amounts. That is, the information on the entire three-dimensional shape of the face is not used as it is; a plurality of local regions (three-dimensional local regions) are extracted from the three-dimensional shape of the entire face, and authentication is performed based on these extracted three-dimensional local regions, so that even if part of the face is concealed, high-accuracy authentication can be performed.
  • the first step includes a fifth step of acquiring a two-dimensional image of the face; in a sixth step, characteristic parts of the face are extracted from the two-dimensional image; in a seventh step, the three-dimensional coordinates of the characteristic parts are calculated; and in the second step, the determination is made based on the three-dimensional coordinates of the characteristic parts.
  • with this authentication method, when the three-dimensional local regions are determined, they can be associated with the information of two-dimensional feature regions, and that feature region information can be used together with the information of the three-dimensional local regions, so that highly accurate authentication is possible.
  • the method further includes an eighth step of calculating a two-dimensional face feature amount, which is a two-dimensional feature amount of the face, from the information on the characteristic parts, and the fourth step compares a total face feature amount, which is a combination of the two-dimensional face feature amount and the three-dimensional face feature amount, with the comparison face feature amount.
  • in the eighth step, a two-dimensional face feature amount, which is a two-dimensional feature amount of the face, is calculated from the characteristic part information, and in the fourth step the total face feature amount, which combines the two-dimensional face feature amount and the three-dimensional face feature amount, is compared with the comparison face feature amount.
  • the method further includes a ninth step of calculating a global three-dimensional face feature amount, which is global region shape information on the shape of the three-dimensional global region (a global region in the entire three-dimensional shape) and is a three-dimensional feature amount of the face; in the fourth step, the global three-dimensional face feature amount calculated in the ninth step is compared with a comparison-use global face feature amount prepared in advance to perform the authentication operation for the person to be authenticated.
  • in the ninth step, based on the three-dimensional local regions determined in the second step, a global three-dimensional face feature amount, which is global region shape information on the shape of the three-dimensional global region (a global region in the entire three-dimensional shape) and is a three-dimensional feature amount of the face, is calculated; then, in the fourth step, the global three-dimensional face feature amount calculated in the ninth step is compared with a comparison-use global face feature amount prepared in advance to perform the authentication operation for the person to be authenticated.
  • because the global three-dimensional face feature amount is used for face authentication, the authentication accuracy can be further improved.
  • the global region shape information can be compressed by a so-called data compression technique, so the data amount can be reduced.
  • because the global three-dimensional face feature amount is calculated based on the three-dimensional local regions, global information unique to those local regions can be calculated.
  • the amount of data in the three-dimensional local regions is smaller than that of the three-dimensional shape data of the entire face.
  • because the three-dimensional local regions are determined based on the three-dimensional coordinates of the feature parts, only global information between the feature points of the face parts can be selected from the entire three-dimensional shape data.
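One way to picture the integration of a global comparison result with the local comparison results described in the items above is the following minimal sketch. The weighted-average scheme, the acceptance threshold, and the function names are illustrative assumptions, not the formula prescribed by the patent.

from typing import Sequence

def total_comparison_result(global_score: float,
                            local_scores: Sequence[float],
                            global_weight: float = 0.5) -> float:
    """Integrate a global comparison result with per-region local results.

    Both inputs are assumed to be similarity scores in [0, 1]; the weighting
    scheme is illustrative only.
    """
    if not local_scores:
        return global_score
    local_mean = sum(local_scores) / len(local_scores)
    return global_weight * global_score + (1.0 - global_weight) * local_mean

def authenticate(total_score: float, threshold: float = 0.8) -> bool:
    # Accept the person to be authenticated when the integrated score exceeds
    # a predetermined threshold (the threshold value is an assumption).
    return total_score >= threshold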

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Provided are an authentication system and an authentication method in which a plurality of 3D local regions are determined on the face of a person to be authenticated and the 3D feature amounts of the face in the respective 3D local regions are calculated as 3D face feature amounts. The 3D face feature amounts are compared with comparison face feature amounts prepared in advance so as to authenticate the person. This mitigates degradation of the authentication accuracy and improves the authentication speed.

Description

Specification

Authentication system and authentication method

Technical field

[0001] The present invention relates to an authentication system and an authentication method for performing face authentication.

Background art

[0002] In recent years, the networking of information devices has progressed and various electronic services have become widespread. Along with this, the need for non-face-to-face personal authentication (identity authentication) that does not rely on people is increasing. In this regard, research on biometrics authentication technology (biometric authentication technology), which automatically identifies individuals from their biometric characteristics, has been actively conducted. As one such technology, face authentication, which performs authentication using the face, is known. Face authentication is a non-contact authentication method, and demand for it is very high in offices and the like because of its convenience.

[0003] In the face authentication described above, changes in shooting conditions greatly affect the authentication performance. This is because changes in shooting conditions often cause image variations in the face image that exceed the identification information of the subject. The main causes that strongly affect the authentication performance are "posture variation", "light source variation", and "expression change". Since the face has a three-dimensional shape (it has unevenness and depth), a change in the posture of the face, a change in the illumination angle of the light source relative to the face, or a change in facial expression that deforms the face produces partial hiding (shadowed, dark parts) on the face, for example at the steep slopes of the cheeks or nose (this is called occlusion). One face authentication approach is authentication using three-dimensional shape information (3D information), i.e., 3D face authentication. In general, 3D face authentication uses the entire face, so the partial hiding problem arises: the data at the locations where hiding occurs is missing in the authentication processing and the authentication accuracy is lowered, and this problem remains unsolved. In addition, when dense 3D information is used, the authentication processing takes time.

[0004] In this regard, for example, Patent Document 1 discloses the following technique. First, reference points of the face are extracted by examining changes in the curvature of the face surface. The reference points include points where the absolute value of the curvature is maximal (for example, the tip of the nose) and points near the center of the side of the face where the absolute value of the curvature is maximal (for example, the ear hole points). Next, the face orientation (inclination), that is, the face posture, is corrected by calculating a reference posture based on these reference points. The corrected three-dimensional shape data of the face is then approximated by planes of an arbitrary size, and the unit normal vector and area of each plane are obtained. Finally, the normal distribution, in which the size of the unit normal vector is expressed by this area, is used as a feature amount and authentication is performed.

[0005] However, the technique disclosed in Patent Document 1 is premised on using the entire three-dimensional shape, i.e., so-called global patch information. Because the reference direction of the face must be determined, the reference direction cannot be determined when the face is partially hidden due to the posture variation or the like described above (occlusion), and the authentication processing cannot be executed.

[0006] Further, for example, Patent Document 2 discloses the following technique. First, color information is used to extract the three-dimensional shape information and color information of only the face portion of a person, and face data combining the three-dimensional shape information and the color information is obtained. Next, the centroids of the entire three-dimensional shapes of this face data (collation face data) and of dictionary face data prepared in advance are obtained, the data are translated so that the positions of these centroids coincide, and rotated face data is obtained by slightly rotating the data around the matched centroid. Then, a minimum error is obtained by calculating the error between the rotated face data and the dictionary face data, and determination (authentication) is performed based on this minimum error.

[0007] However, with the technique disclosed in Patent Document 2, the processing that obtains the centroid of the entire three-dimensional shape and then finds the minimum error by translation and small rotations around the centroid takes time. When dense three-dimensional shape data is used, the processing time increases considerably and the authentication speed becomes even slower. In addition, when the face is partially hidden (occlusion), the authentication accuracy is lowered. Furthermore, since the measured three-dimensional shape itself must be retained, the storage capacity of the database becomes enormous.
Patent Document 1: Japanese Patent Laid-Open No. 5-215531
Patent Document 2: Japanese Patent Laid-Open No. 9-259271

Disclosure of the invention

[0008] The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an authentication system and an authentication method capable of reducing the decrease in authentication accuracy and improving the authentication speed.

[0009] In the authentication system and the authentication method according to the present invention, a plurality of local three-dimensional regions of the face of the person to be authenticated are determined, the three-dimensional feature amounts of the face in each of these three-dimensional local regions are calculated as three-dimensional face feature amounts, and the three-dimensional face feature amounts are compared with comparison face feature amounts prepared in advance to perform the authentication operation for the person to be authenticated.

[0010] In such an authentication system and authentication method, the information of the entire face is not used as it is; instead, a plurality of local regions of the face are extracted and authentication is performed based on these extracted local regions. Therefore, even if partial hiding or the like occurs on the face, authentication can be performed using the information of local regions that exclude the hidden portion, without necessarily using the portion where the hiding occurred. For this reason, the decrease in authentication accuracy is reduced and the authentication speed can be improved.

Brief Description of Drawings

[0011] FIG. 1 is a schematic configuration diagram showing an example of an authentication system according to an embodiment of the present invention.

FIG. 2 is a schematic diagram showing an example of the overall configuration of a controller in the authentication system.

FIG. 3 is a functional block diagram for explaining the face authentication functions provided in the controller.

FIG. 4 is a schematic diagram showing an example of the coordinates of feature points in each feature part of a face.

FIG. 5 is a schematic diagram for explaining the calculation of the three-dimensional coordinates of each feature part.

FIG. 6 is a schematic diagram showing an example of a standard model.

FIG. 7 is a three-dimensional graph for conceptually explaining the Gabor filter.

FIG. 8 is a schematic diagram for explaining a method of setting rectangular regions from the feature points of 3D face part shape data.

FIG. 9 is a schematic diagram for explaining a method of extracting (determining) local patch regions from the 3D face part shape data using the rectangular region information set in FIG. 8.

FIG. 10 is a schematic diagram for explaining a method of setting rectangular regions from the feature points of 3D face part shape data.

FIG. 11 is a schematic diagram showing an example of the 3D points and local patch regions in 3D face part shape data.

FIG. 12 (A), (B) and (C) are diagrams for explaining the intersection determination.

FIG. 13 is a schematic diagram showing an example of a Bezier surface used in extracting three-dimensional face feature amounts.

FIG. 14 is a flowchart showing an example of the face authentication operation according to the present embodiment.

FIG. 15 is a flowchart showing an example of the operation in step S9 of FIG. 14.

FIG. 16 is a functional block diagram for explaining the face authentication functions provided in another controller.

FIG. 17 is a flowchart showing an example of the operation of the authentication system shown in FIG. 16.

BEST MODE FOR CARRYING OUT THE INVENTION

[0012] Hereinafter, an embodiment of the present invention will be described with reference to the drawings. Components given the same reference numerals in the figures are the same, and their description is not repeated.

[0013] FIG. 1 is a schematic configuration diagram showing an example of an authentication system 1 according to an embodiment of the present invention. FIG. 2 is a schematic diagram showing an example of the overall configuration of the controller 10. FIG. 3 is a functional block diagram for explaining the face authentication functions provided in the controller 10. FIG. 4 is a schematic diagram showing an example of the coordinates of feature points in each feature part of the face.

[0014] As shown in FIG. 1, the authentication system 1 performs personal authentication based on the face (hereinafter referred to as face authentication), and includes a controller 10 and two photographing cameras (two-dimensional cameras; 2D cameras; hereinafter also simply "cameras") CA1 and CA2.

[0015] The cameras CA1 and CA2 are arranged so that they can photograph the face of the person to be authenticated HM from mutually different positions (angles) with respect to the position of the face. When face images of the person to be authenticated HM are captured by the cameras CA1 and CA2, the appearance information of the person HM obtained by this photographing, that is, two face images, is transmitted to the controller 10 via communication lines. The communication method for the image data between the cameras CA1, CA2 and the controller 10 is not limited to a wired method and may be a wireless method. Each face image may include not only the face portion but also a background.

[0016] As shown in FIG. 2, the controller 10 is embodied by an information processing device such as a personal computer (PC) and includes a CPU 2, a storage unit 3, a media drive 4, a display unit 5 such as a liquid crystal display, an input unit 6 such as a keyboard 6a and a mouse 6b (a pointing device), and a communication unit 7 such as a network card. The storage unit 3 includes a plurality of storage media such as a hard disk drive (HDD) 3a and a RAM (semiconductor memory) 3b. The media drive 4 includes a drive device, such as a CD-ROM drive, a DVD drive, a flexible disk drive or a memory card drive, that can read information recorded on a portable storage medium 8 such as a CD-ROM (Compact Disc Read Only Memory), a DVD (Digital Versatile Disk), a flexible disk or a memory card. The information supplied to the controller 10 is not limited to information supplied via the recording medium 8 and may be supplied via a network such as a LAN (Local Area Network) or the Internet. The controller 10 may also be a dedicated controller (main control device) manufactured for this system, as long as it has the functions described below.

[0017] Further, as shown in FIG. 3, the controller 10 includes an image input unit 11, a face region detection unit 12, a face part detection unit 13, a face part 3D calculation unit 14, a posture/light source correction unit 15, a standard model storage unit 16, a two-dimensional authentication unit 17, a face region 3D calculation unit 18, a three-dimensional authentication unit 19, a similarity calculation unit 20, a registered data storage unit 21, and a determination unit 22.

[0018] The image input unit 11 inputs into the controller 10 the face images of the person to be authenticated HM obtained by photographing with the cameras CA1 and CA2. The image input unit 11 includes a first image input unit 11a and a second image input unit 11b corresponding to the cameras CA1 and CA2, and the face images transmitted from the cameras CA1 and CA2 are input to them respectively; a total of two face images are therefore input. The authentication system 1 of this embodiment performs both two-dimensional authentication (2D authentication) and three-dimensional authentication (3D authentication) using the input face images (this is referred to as multiple authentication) and makes a determination based on these results. For this reason, the authentication system 1 of this embodiment requires a 2D image and 3D shape data. As an input device for acquiring the 2D image and the 3D shape data (an input device for 2D images and 3D measurement), a plurality (2 to N) of general 2D cameras (a stereo camera arrangement) can be used. In this case, the three-dimensional shape (3D shape) of the face is calculated from two or more 2D images.

[0019] However, the present invention is not limited to this, and other 3D shape data acquisition methods can be adopted. For example, a three-dimensional measuring device (3D measuring device; 3D camera) such as a non-contact 3D digitizer using the light-section method may be used to acquire the 3D shape data. In this embodiment two cameras CA1 and CA2 are used, so the 3D shape of the face must be calculated from the two 2D images (face images); however, when a 3D measuring device such as the non-contact 3D digitizer described above is used (one camera and one 3D measuring device), the 3D shape data can be acquired directly by the 3D measuring device and does not need to be calculated from 2D images. Furthermore, with a type of 3D measuring device in which the camera for acquiring 3D shape data also serves as the camera for acquiring 2D images, there is no need to prepare a separate camera for 2D image acquisition.

[0020] The face region detection unit 12 detects (specifies and extracts) a face region from the face images input to the image input unit 11. The face region detection unit 12 includes a first face region detection unit 12a and a second face region detection unit 12b corresponding to the first image input unit 11a and the second image input unit 11b of the image input unit 11, and detects a face region (face region image) from the face images transmitted from the first image input unit 11a and the second image input unit 11b, respectively. More specifically, the face region detection unit 12 extracts (cuts out) the region in which the face exists from the face image, for example by performing template matching using a standard face image prepared in advance.

[0021] The face region detection method is not limited to this; the methods shown in 1. to 3. below can be adopted, and other methods may also be used.

[0022] 1. A method in which a window region (rectangular region) of a predetermined size is scanned over the face image, and whether or not the window region contains a region representing a human face is determined by comparing the pixel values in the window region with a predetermined threshold value (for example, Japanese Patent Laid-Open No. 2003-22441 and Japanese Patent Laid-Open No. 8-339445). According to this method, a face detection algorithm that requires neither motion information nor color information can detect the face region from a complex background at high speed and with a high detection rate.

[0023] 2. A method using a so-called neural network, in which images of the face parts of a plurality of people are used for training, the result is stored as a learning dictionary, and face region detection is determined by comparison with a newly input face image (for example, H. Rowley, S. Baluja, and T. Kanade, "Neural Network-Based Face Detection", IEEE Trans. Pattern Anal. Mach. Intell., volume 20, pages 22-38, 1998).

[0024] 3. A method using the detector proposed by Viola et al. (the Viola-Jones detector), in which various classifiers for face region detection are stored and used in stages, that is, face region detection is determined while the number of classifiers used is reduced as the comparison progresses (for example, P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features", in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, Kauai, HI, December 2001). According to this method, the discriminant function for the intricate space separating faces from non-faces can be constructed by combining a plurality of simple discriminant functions based on simple image feature amounts.
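As a minimal sketch of the template-matching approach outlined in paragraph [0020] — extracting the region in which the face exists by matching against a standard face image — the following uses OpenCV; the template image, the similarity threshold, and the function name are illustrative assumptions rather than the patent's prescribed implementation.

import cv2
import numpy as np

def detect_face_region(image_gray: np.ndarray,
                       template_gray: np.ndarray,
                       threshold: float = 0.6):
    """Return the bounding box (x, y, w, h) of the best template match, or None."""
    result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # no region sufficiently similar to the standard face image
    h, w = template_gray.shape[:2]
    x, y = max_loc
    return (x, y, w, h)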

[0025] In the face region detection unit 12, the first face region detection unit 12a and the second face region detection unit 12b may each detect a face region individually, or only one of them may perform the detection. Alternatively, the first face region detection unit 12a and the second face region detection unit 12b may each detect a face region individually and the more accurate detection result may be adopted. Accurate face region detection is also possible by corresponding-point search processing between the face regions. The same applies to the face part detection unit 13.

[0026] The face part detection unit 13 detects (extracts, calculates) characteristic parts of the face (referred to as feature parts) from the image of the face region detected by the face region detection unit 12. Detecting the characteristic parts of the face is called "face part detection". The face part detection unit 13 includes a first face part detection unit 13a and a second face part detection unit 13b corresponding to the first face region detection unit 12a and the second face region detection unit 12b, and detects the positions (coordinates on the image) of the feature parts from the face region images transmitted from the first face region detection unit 12a and the second face region detection unit 12b, respectively. The feature parts of the face include, for example, the eyes (e.g., the pupil centers, the outer and inner corners of the eyes, the upper and lower parts of the pupils), the eyebrows (e.g., both ends and the centers of the eyebrows), the nose (e.g., the edges of the nostrils, the lower center of the nose, or the nostrils), the mouth (e.g., the left and right corners of the mouth, the upper and lower parts of the center of the lips), and the tip of the lower jaw. In the present embodiment, the face part detection unit 13 calculates the coordinates of the feature points Q1 to Q23 of the feature parts as shown in FIG. 4. Feature points Q1, Q3 and Q2, Q4 are the two ends of the left and right eyes; feature points Q7, Q5 and Q8, Q6 are the upper and lower parts of the left and right pupils; feature points Q9, Q13 and Q10, Q14 are the two ends of the left and right eyebrows; feature points Q11 and Q12 are the approximate centers of the left and right eyebrows; feature points Q15, Q16, Q17 and Q18 are the edges of the nostrils; feature point Q19 is the lower center of the nose; feature points Q20 and Q21 are the two corners of the mouth; and feature points Q22 and Q23 are the upper and lower parts of the center of the lips. The feature points to be extracted can be set as appropriate, and their number can be increased or decreased as necessary. The feature parts can be detected by various methods, such as template matching using standard templates of the feature parts.

[0027] The calculated coordinates of the feature points Q1 to Q23 are expressed as two-dimensional coordinates on each of the images input from the cameras CA1 and CA2. For example, for the feature point Q20 corresponding to the right corner of the mouth as viewed from the person to be authenticated HM, the coordinate values of the feature point Q20 are obtained in each of the two images G1 and G2 (see FIG. 5 described later). More specifically, with the corner of the images G1 and G2 as the origin O, the coordinates (x1, y1) of the feature point Q20 on the image G1 and the coordinates (x2, y2) of the feature point Q20 on the image G2 are calculated.

[0028] The face part detection unit 13 also calculates the coordinates of each feature point from the image of the face region, and acquires the luminance values of the pixels inside each region whose vertices are feature points (referred to as a feature region) as the information of that region (referred to as texture information). For example, since two images are input in this embodiment, the face part detection unit 13 calculates, for example, the average luminance of the corresponding pixels in the mutually corresponding feature regions of the two images (images G1 and G2) and uses this average luminance of each pixel as the texture information of the feature region.

[0029] The face part detection method is not limited to the above. For example, a method such as that proposed in Japanese Patent Laid-Open No. 9-102043, "Position detection of elements in an image", may be adopted. Alternatively, for example, a method of detecting face parts from their shapes using auxiliary light, a method using learning with a neural network as described above, or a method using frequency analysis based on the Gabor wavelet transform or an ordinary (non-Gabor) wavelet transform may be adopted.

[0030] The face part 3D calculation unit 14 calculates the three-dimensional coordinates of each feature part from the two-dimensional coordinates of the feature parts of the face detected by the face part detection unit 13. More specifically, based on the two-dimensional coordinates (2D coordinates) Ui(j) of each feature point Qj detected by the face part detection unit 13 in each image Gi (i = 1, ..., N) and the camera parameters Pi (i = 1, ..., N) of the cameras that captured the images Gi, the face part 3D calculation unit 14 calculates the three-dimensional coordinates (3D coordinates) M(j) (j = 1, ..., M) of each feature part, that is, of each feature point Qj, by the principle of triangulation (so-called "three-dimensional reconstruction"). Here, the symbol "N" denotes the number of cameras (N = 2 in this case), and the symbol "M" denotes the number of measurement points or feature points.

The three-dimensional face data obtained by collecting the 3D coordinates M(j) of the feature points Qj is referred to as "3D face part shape data".

[0031] A specific example of a method for calculating the 3D coordinates M(j) will be described below. The world coordinates (X, Y, Z)^T of a point in space and the coordinates (x, y) of that point on an image satisfy the relationship shown in the following equation (1.1).

w (x, y, 1)^T = P (X, Y, Z, 1)^T   ... (1.1)

Here, the symbol "w" in equation (1.1) is a non-zero constant (w ≠ 0), and the symbol "P" denotes the perspective projection matrix (camera parameter Pi).

[0033] For the notation of the above coordinates, vectors with one extra dimension are used, as shown in the following equation (1.2). This notation is called homogeneous coordinates. In homogeneous coordinates, non-zero constant multiples of the vector representing a set of coordinates, such as (wx, wy, w)^T and (x, y, 1)^T above, represent the same point. Let the homogeneous coordinates of a point in space be M = (X, Y, Z, 1)^T, let the homogeneous coordinates of that point in the image be u = (x, y, 1)^T, and let the symbol "≅" (a combination of "=" and "~") denote equality up to a non-zero constant factor. Then the above equation (1.1) is expressed by the following equation (1.3).

u = (x, y, 1)^T ,  M = (X, Y, Z, 1)^T   ... (1.2)

u ≅ P M   ... (1.3)

Here, the perspective projection matrix P is a 3 × 4 matrix. Writing its components as in the following equation (1.4) and eliminating "w" from equation (1.1), the relationship between the space coordinates and the image coordinates becomes as shown in the following equations (1.5) and (1.6).

P = [ P11  P12  P13  P14 ;  P21  P22  P23  P24 ;  P31  P32  P33  P34 ]   ... (1.4)

x = (P11·X + P12·Y + P13·Z + P14) / (P31·X + P32·Y + P33·Z + P34)   ... (1.5)

y = (P21·X + P22·Y + P23·Z + P24) / (P31·X + P32·Y + P33·Z + P34)   ... (1.6)

Note that, because equation (1.3) allows an arbitrary constant factor, the components of P are combinations of the underlying camera parameters and are not independent of one another.

[0036] FIG. 5 is a schematic diagram for explaining the calculation of the three-dimensional coordinates of each feature part.

For example, as shown in FIG. 5, in a system configured with two general cameras having different camera parameters and placed at arbitrary, mutually different positions (referred to as the first camera and the second camera), the world coordinates (X, Y, Z)^T of a point and the coordinates (x1, y1) and (x2, y2) on the images G1 and G2 of the first and second cameras corresponding to that point (world coordinate point) are related by the following equation (1.7), using the respective camera parameters P1 and P2.

u1 ≅ P1 M ,  u2 ≅ P2 M   ... (1.7)

Here, the symbols "u" and "M" in equation (1.7) represent the quantities shown in the following equation (1.8):

u1 = (x1, y1, 1)^T ,  u2 = (x2, y2, 1)^T ,  M = (X, Y, Z, 1)^T   ... (1.8)

[0038] Therefore, when the perspective projection matrices P1 and P2 are known, the coordinates of a feature point in space can be obtained from the pair of its image position coordinates (x1, y1) and (x2, y2) by regarding equations (1.7) and (1.8) as equations in w1, w2, X, Y and Z and solving them; three-dimensional reconstruction can thereby be performed. That is, eliminating w1 and w2 yields relations of the form of equations (1.5) and (1.6) for each camera. Denoting the (i, j) component of P1 by P1_ij and the (i, j) component of P2 by P2_ij and rearranging these relations gives the following equation (1.9). Since equation (1.9) is a system of simultaneous linear equations in X, Y and Z, the coordinates (X, Y, Z) of the feature point in three-dimensional space can be obtained by solving these equations. Note that in equation (1.9), four equations are given for the three unknowns X, Y and Z; this means that the four components (x1, y1), (x2, y2) are not independent. The coordinates in space of the other feature points are calculated in the same way.

(P1_11 − x1·P1_31) X + (P1_12 − x1·P1_32) Y + (P1_13 − x1·P1_33) Z = x1·P1_34 − P1_14
(P1_21 − y1·P1_31) X + (P1_22 − y1·P1_32) Y + (P1_23 − y1·P1_33) Z = y1·P1_34 − P1_24
(P2_11 − x2·P2_31) X + (P2_12 − x2·P2_32) Y + (P2_13 − x2·P2_33) Z = x2·P2_34 − P2_14
(P2_21 − y2·P2_31) X + (P2_22 − y2·P2_32) Y + (P2_23 − y2·P2_33) Z = y2·P2_34 − P2_24   ... (1.9)
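As a minimal sketch of how the linear system implied by equations (1.7)–(1.9) can be solved in practice, the following assumes the 3 × 4 projection matrices P1 and P2 are known from calibration and solves the four equations for (X, Y, Z) in the least-squares sense with NumPy; the function name and inputs are illustrative, not part of the patent.

import numpy as np

def triangulate_point(P1: np.ndarray, P2: np.ndarray,
                      pt1: tuple, pt2: tuple) -> np.ndarray:
    """P1, P2: 3x4 projection matrices; pt1 = (x1, y1), pt2 = (x2, y2)."""
    x1, y1 = pt1
    x2, y2 = pt2
    # Each row corresponds to one equation of (1.9): (P_row1 - x * P_row3) . M = 0, etc.
    A = np.array([
        P1[0] - x1 * P1[2],
        P1[1] - y1 * P1[2],
        P2[0] - x2 * P2[2],
        P2[1] - y2 * P2[2],
    ])
    # Split the homogeneous system A (X, Y, Z, 1)^T = 0 into A_xyz (X, Y, Z)^T = -A_4
    A_xyz, b = A[:, :3], -A[:, 3]
    X, *_ = np.linalg.lstsq(A_xyz, b, rcond=None)
    return X  # 3D coordinates (X, Y, Z) of the feature point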

[0039] Returning to FIG. 3, the posture/light source correction unit 15 performs posture variation correction and light source variation correction on the texture information calculated by the face part detection unit 13. The posture variation correction corrects the influence on the texture of differences in the posture, that is, the orientation (inclination), of the face. The light source variation correction corrects the influence on the texture of differences in the direction (inclination) of the light source relative to the face. For the posture variation correction and light source variation correction of the texture information, the posture/light source correction unit 15 uses a standard model (standard three-dimensional model; see FIG. 6 described later), which is a general (standard) face model prepared in advance.

[0040] <Posture variation correction>

(Shape information correction)

When correcting the posture variation of the texture information, the shape of the 3D face part shape data (the 3D coordinates M(j) of the feature points Qj) is corrected first. The posture/light source correction unit 15 corrects the three-dimensional position so that the 3D face part shape data, that is, the 3D shape, best matches the 3D shape of the standard model (the shape of the 3D face part shape data itself does not change). In short, when the face represented by the 3D face part shape data is turned sideways, the posture/light source correction unit 15 performs so-called model fitting with the standard model as a reference and corrects the position so that the sideways face is turned toward the orientation of the standard model face (the reference direction), for example the frontal direction. This position correction is performed based on the posture parameter t (pose parameter) shown in the following equation (2).

[0041] t = (s, φ, θ, ψ, tx, ty, tz)^T   ... (2)

Here, the symbol "s" denotes a scale conversion factor, the symbols "φ, θ, ψ" denote transformation parameters indicating rotational displacement (inclination), and the symbols "tx, ty, tz" denote transformation parameters indicating translational displacement along three orthogonal axes. The superscript "T" denotes transposition.
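The following is a minimal sketch of applying the pose parameter t of equation (2) as a similarity transform (scale, rotation, translation) to the 3D feature points, for example when fitting them to the standard model. The Z-Y-X Euler-angle convention and the function names are assumptions; the patent does not fix a particular rotation parameterization.

import numpy as np

def euler_to_rotation(phi: float, theta: float, psi: float) -> np.ndarray:
    cz, sz = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(psi), np.sin(psi)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def apply_pose(points: np.ndarray, s, phi, theta, psi, tx, ty, tz) -> np.ndarray:
    """points: (N, 3) array of 3D feature point coordinates M(j)."""
    R = euler_to_rotation(phi, theta, psi)
    t = np.array([tx, ty, tz])
    return s * points @ R.T + t  # scaled, rotated and translated points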

[0042] (Texture information correction)

Next, based on the position correction information obtained by correcting the orientation of the 3D face part shape data to the frontal direction as described above, the texture information is corrected so that the two-dimensional texture (2D texture) of each feature region acquired by the face part detection unit 13 faces the frontal direction (reference direction). In this way, the texture information corresponding to the case where the face is photographed from the front (referred to as a frontal texture face image) is reconstructed; that is, a properly normalized texture image is created. By using the frontal texture face image reconstructed in this way, texture information that is not affected by (does not depend on) posture variation, that is, differences in shape, can be handled.

[0043] The texture information correction is not limited to the above method. For example, a method may be adopted in which the texture (texture image) of each feature region acquired by the face part detection unit 13 is pasted (mapped) onto the corresponding region of the standard model (a polygon, described later) so that a frontal texture face image is obtained, as above. This also makes it possible to handle texture information that is not affected by differences in posture. The frontal texture face image obtained by this correction may further be projected onto cylindrical coordinates (a cylindrical surface) arranged around the standard model so that mutual comparison becomes easier. The texture information of the projection image obtained by this projection is not only unaffected by posture variation but also unaffected by changes in face shape due to changes in facial expression and the like; since it is pure facial texture information, it is very useful as information used for personal authentication.

[0044] <Light source variation correction>

(Texture information correction)

In the light source variation correction of the texture information, for example, the luminance information of the texture is corrected. Images captured by a camera generally include the influence of shading that depends on the direction of the light source, and that influence remains in the texture of each feature region of the input images. For this reason, the luminance is corrected for each feature region. More specifically, for example, the luminance is corrected by applying a gradient inside each feature region so that the luminance of each pixel (node) in the feature region acquired by the face part detection unit 13 becomes equal to the luminance of the corresponding pixel of the standard model, that is, by controlling the luminance values with an inclination angle (orientation) parameter.
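One possible reading of this per-region luminance adjustment is sketched below: a linear ramp (a plane a + b·x + c·y) is fitted to the luminance difference between the input patch and the corresponding standard-model patch and then subtracted, so that the patch is "tilted" toward the reference brightness. The plane model and the function name are illustrative assumptions, not the patent's exact procedure.

import numpy as np

def correct_region_luminance(patch: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """patch, reference: 2D arrays of the same shape holding luminance values."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.stack([np.ones(h * w), xs.ravel(), ys.ravel()], axis=1)
    diff = (patch - reference).ravel()
    coeffs, *_ = np.linalg.lstsq(A, diff, rcond=None)  # fitted [a, b, c]
    ramp = (A @ coeffs).reshape(h, w)
    return patch - ramp  # patch with the shading ramp removed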

[0045] The standard model storage unit 16 stores in advance the information of the standard model of the face.

FIG. 6 is a schematic diagram showing an example of the standard model. As shown in FIG. 6, the standard model is composed, for example, of vertex data and polygon data. The vertex data is a collection of the coordinates of the vertices U of the feature parts in the standard model and corresponds one-to-one to the 3D coordinates of the feature points Qj. The polygon data is obtained by dividing the surface of the standard model into minute polygons, for example triangles or quadrangles, and expressing these polygons as numerical data. Each polygon includes the pixel luminance information and the like used in the light source variation correction described above. The standard model may be average face data obtained by averaging the face data of a plurality of people. The vertices of the polygons of the standard model may be composed of intermediate points in addition to the feature points Qj; these intermediate points are calculated by interpolation.
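A minimal sketch of how the standard model of paragraph [0045] could be held in memory — vertex data (one 3D vertex per feature point Qj) plus polygon data (index tuples into the vertex list, with per-polygon luminance information) — is shown below; the class and field names are assumptions for illustration.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Polygon:
    vertex_indices: Tuple[int, ...]                       # e.g. a triangle or quadrangle
    luminance: List[float] = field(default_factory=list)  # used in the light source correction

@dataclass
class StandardModel:
    vertices: List[Tuple[float, float, float]]  # vertex U corresponding to each feature point Qj
    polygons: List[Polygon]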

[0046] Returning to FIG. 3, the two-dimensional authentication unit (2D authentication unit) 17 calculates two-dimensional face feature amounts (2D face feature amounts; local 2D face feature amounts) from the texture information of each feature region that has undergone the posture variation correction and light source variation correction in the posture/light source correction unit 15. The 2D authentication unit 17 includes a corrected image acquisition unit 17a and a 2D feature amount extraction unit 17b. The corrected image acquisition unit 17a acquires the corrected image (referred to as a corrected texture image) obtained by applying the posture variation correction and light source variation correction to the texture image in the posture/light source correction unit 15; that is, the corrected image from the posture/light source correction unit 15 is input to the corrected image acquisition unit 17a.

[0047] The 2D feature amount extraction unit 17b extracts 2D face feature amounts from the corrected texture image acquired by the corrected image acquisition unit 17a. The 2D face feature amounts are extracted by a method using the Gabor wavelet transform, a technique that extracts local grayscale information of an image (such as contour lines in a specific direction) as feature amounts. The Gabor wavelet transform can be used both for the face part detection described above and for extracting the grayscale information here. More specifically, the grayscale information obtained by applying Gabor filters to the corrected texture image, with the 2D coordinate points of the corrected texture image as reference positions, is extracted as the 2D face feature amounts.

[0048] FIG. 7 is a three-dimensional graph for conceptually explaining the Gabor filter. As shown in FIG. 7, the Gabor filter is a spatial filter using a kernel in which a sine function (imaginary part) and a cosine function (real part) are localized by a Gaussian function, and it performs a transform (the Gabor wavelet transform) capable of extracting local grayscale information of an image.

Since filtering with the Gabor filter operates on local information, it has the advantage of being insensitive to illumination variations of the image. The Gabor wavelet transform keeps the shape of the kernel fixed, stretches and shrinks the kernel to produce kernels of various periods, and extracts the feature amounts of the corresponding spatial periods (Gabor feature amounts; here, the grayscale information).

[0049] The feature vector (two-dimensional feature vector; 2D feature vector) representing the feature amounts of these spatial periods is an array of Gabor wavelet coefficients with different sizes and direction characteristics. The Gabor wavelet transform is a function that minimizes the uncertainty of position and frequency, and is expressed by the following equation (3).

ψ(x) = (|k|²/σ²) · exp(−|k|²|x|² / (2σ²)) · [ exp(i k·x) − exp(−σ²/2) ]   ... (3)
[0050] The k vector in equation (3) is a constant vector that determines the wavelength and direction of the wave. The second term inside the brackets is added so that the DC component of the function becomes zero, that is, so that the following equation (4) holds for its Fourier transform, thereby satisfying the wavelet reconstruction condition.

$$\Psi(0) = 0 \qquad (4)$$

[0051] When applied to face images, a method based on this Gabor wavelet transform can extract rich feature information over various orientations and grayscale periods, and it is therefore adopted in high-accuracy face authentication systems.

[0052] The 2D face feature amounts can be calculated by convolving the corrected texture image with the Gabor filters shown in FIG. 7. For example, by convolving a bank of Gabor filters with eight orientations {0, π/8, 2π/8, 3π/8, 4π/8, 5π/8, 6π/8, 7π/8} and five scales {4, 4√2, 8, 8√2, 16}, a feature vector of 40 (= 5 * 8, where the symbol "*" denotes multiplication) dimensions (the information of each grayscale period) is obtained as the 2D face feature amount. The extraction of the 2D face feature amounts is not limited to this method based on the Gabor wavelet transform; any other method using general texture information may be used. The orientations and scales are also not limited to eight orientations and five scales and can be set arbitrarily.
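The following is a minimal NumPy sketch, not the patent's implementation, of sampling such a 40-dimensional set of Gabor responses (a "jet") at one feature point of a corrected texture image. The kernel follows equation (3); the choices σ = 2π, the kernel window size, and the interpretation of each scale as a wavelength in pixels are assumptions made here for illustration.

import numpy as np

def gabor_kernel(k, sigma=2 * np.pi, size=33):
    """Complex Gabor wavelet kernel of equation (3) for wave vector k = (kx, ky)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k2 = k[0] ** 2 + k[1] ** 2
    envelope = (k2 / sigma ** 2) * np.exp(-k2 * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * (k[0] * x + k[1] * y)) - np.exp(-sigma ** 2 / 2)  # DC-free term
    return envelope * carrier

def gabor_jet(image, point, scales=(4, 4 * np.sqrt(2), 8, 8 * np.sqrt(2), 16),
              n_orientations=8):
    """Return the complex Gabor coefficients sampled at one pixel (a jet)."""
    px, py = point
    jet = []
    for s in scales:                        # 5 scales
        k_mag = 2 * np.pi / s               # assumption: scale = wavelength in pixels
        for o in range(n_orientations):     # 8 orientations: 0 .. 7*pi/8
            theta = o * np.pi / n_orientations
            kern = gabor_kernel((k_mag * np.cos(theta), k_mag * np.sin(theta)))
            half = kern.shape[0] // 2
            patch = image[py - half:py + half + 1, px - half:px + half + 1]
            jet.append(np.sum(patch * kern))   # inner product = filter response at the point
    return np.asarray(jet)                  # 5 * 8 = 40 complex coefficients

# usage: responses at the corrected 2D coordinate of one facial feature point
img = np.random.rand(128, 128)
jet = gabor_jet(img, (64, 64))
print(jet.shape)   # (40,)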

[0053] Returning to FIG. 3, the face region 3D calculation unit 18 calculates a high-density 3D shape of the face (referred to as 3D dense face shape data) from the image of the face region detected by the face region detection unit 12, that is, in this embodiment, from the stereo image captured by the stereo camera. Here, "high-density data" means data covering the whole face, including parts such as the cheeks and the forehead, rather than only the data of the characteristic parts such as the eyes and the nose detected by the face part detection unit 13 (the 3D coordinates M(j) of the feature points Qj); in other words, it is "dense" data with many data acquisition points, as opposed to "coarse (low-density)" data with few acquisition points. Each of the dense data acquisition points constituting the 3D dense face shape data is called a "three-dimensional point (3D point, or 3D measurement point)". The 3D dense face shape data is face shape data consisting of a plurality of such 3D points.

[0054] The calculation of the high-density 3D shape of the face from the stereo image is performed using, for example, the phase-only correlation method (POC: Phase-Only Correlation). The phase-only correlation method is one of the correlation calculation methods based on the Fourier transform, in which two Fourier-transformed images are normalized spectrum by spectrum and then combined. That is, given two images, the two-dimensional discrete Fourier transform of each image is normalized by its amplitude component, the product of the two is computed to obtain a combined phase spectrum, and the inverse Fourier transform is applied to the result. When the two images are similar, the POC function has an extremely sharp peak. The height of this correlation peak is useful as a measure of the similarity between the images, and the coordinates of the peak correspond to the relative displacement between the two images. Because of these properties, the phase-only correlation method is robust to luminance variation and noise and can obtain corresponding points between images with high accuracy. In other words, the phase-only correlation method performs high-accuracy corresponding point search, that is, matching, between different images. By applying three-dimensional reconstruction processing to the obtained corresponding points, highly accurate 3D dense face shape data is obtained. As described above, this embodiment assumes that a plurality of 2D cameras are used, so the high-density 3D shape is calculated by the phase-only correlation method; when a 3D measuring device is used, a high-density 3D shape can be acquired without calculating it from a plurality of images, and such a method need not be used.
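As a rough illustration of the phase-only correlation just described, the sketch below normalizes the 2D DFTs of two equally sized patches by their magnitudes, combines them, and reads off the peak of the inverse transform; windowing and the sub-pixel refinement used in practice are omitted, and the function names are ours, not the patent's.

import numpy as np

def poc_surface(f, g, eps=1e-12):
    """Phase-only correlation surface between two equally sized patches."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = np.conj(F) * G
    r = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    return np.fft.fftshift(r)

def poc_peak(f, g):
    """Return the displacement of g relative to f and the POC peak height."""
    r = poc_surface(f, g)
    iy, ix = np.unravel_index(np.argmax(r), r.shape)
    cy, cx = np.array(r.shape) // 2
    return (iy - cy, ix - cx), r.max()

# usage: g is f shifted by (3, 5); the recovered displacement should be (3, 5)
f = np.random.rand(64, 64)
g = np.roll(f, shift=(3, 5), axis=(0, 1))
print(poc_peak(f, g))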

[0055] The corresponding point search using POC together with multiple resolutions can be performed, for example, by the following procedure.

First, reduced images are created as multi-resolution images. Second, a corresponding point search is performed on the reduced images at the pixel level. Third, the candidate corresponding points are narrowed down and the reduced image is enlarged by a predetermined factor. Fourth, in the image enlarged by this predetermined factor, a corresponding point search is performed at the pixel level around each candidate. Fifth, the third and fourth steps are repeated until the image reaches the same size as the original image before reduction. Sixth, a corresponding point search at the sub-pixel level is performed at the same size as the original image.
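A compact sketch of this coarse-to-fine search follows. For brevity a sum-of-squared-differences block match stands in for the POC matcher, the number of pyramid levels and window sizes are illustrative, and the final sub-pixel step is omitted; all of these are assumptions, not the patent's code.

import numpy as np

def downsample(img):
    """2x reduction by box averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def block_match(a, b, pa, guess, half=4, search=3):
    """Point in b near `guess` whose patch best matches the patch at pa in a (SSD)."""
    ya, xa = pa
    ref = a[ya - half:ya + half + 1, xa - half:xa + half + 1]
    best, best_err = guess, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yb, xb = guess[0] + dy, guess[1] + dx
            if yb < half or xb < half or yb + half >= b.shape[0] or xb + half >= b.shape[1]:
                continue
            cand = b[yb - half:yb + half + 1, xb - half:xb + half + 1]
            err = np.sum((ref - cand) ** 2)
            if err < best_err:
                best, best_err = (yb, xb), err
    return best

def coarse_to_fine(a, b, pa, levels=3):
    """Pixel-level correspondence of pa from image a into image b, coarse to fine."""
    pyr_a, pyr_b = [a], [b]
    for _ in range(levels - 1):
        pyr_a.append(downsample(pyr_a[-1]))
        pyr_b.append(downsample(pyr_b[-1]))
    scale = 2 ** (levels - 1)
    pt_a = (pa[0] // scale, pa[1] // scale)
    guess = block_match(pyr_a[-1], pyr_b[-1], pt_a, pt_a)       # coarsest level
    for lvl in range(levels - 2, -1, -1):                        # refine level by level
        guess = (guess[0] * 2, guess[1] * 2)
        pt_a = (pa[0] // (2 ** lvl), pa[1] // (2 ** lvl))
        guess = block_match(pyr_a[lvl], pyr_b[lvl], pt_a, guess)
    return guess

# usage: b is a shifted copy of a, so the match of (40, 40) should be near (42, 37)
a = np.random.rand(128, 128)
b = np.roll(a, shift=(2, -3), axis=(0, 1))
print(coarse_to_fine(a, b, (40, 40)))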

[0056] The three-dimensional authentication unit (3D authentication unit) 19 calculates 3D face feature amounts (local 3D face feature amounts) based on the 3D dense face shape data calculated by the face region 3D calculation unit 18 and the 3D face part shape data calculated by the face part 3D calculation unit 14. The 3D authentication unit 19 includes a three-dimensional local patch extraction unit (3D local patch extraction unit) 19a and a three-dimensional feature amount extraction unit (3D feature amount extraction unit) 19b. The 3D local patch extraction unit 19a extracts (calculates) three-dimensional local patch regions from the 3D dense face shape data and the 3D face part shape data (the feature parts). Hereinafter, a three-dimensional local patch region is simply referred to as a "local patch region".

[0057] FIG. 8 is a schematic diagram for explaining a method of setting a rectangular region from feature points of the 3D face part shape data. FIG. 9 is a schematic diagram for explaining a method of extracting (determining) a local patch region using the information of the rectangular region set in FIG. 8. FIG. 10 is a schematic diagram for explaining a method of setting rectangular regions from the feature points of the 3D face part shape data. FIG. 11 is a schematic diagram showing an example of the 3D points and the local patch regions.

[0058] The 3D coordinates M(j) of each feature point Qj of each feature part in the 3D face part shape data (referred to as feature point coordinates) lie on the high-density 3D shape (the 3D dense face shape data). A local patch region is a region of the 3D dense face shape data defined by its relative relationship to the feature point coordinates of the 3D face part shape data. More specifically, as shown in FIG. 8, on the plane T (local patch extraction plane) determined by the three feature points Qj of the right inner eye corner a, the right outer eye corner b, and the right nose wing c, a rectangular region S enclosed by four points defined, for example, as linear sums of the vectors ca and cb is defined as, for example, the right cheek region. Then, as shown in FIG. 9, among the plural 3D points α constituting the 3D dense face shape data, the set of 3D points whose perpendiculars, dropped virtually onto the plane T, have their feet inside the rectangular region S (the region viewed as the aggregate of those 3D points) is taken as the local patch region P of the right cheek portion. The local patch region P, like the local patch region P of the right cheek in this example, is a curved region that roughly matches the actual shape of the face (here the right cheek); it is not the perfectly planar shape defined by the rectangular region S or the local patch extraction plane. FIG. 9 conceptually shows the 3D points when the 3D dense face shape data is viewed looking down from above the face. The local patch extraction plane may also be determined from four or more feature points.
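The sketch below illustrates this selection geometrically: the plane through the three feature points a, b and c is spanned by the vectors ca and cb, a rectangle on that plane is given by ranges of the two coefficients, and a dense 3D point belongs to the patch if the foot of its perpendicular onto the plane falls inside the rectangle. The coefficient ranges and the synthetic coordinates are illustrative assumptions.

import numpy as np

def local_patch(points, a, b, c, s_range=(0.5, 2.0), t_range=(0.5, 2.0)):
    """Return the subset of `points` (N x 3) whose perpendicular foot lies in the rectangle."""
    ca, cb = a - c, b - c                        # in-plane basis (not necessarily orthogonal)
    basis = np.stack([ca, cb], axis=1)           # 3 x 2
    # least-squares coefficients (s, t) of the perpendicular foot of every point onto the plane
    coeffs, *_ = np.linalg.lstsq(basis, (points - c).T, rcond=None)
    s, t = coeffs                                # each of shape (N,)
    inside = (s >= s_range[0]) & (s <= s_range[1]) & (t >= t_range[0]) & (t <= t_range[1])
    return points[inside]

# usage with synthetic data: three feature points and a cloud of dense measurement points
a = np.array([30.0, 10.0, 5.0])    # e.g. right inner eye corner
b = np.array([50.0, 10.0, 4.0])    # right outer eye corner
c = np.array([35.0, -10.0, 8.0])   # right nose wing
cloud = np.random.uniform(-20, 80, size=(2000, 3))
patch = local_patch(cloud, a, b, c)
print(patch.shape)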

[0059] In this way, as shown in FIG. 10, a local patch extraction plane is set from each set of feature point coordinates 201 of the 3D face part shape data, and a predetermined number of rectangular regions, for example the rectangular region 211 (the left cheek portion) and the rectangular region 212 (the forehead portion), are set on these local patch extraction planes. In addition, rectangular regions may be set arbitrarily on regions containing facial feature parts such as the eyes, nose, mouth, and eyebrows, as shown by the rectangular regions 213, 214, and 215. The feature parts to be set are preferably locations where the features of the face appear prominently. With the rectangular regions set in this way, the local patch regions 301, 302, 303, and so on corresponding to these rectangular regions are determined, as shown in FIG. 11. In FIG. 11, the plural points (plotted points) 311 arranged over the whole face indicate the 3D points of the 3D dense face shape data, and in particular the points at the locations indicated by reference numeral 312 (the darker points in the figure) indicate the 3D points constituting the local patch regions. The local patch region 302 corresponds to the local patch region P of the cheek described with reference to FIG. 9.

[0060] Since the face is bilaterally symmetric, the local patch regions to be extracted are preferably arranged at bilaterally symmetric positions on the face. The eye regions may be hidden by sunglasses, and the mouth region may not be measurable in 3D because of a beard or mustache; it is therefore desirable that the extracted local patch regions include at least the nose and the cheeks, which are parts that are unlikely to be hidden or to become unmeasurable in 3D (the forehead is likely to be hidden by hair).

[0061] The method of extracting the local patch regions is not limited to the above. For example, a reference partial model shape (reference model shape) describing what the shape of, say, the right cheek portion should look like may be prepared in advance, the position in the 3D dense face shape data where this partial model shape fits best may be found, and that fitted location may be taken as the local patch region of the right cheek. More specifically, a reference three-dimensional (3D) patch shape corresponding to the local patch region to be extracted (a reference patch shape, or reference partial model shape), that is, a patch model of the local patch itself obtained from, for example, an average face (standard face), is stored in advance; this patch model is compared with the 3D dense face shape data, for example by comparing the similarity of their shapes, and the region of the 3D dense face shape data whose shape is most similar (closest) to the patch model shape is determined as the local patch region.

[0062] As another example, the local patch region extraction method may determine, as a local patch region, the region of the 3D dense face shape data contained in a region defined in advance on the two-dimensional image. More specifically, as shown in FIG. 10, a region that can be defined based on the feature points Qj detected by the face part detection unit 13 is defined as a selected region on the two-dimensional image, and the region of the 3D dense face shape data corresponding to this selected region is determined as the local patch region. In this approach, by defining the region on the two-dimensional image in advance of the computation of the face part 3D calculation unit 14, the shape of only the local patch region can be measured by performing the corresponding point search and three-dimensional reconstruction only within the predefined region of the two-dimensional image, without measuring the entire 3D dense face shape data, which makes it possible to shorten the processing time.

[0063] As yet another example, the local patch regions may be determined by performing an intersection test against the shape of a standard model calculated from an average face. More specifically, a standard model is prepared in advance, the local patch regions to be cut out are defined on it, and the standard model and its local patch regions are stored. Next, the three-dimensional position of the 3D face part shape data is corrected so that it best matches the three-dimensional shape of the standard model. After this position correction, each triangular patch, a triangular region on the standard model, is projected onto the triangular patches of the 3D face part shape data with the projection center point of the standard model as the center. A triangular patch of the 3D face part shape data is given as a patch formed by a reference measurement point and its adjacent measurement points. It is then judged whether the projected triangular patch of the standard model and a triangular patch of the 3D face part shape data intersect, and if they do, that triangular patch of the 3D face part shape data is determined to belong to a local patch region. The intersection occurs in the following three cases; if any one of them is satisfied, the two triangular patches are judged to intersect (a plane-geometry sketch of this test follows the three cases below). FIG. 12 is a diagram for explaining the intersection judgment: FIG. 12(A) shows the first case judged to be an intersection, FIG. 12(B) the second case, and FIG. 12(C) the third case. In FIG. 12, the mesh pattern represents the standard model and the hatched pattern represents the measured data. [0064] 1. The projected triangular patch of the standard model is contained in the triangular patch of the 3D face part shape data (FIG. 12(A)).

2. The projected triangular patch of the standard model contains the triangular patch of the 3D face part shape data (FIG. 12(B)).

3. An edge of the projected triangular patch of the standard model crosses the triangular patch of the 3D face part shape data (FIG. 12(C)).
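The following plane-geometry sketch illustrates the three cases, assuming both triangles have already been projected into a common 2D plane (the projection step itself is omitted). Cases 1 and 2 are containment tests and case 3 is an edge-crossing test; any of them counts as an intersection. This is an illustration, not the patent's implementation.

import numpy as np

def _sign(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def point_in_triangle(p, tri):
    d = [_sign(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return not (any(x > 0 for x in d) and any(x < 0 for x in d))

def segments_cross(p1, p2, q1, q2):
    d1, d2 = _sign(q1, q2, p1), _sign(q1, q2, p2)
    d3, d4 = _sign(p1, p2, q1), _sign(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def triangles_intersect(t1, t2):
    if all(point_in_triangle(p, t2) for p in t1):    # case 1: t1 contained in t2
        return True
    if all(point_in_triangle(p, t1) for p in t2):    # case 2: t2 contained in t1
        return True
    for i in range(3):                               # case 3: any pair of edges crosses
        for j in range(3):
            if segments_cross(t1[i], t1[(i + 1) % 3], t2[j], t2[(j + 1) % 3]):
                return True
    return False

t1 = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
t2 = np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0]])   # contained in t1, so True
print(triangles_intersect(t1, t2))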

[0065] As yet another example, the local patch regions may be determined by performing a region judgment against the shape of a standard model calculated from an average face. More specifically, a standard model is prepared in advance and a map is created from its point cloud. That is, each three-dimensional point in the point cloud of the prepared standard model is converted from the orthogonal coordinate system (x, y, z) to the polar coordinate system (spherical coordinate system) (r, θ, φ). Next, the spherical coordinates (θ, φ) of the converted standard model point cloud are converted into uv plane coordinates by the conversion formula, and the map is created by labeling each local patch region to be cut out, each region being formed by points of the standard model point cloud. The conversion formula is

$$(u, v)^T = \left(\frac{\theta\,(\mathrm{width}-1)}{2\pi \times \mathrm{width}},\ \frac{(\pi/2-\varphi)(\mathrm{height}-1)}{\pi \times \mathrm{height}}\right)^T$$

where θ is the angle between OA and the z axis for a point A(r, θ, φ) in spherical coordinates, φ is the angle between OB and the x axis, B being the intersection of the xy plane with the perpendicular dropped from the point A(r, θ, φ) onto the xy plane, and width and height are the width and the height of the projected map image. Next, the point cloud of the 3D face part shape data is processed in the same way and projected onto the map image, and the regions of the 3D face part shape data contained in the labeled regions are determined as the local patch regions.
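A small sketch of this map projection follows. The angle names from the text are kept, but the normalization here follows the usual equirectangular convention (azimuth across the map width, angle from the z axis down the map height); the patent's exact constants, which are only partly recoverable above, may differ, so treat the mapping as an assumption.

import numpy as np

def to_uv(points, width=256, height=128):
    """Project 3D points to (u, v) pixel coordinates of a width x height map image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))   # angle from the z axis
    phi = np.arctan2(y, x) % (2 * np.pi)                              # azimuth in the xy plane
    u = phi * (width - 1) / (2 * np.pi)
    v = theta * (height - 1) / np.pi
    return np.stack([u, v], axis=1)

def label_points(points, label_map):
    """Look up, for each measured 3D point, the patch label stored in the 2D map image."""
    uv = np.round(to_uv(points, label_map.shape[1], label_map.shape[0])).astype(int)
    return label_map[uv[:, 1], uv[:, 0]]

# usage: a random cloud and a map whose left half is labeled patch 0, right half patch 1
cloud = np.random.randn(500, 3)
label_map = np.zeros((128, 256), dtype=int)
label_map[:, 128:] = 1
print(np.bincount(label_points(cloud, label_map)))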

[0066] Returning to FIG. 3, the 3D feature amount extraction unit 19b extracts 3D face feature amounts from the information of the local patch regions extracted by the 3D local patch extraction unit 19a. More specifically, a curved surface is computed for each local patch based on the information of the plural 3D points in each local patch region. This surface computation is performed, for example, by a method using a curvature map. In that case, the local patch region is first normalized. For example, in the case of a rectangular local patch region, the normalization is performed by a three-dimensional affine transformation that maps the vertices of the rectangular region onto the vertices of a predetermined standard rectangular region. In other words, a transformation (three-dimensional affine transformation) is applied that brings the coordinate values of the 3D points of the local patch region into the standard coordinate frame. The normalized local patch region is then sampled uniformly, and the curvature at each sampling point is taken as the shape feature of the local patch region (the 3D face feature amount). In this sense, the method using a curvature map can also be said to compare the curvature of the local patch region with that of the standard rectangular region. The curvature can be calculated, for example, by the technique disclosed in "Face identification using three-dimensional curvature: extraction of 3D facial shape features," Transactions of the IEICE, Vol. J76-D2, No. 8 (August 1993), pp. 1595-1603.
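As a simplified stand-in for the cited curvature computation (an assumption, not that paper's method), the sketch below fits a quadric height field z = ax^2 + bxy + cy^2 + dx + ey + f to the points of a normalized patch by least squares and evaluates Gaussian and mean curvature at a sample point.

import numpy as np

def fit_quadric(pts):
    """Least-squares coefficients (a, b, c, d, e, f) of z = ax^2 + bxy + cy^2 + dx + ey + f."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.stack([x ** 2, x * y, y ** 2, x, y, np.ones_like(x)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def curvatures(coeffs, x, y):
    """Gaussian curvature K and mean curvature H of the fitted height field at (x, y)."""
    a, b, c, d, e, _ = coeffs
    fx, fy = 2 * a * x + b * y + d, 2 * c * y + b * x + e     # first derivatives
    fxx, fxy, fyy = 2 * a, b, 2 * c                           # second derivatives
    g = 1 + fx ** 2 + fy ** 2
    K = (fxx * fyy - fxy ** 2) / g ** 2
    H = ((1 + fy ** 2) * fxx - 2 * fx * fy * fxy + (1 + fx ** 2) * fyy) / (2 * g ** 1.5)
    return K, H

# usage: points sampled from a sphere cap of radius 10 should give K near 1/100, |H| near 0.1
t = np.random.rand(400, 2) * 2 - 1
pts = np.column_stack([t[:, 0], t[:, 1], 10 - np.sqrt(10 ** 2 - t[:, 0] ** 2 - t[:, 1] ** 2)])
K, H = curvatures(fit_quadric(pts), 0.0, 0.0)
print(K, H)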

[0067] The extraction of the 3D face feature amounts is not limited to the above method; for example, the 3D face feature amounts may be extracted by surface approximation. Various surfaces can be used for this approximation, such as Bezier surfaces, bicubic surfaces, rational Bezier surfaces, B-spline surfaces, and NURBS (Non-Uniform Rational B-Spline) surfaces. Here, the case of using a Bezier surface is described.

[0068] FIG. 13 is a schematic diagram showing an example of a Bezier surface used in the extraction of the three-dimensional face feature amounts.

As shown in FIG. 13, a Bezier surface is a surface F defined by control points P such as P00, P01, ..., P33 arranged in a grid. The control points P determine the four corner points and the rough shape of the surface F. A Bezier surface is a polynomial surface defined on the parameter domain u ∈ [0, 1], v ∈ [0, 1]. A surface of degree n in u and degree m in v is called an n × m degree surface and is represented by (n + 1) * (m + 1) control points; when n = m, it is called a bi-n-degree surface. Such a Bezier surface is given by the following equation (5):

$$F(u, v) = \sum_{i=0}^{n}\sum_{j=0}^{m} P_{ij}\, B_i^n(u)\, B_j^m(v) \qquad (5)$$

where B_i^n and B_j^m are the Bernstein basis polynomials.
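The sketch below shows equation (5) for the bicubic case (n = m = 3): the Bernstein basis, evaluation of the surface from its 4 x 4 control points, and a linear least-squares fit of the control points to sampled patch points with known (u, v) parameters. How the measured patch points are parameterized is an assumption made here, not something the text specifies.

import numpy as np
from math import comb

def bernstein(n, i, t):
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

def bezier_point(ctrl, u, v):
    """ctrl: (4, 4, 3) control points; returns F(u, v) of equation (5) for n = m = 3."""
    p = np.zeros(3)
    for i in range(4):
        for j in range(4):
            p += bernstein(3, i, u) * bernstein(3, j, v) * ctrl[i, j]
    return p

def fit_control_points(uv, pts):
    """Least-squares control points so that F(u_k, v_k) approximates pts[k]."""
    B = np.array([[bernstein(3, i, u) * bernstein(3, j, v)
                   for i in range(4) for j in range(4)] for u, v in uv])   # (K, 16)
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)                          # (16, 3)
    return ctrl.reshape(4, 4, 3)

# usage: recover a surface from samples of a smooth height field
uv = np.random.rand(200, 2)
pts = np.column_stack([uv[:, 0], uv[:, 1], 0.2 * np.sin(np.pi * uv[:, 0]) * uv[:, 1]])
ctrl = fit_control_points(uv, pts)
print(np.round(bezier_point(ctrl, 0.5, 0.5), 3))   # close to (0.5, 0.5, 0.1)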

[0069] The Bezier surface F shown in FIG. 13 is the bicubic Bezier surface obtained for n = m = 3. By controlling (the coordinate values of) the control points P, the shape of the Bezier surface F is varied so that it approximates the shape of the local patch region. The shape information (surface information) of the approximating Bezier surface F is obtained as the patch shape information of that local patch region. In the same way, the patch shape information, that is, the three-dimensional feature vector (3D feature vector), in other words the 3D face feature amount, is obtained for each local patch region of the face. The total 3D face feature amount is then obtained by combining the patch shape information (3D face feature amounts) obtained for the individual local patch regions. The total 3D face feature amount is not limited to this; information on the relative positional relationships between the local patch regions (or between the pieces of patch shape information), that is, information such as their mutual distances and inclinations, may be added to it. In that case, "global shape information" representing the overall characteristics of the face can be handled, which makes the 3D face feature amount even better suited to personal authentication.

[0070] The local patch regions from which the 3D face feature amounts are extracted preferably include at least regions containing parts other than the facial feature parts (the eyes, eyebrows, nose, mouth, and so on). In other words, the 3D face feature amounts are preferably extracted from local patch regions that include parts with few or no distinctive features, that is, parts such as the forehead and the cheeks (flat parts with little variation in surface relief) from which features are hard to obtain as 2D feature amounts (feature parts in a 2D image). In this way, in the multiple matching with the 2D authentication, that is, in the authentication judgment based on the multiple similarity described later, authentication with higher accuracy becomes possible by using not only the feature amounts obtained from the distinctive parts (the two-dimensionally obtained feature amounts) but also the information of the feature amounts (the three-dimensionally obtained feature amounts) of the parts where features are hard to obtain.

[0071] Since the 3D face feature amount can thus be handled as a 3D feature vector (a vector quantity), registering (storing) the calculated 3D face feature amount (3D feature vector), or the comparison feature amount prepared in advance as described later (the comparison 3D feature vector corresponding to the 3D feature vector of the 3D face feature amount; the comparison vector quantity), in, for example, the storage unit 3 of the controller 10 requires a smaller amount of registration data than registering the 3D dense face shape data (the coordinate information of all the 3D points) itself. That is, the data becomes easier to handle, for example because a smaller memory capacity suffices.

[0072] The similarity calculation unit 20 evaluates the similarity between the face feature amounts of a comparison target person registered in advance (referred to as comparison feature amounts) and the face feature amounts of the authentication target person HM calculated above, that is, the 2D face feature amounts (the feature amounts of the 2D feature vectors) and the 3D face feature amounts (the feature amounts of the 3D feature vectors). More specifically, the similarity calculation unit 20 performs similarity calculations based on the comparison feature amounts and the 2D and 3D face feature amounts, calculates the two-dimensional similarity (2D similarity) L and the three-dimensional similarity (3D similarity) D, and further calculates the multiple similarity from these 2D and 3D similarities. The calculation of the 2D similarity is described first.

[0073] <Calculation of 2D similarity>

The 2D similarity L between the authentication target person HM and the comparison target person is given, as shown in the following equation (6), as the average of the accumulated similarities S_D(J_i, J_i') of the F feature vectors extracted (generated) by applying the Gabor filtering in the 2D feature amount extraction unit 17b to the F feature points.

[0074]
$$L(G, G') = \frac{1}{F}\sum_{i=1}^{F} S_D(J_i, J_i') \qquad (6)$$

In equation (6), the 2D similarity L is written as L(G, G'), where G denotes the calculated feature amounts of the feature vectors and G' denotes the registered feature amounts. The symbol i in equation (6) indexes the feature points, i = 1 to F.

[0075] The similarity S_D(J_i, J_i') in equation (6) is defined by the following equation (7):

$$S_D(J_i, J_i') = \max_{\mathbf{d}\in\Omega} S_D(J_i, J_i', \mathbf{d}) \qquad (7)$$

where the symbol Ω denotes a local region around the origin (around zero displacement) and the vector d denotes the displacement (phase difference).

[0076] The term S_D(J_i, J_i', d) in equation (7) is the phase similarity with displacement correction taken into account, given by the following equation (8). Equation (8) has the form of an amplitude correlation weighted by the similarity of the phase angles:

$$S_D(J, J', \mathbf{d}) = \frac{\displaystyle\sum_{j=1}^{N} a_j\, a_j' \cos\!\bigl(\phi_j - \phi_j' - \mathbf{d}\cdot\mathbf{k}_j\bigr)}{\sqrt{\displaystyle\sum_{j=1}^{N} a_j^2 \;\sum_{j=1}^{N} a_j'^2}} \qquad (8)$$

[0077] In equation (8), the j-th component of a feature vector J is the complex Gabor coefficient J_j = a_j exp(iφ_j), N is the number of complex Gabor filters, the symbol a denotes the amplitude, and the symbol φ denotes the phase. The vector k_j points in the direction of the j-th two-dimensional wave and has the magnitude of its frequency, and is given by the following equation (9):

$$\mathbf{k}_j = \begin{pmatrix} k_{jx} \\ k_{jy} \end{pmatrix} = \frac{2\pi}{\lambda_j}\begin{pmatrix} \cos\theta_j \\ \sin\theta_j \end{pmatrix} \qquad (9)$$

where θ_j is the orientation and λ_j the scale (wavelength) of the j-th Gabor filter.
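A sketch of the jet comparison described by equations (6) to (8) as reconstructed above follows: the displacement-corrected similarity of two Gabor jets (an amplitude correlation weighted by phase agreement), maximized over a small set of displacements d near zero, and averaged over all F feature points. The candidate displacement grid and the synthetic data are assumptions.

import numpy as np

def jet_similarity(jet1, jet2, k_vectors, d):
    """Equation (8): phase-corrected similarity of two complex Gabor jets for displacement d."""
    a1, p1 = np.abs(jet1), np.angle(jet1)
    a2, p2 = np.abs(jet2), np.angle(jet2)
    phase_term = np.cos(p1 - p2 - k_vectors @ np.asarray(d))
    return np.sum(a1 * a2 * phase_term) / np.sqrt(np.sum(a1 ** 2) * np.sum(a2 ** 2))

def displacement_corrected_similarity(jet1, jet2, k_vectors, radius=2):
    """Equation (7): maximize over displacements in a small region Omega around zero."""
    cands = [(dx, dy) for dx in range(-radius, radius + 1)
                      for dy in range(-radius, radius + 1)]
    return max(jet_similarity(jet1, jet2, k_vectors, d) for d in cands)

def similarity_2d(jets1, jets2, k_vectors):
    """Equation (6): average the per-point similarities over all F feature points."""
    return np.mean([displacement_corrected_similarity(j1, j2, k_vectors)
                    for j1, j2 in zip(jets1, jets2)])

# usage with synthetic 40-coefficient jets at F = 3 feature points
rng = np.random.default_rng(0)
k_vectors = rng.normal(size=(40, 2))
jets_a = rng.normal(size=(3, 40)) + 1j * rng.normal(size=(3, 40))
print(similarity_2d(jets_a, jets_a, k_vectors))   # identical jets give 1.0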

[0078] When the Gabor filters are not used, the 2D similarity can also be calculated from a Euclidean distance, in the same way as the 3D similarity calculation described below.

[0079] <Calculation of 3D similarity>

The 3D similarity between the authentication target person HM and the comparison target person, that is, the similarity D of the shape feature amounts, can be obtained by calculating the sum of the Euclidean distances between the mutually corresponding vectors (the 3D feature vectors d^S), as shown in the following equation (10):

$$D = \sum_{i} \bigl\| \mathbf{d}^{S}_{i} - \mathbf{d}^{S\prime}_{i} \bigr\| \qquad (10)$$

[0080] <Calculation of multiple similarity>

The multiple similarity, which is the overall similarity between the authentication target person HM (the authentication object) and a comparison target person (the comparison object), is calculated as a weighted sum of the 2D similarity and the 3D similarity, as shown in the following equation (11). The multiple similarity is denoted Re.

[0081]
$$Re = W D_i + (1 - W)\, S_D(J_i, J_i') \qquad (11)$$

where the symbol W denotes a predetermined weighting coefficient; here, the weighting coefficient W is given as a predetermined fixed value.
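The following is a sketch of equations (10) and (11) as reconstructed above: the 3D score is the sum of Euclidean distances between corresponding local 3D feature vectors, and the multiple similarity Re is a fixed-weight combination of the 3D and 2D terms. Treating the 2D term as a dissimilarity (1 minus the similarity) so that a smaller Re means a better match, as the decision rules below assume, is our assumption, not something stated in the text.

import numpy as np

def similarity_3d(features, registered):
    """Equation (10): sum of Euclidean distances between corresponding 3D feature vectors."""
    return sum(np.linalg.norm(f - g) for f, g in zip(features, registered))

def multiple_similarity(d3, s2, weight=0.5):
    """Equation (11) with a fixed weight W; s2 is the 2D similarity of the probe/gallery pair."""
    return weight * d3 + (1 - weight) * (1 - s2)   # assumption: convert s2 to a dissimilarity

# usage: one probe compared against one registered entry
probe_3d = [np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.5, 0.1])]
gallery_3d = [np.array([0.1, 0.25, 0.3]), np.array([0.05, 0.5, 0.1])]
d3 = similarity_3d(probe_3d, gallery_3d)
print(multiple_similarity(d3, s2=0.93, weight=0.6))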

[0082] The registered data storage unit 21 stores the information of the face feature amounts of the comparison target persons prepared in advance (the comparison feature amounts, or comparison face feature amounts).

[0083] The judgment unit 22 performs the authentication judgment based on the multiple similarity Re. The method of the authentication judgment differs between face verification and face identification, as described in (a) and (b) below.

[0084] (a) Face verification judges whether the input face (the face of the authentication target person HM) belongs to a specific registrant. In face verification, the identity between the authentication target person HM and the comparison target person is judged by comparing the similarity between the face feature amounts of the specific registrant, that is, the comparison target person (the comparison feature amounts), and the face feature amounts of the authentication target person HM with a predetermined threshold. More specifically, when the multiple similarity Re is smaller than a predetermined threshold TH1, the authentication target person HM is judged to be the same person as the comparison target person. The information of the threshold TH1 is stored in the judgment unit 22; alternatively, it may be stored in the registered data storage unit 21.

[0085] (b) Face identification judges to whom the input face belongs. In face identification, the similarities between the face feature amounts of all the registered persons (comparison target persons) and the face feature amounts of the authentication target person HM are calculated, and the identity between the authentication target person HM and each comparison target person is judged. The comparison target person with the highest identity among the plural comparison target persons is then judged to be the same person as the authentication target person HM. More specifically, the comparison target person corresponding to the smallest multiple similarity Re (Remin) among the multiple similarities Re between the authentication target person HM and the plural comparison target persons is judged to be the same person as the authentication target person HM.
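A minimal sketch of the two decision rules follows: verification accepts the claimed identity when Re against that one registrant is below the threshold TH1, while identification picks the registrant with the smallest Re. The threshold value and the example scores are illustrative.

def verify(re_score, th1=0.35):
    """Face verification: True if the probe is judged to be the claimed registrant."""
    return re_score < th1

def identify(re_scores):
    """Face identification: return the key of the registrant with the smallest Re."""
    return min(re_scores, key=re_scores.get)

print(verify(0.21))                                              # True
print(identify({"alice": 0.42, "bob": 0.18, "carol": 0.55}))     # "bob"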

[0086] FIG. 14 is a flowchart showing an example of the face authentication operation according to this embodiment. First, face images of the authentication target person HM are acquired by capturing with the cameras CA1 and CA2 (step S1). Next, the two face images obtained by the capture are input to the controller 10 (the image input unit 11) (step S2). Next, the face region detection unit 12 detects a face region image from each face image input to the image input unit 11 (step S3). From the detected face region images, the face part detection unit 13 detects the feature parts of the face, that is, the coordinates of the feature points and the texture information of the feature regions (step S4). Then, the face part 3D calculation unit 14 calculates the three-dimensional coordinates of each feature part (the 3D face part shape data) from the coordinates of the facial feature parts (the feature point coordinates) detected by the face part detection unit 13 (step S5). The posture and light source correction unit 15 applies the posture variation correction and the light source variation correction to the texture information detected by the face part detection unit 13 (step S6). Then, the 2D authentication unit 17 calculates the 2D face feature amounts from the corrected texture images of the feature regions that have undergone the posture variation correction and the light source variation correction (step S7).
[0087] Meanwhile, the face region 3D calculation unit 18 calculates the 3D dense face shape data, consisting of a plurality of 3D points, from the face region images (the stereo image) detected by the face region detection unit 12 (step S8). Next, in the 3D authentication unit 19, the 3D local patch extraction unit 19a calculates the three-dimensional local patch regions from the 3D dense face shape data calculated by the face region 3D calculation unit 18 and the 3D face part shape data calculated by the face part 3D calculation unit 14 in step S5 (step S9). Then, the 3D feature amount extraction unit 19b calculates the 3D face feature amounts from the information of the local patch regions calculated by the 3D local patch extraction unit 19a (step S10). Next, the similarity calculation unit 20 evaluates the similarity between the face feature amounts of the comparison target person registered in advance (the comparison feature amounts) and the local 2D face feature amounts and 3D face feature amounts calculated in steps S7 and S10; that is, the similarity calculations based on the comparison feature amounts and the 2D and 3D face feature amounts are performed to obtain the 2D similarity and the 3D similarity, and the multiple similarity is then calculated from these similarities (step S11). Finally, based on the multiple similarity, the judgment unit 22 performs the authentication judgment for face verification or face identification (step S12).

[0088] FIG. 15 is a flowchart showing an example of the operation in step S9 of FIG. 14. In step S9, the 3D local patch extraction unit 19a first sets (calculates) the local patch extraction planes T from the feature points (3D coordinates) of the feature parts calculated by the face part 3D calculation unit 14 (the 3D face part shape data) (step S21). Next, rectangular regions S (the partial regions referred to below) are set on the local patch extraction planes T (step S22). Then, the local patch region P corresponding to each rectangular region S is set; that is, among the plural 3D points α constituting the 3D dense face shape data, the 3D points α whose perpendiculars dropped onto the local patch extraction plane T fall inside the rectangular region S are identified, and the region formed by these identified 3D points α is set as the local patch region P (step S23).

[0089] As described above, according to the authentication system 1 of this embodiment, the three-dimensional shape acquisition unit (the face region detection unit 12 and the face region 3D calculation unit 18) acquires the information of the whole three-dimensional shape (whole 3D shape), that is, the overall 3D shape of the face of the authentication target person HM, and the local region determination unit (the 3D local patch extraction unit 19a) determines, from the whole 3D shape information (the 3D dense face shape data) acquired by the three-dimensional shape acquisition unit, a plurality of three-dimensional local regions (3D local regions; the local patch regions), which are local regions within the whole 3D shape. The three-dimensional feature amount calculation unit (the 3D feature amount extraction unit 19b) calculates, from the local three-dimensional shape information (local 3D shape information) in the 3D local regions determined by the local region determination unit, the local region shape information concerning the shapes of those 3D local regions, that is, the 3D face feature amounts, which are three-dimensional feature amounts of the face. The feature amount comparison unit (the similarity calculation unit 20 and the judgment unit 22) then compares the 3D face feature amounts calculated by the three-dimensional feature amount calculation unit with the comparison face feature amounts prepared in advance, so as to perform the authentication operation for the authentication target person HM.

[0090] According to the authentication method of this embodiment, in the first step, the information of the whole 3D shape, that is, the overall 3D shape of the face of the authentication target person, is acquired, and in the second step, a plurality of 3D local regions, which are local regions within the whole 3D shape, are determined from the whole 3D shape information. In the third step, the 3D face feature amounts, which are the local region shape information concerning the shapes of the 3D local regions and are three-dimensional feature amounts of the face, are calculated from the local 3D shape information in the 3D local regions. In the fourth step, the 3D face feature amounts are compared with the comparison face feature amounts prepared in advance, so as to perform the authentication operation for the authentication target person HM.

[0091] In the authentication system or the authentication method as described above, a plurality of 3D local regions are determined from the whole 3D shape of the face of the authentication target person HM, the 3D face feature amounts are calculated from the local 3D shape information in these 3D local regions, and the authentication operation for the authentication target person is performed by comparing these 3D face feature amounts with the comparison face feature amounts. Therefore, rather than using the information of the whole 3D shape of the face as it is, a plurality of local regions (3D local regions) are extracted from the whole 3D face shape and authentication is performed based on the extracted 3D local regions. Even if part of the face is occluded, the occluded part need not necessarily be used, and authentication can be performed using the information of the local regions other than that part, so that degradation of the authentication accuracy can be reduced. In addition, since the information of the whole 3D shape (3D data), which has a large data volume, need not be handled as it is, that is, since only the partial 3D shape data of the local regions needs to be handled, the processing time is shortened and the authentication speed can be improved.

[0092] In the above authentication system, the three-dimensional shape acquisition unit includes a two-dimensional image acquisition unit (the cameras CA1 and CA2) that acquires 2D images of the face, and the feature part extraction unit (the face part detection unit 13) extracts the feature parts, which are characteristic parts of the face, from the 2D images acquired by the two-dimensional image acquisition unit. The three-dimensional coordinate calculation unit (the face part 3D calculation unit 14) then calculates the 3D coordinates M(j) of the feature parts extracted by the feature part extraction unit, and the local region determination unit determines the 3D local regions based on the 3D coordinates of the feature parts calculated by the three-dimensional coordinate calculation unit.

[0093] In the above authentication method, the first step includes a fifth step of acquiring 2D images of the face; in a sixth step, the feature parts, which are characteristic parts of the face, are extracted from the 2D images; in a seventh step, the 3D coordinates of the feature parts are calculated; and in the second step, the 3D local regions are determined based on the 3D coordinates of the feature parts.

[0094] In the authentication system or the authentication method as described above, the feature parts, which are characteristic parts of the face, are extracted from the 2D images, the 3D coordinates of these feature parts are calculated, and the 3D local regions are determined based on these 3D coordinates. The determination of the 3D local regions can therefore be associated with the information of the two-dimensional feature parts, and high-accuracy authentication using the information of the feature parts together with the information of the 3D local regions becomes possible.

[0095] In the above authentication system, the local region determination unit sets a partial region of a predetermined shape (for example, the rectangular region S) within a plane determined from the 3D coordinates (the local patch extraction plane T), and determines the region of the whole 3D shape corresponding to that partial region as a 3D local region. Since a partial region of a predetermined shape is set within a plane determined from the 3D coordinates of the feature parts and the region of the whole 3D shape corresponding to that partial region is determined as a 3D local region, the 3D local regions can easily be determined from the 3D coordinates of the feature parts by a simple method.

[0096] In the above authentication system, the whole 3D shape information is face shape data consisting of a plurality of 3D points (α), and the local region determination unit determines, as a 3D local region (the local patch region P), the region formed by the 3D points (α) whose perpendiculars, dropped virtually onto the plane, fall inside the partial region. Since the region formed by the 3D points whose perpendiculars dropped virtually onto the plane fall inside the partial region is determined as the 3D local region, the 3D local region corresponding to the partial region can easily be determined by a simple method.

[0097] In the above authentication system, the local region determination unit compares the whole 3D shape with a reference three-dimensional partial model shape prepared in advance (a reference 3D partial model shape; a reference patch) and determines, as a 3D local region, the portion of the whole 3D shape whose shape is most similar to the reference 3D partial model shape. Since the whole 3D shape is compared with the reference 3D partial model shape and the portion of the whole 3D shape most similar in shape to the reference 3D partial model shape is determined as the 3D local region, the 3D local regions in the whole 3D shape can easily be determined without requiring the configuration and operations for acquiring 2D images and extracting the feature parts (2D face feature amounts) from those 2D images.

[0098] In the above authentication system, the three-dimensional feature amount calculation unit calculates, as the local region shape information, the local 3D shape information of a 3D local region converted into predetermined surface information (for example, by the method using a Bezier surface described above). Since the local 3D shape information in the 3D local region is converted into predetermined surface information (for example, curvature) and used as the local region shape information of the 3D local region, rather than using the 3D shape information as it is, dimensionality reduction becomes possible and the processing becomes faster.

[0099] In the above authentication system, the three-dimensional feature amount calculation unit calculates, as the 3D face feature amount, a 3D face feature amount that also includes information on the relative positional relationships between the 3D local regions. Since the 3D face feature amount also includes the information on the relative positional relationships between the 3D local regions, it can represent not only the individual features of each 3D local region but also features spanning the whole face (the global shape information of the face can be handled), which enables authentication with even higher accuracy.

[0100] In the above authentication system, the local region determination unit determines the 3D local regions in the whole 3D shape so that the plurality of 3D local regions are arranged at bilaterally symmetric positions on the face. Since the 3D local regions are arranged at bilaterally symmetric positions on the face, the determination (of the positions) of the 3D local regions in the whole 3D shape can be performed efficiently, which shortens the processing time and improves the ease of handling the data.

[0101] また、上記認証システムにお!/、て、局所領域決定部によって、複数の 3D局所領域 が少なくとも顔の鼻及び頰の部位が含まれるように全体 3D形状における該 3D局所 領域が決定される。このように、少なくとも顔の鼻及び頰の部位が含まれるように全体 3D形状における 3D局所領域が決定されるので、当該 3D局所領域を、例えば髪で 隠れてしまう部位 (例えば額)や計測し難!/、部位 (例えば口髭を有する場合の口 )を 避けて設定することができて、この 3D局所領域から精度良く 3D顔特徴量を算出する こと力 Sでき、 、ては高精度の認証を行うことが可能となる。 [0101] Also, in the above authentication system, a plurality of 3D local regions are created by the local region determining unit. The 3D local region in the overall 3D shape is determined so that at least the nose and the heel region of the face are included. In this way, since the 3D local area in the overall 3D shape is determined so that at least the face nose and wrinkle are included, the 3D local area is measured by, for example, a part (for example, forehead) or a part that is hidden by hair. Difficult! /, It can be set avoiding parts (for example, mouth when having a mustache), and it is possible to calculate 3D facial features with high accuracy from this 3D local area. Can be performed.

[0102] また、上記認証システムにおいて、 2次元特徴量算出部(2D特徴量抽出部 17b)に よって、特徴部位抽出部により抽出された特徴部位の情報から顔の 2次元的な特徴 量である 2D顔特徴量が算出される。そして、特徴量比較部によって、 2次元特徴量 算出部により算出された 2D顔特徴量と 3次元特徴量算出部により算出された 3D顔 特徴量とを例えば重み付け和により併せてなる総合的な顔特徴量 (多重類似度)と、 比較用顔特徴量とが比較される。  [0102] In the above authentication system, the two-dimensional feature amount of the face is obtained from the feature part information extracted by the feature part extraction unit by the two-dimensional feature quantity calculation unit (2D feature quantity extraction unit 17b). A 2D face feature is calculated. Then, the feature amount comparison unit combines the 2D face feature amount calculated by the 2D feature amount calculation unit and the 3D face feature amount calculated by the 3D feature amount calculation unit, for example, by a weighted sum. The feature quantity (multiple similarity) is compared with the comparison face feature quantity.

[0103] また、上記認証方法にぉレ、て、第 8の工程にお!/、て、特徴部位の情報から顔の 2次 元的な特徴量である 2D顔特徴量が算出され、第 4の工程において、 2D顔特徴量と 3D顔特徴量とを併せてなる総合的な顔特徴量と、比較用顔特徴量とが比較される。  [0103] In addition, in the eighth step, the 2D face feature value, which is a two-dimensional feature value of the face, is calculated from the information on the feature part. In step 4, the total face feature value that is a combination of the 2D face feature value and the 3D face feature value is compared with the comparison face feature value.

[0104] これらのように認証システム又は認証方法では、顔の 2D顔特徴量が算出され、この 2D顔特徴量と 3D顔特徴量とを併せてなる総合的な顔特徴量と、比較用顔特徴量と が比較されるので、 2D顔特徴量と 3D顔特徴量とを用いたより高精度な認証を行うこ とが可能となる。  [0104] As described above, in the authentication system or the authentication method, the 2D face feature amount of the face is calculated, and the comprehensive face feature amount combining the 2D face feature amount and the 3D face feature amount is compared with the comparison face. Since the feature quantities are compared, it is possible to perform more accurate authentication using the 2D face feature quantities and the 3D face feature quantities.

[0105] また、上記認証システムにおいて、 3次元特徴量算出部によって、少なくとも顔の特 徴部位以外の部位を含む 3D局所領域における局所 3D形状情報から 3D顔特徴量 が算出される。このように、少なくとも顔の特徴部位以外の部位を含む 3D局所領域 における局所 3D形状情報から 3D顔特徴量が算出されるので、 2D顔特徴量と 3D顔 特徴量とを用いた認証(多重認証)を行うに際して、 2D顔特徴量として特徴を抽出し 難い特徴部位以外の部位の特徴を、 3D顔特徴量として含むことができ、すなわち 2 D顔特徴量でカバーすることができない特徴量を 3D顔特徴量でカバーすることがで き、ひいてはより高精度な認証を行うことができる。  [0105] Further, in the authentication system, the 3D feature quantity calculation unit calculates the 3D face feature quantity from the local 3D shape information in the 3D local region including at least a part other than the facial feature part. In this way, 3D face feature quantities are calculated from local 3D shape information in a 3D local area that includes at least parts other than facial feature parts. Therefore, authentication using multiple 2D face feature quantities and 3D face feature quantities (multiple authentication) ), Features of parts other than feature parts that are difficult to extract as 2D face feature quantities can be included as 3D face feature quantities, that is, feature quantities that cannot be covered by 2D face feature quantities. It can be covered with facial features, and as a result, more accurate authentication can be performed.

[0106] また、上記認証システムにおいて、 2D顔特徴量を算出するための特徴部位の情報 はテクスチャ情報であって、補正部(姿勢 ·光源補正部 15)によって、当該テクスチャ 情報に対して、顔の姿勢に関する補正である姿勢変動補正及び顔に対する光源の 向きに関する補正である光源変動補正が行われる。これによれば、 2D顔特徴量を算 出するための特徴部位のテクスチャ情報に対して、顔の姿勢に関する補正である姿 勢変動補正及び顔に対する光源の向きに関する補正が行われるので、姿勢変動補 正及び光源変動補正がなされた補正テクスチャ情報に基づいて適正な 2D顔特徴量 を得ることができ、ひいてはより高精度な認証を行うことができる。 [0106] Also, in the above authentication system, information on feature parts for calculating 2D face feature values Is texture information, and the correction unit (posture / light source correction unit 15) performs posture fluctuation correction that is correction related to the posture of the face and light source fluctuation correction that is correction related to the direction of the light source relative to the face. Done. According to this, the posture variation correction, which is correction related to the posture of the face, and the correction of the direction of the light source with respect to the face are performed on the texture information of the feature part for calculating the 2D face feature amount. Based on the corrected texture information that has been corrected and corrected for light source fluctuations, it is possible to obtain appropriate 2D facial feature values, which in turn enables more accurate authentication.

[0107] In the above authentication system, in the 3D shape acquisition unit, 2D images of the face are captured by at least two imaging devices (cameras CA1 and CA2), and the 3D shape calculation unit calculates the whole 3D shape by performing 3D reconstruction on corresponding points obtained from the two 2D images by computation based on the phase-only correlation method. Because the whole 3D shape is calculated from two 2D images obtained from at least two imaging devices, it can be obtained at low cost, without an expensive 3D measuring device, and with the good accuracy of the phase-only correlation method.
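The following sketch shows the core of a phase-only correlation computation between two image blocks taken from the two cameras: the peak of the correlation surface gives the displacement of the corresponding point. It is a minimal illustration only; the windowing, hierarchical search and sub-pixel peak estimation that a practical stereo matcher would add, as well as the function name itself, are assumptions of this example.

    import numpy as np

    def phase_only_correlation(block1, block2):
        """Phase-only correlation surface between two equally sized image
        blocks; the peak location gives the block displacement used to find
        corresponding points between the two camera images."""
        F1 = np.fft.fft2(block1)
        F2 = np.fft.fft2(block2)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12          # keep phase information only
        poc = np.real(np.fft.ifft2(cross))      # correlation surface
        peak = np.unravel_index(np.argmax(poc), poc.shape)
        return poc, peak

The corresponding points found this way for many blocks are then triangulated with the calibrated camera geometry to obtain the whole 3D shape.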

[0108] In the above authentication system, the 3D face feature amount calculated by the 3D feature amount calculation unit is a vector quantity (3D feature vector), and the storage unit (storage unit 3) stores, as the comparison face feature amount (comparison feature amount), a comparison vector quantity (3D feature vector for comparison) corresponding to this vector quantity. The data stored for comparison is therefore not the measured, so-called dense 3D shape data (dense 3D face shape data) but a vector quantity, so the amount of data to be stored is small (little memory is required) and the data is easy to handle.

[0109] The authentication system and authentication method of the embodiment described above are configured so that a multiple similarity is calculated from the 2D face feature amount and the 3D face feature amount and the authentication decision for face verification or face identification is made from this multiple similarity. They may instead be configured so that a similarity is calculated from local region shape information and global region shape information and the face verification or face identification decision is made from that similarity.

[0110] FIG. 16 is a functional block diagram for explaining the face authentication functions provided in another controller. FIG. 17 is a flowchart showing an example of the operation of the authentication system shown in FIG. 16.

[0111] The authentication system of this embodiment differs from the authentication system 1 shown in FIGS. 1 to 3 in that it includes the controller 30 shown in FIG. 16 instead of the controller 10 of that system. The schematic configuration of the authentication system as in FIG. 1 and the overall configuration of the controller as in FIG. 2 are therefore not described again; the functional blocks of the controller 30 are described below.

[0112] In FIG. 16, the controller 30 functionally comprises an image input unit 31, a face region detection unit 32, a face part detection unit 33, a face part 3D calculation unit 34, a face region 3D calculation unit 35, a three-dimensional local region extraction unit (3D local region extraction unit) 36, a local region information calculation unit 37, a global region information calculation unit 38, a similarity calculation unit 39, a registered data storage unit 40, and a comprehensive judgment unit 41.

[0113] The image input unit 31 (first and second image input units 31a, 31b), the face region detection unit 32 (first and second face region detection units 32a, 32b), the face part detection unit 33 (first and second face part detection units 33a, 33b), the face part 3D calculation unit 34 and the face region 3D calculation unit 35 are the same as the image input unit 11 (first and second image input units 11a, 11b), the face region detection unit 12 (first and second face region detection units 12a, 12b), the face part detection unit 13 (first and second face part detection units 13a, 13b), the face part 3D calculation unit 14 and the face region 3D calculation unit 18 shown in FIG. 3, respectively, so their description is omitted.

[0114] The 3D local region extraction unit 36 extracts (calculates) three-dimensional local regions from the dense 3D face shape data calculated by the face region 3D calculation unit 35 and the 3D face part shape data (feature parts) calculated by the face part 3D calculation unit 34. That is, the 3D local region extraction unit 36 is similar to the 3D local patch extraction unit 19a of the three-dimensional authentication unit 19 shown in FIG. 3 and extracts (calculates) three-dimensional local patch regions from the dense 3D face shape data and the 3D face part shape data (feature parts). As described above, various extraction methods can be used for these three-dimensional local patch regions: a method in which the region of the dense 3D face shape data corresponding to a partial region of predetermined shape set in a plane is extracted as the local patch region by dropping perpendiculars onto that plane; a method in which the region of the dense 3D face shape data most similar to a reference model shape is extracted as the local patch region; a method in which the region of the dense 3D face shape data contained in a region defined in advance on the two-dimensional image is determined as the local patch region; a method in which the local patch region is determined by an intersection test with the shape of a standard model computed from an average face; and a method in which the local patch region is determined by a region test with the shape of a standard model computed from an average face.
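The first of these strategies can be sketched as follows: a point of the dense face data belongs to the patch when the foot of its perpendicular onto the extraction plane falls inside a rectangle set in that plane. The rectangle parameters, the orthonormal plane axes u and v, and the function name are assumptions introduced only for this illustration.

    import numpy as np

    def extract_local_patch(points, origin, u, v, half_w, half_h):
        """Select the 3D points of the dense face data whose perpendicular
        projection onto a plane (spanned by orthonormal axes u, v through
        'origin') falls inside a half_w x half_h rectangle.
        points: N x 3 array; origin, u, v: length-3 arrays."""
        rel = points - origin              # vectors from the plane origin
        a = rel @ u                        # in-plane coordinate along u
        b = rel @ v                        # in-plane coordinate along v
        inside = (np.abs(a) <= half_w) & (np.abs(b) <= half_h)
        return points[inside]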

[0115] The local region information calculation unit 37 extracts (calculates) local region information from the information of each single three-dimensional local region (local patch region) extracted by the 3D local region extraction unit 36. In this embodiment, the local region information calculation unit 37 extracts, from the information of each extracted local patch region alone, a three-dimensional feature amount specific to that feature part of the face (a local 3D face feature amount). This local 3D face feature amount can be extracted, for example, by the same methods as in the 3D feature amount extraction unit 19b of the three-dimensional authentication unit 19 shown in FIG. 3 described above: a method that extracts, as the local 3D face feature amount, the curvatures at a plurality of points on the curved surface of the local patch region, and a method that extracts, as the local 3D face feature amount, the shape information (curved-surface information) of a curved surface approximated to the shape of the local patch region.

[0116] As another example, a method can also be used in which the distances between the standard model and the local patch region, after the standard model has been aligned to each local patch region, are extracted as the local 3D face feature amount. More specifically, first, the corresponding points S' (S' = (s_1, s_2, ..., s_Nh)) in the local patch region are obtained for the Nh definition points H of the standard local model defined in advance on the standard local region model used in the 3D local region extraction unit 36. Next, the distances d(h_i, s_i) between the points H of the standard model and the corresponding points S' of the local patch region are obtained. The local 3D face feature amount is then obtained by arranging these distances d(h_i, s_i) into a 3D feature vector.
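A minimal sketch of this feature-vector construction, assuming the two point sets have already been aligned and put into one-to-one correspondence; the function name and the NumPy representation are assumptions of this example.

    import numpy as np

    def local_distance_feature(model_pts, patch_pts):
        """Local 3D face feature as the vector of distances d(h_i, s'_i)
        between the Nh definition points of the standard local model and
        their corresponding points on the measured patch (both Nh x 3
        arrays, already aligned)."""
        return np.linalg.norm(model_pts - patch_pts, axis=1)   # length Nh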

[0117] The global region information calculation unit 38 extracts (calculates) global region shape information from the information of the three-dimensional local regions (local patch regions) extracted by the 3D local region extraction unit 36. In this embodiment, the global region information calculation unit 38 extracts (calculates), from the information of the extracted local patch regions, a three-dimensional feature amount of the face as a whole (a global 3D face feature amount).

[0118] The global region shape information is the counterpart of the local region shape information and is a feature amount of the three-dimensional shape of the whole face of the person to be authenticated. The global region shape information is calculated, for example, <1> from the three-dimensional local regions of the face, <2> from the shape of the whole face, or <3> from three-dimensional feature points of the face. Each of the cases <1> to <3> is described more specifically below.

[0119] <1> Calculation based on the 3D local regions of the face

Examples of methods for calculating the global 3D face feature amount from the local patch regions of the face include <1-1> and <1-2> below.

[0120] <1-1>

The global 3D face feature amount is calculated from the centroids of the local patch regions. That is, first, the centroid of each of the Nh local patch regions is calculated. Next, for the calculated centroids S = (s_1, s_2, ..., s_Nh) and the registered centroids T = (t_1, t_2, ..., t_Nh), the distance d(t_j, s_j) between each pair of corresponding centroids is calculated. The average of these distances is then obtained as the global 3D face feature amount distb (Equation 12):

    distb = (1/Nh) * SUM_{j=1..Nh} d(t_j, s_j)    ... (12)

[0121] <1-2>

The global 3D face feature amount can also be calculated from the normals of the local patch regions. More specifically, first, the standard model and the local patch region are aligned by SRT fitting. Next, RT registration aligns the local patch regions to be compared with each other with higher accuracy. The corresponding points are then obtained by recomputing the correspondences. A normal is obtained for each of these corresponding points, and the global 3D face feature amount distb is obtained from the normals by Equation 13 (the printed expression is reproduced only as an image in the source).

[0122] <SRT fitting>

SRT fitting is a process that aligns the feature points of the standard model with the feature points of the measurement data. It affine-transforms the standard model data by Equations 14-1 and 14-2 so that the distance energy between the feature points of the standard model and the feature points of the measurement data is minimized.

[0123]
    P_new = M * P_old                             ... (14-1)
    f(M_k, C_k) = SUM_k | M_k - C_k |^2           ... (14-2)

Here, P_new is a 3D point after transformation, M is a 3-row, 4-column transformation matrix, and P_old is the 3D point before transformation. M_k are the feature points of the standard model, C_k are the feature points of the measurement data, k runs over the feature points, and f(M_k, C_k) is the distance energy between the feature points of the standard model and those of the measurement data.

[0124] In SRT fitting, the transformation matrix that minimizes this distance energy is obtained, for example, by the least-squares method, and the position of the transformed standard model data is obtained. The projection center point of the standard model is also transformed accordingly.
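A minimal sketch of this least-squares step, assuming K corresponding feature points are already available; solving for the 3x4 affine matrix with an ordinary least-squares solver is one straightforward realisation, and the function names and the NumPy representation are assumptions of this example.

    import numpy as np

    def srt_fit(model_feats, meas_feats):
        """Least-squares estimate of the 3x4 affine matrix M that maps the
        standard-model feature points onto the measured feature points
        (P_new = M [P_old; 1]), minimising the distance energy of
        Equations 14-1/14-2.  Both inputs are K x 3 arrays."""
        K = model_feats.shape[0]
        H = np.hstack([model_feats, np.ones((K, 1))])    # homogeneous, K x 4
        Mt, *_ = np.linalg.lstsq(H, meas_feats, rcond=None)  # 4 x 3 solution
        return Mt.T                                      # 3 x 4 affine matrix

    def apply_srt(M, pts):
        """Apply the fitted affine transform to an N x 3 point set."""
        return np.hstack([pts, np.ones((len(pts), 1))]) @ M.T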

[0125] RT registration takes a first measurement data set T (T = {t_i | i in N_t}) consisting of N_t points and a second measurement data set S (S = {s_i | i in N_s}) consisting of N_s points, a number different from that of the first data set, and obtains the corresponding point groups S' (S' = (s_1, s_2, ..., s_Nh)) and T' (T' = (t_1, t_2, ..., t_Nh)) that correspond to the point group H of the Nh points selected on the standard model.

[0126] The registration of the two point groups finds the rotation matrix RR and the translation vector RT such that m_i = RR * s_i + RT.

[0127] First, the covariance matrix B is calculated using the corresponding point groups S' and T'. The covariance matrix B is given by Equation 15-1, and the matrix A appearing in it is given by Equation 15-2 (both printed expressions are reproduced only as images in the source).

[0128] The matrix A of Equation 15-2 is likewise reproduced only as an image in the source; its entries are formed from sums and differences of the coordinates of the corresponding points. Here, s' = (s'_x, s'_y, s'_z) represents the three-dimensional coordinates of a measurement point, and t' is defined in the same way.

[0129] Eigenvalue decomposition is then performed on the symmetric matrix formed from this covariance matrix B, for example by the Jacobi method, and the eigenvalues and eigenvectors are calculated.

[0130] The smallest eigenvalue is found among the calculated eigenvalues (lambda_1 >= ... >= lambda_j >= ... >= lambda_k), the eigenvector corresponding to this smallest eigenvalue is obtained, the matrix [phi_1, phi_2, ..., phi_k] formed from the obtained eigenvectors is selected, and the rotation matrix RR and the translation vector RT are calculated.
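The net effect of the procedure in [0126] to [0130] is a rigid alignment of the two corresponding point sets. The sketch below computes an equivalent rotation RR and translation RT with the SVD-based (Kabsch) solution rather than the eigen-decomposition of B described above; it is offered only as a stand-in, and the function name and NumPy usage are assumptions of this example.

    import numpy as np

    def rigid_registration(S, T):
        """Estimate rotation RR and translation RT aligning point set S onto
        T (both N x 3, with S[i] corresponding to T[i]), so that
        T[i] ~= RR @ S[i] + RT."""
        cs, ct = S.mean(axis=0), T.mean(axis=0)
        H = (S - cs).T @ (T - ct)              # 3 x 3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        RR = Vt.T @ D @ U.T                    # proper rotation (det = +1)
        RT = ct - RR @ cs
        return RR, RT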

[0131] <2> Calculation based on the shape of the whole face

Examples of methods for calculating the global region shape information from the whole face include <2-1> to <2-6> below.

[0132] <2-1>

The global 3D face feature amount is obtained by performing SRT fitting on the standard model using the extracted local patch regions and taking, of the SRT fitting parameters of deformation S, translation T and rotation R, the deformation parameter S.

[0133] The deformation parameter S is the parameter that deforms the shape of the standard model so that the definition points on the standard model fit the feature points on the local patch regions. Provided that the feature points used for SRT fitting coincide almost exactly with the feature points of the face, the deformation parameter S can be regarded as representing the individual, because the size of the same person's face (width, height, depth and so on) does not change. The same value of the deformation parameter S is also obtained even when imaging conditions such as magnification or exposure change. Furthermore, the deformation parameter S need not use all the local patch regions; it may be obtained by SRT fitting performed collectively on a plurality of local patch regions including the nose, which can be obtained stably.

[0134] <2-2>

The global 3D face feature amount is given, for each local patch region, by obtaining the distance between each of a plurality of points defined in advance on the standard model and the corresponding measurement point on the local patch region, and then taking the average of these distances.

[0135] More specifically, first, the standard model and the local patch region are aligned by SRT fitting. Next, the point group S' (S' = (s_1, s_2, ..., s_Nh)) of measurement points on the local patch region corresponding to the Nh points H (H = (h_1, h_2, ..., h_Nh)) defined in advance on the selected standard model is acquired. Next, the distance d(h_i, s_i) between each point of the standard model and the corresponding point of the local patch region is obtained, and the average of these distances is obtained as the global 3D face feature amount distb (see Equation 12). As long as the alignment by SRT fitting is approximately exact, the same corresponding points S' are obtained for each person.

[0136] <2-3>

The global 3D face feature amount is given, for each local patch region, by projecting a plurality of points defined in advance on the standard model onto the local patch region, obtaining the distance between each projected point and the corresponding projected point of the registered data processed in the same way, and taking the average of these distances.

[0137] More specifically, first, the standard model and the local patch region are aligned by SRT fitting. Next, the projected point group S' (S' = (s_1, s_2, ..., s_Nh)) on the local patch region is acquired by projecting the Nh points H (H = (h_1, h_2, ..., h_Nh)) defined in advance on the selected standard model onto the local patch region. Next, the distance d(t_i, s_i) between each point T of the registered data, obtained by applying the same processing to the selected standard model, and the corresponding projected point S' on the local patch region is obtained, and the average of these distances is obtained as the global 3D face feature amount distb (see Equation 12). As long as the alignment by SRT fitting is approximately exact, the same corresponding points S' are obtained for each person.

[0138] <2-4>

The global 3D face feature amount is given, for each local patch region, by taking the average of the distances between mutually corresponding points of the local patch regions to be compared (measurement data and registered data).

[0139] More specifically, first, the standard model and the local patch region are aligned by SRT fitting. Next, RT registration aligns the local patch regions to be compared with each other with higher accuracy. Next, the distances between mutually corresponding points of the measurement point group S' (S' = (s_1, s_2, ..., s_Nh)) and the registered point group T' (T' = (t_1, t_2, ..., t_Nh)) used in this RT registration are obtained, and the average of these distances d(s_i, t_i) is obtained as the global 3D face feature amount distb (Equation 16):

    distb = (1/Nh) * SUM_{i=1..Nh} d(s_i, t_i)    ... (16)
Figure imgf000040_0001

[0140] <2-5>

The global 3D face feature amount is given, for each local patch region, by taking the average of the distances between mutually corresponding points of the local patch regions to be compared.

[0141] More specifically, first, the standard model and the local patch region are aligned by SRT fitting. Next, RT registration aligns the local patch regions to be compared with each other with higher accuracy. Then, the correspondences between the registered point group T = {t_j | t_j in N_t} of N_t points after alignment and the measurement point group S = {s_i | s_i in N_s} of N_s points, a number different from N_t, are recomputed. The distance from each point s of the point group S to the point group T is d(s, T) = min_j d(s, t_j) = d(s, m), where min_j d(s, t_j) means taking the minimum of d(s, t_j) over j = 1, ..., N_t. Writing m for the point corresponding to s, the set of corresponding points for S is M = C(S, T), where C is the function that finds the nearest neighbour; that is, for each point s of the measurement point group S, the nearest point among the points t of the registered point group T is found, giving the corresponding point set M. Next, the distance d(s, m) between each point m of this corresponding point set M and the corresponding point s of the point group S is obtained, and the average of these distances is obtained as the global 3D face feature amount distb (Equation 17):

    distb = (1/Ns) * SUM_{i=1..Ns} d(s_i, m_i)    ... (17)
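A brute-force sketch of the nearest-neighbour correspondence and mean distance used here (Equation 17); a practical implementation would use a k-d tree instead of the full Ns x Nt distance matrix, and the function name and NumPy usage are assumptions of this example.

    import numpy as np

    def nearest_neighbour_mean_distance(S, T):
        """Mean distance between point set S (Ns x 3) and point set T
        (Nt x 3) when each point s in S is matched to its nearest
        neighbour in T."""
        dists = np.linalg.norm(S[:, None, :] - T[None, :, :], axis=2)
        nearest = dists.min(axis=1)        # d(s, T) for every s in S
        return nearest.mean()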
Calculated as b (Equation 17).
Figure imgf000041_0001

[0142] <2-6>

The global 3D face feature amount is given, for each local patch region, by taking the variance of the distances between mutually corresponding points of the local patch regions to be compared.

[0143] More specifically, first, the standard model and the local patch region are aligned by SRT fitting. Next, RT registration aligns the local patch regions to be compared with each other with higher accuracy. Then, the correspondences between the registered point group T = {t_j | t_j in N_t} of N_t points after alignment and the measurement point group S = {s_i | s_i in N_s} of N_s points, a number different from N_t, are recomputed, and the variance Sigma over the corresponding points s of the point group S and t of the point group T is obtained as the global 3D face feature amount distb (Equation 18; the printed expression is reproduced only as an image in the source).
Figure imgf000041_0002

[0144] In these calculation methods based on the shape of the whole face, the comparison of the global region shape information may also be performed using a dense data set within the local patch regions and a coarse data set in the other regions.

[0145] <3> Calculation based on 3D feature points of the face

The global 3D face feature amount is calculated from lines (feature extraction lines) defined on the local patch regions. More specifically, a line (feature extraction line) is defined on a predetermined local patch region set in advance. The feature extraction line is defined, for example, from a plurality of feature points in the 3D face part shape data. To raise the authentication accuracy, the feature extraction line is desirably defined where the features of the 3D shape of the face appear clearly; for example, a line that includes the undulations of the face, such as a line crossing the nose. Next, the feature extraction line defined on the local patch region is projected onto the dense 3D face shape data, the points of the dense 3D face shape data corresponding to the points on the feature extraction line are obtained, and this point group is taken as the global 3D face feature amount. When no point of the dense 3D face shape data corresponds to a point on the feature extraction line, the point is obtained by interpolation from a plurality of neighbouring points of the dense 3D face shape data.

[0146] To raise the authentication accuracy further, the feature extraction line may be extended from the local patch region to outside the local patch region. A feature extraction line may also be defined on each of a plurality of local patch regions, the point groups on the dense 3D face shape data being obtained from each feature extraction line and taken together as the global 3D face feature amount. The points on a feature extraction line may be equally or unequally spaced.

[0147] The similarity calculation unit 39, like the similarity calculation unit 20 shown in FIG. 3, evaluates similarity by calculating a similarity between the feature amounts (comparison feature amounts) of the comparison target person registered in advance and the local region shape information and global region shape information calculated above for the person to be authenticated, HM. The similarity is calculated separately for the local region shape information and the global region shape information, as the local information similarity Dsl and the global information similarity Dsb; each similarity can be obtained by summing the Euclidean distances between mutually corresponding vectors (3D feature vectors ds) (see Equation 10).

[0148] The registered data storage unit 40 stores in advance, like the registered data storage unit 21 shown in FIG. 3, the information of the feature amounts (comparison face feature amounts) of the comparison target persons that the similarity calculation unit 39 uses to compute the local information similarity Dsl and the global information similarity Dsb.

[0149] The comprehensive judgment unit 41 weights the local information similarity Dsl and the global information similarity Dsb by W and 1-W respectively, obtains their sum as the multiple similarity Re (Re = W * Dsl + (1-W) * Dsb), and makes the authentication decision from this multiple similarity Re. As described above, the authentication decision may be face verification or face identification.
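A minimal sketch of this score fusion and decision step. The weight w and the acceptance threshold are left open by the text, so the values below, the distance-style interpretation (smaller combined value means more alike) and the function name are assumptions of this example.

    def multiple_similarity(d_local, d_global, w=0.5, threshold=1.0):
        """Combine the local-information similarity Dsl and the global-
        information similarity Dsb into the multiple similarity
        Re = w * Dsl + (1 - w) * Dsb and make a verification decision."""
        re = w * d_local + (1.0 - w) * d_global
        # Similarities here are distance-like, so accept when Re is small.
        return re, (re <= threshold)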

[0150] The comprehensive judgment unit 41 may also first make a decision based on the local information similarity Dsl, judge the person to be someone else when the similarity difference in that decision is equal to or larger than a threshold, and make a decision based on the global information similarity Dsb only when the similarity difference is smaller than the threshold.

[0151] In FIG. 17, in face authentication using the global region shape information, first, face images of the person to be authenticated, HM, are acquired by photographing with the cameras CA1 and CA2 (step S31). Next, the two face images obtained by this photographing are input to the controller 30 (image input unit 31) (step S32). Next, the face region detection unit 32 detects a face region image from each input face image (step S33). From the detected face region images, the face part detection unit 33 detects the feature parts of the face, i.e. the coordinates of the feature points (step S34). Then, the face part 3D calculation unit 34 calculates the three-dimensional coordinates of each feature part (3D face part shape data) from the coordinates of the feature parts (feature point coordinates) detected by the face part detection unit 33 (step S35).

[0152] Meanwhile, the face region 3D calculation unit 35 calculates dense 3D face shape data, a dense set of 3D points, from the face region images (stereo images) detected by the face region detection unit 32 (step S36). Next, the 3D local region extraction unit 36 calculates three-dimensional local regions (local patch regions) from the dense 3D face shape data calculated by the face region 3D calculation unit 35 and the 3D face part shape data calculated by the face part 3D calculation unit 34 in step S35 (step S37). Next, the local region information calculation unit 37 calculates the local region information, in this embodiment the local 3D face feature amounts, from the information of each extracted local patch region alone (step S38). Next, the global region information calculation unit 38 calculates the global region shape information, in this embodiment the global 3D face feature amount, from the information of the extracted local patch regions (step S39). Next, the similarity calculation unit 39 evaluates the similarity between the feature amounts (comparison feature amounts) of the comparison target person registered in advance and the local region shape information and global region shape information calculated in steps S38 and S39 (step S40). Then, based on the multiple similarity Re, the comprehensive judgment unit 41 makes the face verification or face identification decision (step S41).

[0153] When the degree of shape agreement is compared for each local region on its own by a technique such as registration, only the shapes of the local regions are compared. If the shape-matching accuracy within each local region is high, the error therefore stays small even when the relative positional relationship between the local regions differs greatly; the error for other people also becomes small, and the authentication accuracy drops. When, instead, a plurality of local regions are treated as one global region and the degree of shape agreement is compared by a technique such as registration, information on the relative positional relationship between the local regions is included in addition to the shape comparison of each local region, so an improvement in authentication accuracy can be expected. Three-dimensional face authentication by the ICP algorithm, as in, for example, JP 2007-164670 A, is an effective technique from this point of view, but the ICP algorithm is in practice problematic in terms of processing time and of turning its result into a feature amount. In the present embodiment, the shape information of the global region of the face is divided into local regions and separated into global region shape information and local region shape information, so the amount of data is reduced and the processing time is shortened; and because the global region shape information is also used, authentication can be performed with higher accuracy.

[0154] The present embodiment can also take the following forms.

[0155] (A) The region set on the local patch extraction plane T need not be rectangular like the rectangular region S; any shape of partial region on the local patch extraction plane T may be used. The shape of a feature part likewise need not be rectangular and may be arbitrary.

[0156] (B) The method of determining the local patch region from the rectangular region S is not limited to the method described above, in which the 3D points whose perpendiculars dropped onto the local patch extraction plane T have their feet inside the rectangular region S are taken; various methods can be used. For example, lines may be dropped from the 3D points onto the plane T at a predetermined angle rather than perpendicularly. As another example, virtual lines, for example radial lines, may be drawn from the rectangular region S in a predetermined direction, and the range of the 3D shape intersected (contacted) by these lines may be taken as the local patch region.

[0157] (C) The authentication system 1 need not be divided into the controller 10 and the cameras CA1 and CA2 as shown in FIG. 1. For example, the cameras may be built directly into the controller 10. In that case, however, the cameras are built in so arranged that they can photograph the person to be authenticated, HM, from mutually different angles.

[0158] This specification discloses various techniques as described above, of which the main ones are summarized below.

[0159] An authentication system according to one aspect comprises: a local region determination unit that determines a plurality of three-dimensional local regions, which are local regions of the person to be authenticated; a three-dimensional feature amount calculation unit that calculates, from the local three-dimensional shape information of the three-dimensional local regions determined by the local region determination unit, local region shape information concerning the shape of each three-dimensional local region, namely a three-dimensional face feature amount that is a three-dimensional feature amount of the face; and a feature amount comparison unit that compares the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit with a comparison face feature amount prepared in advance, so as to perform an authentication operation on the person to be authenticated.

[0160] In another aspect, the authentication system further comprises a three-dimensional shape acquisition unit that acquires information on the whole three-dimensional shape, i.e. the overall three-dimensional shape of the face of the person to be authenticated, and the local region determination unit determines, from the whole three-dimensional shape information acquired by the three-dimensional shape acquisition unit, a plurality of three-dimensional local regions that are local regions of that whole three-dimensional shape.

[0161] With the above configuration, the local region determination unit determines a plurality of three-dimensional local regions that are local regions of the person to be authenticated. These regions are obtained, for example, by the three-dimensional shape acquisition unit acquiring information on the whole three-dimensional shape of the face of the person to be authenticated, and by the local region determination unit determining the plurality of local regions of that whole three-dimensional shape from the acquired information. The three-dimensional feature amount calculation unit calculates, from the local three-dimensional shape information of the determined regions, the local region shape information concerning the shape of each region, namely the three-dimensional face feature amount of the face. The feature amount comparison unit then compares the calculated three-dimensional face feature amount with the comparison face feature amount prepared in advance in order to perform the authentication operation on the person to be authenticated.

[0162] According to this authentication system, the authentication operation is therefore based not on the information of the whole three-dimensional shape of the face used as it is, but on the local three-dimensional shape information of a plurality of three-dimensional local regions selected from the whole three-dimensional shape. Even if part of the face is occluded, the occluded part need not be used and authentication can be performed with the information of the other local regions, so the loss of authentication accuracy is reduced. Moreover, since the large volume of whole-shape data need not be handled as it is and only the partial three-dimensional shape data of the local regions is handled, the processing time is shortened and the authentication speed can be improved.

[0163] In another aspect of the authentication system, the three-dimensional shape acquisition unit includes a two-dimensional image acquisition unit that acquires a two-dimensional image of the face, and the three-dimensional local regions are determined based on the result of a feature part extraction unit that extracts two-dimensional feature parts, i.e. characteristic parts of the face, from the two-dimensional image acquired by the two-dimensional image acquisition unit.

[0164] In another aspect of the authentication system, the three-dimensional shape acquisition unit further includes a three-dimensional coordinate calculation unit that calculates the three-dimensional coordinates of the feature parts extracted by the feature part extraction unit, and the local region determination unit determines the three-dimensional local regions based on the three-dimensional coordinates of the feature parts calculated by the three-dimensional coordinate calculation unit.

[0165] With the above configuration, the three-dimensional shape acquisition unit includes a two-dimensional image acquisition unit that acquires a two-dimensional image of the face, and the three-dimensional local regions are determined based on the result of the feature part extraction unit that extracts the characteristic parts of the face from that two-dimensional image. For example, the feature part extraction unit extracts the characteristic parts of the face from the acquired two-dimensional image, the three-dimensional coordinate calculation unit calculates the three-dimensional coordinates of the extracted feature parts, and the local region determination unit determines the three-dimensional local regions based on those three-dimensional coordinates.

[0166] According to this authentication system, the determination of the three-dimensional local regions can therefore be associated with the information of the two-dimensional feature parts, and high-accuracy authentication using the feature part information together with the information of the three-dimensional local regions becomes possible.

[0167] In another aspect of the authentication system, the feature part extraction unit further includes a two-dimensional local region extraction unit that extracts local regions on the two-dimensional image from the extracted two-dimensional feature parts, and the three-dimensional local region determination unit determines the three-dimensional local regions based on the two-dimensional local regions calculated by the two-dimensional local region extraction unit.

[0168] In another aspect of the authentication system, the local region determination unit computes and extracts, as the three-dimensional local regions, only the regions corresponding to the two-dimensional local regions.

[0169] In another aspect of the authentication system, the local region determination unit sets a partial region of predetermined shape in a plane determined from the three-dimensional coordinates, and determines the region of the whole three-dimensional shape corresponding to that partial region as the three-dimensional local region.

[0170] With the above configuration, the local region determination unit sets a partial region of predetermined shape in a plane determined from the three-dimensional coordinates, and the region of the whole three-dimensional shape corresponding to that partial region is determined as the three-dimensional local region.

[0171] According to this authentication system, the three-dimensional local regions can therefore be determined easily, by a simple method, from the three-dimensional coordinates of the feature parts.

[0172] In an authentication system according to another aspect, the whole 3D shape information is face shape data consisting of a plurality of 3D points, and the local region determination unit determines, as the 3D local region, the region made up of the 3D points whose perpendiculars, dropped virtually onto the plane, fall inside the partial region.

[0173] According to this configuration, the whole 3D shape information is face shape data consisting of a plurality of 3D points, and the local region determination unit determines, as the 3D local region, the region made up of the 3D points whose perpendiculars onto the plane fall inside the partial region.

[0174] Therefore, with this authentication system, the 3D local region corresponding to the partial region can be determined easily by a simple method.
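As an editor's illustration of the perpendicular projection test in [0172] to [0174], the following Python/NumPy sketch keeps the measured 3D points whose feet of perpendicular onto the plane fall inside a rectangular partial region. The plane origin, the two orthonormal in-plane axes and the region half-extents are hypothetical inputs (in practice they would be derived from the 3D coordinates of the feature parts); the function and variable names are illustrative, not taken from the specification.

    import numpy as np

    def local_region_by_projection(points, origin, u_axis, v_axis, half_u, half_v):
        # Select the 3D points whose perpendicular foot onto the plane spanned by
        # (u_axis, v_axis) through `origin` lies inside a rectangle of half-extents
        # (half_u, half_v); the kept subset is the 3D local region.
        rel = points - origin            # vectors from the plane origin to each point
        u = rel @ u_axis                 # in-plane coordinate along u_axis
        v = rel @ v_axis                 # in-plane coordinate along v_axis
        inside = (np.abs(u) <= half_u) & (np.abs(v) <= half_v)
        return points[inside]

    # Hypothetical usage with a random stand-in for measured face shape data.
    face_points = np.random.rand(1000, 3)
    nose_region = local_region_by_projection(
        face_points,
        origin=np.array([0.5, 0.5, 0.5]),
        u_axis=np.array([1.0, 0.0, 0.0]),
        v_axis=np.array([0.0, 1.0, 0.0]),
        half_u=0.1, half_v=0.1)

Because the foot of the perpendicular is fully described by the point's two in-plane coordinates, the inclusion test reduces to two dot products and two comparisons per point.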

[0175] In an authentication system according to another aspect, the local region determination unit compares the whole 3D shape with a reference 3D partial model shape prepared in advance, and determines, as the 3D local region, the part of the whole 3D shape whose shape is most similar to the reference 3D partial model shape.

[0176] According to this configuration, the local region determination unit compares the whole 3D shape with the reference 3D partial model shape prepared in advance, and the part of the whole 3D shape most similar to the reference 3D partial model shape is determined as the 3D local region.

[0177] Therefore, with this authentication system, no configuration or operation for acquiring a 2D image or extracting feature parts (2D facial feature values) from it is required, and the 3D local regions of the whole 3D shape can be determined easily.

[0178] In an authentication system according to another aspect, the local region determination unit includes a same-space conversion unit that converts the whole 3D shape and local region information defined on a reference 3D partial model shape prepared in advance into the same space, and determines the 3D local region by comparing, in the space converted by the same-space conversion unit, the inclusion relation between the whole 3D shape and the reference 3D partial model shape.

[0179] In an authentication system according to another aspect, the 3D local region determination unit determines the 3D local region by comparing the inclusion relation between a 3D surface on the reference 3D model and a 3D surface of the whole 3D shape.

[0180] In an authentication system according to another aspect, the 3D local region determination unit determines the 3D local region by comparing the inclusion relation between a 3D surface on the reference 3D model and the 3D coordinate points of the whole 3D shape.

[0181] In an authentication system according to another aspect, the 3D local region determination unit determines the 3D local region by comparing the inclusion relation between the 3D coordinate points on the reference 3D model and a 3D surface of the whole 3D shape.

[0182] In an authentication system according to another aspect, the 3D local regions determined by the local region determination unit are held as dense data, and the regions determined to lie outside the 3D local regions are held as sparse data.

[0183] According to this configuration, the local region determination unit includes a same-space conversion unit that converts the whole 3D shape and the local region information defined on the reference 3D partial model shape prepared in advance into the same space; the inclusion relation between the whole 3D shape and the reference 3D partial model shape in the converted space is compared, and the 3D local regions are determined according to the comparison result. Therefore, with this authentication system, the 3D local regions of the whole 3D shape can be determined easily.
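A minimal sketch, in Python with NumPy and SciPy, of how the same-space comparison of [0178] and [0183] might be approximated: the reference partial model is moved into the coordinate space of the measured shape with a rigid transform (assumed to have been estimated by a separate registration step), and the measured points lying within a tolerance of the aligned model are treated as being included in the model region. The nearest-neighbour distance test is only a crude stand-in for the surface-to-surface, surface-to-point and point-to-surface inclusion tests of [0179] to [0181], and every name here is an assumption of this sketch.

    import numpy as np
    from scipy.spatial import cKDTree

    def local_region_by_model(points, model_points, R, t, tol=2.0):
        # Bring the reference partial-model points into the measured shape's space
        # (same-space conversion), then keep the measured points whose distance to
        # the aligned model is at most `tol`: an approximate inclusion test.
        aligned = model_points @ R.T + t
        dist, _ = cKDTree(aligned).query(points)   # distance to nearest model point
        return points[dist <= tol]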

[0184] In an authentication system according to another aspect, the 3D feature value calculation unit calculates, as the local region shape information, local 3D shape information computed from the 3D local regions.

[0185] In an authentication system according to another aspect, the 3D feature value calculation unit calculates, as the local region shape information, the local 3D shape information of the 3D local regions converted into predetermined curved-surface information.

[0186] In an authentication system according to another aspect, the 3D feature value calculation unit calculates, as the local region shape information, the local 3D shape information of the 3D local regions converted into a vector of the distances between definition points defined on a standard model and the corresponding points of the 3D local regions.

[0187] According to these configurations, the 3D feature value calculation unit calculates, as the local region shape information, local 3D shape information computed from the 3D local regions. For example, the local 3D shape information of the 3D local regions converted into predetermined curved-surface information is calculated as the local region shape information. As another example, the local 3D shape information converted into a vector of the distances between definition points defined on a standard model and the corresponding points of the 3D local regions is calculated as the local region shape information.

[0188] Therefore, with this authentication system, the 3D shape information is not used as it is; instead, local 3D shape information is computed from the 3D local regions, for example by converting it into curved-surface information (such as curvature). This allows the dimensionality to be reduced, so the processing becomes faster.
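Two ways the local 3D shape information of [0185] to [0188] might be condensed, sketched in Python/NumPy under simplifying assumptions: the first fits a quadratic surface to a local region expressed in a local frame (origin near the region centre, z along depth) and returns its Gaussian and mean curvature at the origin as curved-surface information; the second stacks the distance from each definition point of a hypothetical standard model to its nearest point in the region into a vector. Neither function reproduces the patent's concrete formulas; both are illustrative.

    import numpy as np

    def curvature_feature(region_points):
        # Least-squares fit of z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f,
        # then Gaussian (K) and mean (H) curvature of the fitted patch at (0, 0).
        x, y, z = region_points[:, 0], region_points[:, 1], region_points[:, 2]
        A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
        denom = 1.0 + d * d + e * e
        K = (4.0 * a * c - b * b) / denom ** 2
        H = (a * (1.0 + e * e) - b * d * e + c * (1.0 + d * d)) / denom ** 1.5
        return np.array([K, H])

    def distance_vector_feature(region_points, model_definition_points):
        # Distance from every standard-model definition point to its nearest
        # point in the 3D local region, stacked into one feature vector.
        diffs = model_definition_points[:, None, :] - region_points[None, :, :]
        return np.linalg.norm(diffs, axis=2).min(axis=1)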

[0189] In an authentication system according to another aspect, the 3D feature value calculation unit calculates, as the 3D facial feature value, a feature value that also includes information on the positional relationship between the 3D local regions.

[0190] According to this configuration, the 3D feature value calculation unit calculates a 3D facial feature value that also includes information on the relative positional relationship of the 3D local regions.

[0191] Therefore, with this authentication system, the 3D facial feature value can represent not only the individual features of each 3D local region but also features spanning the whole face (the global shape information of the face can be handled), enabling higher-accuracy authentication.

[0192] In an authentication system according to another aspect, the local region determination unit determines the 3D local regions of the whole 3D shape so that the plurality of 3D local regions are arranged at left-right symmetric positions on the face.

[0193] According to this configuration, the local region determination unit determines the 3D local regions of the whole 3D shape so that the plurality of 3D local regions are arranged at left-right symmetric positions on the face.

[0194] Therefore, with this authentication system, the (positions of the) 3D local regions of the whole 3D shape can be determined efficiently, which shortens the processing time and makes the data easier to handle.

[0195] In an authentication system according to another aspect, the local region determination unit determines the 3D local regions of the whole 3D shape so that the plurality of 3D local regions include at least the nose and cheek parts of the face.

[0196] According to this configuration, the local region determination unit determines the 3D local regions of the whole 3D shape so that the plurality of 3D local regions include at least the nose and cheek parts of the face.

[0197] Therefore, with this authentication system, the 3D local regions can be set so as to avoid parts that may be hidden (for example, a forehead covered by hair) or that are difficult to measure (for example, the mouth when the subject has a mustache), so the 3D facial feature value can be calculated accurately from these 3D local regions, which in turn enables high-accuracy authentication.

[0198] In an authentication system according to another aspect, a 2D feature value calculation unit that calculates a 2D facial feature value, which is a 2D feature value of the face, from the information on the feature parts extracted by the feature part extraction unit is further provided, and the feature value comparison unit compares a comprehensive facial feature value, which combines the 2D facial feature value calculated by the 2D feature value calculation unit and the 3D facial feature value calculated by the 3D feature value calculation unit, with the comparison facial feature value.

[0199] According to this configuration, the 2D feature value calculation unit calculates a 2D facial feature value of the face from the information on the feature parts extracted by the feature part extraction unit, and the feature value comparison unit compares the comprehensive facial feature value, which combines the 2D facial feature value calculated by the 2D feature value calculation unit and the 3D facial feature value calculated by the 3D feature value calculation unit, with the comparison facial feature value.

[0200] Therefore, with this authentication system, more accurate authentication using both the 2D and the 3D facial feature values becomes possible.

[0201] In an authentication system according to another aspect, the 3D feature value calculation unit calculates the 3D facial feature value from local 3D shape information of 3D local regions that include at least parts of the face other than the feature parts.

[0202] According to this configuration, the 3D feature value calculation unit calculates the 3D facial feature value from local 3D shape information of 3D local regions that include at least parts other than the feature parts of the face.

[0203] Therefore, with this authentication system, when authentication using both the 2D and the 3D facial feature values (multiple authentication) is performed, the features of parts from which features are difficult to extract as 2D facial feature values can be included in the 3D facial feature value. Feature values that cannot be covered by the 2D facial feature value can thus be covered by the 3D facial feature value, which in turn enables more accurate authentication.

[0204] In an authentication system according to another aspect, the information on the feature parts used to calculate the 2D facial feature value is texture information, and a correction unit is further provided that applies to the texture information a pose variation correction, which is a correction relating to the pose of the face, and a light source variation correction, which is a correction relating to the direction of the light source with respect to the face.

[0205] According to this configuration, the information on the feature parts used to calculate the 2D facial feature value is texture information, and the correction unit applies the pose variation correction and the light source variation correction to that texture information.

[0206] Therefore, with this authentication system, an appropriate 2D facial feature value is obtained from texture information to which the pose variation correction and the light source variation correction have been applied, which in turn enables more accurate authentication.
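A very rough sketch, in Python/NumPy, of the kind of normalization [0204] to [0206] call for, under the assumption that the texture samples are intensities attached to the measured 3D points and that the head rotation R has been estimated elsewhere: the points are rotated to a frontal view and rendered onto a small grid (pose variation correction), and the rendered patch is z-scored (a crude light source variation correction). Real systems would use far more careful warping and illumination models; everything here is an illustrative assumption.

    import numpy as np

    def normalize_texture(points, intensities, R, out_size=64):
        # Pose correction: rotate the textured 3D points to a frontal view.
        frontal = points @ R.T
        xy = frontal[:, :2]
        xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-9)   # fit into unit square
        cols = np.clip((xy[:, 0] * (out_size - 1)).astype(int), 0, out_size - 1)
        rows = np.clip((xy[:, 1] * (out_size - 1)).astype(int), 0, out_size - 1)
        img = np.zeros((out_size, out_size))
        cnt = np.zeros((out_size, out_size))
        np.add.at(img, (rows, cols), intensities)   # accumulate texture samples
        np.add.at(cnt, (rows, cols), 1.0)
        img = np.where(cnt > 0, img / np.maximum(cnt, 1.0), 0.0)
        # Light source correction: normalize the patch to zero mean, unit variance.
        return (img - img.mean()) / (img.std() + 1e-9)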

[0207] In an authentication system according to another aspect, the 3D shape acquisition unit includes at least two imaging devices that capture 2D images of the face, and a 3D shape calculation unit that performs a high-accuracy corresponding point search on the two 2D images obtained from the imaging devices by computation based on the phase-only correlation method and calculates the whole 3D shape by 3D reconstruction.

[0208] According to this configuration, in the 3D shape acquisition unit, 2D images of the face are captured by at least two imaging devices, the 3D shape calculation unit performs a high-accuracy corresponding point search on the two 2D images obtained from the imaging devices by computation based on the phase-only correlation method, and the whole 3D shape is calculated by 3D reconstruction.

[0209] Therefore, with this authentication system, the whole 3D shape can be calculated at low cost, without an expensive 3D imaging device, and with high accuracy thanks to the phase-only correlation method.
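The corresponding point search of [0207] to [0209] relies on phase-only correlation between small patches of the two camera images. The following Python/NumPy sketch shows the core of that idea: the peak of the inverse FFT of the normalized cross-phase spectrum gives the displacement between two same-sized patches. Sub-pixel peak fitting, window functions and the subsequent stereo 3D reconstruction are omitted, and the function name is an assumption of this sketch.

    import numpy as np

    def poc_displacement(patch_a, patch_b):
        # Phase-only correlation: keep only the phase of the cross spectrum and
        # locate the correlation peak, which encodes the (row, col) displacement.
        Fa = np.fft.fft2(patch_a)
        Fb = np.fft.fft2(patch_b)
        cross = Fa * np.conj(Fb)
        cross /= np.abs(cross) + 1e-12          # normalize: phase information only
        corr = np.real(np.fft.ifft2(cross))
        rows, cols = corr.shape
        dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
        if dr > rows // 2:
            dr -= rows                          # unwrap negative row shifts
        if dc > cols // 2:
            dc -= cols                          # unwrap negative column shifts
        return (dr, dc), corr.max()             # displacement and peak height

The peak height can serve as a confidence measure for the matched pair before triangulation.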

[0210] In an authentication system according to another aspect, the 3D facial feature value calculated by the 3D feature value calculation unit is a vector quantity, and a storage unit is further provided that stores a comparison vector quantity as the comparison facial feature value corresponding to that vector quantity.

[0211] According to this configuration, the 3D facial feature value calculated by the 3D feature value calculation unit is a vector quantity, and the storage unit stores a comparison vector quantity as the comparison facial feature value corresponding to that vector quantity.

[0212] Therefore, with this authentication system, the data stored by the storage unit as the comparison facial feature value is a vector quantity rather than the measured, so-called dense 3D shape data, so the amount of stored data can be kept small (less memory is required) and the data is easy to handle.
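Because the stored comparison feature is just a vector, the comparison step of [0210] to [0212] can be as simple as a distance computation. The dictionary of enrolled vectors and the Euclidean metric below are illustrative choices of this sketch, not the patent's similarity measure.

    import numpy as np

    def best_match(query_vec, enrolled):
        # Compare one 3D face feature vector against the stored comparison
        # vectors (one per enrolled person); return the closest identity.
        best_id, best_dist = None, np.inf
        for person_id, ref_vec in enrolled.items():
            dist = np.linalg.norm(query_vec - ref_vec)
            if dist < best_dist:
                best_id, best_dist = person_id, dist
        return best_id, best_dist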

[0213] In an authentication system according to another aspect, a global 3D feature value calculation unit is further provided that calculates, based on the 3D local regions determined by the local region determination unit, a global 3D facial feature value, which is global region shape information on the shape of a 3D global region (a global region of the whole 3D shape) and a 3D feature value of the face, and the feature value comparison unit compares the global 3D facial feature value calculated by the global 3D feature value calculation unit with a comparison global facial feature value prepared in advance, in order to perform the authentication operation on the person to be authenticated.

[0214] According to this configuration, the global 3D feature value calculation unit calculates, based on the 3D local regions determined by the local region determination unit, the global 3D facial feature value, which is global region shape information on the shape of the 3D global region of the whole 3D shape and a 3D feature value of the face, and the feature value comparison unit compares the calculated global 3D facial feature value with the comparison global facial feature value prepared in advance in order to perform the authentication operation on the person to be authenticated.

[0215] In an authentication system according to another aspect, a global 3D facial feature value calculation unit is further provided that calculates, based on the information of the 3D local regions determined by the local region determination unit, a global 3D facial feature value of the face, which is global information of the whole 3D shape, and the feature value comparison unit compares the global 3D facial feature value calculated by the global 3D feature value calculation unit with a comparison global facial feature value prepared in advance, in order to perform the authentication operation on the person to be authenticated.

[0216] According to this configuration, the global 3D feature value calculation unit calculates, based on the information of the 3D local regions determined by the local region determination unit, the global 3D facial feature value of the face, which is global information of the whole 3D shape, and the feature value comparison unit compares the calculated global 3D facial feature value with the comparison global facial feature value prepared in advance in order to perform the authentication operation on the person to be authenticated.

[0217] In an authentication system according to another aspect, a global 3D facial feature value calculation unit is further provided that calculates, based on 3D feature point information defined on the 3D local regions determined by the local region determination unit, a global 3D facial feature value, which is a 3D feature value of the face and global information of the whole 3D shape.

[0218] In an authentication system according to another aspect, the global 3D feature value calculation unit extracts information on deformation parameters of a standard model calculated based on the 3D feature point information defined on the 3D local regions.

[0219] In an authentication system according to another aspect, the global 3D feature value calculation unit extracts distance information between a 3D local standard model calculated based on the 3D feature point information defined on the 3D local regions and the 3D local regions.

[0220] In an authentication system according to another aspect, the global 3D feature value calculation unit extracts distance information between the 3D local regions, calculated based on the 3D feature point information defined on the 3D local regions.

[0221] According to these configurations, the global 3D feature value calculation unit calculates, based on the 3D feature point information defined on the 3D local regions determined by the local region determination unit, a global 3D facial feature value, which is a 3D feature value of the face and global information of the whole 3D shape. For example, the global 3D feature value calculation unit extracts information on the deformation parameters of a standard model calculated based on the 3D feature point information defined on the 3D local regions. As another example, it extracts distance information between a 3D local standard model calculated based on that 3D feature point information and the 3D local regions. As yet another example, it extracts distance information between the 3D local regions, calculated based on that 3D feature point information. The feature value comparison unit then compares the global 3D facial feature value calculated by the global 3D feature value calculation unit with a comparison global facial feature value prepared in advance, in order to perform the authentication operation on the person to be authenticated.

[0222] In an authentication system according to another aspect, a global 3D feature value calculation unit is further provided that extracts, in line form, the 3D local regions determined by the local region determination unit, and calculates, based on the extracted line-shaped 3D local regions, a global 3D facial feature value as a shape vector of a 3D global region, which is a global region of the whole 3D shape.

[0223] According to this configuration, the global 3D feature value calculation unit extracts the 3D local regions determined by the local region determination unit in line form and calculates, based on the extracted line-shaped 3D local regions, the global 3D facial feature value as a shape vector of the 3D global region of the whole 3D shape. The feature value comparison unit then compares the global 3D facial feature value calculated by the global 3D feature value calculation unit with a comparison global facial feature value prepared in advance, in order to perform the authentication operation on the person to be authenticated.

[0224] Therefore, with these authentication systems, the global 3D facial feature value is used for face authentication, so the authentication accuracy can be improved further. The global region shape information can be compressed by so-called data compression techniques, so the amount of data can be reduced. Since the global 3D facial feature value is calculated based on the 3D local regions, global information specific to the 3D local regions can be calculated. The data of the 3D local regions is also smaller in amount than the 3D shape data of the whole face. In particular, when the 3D local regions are determined based on the 3D coordinates of the feature parts, only the global information between the feature points of the facial parts can be selected from the whole 3D shape data.

[0225] In an authentication system according to another aspect, the global 3D feature value calculation unit calculates centroid information of the 3D local regions as the global 3D facial feature value.

[0226] Therefore, with this authentication system, centroid information, which is difficult to calculate from the whole 3D shape data, is calculated, and a global 3D facial feature value specific to the 3D local regions can be obtained.

[0227] In an authentication system according to another aspect, the global 3D feature value calculation unit calculates normal information of the 3D local regions as the global 3D facial feature value.

[0228] Therefore, with this authentication system, normal information, which is difficult to calculate from the whole 3D shape data, is calculated, and a global 3D facial feature value specific to the 3D local regions can be obtained.
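The centroid and normal cues of [0225] to [0228] are cheap to obtain once a 3D local region has been isolated. A possible Python/NumPy sketch, with the normal estimated as the direction of least spread of the region's points (a common plane-fitting heuristic, not necessarily the patent's definition):

    import numpy as np

    def region_centroid_and_normal(region_points):
        # Centroid of the 3D local region and an estimated normal, taken as the
        # eigenvector of the point covariance with the smallest eigenvalue.
        centroid = region_points.mean(axis=0)
        cov = np.cov((region_points - centroid).T)
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        normal = eigvecs[:, 0]                   # direction of least spread
        return centroid, normal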

[0229] In an authentication system according to another aspect, the feature value comparison unit compares the 3D facial feature value calculated by the 3D feature value calculation unit with the comparison facial feature value prepared in advance according to the result of comparing the global 3D facial feature value with the comparison global facial feature value.

[0230] Therefore, with this authentication system, depending on the result of comparing the global 3D facial feature value with the comparison global facial feature value, for example when the comparison result indicates that the two differ, the comparison between the 3D facial feature value and the comparison facial feature value can be omitted, which shortens the authentication processing time and enables faster authentication.

[0231] In an authentication system according to another aspect, the feature value comparison unit calculates an integrated comparison result that combines a global comparison result, obtained by comparing the global 3D facial feature value calculated by the global 3D feature value calculation unit with a comparison global facial feature value prepared in advance, and a local comparison result, obtained by comparing the 3D facial feature value calculated by the 3D feature value calculation unit with a comparison facial feature value prepared in advance, in order to perform the authentication operation on the person to be authenticated.

[0232] Therefore, with this authentication system, authentication is based on an integrated comparison result combining the global comparison result and the local comparison result, so the two results can compensate for each other, which in turn enables higher-accuracy authentication.
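One way the gating of [0229] and [0230] and the integration of [0231] and [0232] could be combined is sketched below in Python/NumPy: a cheap global comparison either rejects early or feeds a weighted fusion with the local comparison. The thresholds, the Euclidean distances and the linear weighting are assumptions of this sketch, not values from the specification.

    import numpy as np

    def authenticate(global_feat, local_feat, ref_global, ref_local,
                     global_gate=1.0, accept_threshold=0.5, w=0.5):
        # Stage 1: global comparison; if it clearly fails, skip the local step.
        d_global = np.linalg.norm(global_feat - ref_global)
        if d_global > global_gate:
            return False, d_global
        # Stage 2: local comparison, fused with the global result into one score.
        d_local = np.linalg.norm(local_feat - ref_local)
        score = w * d_global + (1.0 - w) * d_local
        return score <= accept_threshold, score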

[0233] An authentication method according to another aspect includes: a first step of acquiring information on a whole 3D shape, which is the overall 3D shape of the face of a person to be authenticated; a second step of determining, from the whole 3D shape information, a plurality of 3D local regions that are local regions of the whole 3D shape; a third step of calculating, from the local 3D shape information of the 3D local regions, a 3D facial feature value, which is local region shape information on the shape of each 3D local region and a 3D feature value of the face; and a fourth step of comparing the 3D facial feature value with a comparison facial feature value prepared in advance in order to perform an authentication operation on the person to be authenticated.

[0234] According to this configuration, in the first step, information on the whole 3D shape, which is the overall 3D shape of the face of the person to be authenticated, is acquired. In the second step, a plurality of 3D local regions, which are local regions of the whole 3D shape, are determined from the whole 3D shape information. In the third step, a 3D facial feature value, which is local region shape information on the shape of each 3D local region and a 3D feature value of the face, is calculated from the local 3D shape information of the 3D local regions. In the fourth step, the 3D facial feature value is compared with the comparison facial feature value prepared in advance in order to perform the authentication operation on the person to be authenticated.

[0235] Therefore, according to this authentication method, a plurality of 3D local regions are determined from the whole 3D shape of the face of the person to be authenticated, a 3D facial feature value is calculated from the local 3D shape information of each of these 3D local regions, and the authentication operation is performed by comparing this 3D facial feature value with the comparison facial feature value. In other words, rather than using the information of the whole 3D shape of the face as it is, several local regions (3D local regions) are extracted from the 3D shape of the whole face, and authentication is based on the extracted 3D local regions. Even if part of the face is occluded, authentication can be performed using the information of local regions other than the occluded part, without necessarily relying on that part, so the degradation of the authentication accuracy can be reduced. In addition, since the whole 3D shape (3D data), which involves a large amount of data, does not have to be handled as it is, that is, only the partial 3D shape data of the local regions needs to be handled, the authentication speed can be improved without time-consuming processing.
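Read as a pipeline, the four steps of [0233] to [0235] wire together as below; the callables are placeholders for whichever shape acquisition, region determination, feature calculation and comparison procedures an implementation chooses, so this is only an illustration of the data flow.

    def authenticate_face(acquire_shape, determine_regions, compute_feature,
                          compare, reference_feature):
        whole_shape = acquire_shape()                     # step 1: whole 3D shape
        regions = determine_regions(whole_shape)          # step 2: 3D local regions
        face_feature = compute_feature(regions)           # step 3: 3D facial feature value
        return compare(face_feature, reference_feature)   # step 4: comparison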

[0236] In an authentication method according to another aspect, the first step includes a fifth step of acquiring a 2D image of the face; the method further includes a sixth step of extracting feature parts, which are characteristic parts of the face, from the 2D image, and a seventh step of calculating the 3D coordinates of the feature parts; and the second step is a step of determining the 3D local regions based on the 3D coordinates of the feature parts.

[0237] According to this, the first step includes the fifth step of acquiring a 2D image of the face; in the sixth step, feature parts that are characteristic parts of the face are extracted from the 2D image; in the seventh step, the 3D coordinates of the feature parts are calculated; and in the second step, the 3D local regions are determined based on the 3D coordinates of the feature parts.

[0238] Therefore, according to this authentication method, the 3D local regions can be associated with 2D feature part information when they are determined, and high-accuracy authentication using the feature part information together with the 3D local region information becomes possible.

[0239] An authentication method according to another aspect further includes an eighth step of calculating, from the information on the feature parts, a 2D facial feature value that is a 2D feature value of the face, and the fourth step is a step of comparing a comprehensive facial feature value, which combines the 2D facial feature value and the 3D facial feature value, with the comparison facial feature value.

[0240] According to this configuration, in the eighth step, a 2D facial feature value of the face is calculated from the information on the feature parts, and in the fourth step, a comprehensive facial feature value combining the 2D facial feature value and the 3D facial feature value is compared with the comparison facial feature value.

[0241] Therefore, according to this authentication method, more accurate authentication using both the 2D and the 3D facial feature values becomes possible.

[0242] An authentication method according to another aspect further includes a ninth step of calculating, based on the 3D local regions determined in the second step, a global 3D facial feature value, which is global region shape information on the shape of a 3D global region (a global region of the whole 3D shape) and a 3D feature value of the face, and the fourth step compares the global 3D facial feature value calculated in the ninth step with a comparison global facial feature value prepared in advance in order to perform the authentication operation on the person to be authenticated.

[0243] According to this configuration, in the ninth step, the global 3D facial feature value, which is global region shape information on the shape of the 3D global region of the whole 3D shape and a 3D feature value of the face, is calculated based on the 3D local regions determined in the second step, and in the fourth step, the global 3D facial feature value calculated in the ninth step is compared with the comparison global facial feature value prepared in advance in order to perform the authentication operation on the person to be authenticated.

[0244] Therefore, according to this authentication method, the global 3D facial feature value is used for face authentication, so the authentication accuracy can be improved further. The global region shape information can be compressed by so-called data compression techniques, so the amount of data can be reduced. Since the global 3D facial feature value is calculated based on the 3D local regions, global information specific to the 3D local regions can be calculated. The data of the 3D local regions is also smaller in amount than the 3D shape data of the whole face. In particular, when the 3D local regions are determined based on the 3D coordinates of the feature parts, only the global information between the feature points of the facial parts can be selected from the whole 3D shape data.

[0245] In order to express the present invention, the invention has been described above appropriately and sufficiently through embodiments with reference to the drawings; however, it should be recognized that a person skilled in the art can easily modify and/or improve the embodiments described above. Therefore, unless a modification or improvement made by a person skilled in the art is at a level that departs from the scope of the rights set forth in the claims, that modification or improvement is construed as being encompassed within the scope of those claims.

Industrial Applicability

[0246] According to the present invention, an authentication system and an authentication method for performing face authentication can be provided.

Claims

[1] An authentication system comprising: a local region determination unit that determines a plurality of 3D local regions, which are local regions of a person to be authenticated; a 3D feature value calculation unit that calculates, from local 3D shape information of the 3D local regions determined by the local region determination unit, a 3D facial feature value, which is local region shape information on the shape of each 3D local region and a 3D feature value of the face; and a feature value comparison unit that compares the 3D facial feature value calculated by the 3D feature value calculation unit with a comparison facial feature value prepared in advance, in order to perform an authentication operation on the person to be authenticated.

[2] The authentication system according to claim 1, further comprising a 3D shape acquisition unit that acquires information on a whole 3D shape, which is the overall 3D shape of the face of the person to be authenticated, wherein the local region determination unit determines, from the whole 3D shape information acquired by the 3D shape acquisition unit, a plurality of 3D local regions that are local regions of the whole 3D shape.

[3] The authentication system according to claim 2, wherein the 3D shape acquisition unit includes a 2D image acquisition unit that acquires a 2D image of the face, and the 3D local regions are determined based on the result of a feature part extraction unit that extracts 2D feature parts, which are characteristic parts of the face, from the 2D image acquired by the 2D image acquisition unit.

[4] The authentication system according to claim 3, wherein the 3D shape acquisition unit further includes a 3D coordinate calculation unit that calculates the 3D coordinates of the feature parts extracted by the feature part extraction unit, and the local region determination unit determines the 3D local regions based on the 3D coordinates of the feature parts calculated by the 3D coordinate calculation unit.

[5] The authentication system according to claim 3, wherein the feature part extraction unit further includes a 2D local region extraction unit that extracts a local region on the 2D image from the extracted 2D feature parts, and the 3D local region determination unit determines the 3D local regions based on the 2D local regions extracted by the 2D local region extraction unit.

[6] The authentication system according to claim 1, wherein the local region determination unit computes and extracts, as the 3D local regions, only the regions corresponding to the 2D local regions.

[7] The authentication system according to claim 4, wherein the local region determination unit sets a partial region of a predetermined shape within a plane determined from the 3D coordinates, and determines the region of the whole 3D shape corresponding to the partial region as the 3D local region.

[8] The authentication system according to claim 7, wherein the whole 3D shape information is face shape data consisting of a plurality of 3D points, and the local region determination unit determines, as the 3D local region, the region made up of the 3D points whose perpendiculars, dropped virtually onto the plane, fall inside the partial region.

[9] The authentication system according to claim 2, wherein the local region determination unit compares the whole 3D shape with a reference 3D partial model shape prepared in advance, and determines, as the 3D local region, the part of the whole 3D shape whose shape is most similar to the reference 3D partial model shape.

[10] The authentication system according to claim 2, wherein the local region determination unit includes a same-space conversion unit that converts the whole 3D shape and local region information defined on a reference 3D partial model shape prepared in advance into the same space, and determines the 3D local region by comparing the inclusion relation between the whole 3D shape and the reference 3D partial model shape in the space converted by the same-space conversion unit.

[11] The authentication system according to claim 10, wherein the 3D local region determination unit determines the 3D local region by comparing the inclusion relation between a 3D surface on the reference 3D model and a 3D surface of the whole 3D shape.
The authentication system according to claim 10, wherein the authentication system is determined. [12] 前記 3次元局所領域決定部は、前記参照用 3次元モデル上の 3次元面と前記全体 3次元形状の 3次元座標点との包含関係を比較することによって、前記 3次元局所領 域として決定することを特徴とする請求項 10に記載の認証システム。  [12] The three-dimensional local region determination unit compares the inclusion relation between a three-dimensional surface on the reference three-dimensional model and a three-dimensional coordinate point of the entire three-dimensional shape, thereby determining the three-dimensional local region. The authentication system according to claim 10, wherein the authentication system is determined as: [13] 前記 3次元局所領域決定部は、前記参照用 3次元モデル上の 3次元座標点と前記 全体 3次元形状の 3次元面との包含関係を比較することによって、前記 3次元局所領 域として決定することを特徴とする請求項 10に記載の認証システム。 [13] The three-dimensional local region determination unit compares the inclusion relation between the three-dimensional coordinate point on the reference three-dimensional model and the three-dimensional surface of the entire three-dimensional shape, thereby determining the three-dimensional local region. 11. The authentication system according to claim 10, wherein the authentication system is determined as an area. [14] 前記局所領域決定部により決定された前記 3次元局所領域を密なデータとし、前 記 3次元局所領域以外と決定された 3次元局所外領域を疎なデータとして保持する ことを特徴とする請求項 2に記載の認証システム。 [14] The three-dimensional local region determined by the local region determination unit is set as dense data, and the three-dimensional local outside region determined other than the three-dimensional local region is stored as sparse data. The authentication system according to claim 2. [15] 前記 3次元特徴量算出部は、前記 3次元局所領域から局所 3次元形状情報を算出 したものを前記局所領域形状情報として算出することを特徴とする請求項 1ないし請 求項 14のいずれ力、 1項に記載の認証システム。 15. The three-dimensional feature amount calculation unit according to claim 1, wherein the three-dimensional feature amount calculation unit calculates local three-dimensional shape information from the three-dimensional local region as the local region shape information. Eventually, the authentication system according to item 1. [16] 前記 3次元特徴量算出部は、前記 3次元局所領域における局所 3次元形状情報を 所定の曲面情報に変換したものを前記局所領域形状情報として算出することを特徴 とする請求項 15に記載の認証システム。 16. The three-dimensional feature amount calculation unit calculates the local region shape information obtained by converting local three-dimensional shape information in the three-dimensional local region into predetermined curved surface information. The described authentication system. [17] 前記 3次元特徴量算出部は、前記 3次元局所領域における局所 3次元形状情報を 標準モデル上に定義された定義点と 3次元局所領域の対応点の距離情報をベクトル に変換したものを前記局所領域形状情報として算出することを特徴とする請求項 15 に記載の認証システム。 [17] The three-dimensional feature amount calculation unit converts local three-dimensional shape information in the three-dimensional local region into a vector from distance information between a defined point defined on the standard model and a corresponding point in the three-dimensional local region. The authentication system according to claim 16, wherein the local area shape information is calculated. [18] 前記 3次元特徴量算出部は、前記 3次元顔特徴量として、各 3次元局所領域の相 対位置関係の情報も含む 3次元顔特徴量を算出することを特徴とする請求項 1ない し請求項 17のいずれ力、 1項に記載の認証システム。 18. The three-dimensional feature quantity calculating unit calculates a three-dimensional face feature quantity including information on a relative positional relationship of each three-dimensional local region as the three-dimensional face feature quantity. The power according to claim 17, the authentication system according to claim 1. 
[19] The authentication system according to any one of claims 1 to 18, wherein the local region determination unit determines the 3D local regions of the whole 3D shape so that the plurality of 3D local regions are arranged at left-right symmetric positions on the face.

[20] The authentication system according to any one of claims 1 to 19, wherein the local region determination unit determines the 3D local regions of the whole 3D shape so that the plurality of 3D local regions include at least the nose and cheek parts of the face.

[21] The authentication system according to claim 4, further comprising a 2D feature value calculation unit that calculates a 2D facial feature value, which is a 2D feature value of the face, from the information on the feature parts extracted by the feature part extraction unit, wherein the feature value comparison unit compares a comprehensive facial feature value, which combines the 2D facial feature value calculated by the 2D feature value calculation unit and the 3D facial feature value calculated by the 3D feature value calculation unit, with the comparison facial feature value.

[22] The authentication system according to claim 21, wherein the 3D feature value calculation unit calculates the 3D facial feature value from local 3D shape information of 3D local regions that include at least parts of the face other than the feature parts.

[23] The authentication system according to claim 21 or 22, wherein the information on the feature parts used to calculate the 2D facial feature value is texture information, and the system further comprises a correction unit that applies, to the texture information, a pose variation correction, which is a correction relating to the pose of the face, and a light source variation correction, which is a correction relating to the direction of the light source with respect to the face.

[24] The authentication system according to claim 2, wherein the 3D shape acquisition unit includes: at least two imaging devices that capture 2D images of the face; and a 3D shape calculation unit that performs a corresponding point search on the two 2D images obtained from the imaging devices by computation based on the phase-only correlation method, and calculates the whole 3D shape by 3D reconstruction.
[25] The authentication system according to any one of claims 1 to 24, wherein the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit is a vector quantity, and the system further comprises a storage unit that stores a comparison vector quantity as the comparison face feature amount corresponding to that vector quantity.

[26] The authentication system according to any one of claims 1 to 14, further comprising a global three-dimensional feature amount calculation unit that calculates, based on the three-dimensional local regions determined by the local region determination unit, a global three-dimensional face feature amount, which is global region shape information relating to the shape of a three-dimensional global region, i.e. a global region in the overall three-dimensional shape, and which is a three-dimensional feature amount of the face, wherein the feature amount comparison unit compares the global three-dimensional face feature amount calculated by the global three-dimensional feature amount calculation unit with a comparison global face feature amount prepared in advance in order to perform the authentication operation on the person to be authenticated.

[27] The authentication system according to any one of claims 2 to 14, further comprising a global three-dimensional face feature amount calculation unit that calculates, based on information on the three-dimensional local regions determined by the local region determination unit, a global three-dimensional face feature amount of the face, which is global information on the overall three-dimensional shape, wherein the feature amount comparison unit compares the global three-dimensional face feature amount calculated by the global three-dimensional face feature amount calculation unit with a comparison global face feature amount prepared in advance in order to perform the authentication operation on the person to be authenticated.

[28] The authentication system according to claim 27, further comprising a global three-dimensional face feature amount calculation unit that calculates, based on three-dimensional feature point information defined on the three-dimensional local regions determined by the local region determination unit, a global three-dimensional face feature amount, which is a three-dimensional feature amount of the face and global information on the overall three-dimensional shape.

[29] The authentication system according to claim 28, wherein the global three-dimensional feature amount calculation unit extracts information on deformation parameters of a standard model calculated based on the three-dimensional feature point information defined on the three-dimensional local regions.
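Claims [21] and [25] describe combining two-dimensional and three-dimensional face feature amounts into a vector quantity and comparing it against stored comparison vectors. A minimal sketch of that flow is given below, using simple concatenation and Euclidean distance as assumed placeholders for the actual feature fusion and matching rules; none of the names, weights, or sizes come from the patent.

```python
# Illustrative feature fusion and nearest-neighbour comparison.
import numpy as np

def comprehensive_feature(feat_2d: np.ndarray, feat_3d: np.ndarray,
                          w_2d: float = 1.0, w_3d: float = 1.0) -> np.ndarray:
    # Normalise each modality before concatenating so neither dominates.
    f2 = w_2d * feat_2d / (np.linalg.norm(feat_2d) + 1e-12)
    f3 = w_3d * feat_3d / (np.linalg.norm(feat_3d) + 1e-12)
    return np.concatenate([f2, f3])

def best_match(probe, enrolled):
    """Return the enrolled identity whose stored vector is closest to the probe."""
    scores = {name: float(np.linalg.norm(probe - vec)) for name, vec in enrolled.items()}
    name = min(scores, key=scores.get)
    return name, scores[name]

# Example with random vectors standing in for real 2-D / 3-D features.
rng = np.random.default_rng(2)
alice_2d, alice_3d = rng.random(40), rng.random(60)
bob_2d, bob_3d = rng.random(40), rng.random(60)
store = {"alice": comprehensive_feature(alice_2d, alice_3d),
         "bob": comprehensive_feature(bob_2d, bob_3d)}
probe = comprehensive_feature(alice_2d + rng.normal(scale=0.01, size=40),
                              alice_3d + rng.normal(scale=0.01, size=60))
print(best_match(probe, store))   # expected: ('alice', small distance)
```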
[30] The authentication system according to claim 28, wherein the global three-dimensional feature amount calculation unit extracts distance information between a three-dimensional local standard model, calculated based on the three-dimensional feature point information defined on the three-dimensional local regions, and the three-dimensional local regions.

[31] The authentication system according to claim 28, wherein the global three-dimensional feature amount calculation unit extracts distance information between the three-dimensional local regions, calculated based on the three-dimensional feature point information defined on the three-dimensional local regions.

[32] The authentication system according to claim 27, further comprising a global three-dimensional feature amount calculation unit that extracts, in line form, the three-dimensional local regions determined by the local region determination unit and calculates, based on the extracted line-shaped three-dimensional local regions, a global three-dimensional face feature amount as a shape vector of a three-dimensional global region, i.e. a global region in the overall three-dimensional shape.

[33] The authentication system according to claim 26, wherein the global three-dimensional feature amount calculation unit calculates, as the global three-dimensional face feature amount, information on the centroid of the three-dimensional local regions.

[34] The authentication system according to claim 26, wherein the global three-dimensional feature amount calculation unit calculates, as the global three-dimensional face feature amount, information on the normals of the three-dimensional local regions.

[35] The authentication system according to claim 26, wherein the feature amount comparison unit compares the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit with a comparison face feature amount prepared in advance, according to the result of the comparison between the global three-dimensional face feature amount and the comparison global face feature amount.

[36] The authentication system according to claim 26, wherein, in order to perform the authentication operation on the person to be authenticated, the feature amount comparison unit calculates a comprehensive comparison result by integrating a global comparison result, obtained by comparing the global three-dimensional face feature amount calculated by the global three-dimensional feature amount calculation unit with a comparison global face feature amount prepared in advance, and a local comparison result, obtained by comparing the three-dimensional face feature amount calculated by the three-dimensional feature amount calculation unit with a comparison face feature amount prepared in advance.
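Claims [33] and [34] use the centroid and normal of a three-dimensional local region as global feature information. The sketch below computes both for a point set in a common way: the centroid as the mean of the points, and the normal (up to sign) as the smallest-eigenvalue eigenvector of the covariance matrix, i.e. a least-squares plane fit. It is an illustrative computation, not the specific algorithm of the patent.

```python
# Centroid and best-fit plane normal of one 3-D local region.
import numpy as np

def region_centroid_and_normal(points: np.ndarray):
    """points: (N, 3) 3-D coordinates of one local region."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Eigen-decomposition of the scatter matrix; the eigenvector with the
    # smallest eigenvalue is the best-fit plane normal of the region.
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    normal = eigvecs[:, 0]            # eigh returns eigenvalues in ascending order
    return centroid, normal / np.linalg.norm(normal)

# Example: a noisy, tilted planar patch (z = 0.3x + 0.1y).
rng = np.random.default_rng(3)
uv = rng.random((200, 2))
z = 0.3 * uv[:, 0] + 0.1 * uv[:, 1] + rng.normal(scale=1e-3, size=200)
patch = np.c_[uv, z]
c, n = region_centroid_and_normal(patch)
print(c, n)   # normal close (up to sign) to (-0.3, -0.1, 1) normalised
```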
[37] An authentication method comprising: a first step of acquiring information on an overall three-dimensional shape, which is the overall three-dimensional shape of the face of a person to be authenticated; a second step of determining, from the overall three-dimensional shape information, a plurality of three-dimensional local regions that are local regions in the overall three-dimensional shape; a third step of calculating, from local three-dimensional shape information of the three-dimensional local regions, a three-dimensional face feature amount, which is local region shape information relating to the shape of each three-dimensional local region and is a three-dimensional feature amount of the face; and a fourth step of comparing the three-dimensional face feature amount with a comparison face feature amount prepared in advance in order to perform an authentication operation on the person to be authenticated.

[38] The authentication method according to claim 37, wherein the first step includes a fifth step of acquiring a two-dimensional image of the face; the method further comprises a sixth step of extracting feature parts, which are characteristic portions of the face, from the two-dimensional image, and a seventh step of calculating the three-dimensional coordinates of the feature parts; and the second step is a step of determining the three-dimensional local regions based on the three-dimensional coordinates of the feature parts.

[39] The authentication method according to claim 38, further comprising an eighth step of calculating a two-dimensional face feature amount, which is a two-dimensional feature amount of the face, from information on the feature parts, wherein the fourth step is a step of comparing a comprehensive face feature amount, formed by combining the two-dimensional face feature amount and the three-dimensional face feature amount, with the comparison face feature amount.

[40] The authentication method according to claim 37, further comprising a ninth step of calculating, based on the three-dimensional local regions determined in the second step, a global three-dimensional face feature amount, which is global region shape information relating to the shape of a three-dimensional global region, i.e. a global region in the overall three-dimensional shape, and which is a three-dimensional feature amount of the face, wherein the fourth step compares the global three-dimensional face feature amount calculated in the ninth step with a comparison global face feature amount prepared in advance in order to perform an authentication operation on the person to be authenticated.
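The four steps of method claim [37], with the optional two-dimensional branch of claims [38] and [39], can be read as a simple processing pipeline. The skeleton below makes that data flow explicit; every function passed in is a hypothetical placeholder for the corresponding processing unit in the claims, and the concatenation, threshold, and scoring rule are illustrative assumptions only.

```python
# Skeletal pipeline for the claimed method; all processing functions are injected.
def authenticate(stereo_images, enrolled_features, reconstruct_shape,
                 determine_regions, compute_3d_features, compute_2d_features,
                 compare, threshold=0.5):
    # Step 1: acquire the overall 3-D shape of the subject's face.
    overall_shape, face_image = reconstruct_shape(stereo_images)
    # Step 2: determine a plurality of 3-D local regions within that shape.
    local_regions = determine_regions(overall_shape)
    # Step 3: per-region shape information -> 3-D face feature amount,
    # optionally combined with a 2-D (texture-based) feature amount (claim [39]).
    feat_3d = compute_3d_features(local_regions)
    feat_2d = compute_2d_features(face_image)
    probe = list(feat_2d) + list(feat_3d)      # comprehensive feature by concatenation
    # Step 4: compare against the enrolled comparison feature amounts.
    best_name, best_score = None, float("inf")
    for name, reference in enrolled_features.items():
        score = compare(probe, reference)
        if score < best_score:
            best_name, best_score = name, score
    return best_score < threshold, best_name, best_score

# Trivial stand-ins just to show the call pattern (not meaningful processing).
demo = authenticate(
    stereo_images=None,
    enrolled_features={"alice": [0.0] * 4},
    reconstruct_shape=lambda imgs: ("shape", "image"),
    determine_regions=lambda shape: ["nose", "cheek_l", "cheek_r"],
    compute_3d_features=lambda regions: [0.0, 0.0],
    compute_2d_features=lambda image: [0.0, 0.0],
    compare=lambda p, r: sum((a - b) ** 2 for a, b in zip(p, r)) ** 0.5,
)
print(demo)   # -> (True, 'alice', 0.0)
```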
PCT/JP2007/071807 2006-11-10 2007-11-09 Authentication system and authentication method Ceased WO2008056777A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008543143A JP4780198B2 (en) 2006-11-10 2007-11-09 Authentication system and authentication method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006305739 2006-11-10
JP2006-305739 2006-11-10

Publications (1)

Publication Number Publication Date
WO2008056777A1 true WO2008056777A1 (en) 2008-05-15

Family

ID=39364587

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/071807 Ceased WO2008056777A1 (en) 2006-11-10 2007-11-09 Authentication system and authentication method

Country Status (2)

Country Link
JP (1) JP4780198B2 (en)
WO (1) WO2008056777A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5904168B2 (en) * 2012-07-20 2016-04-13 Jfeスチール株式会社 Feature point extraction method and feature point extraction device for captured image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02224185A (en) * 1989-02-27 1990-09-06 Osaka Gas Co Ltd Method and device for identifying person
JPH0944688A (en) * 1995-05-23 1997-02-14 Matsushita Electric Ind Co Ltd Curved surface conversion method for point cloud data and shape measuring method using the same
JPH11283033A (en) * 1998-03-27 1999-10-15 Ricoh System Kaihatsu Kk Method for utilizing feature amount for image identification and recording medium for storing program therefor
JP2002216129A (en) * 2001-01-22 2002-08-02 Honda Motor Co Ltd Apparatus and method for detecting face area and computer-readable recording medium
JP2004222118A (en) * 2003-01-17 2004-08-05 Omron Corp Imaging equipment
WO2005038700A1 (en) * 2003-10-09 2005-04-28 University Of York Image recognition

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANDO ET AL.: "Kao Hyomen no Hosen Vector o Mochiita Kojin Shogo", THE JOURNAL OF THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN, vol. 31, no. 5, 25 September 2002 (2002-09-25), pages 841 - 847 *
MASUI ET AL.: "3D Keisoku ni yoru Kao Gazo Ninshiki no Kiso Kento", ITEJ TECHNICAL REPORT, vol. 14, no. 36, 29 June 1990 (1990-06-29), pages 7 - 12 *
SHIN ET AL.: "Spin Image o Mochiita 3 Jigen Scan Data kara no Jintai no Tokuchoten Chushutsu", FIT2006 5TH FORUM ON INFORMATION TECHNOLOGY JOHO KAGAKU GIJUTSU LETTERS, vol. 5, 21 August 2006 (2006-08-21), pages 329 - 331 *
TANAKA ET AL.: "3 Jigen Kyokuritsu o Mochiita Kao no Dotei - Kao no 3 Jigen Keijo Chushutsu", THE TRANSACTIONS OF THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS, vol. J76-D-II, no. 8, 25 August 1993 (1993-08-25), pages 1595 - 1603 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009128192A (en) * 2007-11-22 2009-06-11 Ihi Corp Object recognition device and robot device
JP2009128191A (en) * 2007-11-22 2009-06-11 Ihi Corp Object recognition device and robot device
JP2010045770A (en) * 2008-07-16 2010-02-25 Canon Inc Image processor and image processing method
JP2012238121A (en) * 2011-05-10 2012-12-06 Canon Inc Image recognition device, control method for the device, and program
JP2013089123A (en) * 2011-10-20 2013-05-13 National Institute Of Information & Communication Technology Generation method, generation program, and generation system of individual model data
JP2013131209A (en) * 2011-12-20 2013-07-04 Apple Inc Formation of face feature vector
JP2013196046A (en) * 2012-03-15 2013-09-30 Omron Corp Authentication apparatus, control method of authentication apparatus, control program and recording medium
JP2014178969A (en) * 2013-03-15 2014-09-25 Nec Solution Innovators Ltd Information processor and determination method
JP2017016192A (en) * 2015-06-26 2017-01-19 株式会社東芝 Three-dimensional object detection apparatus and three-dimensional object authentication apparatus
CN111122687A (en) * 2019-11-21 2020-05-08 国政通科技有限公司 Anti-terrorist security inspection method for explosives
CN111122687B (en) * 2019-11-21 2022-09-20 国政通科技有限公司 Anti-terrorist security inspection method for explosives
JP2023500739A (en) * 2019-12-20 2023-01-10 コーニンクレッカ フィリップス エヌ ヴェ Illumination compensation in imaging
JP7209132B2 (en) 2019-12-20 2023-01-19 コーニンクレッカ フィリップス エヌ ヴェ Illumination compensation in imaging
CN117593367A (en) * 2023-10-24 2024-02-23 北京城建集团有限责任公司 Electrical equipment support positioning system

Also Published As

Publication number Publication date
JP4780198B2 (en) 2011-09-28
JPWO2008056777A1 (en) 2010-02-25

Similar Documents

Publication Publication Date Title
JP4780198B2 (en) Authentication system and authentication method
Pan et al. 3D face recognition using mapped depth images
JP4653606B2 (en) Image recognition apparatus, method and program
Hsu et al. RGB-D-based face reconstruction and recognition
JP2017016192A (en) Three-dimensional object detection apparatus and three-dimensional object authentication apparatus
JP2014081347A (en) Method for recognition and pose determination of 3d object in 3d scene
JP4696778B2 (en) Authentication apparatus, authentication method, and program
JP4752433B2 (en) Modeling system, modeling method and program
CN101770566A (en) Quick three-dimensional human ear identification method
JP2007058397A (en) Authentication system, registration system, and medium for certificate
JP4952267B2 (en) Three-dimensional shape processing apparatus, three-dimensional shape processing apparatus control method, and three-dimensional shape processing apparatus control program
CN103971122A (en) Three-dimensional human face description method and device based on depth image
CN111652018B (en) Face registration method and authentication method
CN106778491B (en) The acquisition methods and equipment of face 3D characteristic information
JP4539494B2 (en) Authentication apparatus, authentication method, and program
JP4992289B2 (en) Authentication system, authentication method, and program
JP2005351814A (en) Detector and detecting method
JP5018029B2 (en) Authentication system and authentication method
JP4539519B2 (en) Stereo model generation apparatus and stereo model generation method
Zhang et al. Face recognition using SIFT features under 3D meshes
Mian et al. 3D face recognition
Hajati et al. Pose-invariant 2.5 D face recognition using geodesic texture warping
JP4956983B2 (en) Authentication system, authentication method and program
JP2007257310A (en) Face analysis system
JP4525523B2 (en) Authentication system, authentication method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07831538

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2008543143

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07831538

Country of ref document: EP

Kind code of ref document: A1