CN116416671B - Face image correcting method and device, electronic equipment and storage medium - Google Patents
Face image correcting method and device, electronic equipment and storage medium
- Publication number
- CN116416671B (application number CN202310685840.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- key part
- target
- face
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The application discloses a face image correction method and device, an electronic device and a storage medium. The face image correction method comprises the following steps: extracting at least two key-part images to be processed from a face image to be processed; selecting a target key-part image from the at least two key-part images to be processed according to the standard key-part images in a target sample set, the target sample set being determined from standard key-part images extracted from standard face images; extracting feature vectors of the target sample set and of the target key-part image, respectively, to obtain a feature-vector set of the target sample set and a feature vector of the target key-part image; calculating a correction angle for the target key-part image based on the feature-vector set of the target sample set and the feature vector of the target key-part image; and correcting the face image to be processed based on the correction angle. This solves the problem that conventional image-rectification techniques cannot process face images.
Description
Technical Field
The embodiments of the invention relate to image processing technology, and in particular to a face image correction method and device, an electronic device and a storage medium.
Background
Face recognition is a biometric technology that identifies human faces through computer-based analysis and comparison. In recent years, as more and more researchers have entered the field, face recognition technology has developed substantially and gradually matured.
The quality of the face image acquired during face detection directly affects the recognition rate. A major complicating factor is pose: the difference between face images of the same person in different poses can be larger than the difference between face images of different persons in the same pose. When the detected face pose is non-standard, the recognition rate drops, and pose remains one of the hardest problems for most existing face recognition systems. Prior-art face rotation techniques therefore rotate face images in different poses back to the frontal position.
Conventional image-rectification techniques estimate the correction angle from the displacement and angular change of the pixels of a simple, black-and-white image to be processed relative to the pixels of a target image. They work well for rectifying images such as bills and documents, but either cannot process complex face images at all or process them poorly.
Disclosure of Invention
The invention provides a face image correction method and device, an electronic device and a storage medium, so as to correct face images in different poses.
In a first aspect, an embodiment of the present invention provides a face image correcting method, where the method includes:
extracting at least two key-part images to be processed from a face image to be processed;
selecting a target key-part image from the at least two key-part images to be processed according to the standard key-part images in a target sample set, wherein the target sample set is determined from standard key-part images extracted from standard face images;
extracting feature vectors of the target sample set and of the target key-part image, respectively, to obtain a feature-vector set of the target sample set and a feature vector of the target key-part image;
calculating a correction angle for the target key-part image based on the feature-vector set of the target sample set and the feature vector of the target key-part image;
and correcting the face image to be processed based on the correction angle.
Optionally, calculating the correction angle of the target key-part image based on the feature-vector set of the target sample set and the feature vector of the target key-part image includes:
obtaining a plurality of similarities by comparing the feature vector of the target key-part image with the feature vector of each image in the feature-vector set of the target sample set;
determining the plurality of correction angles corresponding to the plurality of similarities;
and fusing the plurality of correction angles to obtain the correction angle of the target key-part image.
Optionally, correcting the face image to be processed based on the correction angle includes:
if two target key-part images are symmetric to each other, taking the mean of their correction angles as the correction angle of the symmetric pair;
taking the mean of the correction angle of the symmetric target key-part images and the correction angles of the non-symmetric target key-part images as the correction angle of the face image to be processed;
and correcting the face image to be processed based on that correction angle.
Optionally, the method further comprises determining the target sample set by:
augmenting at least one standard key-part image to obtain at least one augmented image corresponding to the at least one standard key-part image;
and taking the set of the at least one standard key-part image and the at least one augmented image as the target sample set.
Optionally, before extracting the at least two key-part images to be processed from the face image to be processed, the method further includes:
acquiring a plurality of different standard face images, and extracting the centre coordinates of each standard key-part image in the standard face images;
computing, from the centre coordinates of the standard key-part images, the distance intervals between the key parts of a standard face; the distance intervals between key parts are used to verify the correction result.
Optionally, the method further comprises verifying the correction result by:
calculating the centre coordinates of each key-part image of the corrected face image;
and judging whether the distances between those centre coordinates fall within the distance intervals between the key parts of a standard face; if so, the correction is determined to be correct.
In a second aspect, an embodiment of the present invention further provides a face image correction device, comprising:
a key-part extraction module, configured to extract at least two key-part images to be processed from a face image to be processed;
a target-part selection module, configured to select a target key-part image from the at least two key-part images to be processed according to the standard key-part images in a target sample set, the target sample set being determined from standard key-part images extracted from standard face images;
a feature-vector extraction module, configured to extract feature vectors of the target sample set and of the target key-part image, respectively, to obtain a feature-vector set of the target sample set and a feature vector of the target key-part image;
a correction-angle calculation module, configured to calculate the correction angle of the target key-part image based on the feature-vector set of the target sample set and the feature vector of the target key-part image;
and a face image correction module, configured to correct the face image to be processed based on the correction angle.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the face image correction method according to any embodiment of the present application.
In a fourth aspect, embodiments of the present application further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the face image correction method according to any embodiment of the present application.
In the present application, at least two key-part images to be processed are extracted from the face image to be processed; a target key-part image is selected from them according to the standard key-part images in a target sample set, the target sample set being determined from standard key-part images extracted from standard face images; feature vectors are extracted from the target sample set and from the target key-part image, giving the feature-vector set of the target sample set and the feature vector of the target key-part image; the correction angle of the target key-part image is calculated from these, and the face image is corrected based on that angle. Before correction, the key parts of the face are located by target-detection technology, their correction angles are computed, and the deflected face image is rotated by the correction angle against its direction of deflection. The technical scheme of the application builds on mature, stable and reliable technology and uses this specific method to solve the problem that conventional image-rectification techniques cannot process face images. Because the corrected face image is no longer deflected, it can be passed to further face-processing stages with less subsequent work; for example, it can be used for face recognition, greatly improving recognition accuracy.
Drawings
Fig. 1 is a schematic flow chart of a face image correcting method according to a first embodiment of the present application;
fig. 2 is a schematic flow chart of a face image correcting method according to a second embodiment of the present application;
fig. 3 is a schematic flow chart of a face image correcting method according to a third embodiment of the present application;
fig. 4 is an exemplary diagram of each standard key part of a standard face image according to a fourth embodiment of the present application;
fig. 5 is an exemplary diagram of coordinate systems of respective standard key parts of a standard face image according to a fourth embodiment of the present application;
- FIG. 6 is an exemplary diagram of obtaining a target sample set by augmenting each standard key part according to the fourth embodiment of the present application;
fig. 7 is an exemplary diagram of a to-be-detected picture according to a fourth embodiment of the present application;
fig. 8 is an exemplary diagram of a rotation calculation result of each target portion of a picture to be detected according to a fourth embodiment of the present application;
fig. 9 is a diagram showing an example of a rotation result of a picture to be detected according to a fourth embodiment of the present application;
fig. 10 is a schematic structural diagram of a face image correcting device according to a fifth embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present application.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a schematic flow chart of a face image correcting method according to an embodiment of the present invention, where the embodiment is applicable to a case of correcting an image with a non-standard posture, and the method may be performed by a face image correcting device, and the device may be implemented in a software and/or hardware manner. The device can be configured in a terminal device, and the method specifically comprises the following steps:
S110, extracting at least two key-part images to be processed from the face image to be processed.
The face image to be processed may be an image in a non-standard pose, for example a face that is sideways, upside down, or inclined to some degree and therefore needs correction. A key-part image to be processed may be a landmark image of a part of the face, for example a facial organ, specifically the eyebrows, eyes, nose, mouth or ears; it may also be composed of pre-selected key points on the skin surface of the face.
In this embodiment, target detection is performed on the input face image to be processed by any prior-art means, and at least two key-part images to be processed are extracted. Specifically, face detection may be performed using OpenCV (a cross-platform computer vision and machine learning software library): the image is first converted to grayscale and checked for a face; once a face is detected, the image is processed (resized, cropped, blurred, sharpened and the like); the face is then segmented from the rest of the image; finally, features are extracted with the Haar-like feature algorithm, and the landmark images of each facial part are detected by means of edge detection, line detection and centre detection.
S120, selecting a target key-part image from the at least two key-part images to be processed according to the standard key-part images in the target sample set.
The standard face image may be a frontal face image with no inclination, or a person skilled in the art may pre-define a face image with a certain inclination as the standard, according to requirements. There may be multiple standard face images, of the same person or of different persons. The standard key-part images are the part landmark images of the standard face images, and standard key-part images of the same type may be grouped into the same sample set; for example, the left-eye images of different standard faces may form sample set EL and the right-eyebrow images may form sample set EbR.
The target sample set is determined from standard key-part images extracted from standard face images. Specifically, one or more sample sets obtained from one or more types of standard key-part images extracted from the standard face images can serve as the target sample set in this embodiment.
Further, since obtaining standard face images sampled at every angle is relatively costly, in an alternative embodiment the standard face images may be augmented to increase the number of samples. Specifically, the target sample set may be determined by: augmenting at least one standard key-part image to obtain at least one augmented image corresponding to it; and taking the set of the at least one standard key-part image and the at least one augmented image as the target sample set.
The augmentation includes at least one of stretching, rotation and mirroring. It should be understood that mirroring is only applied to symmetric key parts, such as the eyes and eyebrows, and that the mirrored image should be placed in the sample set symmetric to that of the source part; for example, the mirror image of a left-eye part image may be placed in sample set ER.
Preferably, the target sample set may be obtained as follows: each part image of the standard face image is rotated clockwise by a preset angle to generate a new image, and rotation continues, generating an image at each step, until a full 360 degrees has been covered; images of the same type at the various angles are then grouped together as the target sample set. The preset angle can be set according to actual requirements.
In addition, augmentation may be applied to one, two or more standard key-part images; one or more kinds of augmentation may be applied to the same standard key-part image; and different standard key-part images may receive the same or different augmentations. The types and number of augmented images in the target sample set can be set according to actual requirements and are not limited here.
Specifically, the target key-part image may be selected from the at least two key-part images to be processed according to the category of the standard key-part images in the target sample set. The target key-part image is of the same category as the target sample set, and there may be one or more target key-part images. For example, the category of the standard key-part images in the target sample set can be determined from the set's number, name and so on, the category of each key-part image to be processed can be determined by machine-vision recognition, and the target key-part image whose category matches the target sample set is then selected.
S130, extracting feature vectors of the target sample set and of the target key-part image, respectively, to obtain the feature-vector set of the target sample set and the feature vector of the target key-part image.
S140, calculating the correction angle of the target key-part image based on the feature-vector set of the target sample set and the feature vector of the target key-part image.
Feature points are points at which a small shift in any direction produces a large change in grayscale, so the feature vectors of an augmented image differ from those of the original standard key-part image. Commonly used feature extraction methods include SIFT, HOG and neural-network feature extraction.
For example, extracting SIFT feature points in an image and computing feature vectors (descriptors) for the key points may be achieved by the following (in OpenCV 4.4 and later, SIFT_create lives in the main cv2 module rather than cv2.xfeatures2d):
sift = cv2.SIFT_create()
keypoints, features = sift.detectAndCompute(image, None)
The present application does not limit the manner in which the feature vectors are extracted.
Optionally, feature vectors are extracted for each standard key-part image and each augmented image in the target sample set to obtain the feature-vector set of the target sample set. Each element in the set may include an ID, the image feature vector, the image category, the image value (the picture as base64) and the image rotation angle, for example: ID: 3; image feature vector: rnSTyVlZq8zFH1PL2CETwvl; image category: left eyebrow; image value (base64): lG8khviJWroiAIg1; image rotation angle: 10. The feature vector of the target key-part image is extracted in the same way.
After the feature vectors are extracted, the feature points or feature vectors of any two images can be matched. For example, SIFT matching can be implemented in several ways through OpenCV's two-dimensional feature framework (features2d), such as VectorDescriptorMatcher or BFMatcher.
Optionally, calculating the correction angle of the target key-part image based on the feature-vector set of the target sample set and the feature vector of the target key-part image includes: obtaining a plurality of similarities by comparing the feature vector of the target key-part image with the feature vector of each image in the feature-vector set of the target sample set; determining the correction angles corresponding to those similarities; and fusing the correction angles to obtain the correction angle of the target key-part image.
For example, a first preset number of feature vectors with the highest similarity may be selected and the corresponding rotation angles looked up in the feature-vector set of the target sample set; the mean of those angles is computed, then their variance, a second preset number of angles that deviate most are discarded, and the mean of the remaining angles is taken as the correction angle of the target key-part image. The first and second preset numbers are not limited in this embodiment.
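This fusion rule can be sketched as follows; fuse_rotation_angle, top_k (the first preset number) and drop_n (the second preset number) are assumed names with example defaults.

```python
import statistics

def fuse_rotation_angle(similarity_angle_pairs, top_k=5, drop_n=1):
    """Fuse (similarity, angle) candidates into one correction angle."""
    # keep the top_k candidates with the highest similarity
    best = sorted(similarity_angle_pairs, key=lambda p: p[0], reverse=True)[:top_k]
    angles = [angle for _, angle in best]
    mean = statistics.fmean(angles)
    # discard the drop_n angles that deviate most from the mean, then re-average
    angles.sort(key=lambda a: abs(a - mean), reverse=True)
    return statistics.fmean(angles[drop_n:])
```

For instance, with candidates whose top-five angles are 10, 12, 11, 50 and 9 degrees, the outlier 50 is dropped and the fused angle is the mean of the remaining four, 10.5 degrees.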
S150, correcting the face image to be processed based on the correction angle.
Optionally, after the correction angle of the target key-part image has been calculated, the face image to be processed may be rotated in the reverse direction by that angle to obtain the corrected face image.
Before correction, the key parts of the face are located by target-detection technology, their correction angles are computed, and the deflected face image is rotated by the correction angle against its direction of deflection. The technical scheme of the application solves the problem that conventional image-rectification techniques cannot process face images; and because the corrected image is no longer deflected, it can be passed to further face-processing stages, reducing the workload in subsequent image processing.
Example two
Fig. 2 is a schematic flow chart of a face image correction method according to a second embodiment of the present application, in which step S150 is further refined as: if two target key-part images are symmetric to each other, the mean of their correction angles is taken as the correction angle of the symmetric pair; the mean of the correction angle of the symmetric pair and the correction angles of the non-symmetric target key-part images is taken as the correction angle of the face image to be processed; and the face image to be processed is corrected based on that angle. It should be noted that this embodiment further optimizes the embodiments above, and identical terms have the same definitions, principles, procedures and technical effects as before. The method comprises the following steps:
S210, extracting at least two key-part images to be processed from the face image to be processed.
S220, selecting a target key-part image from the at least two key-part images to be processed according to the standard key-part images in the target sample set.
S230, extracting feature vectors of the target sample set and of the target key-part image, respectively, to obtain the feature-vector set of the target sample set and the feature vector of the target key-part image.
S240, calculating the correction angle of the target key-part image based on the feature-vector set of the target sample set and the feature vector of the target key-part image.
S250, if two target key-part images are symmetric to each other, taking the mean of their correction angles as the correction angle of the symmetric pair.
S260, taking the mean of the correction angle of the symmetric target key-part images and the correction angles of the non-symmetric target key-part images as the correction angle of the face image to be processed.
S270, correcting the face image to be processed based on the correction angle of the face image to be processed.
Optionally, because the face is left-right symmetric, symmetric target key-part images, such as the eyes and eyebrows, exist in this embodiment. Step S250 avoids errors caused by special cases such as slanted eyebrows or slanted eyes, and step S260 combines the correction angles of all key parts of the face into an angle better suited to the face image as a whole; the face image to be processed can then be rotated in reverse by that angle to obtain the corrected face image.
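The two-stage averaging of steps S250 and S260 can be sketched as below; face_correction_angle and its argument layout are assumptions for illustration.

```python
import statistics

def face_correction_angle(symmetric_pairs, single_angles):
    """S250/S260: average each symmetric pair, then average with the single parts."""
    # S250: each symmetric pair (e.g. the two eyes) contributes one averaged angle
    merged = [statistics.fmean(pair) for pair in symmetric_pairs]
    # S260: combine the pair averages with the non-symmetric parts (e.g. nose, mouth)
    return statistics.fmean(merged + list(single_angles))
```

For example, eyes at 8 and 12 degrees average to 10, eyebrows at 9 and 11 average to 10, and together with a nose at 10 and a mouth at 13 the face correction angle is 10.75 degrees.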
On the basis of the embodiments above, this embodiment corrects the face image as follows: if two target key-part images are symmetric, the mean of their correction angles is used as the correction angle of the symmetric pair; the mean of that angle and the correction angles of the non-symmetric target key-part images is used as the correction angle of the face image to be processed; and the face image is corrected by that angle. Errors between the correction angles of the individual key parts are thereby smoothed out, a more suitable correction angle is obtained, and a better technical effect is achieved.
Example III
Fig. 3 is a schematic flow chart of a face image correction method according to a third embodiment of the present invention, which adds, on the basis of the embodiments above: acquiring a plurality of different standard face images and extracting the centre coordinates of each standard key-part image in the standard face images; computing, from those centre coordinates, the distance intervals between the key parts of a standard face, the intervals being used to verify the correction result; calculating the centre coordinates of each key-part image of the corrected face image; and judging whether the resulting distances fall within the standard intervals, in which case the correction is determined to be correct. It should be noted that this embodiment further optimizes the embodiments above, and identical terms have the same definitions, principles, procedures and technical effects as before. The method comprises the following steps:
S310, acquiring a plurality of different standard face images and extracting the centre coordinates of each standard key-part image in the standard face images.
S320, computing, from the centre coordinates of the standard key-part images, the distance intervals between the key parts of a standard face.
The distance intervals between key parts are used to verify the correction result.
Optionally, each standard key part image of the standard face image, such as the eyebrows, eyes, nose and mouth, may be detected by any target detection technique in the prior art. The standard key part images are then connected: a horizontal line parallel to the line connecting the two eyes (or the two eyebrows) is taken as the x-axis, a vertical line parallel to the line connecting the nose and the mouth is taken as the y-axis, and a two-dimensional rectangular coordinate system is established in which the central coordinate value of each standard key part image is obtained.
Further, the two-dimensional rectangular coordinate system obeys the following rules: 1. the horizontal positions (x-coordinates) of the nose and the mouth are approximately equal; 2. the nose is located above the mouth; 3. the nose is located between the two eyes; 4. the eyebrows are located above the eyes.
Further, the same processing is performed on a plurality of different standard face images to obtain a large amount of data, from which the distance intervals among the key parts of the standard face are counted.
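As an illustration only, the interval statistics described above might be sketched as follows; the part names, coordinate values and margin are invented for this sketch, and a real system would obtain the center coordinates from a target detection model:

```python
# Hypothetical center coordinates (x, y) of key parts detected in several
# standard face images; all values below are illustrative.
standard_faces = [
    {"eye_l": (-30, 0), "eye_r": (30, 0), "nose": (0, -35), "mouth": (0, -60)},
    {"eye_l": (-32, 0), "eye_r": (32, 0), "nose": (0, -37), "mouth": (0, -64)},
    {"eye_l": (-29, 0), "eye_r": (29, 0), "nose": (0, -34), "mouth": (0, -58)},
]

def interval(values, margin=1.0):
    """Distance interval [min - margin, max + margin] observed in the samples."""
    return (min(values) - margin, max(values) + margin)

# Vertical nose-to-mouth distances and horizontal eye distances per sample.
nose_mouth = [abs(f["nose"][1] - f["mouth"][1]) for f in standard_faces]
eyes = [abs(f["eye_l"][0] - f["eye_r"][0]) for f in standard_faces]

L_EF = interval(nose_mouth)  # nose-to-mouth vertical distance interval L(E, F)
L_CD = interval(eyes)        # eye horizontal distance interval L(C, D)
print(L_EF, L_CD)
```

A corrected face is later accepted only if its measured distances fall inside these intervals.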
S330, extracting at least two key part images to be processed in the face images to be processed.
S340, selecting a target key part image from the at least two key part images to be processed according to the standard key part images in the target sample set.
S350, extracting feature vectors of the target sample set and the target key part image respectively to obtain a feature vector set of the target sample set and a feature vector of the target key part image.
S360, calculating the rotation angle of the target key part image based on the characteristic vector set of the target sample set and the characteristic vector of the target key part image.
S370, performing correction processing on the face image to be processed based on the rotation angle.
S380, calculating the central coordinate value of each key part image to be processed of the face image to be processed after the correction processing.
S390, judging whether the central coordinate values after the correction processing are within the distance intervals among the key parts of the standard face, and if so, determining that the correction processing is correct.
Alternatively, the central coordinate values of the key part images to be processed may be obtained before the correction processing of step S370.
Preferably, while at least two key part images to be processed are extracted in step S330, a two-dimensional rectangular coordinate system of the face image to be processed may also be established and the central coordinate values of those key part images obtained. After the correction processing of step S370, the new central coordinate values of the rotated key part images are calculated from the rotation angle using trigonometric functions, and it is judged whether the new central coordinate values lie within the distance intervals among the key parts of the standard face; if so, the correction is determined to be correct.
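The trigonometric recalculation of a rotated center coordinate is a standard two-dimensional rotation; a minimal sketch (the sample point and angle are illustrative):

```python
import math

def rotate_point(x, y, angle_deg, cx=0.0, cy=0.0):
    """Rotate (x, y) counterclockwise by angle_deg around center (cx, cy)."""
    a = math.radians(angle_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

# Center of a key-part region before correction, rotated by the computed
# correction angle (values are illustrative).
new_x, new_y = rotate_point(40.0, 10.0, 90.0)
print(round(new_x, 3), round(new_y, 3))  # approximately (-10.0, 40.0)
```

The new centers produced this way are then compared against the standard distance intervals.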
Optionally, the method further includes judging whether the new central coordinate values of the key part images to be processed conform to the rules of the two-dimensional rectangular coordinate system, and if so, determining that the correction is correct.
If a small number of the new central coordinate values do not conform to the rules, the target key part images corresponding to those values are eliminated, the rotation angle is recalculated, and the correction is performed again. If most of the new central coordinate values do not conform, it is determined that the face image to be processed does not contain a normal face, and processing is stopped.
According to the technical scheme of this embodiment, on the basis of the above embodiments, the following steps are added: acquiring a plurality of different standard face images and extracting the central coordinate value of each standard key part image; counting the distance intervals among the key parts of the standard face based on those central coordinate values, the intervals being used to verify the correction result; calculating the central coordinate value of each key part image to be processed after correction; and judging whether the corrected central coordinate values fall within the distance intervals among the key parts of the standard face, and if so, determining that the correction is correct. By adding this verification after face correction, the accuracy of the correction processing is further improved, objects that merely resemble faces are prevented from being misrecognized as faces, and a better user experience is achieved.
Example IV
The fourth embodiment of the application provides a face image correcting method adopting the technical scheme of the application, which comprises the following steps:
1. Establishing a feature library of the key parts of the face image, and a library of the relative distances and positions of the key parts.
1.1 Fig. 4 is an exemplary diagram of the standard key parts of a standard face image according to the fourth embodiment of the present application. As shown in fig. 4, region images of the individual parts of the standard face image, including the eyebrows, eyes, nose and mouth, are extracted based on a target detection technology.
1.1.1 Fig. 5 is an exemplary diagram of the coordinate system of the standard key parts of a standard face image according to the fourth embodiment of the present application. As shown in fig. 5, the points A, B, C, D, E and F of fig. 4 are connected to form a shape similar to a "Y". Taking a line parallel to CD (the line connecting the two eyes) as the x-axis and a line parallel to EF (the line connecting the nose and the mouth) as the y-axis, a two-dimensional rectangular coordinate system is established, and the two-dimensional coordinates of each target in the picture are recorded.
1.1.2 Based on the coordinate system shown in fig. 5, the following rules are obtained:
Rule 1: Ex ≈ Fx, i.e. the horizontal positions (x-coordinates) of the nose and the mouth are approximately equal.
Rule 2: ey > Fy, i.e. the nose is located above the mouth.
Rule 3: cx < Ex < Dx, i.e. the nose is located between the two eyes.
Rule 4: ay > Cy, by > Dy, i.e. the eyebrow is above the eye.
1.1.3 For a plurality of different standard face images, the previous steps are repeated to obtain a sufficient amount of corresponding data. Based on the coordinate system shown in fig. 5 and this data, the following values are obtained statistically:
nose-to-mouth vertical distance interval L (E, F)
Eye horizontal distance interval L (C, D)
Eyebrow horizontal interval section L (A, B)
1.2 Each key part picture is rotated clockwise by a designated angle to generate a new picture, and the rotation is repeated until a full 360 degrees has been covered; the pictures of the same part at the different angles are grouped together. Fig. 6 is an exemplary diagram of a target sample set obtained by amplifying each standard key part according to the fourth embodiment of the application; as shown in fig. 6, the n pictures of the eyebrow at its different angles form one group.
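A minimal sketch of this augmentation, rotating a key-part template (here a toy point outline standing in for a picture) in fixed clockwise steps through 360 degrees; the template and step size are illustrative:

```python
import math

def rotate(points, angle_deg):
    """Rotate a list of (x, y) points counterclockwise by angle_deg."""
    a = math.radians(angle_deg)
    return [(round(x * math.cos(a) - y * math.sin(a), 6),
             round(x * math.sin(a) + y * math.cos(a), 6)) for x, y in points]

def augment(template, step_deg=10):
    """Rotate a key-part template clockwise in fixed steps up to 360°,
    yielding (angle, rotated_points) pairs that form one sample group."""
    group = []
    angle = 0
    while angle < 360:
        # clockwise rotation by `angle` = counterclockwise rotation by -angle
        group.append((angle, rotate(template, -angle)))
        angle += step_deg
    return group

eyebrow = [(-10.0, 2.0), (0.0, 4.0), (10.0, 2.0)]  # illustrative outline
samples = augment(eyebrow, step_deg=90)
print(len(samples))  # 4 rotations in this group (0°, 90°, 180°, 270°)
```

Each generated sample keeps its rotation angle, which is exactly the information retrieved later by similarity search.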
1.3 The picture feature vectors of each generated group of pictures are extracted to obtain one feature vector set per group, where each element of a set comprises information such as the image features, the image and the image rotation angle, as shown in table 1. Table 1 is an example mouth feature vector set table provided in the fourth embodiment of the application.
Table 1:
2. Inputting the picture to be detected and obtaining its feature vectors.
2.1 The face parts of the picture to be detected are extracted based on target detection. Fig. 7 is a diagram of an example picture to be detected according to the fourth embodiment of the present application.
2.2 If the targets of the relevant parts are successfully detected, the two-dimensional coordinates of the center point of each target region are recorded and the feature vector values of each part are extracted, as shown in table 2. Table 2 is an example table of the extraction results for the key parts to be processed of the picture to be detected provided in the fourth embodiment of the application.
Table 2:
3. Searching by vector similarity and calculating the rotation angle.
3.1 A vector similarity search is carried out over the feature vector set: similar feature vectors in the set are retrieved, and the five entries with the highest similarity are taken together with their rotation angle information. As shown in table 3, table 3 is an exemplary table of the feature vector similarity search results for the left eyebrow provided in the fourth embodiment of the present invention.
Table 3:
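The retrieval of step 3.1 can be sketched with cosine similarity over tiny hand-made vectors; the 3-dimensional vectors and angles below are invented for the sketch, whereas a real feature vector would come from a deep model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top_k(query, vector_set, k=5):
    """Return the k (similarity, angle) pairs most similar to `query`.

    vector_set is a list of (feature_vector, rotation_angle) entries,
    standing in for one group of the target sample set."""
    scored = [(cosine(query, vec), angle) for vec, angle in vector_set]
    return sorted(scored, reverse=True)[:k]

# Illustrative entries: feature vector plus the rotation angle it encodes.
vector_set = [([1, 0, 0], 0), ([0.9, 0.1, 0], 10), ([0, 1, 0], 90),
              ([0.7, 0.7, 0], 45), ([0, 0, 1], 180), ([0.95, 0.05, 0.1], 5)]
best = top_k([1, 0, 0], vector_set, k=3)
print([angle for _, angle in best])  # angles of the closest matches
```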
3.2 The average of the five rotation angles from the previous step is calculated; the two entries that differ most from the average, i.e. the two with the largest squared deviation, are removed; the average of the remaining three entries is calculated and recorded as the rotation angle of the corresponding target part.
3.2.1 the average avg1 is calculated.
Average avg1 = (A12 + A78 + A954 + A33 + A6592) / 5; from the values in the preceding table, avg1 = (90 + 100 + 270 + 110 + 115) / 5 = 137.
3.2.2 The squared deviation of each rotation angle from avg1 is calculated.
Variance (squared deviation) VA = (A - avg1)^2; calculated by this formula, the values for the 5 rotation angles are obtained. As shown in table 4, table 4 is an exemplary table of the variance calculation results of the 5 rotation angles provided in the fourth embodiment of the present application.
Table 4:
3.2.3 The two entries with the largest variance, i.e. the data rows corresponding to 2209 and 17689 in the table above, are excluded, and the mean of the remaining 3 rotation angles is calculated: avg2 = (A78 + A33 + A6592) / 3 = (100 + 110 + 115) / 3 = 108.333. The rotation angle of the target part is therefore set to 108.333. As shown in table 5, table 5 is an exemplary table of the calculation results after excluding the two rotation angles according to the fourth embodiment of the present application.
Table 5:
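The trimmed averaging of steps 3.2.1 to 3.2.3 can be sketched as a small function; the function name is ours, and the input angles are the ones from the worked example:

```python
def trimmed_correction_angle(angles):
    """Average the five candidate angles, drop the two entries with the
    largest squared deviation from that average, and return the mean of
    the remaining three (steps 3.2.1-3.2.3)."""
    avg1 = sum(angles) / len(angles)
    ordered = sorted(angles, key=lambda a: (a - avg1) ** 2)
    kept = ordered[:-2]  # discard the two entries farthest from avg1
    return sum(kept) / len(kept)

# The five retrieved rotation angles from the worked example (Table 3).
angles = [90, 100, 270, 110, 115]
print(trimmed_correction_angle(angles))  # 108.333...
```

This reproduces the example: avg1 = 137, the entries 270 and 90 are discarded, and the result is (100 + 110 + 115) / 3 = 108.333.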
3.3 Steps 3.1 and 3.2 are repeated to calculate in turn the rotation angles of all detected target parts (eyebrows, eyes, nose and mouth), as shown in fig. 8 and table 6. Fig. 8 is an exemplary diagram, and table 6 an exemplary table, of the rotation calculation results of each target part of the picture to be detected provided in the fourth embodiment of the present application.
Table 6:
3.4 For symmetrical target parts such as the eyebrows, the average of the two rotation angles is taken and recorded as the rotation angle of both parts, so as to avoid errors from special cases such as splayed eyebrows; the eyes and other symmetrical parts are treated in the same way. As shown in table 7, table 7 is an example table of the adjusted rotation angle results provided in the fourth embodiment of the present application.
Table 7:
3.5 The angle by which the face needs to be corrected is calculated as the average of the rotation angles of all the parts, and the image is corrected accordingly. The face needs to rotate by an angle of (91.833 + 91.5 + 91.667 + 91) / 4 = 91.5, i.e. the image is rotated anticlockwise by 91.5 degrees so that the face in the image is upright. Fig. 9 is a diagram showing an example of the rotation result of the picture to be detected according to the fourth embodiment of the present application.
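Steps 3.4 and 3.5 together can be sketched as follows; the part names are our own labels, and the per-part angles are chosen so that the adjusted values match the worked example (91.833, 91.5, 91.667, 91):

```python
def symmetrize(angles, pairs=(("eyebrow_l", "eyebrow_r"), ("eye_l", "eye_r"))):
    """Step 3.4: replace the angles of each symmetric pair with their mean."""
    out = dict(angles)
    for left, right in pairs:
        if left in out and right in out:
            out[left] = out[right] = (out[left] + out[right]) / 2
    return out

# Illustrative per-part rotation angles (one entry per detected part).
angles = {"eyebrow_l": 92.0, "eyebrow_r": 91.666, "eye_l": 91.0,
          "eye_r": 92.0, "nose": 91.667, "mouth": 91.0}
adjusted = symmetrize(angles)

# Step 3.5: the face correction angle is the mean over one angle per part
# (each symmetric pair now contributes a single shared value).
parts = ["eyebrow_l", "eye_l", "nose", "mouth"]
face_angle = sum(adjusted[p] for p in parts) / len(parts)
print(round(face_angle, 3))  # 91.5
```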
4. Verifying that the correction is correct.
4.1 After the image has been rotated in the previous step, the new coordinate region of each rotated part is calculated from the rotation angle using trigonometric functions.
4.2 Using the rules and coordinate value intervals of section 1.1, it is checked whether the coordinates of each part after correction conform to the rules; if so, the correction is correct.
4.3 If a small number of parts do not conform to the rules, those parts are removed, the rotation angle is recalculated, and the correction is performed again.
4.4 If most of the parts do not conform, the image is determined not to contain a normal face, and processing is stopped.
It should be noted that the face image correction method provided in the fourth embodiment of the present application, adopting the technical solution of the present application, is only an example and is not used to limit the protection scope of the present application.
Example V
Fig. 10 is a schematic structural diagram of a face image correction device according to a fifth embodiment of the present application, where the face image correction device includes a key part extraction module 510, a target part selection module 520, a feature vector extraction module 530, a rotation angle calculation module 540, and a face image correction module 550.
The key part extraction module 510 is configured to extract at least two key part images to be processed in the face images to be processed;
the target location selection module 520 is configured to select a target key location image from the at least two key location images to be processed according to a standard key location image in the target sample set; the target sample set is determined according to a standard key part image extracted from a standard face image;
the feature vector extraction module 530 is configured to perform feature vector extraction on the target sample set and the target key location image, to obtain a feature vector set of the target sample set and a feature vector of the target key location image;
The rotation angle calculation module 540 is configured to calculate the rotation angle of the target key part image based on the feature vector set of the target sample set and the feature vector of the target key part image;
the face image correction module 550 is configured to perform correction processing on the face image to be processed based on the rotation angle.
Before face correction, the key parts of the face are identified through a target detection technology, the rotation angles of the key parts are calculated, and the deflected face image is rotated by the rotation angle in the direction opposite to its deflection. The technical scheme of the application solves the problem that the traditional image correction technology cannot process face images; moreover, since the corrected face image is no longer deflected, it can be used for further deep face processing, reducing the workload of subsequent image processing.
As an alternative embodiment, the rotation angle calculation module includes:
the similarity determining unit is used for obtaining a plurality of similarities according to the similarity between the feature vector of the target key part image and the feature vector of each image in the feature vector set of the target sample set;
The rotation angle determining unit is used for determining a plurality of rotation angles corresponding to the plurality of similarities;
and the rotation angle calculation unit is used for fusing the rotation angles to obtain the rotation angle of the target key part image.
As an optional implementation manner, the face image correcting module includes:
the symmetrical part rotation angle determining unit is used for determining, if any two target key part images are symmetrical, the other target key part image symmetrical thereto, and taking the average value of the rotation angles of the two target key part images as the rotation angle of the symmetrical target key part images;
the asymmetric part rotation angle determining unit is used for taking the average value of the rotation angle of the symmetrical target key part images and the rotation angle of the asymmetrical target key part image as the rotation angle of the face image to be processed;
the face image correcting unit is used for correcting the face image to be processed based on the correcting angle of the face image to be processed.
As an optional embodiment, the apparatus further comprises a target sample set determining module configured to:
Performing amplification treatment on at least one standard key part image to obtain at least one amplified image corresponding to the at least one standard key part image;
and taking the set of the at least one standard critical part image and the at least one amplified image as a target sample set.
As an optional implementation manner, the apparatus further includes a face data statistics module, configured to:
acquiring a plurality of different standard face images, and extracting the central coordinate values of all standard key part images in the standard face images;
counting distance intervals among key parts of the standard face based on the central coordinate values of the standard key part images; and the distance interval between the key parts is used for verifying the correction processing result.
As an optional implementation manner, the apparatus further includes a processing result checking module, configured to:
calculating the central coordinate value of each key part image to be processed of the face image to be processed after the correction processing;
and judging whether the central coordinate values after the correction processing are within the distance intervals among the key parts of the standard face, and if so, determining that the correction processing is correct.
The face image correcting device provided by the embodiment of the invention can execute the face image correcting method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executing method.
Example VI
Fig. 11 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention, as shown in fig. 11, the electronic device includes a processor 610, a memory 620, an input device 630, and an output device 640; the number of processors 610 in the electronic device may be one or more, one processor 610 being taken as an example in fig. 11; the processor 610, memory 620, input device 630, and output device 640 in the electronic device may be connected by a bus or other means, for example in fig. 11.
The memory 620, as a computer readable storage medium, may be used to store software programs, computer executable programs and modules, such as the program instructions/modules corresponding to the face image correction method in the embodiments of the present invention (for example, the key part extraction module 510, the target part selection module 520, the feature vector extraction module 530, the rotation angle calculation module 540, and the face image correction module 550 in the face image correction device). The processor 610 executes the various functional applications and data processing of the electronic device by running the software programs, instructions and modules stored in the memory 620, i.e., implements the face image correction method described above.
The memory 620 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function; the data storage area may store data created according to the use of the terminal, etc. In addition, the memory 620 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 620 may further include memory remotely located relative to the processor 610, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 630 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. The output device 640 may include a display device such as a display screen.
Example VII
A seventh embodiment of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform a face image correction method, the method comprising:
Extracting at least two key part images to be processed in the face images to be processed;
selecting a target key part image from the at least two key part images to be processed according to the standard key part images in the target sample set; the target sample set is determined according to a standard key part image extracted from a standard face image;
extracting feature vectors of the target sample set and the target key part image respectively to obtain a feature vector set of the target sample set and a feature vector of the target key part image;
calculating the rotation angle of the target key part image based on the characteristic vector set of the target sample set and the characteristic vector of the target key part image;
and performing correction processing on the face image to be processed based on the rotation angle.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the above-mentioned method operations, and may also perform the related operations in the face image correcting method provided in any embodiment of the present invention.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, etc., and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present invention.
It should be noted that, in the above embodiment of the apparatus, the units and modules included are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for distinguishing them from each other and are not used to limit the protection scope of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.
Claims (9)
1. The face image correcting method is characterized by comprising the following steps of:
extracting at least two key part images to be processed in the face images to be processed;
Selecting a target key part image from the at least two key part images to be processed according to the standard key part images in the target sample set; the target sample set is determined according to a standard key part image extracted from a standard face image;
extracting feature vectors of the target sample set and the target key part image respectively to obtain a feature vector set of the target sample set and a feature vector of the target key part image;
calculating the rotation angle of the target key part image based on the characteristic vector set of the target sample set and the characteristic vector of the target key part image;
and performing correction processing on the face image to be processed based on the rotation angle;
wherein the performing correction processing on the face image to be processed based on the rotation angle comprises:
if any two target key part images are symmetrical, determining the other target key part image symmetrical thereto, and taking the mean of the rotation angles of the two target key part images as the rotation angle of the symmetrical target key part images;
taking the mean of the rotation angle of the symmetrical target key part images and the rotation angle of the asymmetrical target key part image as the rotation angle of the face image to be processed;
and performing correction processing on the face image to be processed based on the rotation angle of the face image to be processed.
2. The method of claim 1, wherein the calculating the rotation angle of the target key-site image based on the set of feature vectors of the target sample set and the feature vectors of the target key-site image comprises:
obtaining a plurality of similarities according to the similarity between the feature vector of the target key part image and the feature vector of each image in the feature vector set of the target sample set;
determining a plurality of rotation angles corresponding to the plurality of similarities;
and fusing the plurality of rotation angles to obtain the rotation angle of the target key part image.
3. The method of claim 1, further comprising determining the target sample set by:
performing amplification treatment on at least one standard key part image to obtain at least one amplified image corresponding to the at least one standard key part image;
and taking the set of the at least one standard critical part image and the at least one amplified image as a target sample set.
4. The method according to claim 1, further comprising, prior to extracting at least two key-part images to be processed in the face images to be processed:
Acquiring a plurality of different standard face images, and extracting the central coordinate values of all standard key part images in the standard face images;
counting distance intervals among key parts of the standard face based on the central coordinate values of the standard key part images; and the distance interval between the key parts is used for verifying the correction processing result.
5. The method of claim 4, further comprising verifying the correction result by:
calculating the central coordinate value of each key part image to be processed of the face image to be processed after the correction processing;
and judging whether the central coordinate values after the correction processing are within the distance intervals among the key parts of the standard face, and if so, determining that the correction processing is correct.
6. A face image correction device, characterized by comprising:
the key part extraction module is used for extracting at least two key part images to be processed in the face images to be processed;
the target part selection module is used for selecting a target key part image from the at least two key part images to be processed according to the standard key part images in the target sample set; the target sample set is determined according to a standard key part image extracted from a standard face image;
The feature vector extraction module is used for extracting feature vectors of the target sample set and the target key part image respectively to obtain a feature vector set of the target sample set and a feature vector of the target key part image;
the rotation angle calculation module is used for calculating the rotation angle of the target key part image based on the characteristic vector set of the target sample set and the characteristic vector of the target key part image;
the face image correction module is used for performing correction processing on the face image to be processed based on the rotation angle;
wherein, the face image righting module comprises:
the symmetrical part rotation angle determining unit is used for determining, if any two target key part images are symmetrical, the other target key part image symmetrical thereto, and taking the average value of the rotation angles of the two target key part images as the rotation angle of the symmetrical target key part images;
the asymmetric part rotation angle determining unit is used for taking the average value of the rotation angle of the symmetrical target key part images and the rotation angle of the asymmetrical target key part image as the rotation angle of the face image to be processed;
The face image correcting unit is used for correcting the face image to be processed based on the correcting angle of the face image to be processed.
7. The apparatus of claim 6, wherein the rotation angle calculation module comprises:
the similarity determining unit is used for obtaining a plurality of similarities according to the similarity between the feature vector of the target key part image and the feature vector of each image in the feature vector set of the target sample set;
the rotation angle determining unit is used for determining a plurality of rotation angles corresponding to the plurality of similarities;
and the rotation angle calculation unit is used for fusing the rotation angles to obtain the rotation angle of the target key part image.
8. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the face image correction method of any one of claims 1-5.
9. A storage medium containing computer executable instructions, the computer executable instructions being used for performing the face image correction method of any one of claims 1-5 when executed by a computer processor.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310685840.5A CN116416671B (en) | 2023-06-12 | 2023-06-12 | Face image correcting method and device, electronic equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN116416671A CN116416671A (en) | 2023-07-11 |
| CN116416671B true CN116416671B (en) | 2023-10-03 |
Family
ID=87052968
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310685840.5A | Face image correcting method and device, electronic equipment and storage medium | 2023-06-12 | 2023-06-12 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116416671B (en) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104537609A (en) * | 2014-11-28 | 2015-04-22 | University of Shanghai for Science and Technology | Rotated image correction method |
| CN104809703A (en) * | 2015-04-22 | 2015-07-29 | University of Shanghai for Science and Technology | Simple image angle correction method |
| CN108446658A (en) * | 2018-03-28 | 2018-08-24 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for recognizing face images |
| CN111126376A (en) * | 2019-10-16 | 2020-05-08 | Ping An Technology (Shenzhen) Co., Ltd. | Picture correction method and device based on facial feature point detection and computer equipment |
| CN112364711A (en) * | 2020-10-20 | 2021-02-12 | Maxvision Technology Corp. | 3D face recognition method, device and system |
| CN114979470A (en) * | 2022-05-12 | 2022-08-30 | MIGU Culture Technology Co., Ltd. | Camera rotation angle analysis method, device, equipment and storage medium |
| CN115050069A (en) * | 2022-05-30 | 2022-09-13 | Shenzhen Kewei Robot Technology Co., Ltd. | Face and attribute recognition method and system based on deep learning and computer equipment |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10846838B2 (en) * | 2016-11-25 | 2020-11-24 | Nec Corporation | Image generation device, image generation method, and storage medium storing program |
- 2023-06-12: application CN202310685840.5A filed in China; granted as CN116416671B (status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN116416671A (en) | 2023-07-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10049262B2 (en) | | Method and system for extracting characteristic of three-dimensional face image |
| CN110852310B (en) | | Three-dimensional face recognition method and device, terminal equipment and computer readable medium |
| CN107506693B (en) | | Distorted face image correction method, device, computer equipment and storage medium |
| WO2021027336A1 (en) | | Authentication method and apparatus based on seal and signature, and computer device |
| CN111783770B (en) | | Image correction method, device and computer readable storage medium |
| CN103577815A (en) | | Face alignment method and system |
| CN101178768A (en) | | Image processing device and method and personal identification device |
| WO2014026483A1 (en) | | Character identification method and relevant device |
| WO2017016240A1 (en) | | Banknote serial number identification method |
| CN110991258B (en) | | Face fusion feature extraction method and system |
| CN109670440B (en) | | Identification method and device for giant panda faces |
| CN110598647B (en) | | Head posture recognition method based on image recognition |
| CN113837067B (en) | | Organ contour detection method, organ contour detection device, electronic device, and readable storage medium |
| CN108154132A (en) | | Method, system and equipment for extracting characters of identity card and storage medium |
| WO2013122009A1 (en) | | Reliability level acquisition device, reliability level acquisition method and reliability level acquisition program |
| JP4414401B2 (en) | | Facial feature point detection method, apparatus, and program |
| CN114359553B (en) | | Signature positioning method and system based on Internet of things and storage medium |
| CN116434071B (en) | | Determination method, determination device, equipment and medium for normalized building mask |
| CN113128427A (en) | | Face recognition method and device, computer readable storage medium and terminal equipment |
| CN111523406A (en) | | Method for frontalizing deflected faces based on an improved generative adversarial network structure |
| CN116416671B (en) | | Face image correcting method and device, electronic equipment and storage medium |
| CN106650719B (en) | | Method and device for identifying picture characters |
| CN108288024A (en) | | Face recognition method and device |
| CN110785769A (en) | | Face gender recognition method, face gender classifier training method and device |
| CN111612083A (en) | | Finger vein identification method, device and equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |