
WO2017078627A1 - Method and system for face in vivo detection - Google Patents


Info

Publication number
WO2017078627A1
WO2017078627A1 (PCT/SG2016/050543)
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
facial
illumination
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/SG2016/050543
Other languages
French (fr)
Inventor
Bin WENG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jing King Tech Holdings Pte Ltd
Original Assignee
Jing King Tech Holdings Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jing King Tech Holdings Pte Ltd filed Critical Jing King Tech Holdings Pte Ltd
Priority to SG11201803167PA priority Critical patent/SG11201803167PA/en
Publication of WO2017078627A1 publication Critical patent/WO2017078627A1/en
Priority to PH12018500945A priority patent/PH12018500945A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/164Detection; Localisation; Normalisation using holistic features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Definitions

  • the present invention relates to a method and a system for face in vivo detection and in particular, but not exclusively, for face in vivo detection based on an illumination component.
  • biometric identification technology has made great progress and some common biometric features used for identification include human face, fingerprint, and iris.
  • Biometric information for personal identification has been widely used in the world, through which real users and fake users can be distinguished.
  • the accuracy of biometric identification can be compromised by various means, such as the use of forged images of human face, fingerprint, iris, etc., during biometric verification.
  • biometric identification systems for in vivo detection to distinguish between biometric information submitted to such a system from a living individual and that of a non-living individual (such as forged images of a living individual), so as to prevent illegal forgers from stealing other people's biometric information for personal identification.
  • Biometric identification systems, in particular face recognition identification systems, have been widely used for personal identification, video surveillance and video information retrieval analysis in recent years due to their convenience of use, high acceptability and other advantages.
  • security threats associated with this technology must be addressed to ensure reliability and security of face recognition systems.
  • forgery login to a face recognition system may adopt one or more of the following methods: face images, face video clips and three-dimensional face model replica.
  • a face image can be obtained more easily than a face video clip or a three-dimensional face model replica, and hence is more frequently used in forgery login to a face recognition system.
  • This method requires the analysis of all kinds of actions, requires many complex algorithms for the analysis, and its verification accuracy and efficiency are far from satisfactory.
  • To analyze actions like the opening and closing of mouth and blinking of eyes requires precise tracking of the feature points on human faces, which is a very big challenge.
  • the method requires users to perform a variety of movements in strict accordance with instructions, which is not friendly to users.
  • the present invention seeks to provide a method and a system to overcome at least in part some of the aforementioned disadvantages.
  • a method for face detection comprising:
  • each of the facial images obtained is denoted as I_i, where i is a natural number.
  • each face image I_i, according to the Lambertian model, can be denoted as I_i(x,y) = R_i(x,y)·L_i(x,y), where R_i is the reflection component, representing the surface reflectance of the facial image; L_i is the illumination component, representing the illumination and shadow of the facial image; and (x,y) represents the coordinates of the pixels in the image; log-transform the face image I_i to obtain f_i(x,y) = v_i(x,y) + u_i(x,y).
  • N is the length and width of the image.
  • the high frequency coefficient of F_i(s,t) is set at 0.
  • the illumination component of the facial image can be obtained via inverse logarithmic transformation, i.e. L_i(x,y) = exp(û_i(x,y)).
  • M has an empirical value of 5.
  • the step of calculating the mean local variance for the illumination components of the face images obtained from T successive video frames comprises: dividing the illumination component of each face image equally into a×b image blocks with a×b pixels contained in each block, and using B_{i,j} to represent the j-th image block of the face image from the i-th frame; the mean local variance for T successive video frames is then Avar = (1/(T·a·b)) Σ_i Σ_j var(B_{i,j}).
  • the face image in the video is an image of a real face.
  • the face image in the video is not an image of a real face.
  • the threshold (Th) is set according to specific image quality, whereby a lower image resolution means a lower threshold Th.
  • an acquisition unit configured to capture a facial movement by video and to process the video to obtain a plurality of facial images from a plurality of successive video frames
  • a calculation unit configured to render each facial image obtained using the Lambertian model, compute the discrete cosine transform (DCT) to obtain an illumination component of each facial image, and calculate the mean local variance for the illumination components of the facial images; and
  • a determination unit configured to compare the mean local variance with a predetermined threshold (Th) to determine whether the facial image is an image of a real face.
  • each of the facial images obtained is denoted as I_i, where i is a natural number.
  • the illumination component of each face image I_i, according to the Lambertian model, can be denoted as I_i(x,y) = R_i(x,y)·L_i(x,y), where R_i is the reflection component, representing the surface reflectance of the facial image; L_i is the illumination component, representing the illumination and shadow of the facial image; and (x,y) represents the coordinates of the pixels in the image; log-transform the face image I_i to obtain f_i(x,y) = v_i(x,y) + u_i(x,y).
  • N is the length and width of the image.
  • the high frequency coefficient of F_i(s,t) is set at 0.
  • M is a parameter to be defined, which is generally set at 5.
  • the illumination component of the facial image can be obtained via inverse logarithmic transformation, i.e. L_i(x,y) = exp(û_i(x,y)).
  • M has an empirical value of 5.
  • the step of calculating the mean local variance for the illumination components of the face images obtained from T successive video frames comprises: dividing the illumination component of each face image equally into a×b image blocks with a×b pixels contained in each block, and using B_{i,j} to represent the j-th image block of the face image from the i-th frame; the mean local variance for T successive video frames is then Avar = (1/(T·a·b)) Σ_i Σ_j var(B_{i,j}).
  • var(B_{i,j}) is the variance of the pixel values of the image block B_{i,j}.
  • the face image in the video is an image of a real face.
  • the face image in the video is not an image of a real face.
  • the threshold (Th) is set according to specific image quality, whereby a lower image resolution means a lower threshold Th.
  • the present invention provides a method and a system for face in vivo detection based on an illumination component which offers high accuracy, excellent real-time performance and improved user experience.
  • the technical solution of the present invention comprises a method for face in vivo detection based on an illumination component, wherein the method includes the following steps:
  • Step 1: capture a facial movement, such as a human head movement, in the form of a video and obtain a plurality of face images by cropping the video.
  • Step 2: use the Lambertian model to render each face image obtained from Step 1, then compute the discrete cosine transform (DCT) to obtain the illumination component of each face image. It would be appreciated that in other embodiments this can be combined with surface shading algorithms, ray casting and the like.
  • DCT: discrete cosine transform
  • Step 3: based on the results of Step 2, calculate the mean local variance for the illumination components of the face images obtained from several successive video frames.
  • Step 4: compare the mean local variance with a predetermined or predefined threshold to determine whether the face image is an image of a real face.
  • Step 1 above requires obtaining face images by cropping the human head movement video, and the face images are denoted as I_i.
  • Step 2 above requires extracting the illumination component of each face image I_i, which, according to the Lambertian model, can be denoted as I_i(x,y) = R_i(x,y)·L_i(x,y), where:
  • R_i is the reflection component, representing the surface reflectance of the image scene;
  • L_i is the illumination component, representing the illumination and shadow of the image scene;
  • (x,y) represents the coordinates of the pixels in the image.
  • Log-transform the face image I_i to obtain f_i(x,y) = v_i(x,y) + u_i(x,y),
  • where f_i, v_i and u_i respectively represent the values of I_i, R_i and L_i over the log-domain.
  • N is the length and width of the image.
  • The high frequency coefficient of F_i(s,t) is set at 0.
  • M is a parameter to be defined, which is generally set at 5.
  • The illumination component of the image domain can be obtained via inverse logarithmic transformation, i.e. L_i(x,y) = exp(û_i(x,y)).
  • M has an empirical value of 5. It would be appreciated that M is an adjustable parameter and can take on other values apart from the value 5.
  • Step 3 requires calculating the mean local variance for the illumination components of the face images obtained from T successive video frames: divide the illumination component of each face image equally into a×b image blocks with a×b pixels contained in each block, and use B_{i,j} to represent the j-th image block of the face image from the i-th frame; the mean local variance for T successive video frames is then Avar = (1/(T·a·b)) Σ_i Σ_j var(B_{i,j}).
  • var(B_{i,j}) is the variance of the pixel values of the image block B_{i,j}.
  • Step 4 requires conducting face in-vivo detection: compare the Avar value obtained from Step 3 with the predefined threshold Th; if the Avar value is greater than or equal to Th, the face image in the video is an image of a real face, otherwise it is merely a face image.
  • the threshold Th is set according to specific image quality.
  • a lower image resolution means a lower threshold Th.
  • a method for face in vivo detection comprises the following steps:
  • Step 1: in order to make a facial movement video, such as a human head movement video, in actual practice a voice clip or a screen text can be used to give instructions to a user, requiring the user to shake or nod his/her head at the camera. It would be appreciated that other types of facial movement, such as blinking of the eyes, raising of the eyebrows, moving of the lips and the like, can also be performed and captured.
  • Step 2: conduct face detection for each image frame captured by the camera.
  • face detection identifies the face in photos (or video frames) and returns the location of the face.
  • Use I_i to represent the face image obtained from the i-th frame through cropping and scaling down, where i is a natural number.
  • Step 3: acquire the illumination component of each face image I_i.
  • The face image I_i can be denoted as I_i(x,y) = R_i(x,y)·L_i(x,y), where:
  • R_i is the reflection component, mainly describing the surface reflectance of the image scene;
  • L_i is the illumination component, mainly describing the illumination and shadow of the image scene.
  • Log-transform each face image I_i to acquire f_i(x,y) = v_i(x,y) + u_i(x,y).
  • Compute the DCT (discrete cosine transform) of f_i.
  • N is the length and width of the image.
  • The high frequency coefficient of F_i(s,t) is set at 0.
  • M is a parameter to be defined, which is generally set at 5.
  • Formulas (3) to (6) demonstrate the low-frequency filtering of the face image f_i over the log-domain via the DCT (discrete cosine transform).
  • M is an adjustable parameter and can take on other values apart from the value 5.
  • f̃_i can be used as the estimate of the illumination component, i.e. û_i(x,y) = f̃_i(x,y).
  • The illumination component of the image domain can be obtained by computing the inverse logarithmic transformation (exponential transformation), i.e. L_i(x,y) = exp(û_i(x,y)).
  • Step 4: calculate the mean local variance for the illumination components of the face images obtained from T successive video frames.
  • var(B_{i,j}) is the variance of the pixel values of the image block B_{i,j}.
  • T is set at 100.
  • Step 5: conduct face in-vivo detection.
  • Each face has a unique three-dimensional geometric structure (e.g. distinct unevenness can be seen around the nose, cheekbones, mouth, and eyes); therefore, when a person shakes or nods his/her head, the regional shadow on his/her face will experience significant changes, which are properly recorded in the illumination component L_i.
  • A photo, in contrast, has a smooth surface; therefore moving the photo will not lead to significant changes in the regional shadow.
  • The mean local variance Avar can be used to distinguish between a real face and a face image. If the Avar value is greater than the preset threshold Th, the face image in the video can be considered a real face; otherwise it is merely a face image.
  • The threshold Th is set according to specific image type and image quality.
  • a lower image resolution means a lower threshold Th.
  • the system comprises an acquisition unit configured to capture a facial movement by video and to process the video to obtain a plurality of facial images from a plurality of successive video frames; a calculation unit configured to render each facial image obtained using the Lambertian model, compute the discrete cosine transform (DCT) to obtain an illumination component of each facial image, and calculate the mean local variance for the illumination components of the facial images; and a determination unit configured to compare the mean local variance with a predetermined threshold (Th) to determine whether the facial image is an image of a real face.
  • DCT: discrete cosine transform
  • Each of the facial images obtained is denoted as I_i, where i is a natural number.
  • Each face image I_i, according to the Lambertian model, can be denoted as I_i(x,y) = R_i(x,y)·L_i(x,y), where R_i is the reflection component, representing the surface reflectance of the facial image; L_i is the illumination component, representing the illumination and shadow of the facial image; and (x,y) represents the coordinates of the pixels in the image; log-transform the face image I_i to obtain f_i(x,y) = v_i(x,y) + u_i(x,y).
  • N is the length and width of the image, and the high frequency coefficient of F_i(s,t) is set at 0.
  • The illumination component of the facial image can be obtained via inverse logarithmic transformation, i.e. L_i(x,y) = exp(û_i(x,y)).
  • M is an adjustable parameter and can take on other values apart from the value 5.
  • The step of calculating the mean local variance for the illumination components of the face images obtained from T successive video frames comprises: dividing the illumination component of each face image equally into a×b image blocks, using B_{i,j} to represent the j-th image block of the face image from the i-th frame.
  • var(B_{i,j}) is the variance of the pixel values of the image block B_{i,j}.
  • If the Avar value is greater than or equal to Th, the face image in the video is an image of a real face; if the Avar value is less than Th, it is not an image of a real face.
  • the threshold (Th) is set according to specific image quality, whereby a lower image resolution means a lower threshold Th.
  • the technical effects of the present invention lie in that the detection method and system of the present invention can distinguish between a real face and a face image safely, and during detection only requires a user to perform a facial movement casually, such as to move his/her head casually, instead of making different movements as strictly required at specific times, offering a more friendly user experience.
  • As the present invention does not rely on detection methods based on facial feature points, several deficiencies of such methods, such as lower accuracy and complex calculation, are avoided.
  • the present invention does not involve three-dimensional face reconstruction, hence achieving higher calculation speed and performing real-time processing.
  • The present invention focuses on face in vivo detection based on the illumination information in a face image, rather than relying on complex three-dimensional reconstruction or on facial feature points.
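The intuition above can be checked numerically: illumination components that contain shifting regional shading produce a larger mean local variance than the nearly uniform illumination of a flat photo. The following Python sketch simulates both cases end to end; the function names, the synthetic shading model and all parameter values are illustrative assumptions, not details taken from the patent.

```python
import numpy as np
from scipy.fft import dctn, idctn

def illum(frame, M=5):
    """Illumination component via low-pass DCT filtering in the log domain."""
    F = dctn(np.log(frame), norm="ortho")   # DCT of the log-image
    F[M:, :] = 0.0                          # zero the high-frequency
    F[:, M:] = 0.0                          # coefficients (keep an M x M block)
    return np.exp(idctn(F, norm="ortho"))   # back to the image domain

def avar(frames, a=8, b=8):
    """Mean local variance of the illumination components over all frames."""
    variances = []
    for frame in frames:
        L = illum(frame)
        h, w = L.shape
        for r in range(a):
            for c in range(b):
                variances.append(L[r*h//a:(r+1)*h//a, c*w//b:(c+1)*w//b].var())
    return float(np.mean(variances))

n, T = 64, 10
y, x = np.mgrid[0:n, 0:n] / n
rng = np.random.default_rng(0)
R = 0.5 + 0.5 * rng.random((n, n))   # synthetic surface reflectance detail

# A 3-D face: regional shading that shifts from frame to frame as the head moves.
real = [R * (0.4 + 0.6 * np.sin(np.pi * (x + 0.3 * t / T)) ** 2
             * np.cos(np.pi * y) ** 2) + 0.05 for t in range(T)]
# A flat photo: nearly uniform illumination however it is moved.
photo = [R * 0.8 + 0.05 for _ in range(T)]

assert avar(real) > avar(photo)   # shifting shadows raise the local variance
```

The comparison at the end mirrors the decision rule of Step 4: the shaded "face" sequence yields the larger Avar, so a threshold placed between the two values would separate them.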

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention discloses a method and a system for face in-vivo detection based on an illumination component. The method and system focus on in-vivo detection based on the illumination information of the face image rather than relying on complex three-dimensional reconstruction or on detection methods based on facial feature points. They can distinguish between a real face and a face image safely, and during detection only require a user to perform a facial movement casually instead of performing different movements as strictly required at specific times, offering a friendlier user experience. As the present invention does not rely on detection methods based on facial feature points, several deficiencies of such methods, such as lower accuracy and complex calculation, are avoided. The present invention also does not involve three-dimensional face reconstruction, hence achieving higher calculation speed and enabling real-time processing.

Description

METHOD AND SYSTEM FOR FACE IN VIVO DETECTION
FIELD OF THE INVENTION
The present invention relates to a method and a system for face in vivo detection and in particular, but not exclusively, for face in vivo detection based on an illumination component.
BACKGROUND
The following discussion of the background to the invention is intended to facilitate an understanding of the present invention. However, it should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was published, known or part of the common general knowledge in any jurisdiction as at the priority date of the application.
In recent years, biometric identification technology has made great progress and some common biometric features used for identification include human face, fingerprint, and iris. Biometric information for personal identification has been widely used in the world, through which real users and fake users can be distinguished. However, the accuracy of biometric identification can be compromised by various means, such as the use of forged images of human face, fingerprint, iris, etc., during biometric verification. To address this issue, there exist biometric identification systems for in vivo detection to distinguish between biometric information submitted to such a system from a living individual and that of a non-living individual (such as forged images of a living individual), so as to prevent illegal forgers from stealing other people's biometric information for personal identification.
Biometric identification systems, in particular face recognition identification systems, have been widely used for personal identification, video surveillance and video information retrieval analysis in recent years due to their convenience of use, high acceptability and other advantages. However, from the research phase to the application phase of face recognition technology, security threats associated with this technology must be addressed to ensure the reliability and security of face recognition systems. In general, forgery login to a face recognition system may adopt one or more of the following methods: face images, face video clips and three-dimensional face model replicas. Among them, a face image can be obtained more easily than a face video clip or a three-dimensional face model replica, and hence is more frequently used in forgery login to a face recognition system. It is thus necessary to design a face in vivo detection system that is protected against threats from forgery face image login for the purpose of practical application of a face recognition system. The technologies of face in vivo detection and face recognition are complementary; the advancement and maturity of the former would affect the practical applications of the latter.
Existing detection methods in the field of face in vivo detection to distinguish between an image of a face (or face image) and a real human face are typically as follows: 1) Estimating three-dimensional depth information from motion. The difference between a real face and a face image is that a real human face is a three-dimensional object with depth information while a face image is a two-dimensional plane; differences between the two can be found by reconstructing a three-dimensional face from several photos taken during the head-turning action. The disadvantage of this method is that three-dimensional facial reconstruction requires many photos of facial features for accurate tracking, which still needs major adjustment. In addition, the calculation involved in the three-dimensional facial reconstruction method is very complicated, hence real-time application is not possible on this basis. 2) Distinguishing between the two by analyzing the high-frequency component ratio of a face image and a real human face. The basic assumption of this method is that face image imaging, compared to real face imaging, loses high frequency information. This method can effectively detect low-resolution face images, but does not apply to high-resolution photos. 3) Extracting features from face images and designing classifiers to distinguish between a face image and a real face. This method does not take into account the three-dimensional geometric information in real faces, making it difficult to achieve ideal distinction accuracy. 4) Judgment based on interaction. The system sends all kinds of movement instructions to users randomly (such as turning the head, nodding, opening the mouth, blinking, etc.), users perform the corresponding actions, and the system subsequently distinguishes between real faces and face images by analyzing these actions.
This method requires the analysis of all kinds of actions, requires many complex algorithms for the analysis, and its verification accuracy and efficiency are far from satisfactory. To analyze actions like the opening and closing of the mouth and the blinking of the eyes requires precise tracking of the feature points on human faces, which is a very big challenge. In addition, the method requires users to perform a variety of movements in strict accordance with instructions, which is not friendly to users.
Therefore, there is an urgent need to address the technical problems of existing methods for distinguishing between an image of a face and a real human face. The present invention seeks to provide a method and a system to overcome at least in part some of the aforementioned disadvantages.
SUMMARY OF THE INVENTION
Throughout this document, unless otherwise indicated to the contrary, the terms 'comprising', 'consisting of', and the like are to be construed as non-exhaustive, or in other words, as meaning 'including, but not limited to'.
In accordance with a first aspect of the present invention, there is provided a method for face detection, comprising:
capturing a facial movement by video and processing the video to obtain a plurality of facial images from a plurality of successive video frames;
using the Lambertian model to render each facial image obtained and computing the discrete cosine transform (DCT) to obtain an illumination component of each facial image;
calculating the mean local variance for the illumination components of the facial images; and
comparing the mean local variance with a predetermined threshold (Th) to determine whether the facial image is an image of a real face.
Preferably, each of the facial images obtained is denoted as I_i, where i is a natural number.
Preferably, the illumination component of each face image I_i is obtained as follows. According to the Lambertian model, the face image can be denoted as:

I_i(x,y) = R_i(x,y) · L_i(x,y)    (1)

where R_i is the reflection component, representing the surface reflectance of the facial image; L_i is the illumination component, representing the illumination and shadow of the facial image; and (x,y) represents the coordinates of the pixels in the image. Log-transform the face image I_i to obtain:

f_i(x,y) = log I_i(x,y) = v_i(x,y) + u_i(x,y)    (2)

where f_i, v_i and u_i respectively represent the values of I_i, R_i and L_i over the log-domain, i.e. v_i = log R_i, u_i = log L_i. Compute the DCT of f_i, i.e.:

F_i(s,t) = α(s)α(t) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f_i(x,y) cos[π(2x+1)s/(2N)] cos[π(2y+1)t/(2N)]    (3)

where N is the length and width of the image, α(0) = √(1/N) and α(s) = √(2/N) for s > 0, and the high frequency coefficients of F_i(s,t) are set at 0, i.e.:

F̃_i(s,t) = F_i(s,t) if s < M and t < M; F̃_i(s,t) = 0 otherwise    (4)

where M is a parameter to be defined, which is generally set at 5. Compute the inverse DCT (discrete cosine transform) of the adjusted frequency-domain coefficients F̃_i, i.e.:

f̃_i(x,y) = Σ_{s=0}^{N-1} Σ_{t=0}^{N-1} α(s)α(t) F̃_i(s,t) cos[π(2x+1)s/(2N)] cos[π(2y+1)t/(2N)]    (5)

Take f̃_i as the estimate of the illumination component over the log-domain, i.e.:

û_i(x,y) = f̃_i(x,y)    (6)

Then the illumination component of the facial image can be obtained via inverse logarithmic transformation, i.e.:

L_i(x,y) = exp(û_i(x,y))    (7)
Preferably, M has an empirical value of 5.
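The derivation above is, in effect, homomorphic low-pass filtering with the 2-D DCT: log-transform, keep only the M×M low-frequency DCT coefficients, invert, and exponentiate. A minimal Python sketch, assuming SciPy's orthonormal DCT in place of the explicit sums (the function name `illumination_component` is an illustrative choice, not from the patent):

```python
import numpy as np
from scipy.fft import dctn, idctn

def illumination_component(image, M=5):
    """Estimate the illumination component L_i of a face image I_i.

    The image is log-transformed, its 2-D DCT is taken, all but the M x M
    low-frequency coefficients are zeroed, and the inverse DCT followed by
    exponentiation returns the (slowly varying) illumination estimate.
    Pixel values are assumed strictly positive.
    """
    f = np.log(np.asarray(image, dtype=np.float64))  # f_i = log I_i
    F = dctn(f, norm="ortho")                        # DCT of the log-image
    F_low = np.zeros_like(F)
    F_low[:M, :M] = F[:M, :M]                        # keep only low frequencies
    u_hat = idctn(F_low, norm="ortho")               # estimated log-illumination
    return np.exp(u_hat)                             # L_i = exp(u_hat)

# A constant image is pure low frequency, so it is returned (almost) unchanged:
L = illumination_component(np.full((32, 32), 3.0))
print(np.allclose(L, 3.0))   # True
```

Because the DCT pair used here is orthonormal, the filter is exactly invertible on the retained coefficients; only the choice of M (the patent's empirical value of 5) controls how much spatial detail is attributed to illumination.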
Preferably, the step of calculating the mean local variance for the illumination components of the face images obtained from T successive video frames comprises: dividing the illumination component of each face image equally into a×b image blocks with a×b pixels contained in each block, and using B_{i,j} to represent the j-th image block of the face image from the i-th frame, so that the mean local variance for T successive video frames is:

Avar = (1/(T·a·b)) Σ_{i=1}^{T} Σ_{j=1}^{a·b} var(B_{i,j})    (8)

where var(B_{i,j}) is the variance of the pixel values of the image block B_{i,j}.
Preferably, in the step of comparing the mean local variance (Avar) obtained with a predetermined threshold (Th), if the Avar value is greater than or equal to Th, the face image in the video is an image of a real face.
Preferably, if the Avar value is less than Th, the face image in the video is not an image of a real face.
Preferably, the threshold (Th) is set according to specific image quality, whereby a lower image resolution means a lower threshold Th.
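The block-variance statistic and the threshold comparison described in the preceding clauses can be sketched as follows; `mean_local_variance` and `is_real_face` are illustrative names, the illumination components are assumed to be precomputed 2-D arrays, and the image dimensions are assumed to divide evenly into the a×b block grid:

```python
import numpy as np

def mean_local_variance(illuminations, a=8, b=8):
    """Avar: average of var(B_ij) over all a*b blocks of all T frames."""
    variances = []
    for L in illuminations:                      # one illumination map per frame
        h, w = L.shape
        bh, bw = h // a, w // b                  # block height and width
        for r in range(a):
            for c in range(b):
                block = L[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                variances.append(block.var())    # var(B_ij)
    return float(np.mean(variances))

def is_real_face(illuminations, threshold):
    """Real face if Avar >= Th (the threshold is tuned to image quality)."""
    return mean_local_variance(illuminations) >= threshold

# Perfectly flat illumination (a photo-like case) gives Avar == 0:
flat = [np.full((16, 16), 5.0) for _ in range(3)]
print(is_real_face(flat, threshold=0.01))   # False
```

The threshold dependence on image quality falls out naturally: lower-resolution illumination maps smooth out shadow detail, lowering the achievable Avar, so Th must be set lower for them.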
In accordance with a second aspect of the present invention, there is described a system for face detection, comprising:
an acquisition unit configured to capture a facial movement by video and to process the video to obtain a plurality of facial images from a plurality of successive video frames;
a calculation unit configured to render each facial image obtained using the Lambertian model, compute the discrete cosine transform (DCT) to obtain an illumination component of each facial image, and calculate the mean local variance for the illumination components of the facial images; and
a determination unit configured to compare the mean local variance with a predetermined threshold (Th) to determine whether the facial image is an image of a real face.
Preferably, each of the facial images obtained is denoted as I_i, where i is a natural number.
Preferably, the illumination component of each face image I_i is obtained as follows. According to the Lambertian model, the face image can be denoted as:

I_i(x,y) = R_i(x,y) · L_i(x,y)    (1)

where R_i is the reflection component, representing the surface reflectance of the facial image; L_i is the illumination component, representing the illumination and shadow of the facial image; and (x,y) represents the coordinates of the pixels in the image. Log-transform the face image I_i to obtain:

f_i(x,y) = log I_i(x,y) = v_i(x,y) + u_i(x,y)    (2)

where f_i, v_i and u_i respectively represent the values of I_i, R_i and L_i over the log-domain, i.e. v_i = log R_i, u_i = log L_i. Compute the DCT of f_i, i.e.:

F_i(s,t) = α(s)α(t) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f_i(x,y) cos[π(2x+1)s/(2N)] cos[π(2y+1)t/(2N)]    (3)

where N is the length and width of the image, α(0) = √(1/N) and α(s) = √(2/N) for s > 0, and the high frequency coefficients of F_i(s,t) are set at 0, i.e.:

F̃_i(s,t) = F_i(s,t) if s < M and t < M; F̃_i(s,t) = 0 otherwise    (4)

where M is a parameter to be defined, which is generally set at 5. Compute the inverse DCT (discrete cosine transform) of the adjusted frequency-domain coefficients F̃_i, i.e.:

f̃_i(x,y) = Σ_{s=0}^{N-1} Σ_{t=0}^{N-1} α(s)α(t) F̃_i(s,t) cos[π(2x+1)s/(2N)] cos[π(2y+1)t/(2N)]    (5)

Take f̃_i as the estimate of the illumination component over the log-domain, i.e.:

û_i(x,y) = f̃_i(x,y)    (6)

Then the illumination component of the facial image can be obtained via inverse logarithmic transformation, i.e.:

L_i(x,y) = exp(û_i(x,y))    (7)
Preferably, M has an empirical value of 5.
Preferably, the step of calculating the mean local variance for the illumination components of the face images obtained from T successive video frames comprises: dividing the illumination component of each face image equally into a×b image blocks with a×b pixels contained in each block, and using B_{i,j} to represent the j-th image block of the face image from the i-th frame, so that the mean local variance for T successive video frames is:

Avar = (1/(T·a·b)) Σ_{i=1}^{T} Σ_{j=1}^{a·b} var(B_{i,j})    (8)

where var(B_{i,j}) is the variance of the pixel values of the image block B_{i,j}.
Preferably, in the step of comparing the mean local variance (Avar) obtained with a predetermined threshold (Th), if the Avar value is greater than or equal to Th, the face image in the video is an image of a real face.
Preferably, if the Avar value is less than Th, the face image in the video is not an image of a real face.
Preferably, the threshold (Th) is set according to specific image quality, whereby a lower image resolution means a lower threshold Th.
Other aspects and advantages of the invention will become apparent to those skilled in the art from a review of the ensuing description, which proceeds with reference to the following illustrative drawings of various embodiments of the invention.

DETAILED DESCRIPTION
Particular embodiments of the present invention will now be described. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present invention. Additionally, unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art to which this invention belongs.
The use of the singular forms "a", "an", and "the" includes both singular and plural referents unless the context clearly indicates otherwise. The use of "or" means "and/or" unless stated otherwise. Furthermore, the use of the terms "including" and "having", as well as other forms of those terms, such as "includes", "included", "has", and "have", is not limiting.
In order to overcome at least in part some of the aforementioned disadvantages of existing methods for distinguishing between an image of a face (or a face image) and a real face such as complex calculations, poor adaptability, low distinction accuracy, and low efficiency, the present invention provides a method and a system for face or face in vivo detection based on an illumination component which offers high accuracy, excellent real-time performance and improved user experience.
In order to achieve the above-mentioned technical objectives, the technical solution of the present invention comprises a method for face or face in vivo detection based on an illumination component, wherein the method includes the following steps:
Step 1 : capture a facial movement such as a human head movement in the form of a video and obtain a plurality of face images by cropping the video.
Step 2: use the Lambertian model to render each face image obtained from Step 1, then compute the discrete cosine transform (DCT) to obtain the illumination component of each face image. It would be appreciated that in other embodiments this can be combined with surface shading algorithms, ray casting and the like.
Step 3: based on the results of Step 2, calculate the mean local variance for the illumination components of the face images obtained from several successive video frames.
Step 4: compare the mean local variance with a predetermined or predefined threshold to determine whether the face image is an image of a real face.
Step 1 above requires obtaining face images by cropping the human head movement video; the face images are denoted as Ii.
Step 2 above requires extracting the illumination component of each face image Ii. According to the Lambertian model, the face image Ii can be denoted as:
Ii(x,y) = Ri(x,y)Li(x,y)
where Ri is the reflection component, representing the surface reflectance of the image scene; Li is the illumination component, representing the illumination and shadow of the image scene; and (x,y) represents the coordinates of the pixels in the image. Log-transform the face image Ii to obtain:
fi(x,y) = vi(x,y) + ui(x,y)
where fi, vi and ui respectively represent the values of Ii, Ri and Li over the log-domain, i.e. fi = log Ii, vi = log Ri, ui = log Li.
Compute the DCT (discrete cosine transform) for fi, i.e.:
Fi(s,t) = a(s)a(t) Σ(x=0 to N-1) Σ(y=0 to N-1) fi(x,y) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]
where N is the length and width of the image and a(·) is the usual DCT normalization factor, and the high frequency coefficients of Fi(s,t) are set to 0, i.e.:
F~i(s,t) = Fi(s,t), if s < M and t < M; F~i(s,t) = 0, otherwise
where M is a parameter to be defined, which is generally set at 5. Compute the inverse DCT (discrete cosine transform) for the adjusted frequency domain coefficients F~i, i.e.:
f~i(x,y) = Σ(s=0 to N-1) Σ(t=0 to N-1) a(s)a(t) F~i(s,t) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]
Take f~i as the estimation of the illumination component over the log-domain, i.e.:
ui(x,y) ≈ f~i(x,y)
Then the illumination component of the image domain can be obtained via inverse logarithmic transformation, i.e.:
Li(x,y) = exp(ui(x,y))
Preferably, M has an empirical value of 5. It would be appreciated that M is an adjustable parameter and can take on other values apart from the value 5.
Step 3 requires calculating the mean local variance for the illumination components of the face images obtained from T successive video frames: divide the illumination component of each face image equally into a×b image blocks with a×b pixels contained in each block, and use Bi,j to represent the No. j image block of the face image from the No. i frame; the mean local variance for T successive video frames is therefore:
Avar = (1/(T·K)) Σ(i=1 to T) Σ(j=1 to K) var(Bi,j), where K = a·b is the number of blocks per image
where var(Bi,j) is the variance of the pixel values of the image block Bi,j.
In the method for face in-vivo detection based on the illumination component, Step 4 requires conducting face in-vivo detection: compare the Avar value obtained from Step 3 with the predefined threshold Th. If the Avar value is greater than or equal to Th, the face image in the video is an image of a real face; otherwise it is merely a photographed face image. The threshold Th is set according to specific image quality; a lower image resolution means a lower threshold Th. There is described hereinafter a method for face or face in vivo detection in accordance with various embodiments of the present invention.
In an embodiment of the present invention, a method for face in vivo detection comprises the following steps:
Step 1: in order to make a facial movement video, such as a human head movement video, in actual practice a voice clip or a screen text can be used to instruct a user to shake or nod his/her head at the camera. It would be appreciated that other types of facial movement such as blinking of the eyes, raising of the eyebrows, moving of the lips and the like can also be performed and captured.
Step 2: conduct face detection for each image frame captured by the camera. As a well-known technology, face detection identifies the face in photos (or video frames) and returns the location of the face. Obtain a face region from a video frame through cropping according to the face detection results and scale down the face region into a 100×100 image. Use Ii to represent the face image obtained from the No. i frame through cropping and scaling down, where i is a natural number.
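By way of illustration only, the cropping and scaling of Step 2 can be sketched as follows. The sketch assumes a bounding box (x, y, w, h) has already been returned by some face detector (e.g. an OpenCV cascade classifier, which is not shown here); the function name and the nearest-neighbour scaling are illustrative choices, not a method prescribed by the present description.

```python
import numpy as np

def crop_and_scale(frame, bbox, size=100):
    """Crop a detected face region from a grayscale frame and scale it
    down to a size x size image via nearest-neighbour sampling.

    `bbox` is (x, y, w, h) in the format typically returned by a face
    detector; the detector itself is outside the scope of this sketch.
    """
    x, y, w, h = bbox
    face = frame[y:y + h, x:x + w]
    # Nearest-neighbour index maps for rows and columns of the output.
    rows = (np.arange(size) * face.shape[0] / size).astype(int)
    cols = (np.arange(size) * face.shape[1] / size).astype(int)
    return face[np.ix_(rows, cols)]
```

Any interpolating resize (bilinear, area averaging) would serve equally well; only the fixed 100×100 output size matters for the later block statistics.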
Step 3: acquire the illumination component of each face image Ii. According to the Lambertian model, the face image Ii can be denoted as:
Ii(x,y) = Ri(x,y)Li(x,y)    (1)
where Ri is the reflection component, mainly describing the surface reflectance of the image scene, and Li is the illumination component, mainly describing the illumination and shadow of the image scene. Log-transform each face image Ii to acquire:
fi(x,y) = vi(x,y) + ui(x,y)    (2)
where fi, vi and ui respectively represent the values of Ii, Ri and Li over the log-domain, i.e. fi = log Ii, vi = log Ri, ui = log Li. At this point the values of vi and ui are unknown, and the value of ui needs to be estimated. Compute the DCT (discrete cosine transform) for fi, i.e.:
Fi(s,t) = a(s)a(t) Σ(x=0 to N-1) Σ(y=0 to N-1) fi(x,y) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]    (3)
where N is the length and width of the image and a(·) is the usual DCT normalization factor (a(0) = √(1/N), a(s) = √(2/N) for s > 0), and the high frequency coefficients in Fi(s,t) are set to 0, i.e.:
F~i(s,t) = Fi(s,t), if s < M and t < M; F~i(s,t) = 0, otherwise    (4)
where M is a parameter to be defined, which is generally set at 5. Compute the inverse DCT (discrete cosine transform) for the adjusted frequency domain coefficients F~i, i.e.:
f~i(x,y) = Σ(s=0 to N-1) Σ(t=0 to N-1) a(s)a(t) F~i(s,t) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]    (5)
Formulas (3) to (6) actually demonstrate the low-frequency (low-pass) filtering of the face image fi over the log-domain via the DCT (discrete cosine transform).
It would be appreciated that in other embodiments this can be combined with surface shading algorithms, ray casting and the like. It would also be appreciated that M is an adjustable parameter and can take on other values apart from the value 5.
According to extensive existing research, as the illumination component in images varies slowly, the low frequency component can be used to estimate the illumination component. Therefore, f~i can be used as the estimation of the illumination component, i.e.:
ui(x,y) ≈ f~i(x,y)    (6)
Then the illumination component of the image domain can be obtained by computing the inverse logarithmic transformation (exponential transformation), i.e.:
Li(x,y) = exp(ui(x,y))    (7)
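The log → DCT → coefficient truncation → inverse DCT → exp procedure above can be sketched in a few lines of numpy. This is a non-authoritative sketch: keeping only the top-left M×M block of DCT coefficients is one plausible reading of "set the high frequency coefficients to 0", and the small constant added before the logarithm (to avoid log 0) is an implementation convenience not stated in the description.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix D, so that D @ f @ D.T is the 2-D DCT
    of f and D.T @ F @ D is the inverse transform."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    d = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    d[0, :] = np.sqrt(1.0 / n)  # DC row uses the sqrt(1/N) factor
    return d

def estimate_illumination(img, m=5):
    """Estimate the illumination component L of a square face image via
    log -> 2-D DCT -> zero high-frequency coefficients -> inverse DCT -> exp."""
    f = np.log(img.astype(float) + 1e-6)  # f = log I (epsilon avoids log 0)
    n = f.shape[0]                        # image assumed square (N x N)
    d = dct_matrix(n)
    coeffs = d @ f @ d.T                  # 2-D DCT of f
    low = np.zeros_like(coeffs)
    low[:m, :m] = coeffs[:m, :m]          # keep only low frequencies
    u = d.T @ low @ d                     # inverse DCT -> estimate of log L
    return np.exp(u)                      # back to the image domain
```

On a constant image the estimate reproduces the input (only the DC coefficient survives truncation), which matches the intuition that a flat image is all illumination and no texture.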
Step 4: calculate the mean local variance for the illumination components of the face images obtained from T successive video frames.
Divide the illumination component of each face image equally into 10×10 image blocks with 10×10 pixels contained in each block. Use Bi,j to denote the No. j image block of the face image from the No. i frame; the mean local variance for T successive video frames is therefore:
Avar = (1/(100T)) Σ(i=1 to T) Σ(j=1 to 100) var(Bi,j)
where var(Bi,j) is the variance of the pixel values of the image block Bi,j. In the present embodiment, T is set at 100.
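The block statistics of Step 4 reduce to a reshape and a variance in numpy. A minimal sketch, assuming the illumination components are stacked into an array of shape (T, H, W) with H and W divisible by the block size (10 in the embodiment); the function name is an illustrative choice.

```python
import numpy as np

def mean_local_variance(illums, block=10):
    """Mean local variance Avar over T illumination components.

    `illums` has shape (T, H, W); each component is split into
    non-overlapping block x block tiles and the per-tile pixel
    variances are averaged over all tiles and all frames.
    """
    t, h, w = illums.shape
    # Split each H x W image into an (H/block) x (W/block) grid of tiles.
    tiles = illums.reshape(t, h // block, block, w // block, block)
    # Variance over the pixel axes of each tile, then mean over frames/tiles.
    return tiles.var(axis=(2, 4)).mean()
```

For perfectly flat illumination components (as a planar photo would ideally produce) the result is exactly zero, which is the degenerate low end of the Avar statistic.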
Step 5: conduct face in-vivo detection.
Each face has a unique three-dimensional geometric structure (e.g. distinct unevenness can be seen around the nose, cheekbones, mouth, and eyes); therefore when a person shakes or nods his/her head, the regional shadow on his/her face will experience significant changes, which are properly recorded in the illumination component Li. A photo has a smooth surface, so moving the photo will not lead to significant changes in the regional shadow. As a result, the mean local variance Avar can be used to distinguish between a real face and a face image. If the Avar value is greater than the preset threshold Th, the face in the video can be considered a real face; otherwise it is merely a face image. The threshold Th is set according to the specific image type and image quality; a lower image resolution means a lower threshold Th.
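The decision rule of Step 5 is a single comparison. The sketch below also includes a resolution-scaled threshold; the proportional scaling is a purely hypothetical heuristic of our own (the description only says that lower resolution should mean a lower Th, without giving a formula).

```python
def threshold_for_resolution(base_th, resolution, base_resolution=100):
    """Hypothetical scaling rule (not specified in the description):
    lower the threshold proportionally for lower-resolution face images."""
    return base_th * (resolution / base_resolution)

def is_real_face(avar, th):
    """Step 5 decision: a sufficiently large mean local variance of the
    illumination component indicates a moving real face; below the
    threshold, the input is treated as a photographed face image."""
    return avar >= th
```

In practice base_th would be calibrated offline on genuine and spoofed recordings of comparable image quality.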
In accordance with another aspect of the present invention, there is described a system for face detection in accordance with an embodiment of the present invention. The system comprises an acquisition unit configured to capture a facial movement by video and to process the video to obtain a plurality of facial images from a plurality of successive video frames; a calculation unit configured to render each facial image obtained using the Lambertian model, compute the discrete cosine transform (DCT) to obtain an illumination component of each facial image, and calculate the mean local variance for the illumination components of the facial images; and a determination unit configured to compare the mean local variance with a predetermined threshold (Th) to determine whether the facial image is an image of a real face.
Each of the facial images obtained is denoted as Ii, where i is a natural number.
Each face image Ii, according to the Lambertian model, can be denoted as:
Ii(x,y) = Ri(x,y)Li(x,y)
where Ri is the reflection component, representing the surface reflectance of the facial image; Li is the illumination component, representing the illumination and shadow of the facial image; and (x,y) represents the coordinates of the pixels in the image. Log-transform the face image Ii to obtain:
fi(x,y) = vi(x,y) + ui(x,y)
where fi, vi and ui respectively represent the values of Ii, Ri and Li over the log-domain, i.e. fi = log Ii, vi = log Ri, ui = log Li. Compute the DCT for fi, i.e.:
Fi(s,t) = a(s)a(t) Σ(x=0 to N-1) Σ(y=0 to N-1) fi(x,y) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]
where N is the length and width of the image, and the high frequency coefficients of Fi(s,t) are set to 0, i.e.:
F~i(s,t) = Fi(s,t), if s < M and t < M; F~i(s,t) = 0, otherwise
where M is a parameter to be defined, which is generally set at 5. Compute the inverse DCT (discrete cosine transform) for the adjusted frequency domain coefficients F~i, i.e.:
f~i(x,y) = Σ(s=0 to N-1) Σ(t=0 to N-1) a(s)a(t) F~i(s,t) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]
Take f~i as the estimation of the illumination component over the log-domain, i.e.:
ui(x,y) ≈ f~i(x,y)
Then the illumination component of the facial image can be obtained via inverse logarithmic transformation, i.e.:
Li(x,y) = exp(ui(x,y))
It would be appreciated that in other embodiments this can be combined with surface shading algorithms, ray casting and the like. It would also be appreciated that M is an adjustable parameter and can take on other values apart from the value 5.
The step of calculating the mean local variance for the illumination components of the face images obtained from T successive video frames comprises:
dividing the illumination component of each face image equally into a×b image blocks with a×b pixels contained in each block, and using Bi,j to represent the No. j image block of the face image from the No. i frame; the mean local variance for T successive video frames is therefore:
Avar = (1/(T·K)) Σ(i=1 to T) Σ(j=1 to K) var(Bi,j), where K = a·b is the number of blocks per image
where var(Bi,j) is the variance of the pixel values of the image block Bi,j.
In the step of comparing the mean local variance (Avar) obtained with a predetermined threshold (Th), if the Avar value is greater than or equal to Th, the face image in the video is an image of a real face. If the Avar value is less than Th, the face image in the video is not an image of a real face. The threshold (Th) is set according to specific image quality, whereby a lower image resolution means a lower threshold Th.
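The three-unit architecture (acquisition, calculation, determination) can be sketched structurally as below. This is a wiring diagram in code, not an implementation: each unit is injected as a callable, and all names are illustrative; the actual camera capture, Lambertian rendering and variance computation would plug into the three slots.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class FaceLivenessSystem:
    """Structural sketch of the described system: an acquisition unit,
    a calculation unit (split into its two tasks), and a determination
    unit that applies the threshold Th."""
    acquire: Callable[[], Sequence]          # acquisition unit: video -> face images
    illuminate: Callable[[object], object]   # calculation unit: face image -> L
    avar: Callable[[Sequence], float]        # calculation unit: components -> Avar
    threshold: float                         # Th, chosen per image quality

    def is_real_face(self) -> bool:          # determination unit
        faces = self.acquire()
        components = [self.illuminate(f) for f in faces]
        return self.avar(components) >= self.threshold
```

Keeping the units as injected callables mirrors the claim structure and makes each unit independently replaceable and testable.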
The technical effects of the present invention lie in that the detection method and system of the present invention can distinguish between a real face and a face image safely, and during detection only requires a user to perform a facial movement casually, such as to move his/her head casually, instead of making different movements as strictly required at specific times, offering a more friendly user experience. As the present invention does not rely on the detection method based on facial feature points, several deficiencies such as lower accuracy and complex calculation caused by the detection method based on facial feature points are avoided. The present invention does not involve three-dimensional face reconstruction, hence achieving higher calculation speed and performing real-time processing.
Advantageously, the present invention performs face in vivo detection based on the illumination information in a face image, rather than relying on complex three-dimensional reconstruction or on facial feature points.
It is to be understood that the above embodiments have been provided only by way of exemplification of this invention, and that further modifications and improvements thereto, as would be apparent to persons skilled in the relevant art, are deemed to fall within the broad scope and ambit of the present invention described herein. It is further to be understood that features from one or more of the described embodiments may be combined to form further embodiments of the invention.

CLAIMS
1. A method for face detection, comprising:
capturing a facial movement by video and processing the video to obtain a plurality of facial images from a plurality of successive video frames;
using the Lambertian model to render each facial image obtained and computing the discrete cosine transform (DCT) to obtain an illumination component of each facial image;
calculating the mean local variance for the illumination components of the facial images; and
comparing the mean local variance with a predetermined threshold (Th) to determine whether the facial image is an image of a real face.
2. The method according to claim 1, wherein each of the facial images obtained is denoted as Ii, where i is a natural number.
3. The method according to claim 2, wherein each face image Ii, according to the Lambertian model, can be denoted as:
Ii(x,y)=Ri(x,y)Li(x,y)
where Ri is the reflection component, representing the surface reflectance of the facial image; Li is the illumination component, representing the illumination and shadow of the facial image, and (x,y) represents the coordinates of the pixels in the image; log-transform the face image Ii to obtain:
fi(x,y) = vi(x,y) + ui(x,y)
where fi, vi and ui respectively represent the values of Ii, Ri and Li over the log-domain, i.e. fi = log Ii, vi = log Ri, ui = log Li; compute the DCT for fi, i.e.:
Fi(s,t) = a(s)a(t) Σ(x=0 to N-1) Σ(y=0 to N-1) fi(x,y) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]
where N is the length and width of the image, and the high frequency coefficients of Fi(s,t) are set to 0, i.e.:
F~i(s,t) = Fi(s,t), if s < M and t < M; F~i(s,t) = 0, otherwise
where M is a parameter to be defined, which is generally set at 5; compute the inverse DCT (discrete cosine transform) for the adjusted frequency domain coefficients F~i, i.e.:
f~i(x,y) = Σ(s=0 to N-1) Σ(t=0 to N-1) a(s)a(t) F~i(s,t) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]
take f~i as the estimation of the illumination component, i.e.:
ui(x,y) ≈ f~i(x,y)
then the illumination component of the facial image can be obtained via inverse logarithmic transformation, i.e.:
Li(x,y) = exp(ui(x,y))
4. The method according to claim 3, wherein M has an empirical value of 5.
5. The method according to claim 4, wherein the step of calculating the mean local variance for the illumination components of the face images obtained from T successive video frames comprises: dividing the illumination component of each face image equally into a×b image blocks with a×b pixels contained in each block, and using Bi,j to represent the No. j image block of the face image from the No. i frame; the mean local variance for T successive video frames is therefore:
Avar = (1/(T·K)) Σ(i=1 to T) Σ(j=1 to K) var(Bi,j), where K = a·b is the number of blocks per image
where var(Bi,j) is the variance of the pixel values of the image block Bi,j.
6. The method according to claim 5, wherein in the step of comparing the mean local variance (Avar) obtained with a predetermined threshold (Th), if the Avar value is greater than or equal to Th, the face image in the video is an image of a real face.
7. The method according to claim 6, wherein if the Avar value is less than Th, the face image in the video is not an image of a real face.
8. The method according to any one of the preceding claims, wherein the threshold (Th) is set according to specific image quality, whereby a lower image resolution means a lower threshold Th.
9. A system for face detection, comprising:
an acquisition unit configured to capture a facial movement by video and to process the video to obtain a plurality of facial images from a plurality of successive video frames;
a calculation unit configured to render each facial image obtained using the Lambertian model, compute the discrete cosine transform (DCT) to obtain an illumination component of each facial image, and calculate the mean local variance for the illumination components of the facial images; and
a determination unit configured to compare the mean local variance with a predetermined threshold (Th) to determine whether the facial image is an image of a real face.
10. The system according to claim 9, wherein each of the facial images obtained is denoted as Ii, where i is a natural number.
11. The system according to claim 10, wherein each face image Ii, according to the Lambertian model, can be denoted as:
Ii(x,y) = Ri(x,y)Li(x,y)
where Ri is the reflection component, representing the surface reflectance of the facial image; Li is the illumination component, representing the illumination and shadow of the facial image, and (x,y) represents the coordinates of the pixels in the image; log-transform the face image Ii to obtain:
fi(x,y) = vi(x,y) + ui(x,y)
where fi, vi and ui respectively represent the values of Ii, Ri and Li over the log-domain, i.e. fi = log Ii, vi = log Ri, ui = log Li; compute the DCT for fi, i.e.:
Fi(s,t) = a(s)a(t) Σ(x=0 to N-1) Σ(y=0 to N-1) fi(x,y) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]
where N is the length and width of the image, and the high frequency coefficients of Fi(s,t) are set to 0, i.e.:
F~i(s,t) = Fi(s,t), if s < M and t < M; F~i(s,t) = 0, otherwise
where M is a parameter to be defined, which is generally set at 5; compute the inverse DCT (discrete cosine transform) for the adjusted frequency domain coefficients F~i, i.e.:
f~i(x,y) = Σ(s=0 to N-1) Σ(t=0 to N-1) a(s)a(t) F~i(s,t) cos[(2x+1)sπ/2N] cos[(2y+1)tπ/2N]
take f~i as the estimation of the illumination component, i.e.:
ui(x,y) ≈ f~i(x,y)
then the illumination component of the facial image can be obtained via inverse logarithmic transformation, i.e.:
Li(x,y) = exp(ui(x,y))
12. The system according to claim 11, wherein M has an empirical value of 5.
13. The system according to claim 12, wherein the step of calculating the mean local variance for the illumination components of the face images obtained from T successive video frames comprises:
dividing the illumination component of each face image equally into a×b image blocks with a×b pixels contained in each block, and using Bi,j to represent the No. j image block of the face image from the No. i frame; the mean local variance for T successive video frames is therefore:
Avar = (1/(T·K)) Σ(i=1 to T) Σ(j=1 to K) var(Bi,j), where K = a·b is the number of blocks per image
where var(Bi,j) is the variance of the pixel values of the image block Bi,j.
14. The system according to claim 13, wherein in the step of comparing the mean local variance (Avar) obtained with a predetermined threshold (Th), if the Avar value is greater than or equal to Th, the face image in the video is an image of a real face.
15. The system according to claim 14, wherein if the Avar value is less than Th, the face image in the video is not an image of a real face.
16. The system according to any one of claims 9 to 15, wherein the threshold (Th) is set according to specific image quality, whereby a lower image resolution means a lower threshold Th.
PCT/SG2016/050543 2015-11-04 2016-11-04 Method and system for face in vivo detection Ceased WO2017078627A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SG11201803167PA SG11201803167PA (en) 2015-11-04 2016-11-04 Method and system for face in vivo detection
PH12018500945A PH12018500945A1 (en) 2015-11-04 2018-05-02 Method and system for face in vivo detection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510742510.0 2015-11-04
CN201510742510.0A CN105320947B (en) 2015-11-04 2015-11-04 A kind of human face in-vivo detection method based on illumination component
