CN105989346B - Construction method of online shopping mobile phone payment system - Google Patents
- Publication number: CN105989346B (application CN201510090743.7A)
- Authority
- CN
- China
- Legal status: Active (an assumption by Google Patents, not a legal conclusion)
Landscapes
- Collating Specific Patterns (AREA)
- Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
Abstract
The invention, in the field of image processing, provides a method of constructing a mobile phone payment system for online shopping of high-value commodities. The method comprises a step of selecting the commodity to be purchased, a step of selecting a credit card on the mobile phone, a step of automatically entering face-brushing authentication, and a mobile phone payment step. The system introduces the theory of probability-scale distance, which converts the face-brushing image directly into a code that can serve directly as the mobile payment password; the code transmits quickly over the network, the system construction needs few changes, and investment cost is reduced. The face-brushing or voiceprint payment code is stable, and the system's adaptive learning function allows its performance to improve continuously. Face-brushing payment includes liveness recognition, preventing an impostor from cheating payment with a photograph. Consistency authentication between the credit card and the cardholder is solved, giving the mobile phone payment system high security.
Description
[ technical field ]
The invention relates to a method, in the field of image processing, for constructing an online-shopping mobile phone payment system.
[ background of the invention ]
Recently, third-party payment for online shopping has broken through barriers that stood in the financial field for many years, and the formation of a new, network-centric financial system has become inevitable. At the core of the resulting competition across all sectors of society is mobile phone payment technology. In China, the first technology applied to mobile payment was the widely familiar two-dimensional bar code (non-patent document 1).
The release of the iPhone 6 with its NFC (Near Field Communication) function has focused attention on mobile phone electronic payment systems based on RFID technology. Such payment is already widely used in convenience stores for items such as tickets, magazines and snacks, and a large number of patent documents have been published in the meantime. A representative patent application is the "payment system and payment terminal" patent filed by a Japanese electrical-equipment company (patent document 1).
[ patent document ]
[ patent document 1 ] Japanese Laid-Open Patent Publication No. 2014-78074
[ non-patent document 1 ] How to use the WeChat two-dimensional-code payment function
(http://news.mydrivers.com/l/198/198121.htm)
Accessed: 24 January 2015
The two-dimensional-code payment described in non-patent document 1 was declared insecure by a state-authorized financial institution, even as solutions for mobile payment by two-dimensional code continued to be proposed.
The mobile payment system of patent document 1 exchanges codes by wireless communication, and its security problem is not completely solved; in particular, when the mobile phone is lost, its legitimate holder can suffer a great loss. This payment method is therefore suitable only for small payments.
[ summary of the invention ]
The first object of the present invention is to overcome the defects of the prior art by providing a method that specifies, in a probability-scale-distance space, the position parameters of the randomly distributed positions and sizes of all facial organs, and fixes the fuzzy information of organ position and size through human intervention. This yields a more stable method of composing a mobile phone face-brushing payment system, one that considers both the probabilistic information of the position and size distributions of the facial organs and their fuzzy information.
The second object of the present invention is to provide, for mobile payment where personal privacy must be considered, a method of constructing a mobile phone voiceprint payment system that realizes more stable voiceprint authentication through probability-scale clustering calculation.
The third object of the invention is to provide, against the potential safety hazard of personal information being stolen over NFC communication, a method of constructing a mobile phone "light" payment system that uses 3D codes for information interaction.
In order to solve the above problems, the following technical solutions are proposed:
A method of constructing a mobile phone payment system for online shopping of high-value commodities, characterized by comprising the following steps:
(1) the mobile phone connects directly to the network, or connects to an existing system on a shopping website;
(2) after the face-brushing authentication step is entered automatically, a recognition window pops up on the mobile phone screen, and the cardholder aligns his or her face with the window;
(3) mobile phone face-brushing authentication is then performed quickly using the local features of the face, the global features, or a fusion of the two.
The local features of the face are obtained by at least one of the following methods: extracting feature values from the shape information of the face; partitioning Gabor features by spatial position and concatenating the features within each block into a feature vector — that is, extracting local information of the face, where the result for each part is its feature value and the feature values of all parts form a set of numerical vectors reflecting the features of each part of the face; or extracting feature values from the skin-color information of the face.
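The block-wise local-feature idea above — partition the image, compute one value per block, concatenate into a feature vector — can be sketched as follows. Plain per-block gray-level means stand in for the patent's Gabor responses purely to keep the sketch self-contained; the function name and toy image are illustrative, not from the patent.

```python
def block_features(image, block_size):
    """image: 2-D list of gray values; returns per-block means concatenated
    into one feature vector (one value per spatial block)."""
    h, w = len(image), len(image[0])
    features = []
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            block = [image[y][x]
                     for y in range(by, min(by + block_size, h))
                     for x in range(bx, min(bx + block_size, w))]
            features.append(sum(block) / len(block))  # one value per block
    return features

# 4x4 toy "face image" split into 2x2 blocks -> a 4-element feature vector
img = [[10, 10, 50, 50],
       [10, 10, 50, 50],
       [90, 90, 30, 30],
       [90, 90, 30, 30]]
vec = block_features(img, 2)
```

In a real system each block would carry a multi-dimensional Gabor response rather than a single mean, but the concatenation step is the same.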
Moreover, the global features refer to: feature points of the five facial organs; feature values of the size information of the five facial organs; or feature values extracted from the frequency-space information of the face.
The local features of the face also refer to: the density regularities of the pixel distribution of the cardholder's facial image under a given color and under a given gray value. A probability-scale self-organizing algorithm is introduced to extract the position information of the facial organs automatically.
The facial feature information is then converted directly, by the probability self-organizing algorithm, into a feature vector with the character of an image code, yielding the feature values closest to the parent population. Fuzzy-mathematics theory is also introduced: using several membership functions defined from human experience, the obtained face information is converted by human intervention into numerical values between 0 and n.
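A minimal sketch of the membership-function step described above: a triangular membership function (a common choice in fuzzy mathematics, assumed here — the patent does not fix the function's shape) maps a fuzzy facial measurement to a value between 0 and n. The breakpoints a, b, c are illustrative placeholders for values an expert would define.

```python
def triangular_membership(x, a, b, c):
    """Degree of membership in [0, 1]: zero outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

def to_code_value(x, a, b, c, n):
    """Quantize the membership degree of measurement x to an integer in 0..n,
    i.e. the 'numerical value between 0 and n' of the text."""
    return round(triangular_membership(x, a, b, c) * n)
```

Several such functions, one per fuzzy facial attribute, would together turn the expert's vague knowledge ("the eyes are fairly wide-set") into the stable numeric components of the payment code.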
The face-brushing or voiceprint payment code is generated by liveness recognition on the face-brushing image, using at least one characteristic of a living body: eye-blinking motion, mouth opening and closing, minute muscle changes when the face smiles, pupil changes, the acceleration of minute shaking of the face, or the color of the face.
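The blink cue listed above can be illustrated with a simple sketch: a photograph yields a flat "eye-openness" signal over the captured frames, while a live face produces dips. The openness values and threshold below are hypothetical stand-ins for a real eye-region measurement; this is an illustration, not the patent's actual method.

```python
def count_blinks(openness, threshold=0.2):
    """Count downward crossings of an eye-openness signal below threshold;
    each crossing is treated as one blink."""
    blinks, below = 0, False
    for value in openness:
        if value < threshold and not below:
            blinks += 1
            below = True
        elif value >= threshold:
            below = False
    return blinks

def is_live(openness, min_blinks=1):
    """A static photo never blinks, so zero blinks fails the check."""
    return count_blinks(openness) >= min_blinks
```

A production system would combine several of the listed cues (blink, mouth motion, micro-shake acceleration) rather than rely on any single one.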
The facial feature information is composed of the local feature information and the global feature information of the face-brushing image.
[ noun interpretation ]
[ Probability Scale Distance ]
Given a sequence g1, g2, …, gζ with probability distribution G ∈ gf (f = 1, 2, …, ζ), let A(G) denote the central value of the set and M[G, A(G)] the probability scale about that central value. Using the central value A(G(n-1)) obtained at the (n−1)-th self-organizing iteration and the radius M[G(n-1), A(G(n-1))] around it, let G(n) ∈ gf (f = 1, 2, …, k) denote the array g1, g2, …, gk of the k values whose probability distribution falls within that radius. Then
[ EQUATION 1 ]
A(n) = A(G(n))
M(n) = M[G(n), A(G(n))]
G(n) = G{A(G(n-1)), M[G(n-1), A(G(n-1))]}
Here the probability scale M(n) is a statistical parameter that can take many forms, for example under a normal distribution, an exponential distribution, an Erlang distribution, a Weibull distribution, a triangular distribution, a beta distribution, and so on. Under a normal distribution, for instance, the probability scale M(n) can be taken as the dispersion (standard deviation).
The central value obtained after several iterations of formula 1 is the estimate closest to the parent population of the sequence g1, g2, …, gζ, and the final radius is its probability scale. Taking the final central value as reference, every array value g1', g2', …, gk' falling within the probability scale can be regarded as belonging to the true distribution of g1, g2, …, gζ.
Using the calculation methods of formulas 1, 4, 5 and 6, a distance value closest to the parent population can be obtained between the feature vectors of two probability distributions. This gives the most stable possible result for converting a face image into an ID that can authenticate the cardholder's identity in the complicated shooting environment of a mobile phone.
[ description of the drawings ]
FIG. 1 is a schematic view of a mobile phone payment operation process for shopping of large amount of commodities
FIG. 2 is a schematic view of a mobile phone payment operation process for shopping of small amount of commodities
FIG. 3 is a schematic diagram of off-line shopping handset face brushing payment and voiceprint payment
FIG. 4 is a schematic illustration of "light" payment for a cell phone credit card savings card
FIG. 5 is a schematic illustration of a cash deposit of a cell phone savings card
FIG. 6 is a flow of a probability scale distance self-organizing process
FIG. 7 is a schematic diagram of a method for setting facial recognition feature points of five sense organs
FIG. 8 is a schematic diagram of a method for defining feature values of facial recognition facial feature size information
FIG. 9 is a diagram illustrating feature values extracted from eye position and size information of a person
FIG. 10 is a diagram illustrating feature value extraction by shape information of a human face
FIG. 11 is a diagram illustrating feature value extraction by skin color information of a human face
FIG. 12 is a diagram illustrating feature value extraction by information of face frequency space
FIG. 13 is a schematic diagram of a processing method for using the information of the frequency space of the face as the whole information of the face
FIG. 14 is a diagram illustrating a method for digitizing voiceprint information
FIG. 15 is an example of defining shape information of a face as a membership function
FIG. 16 is a flow chart of a face-brushing or voiceprint code adaptive learning processing method
FIG. 17 is a diagram showing one example of living body image recognition
FIG. 18 is a schematic diagram of an optically recognizable 3D screen-code electronic image
[ detailed description of the embodiments ]
The embodiments of the present invention are described in further detail below with reference to the attached drawings; the embodiments are illustrative rather than restrictive.
FIG. 1 is a schematic diagram of a mobile phone payment operation process for shopping with large amount of commodities.
As shown in fig. 1, in step (a), selecting the commodity on the internet: the cardholder accesses the internet directly and selects the desired article on a shopping website such as Taobao or Jingdong, using that website's existing system and following its procedures.
Online shopping platforms such as Taobao and Jingdong each have their own protocol standards; how to unify these standards so that mobile phone payment becomes universal is an unsolved problem that seriously hinders the spread of card-free mobile payment.
At present, docking with WeChat or Alipay is possible through the plug-ins they provide, but having to install a separate plug-in for each website to be docked is very troublesome and inconvenient for users.
In step (b), selecting a credit card on the mobile phone: the choice of which credit card or bank deposit card to use is made directly on the phone. One mobile phone may support payment with several different credit cards, and there may be several settlement banks. If only one credit card is held, this screen can by default be skipped, going directly to face authentication.
After step (c), face-brushing authentication, is entered automatically, a recognition window pops up on the mobile phone screen; as soon as the cardholder aligns his or her face with the window, mobile phone face-brushing authentication fusing the local and global features of the face is performed quickly. To prevent an impostor from cheating the authentication with a photograph, the invention performs face-brushing authentication with a liveness recognition method.
The learning data for mobile phone face-brushing payment is stored on the user's own phone rather than on a server, and only a single code needs to be sent from the phone. Processing is therefore faster, personal information is protected by the user's own phone, and security is improved.
And (d) sending the mobile phone payment data to a bank server to finish authentication and payment operation.
Characteristics of face-brushing authentication that introduces probability-scale-distance theory:
The face-brushing image can be converted directly into a code that serves directly as the mobile payment password. The code transmits quickly over the network, the original system needs no major changes in its construction, and investment cost is reduced.
The face-brushing or voiceprint payment code is stable, and the system's adaptive learning function allows its performance to improve continuously. Face-brushing payment includes liveness recognition, preventing an impostor from cheating payment with a photograph. Consistency authentication between the credit card and the cardholder is solved, giving the mobile phone payment system very high security.
FIG. 2 is a schematic diagram of the mobile phone payment operation process for low-value shopping. As shown in fig. 2, low-value online shopping with mobile phone voiceprint payment likewise takes 3 steps. In step (a), selecting the commodity on the network, the desired commodity is selected on the online shopping platform, just as for the high-value shopping of fig. 1. The invention provides a standard docking plug-in for docking with all major websites; once the plug-in is installed on a website, docking is complete.
In step (b), selecting a credit card on the mobile phone, as in the high-value case of fig. 1, the choice of credit card or bank card is made directly on the phone. If only one credit card is held, this screen can by default be skipped, going directly to the voiceprint authentication screen.
After step (c), voiceprint authentication, is entered automatically, a voiceprint authentication prompt pops up on the mobile phone screen. The cardholder speaks a sentence into the phone's microphone — for example "I agree to pay" — and voiceprint mobile payment authentication with adaptive learning is performed; the operation is convenient and the authentication effect good. Whether voiceprint or face-brushing authentication is used can be chosen by the customer in a system setup menu.
Voiceprint authentication can identify whether the speaker is a living person by randomly asking questions about content the cardholder registered in advance and checking the answers.
Liveness recognition for voiceprint authentication can also be achieved by checking whether several utterances of the password are in exactly the same state, as an identical recording would be.
And (d) sending the mobile phone payment data to a bank server to finish authentication and payment operation.
Characteristics of voiceprint authentication based on probability-scale-distance theory:
The voiceprint information can be converted directly into a code that serves directly as the mobile payment password. The code transmits quickly over the network, the original system needs no major changes in its construction, and investment cost is reduced.
In this mobile phone credit/deposit card payment system that converts voiceprint information directly into a code, the code is stable, and the adaptive learning function allows the recognition performance of the system to improve continuously.
The system also has liveness recognition, preventing an impostor from cheating payment with a recording of the cardholder. It solves consistency authentication between the paying credit card and the cardholder, giving the mobile phone credit/deposit card payment system very high security.
The system is simple to operate, raises no personal-privacy problem, and can be popularized in every country of the world.
FIG. 3 is a schematic diagram of off-line shopping handset face-brushing payment and voiceprint payment.
As shown in fig. 3, offline shopping with mobile phone face-brushing or voiceprint payment takes the same 3 steps as online shopping:
In the shopping settlement step: after selecting the goods in a supermarket and walking to the checkout, the cashier enters each item and the prices are displayed on the POS screen. Once the cardholder opens the mobile payment APP, the list and prices of the purchased goods are shown on the cardholder's phone screen via sound-wave, Bluetooth or WiFi communication. After the cardholder confirms that the prices are correct, the step of selecting the credit card to pay with begins immediately.
In the step of selecting the credit card to pay with: this step may be skipped if the cardholder has only one credit card.
In the cardholder authentication step: the cardholder faces his or her own phone; after authentication succeeds — either mobile phone face-brushing authentication fusing the local and global features of the face, or voiceprint authentication with adaptive learning — the phone automatically sends the number of the held credit card to the POS machine, which receives it and completes the whole mobile payment process. After authentication the phone communicates directly with the checkout POS via Bluetooth or WiFi, sending the card number and password of the credit or deposit card; on receiving the information the POS transacts directly with the bank. Alternatively the phone can communicate directly with the cash register: once the register has passed the amount to be paid to the phone, the phone transacts directly with the bank.
If the cardholder chooses voiceprint payment authentication, speaking the payment password into the phone — for example "approve payment" — completes the whole mobile payment process.
Figure 4 is a schematic illustration of "light" payment for a cell phone credit card deposit card.
As shown in fig. 4, in the settlement step, the cashier enters the goods the customer is purchasing into the cash register with a code scanner, and an optically recognizable 3D screen-code electronic image is displayed on the register's screen.
In the mobile phone payment authentication step, the customer aims the camera of the phone holding the credit or deposit card at the 3D screen-code image on the checkout screen; the commodity settlement information is thereby received on the phone, realizing the "light" payment authentication of the mobile phone credit or deposit card.
in the payment step of the mobile phone credit card saving card, after a customer confirms the purchased goods through the mobile phone, the customer selects the credit card or the saving card through the mobile phone, and if the cardholder only has one credit card or saving card, the step can be skipped.
The mobile phone automatically sends the card number of the held credit card or deposit card to the POS machine, if the password is needed, the password can be generated through the face brushing or voiceprint authentication of the mobile phone, and the POS machine receives the card number of the credit card or deposit card to complete the whole process of mobile phone payment. After authentication, the mobile phone directly communicates with the POS of the cashier desk through Bluetooth or WiFi, the card number and the password of a credit card or a deposit card of the mobile phone are sent to the POS machine, and the POS machine directly conducts transaction with a bank after receiving information. The mobile phone can also directly communicate with a cash register, and the mobile phone can directly transact with a bank after the mobile phone transmits the paid amount to the cash register.
If the cardholder chooses the mobile phone voiceprint payment authentication, the cardholder can report the payment password on the mobile phone, for example, the whole process of mobile phone payment can be completed after 'can pay'.
The 3D screen-code image is an electronic image displayed on the flat screen of a POS machine or mobile phone; it includes two-dimensional electronic images, three-dimensional electronic images with multi-value gray levels, and flickering three-dimensional electronic images with multi-value gray levels.
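As a hedged sketch of the multi-value gray-level idea — not the patent's actual coding scheme, whose details are not given here — the following packs a payload into a sequence of gray-level cells (4 levels = 2 bits per cell), the kind of pattern a checkout screen could display and a phone camera could decode. The level count and cell layout are illustrative assumptions.

```python
def encode(payload: bytes, levels: int = 4):
    """Split each payload byte into cells of log2(levels) bits, each cell
    becoming one gray level in 0..levels-1."""
    bits_per_cell = levels.bit_length() - 1   # 2 bits for 4 levels
    cells = []
    for byte in payload:
        for shift in range(8 - bits_per_cell, -1, -bits_per_cell):
            cells.append((byte >> shift) & (levels - 1))
    return cells

def decode(cells, levels: int = 4) -> bytes:
    """Reassemble bytes from the gray-level cells (inverse of encode)."""
    bits_per_cell = levels.bit_length() - 1
    per_byte = 8 // bits_per_cell
    out = bytearray()
    for i in range(0, len(cells), per_byte):
        byte = 0
        for cell in cells[i:i + per_byte]:
            byte = (byte << bits_per_cell) | cell
        out.append(byte)
    return bytes(out)
```

Adding a time dimension — cycling several such frames, as the "flickering" variant describes — would multiply the channel capacity and make a static screenshot useless to a copier.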
Because the information embedded in the 3D screen code displayed on the phone screen is dynamic 3D information, it cannot be copied by an impostor, and it offers far higher security than ordinary two-dimensional-code payment.
When the cardholder pays with the 3D-screen-coded mobile phone "light" payment, the phone and the checkout exchange information over the light medium, so the interaction is difficult for an illegal person hiding nearby to intercept; this is a security characteristic of the system.
In addition, the cardholder will find the 3D-screen-code "light" payment function simple to operate and easy to master.
Because the 3D-screen-code mobile phone "light" payment system is very close to the existing two-dimensional-code mobile payment system, shopping websites currently using two-dimensional codes for mobile payment can adapt to it easily, without adding much equipment, and with immediately visible results.
The 3D screen code is identified without adding any equipment, and the method has the unique advantage of popularizing the payment system of the mobile phone credit card saving card.
Figure 5 is a schematic illustration of a mobile phone savings card cash deposit.
When depositing cash with the mobile phone deposit card there are two modes. In the first, the phone communicates with the ATM via Bluetooth or WiFi, informing it of the deposit-card number; the operation is then selected on the ATM keypad, realizing the deposit transaction.
In the second, a 3D screen-code recognizer is installed on the ATM. After the cash-deposit function is selected on the phone, a 3D screen code is displayed on the phone screen; placing the phone on the ATM's 3D screen-code recognizer transmits the deposit-card number to the ATM for the cash-deposit transaction.
When withdrawing cash with the mobile phone deposit card, the cardholder first enters the deposit-card APP and selects the card type, as in the mobile payment process; if there is only one deposit card, the process goes directly to mobile phone face-brushing or voiceprint authentication. After mobile phone face-brushing authentication fusing the local and global features of the face, or voiceprint authentication with adaptive learning, succeeds, the withdrawal operation can begin. The withdrawal amount can be entered either on the ATM or on the phone, and the withdrawal transaction is completed through communication between the phone and the ATM.
FIG. 6 is a flow of a probability scale distance self-organizing process.
As shown in fig. 6: given a sequence g with probability distribution1,g2,…g1Is G e Gf(f ═ 1, 2, …, l), the probability-based metric self-organizing algorithm consists of the following 4 steps:
step 1: a pretreatment step: m(0)As an initialization probability scale, A(0)As an initial central value of the self-organization, V is a convergence value of the self-organization, MN is a maximum number of times of the self-organization, and initially n is 0 as a current number of times of the self-organization.
Regarding the method of determining M(0) as the initial probability scale and A(0) as the initial self-organizing center value, no strict setting is required: it is enough that, by manual estimation, at least part of the values of the final range are included within the range of the initial probability scale M(0). The larger the initial probability scale M(0), the longer the calculation takes; the smaller it is, the shorter the calculation, but a correct result may not be obtained.
With respect to the method of setting the convergence value V: the larger the convergence value V, the less accurate the result may be; the smaller the convergence value, the longer the calculation takes. A suitable setting is about 10% of the final self-organized probability scale.
Regarding the setting method of the maximum number of self-organizing times MN, generally 5 to 10 times are sufficient.
Step 2: self-organizing step: at the n-th self-organizing iteration, take A(n) as the self-organizing center value and the probability scale M(n) as the radius; with the center value A(n) as reference, calculate over all values gf (f = 1, 2, …, ζ) within the radius the average value V(n+1) and the dispersion value S(n+1), and set V(n+1) = A(n+1), S(n+1) = M(n+1), n = n + 1.
[ equation 2 ]  V(n+1) = (1/ζ) · Σf gf, the average over the ζ values gf within the radius
[ equation 3 ]  S(n+1) = √( (1/ζ) · Σf (gf − V(n+1))² ), the dispersion of those values
Step 3: self-organization judgment step: judge whether the maximum number of self-organizing iterations has been reached (n ≥ MN) or the self-organization has converged (M(n) − M(n+1) < V). If YES, the next self-organizing iteration is not performed and the self-organization ends, jumping to step 4. If NO, go to step 2 and continue the self-organizing process.
Step 4: the self-organizing process is ended.
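As a minimal sketch, the four steps above might look as follows in Python, assuming one-dimensional samples and a standard deviation as the probability scale (all names are illustrative, not from the patent):

```python
import statistics

def self_organize(data, a0, m0, v, mn):
    """Probability-scale self-organizing loop: repeatedly take the mean
    (new centre A) and standard deviation (new scale M) of the samples
    falling within radius M of A, until the scale changes by less than v
    or mn iterations are reached."""
    a, m = a0, m0
    for _ in range(mn):                      # step 3: at most MN iterations
        inside = [g for g in data if abs(g - a) <= m]  # values within radius
        if len(inside) < 2:
            break
        a_next = statistics.mean(inside)     # step 2: new centre A(n+1)
        m_next = statistics.pstdev(inside)   # step 2: new scale M(n+1)
        converged = abs(m - m_next) < v      # step 3: convergence test
        a, m = a_next, m_next
        if converged:
            break
    return a, m                              # step 4: final centre and scale

# Samples clustered near 5.0 plus one outlier; the loop shrinks onto the cluster.
centre, scale = self_organize([4.8, 5.0, 5.2, 4.9, 5.1, 50.0],
                              a0=4.0, m0=10.0, v=0.01, mn=10)
```

Note how a generous initial scale (10.0) first captures the dense cluster, then the scale contracts toward the cluster's own spread, mirroring the remark above that the initial setting need not be strict.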
The probability scale M(n) is a probability-statistical parameter with multiple possible forms, such as a normal distribution, an exponential distribution, an Erlang distribution, a Weibull distribution, a triangular distribution, a beta distribution, and the like. For example, for a normal distribution the probability scale M(n) can be regarded as the dispersion value of the distribution.
The invention provides an authentication code that converts a face image into a mobile phone payment code, realized by two methods. In the first method, feature values are generated from the local information of the face; the feature values form a numerical vector, each numerical vector is turned into a feature vector through membership functions, and the images of different faces form a feature vector space.
The method for generating each characteristic value aiming at the local information of the human face comprises the following steps:
the present invention uses the position information of the face as important information for specifying the card holder, and therefore the position information of the face is defined as follows.
Fig. 7 is a schematic diagram of a setting method of facial recognition feature points of five sense organs.
As shown in fig. 7: more than 24 feature points (landmarks) can be formed from the positions of the five sense organs of the face. For example, the left and right corner positions of the left eye constitute the two feature points l1 and l2; the left and right corner positions of the right eye constitute the two feature points l3 and l4; the vertical positions of the left eye and the center of the left eyebrow constitute the two feature points l5 and l6; the vertical positions of the right eye and the center of the right eyebrow constitute the two feature points l7 and l8; the two sides of the nose constitute the two feature points l9 and l10; the vertical positions of the line connecting the two sides of the nose with the two eyes constitute the two feature points l11 and l12; the two sides of the mouth constitute the two feature points l13 and l14; the two sides of the mouth and the middle of the nose constitute the two feature points l15 and l16; the left corner of the left eye and the right corner of the right eye constitute the two feature points l17 and l18; the line connecting the left corner of the left eye with the right corner of the right eye, taken with the vertical line of the forehead, constitutes the two feature points l19 and l20; the extension of the line connecting the feature points on the two sides of the mouth intersects the two sides of the face to constitute the two feature points l21 and l22; and that extended line's intersections with the two sides of the face, taken with the vertical line of the forehead, constitute the two feature points l23 and l24.
Here, the feature point l16 (two sides of the mouth and the middle of the nose) coincides with the feature point l11 (vertical position of the line connecting the two sides of the nose with the two eyes); the two feature points l17 and l18 (left corner of the left eye and right corner of the right eye) coincide with the two feature points l1 and l4; the feature point l19 (line from the left corner of the left eye to the right corner of the right eye, taken with the forehead) coincides with the feature point l12 (vertical position of the line connecting the two sides of the nose with the two eyes); the feature point l20 coincides with the feature point l24 (extension of the line of the mouth-side feature points intersecting the two sides of the face, taken with the vertical line of the forehead); and the feature point l23 (the same extended line, taken with the vertical line of the forehead) coincides with the feature point l15 (two sides of the mouth and the middle of the nose).
That is, l1 and l17 are the left corner position of the left eye; l4 and l18 are the right corner position of the right eye; l12 and l19 are the vertical position of the line connecting the two eyes; l11 and l16 are the center position between the two sides of the nose; l15 and l23 are the center position of the feature points on the two sides of the mouth; and l20 and l24 are the same forehead feature point position. The feature points that can actually describe the facial features are therefore 18.
In practice, only the two feature points l1 and l2, the two feature points l3 and l4, the two feature points l6 and l8, the two feature points l9 and l10, the two feature points l13 and l14, the two feature points l21 and l22, and the feature point l24, 13 feature points in total, need to be extracted.
The straight lines connecting l1 and l2, l3 and l4, l5 and l6, l7 and l8, l9 and l10, l11 and l12, l13 and l14, l15 and l16, l17 and l18, l19 and l20, l21 and l22, and l23 and l24 can then be drawn. The invention takes the lengths of these 12 straight lines in total as the feature information describing the positions of the facial five sense organs, that is, these 12 items of position feature information serve as 12 elements of the feature vector that authenticates the cardholder.
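Once the 24 landmark coordinates are available, forming the 12-element feature vector is a matter of measuring the 12 connecting lines. A sketch with hypothetical coordinates (the dummy landmark positions are illustrative only):

```python
import math

# Hypothetical (x, y) landmark coordinates, indexed l1..l24 as in Fig. 7.
landmarks = {i: (float(i % 5), float(i % 7)) for i in range(1, 25)}  # dummy data

# The 12 landmark pairs whose connecting-line lengths form the feature vector.
PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12),
         (13, 14), (15, 16), (17, 18), (19, 20), (21, 22), (23, 24)]

def landmark_feature_vector(pts):
    """Length of each of the 12 connecting lines, as a 12-element vector."""
    return [math.dist(pts[a], pts[b]) for a, b in PAIRS]

features = landmark_feature_vector(landmarks)
```

In a real pipeline the coordinates would come from the landmark detection described above; only the distance computation is shown here.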
As a method of generating feature values from local information of the human face, the present invention also proposes using the size information of the facial five sense organs as important information for specifying the cardholder; the size information of the face is therefore defined as follows.
Fig. 8 is a schematic diagram illustrating a method for defining feature values of facial recognition facial feature size information.
Fig. 8(a) is a schematic diagram of the dimensions of the human eye, whose size is important information for identifying the cardholder's features. As shown in fig. 8(a): (8-1) is the eyeball of the human eye, which is generally dark, and (8-2) is the white of the human eye, which is generally light. V1 is the width dimension of the eye and V2 is the diameter of the eyeball. The invention takes the eye width V1 as one element of the feature vector that authenticates the cardholder, and the eyeball diameter V2 as another element of that feature vector.
Fig. 8(b) is a schematic diagram of the dimensions of the mouth; the size and thickness of the mouth are important information for identifying the cardholder's features. As shown in fig. 8(b): (8-3) is the lips, typically red in color. V4 is the width dimension of the mouth and V3 is the thickness dimension of the mouth. The invention takes the mouth width V4 as one element of the feature vector that authenticates the cardholder, and the mouth thickness V3 as another element of that feature vector.
For recognition of the cardholder's face, the invention converts the color face image photographed by the mobile phone into color information from which the brightness component is removed, and converts the face image into a code that can specify the cardholder without losing the color information of the image. Because the color information of the face image is highly effective for locating the positions of the facial features, it provides an important basis for extracting the size information of the five sense organs. For example, the eyeball is the darkest color on the face, the white of the eye is the lightest, the lips are red, and so on. Using color information without the brightness component, the five sense organs can be extracted quickly and accurately; at the same time, because the brightness information, which is strongly affected by the mobile phone shooting environment, is not used, the recognition result is little influenced by the environment.
The invention also utilizes the density of the pixel gray scale distribution of facial five sense organ parts under a certain color to extract the information of the five sense organ parts aiming at the identification of the face of a card holder. For example, the eyeball has the highest density in black compared with other five sense organs, and the fundus has the highest density in white compared with other five sense organs. The mouth has the highest density in the red color compared to the other five sense organs.
Utilizing the density regularity of the pixel distribution of the facial feature images at a certain color and gray value, the invention can introduce the probability-scale self-organizing algorithm to extract the facial feature position information automatically. Because the probability-scale self-organization converges on the pixel distribution with the maximum probability at a certain color and gray value, the result maximizing the density of that pixel distribution is obtained.
For example, for the recognition of the human eye feature value, it is convenient to start with the eyeball, because the eyeball is characterized by very high density of black pixels, and the position information and the size information of the human eye can be conveniently extracted by using a probabilistic self-organizing algorithm near the eyeball.
Fig. 9 is a schematic diagram of extracting feature values by eye position and size information of a person.
For extracting the position and size features of the human eye, the probabilistic self-organization introduced here differs slightly from the method of fig. 6 above: the method of fig. 6 operates on one-dimensional data, whereas here, as shown in fig. 9, the distribution center and size of the eyeball are calculated in a two-dimensional space.
As shown in fig. 9(a): as a method of probabilistic self-organization of two-dimensional spatial data, referring to the algorithm shown in fig. 2, an initial center position A(0) = (x0, y0) and an initial probability scale M(0) are first set near the eyeball. Near the position where the eyeball is found, the initial center position A(0) = (x0, y0) and the initial probability scale M(0) are set such that the eyeball is included within the radius M(0) around the center A(0) = (x0, y0). It is not necessary to include all parts of the eyeball completely; even if only some parts are included, the range covered by the probabilistic self-organizing algorithm moves automatically, through repeated self-organizing calculations, to the place where the pixel distribution density is highest.
In this step, as in fig. 6 above, the self-organization convergence value V and the maximum number of self-organizing iterations MN are set, and the current iteration count h is set to 0. The method can be seen in fig. 2.
As shown in fig. 9(b): in the subsequent probabilistic self-organizing calculation, A(h) = (x0, y0)(h) can be calculated with reference to equation 4.
[ EQUATION 4 ]  A(h+1) = (x0, y0)(h+1) = (1/N) · Σ (xi, yj), the average over the N eye pixels I(xi, yj) within radius M(h) of A(h)
Likewise, the probability scale M(h) = S(h)² can be calculated with reference to equation 5.
[ EQUATION 5 ]  S(h+1) = √( (1/N) · Σ ((xi − x0(h+1))² + (yj − y0(h+1))²) ), over the N eye pixels within radius M(h)
Equations 4 and 5 are merely examples, and other similar equations may be used to obtain the desired results with reference to these two equations, which are not listed here.
At the h-th self-organizing iteration, take (x0, y0)(h) as the self-organizing center value and the probability scale M(h) as the radius, and calculate over all eye pixels I(xi, yj) (i = 1, 2, …, k; j = 1, 2, …, l) within the radius the gray-density distribution value S(h+1); then M(h+1) = S(h+1), h = h + 1. As shown in fig. 9(b): as the probability scale M(h) converges, the center position A(h) = (x0, y0)(h) gradually moves to the center of the eyeball.
After n self-organizing calculations of the probability scale, as shown in fig. 9(c), the center position A(h) = (x0, y0)(h) stops at the center of the eyeball, and the probability scale M(h), as the radius of the eyeball, also stops around the eyeball, so that the size and position of the eyeball are accurately calculated.
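A sketch of this two-dimensional variant (our assumption of the details: the centre is the mean of the dark pixels inside the current radius, per equation 4, and the scale is their root-mean-square distance from it, per equation 5):

```python
import math

def locate_eyeball(pixels, a0, m0, v=0.05, mn=10):
    """pixels is a list of (x, y) dark-pixel coordinates; the centre A and
    radius M self-organize toward the densest pixel cluster, as in Fig. 9."""
    (ax, ay), m = a0, m0
    for _ in range(mn):
        inside = [(x, y) for x, y in pixels
                  if math.hypot(x - ax, y - ay) <= m]
        if len(inside) < 2:
            break
        ax = sum(x for x, _ in inside) / len(inside)   # new centre (eq. 4)
        ay = sum(y for _, y in inside) / len(inside)
        m_next = math.sqrt(sum((x - ax) ** 2 + (y - ay) ** 2
                               for x, y in inside) / len(inside))  # eq. 5
        if abs(m - m_next) < v:                        # convergence test
            m = m_next
            break
        m = m_next
    return (ax, ay), m

# Dense disc of "eyeball" pixels around (10, 10) plus one stray pixel.
eye = [(10 + dx, 10 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)] + [(30, 30)]
centre, radius = locate_eyeball(eye, a0=(8.0, 8.0), m0=15.0)
```

Even though the initial centre (8, 8) is off the eyeball, the loop pulls it onto the densest cluster, as the text describes.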
Through probability-scale self-organizing processing similar to the above, the position information, size information, shape information, color information, frequency information and the like of the five sense organs, randomly distributed in the face-brushing image, can all be obtained, yielding stable face-brushing information in the probability-scale distance space; these are not listed one by one here.
Fig. 10 is a schematic diagram of extracting feature values from shape information of a human face.
The shape information of the face is an important feature for distinguishing different people: it changes little with the shooting environment, does not change greatly over a short time, and is not affected by hair style or makeup, so the face shape information is relatively stable.
As shown in fig. 10: (a) shows the shape information of a relatively thin face, (b) the shape information of a normal face, and (c) the shape information of a relatively full face. The facial form of each person differs, and this difference information can be used as a feature value for distinguishing different persons.
Here, the present invention further provides another method for extracting local information of a human face, in which a kernel function of a two-dimensional Gabor Wavelet (GWT) is set as a plane wave constrained by a gaussian envelope function:
[ equation 6 ]  ψv,μ(z) = (‖kv,μ‖²/σ²) · exp(−‖kv,μ‖² ‖z‖² / (2σ²)) · [exp(i · kv,μ · z) − exp(−σ²/2)], where kv,μ = kv · e^(iφμ)
Wherein kv = kmax / f^v represents the frequency (scale) of the kernel function, and φμ represents the direction of the kernel function. By setting different scales and directions, a group of Gabor wavelet kernel functions can be obtained. The feature extraction of the image is completed by convolving each of the Gabor wavelet kernel functions with the image.
In order to extract local information better, the Gabor features are partitioned according to spatial positions, and the features in each block are connected in series to form a feature vector. Thus, for a face image, we can obtain a plurality of feature vectors, which are called Local Gabor Feature Vectors (LGFV).
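A pure-Python sketch of one such kernel; the parameter values kmax = π/2, f = √2, σ = 2π and the orientation step π/8 are conventional choices in the Gabor face-recognition literature, assumed here rather than taken from this document:

```python
import cmath
import math

def gabor_kernel(size, v, mu, k_max=math.pi / 2, f=math.sqrt(2), sigma=2 * math.pi):
    """One 2-D Gabor kernel: a plane wave of frequency k_v = k_max / f**v in
    direction phi_mu = mu * pi / 8, restricted by a Gaussian envelope, with
    the DC component removed by the exp(-sigma**2 / 2) term."""
    k = k_max / (f ** v)
    phi = mu * math.pi / 8
    kx, ky = k * math.cos(phi), k * math.sin(phi)
    half = size // 2
    kern = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            gauss = (k * k / sigma ** 2) * math.exp(
                -k * k * (x * x + y * y) / (2 * sigma ** 2))   # envelope
            wave = cmath.exp(1j * (kx * x + ky * y)) - math.exp(-sigma ** 2 / 2)
            row.append(gauss * wave)                            # complex response
        kern.append(row)
    return kern

# For example, 5 scales x 8 orientations would give a 40-kernel Gabor bank.
kern = gabor_kernel(size=9, v=0, mu=0)
```

Convolving each kernel of the bank with the face image, then concatenating the responses block by block, yields the Local Gabor Feature Vectors described above.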
The above figs. 8 to 10 extract the local information of the face; the results obtained are the feature values of each part, and the feature values of the parts are formed into a set of digitized vectors capable of reflecting the features of each part of the face.
the whole information of the face means that each dimension of the feature vector of the face contains information of all parts (even all pixels) on the face image, and therefore the whole attribute of the face is reflected. The skin color information of the human face is a specific characteristic value.
Each dimension of the local features only corresponds to one local area on the face image, so that the emphasis is placed on extracting the detail features of the face. The invention integrates the characteristic vector obtained by the whole information of the face and the characteristic vector formed by the local information, and can reflect the whole rough information of the face and the local fine information of the face. The authentication code of the face can be obtained more accurately.
Fig. 11 is a schematic diagram of extracting feature values from skin color information of a human face.
The skin color information of the human face is important information for rapidly distinguishing different people. As shown in fig. 11: (a) represents the face of a yellow-skinned person, (b) the face of a white-skinned person, and (c) the face of a black-skinned person. The RGB color face image read by the mobile phone camera is converted into the Lab color space, the brightness component L is removed, and the face image is represented by a and b; the gray values of maximum distribution density of the a and b channels are calculated separately using the probability-scale self-organizing algorithm described above, and these two gray values, as the face skin color information, are passed through a membership function to obtain the face skin color feature value.
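The densest-chroma extraction just described can be sketched as follows. This is a simplified stand-in: the patent uses the probability-scale self-organizing algorithm, while here the densest value is taken as a plain histogram mode, and all sample values are hypothetical; the RGB-to-Lab conversion is assumed to have happened upstream:

```python
from collections import Counter

def densest_value(channel):
    """Value of maximum distribution density in one colour channel
    (a histogram-mode stand-in for the self-organization of Fig. 6)."""
    return Counter(channel).most_common(1)[0][0]

def skin_feature(a_channel, b_channel):
    """The (a, b) pair of densest chroma values: the skin colour
    information before the membership function is applied."""
    return densest_value(a_channel), densest_value(b_channel)

# Hypothetical a/b chroma samples of a face region (brightness L removed).
a_ch = [18, 18, 19, 18, 20, 18, 17]
b_ch = [25, 26, 25, 25, 24, 25, 30]
feature = skin_feature(a_ch, b_ch)
```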
Fig. 12 is a schematic diagram of extracting feature values from information of a face frequency space.
As shown in fig. 12: 1201 denotes the face, 1202 the wrinkles of the forehead, 1203 the bags under the eyes, and 1204 the nasolabial folds on both sides of the mouth. As shown at 1202 of fig. 12: when the wrinkles of the forehead are very dense, the wrinkle features can be extracted through the frequency characteristics of the local forehead area.
As shown at 1204 of fig. 12: when the wrinkles are clear, the edge lines of the wrinkles can be found by performing differential calculation on the image of the area where the wrinkles are located, and the length of the wrinkles can be identified. The differential calculation for the wrinkle image also belongs to the calculation of the frequency space for the wrinkle image.
Fig. 13 is a schematic diagram of a processing method that uses the frequency-space information of the face as the whole information of the face. The frequency-space information of the face reflects its coarse structure, such as the amount of wrinkles, facial scars and the like. So that white noise generated when reading the fine details of the face does not affect the stability of the recognition result, only the coefficients of the low-frequency part of the fast Fourier transform of the face image are taken as the numerical vector of the face frequency space.
As shown in fig. 13, the processing of using the information of the frequency space of the face as the whole information of the face is performed in three steps:
the first step is a face image reading step in which the entire image of the face is read as a target of processing of information of the frequency space of the face.
The second step is a fast fourier transform step, which performs fast fourier transform on the read face image.
And the third step is a step of forming a numerical vector, wherein the real number coefficient and the imaginary number coefficient at the low frequency end of the fast Fourier transform result form the numerical vector of the frequency space of the human face.
The invention realizes the authentication of the card holder by utilizing the position information of the facial features, the size information of the facial features, the information of the frequency space of the facial features, the shape information of the facial features and the skin color information of the facial features. However, in different mobile phone shooting environments, the information has a random distribution problem in a certain range, and the expected value and the variance value closest to the probability distribution parent can be obtained through the probability scale self-organizing algorithm.
In order to more accurately convert the face of a card holder into a relatively stable code, the randomness of face information is considered, the most effective method is adopted to calculate the characteristic value which is closest to a parent body, meanwhile, the fuzzy mathematics theory is introduced, a plurality of Membership functions (Membership functions) defined based on human experience are used for digitizing the obtained digitized information of the face in a human intervention mode, and the characteristic vector with the image code property is directly generated.
Fig. 14 is a schematic diagram of a processing method for digitizing voiceprint information.
One-dimensional voiceprint information can also be transformed by fourier transformation to generate numerical information.
As shown in fig. 14, the process of digitizing the voiceprint information is performed in three steps:
the first step is a voiceprint signal reading step, in which the voiceprint signal is read as a target of processing of the voiceprint authentication information.
The second step is a fast fourier transform step of performing fast fourier transform on the read voiceprint signal.
And a third step of forming a digitized vector, wherein the real number coefficient and the imaginary number coefficient at the low frequency end of the fast Fourier transform result form the digitized vector of the voiceprint information.
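The three steps above can be sketched as follows; a naive DFT stands in for the fast Fourier transform, and the cut-off of four low-frequency bins as well as the sample values are our assumptions:

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform (stand-in for the FFT step)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n)) for f in range(n)]

def voiceprint_vector(signal, low_bins=4):
    """The three steps of Fig. 14: read the signal, Fourier-transform it,
    and keep the real and imaginary coefficients of the low-frequency end
    as the digitized vector."""
    spectrum = dft(signal)                 # step 2: transform
    vec = []
    for c in spectrum[:low_bins]:          # step 3: low-frequency end only
        vec.extend([c.real, c.imag])
    return vec

samples = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7]  # toy voiceprint frame
vec = voiceprint_vector(samples)
```

The same read / transform / truncate pattern applies to the 2-D face-image case of fig. 13, with a 2-D transform in the middle step.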
Fig. 15 is an example of defining shape information of a face as a membership function.
As shown in fig. 15(a): the straight line connecting a1 and a2, the straight line connecting a2 and a3, and the straight line connecting a3 and a1 form a triangle, and a' is the area of the triangle excluding the area occupied by the face shape. Generally, the thinner the face, the closer a' is to the area of the triangle; the fuller the face, the smaller a'. Using this subjective human experience, the following membership function can be constructed. Let the straight line connecting a1 and a3 be L1 and the straight line connecting a2 and a3 be L2; the membership function MB1 defining the shape information of the face can be obtained from equation 7:
[ EQUATION 7 ]  MB1 = 100 · (1 − 2a′ / (L1 · L2))
When the face is very full, its area a' is close to "0" and MB1 is close to 100; when the face is very thin, the area 2a' is close to "L1·L2" and MB1 is close to "0". The membership function MB1 is therefore a feature function describing the fullness of the face.
As another example, consider the membership function of the width of the mouth. Let the statistically obtained maximum width of the mouth be Vmax and the minimum width be Vmin, and let the width of the mouth be V4 as shown in fig. 8(b); the membership function MB2 of the width of the mouth can be obtained from equation 8:
[ EQUATION 8 ]  MB2 = 100 · (Vmax − V4) / (Vmax − Vmin)
Where Vmax ≠ Vmin, equation 8 gives: when the mouth is near the minimum width, V4 ≈ Vmin and the membership function MB2 is close to 100; when the mouth is near the maximum width, V4 ≈ Vmax and MB2 is close to 0. The membership function MB2 is therefore a feature function describing the width of the mouth.
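Both membership functions can be sketched directly from the limiting behaviour described above (the exact formulas are our reconstruction, chosen to be consistent with that behaviour):

```python
def mb1(a_prime, l1, l2):
    """Face-shape membership: 100 when the face is full (a' -> 0),
    0 when it is thin (2a' -> L1*L2)."""
    return 100.0 * (1.0 - 2.0 * a_prime / (l1 * l2))

def mb2(v4, v_min, v_max):
    """Mouth-width membership: 100 near the statistical minimum width,
    0 near the maximum (requires v_max != v_min)."""
    return 100.0 * (v_max - v4) / (v_max - v_min)

full_face = mb1(0.0, 4.0, 3.0)    # a' = 0: fullest face
thin_face = mb1(6.0, 4.0, 3.0)    # 2a' = L1*L2: thinnest face
narrow_mouth = mb2(2.0, 2.0, 6.0)
wide_mouth = mb2(6.0, 2.0, 6.0)
```

Each function maps a raw measurement onto the standard 0-100 scale, which is what lets heterogeneous measurements be combined into one feature vector.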
The other measurements reflecting the positions and sizes of the five sense organs, such as the width of the corners of the eyes, the size of the eyeball, the distance between the eye corners, the distance between the eyeball and the nose, the size of the nose, the distance between the nose and the eyes, the width of the mouth, the thickness of the mouth, the distance between the mouth and the nose, the width of the cheek and the distance between the mouth and the forehead, as well as the wrinkles of the face, the skin color of the face and the shape of the face, can all refer to the above definition method of membership functions, so that the face information is converted into a feature vector of standard values from 0 to n. Whatever form of membership function is adopted, the method belongs to the scope of the present invention.
Here, a feature vector space is defined: q feature vectors are obtained from the face images of q different persons by the above method, and these feature vectors of the face images, each having p feature elements, constitute the feature vector space of formula 1.
There are q vectors of p elements each, which can be expressed by the matrix shown in equation 1.
[ equation 9 ]  V = [vij] (i = 1, …, q; j = 1, …, p), a q × p matrix whose i-th row vi1, vi2, …, vip is the feature vector of the i-th person
When calculating the distance of the probability scale between two probability-distributed vectors wi1, wi2, …, wip and vi1, vi2, …, vip over each pair of corresponding elements, let the j-th element vij of the vector vi1, vi2, …, vij, …, vip be the final center value Aij and probability scale Mij calculated by equation 4 from gj1, gj2, …, gjζ; then
[ EQUATION 10 ]  dij = (wij − Aij) / Mij, the deviation of each element normalized by its probability scale
Substituting the result of equation 10 into equation 11 yields the distance P of the probability scalei。
Then let the feature vector wΥ1, wΥ2, …, wΥp of the cardholder's face-brushing image or voiceprint signal and the feature vector Vi1, Vi2, …, Vip of the i-th person's face image correspond to the Ψ learning data registered after several recognitions, forming a matrix L of learning data.
The probability-scale self-organization calculation is performed according to the above formulas 1, 4 and 12, and the central values and variance values LAM of the learning data matrix can be obtained as follows:
[ equation 13 ]  LAM = [(A1, M1), (A2, M2), …, (Ap, Mp)]
Here, when a given mobile phone payment is made, the distance of the probability scale between the feature vector wΥ1, wΥ2, …, wΥp of the cardholder's face-brushing image or voiceprint signal and the feature vectors in the feature vector space Vij can be obtained according to equations 5 and 6:
[ equation 14 ]  pi = √( Σj ((wΥj − Aij) / Mij)² ), i = 1, 2, …, q; pmin = min(p1, p2, …, pq)
The feature vector VΥ1, VΥ2, …, VΥp corresponding to the minimum value pmin is the code value of the face-brushing image or voiceprint signal of that person.
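A sketch of this nearest-feature-vector lookup, assuming (our assumed form of the probability-scale distance) that each element deviation is normalized by that element's scale before the distances are accumulated:

```python
import math

def prob_scale_distance(w, centres, scales):
    """Distance between a candidate vector w and one stored feature vector,
    with each element deviation divided by its probability scale."""
    return math.sqrt(sum(((wj - a) / m) ** 2
                         for wj, a, m in zip(w, centres, scales)))

def match_code(w, space, scales, codes):
    """Return the code whose stored feature vector is nearest to w."""
    dists = [prob_scale_distance(w, centres, scales) for centres in space]
    return codes[dists.index(min(dists))]

space = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]   # feature vectors of 2 persons
scales = [0.5, 0.5, 0.5]                      # per-element probability scales
code = match_code([4.1, 5.2, 5.9], space, scales, codes=["P1", "P2"])
```

Dividing by the per-element scale means that elements known to vary widely (large Mij) penalize a mismatch less than tightly clustered ones, which is the point of using a probability scale rather than a plain Euclidean distance.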
Fig. 16 is a flow chart of a face brushing or voiceprint code adaptive learning processing method.
The feature vector of the cardholder's face image or voiceprint signal, obtained by extracting the feature information and calculating the membership functions, is compared by probability-scale distance with each feature vector of the feature vector space, and the feature vector corresponding to the minimum probability-scale distance of equation 14 is taken as the cardholder's face or voiceprint code. In order to improve the precision and stability of the face-brushing or voiceprint payment code, the invention further provides an adaptive face-brushing or voiceprint code generation method.
As shown in fig. 16, the adaptive learning processing method for face brushing or voiceprint codes is performed in 3 steps:
Step 1: construct the adaptive feature vector space. Each face-brushing or voiceprint acquisition is used as one round of learning data: through extensive statistics, the real learning data closest to the probability distribution matrix are continuously retained, the unreal data exceeding the deviation are removed, and the probability-scale distance is calculated over each item of learning data of the real feature vector elements. In this way the face-brushing or voiceprint code becomes more and more stable through adaptive learning and its generation more accurate, which is the main purpose of the adaptive learning.
In order to calculate the cardholder's face-brushing or voiceprint code more accurately, first, for vij in the feature vector space of the face-brushing image or voiceprint signal of formula 1, the learning data matrix of formula 12 is formed and the distances in the probability-scale space are calculated for it to obtain formula 13; the matrix vij is then replaced by the center values Aij of the probability-scale distance space, and this matrix can be used as the adaptive feature vector space,
[ equation 15 ]  A = [Aij] (i = 1, …, q; j = 1, …, p)
From equations 12 and 13, the dispersion matrix of the probability-scale distance space of the feature vector space vij can be obtained, i.e.
[ equation 16 ]  M = [Mij] (i = 1, …, q; j = 1, …, p)
and the matrix Dij of face-brushing or voiceprint payment codes, i.e.
[ equation 17 ]  D = [Dij] (i = 1, …, q; j = 1, …, p)
The central values and variance values of the probability-scale distance space obtained by equations 15 and 16 change as the learning data change, as described below; they therefore have an adaptive character and continuously tend toward the population of the probability distribution. This processing keeps the retrieval of the feature vector space of the cardholder's face-brushing image or voiceprint signal code in the best state, and improves the calculation so that the face-brushing or voiceprint payment code takes its best value.
Here, as shown in equation 17: since the central values of the probability-scale distance space change with the learning data, it is necessary to introduce a matrix Dij of face-brushing or voiceprint payment codes whose elements correspond one-to-one to the elements of the central-value matrix of the probability-scale distance space but whose values do not change.
Let the feature vector wΥ1, wΥ2, …, wΥp obtained during a given face-brushing payment be, as shown in equation 15, the feature vector of the cardholder's face image or voiceprint signal; its probability-scale distance Pi to the adaptive feature vector space Aij is as follows:
[ equation 19 ]  Pi = √( Σj ((wΥj − Aij) / Mij)² ), i = 1, 2, …, q
When a face-brushing payment is retrieved, the adaptive feature vector with the smallest probability-scale distance between the feature vector wΥ1, wΥ2, …, wΥp of the cardholder's face-brushing image or voiceprint signal and the central values Aij of the probability-scale distance space is found, and the code value in the corresponding matrix Dij of face-brushing or voiceprint payment codes can be used as the code for this face-brushing or voiceprint payment.
Step 2: obtain the adaptively learned face-swiping or voiceprint payment code. After the above processing, the feature vector w_Υ1, w_Υ2, …, w_Υp of the face image or voiceprint signal of the cardholder's Υ-th face-swiping or voiceprint payment is used, through probability-scale distance calculation, to retrieve the adaptively learned feature vector in the adaptive feature vector space corresponding to the minimum probability-scale distance; the code in the matrix of face-swiping or voiceprint payment codes corresponding to that feature vector is then taken as the adaptively learned face-swiping or voiceprint payment code.
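The retrieval step just described can be sketched in code. As a simplifying assumption, the probability-scale distance is modeled here as a variance-normalized Euclidean distance, and `centers`, `variances`, and `codes` are hypothetical stand-ins for the adaptive feature vector space A_ij, its probability scales, and the payment-code matrix D_ij:

```python
import math

def retrieve_payment_code(w, centers, variances, codes):
    """Find the adaptive feature vector closest to the query w under a
    probability-scale distance (assumed here to be variance-normalized
    Euclidean distance) and return the paired payment code from D_ij."""
    best_i, best_d = None, math.inf
    for i, (a, m) in enumerate(zip(centers, variances)):
        d = math.sqrt(sum((wj - aj) ** 2 / (mj ** 2)
                          for wj, aj, mj in zip(w, a, m)))
        if d < best_d:
            best_i, best_d = i, d
    return codes[best_i], best_d
```

A query near the second stored vector then retrieves the second code; the exact distance form would depend on how equations 15 and 19 define the probability scale.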
According to the idea of the present invention, there are various ways to realize the structure of the adaptive learning-data matrix; only one example is explained here. Referring to formula 14, let p_Υi_min1 be the minimum probability-scale distance between the feature vector w_Υ1, w_Υ2, …, w_Υp of the face image or voiceprint signal of the cardholder's Υ-th face-swiping or voiceprint payment and some feature vector in the adaptive feature vector space formed by the central-value matrix A_ij of the probability-scale distance space, and let p_Υi_min2 be the slightly larger minimum probability-scale distance to another feature vector.
The probability-scale distance between the feature vector w_Υ1, w_Υ2, …, w_Υp of the cardholder's face image or voiceprint signal and the feature vectors in the learning-data matrix L_ij of formula 12, formed from the Ψ learning data recognized over several sessions, can also be obtained according to equations 5 and 6:
Let P' = {p_1', p_2', …, p_Ψ'} be the probability-scale distances of equation 20. A self-organizing calculation of the probability scale through formulas 5 and 6 then yields the central value A' and the dispersion M' between the feature vector w_Υ1, w_Υ2, …, w_Υp of the cardholder's face image or voiceprint signal and the learning-data matrix L_ij.
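The probability-scale self-organizing calculation used here (specified step by step in claim 1 below) can be sketched as follows. This is a minimal sketch on scalar data; the initial center, initial scale, and stopping thresholds are illustrative assumptions, and the mean and population standard deviation stand in for the average value V(n+1) and dispersion value S(n+1):

```python
import statistics

def probability_scale_self_organize(samples, a0, m0, v=1e-6, mn=100):
    """Probability-scale self-organization: starting from center a0 and
    scale m0, repeatedly recompute the mean and dispersion of the samples
    falling within the current scale, until the scale converges (change
    <= v) or the maximum number of iterations mn is reached."""
    a, m = a0, m0
    for _ in range(mn):                       # discrimination: n >= MN
        inside = [g for g in samples if abs(g - a) <= m]
        if len(inside) < 2:
            break                             # scale collapsed; stop organizing
        a_next = statistics.mean(inside)      # V(n+1) = A(n+1)
        m_next = statistics.pstdev(inside)    # S(n+1) = M(n+1)
        converged = abs(m - m_next) <= v      # |M(n) - M(n+1)| <= V
        a, m = a_next, m_next
        if converged:
            break
    return a, m
```

On data with an outlier, the iteration pulls the center toward the main cluster and shrinks the scale, which matches the text's description of values that "continuously tend to the parent of the probability distribution".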
If {p_Υi_min1 ≤ 1/2 p_Υi_min2 and ε ≥ 1/2 p_Υi_min2; or (p_Υi_min1 ≤ M')} is satisfied, the feature vector w_Υ1, w_Υ2, …, w_Υp of the face image or voiceprint signal is taken as a new learning vector of the learning-data matrix L_ij.
Here, one of the conditions for adding learning data is p_Υi_min1 ≤ 1/2 p_Υi_min2. Its physical meaning is as follows: the feature vector of the face image or voiceprint signal of the Υ-th face-swiping or voiceprint payment is allowed into the learning data only if it keeps a sufficient probability-scale distance from the other feature vectors, so as to prevent misrecognition.
The physical meaning of the range condition ε ≥ 1/2 p_Υi_min2 is that, when the probability-scale distance from the mode is too large, a feature vector w_Υ1, w_Υ2, …, w_Υp of the face image that is unsuitable as learning data is prevented from being wrongly entered into the learning data.
As a method of eliminating a learning vector that is unsuitable as learning data, a bias calculation ω_i = p_i − A' (i = 1, 2, …, Ψ) is first performed on each element of the probability-scale distances of equation 20, and the learning data corresponding to the maximum bias value ω_max is removed.
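The admission and elimination rules just described can be written down directly. The function names are illustrative, and taking the bias as an absolute deviation from A' is an assumption:

```python
def should_admit(p_min1, p_min2, eps, m_prime):
    """Admission rule from the text: admit the new feature vector if
    {p_min1 <= p_min2/2 and eps >= p_min2/2} or {p_min1 <= M'}."""
    return (p_min1 <= p_min2 / 2 and eps >= p_min2 / 2) or (p_min1 <= m_prime)

def eliminate_worst(distances, a_prime, learning_data):
    """Elimination rule: remove the learning vector whose probability-scale
    distance deviates most from the central value A' (absolute deviation
    is an assumption; the text writes the bias as omega_i = p_i - A')."""
    omegas = [abs(p - a_prime) for p in distances]
    worst = omegas.index(max(omegas))
    return [x for i, x in enumerate(learning_data) if i != worst]
```

In practice, as the following paragraph notes, several vectors may be admitted or eliminated at once; the same rules would simply be applied repeatedly.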
Only a method of how to adaptively enter learning data, and how to eliminate feature-vector data unsuitable as learning data, is provided here. In practice, at system-configuration time, several data items are often entered into, or eliminated from, the learning data simultaneously or step by step; the relevant processing methods can be derived by reference to the method above.
In face-swiping payment, the feature vector of the face image often changes greatly when the cardholder changes hairstyle or wears different makeup. To solve this problem, the invention also provides a method that, while the learning-data matrix is being built, records the α most recently obtained feature vectors as an auxiliary learning-data matrix:
[ equation 21 ]
If the central value A″ of the probability-scale distance space of the auxiliary learning-data matrix is calculated and the variance value M″ is in a stable, converged state, and furthermore A″ and M″ lie at a longer distance in the probability-scale distance space from the central value A' and variance value M' of the learning-data matrix, then the auxiliary learning-data matrix L' and the learning-data matrix L may be merged to regenerate a new learning-data matrix, or the auxiliary learning-data matrix may replace the learning-data matrix, and so on. Various processing means can be obtained by reference to the above examples and the ways of thinking proposed by the present invention.
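One hypothetical way to realize the auxiliary learning-data matrix L' of equation 21 and its merge condition, simplified to scalar feature values and an illustrative distance threshold (the patent leaves the exact thresholds open):

```python
from collections import deque
import statistics

class AuxiliaryLearningBuffer:
    """Sketch of the auxiliary learning-data matrix L': keep the alpha most
    recent feature values and merge them into the main learning data L when
    their center A'' has stabilized far from L's center A'."""

    def __init__(self, alpha, distance_threshold):
        self.recent = deque(maxlen=alpha)       # the alpha most recent vectors
        self.distance_threshold = distance_threshold

    def add(self, value):
        self.recent.append(value)               # oldest entry drops out at capacity

    def maybe_merge(self, learning_data):
        if len(self.recent) < self.recent.maxlen:
            return learning_data                # L' not yet full; nothing to decide
        a2 = statistics.mean(self.recent)       # A'' of the auxiliary matrix
        a1 = statistics.mean(learning_data)     # A' of the main matrix
        if abs(a2 - a1) >= self.distance_threshold:
            merged = list(learning_data) + list(self.recent)  # regenerate L
            self.recent.clear()
            return merged
        return learning_data
```

Replacing the main matrix instead of merging, as the text also allows, would be a one-line change in `maybe_merge`.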
Step 3: obtain the adaptively learned face-swiping or voiceprint payment code. After the above processing, the feature vector w_Υ1, w_Υ2, …, w_Υp of the face image or voiceprint signal of the cardholder's Υ-th face-swiping or voiceprint payment is used, through probability-scale distance calculation, to retrieve the adaptively learned feature vector in the adaptive feature vector space corresponding to the minimum probability-scale distance; the code in the matrix of face-swiping or voiceprint payment codes corresponding to that feature vector is then taken as the adaptively learned face-swiping or voiceprint payment code.
The face-swiping or voiceprint payment provided by the invention is not only used for commodity payment; it can also serve as a means of personal authentication for an ordinary credit card at payment time, which solves the problem of illegal payment with a stolen credit card.
In addition, the face-swiping or voiceprint payment method provided by the invention can add a step of face-swiping or voiceprint authentication when cash is withdrawn with an ordinary bank cash card, thereby improving the security of the bank cash card.
In order to prevent the criminal behavior of an illegal person performing face-swiping payment with a photograph of the cardholder, the invention needs to identify whether the face-swiping image is a living-body image. The following living-body recognition method is provided for the face-swiping image.
Fig. 17 is a schematic diagram of one example of living body image recognition.
As shown in Fig. 17(a), the method of recognizing the size of the eyeball with the probability-scale self-organization algorithm of Fig. 8 is used. To recognize whether the image is of a living body, the feature that the eyeball of a live face in a face-swiping image blinks can be exploited. Therefore, as shown in Fig. 17(b), when the face in the image blinks, the eyeball-size measure M(n) of Fig. 17(a) is much larger than the M(n)' of Fig. 17(b); that is, the state M(n)' < M(n) appears instantaneously.
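The instantaneous drop M(n)' < M(n) can be detected with a sketch like the following; the eye-size sequence and the drop threshold are illustrative assumptions, not values from the patent:

```python
def detect_blink(eye_sizes, drop_ratio=0.5):
    """Liveness sketch per Fig. 17: a live face produces an instant where
    the measured eyeball size M(n)' falls well below its open-eye level
    M(n). eye_sizes is a per-frame eyeball-size measure; drop_ratio is an
    illustrative threshold for "much smaller"."""
    baseline = max(eye_sizes)                     # open-eye size M(n)
    return any(s < baseline * drop_ratio for s in eye_sizes)
```

A static photograph produces a flat size sequence and is rejected; a real blink produces a transient dip and is accepted.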
Recognizing that the face-swiping image is an image of a living body is not limited to the eyeball-blink recognition described above: movements of opening and closing the mouth, minute changes of the muscles when the face smiles, changes of the pupil, the acceleration of minute shaking of the face, changes in facial color, and the like can also be recognized by the probability-scale self-organization algorithm described above.
Fig. 18 is a schematic diagram of an electronic image of an optically recognizable 3D screen code.
As shown in Fig. 18, the screen (1800) may be that of a display monitor or the display screen of a mobile phone. The so-called optically recognizable 3D screen-coded electronic image displayed on a screen is defined, as shown at 1801 of Fig. 18, as follows: the code divides the screen into at least a plurality of small areas, each larger than the optically readable size, as code symbols; each symbol records information through its physical geometric distribution, including different colors, positions, sizes, directions, and concentration or dispersion of the symbol, or through different modulation modes, phase differences, propagation directions, and gray-scale distributions.
Each symbol of the above optically recognizable 3D screen code records information by its distribution in two-, three-, or four-dimensional space. As shown at 1802 in Fig. 18, each symbol also records information simultaneously through its distribution in time. That is, each symbol may take red, green, blue, white, black, and the various colors combined from these, and may also vary at each instant by amplitude modulation 1804 or frequency modulation 1803: for example, recording information by different speeds of color flashing (which may be called frequency-modulated information), or by the intensity of black or white flashing (which may be called amplitude-modulated information).
Various kinds of information that can be displayed on the screen of a mobile phone, such as the two-dimensional bar code 1805, a QR code, a Han Xin code, or the trademark of a product, may likewise record information that changes at every instant through amplitude modulation 1807 and frequency modulation 1806.
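The frequency-modulation idea — recording a bit by how fast a symbol flashes — can be sketched as a per-frame on/off schedule. The frame rate and flash periods below are illustrative assumptions, not values from the patent:

```python
def frequency_modulated_frames(bits, fps=30, slow_period=6, fast_period=2):
    """Time-domain encoding sketch per Fig. 18: each bit is displayed for
    one second (fps frames) as a symbol flashing slowly (bit 0) or quickly
    (bit 1). Returns one boolean (symbol on/off) per frame."""
    frames = []
    for bit in bits:
        period = fast_period if bit else slow_period   # frames per flash cycle
        for f in range(fps):
            frames.append((f // (period // 2)) % 2 == 0)  # square-wave flash
    return frames
```

A camera reading the screen would recover each bit by counting on/off transitions per second; amplitude modulation would instead vary the flash intensity at a fixed rate.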
The 3D screen code refers to a QR two-dimensional code, a DM (Data Matrix) two-dimensional code, a PDF417 two-dimensional code, a Han Xin code, a screen code, or any image displayed on a screen that can transmit information through the screen.
In order not to affect the content displayed on the screen, and to utilize the display space of the screen as far as possible, information can be recorded at different positions in the spatial domain so that, for example, the symbols occupy less than 2% of the space while more than 98% of the space is used for displaying the screen image.
It is also possible to hide the symbol at a certain level of grey of the screen display image, for example by placing the symbol information on the lowest bits of the bitmap.
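Hiding symbol information on the lowest bits of a bitmap, as suggested above, can be sketched directly; the pixel values and bit stream here are illustrative:

```python
def embed_lsb(pixels, bits):
    """Hide one symbol bit per pixel by overwriting each gray-scale pixel's
    least-significant bit, leaving the visible image almost unchanged."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    """Recover the embedded bit stream from the lowest bits."""
    return [p & 1 for p in pixels]
```

Since each pixel changes by at most 1 gray level out of 256, the embedded symbols are invisible to the eye but readable by a camera or decoder.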
In the time domain, by exploiting the visual characteristics of human motion perception, the flicker of a symbol can be set to occupy less than 0.3 second of the image display time, while the screen image occupies the main display time; the state in which the symbols record information in the time domain is then hard for human vision to notice, achieving an information-hiding effect.
In short, the optically recognizable 3D screen code exploits the integrated visual characteristics of the time domain and the space domain, so that the screen can display its image normally while each symbol embeds information in both domains, with minimal influence on the displayed image.
Claims (6)
1. A method for forming an online shopping mobile phone payment system is characterized in that:
(1) the mobile phone needs to be directly networked;
(2) when the face-swiping authentication step is automatically entered, a recognition window pops up on the screen of the mobile phone, and the cardholder aligns his or her face with the recognition window;
(3) performing mobile phone face brushing authentication of fusing local features and global features of the human face;
(4) the mobile phone face brushing authentication is realized by self-organizing probability scales in the following steps:
a preprocessing step: M(0) is taken as the initial probability scale, A(0) as the initial self-organization central value, V as the self-organization convergence value, and MN as the maximum number of self-organization iterations; initially n = 0 is the current self-organization iteration count;
a self-organizing step: after n rounds of self-organizing processing, A(n) is obtained as the self-organization central value; with the probability scale M(n) as radius and the central value A(n) as reference, the average value V(n+1) and the dispersion value S(n+1) of all values g_f (f = 1, 2, …, ζ) within the radius are calculated; then V(n+1) = A(n+1), S(n+1) = M(n+1), and n = n + 1;
a self-organization discrimination step: if the maximum number of self-organizing iterations is reached (n ≥ MN) or the self-organizing process has converged (M(n) − M(n+1) ≤ V), jump to the ending step; otherwise, jump back to the self-organizing step and continue the self-organizing process.
2. The method for constructing the online shopping mobile phone payment system as claimed in claim 1, wherein: the local features of the human face are at least one of feature values extracted from the shape information of the face, feature points at the positions of the recognized facial features (the five sense organs), and feature values of the size information of the recognized facial features.
3. The method for constructing the online shopping mobile phone payment system as claimed in claim 1, wherein: the global feature refers to one of a feature value extracted from the skin-color information of the human face or a feature value extracted from information in the frequency space of the human face.
4. The method for constructing the online shopping mobile phone payment system as claimed in claim 1, wherein: the local features of the human face also refer to the density regularity of the pixel distribution of the cardholder's facial features under a certain color and a certain gray value; a probability-scale self-organization algorithm is introduced to automatically extract the position information of the facial features.
5. The method for constructing the online shopping mobile phone payment system as claimed in claim 1, wherein: the face feature information is processed by the probability-scale self-organization algorithm to obtain the feature values closest to the parent population; at the same time, fuzzy-mathematics theory is introduced and, for the obtained face information, several membership functions defined from human experience are used to digitize it, with human intervention, into numerical values between 0 and n, thereby directly generating a feature vector with the property of an image code.
6. The method for constructing the online shopping mobile phone payment system as claimed in claim 1, wherein: the face-swiping payment code is generated by recognizing in the face-swiping image at least one feature reflecting a living-body image, including blinking of the eyeballs, opening and closing of the mouth, minute changes of the muscles when the face smiles, changes of the pupils, acceleration of minute shaking of the face, and the facial color.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510090743.7A CN105989346B (en) | 2015-02-17 | 2015-02-17 | Construction method of online shopping mobile phone payment system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510090743.7A CN105989346B (en) | 2015-02-17 | 2015-02-17 | Construction method of online shopping mobile phone payment system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN105989346A CN105989346A (en) | 2016-10-05 |
| CN105989346B true CN105989346B (en) | 2020-04-21 |
Family
ID=57038932
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201510090743.7A Active CN105989346B (en) | 2015-02-17 | 2015-02-17 | Construction method of online shopping mobile phone payment system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN105989346B (en) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108510079B (en) * | 2017-02-27 | 2024-04-30 | 顾泽苍 | Multi-probability scale forming method for machine learning |
| CN108510057B (en) * | 2017-02-27 | 2024-12-20 | 顾泽苍 | A method for constructing a neural network model for ultra-deep adversarial learning |
| CN108510052B (en) * | 2017-02-27 | 2024-08-02 | 顾泽苍 | Construction method of artificial intelligent neural network |
| CN107230066B (en) * | 2017-06-01 | 2020-11-03 | 珠海市魅族科技有限公司 | Payment method and device, computer device and readable storage medium |
| US20200012999A1 (en) * | 2018-07-03 | 2020-01-09 | Baidu Usa Llc | Method and apparatus for information processing |
| CN111523357B (en) * | 2019-02-02 | 2025-01-28 | 顾泽苍 | A method for recognizing image depth |
| CN112241521A (en) * | 2020-12-04 | 2021-01-19 | 北京远鉴信息技术有限公司 | Identity verification method and device of plosive, electronic equipment and medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1424691A (en) * | 2003-01-20 | 2003-06-18 | 天津市阿波罗信息技术有限公司 | Method for generating ID code containing human body information |
| CN101207697A (en) * | 2006-12-22 | 2008-06-25 | 上海杰得微电子有限公司 | Digital picture obscurity enhancement method for anastomosing a plurality of blurring operators |
| CN103559040A (en) * | 2013-11-12 | 2014-02-05 | 厦门卓讯信息技术有限公司 | System and method based on SDK for rapidly building mobile internet application module |
| CN103886658A (en) * | 2014-01-28 | 2014-06-25 | 北京中科虹霸科技有限公司 | Multi-modal biometric recognition-based distributed internet-of-things lock equipment and unlocking method thereof |
| CN104299136A (en) * | 2013-07-15 | 2015-01-21 | 联想(北京)有限公司 | Information processing method and electronic equipment |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1197912A3 (en) * | 2000-10-11 | 2004-09-22 | Hiroaki Kunieda | System for fingerprint authentication |
- 2015-02-17: CN application CN201510090743.7A filed; patent CN105989346B/en, status Active
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1424691A (en) * | 2003-01-20 | 2003-06-18 | 天津市阿波罗信息技术有限公司 | Method for generating ID code containing human body information |
| CN101207697A (en) * | 2006-12-22 | 2008-06-25 | 上海杰得微电子有限公司 | Digital picture obscurity enhancement method for anastomosing a plurality of blurring operators |
| CN104299136A (en) * | 2013-07-15 | 2015-01-21 | 联想(北京)有限公司 | Information processing method and electronic equipment |
| CN103559040A (en) * | 2013-11-12 | 2014-02-05 | 厦门卓讯信息技术有限公司 | System and method based on SDK for rapidly building mobile internet application module |
| CN103886658A (en) * | 2014-01-28 | 2014-06-25 | 北京中科虹霸科技有限公司 | Multi-modal biometric recognition-based distributed internet-of-things lock equipment and unlocking method thereof |
Non-Patent Citations (1)
| Title |
|---|
| Traffic Sign Recognition; Bai Ying; China Masters' Theses Full-text Database, Information Science and Technology; 2013-12-15 (No. S2); p. 5 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN105989346A (en) | 2016-10-05 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN105989346B (en) | Construction method of online shopping mobile phone payment system | |
| US12306919B2 (en) | Systems and methods for dynamic passphrases | |
| US20190251571A1 (en) | Transaction verification system | |
| CN106663157A (en) | User authentication method, device for executing same, and recording medium for storing same | |
| CN105844461B (en) | Method for generating mobile phone face-brushing payment code | |
| CN105844460A (en) | Composition of mobile phone face-scanning payment system | |
| US20210398109A1 (en) | Generating obfuscated identification templates for transaction verification | |
| CN109686011A (en) | The user identification method of self-aided terminal and self-aided terminal | |
| CA3103484A1 (en) | Systems and methods for dynamic passphrases | |
| CN105989487A (en) | Composition method of mobile phone face recognition payment and voiceprint payment system of off-line shopping | |
| Tyagi et al. | Security enhancement through iris and biometric recognition in atm | |
| JP2023530893A (en) | Data processing and trading decision system | |
| Selvakumar et al. | Face biometric authentication system for ATM using deep learning | |
| Karale et al. | Online Transaction Security Using Face Recognition: A Review | |
| Priya et al. | An Effective Cardless Atm Transaction Using Computer Vision Techniques | |
| CN105989492A (en) | Composition method of voiceprint authentication mobile phone payment system | |
| CN105989464A (en) | Composition method of cash saving and withdrawing system of mobile phone deposit card | |
| Shahila et al. | Novel Biometric ATM User Authentication and Real-Time Secure Clickbait for Multi-Bank Transactions | |
| CN105989470A (en) | Composition method of optical payment system of mobile phone credit card and deposit card | |
| Chauhan | Deep learning-based methods for detecting generated fake faces | |
| Harakannanavar et al. | Performance evaluation of face recognition based on multiple feature descriptors using Euclidean distance classifier | |
| Singh et al. | Enhancing ATM Security and Convenience with Fingerprint-Based Biometric Authentication | |
| Rafaqat et al. | Integrating Biometric Security into Digital Payment Solutions: Opportunities and Challenges | |
| BISWAS et al. | ENHANCED SECURE ATM AUTHENTICATION USING NFC TECHNOLOGY AND IRIS VERIFICATION | |
| David et al. | ATM using biometrics (iris) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||