Fingerprint image acquisition method based on bioelectric signal enhancement
Technical Field
The invention relates to the technical field of biological identification and fingerprint acquisition, in particular to a fingerprint image acquisition method based on bioelectric signal enhancement.
Background
Fingerprint identification technology is an important branch of biometric recognition and is widely applied in fields such as identity authentication, security protection and smart-device unlocking. Traditional fingerprint acquisition techniques rely primarily on optical, electrostatic or capacitive sensors, generating digitized images from characteristic information of the fingerprint surface. However, in complex scenarios such as wet hands, dry hands, aged skin or low contact pressure, conventional techniques often suffer reduced acquisition quality. This degradation may manifest as blurring of the fingerprint image, loss of detail or increased noise, thereby affecting the accuracy and robustness of the fingerprint identification system.
Optical fingerprint acquisition techniques use reflected or transmitted light to capture fingerprint texture, but under wet-hand conditions the refraction and scattering effects of moisture can significantly reduce image quality. Optical technology is also sensitive to surface contamination and is easily disturbed by grease, dust and the like. Electrostatic fingerprint acquisition generates texture features by capturing electrostatic signals between the finger and the acquisition surface, but when contact pressure is insufficient or the skin surface is too dry, the electrostatic signal intensity drops significantly, resulting in incomplete texture features. Capacitive fingerprint acquisition constructs a fingerprint image by detecting the capacitance change between the skin and the electrode, but under special conditions such as aged skin, the acquired capacitance signal may deviate owing to reduced skin elasticity and conductivity, degrading image sharpness.
In addition, conventional fingerprint acquisition techniques typically rely on a single signal source and lack comprehensive utilization of multimodal information. This single-signal dependence makes it difficult for the system to adapt to complex acquisition scenarios. For example, under wet-hand conditions the electrostatic signal may fail entirely, and relying on the capacitive signal alone may not capture enough detail. Moreover, most signal processing and image generation methods in the prior art are static and cannot adapt dynamically to real-time changes in the signal, so the system copes poorly with scenarios such as dynamic finger movement, pressure fluctuation or rotation.
Another important technical limitation is the deficiency of conventional fingerprint acquisition systems in signal enhancement and texture optimization. The prior art generally improves signal acquisition capability through hardware, but this approach is costly and adapts poorly to complex scenarios. For signal processing, traditional methods mostly adopt fixed rules or simple filtering algorithms, which cannot fully exploit the inherent correlations of biological signals and lack the capability for deep modeling of dynamically changing signals. For example, static signal processing methods struggle to capture the spatio-temporal characteristics of the signal distribution during finger motion, so the generated fingerprint image suffers from dynamic distortion or boundary blurring.
In terms of signal enhancement and feature optimization, the prior art generally relies on low-dimensional feature modeling and cannot adequately capture the high-dimensional distribution characteristics and complex spatio-temporal dependencies of multi-source signals. Furthermore, the prior art lacks an efficient compensation mechanism for dynamic changes in the signal. For example, in low-contact-pressure or slight-sliding scenarios, fingerprint texture features are prone to deformation or drift, while conventional static compensation algorithms cannot adjust the signal distribution in real time, degrading the quality of the acquired image.
Therefore, how to provide a fingerprint image acquisition method based on bioelectric signal enhancement is a problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a fingerprint image acquisition method based on bioelectric signal enhancement, which uses multi-source bioelectric signal fusion and dynamic optimization to realize high-precision fingerprint image acquisition in complex scenarios through joint modeling and enhancement of nerve signals, electrostatic signals and capacitance signals. Through dynamic compensation, feature interaction enhancement, multi-modal collaborative optimization and spatio-temporal decoding, the method effectively solves the degradation of fingerprint acquisition quality under wet-hand, dry-hand, aged-skin and low-contact-pressure conditions. The generated fingerprint image offers high resolution, clear texture details and global consistency, with strong adaptability, good robustness and high acquisition stability, providing new support for fingerprint identification technology.
According to the embodiment of the invention, the fingerprint image acquisition method based on bioelectric signal enhancement comprises the following steps of:
S1, collecting a multi-source bioelectric signal of a fingerprint contact area, wherein the multi-source bioelectric signal comprises a nerve signal, an electrostatic signal and a capacitance signal, and an original signal data set is constructed;
S2, dynamically modeling the original signal data set based on a higher-order variational autoencoder to generate a spatial distribution model of the signal and adaptability enhancement parameters;
S3, utilizing a contact force sensor and a displacement sensing module to monitor the sliding, rotation and contact pressure change of the finger in real time, and generating space-time distribution data of finger movement by combining a spatial distribution model of signals;
S4, compensating texture deviation caused by finger movement in real time according to the space-time distribution data of the finger movement, and generating a dynamically compensated signal distribution model;
S5, modeling the dynamically compensated signal distribution model and fingerprint texture features in a dynamic corresponding manner through a neural-fingerprint feature interaction enhancement mechanism, and enhancing the signal contrast by utilizing a neural response synchronous amplification mechanism to generate an enhanced signal;
S6, constructing a multi-mode collaborative network in which the electrostatic signal and the enhanced signal are jointly optimized, and generating an optimized characteristic signal by producing optimized texture distribution characteristics through a generative adversarial network;
and S7, decoding the optimized characteristic signals by adopting a space-time characteristic decoding technology to generate a final fingerprint image.
Optionally, the S2 specifically includes:
S21, decomposing the multi-source bioelectric signals in the original signal data set, and representing the time sequences of the nerve signal, the electrostatic signal and the capacitance signal as:
Ik(t)=Σ(n=1→Nk)Akn·sin(2πfkn·t+φkn);
wherein Ik(t) represents the signal strength of the kth class signal at time t, Nk represents the number of frequency components of the kth class signal, Akn represents the amplitude of the nth component of the kth class signal, fkn represents the frequency of the nth component of the kth class signal, and φkn represents the initial phase of the nth component of the kth class signal;
S22, carrying out spectrum analysis on Ik(t) for each class of signal to construct a spectrum characteristic matrix Fk:
Fk(i,j)=∫((j-1)T→jT)Ik(t)·e^(-i2πfi·t)dt;
wherein Fk(i,j) represents the complex amplitude of the component of frequency fi in the kth class signal on time slice j, T represents the sampling time window, fi represents the frequency of the ith component, and j represents the time slice index;
S23, performing dimension reduction on the spectrum characteristic matrix Fk and mapping it into a potential feature matrix:
Zk=Fk·Vk;
wherein Zk represents the potential feature matrix of the kth class signal, and Vk represents the projection matrix obtained by principal component analysis;
S24, constructing a dynamic distribution model based on the potential feature matrix:
p(Zk)=Π(i=1→Mk)(1/(√(2π)·σki))·exp(-(zki-μki)²/(2σki²));
wherein p(Zk) represents the probability distribution of the potential feature matrix, Mk represents the number of latent variables, zki represents the value of the ith latent variable, μki represents the mean of the ith latent variable, and σki represents the standard deviation of the ith latent variable;
S25, mapping the latent variables into spatial distribution characteristics of the signal by using the dynamic distribution model to generate the spatial distribution model of the signal:
Sk(x,y)=Σ(i=1→Mk)zki·Φi(x,y);
wherein Sk(x,y) represents the spatial distribution of the kth class signal at spatial coordinates (x,y), and Φi(x,y) represents the ith orthogonal basis function at spatial coordinates (x,y);
S26, extracting an adaptability enhancing parameter according to a spatial distribution model of the signal:
θk={μki,σki,Mk};
wherein θk denotes the enhancement parameter set of the kth class signal.
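The spectral decomposition, dimension reduction and latent statistics of S21-S24 can be sketched in a few lines of numpy. This is a hedged illustration only: the window length, number of retained frequency components and latent dimension are arbitrary choices, and `spectral_features`/`pca_reduce` are hypothetical helper names, not part of the invention.

```python
import numpy as np

def spectral_features(signal, win, n_freq):
    """S22 sketch: split a 1-D signal into fixed windows and take per-window
    FFT magnitudes, giving a frequency-by-time-slice feature matrix Fk."""
    n_win = len(signal) // win
    segs = signal[:n_win * win].reshape(n_win, win)
    spec = np.abs(np.fft.rfft(segs, axis=1))[:, :n_freq]
    return spec.T  # rows: frequency components i, columns: time slices j

def pca_reduce(F, m):
    """S23 sketch: project the spectral matrix onto its top-m principal axes."""
    Fc = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
    V = Vt[:m].T               # projection matrix Vk from principal component analysis
    return Fc @ V              # potential (latent) feature matrix Zk

rng = np.random.default_rng(0)
t = np.arange(4096) / 1000.0
# toy stand-in for one signal class Ik(t): two sinusoidal components plus noise
sig = (0.8 * np.sin(2 * np.pi * 7 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
       + 0.05 * rng.standard_normal(t.size))

F = spectral_features(sig, win=256, n_freq=64)
Z = pca_reduce(F, m=4)
mu, sd = Z.mean(axis=0), Z.std(axis=0)   # per-latent-variable μki, σki (S24)
print(F.shape, Z.shape, mu.shape)
```

The per-column means and standard deviations of Zk play the role of the Gaussian parameters μki, σki that S26 collects into the enhancement parameter set θk.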
Optionally, the step S3 specifically includes:
S31, collecting the pressure change of the finger contact surface in real time by using a contact force sensor:
P(t)=F(t)/A;
wherein P(t) represents the contact pressure at time t, F(t) represents the contact force at time t, and A represents the contact area between the finger and the contact surface;
S32, monitoring the two-dimensional displacement of the finger on the contact surface in real time through a displacement sensing module:
Δd(t)=√((x(t)-x0)²+(y(t)-y0)²);
wherein Δd(t) represents the displacement of the finger in the plane at time t, x(t) and y(t) represent the two-dimensional position coordinates of the finger at time t, and x0 and y0 represent the initial two-dimensional position coordinates of the finger;
S33, monitoring the rotation angle change of the finger on the contact surface through a rotation sensor:
Δθ(t)=θ(t)-θ0;
wherein Δθ(t) represents the change in the rotation angle of the finger at time t, θ(t) represents the angle value at time t, and θ0 represents the initial angle;
S34, generating a preliminary finger movement feature matrix by utilizing a spatial distribution model of signals and combining the changes of finger pressure, displacement and rotation angle:
M(t)=[P(t) Δd(t) Δθ(t)]·S(x,y);
wherein M(t) represents the finger motion feature matrix at time t, and S(x,y) represents the spatial distribution of signals at spatial coordinates (x,y);
S35, carrying out space-time feature fusion on the finger movement feature matrix to generate the space-time distribution data of finger movement:
S(t,x,y)=∫(t0→tn)M(t)·G(x,y)dt;
wherein S(t,x,y) represents the space-time distribution data of finger motion at time t and spatial coordinates (x,y), G(x,y) represents the spatial weighting function at spatial coordinates (x,y), t0 represents the start time of the motion, and tn represents the end time of the motion.
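One plausible reading of S31-S35 is sketched below; the sensor traces are synthetic, the Gaussian width is arbitrary, and reducing the time integral to a mean of the motion features is an assumption made purely for illustration.

```python
import numpy as np

def motion_features(force, area, xy, xy0, theta, theta0):
    """S31-S33 sketch: per-sample contact pressure P(t)=F(t)/A, planar
    displacement Δd(t) and rotation change Δθ(t)."""
    P = force / area
    dd = np.hypot(xy[:, 0] - xy0[0], xy[:, 1] - xy0[1])
    dth = theta - theta0
    return np.stack([P, dd, dth], axis=1)   # one row of M(t) per sample

def spatiotemporal(M, S, sigma=4.0):
    """S35 sketch: weight the signal's spatial distribution S(x,y) by a
    Gaussian G(x,y) and by the time-accumulated motion features."""
    h, w = S.shape
    yy, xx = np.mgrid[0:h, 0:w]
    G = np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (2 * sigma ** 2))
    drive = M.mean()            # crude stand-in for the integral over [t0, tn]
    return drive * S * G

rng = np.random.default_rng(1)
n = 50
M = motion_features(force=rng.uniform(0.5, 1.5, n), area=1.2e-4,
                    xy=rng.normal(0.0, 0.3, (n, 2)), xy0=(0.0, 0.0),
                    theta=rng.normal(0.0, 5.0, n), theta0=0.0)
S = rng.random((16, 16))        # stand-in spatial distribution model S(x,y)
ST = spatiotemporal(M, S)
print(M.shape, ST.shape)
```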
Optionally, the step S4 specifically includes:
S41, acquiring the space-time distribution data S(t,x,y) of finger movement, sampling it according to time t and spatial coordinates (x,y), and representing the movement offset of the finger at each sampling point as Δx(t,x,y) and Δy(t,x,y);
S42, dividing the original fingerprint texture map into grid areas of fixed size, wherein the texture data of each grid area is represented by a texture feature matrix;
S43, calculating the texture deviation value by using the space-time distribution data of the finger movement:
ΔT(t,x,y)=V(x,y)-V0(x,y)+Δx(t,x,y)·(∂V/∂x)+Δy(t,x,y)·(∂V/∂y);
wherein ΔT(t,x,y) represents the texture deviation value, V(x,y) represents the current texture feature matrix, V0(x,y) represents the initial reference texture feature matrix, ∂V/∂x represents the gradient of the current texture feature matrix in the x direction, and ∂V/∂y represents the gradient of the current texture feature matrix in the y direction;
S44, performing dynamic weight adjustment on the texture deviation value ΔT(t,x,y) to generate a compensation weight matrix for local optimization of the deviation at different positions, the adjustment being controlled by the texture gradient and the deviation magnitude:
W(x,y)=exp(-K·|∇V(x,y)|-λ·|ΔT(t,x,y)|);
wherein W(x,y) represents the dynamic compensation weight at spatial coordinates (x,y), K represents the texture gradient adjustment coefficient, exp represents the exponential function, |∇V(x,y)| represents the gradient magnitude of the current texture feature matrix V(x,y), λ represents the deviation weight adjustment coefficient, and |ΔT(t,x,y)| represents the absolute value of the texture deviation;
S45, calculating the dynamically compensated signal distribution model based on the compensation weight matrix W(x,y) and the texture deviation value ΔT(t,x,y):
Sf(x,y)=S0(x,y)+W(x,y)·ΔT(t,x,y);
wherein Sf(x,y) represents the dynamically compensated signal distribution model, and S0(x,y) represents the initial signal distribution model.
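Under assumed functional forms (first-order deviation, exponential weight), S43-S45 reduce to a few array operations. The coefficients, array sizes and the helper name `compensate` are illustrative assumptions, not values prescribed by the invention.

```python
import numpy as np

def compensate(S0, V, V0, dx, dy, K=0.5, lam=1.0):
    """S43-S45 sketch: first-order texture deviation ΔT, an exponential
    compensation weight W(x,y), and the compensated model Sf = S0 + W·ΔT."""
    gy, gx = np.gradient(V)                                # ∂V/∂y, ∂V/∂x
    dT = (V - V0) + dx * gx + dy * gy                      # deviation ΔT(t,x,y)
    W = np.exp(-K * np.hypot(gx, gy) - lam * np.abs(dT))   # weight W(x,y)
    return S0 + W * dT                                     # Sf(x,y)

rng = np.random.default_rng(2)
V0 = rng.random((32, 32))                          # reference texture matrix
V = V0 + 0.05 * rng.standard_normal((32, 32))      # slightly drifted current texture
dx = np.full((32, 32), 0.2)                        # per-pixel motion offsets Δx, Δy
dy = np.full((32, 32), -0.1)
S0 = rng.random((32, 32))                          # initial signal distribution model
Sf = compensate(S0, V, V0, dx, dy)
print(Sf.shape)
```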
Optionally, the step S5 specifically includes:
S51, dynamically modeling the dynamically compensated signal distribution model and fingerprint texture features based on a nerve-fingerprint feature interaction mechanism to generate an interaction enhancement matrix, wherein the nerve-fingerprint feature interaction mechanism comprises:
performing point-by-point correlation analysis on the dynamically compensated signal distribution model and the fingerprint texture characteristics in space, introducing spatial weight by using a Gaussian kernel function, and enhancing the influence between adjacent areas;
mapping the dynamically compensated signal distribution model and the local changes of the fingerprint texture characteristics to an interaction space, and coupling the multi-scale characteristics together through an integral operation;
calculating a weighted sum of the dynamically compensated signal distribution model and the fingerprint texture characteristics in a local neighborhood:
R(x,y)=∬S(u,v)·T(x-u,y-v)·exp(-((x-u)²+(y-v)²)/(2σ²))dudv;
wherein R(x,y) represents the interaction enhancement matrix at spatial coordinates (x,y), S(u,v) represents the spatial distribution of the signal at spatial coordinates (u,v), T(x-u,y-v) represents the distribution of the fingerprint texture feature values at the offset (x-u,y-v), exp represents the exponential function, and σ represents the scale parameter of the Gaussian kernel function;
S52, introducing a neural response synchronous amplification mechanism to enhance the signal contrast, dynamically coupling the intensity distribution of the neural signal with the interaction enhancement matrix, and generating the enhanced signal:
E(x,y)=R(x,y)·(1+η·N(x,y)/max(N(x,y)));
wherein E(x,y) represents the enhanced signal at spatial coordinates (x,y), η represents the neural response amplification factor, N(x,y) represents the neural signal intensity distribution at spatial coordinates (x,y), and max(N(x,y)) represents the maximum value of the neural signal intensity distribution.
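The Gaussian-weighted correlation of S51 and the amplification of S52 can be sketched as a direct windowed correlation. Treating the texture term as a small local patch, and the specific sizes and parameters below, are assumptions made for illustration only.

```python
import numpy as np

def gaussian_window(r, sigma):
    """Isotropic Gaussian spatial weight on a (2r+1) x (2r+1) grid."""
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

def interaction_enhance(S, T_patch, N, sigma=1.5, eta=0.8):
    """S51 sketch: correlate the compensated signal S with a Gaussian-windowed
    texture patch to get R(x,y); S52 sketch: amplify R by the normalized
    neural intensity to get the enhanced signal E(x,y)."""
    r = T_patch.shape[0] // 2
    K = T_patch * gaussian_window(r, sigma)   # texture kernel with spatial weight
    h, w = S.shape
    Sp = np.pad(S, r)
    R = np.empty_like(S)
    for y in range(h):                        # direct 2-D weighted correlation
        for x in range(w):
            R[y, x] = np.sum(Sp[y:y + 2 * r + 1, x:x + 2 * r + 1] * K)
    E = R * (1.0 + eta * N / N.max())         # neural synchronous amplification
    return R, E

rng = np.random.default_rng(3)
S = rng.random((24, 24))        # dynamically compensated signal model
T_patch = rng.random((5, 5))    # local fingerprint texture feature patch
N = rng.random((24, 24))        # neural signal intensity distribution N(x,y)
R, E = interaction_enhance(S, T_patch, N)
print(R.shape, E.shape)
```

Since the amplification factor is always at least 1 for non-negative N, the enhanced signal never falls below the interaction matrix here.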
Optionally, the step S6 specifically includes:
S61, acquiring an electrostatic signal and an enhancement signal, and performing time sequence alignment and normalization processing on the electrostatic signal and the enhancement signal;
S62, constructing a multi-mode collaborative network, wherein the multi-mode collaborative network consists of two parts, namely an electrostatic signal feature extraction branch and an enhanced signal feature extraction branch, the electrostatic signal feature extraction branch extracts local intensity distribution features of electrostatic signals through convolution operation, and the enhanced signal feature extraction branch extracts global features of enhanced signals and generates a joint feature map through feature fusion operation;
S63, inputting the joint feature map into a generative adversarial network, wherein the generative adversarial network comprises a generator and a discriminator, the generator generates optimized texture distribution characteristics from the joint feature map, and generates optimized characteristic signals by learning the internal relation between the electrostatic signal and the enhanced signal;
S64, finally outputting the optimized characteristic signals.
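S61-S62 can be sketched as cross-correlation alignment plus min-max normalization, followed by a two-branch feature stack; the adversarial stage of S63 would typically be built with a deep-learning framework and is not reproduced here. All function names, sizes and the 3x3 local filter are hypothetical.

```python
import numpy as np

def align_and_normalize(a, b):
    """S61 sketch: shift b by the lag maximizing its cross-correlation with a,
    then min-max normalize both signals to [0, 1]."""
    lags = np.arange(-len(b) + 1, len(a))
    xc = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    lag = lags[np.argmax(xc)]
    b = np.roll(b, lag)
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)
    return norm(a), norm(b), lag

def joint_feature_map(e_map, s_map):
    """S62 sketch: a local-feature branch (3x3 mean over the electrostatic map)
    stacked with a global-feature branch (enhanced map minus its mean)."""
    h, w = e_map.shape
    pad = np.pad(e_map, 1, mode="edge")
    local = np.array([[pad[i:i + 3, j:j + 3].mean() for j in range(w)]
                      for i in range(h)])
    global_ = s_map - s_map.mean()
    return np.stack([local, global_], axis=0)    # 2-channel joint feature map

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 200)
static_sig = np.sin(2 * np.pi * 5 * t)                        # electrostatic trace
enhanced_sig = np.roll(static_sig, 7) + 0.05 * rng.standard_normal(200)
a, b, lag = align_and_normalize(static_sig, enhanced_sig)
J = joint_feature_map(rng.random((16, 16)), rng.random((16, 16)))
print(J.shape)
```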
Optionally, the step S7 specifically includes:
S71, decoding the optimized characteristic signals by using a space-time characteristic decoding technology, wherein the decoding process comprises time decoding and space decoding, the time decoding is used for extracting dynamic characteristics of the optimized characteristic signals on a time sequence to generate time characteristic distribution, and the space decoding is used for extracting distribution characteristics of the optimized characteristic signals at different space positions to generate space characteristic mapping;
S72, carrying out joint processing on the time characteristic distribution and the space characteristic mapping to form space-time characteristic mapping, wherein the joint processing comprises characteristic alignment and weight balance;
S73, reconstructing the decoded space-time feature map to generate a final fingerprint image, wherein the reconstruction process comprises noise suppression, texture enhancement and boundary correction.
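A minimal stand-in for the decoding and reconstruction pipeline of S71-S73: temporal decoding via frame differencing, spatial decoding via the per-pixel mean, equal-weight fusion, then mean filtering and contrast stretching as crude noise suppression and texture enhancement. These concrete operators are assumptions, not the operators prescribed by the invention.

```python
import numpy as np

def decode_spatiotemporal(frames):
    """S71-S73 sketch: temporal feature map from frame-to-frame dynamics,
    spatial feature map from the per-pixel mean, then a simple reconstruction."""
    temporal = np.abs(np.diff(frames, axis=0)).mean(axis=0)   # dynamic features
    spatial = frames.mean(axis=0)                             # spatial features
    fused = 0.5 * temporal + 0.5 * spatial                    # weight-balanced fusion
    # reconstruction: 3x3 mean filter as a stand-in for noise suppression ...
    pad = np.pad(fused, 1, mode="edge")
    h, w = fused.shape
    smooth = np.array([[pad[i:i + 3, j:j + 3].mean() for j in range(w)]
                       for i in range(h)])
    # ... then min-max contrast stretch as a crude texture enhancement
    return (smooth - smooth.min()) / (smooth.max() - smooth.min() + 1e-12)

rng = np.random.default_rng(5)
frames = rng.random((8, 32, 32))   # optimized characteristic signal over 8 time steps
img = decode_spatiotemporal(frames)
print(img.shape)
```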
The beneficial effects of the invention are as follows:
Firstly, the invention solves the single-signal-dependence problem of the prior art by fusing multi-source bioelectric signals, including nerve signals, electrostatic signals and capacitance signals. In a complex acquisition scenario, even if one signal source is limited by wet hands, dry hands or skin state changes, the other signal sources can still provide supplementary information, ensuring the completeness and stability of fingerprint image acquisition. In addition, by dynamically modeling the multi-source signals with a higher-order variational autoencoder, the method can generate an accurate signal spatial distribution model and adaptability enhancement parameters, realizing efficient fusion and feature extraction of the multi-modal signals so that the signals accurately reflect the spatial characteristics of the fingerprint texture.
Secondly, the invention designs a dynamic compensation mechanism, which can correct texture deviation caused by finger sliding, rotation and pressure change in real time and generate a dynamically compensated signal distribution model. The mechanism is particularly suitable for scenes with low contact pressure or slight sliding and the like, and effectively solves the problems of image blurring and texture distortion caused by finger movement in the traditional method. By analyzing and compensating the space-time distribution data of finger movement, the invention ensures the stability and consistency of the fingerprint image under the condition of dynamic acquisition.
In addition, the neural-fingerprint characteristic interaction enhancement mechanism dynamically enhances the signal contrast of a key region through a neural response synchronous amplification technology. The mechanism not only can improve the feature definition of a complex texture region, but also can enhance the contrast effect of fingerprint textures under the condition of low signal intensity, thereby providing richer and more accurate feature data for high-precision fingerprint identification. Particularly in complex scenes such as wet hands, aged skin and the like, the mechanism significantly improves the signal quality and the image definition.
Furthermore, a multi-mode collaborative network is constructed, and the completeness and consistency of the fingerprint texture distribution are further improved through joint optimization of the electrostatic signal and the enhanced signal. The use of a generative adversarial network for texture optimization enables the invention to learn the deep correlation between the electrostatic signal and the enhanced signal, generating optimized characteristic signals with high resolution and strong contrast. Through this optimization, the reduced recognition accuracy caused by incomplete texture or noise interference in traditional methods is remedied.
Finally, the invention decodes the optimized characteristic signals into unified fingerprint images by a space-time characteristic decoding technology. The decoding process is combined with a Gaussian-multi-sample reconstruction model, so that the texture details and the global distribution characteristics of the fingerprint image are further optimized. The generated fingerprint image has the characteristics of high resolution, clear details and global consistency, can accurately reflect the spatial texture distribution of the fingerprint, and meets the high-quality acquisition requirement in complex application scenes.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flowchart of a fingerprint image acquisition method based on bioelectric signal enhancement according to the present invention;
FIG. 2 is a flowchart of the mechanism for generating the space-time distribution data of finger motion and performing dynamic compensation in the fingerprint image acquisition method based on bioelectric signal enhancement according to the present invention.
Detailed Description
The invention will now be described in further detail with reference to the accompanying drawings. The drawings are simplified schematic representations which merely illustrate the basic structure of the invention and therefore show only the structures which are relevant to the invention.
Referring to fig. 1 and 2, a fingerprint image acquisition method based on bioelectric signal enhancement includes the steps of:
S1, collecting a multi-source bioelectric signal of a fingerprint contact area, wherein the multi-source bioelectric signal comprises a nerve signal, an electrostatic signal and a capacitance signal, and an original signal data set is constructed;
S2, dynamically modeling the original signal data set based on a higher-order variational autoencoder to generate a spatial distribution model of the signal and adaptability enhancement parameters;
S3, utilizing a contact force sensor and a displacement sensing module to monitor the sliding, rotation and contact pressure change of the finger in real time, and generating space-time distribution data of finger movement by combining a spatial distribution model of signals;
S4, compensating texture deviation caused by finger movement in real time according to the space-time distribution data of the finger movement, and generating a dynamically compensated signal distribution model;
S5, modeling the dynamically compensated signal distribution model and fingerprint texture features in a dynamic corresponding manner through a neural-fingerprint feature interaction enhancement mechanism, and enhancing the signal contrast by utilizing a neural response synchronous amplification mechanism to generate an enhanced signal;
S6, constructing a multi-mode collaborative network in which the electrostatic signal and the enhanced signal are jointly optimized, and generating an optimized characteristic signal by producing optimized texture distribution characteristics through a generative adversarial network;
and S7, decoding the optimized characteristic signals by adopting a space-time characteristic decoding technology to generate a final fingerprint image.
In this embodiment, the S2 specifically includes:
S21, decomposing the multi-source bioelectric signals in the original signal data set, and representing the time sequences of the nerve signal, the electrostatic signal and the capacitance signal as:
Ik(t)=Σ(n=1→Nk)Akn·sin(2πfkn·t+φkn);
wherein Ik(t) represents the signal strength of the kth class signal at time t, Nk represents the number of frequency components of the kth class signal, Akn represents the amplitude of the nth component of the kth class signal, fkn represents the frequency of the nth component of the kth class signal, and φkn represents the initial phase of the nth component of the kth class signal;
S22, carrying out spectrum analysis on Ik(t) for each class of signal to construct a spectrum characteristic matrix Fk:
Fk(i,j)=∫((j-1)T→jT)Ik(t)·e^(-i2πfi·t)dt;
wherein Fk(i,j) represents the complex amplitude of the component of frequency fi in the kth class signal on time slice j, T represents the sampling time window, fi represents the frequency of the ith component, and j represents the time slice index;
S23, performing dimension reduction on the spectrum characteristic matrix Fk and mapping it into a potential feature matrix:
Zk=Fk·Vk;
wherein Zk represents the potential feature matrix of the kth class signal, and Vk represents the projection matrix obtained by principal component analysis;
S24, constructing a dynamic distribution model based on the potential feature matrix:
p(Zk)=Π(i=1→Mk)(1/(√(2π)·σki))·exp(-(zki-μki)²/(2σki²));
wherein p(Zk) represents the probability distribution of the potential feature matrix, Mk represents the number of latent variables, zki represents the value of the ith latent variable, μki represents the mean of the ith latent variable, and σki represents the standard deviation of the ith latent variable;
S25, mapping the latent variables into spatial distribution characteristics of the signal by using the dynamic distribution model to generate the spatial distribution model of the signal:
Sk(x,y)=Σ(i=1→Mk)zki·Φi(x,y);
wherein Sk(x,y) represents the spatial distribution of the kth class signal at spatial coordinates (x,y), and Φi(x,y) represents the ith orthogonal basis function at spatial coordinates (x,y);
S26, extracting an adaptability enhancing parameter according to a spatial distribution model of the signal:
θk={μki,σki,Mk};
wherein θk denotes the enhancement parameter set of the kth class signal.
In this embodiment, the step S3 specifically includes:
S31, collecting the pressure change of the finger contact surface in real time by using a contact force sensor:
P(t)=F(t)/A;
wherein P(t) represents the contact pressure at time t, F(t) represents the contact force at time t, and A represents the contact area between the finger and the contact surface;
S32, monitoring the two-dimensional displacement of the finger on the contact surface in real time through a displacement sensing module:
Δd(t)=√((x(t)-x0)²+(y(t)-y0)²);
wherein Δd(t) represents the displacement of the finger in the plane at time t, x(t) and y(t) represent the two-dimensional position coordinates of the finger at time t, and x0 and y0 represent the initial two-dimensional position coordinates of the finger;
S33, monitoring the rotation angle change of the finger on the contact surface through a rotation sensor:
Δθ(t)=θ(t)-θ0;
wherein Δθ(t) represents the change in the rotation angle of the finger at time t, θ(t) represents the angle value at time t, and θ0 represents the initial angle;
S34, generating a preliminary finger movement feature matrix by utilizing a spatial distribution model of signals and combining the changes of finger pressure, displacement and rotation angle:
M(t)=[P(t) Δd(t) Δθ(t)]·S(x,y);
wherein M(t) represents the finger motion feature matrix at time t, and S(x,y) represents the spatial distribution of signals at spatial coordinates (x,y);
S35, carrying out space-time feature fusion on the finger movement feature matrix to generate the space-time distribution data of finger movement:
S(t,x,y)=∫(t0→tn)M(t)·G(x,y)dt;
wherein S(t,x,y) represents the space-time distribution data of finger motion at time t and spatial coordinates (x,y), G(x,y) represents the spatial weighting function at spatial coordinates (x,y), t0 represents the start time of the motion, and tn represents the end time of the motion.
In this embodiment, the S4 specifically includes:
S41, acquiring the space-time distribution data S(t,x,y) of finger movement, sampling it according to time t and spatial coordinates (x,y), and representing the movement offset of the finger at each sampling point as Δx(t,x,y) and Δy(t,x,y);
S42, dividing the original fingerprint texture map into grid areas of fixed size, wherein the texture data of each grid area is represented by a texture feature matrix;
S43, calculating the texture deviation value by using the space-time distribution data of the finger movement:
ΔT(t,x,y)=V(x,y)-V0(x,y)+Δx(t,x,y)·(∂V/∂x)+Δy(t,x,y)·(∂V/∂y);
wherein ΔT(t,x,y) represents the texture deviation value, V(x,y) represents the current texture feature matrix, V0(x,y) represents the initial reference texture feature matrix, ∂V/∂x represents the gradient of the current texture feature matrix in the x direction, and ∂V/∂y represents the gradient of the current texture feature matrix in the y direction;
S44, performing dynamic weight adjustment on the texture deviation value ΔT(t,x,y) to generate a compensation weight matrix for local optimization of the deviation at different positions, the adjustment being controlled by the texture gradient and the deviation magnitude:
W(x,y)=exp(-K·|∇V(x,y)|-λ·|ΔT(t,x,y)|);
wherein W(x,y) represents the dynamic compensation weight at spatial coordinates (x,y), K represents the texture gradient adjustment coefficient, exp represents the exponential function, |∇V(x,y)| represents the gradient magnitude of the current texture feature matrix V(x,y), λ represents the deviation weight adjustment coefficient, and |ΔT(t,x,y)| represents the absolute value of the texture deviation;
S45, calculating the dynamically compensated signal distribution model based on the compensation weight matrix W(x,y) and the texture deviation value ΔT(t,x,y):
Sf(x,y)=S0(x,y)+W(x,y)·ΔT(t,x,y);
wherein Sf(x,y) represents the dynamically compensated signal distribution model, and S0(x,y) represents the initial signal distribution model.
In this embodiment, the step S5 specifically includes:
S51, dynamically modeling the dynamically compensated signal distribution model and fingerprint texture features based on a nerve-fingerprint feature interaction mechanism to generate an interaction enhancement matrix, wherein the nerve-fingerprint feature interaction mechanism comprises:
performing point-by-point correlation analysis on the dynamically compensated signal distribution model and the fingerprint texture characteristics in space, introducing spatial weight by using a Gaussian kernel function, and enhancing the influence between adjacent areas;
mapping the dynamically compensated signal distribution model and the local changes of the fingerprint texture characteristics to an interaction space, and coupling the multi-scale characteristics together through an integral operation;
calculating a weighted sum of the dynamically compensated signal distribution model and the fingerprint texture characteristics in a local neighborhood:
R(x,y)=∬S(u,v)·T(x-u,y-v)·exp(-((x-u)²+(y-v)²)/(2σ²))dudv;
wherein R(x,y) represents the interaction enhancement matrix at spatial coordinates (x,y), S(u,v) represents the spatial distribution of the signal at spatial coordinates (u,v), T(x-u,y-v) represents the distribution of the fingerprint texture feature values at the offset (x-u,y-v), exp represents the exponential function, and σ represents the scale parameter of the Gaussian kernel function;
S52, introducing a neural response synchronous amplification mechanism to enhance the signal contrast, dynamically coupling the intensity distribution of the neural signal with the interaction enhancement matrix to generate an enhancement signal:
E(x, y) = R(x, y)·(1 + η·N(x, y) / max(N(x, y)));
where E(x, y) represents the enhancement signal at the spatial coordinates (x, y), η represents the neural response amplification factor, N(x, y) represents the neural signal intensity distribution at the spatial coordinates (x, y), and max(N(x, y)) represents the maximum value of the neural signal intensity distribution.
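Steps S51 and S52 can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the Gaussian-weighted local sum follows the description of R(x, y) in S51, and the multiplicative coupling of the max-normalized neural intensity is one plausible reading of S52; the function names, the neighborhood radius, and the indexing of T at non-negative offsets are illustrative choices, not details fixed by the text:

```python
import numpy as np

def interaction_enhancement(S, T, sigma=1.0, radius=1):
    """S51 sketch: Gaussian-weighted local sum of the compensated signal S
    with the texture map T indexed at the offset (x-u, y-v):
    R(x, y) = sum over neighbours (u, v) of
              S(u, v) * T(x-u, y-v) * exp(-((x-u)^2 + (y-v)^2) / (2 sigma^2))."""
    H, W = S.shape
    R = np.zeros((H, W))
    for x in range(H):
        for y in range(W):
            acc = 0.0
            for u in range(max(0, x - radius), min(H, x + radius + 1)):
                for v in range(max(0, y - radius), min(W, y + radius + 1)):
                    dx, dy = x - u, y - v
                    if 0 <= dx < H and 0 <= dy < W:  # T indexed at the offset
                        w = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
                        acc += S[u, v] * T[dx, dy] * w
            R[x, y] = acc
    return R

def neural_amplification(R, N, eta=0.5):
    """S52 sketch: boost contrast where the (max-normalised) neural
    response is strong: E(x, y) = R(x, y) * (1 + eta * N(x, y) / max(N))."""
    return R * (1.0 + eta * N / np.max(N))

S = np.ones((4, 4))                                   # compensated signal
T = np.ones((4, 4))                                   # texture feature map
N = np.arange(16, dtype=float).reshape(4, 4) + 1.0    # neural intensities
R = interaction_enhancement(S, T)
E = neural_amplification(R, N, eta=0.5)
print(R[0, 0])  # 1.0 (only the zero offset contributes at the corner)
```

With a positive amplification factor and strictly positive neural intensities, every pixel of E exceeds the corresponding pixel of R, which is the intended contrast-raising behaviour.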
In this embodiment, the step S6 specifically includes:
S61, acquiring the electrostatic signal and the enhancement signal, and performing time-series alignment and normalization on both signals;
S62, constructing a multi-modal collaborative network consisting of two branches: an electrostatic-signal feature extraction branch and an enhanced-signal feature extraction branch. The electrostatic-signal branch extracts local intensity-distribution features of the electrostatic signal through convolution operations, while the enhanced-signal branch extracts global features of the enhancement signal; the two branches then generate a joint feature map through a feature fusion operation;
S63, inputting the joint feature map into a generative adversarial network comprising a generator and a discriminator; the generator produces optimized texture distribution features from the joint feature map and generates optimized characteristic signals by learning the intrinsic relationship between the electrostatic signal and the enhancement signal;
S64, finally outputting the optimized characteristic signals.
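The front end of this pipeline (S61 normalization plus the two S62 branches and their fusion) can be sketched without any deep-learning machinery. In this minimal NumPy sketch, a small smoothing convolution stands in for the convolutional local-feature branch and broadcast global statistics stand in for the global-feature branch; all function names and the channel layout are illustrative assumptions, and the GAN stage of S63 is not modelled:

```python
import numpy as np

def minmax_normalise(sig):
    """S61 stand-in: scale a signal into [0, 1] so the modalities are comparable."""
    sig = np.asarray(sig, dtype=float)
    rng = sig.max() - sig.min()
    return (sig - sig.min()) / rng if rng > 0 else np.zeros_like(sig)

def local_features(sig, kernel=np.array([0.25, 0.5, 0.25])):
    """Electrostatic branch stand-in: a small convolution extracting
    local intensity-distribution features."""
    return np.convolve(sig, kernel, mode="same")

def global_features(sig):
    """Enhanced-signal branch stand-in: broadcast global statistics
    (mean, std) across the sequence."""
    return np.full_like(sig, sig.mean()), np.full_like(sig, sig.std())

def joint_feature_map(electrostatic, enhanced):
    """S62 stand-in: fuse the two branches by stacking features channel-wise."""
    e = minmax_normalise(electrostatic)
    h = minmax_normalise(enhanced)
    g_mean, g_std = global_features(h)
    return np.stack([local_features(e), g_mean, g_std], axis=0)

fmap = joint_feature_map([1.0, 3.0, 2.0, 5.0], [0.2, 0.4, 0.1, 0.9])
print(fmap.shape)  # (3, 4): 1 local channel + 2 global channels, 4 samples
```

Channel-wise stacking is one common fusion choice; a learned fusion layer, as the multi-modal collaborative network implies, would replace the plain `np.stack` here.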
In this embodiment, the step S7 specifically includes:
S71, decoding the optimized characteristic signals by using a space-time characteristic decoding technology, wherein the decoding process comprises time decoding and space decoding, the time decoding is used for extracting dynamic characteristics of the optimized characteristic signals on a time sequence to generate time characteristic distribution, and the space decoding is used for extracting distribution characteristics of the optimized characteristic signals at different space positions to generate space characteristic mapping;
S72, carrying out joint processing on the time characteristic distribution and the space characteristic mapping to form space-time characteristic mapping, wherein the joint processing comprises characteristic alignment and weight balance;
S73, reconstructing the decoded space-time feature map to generate a final fingerprint image, wherein the reconstruction process comprises noise suppression, texture enhancement and boundary correction.
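The decode-and-reconstruct flow of S71 to S73 can be illustrated on a stack of signal frames. This is a sketch under loose assumptions: frame-to-frame differences stand in for temporal decoding, the gradient magnitude of the time-averaged frame stands in for spatial decoding, a weighted blend stands in for the S72 joint processing, and a median noise floor with rescaling stands in for the S73 reconstruction; none of these specific operators are prescribed by the text:

```python
import numpy as np

def decode_spatiotemporal(frames, alpha=0.5):
    """Sketch of S71-S73 on a stack of signal frames shaped (t, H, W)."""
    frames = np.asarray(frames, dtype=float)
    # S71 temporal decoding: mean absolute frame-to-frame change per pixel.
    temporal = np.abs(np.diff(frames, axis=0)).mean(axis=0)
    # S71 spatial decoding: gradient magnitude of the time-averaged frame.
    mean_img = frames.mean(axis=0)
    gy, gx = np.gradient(mean_img)
    spatial = np.hypot(gx, gy)
    # S72 joint processing: weighted blend of the two feature maps.
    joint = alpha * temporal + (1.0 - alpha) * spatial
    # S73 reconstruction stand-in: noise floor removal and rescaling.
    joint = np.clip(joint - np.median(joint), 0.0, None)
    peak = joint.max()
    return joint / peak if peak > 0 else joint

rng = np.random.default_rng(0)
img = decode_spatiotemporal(rng.random((5, 8, 8)))
print(img.shape)  # (8, 8) fingerprint-image-sized output in [0, 1]
```

The blend weight `alpha` plays the role of the weight balancing mentioned in S72: it trades off how much the final image is driven by temporal dynamics versus spatial structure.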
Example 1:
To verify the feasibility of the invention in practice, the invention was applied to an identity authentication system. The test scenario was set in a high-traffic airport security check channel; different finger states and environmental conditions were simulated, and the acquisition accuracy, stability and adaptability of the system were evaluated.
During the test, 200 volunteers were selected as test subjects, and the age distribution of the volunteers was 20 to 60 years, and the finger states included wet hands, dry hands, aged skin and normal states. The test equipment adopts a customized acquisition terminal supporting the method of the invention, and the terminal is provided with a multi-source bioelectric signal acquisition module, a high-order variation self-encoder, a dynamic compensation mechanism, a nerve-fingerprint characteristic interaction enhancement module and a multi-mode collaborative optimization network.
In a specific test process, volunteers need to collect fingerprints in different finger states as required, including direct collection after wetting hands, collection after wiping with paper towels, collection after contacting alcohol and collection in a natural state. The definition, texture integrity and system response time of the fingerprint image are recorded in real time in the acquisition process, and meanwhile the usability and accuracy of the image are evaluated through subsequent identity comparison.
In order to fully verify the superiority of the invention, the test is also compared with the existing optical fingerprint acquisition technology and single electrostatic signal acquisition technology, and the acquisition quality and robustness under the same condition are evaluated.
Under wet hand conditions, the image clarity of the method of the invention reaches 97%, far higher than the 72% of the optical technique and the 65% of the electrostatic technique. The invention effectively counteracts the interference of wet hands with the electrostatic signal through the multi-source signal fusion and dynamic compensation mechanism, and optimizes the contrast of texture features through the nerve-fingerprint feature interaction enhancement technique. Under dry hand conditions, the texture integrity index of the present invention is 96%, whereas the conventional optical and electrostatic techniques reach only 78% and 68%, respectively. The results show that the adaptability enhancement parameter of the invention can significantly improve signal quality when the skin is dry. At low contact pressure, the identity comparison accuracy of the present invention remains at 98%, while the conventional methods reach 70% and 64%, respectively. In addition, the average response time of the invention is 0.8 seconds, about 35% lower than that of the conventional methods.
The data verify that the fingerprint image acquisition quality of the invention in complex scenes is significantly superior to that of the prior art; the method effectively solves the degradation of acquisition quality under special conditions such as wet hands, dry hands and aged skin, and exhibits strong real-time performance and stability.
Table 1. Comparative analysis of fingerprint acquisition performance in complex scenes
As can be seen from Table 1 above, the fingerprint image acquisition performance of the present invention in complex scenes is significantly better than that of the conventional optical and electrostatic techniques. Under wet hand conditions, the image clarity of the invention reaches 97%, significantly higher than the 72% of the optical technique and the 65% of the electrostatic technique. By combining multi-source bioelectric signal fusion with the dynamic compensation and nerve-fingerprint feature interaction enhancement mechanisms, the interference with the electrostatic signal under wet hand conditions is effectively eliminated and the clarity and contrast of the fingerprint texture are enhanced.
In a dry hand scenario, the texture integrity of the present invention is 96%, which is also significantly better than 78% of optical techniques and 68% of electrostatic techniques. This is because conventional techniques generally cannot capture enough signal detail in the dry state of the skin, while the present invention ensures signal quality and texture stability in dry skin conditions through dynamic adjustment of the adaptation-enhancing parameters and optimization of the multimodal synergistic network.
For the aged-skin test, the image clarity and texture integrity of the present invention were 95% and 94%, respectively, while those of the optical technique were only 75% and 73%, and those of the electrostatic technique were lower still, at 63% and 61%. The results show that, even when skin conductivity is reduced or the texture is blurred, the invention can effectively capture texture details through depth modeling and higher-order distribution optimization of the multi-source signals, ensuring that the acquired fingerprint image is clear and complete.
In the low contact pressure scenario, the identity comparison accuracy of the present invention reaches 98%, while the optical and electrostatic techniques reach only 70% and 64%, respectively. The conventional methods are highly sensitive to contact pressure, which easily results in insufficient signal strength and failure to generate high-quality images; the invention, by utilizing the dynamic compensation mechanism and the texture deviation correction function, can still ensure signal integrity and contrast at low contact pressure.
For the normal-state test, the method is superior to the conventional methods in all indexes: the image clarity and texture integrity reach 98% and 97%, respectively, the identity comparison accuracy is as high as 99%, and the response time is only 0.7 seconds, which is markedly faster than the 1.1 seconds of the optical technique and the 1.3 seconds of the electrostatic technique. This fully demonstrates the real-time performance and efficiency of the present invention.
Taken together, the invention exhibits excellent adaptability and robustness under wet hand, dry hand, aged skin, low contact pressure and normal conditions, effectively solves the problem of degraded acquisition quality of conventional techniques in complex scenes, offers higher response speed and recognition efficiency, and provides an innovative direction and practical value for the development of fingerprint image acquisition technology.
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art, within the technical scope disclosed by the present invention, according to the technical scheme of the present invention and its inventive concept, shall be covered by the scope of protection of the present invention.