
CN120071407A - Fingerprint image acquisition method based on bioelectric signal enhancement - Google Patents


Info

Publication number
CN120071407A
CN120071407A
Authority
CN
China
Prior art keywords
signal
texture
feature
distribution
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510135726.4A
Other languages
Chinese (zh)
Other versions
CN120071407B (en)
Inventor
徐向辉
张媚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taizhou Dexing Electronic Technology Co ltd
Original Assignee
Taizhou Dexing Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taizhou Dexing Electronic Technology Co ltd filed Critical Taizhou Dexing Electronic Technology Co ltd
Priority to CN202510135726.4A priority Critical patent/CN120071407B/en
Publication of CN120071407A publication Critical patent/CN120071407A/en
Application granted granted Critical
Publication of CN120071407B publication Critical patent/CN120071407B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V 40/1306 — Fingerprints or palmprints; sensors therefor; non-optical, e.g. ultrasonic or capacitive sensing
    • G06F 18/2135 — Feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F 18/253 — Fusion techniques of extracted features
    • G06N 3/0455 — Auto-encoder networks; encoder-decoder networks
    • G06N 3/047 — Probabilistic or stochastic networks
    • G06N 3/0475 — Generative networks
    • G06N 3/094 — Adversarial learning
    • G06V 10/20 — Image preprocessing
    • G06V 10/52 — Scale-space analysis, e.g. wavelet analysis
    • G06V 10/54 — Extraction of image or video features relating to texture
    • G06V 10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06V 40/1347 — Fingerprints or palmprints; preprocessing; feature extraction
    • G06V 40/15 — Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
    • G06F 2218/08 — Feature extraction in pattern recognition specially adapted for signal processing
    • Y02T 10/40 — Engine management systems (climate-change mitigation tagging)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Cardiology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physiology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Input (AREA)

Abstract


The present invention discloses a fingerprint image acquisition method based on bioelectric signal enhancement, comprising the following steps: S1, constructing an original signal data set; S2, performing dynamic modeling and generating a spatial distribution model of the signals and adaptive enhancement parameters; S3, monitoring finger sliding, rotation and pressure changes, and generating spatiotemporal distribution data in combination with the spatial distribution model; S4, compensating the texture deviation caused by motion according to the spatiotemporal distribution data, and generating a signal distribution model; S5, enhancing the signal contrast through a neural-fingerprint feature interactive enhancement mechanism and a neural response synchronous amplification mechanism, generating an enhanced signal; S6, constructing a multimodal collaborative network to optimize the texture distribution characteristics, generating an optimized feature signal; S7, decoding the optimized feature signal using spatiotemporal feature decoding technology to generate the final fingerprint image. The present invention uses multi-source bioelectric signal fusion and dynamic optimization to achieve high-precision fingerprint image acquisition in complex scenarios.

Description

Fingerprint image acquisition method based on bioelectric signal enhancement
Technical Field
The invention relates to the technical field of biological identification and fingerprint acquisition, in particular to a fingerprint image acquisition method based on bioelectric signal enhancement.
Background
Fingerprint identification technology is an important component of the field of biometric identification and is widely applied in areas such as identity authentication, security protection and smart-device unlocking. Traditional fingerprint acquisition techniques rely primarily on optical, electrostatic or capacitive sensors to generate digitized images by acquiring characteristic information from the fingerprint surface. However, in complex scenarios, such as wet hands, dry hands, aged skin or low contact pressure, conventional techniques often suffer reduced acquisition quality. This degradation may manifest as blurring of the fingerprint image, loss of detail or increased noise, affecting the accuracy and robustness of the fingerprint identification system.
Optical fingerprint acquisition techniques use reflected or transmitted light to acquire fingerprint texture, but under wet hand conditions, the refraction and scattering effects of moisture can significantly reduce image quality. Meanwhile, the optical technology is sensitive to surface pollution and is easy to be interfered by greasy dirt, dust and the like. The electrostatic fingerprint acquisition technology generates texture features by capturing electrostatic signals between a finger and an acquisition surface, but when the contact pressure is insufficient or the skin surface is too dry, the intensity of the electrostatic signals can be significantly reduced, resulting in incomplete texture features. The capacitive fingerprint acquisition technology constructs a fingerprint image by detecting the capacitance change between the skin and the electrode, but under special conditions such as aged skin, the acquired capacitance signal may deviate due to the reduction of skin elasticity and conductivity, and the definition of the image is affected.
In addition, conventional fingerprint acquisition techniques typically rely on a single signal source and lack comprehensive use of multimodal information. This single-signal dependence makes it difficult for the system to adapt to complex acquisition scenarios. For example, under wet-hand conditions the electrostatic signal may fail entirely, and relying on the capacitive signal alone may not capture enough detail information. Moreover, most signal processing and image generation methods in the prior art are static and cannot adapt dynamically to real-time signal changes, leaving the system poorly equipped for scenarios involving finger movement, pressure fluctuation or rotation.
Another important technical limitation lies in the deficiencies of conventional fingerprint acquisition systems in signal enhancement and texture optimization. The prior art generally improves signal acquisition capability through hardware, but this approach is costly and adapts poorly to complex scenarios. For signal processing, traditional methods mostly adopt fixed rules or simple filtering algorithms, which can neither fully exploit the inherent correlations of biological signals nor deeply model dynamically changing signals. For example, static signal processing struggles to capture the spatiotemporal characteristics of the signal distribution during finger motion, so the generated fingerprint image suffers from dynamic distortion or boundary blurring.
In terms of signal enhancement and feature optimization, the prior art generally relies on low-dimensional feature modeling and cannot adequately capture the high-dimensional distribution characteristics and complex spatiotemporal dependencies of multi-source signals. Furthermore, it lacks efficient compensation mechanisms for dynamic signal changes. For example, in low-contact-pressure or slight-sliding scenarios, fingerprint texture features are prone to deformation or offset, while conventional static compensation algorithms have difficulty adjusting the signal distribution in real time, degrading the acquired image quality.
Therefore, how to provide a fingerprint image acquisition method based on bioelectric signal enhancement is a problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a fingerprint image acquisition method based on bioelectric signal enhancement, which uses multi-source bioelectric signal fusion and dynamic optimization to realize high-precision fingerprint image acquisition in complex scenes through joint modeling and enhancement of neural, electrostatic and capacitance signals. The degradation of fingerprint acquisition quality under wet-hand, dry-hand, aged-skin and low-contact-pressure conditions is effectively addressed through dynamic compensation, feature interaction enhancement, multimodal collaborative optimization and spatiotemporal decoding. The generated fingerprint image offers high resolution, clear texture detail and global consistency, with strong adaptability, good robustness and high acquisition stability, providing new support for fingerprint identification technology.
According to an embodiment of the invention, the fingerprint image acquisition method based on bioelectric signal enhancement comprises the following steps:
S1, collecting multi-source bioelectric signals of the fingerprint contact area, wherein the multi-source bioelectric signals comprise a neural signal, an electrostatic signal and a capacitance signal, and constructing an original signal data set;
S2, dynamically modeling the original signal data set based on a higher-order variational autoencoder to generate a spatial distribution model of the signals and adaptive enhancement parameters;
S3, using a contact force sensor and a displacement sensing module to monitor finger sliding, rotation and contact pressure changes in real time, and generating spatiotemporal distribution data of finger motion in combination with the spatial distribution model of the signals;
S4, compensating in real time the texture deviation caused by finger motion according to the spatiotemporal distribution data of finger motion, and generating a dynamically compensated signal distribution model;
S5, dynamically modeling the correspondence between the dynamically compensated signal distribution model and the fingerprint texture features through a neural-fingerprint feature interaction enhancement mechanism, and enhancing the signal contrast using a neural response synchronous amplification mechanism to generate an enhanced signal;
S6, constructing a multimodal collaborative network in which the electrostatic signal and the enhanced signal are jointly optimized, and generating an optimized feature signal by optimizing the texture distribution characteristics through a generative adversarial network;
S7, decoding the optimized feature signal using a spatiotemporal feature decoding technique to generate the final fingerprint image.
Optionally, the S2 specifically includes:
S21, decomposing the multi-source bioelectric signals in the original signal data set, and representing the time series of the neural, electrostatic and capacitance signals as:
I_k(t) = Σ_{n=1..N_k} a_kn·sin(2π·f_kn·t + φ_kn);
wherein I_k(t) represents the signal strength of the k-th class of signal at time t, N_k represents the number of frequency components of the k-th class of signal, a_kn represents the amplitude of the n-th component of the k-th class of signal, f_kn represents the frequency of the n-th component, and φ_kn represents the initial phase of the n-th component;
S22, carrying out spectrum analysis on I_k(t) of each class of signal to construct a spectral feature matrix F_k:
F_k(i,j) = (1/T)·∫_{(j−1)T}^{jT} I_k(t)·exp(−2πι·f_i·t) dt (ι the imaginary unit);
wherein F_k(i,j) represents the complex amplitude of the component of frequency f_i in the k-th class of signal on time slice j, T represents the sampling time window, f_i represents the frequency of the i-th component, and j represents the time slice index;
S23, performing dimensionality reduction on the spectral feature matrix F_k, and mapping it to a latent feature matrix:
Z_k = F_k·V_k;
wherein Z_k represents the latent feature matrix of the k-th class of signal, and V_k represents the projection matrix obtained by principal component analysis;
S24, constructing a dynamic distribution model based on the latent feature matrix:
p(Z_k) = Π_{i=1..M_k} (1/(√(2π)·σ_ki))·exp(−(z_ki − μ_ki)²/(2σ_ki²));
wherein p(Z_k) represents the probability distribution of the latent feature matrix, M_k represents the number of latent variables, z_ki represents the value of the i-th latent variable, μ_ki represents the mean of the i-th latent variable, and σ_ki represents its standard deviation;
S25, mapping the latent variables to the spatial distribution characteristics of the signal using the dynamic distribution model, to generate the spatial distribution model of the signal:
S_k(x,y) = Σ_{i=1..M_k} z_ki·Φ_i(x,y);
wherein S_k(x,y) represents the spatial distribution of the k-th class of signal at spatial coordinates (x,y), and Φ_i(x,y) represents the i-th orthogonal basis function at spatial coordinates (x,y);
S26, extracting the adaptive enhancement parameters from the spatial distribution model of the signal:
θ_k = {μ_ki, σ_ki, M_k};
wherein θ_k denotes the enhancement parameter set of the k-th class of signal.
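The decomposition, spectral analysis and latent-statistics extraction of steps S21–S26 can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the patent's implementation: a synthetic two-tone signal stands in for one bioelectric channel, the magnitude spectrum replaces the complex amplitudes of F_k, and a plain eigendecomposition PCA replaces the higher-order variational autoencoder; the function names `spectral_feature_matrix` and `pca_reduce` are hypothetical.

```python
import numpy as np

def spectral_feature_matrix(signal, n_slices):
    """Split the signal into time slices and take the magnitude
    spectrum of each slice (a simple stand-in for F_k(i, j))."""
    slices = np.array_split(signal, n_slices)
    L = max(len(s) for s in slices)          # zero-pad to equal length
    spectra = [np.abs(np.fft.rfft(s, n=L)) for s in slices]
    return np.stack(spectra, axis=1)         # shape: (freqs, n_slices)

def pca_reduce(F, n_components):
    """Project the spectral matrix onto its leading principal
    components (latent feature matrix, Z_k = F_k·V_k in the text)."""
    X = F.T - F.T.mean(axis=0)               # time slices as samples
    cov = X.T @ X / max(len(X) - 1, 1)       # covariance of spectra
    vals, vecs = np.linalg.eigh(cov)         # ascending eigenvalues
    V = vecs[:, ::-1][:, :n_components]      # keep the top components
    return X @ V

# Synthetic "bioelectric" channel: two tones plus light noise.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
sig += 0.05 * np.random.default_rng(0).normal(size=t.size)

F = spectral_feature_matrix(sig, n_slices=8)
Z = pca_reduce(F, n_components=3)
mu, sigma = Z.mean(axis=0), Z.std(axis=0)    # Gaussian latent statistics
theta = {"mu": mu, "sigma": sigma, "M": Z.shape[1]}  # enhancement params
```

The per-component mean and standard deviation collected in `theta` mirror the enhancement parameter set θ_k = {μ_ki, σ_ki, M_k} of S26.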
Optionally, the step S3 specifically includes:
S31, collecting the pressure change of the finger contact surface in real time with a contact force sensor:
P(t) = F(t)/A;
wherein P(t) represents the contact pressure at time t, F(t) represents the contact force at time t, and A represents the contact area between the finger and the contact surface;
S32, monitoring the two-dimensional displacement of the finger on the contact surface in real time through a displacement sensing module:
Δd(t) = √((x(t) − x_0)² + (y(t) − y_0)²);
wherein Δd(t) represents the in-plane displacement of the finger at time t, x(t) and y(t) represent the two-dimensional position coordinates of the finger at time t, and x_0 and y_0 represent the initial two-dimensional position coordinates;
S33, monitoring the change of the rotation angle of the finger on the contact surface through a rotation sensor:
Δθ(t) = θ(t) − θ_0;
wherein Δθ(t) represents the change in the rotation angle of the finger at time t, θ(t) represents the angle value at time t, and θ_0 represents the initial angle;
S34, generating a preliminary finger movement feature matrix by utilizing a spatial distribution model of signals and combining the changes of finger pressure, displacement and rotation angle:
M(t)=[P(t) Δd(t) Δθ(t)]·S(x,y);
wherein M (t) represents a finger motion feature matrix at time t and S (x, y) represents a spatial distribution of signals on spatial coordinates (x, y);
S35, carrying out spatiotemporal feature fusion on the finger motion feature matrix to generate the spatiotemporal distribution data of finger motion:
S(t,x,y) = ∫_{t_0}^{t_n} M(τ)·G(x,y) dτ;
wherein S(t,x,y) represents the spatiotemporal distribution data of finger motion at time t and spatial coordinates (x,y), G(x,y) represents the spatial weighting function at spatial coordinates (x,y), t_0 represents the start time of the motion, and t_n represents the end time of the motion.
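Steps S31–S35 can be sketched with synthetic sensor traces in NumPy. This is a minimal sketch, assuming made-up force, displacement and angle data; `motion_features` and `spatiotemporal_fusion` are hypothetical names, and the fusion integral is discretised to a single accumulated motion weight per time sample.

```python
import numpy as np

def motion_features(force, area, xy, angle, xy0, angle0):
    """Per-sample motion features from S31-S33: contact pressure
    P(t) = F(t)/A, planar displacement Δd(t), rotation change Δθ(t)."""
    P = force / area
    dd = np.hypot(xy[:, 0] - xy0[0], xy[:, 1] - xy0[1])
    dth = angle - angle0
    return np.stack([P, dd, dth], axis=1)   # one row of M(t) per sample

def spatiotemporal_fusion(M, S, G, dt):
    """Discretised fusion of S35: accumulate the motion-feature
    magnitude over time against the spatial model S(x,y), weighted
    by the spatial function G(x,y)."""
    w = np.linalg.norm(M, axis=1)           # scalar motion weight per sample
    return (w.sum() * dt) * S * G           # fused spatio-temporal map

rng = np.random.default_rng(1)
n = 50
force = rng.uniform(0.5, 1.5, n)                      # contact force trace
xy = np.cumsum(rng.normal(0.0, 0.01, (n, 2)), axis=0)  # finger drift
ang = np.cumsum(rng.normal(0.0, 0.5, n))              # rotation trace
M = motion_features(force, area=1.2, xy=xy, angle=ang,
                    xy0=(0.0, 0.0), angle0=0.0)

g1 = np.exp(-((np.arange(16) - 8) ** 2) / 32.0)
G = np.outer(g1, g1)                        # Gaussian spatial weighting
S = rng.uniform(0.0, 1.0, (16, 16))         # spatial distribution model
fused = spatiotemporal_fusion(M, S, G, dt=0.01)
```

In a real acquisition loop the rows of `M` would arrive one per sensor sample and the fused map would be updated incrementally rather than in one batch.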
Optionally, the step S4 specifically includes:
S41, acquiring the spatiotemporal distribution data S(t,x,y) of finger motion, sampling it over time t and spatial coordinates (x,y), and representing the motion offset of the finger at each sampling point as Δx(t,x,y) and Δy(t,x,y);
s42, dividing the original fingerprint texture map into grid areas with fixed sizes, wherein texture data of each grid area is represented by a texture feature matrix;
S43, calculating the texture deviation value using the spatiotemporal distribution data of finger motion:
ΔT(t,x,y) = |V(x,y) − V_0(x,y)| + Δx(t,x,y)·∂V(x,y)/∂x + Δy(t,x,y)·∂V(x,y)/∂y;
wherein ΔT(t,x,y) represents the texture deviation value, V(x,y) represents the current texture feature matrix, V_0(x,y) represents the initial reference texture feature matrix, ∂V(x,y)/∂x represents the gradient of the current texture feature matrix in the x direction, and ∂V(x,y)/∂y represents its gradient in the y direction;
S44, carrying out dynamic weight adjustment on the texture deviation value ΔT(t,x,y) to generate a compensation weight matrix for locally optimizing the deviations at different positions, the adjustment being controlled by the texture gradient and the deviation magnitude:
W(x,y) = exp(−K·|∇V(x,y)|)·(1 + λ·|ΔT(t,x,y)|);
wherein W(x,y) represents the dynamic compensation weight at spatial coordinates (x,y), K represents the texture gradient adjustment coefficient, exp represents the exponential function, |∇V(x,y)| represents the gradient magnitude of the current texture feature matrix V(x,y), λ represents the deviation weight adjustment coefficient, and |ΔT(t,x,y)| represents the absolute value of the texture deviation;
S45, calculating the dynamically compensated signal distribution model based on the compensation weight matrix W(x,y) and the texture deviation value ΔT(t,x,y):
S_f(x,y) = S_0(x,y) + W(x,y)·ΔT(t,x,y);
wherein S_f(x,y) represents the dynamically compensated signal distribution model and S_0(x,y) represents the initial signal distribution model.
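The deviation-and-compensation chain of S43–S45 can be sketched in NumPy. This is a minimal sketch under stated assumptions: synthetic texture grids stand in for the fingerprint texture matrices, the gradients come from `np.gradient`, and the weight formula follows one plausible reading of S44 (exponential damping by gradient magnitude, linear boost by deviation); all names are hypothetical.

```python
import numpy as np

def texture_deviation(V, V0, dx, dy):
    """ΔT(t,x,y) = |V - V0| + Δx·∂V/∂x + Δy·∂V/∂y (S43)."""
    gy, gx = np.gradient(V)       # np.gradient returns d/dy then d/dx
    return np.abs(V - V0) + dx * gx + dy * gy

def compensation_weights(V, dT, K=1.0, lam=0.5):
    """W(x,y) = exp(-K·|∇V|)·(1 + λ·|ΔT|): damp compensation across
    strong texture edges, strengthen it where the deviation is large."""
    gy, gx = np.gradient(V)
    gmag = np.hypot(gx, gy)       # gradient magnitude |∇V|
    return np.exp(-K * gmag) * (1.0 + lam * np.abs(dT))

def compensate(S0, W, dT):
    """S_f(x,y) = S_0(x,y) + W(x,y)·ΔT(t,x,y) (S45)."""
    return S0 + W * dT

rng = np.random.default_rng(2)
V0 = rng.uniform(0.0, 1.0, (32, 32))      # initial reference texture
V = V0 + 0.05 * rng.normal(size=V0.shape)  # slightly drifted texture
dx = rng.normal(0.0, 0.1, V.shape)        # per-pixel motion offsets Δx
dy = rng.normal(0.0, 0.1, V.shape)        # per-pixel motion offsets Δy

dT = texture_deviation(V, V0, dx, dy)
W = compensation_weights(V, dT)
Sf = compensate(V0, W, dT)
```

Because the exponential term is always positive and the deviation boost is at least 1, every weight in `W` stays strictly positive, so the compensation never flips the sign of the deviation it applies.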
Optionally, the step S5 specifically includes:
S51, dynamically modeling the dynamically compensated signal distribution model and the fingerprint texture features based on a neural-fingerprint feature interaction mechanism to generate an interaction enhancement matrix, wherein the neural-fingerprint feature interaction mechanism comprises:
performing point-by-point spatial correlation analysis between the dynamically compensated signal distribution model and the fingerprint texture features, introducing spatial weights with a Gaussian kernel function to enhance the influence between adjacent areas;
mapping the local variations of the dynamically compensated signal distribution model and of the fingerprint texture features into an interaction space, and coupling the multi-scale features together through an integral operation;
calculating a weighted sum of the dynamically compensated signal distribution model and the fingerprint texture features over the local neighborhood:
R(x,y) = ∬ S(u,v)·T(x−u, y−v)·exp(−((x−u)² + (y−v)²)/(2σ²)) du dv;
wherein R(x,y) represents the interaction enhancement matrix at spatial coordinates (x,y), S(u,v) represents the spatial distribution of the signal at spatial coordinates (u,v), T(x−u, y−v) represents the fingerprint texture feature value at the offset (x−u, y−v), exp represents the exponential function, and σ represents the scale parameter of the Gaussian kernel function;
S52, introducing a neural response synchronous amplification mechanism to enhance the signal contrast, dynamically coupling the intensity distribution of the neural signal with the interaction enhancement matrix, and generating the enhanced signal:
E(x,y) = R(x,y)·(1 + η·N(x,y)/max(N(x,y)));
wherein E(x,y) represents the enhanced signal at spatial coordinates (x,y), η represents the neural response amplification factor, N(x,y) represents the neural signal intensity distribution at spatial coordinates (x,y), and max(N(x,y)) represents the maximum value of the neural signal intensity distribution.
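The interaction enhancement of S51 and the synchronous amplification of S52 can be sketched in NumPy. This is a minimal sketch under simplifying assumptions: the double integral is truncated to a finite (2·radius+1)² neighbourhood, and the offset texture term T(x−u, y−v) is approximated by weighting the texture map per offset rather than resampling it; the function names and the synthetic maps are hypothetical.

```python
import numpy as np

def interaction_enhance(S, T, sigma=1.5, radius=3):
    """Truncated discrete version of R(x,y) from S51: couple the
    compensated signal S with the texture map T through a Gaussian
    spatial weight over a small neighbourhood."""
    H, W = S.shape
    R = np.zeros((H, W))
    Sp = np.pad(S, radius)                  # zero-pad the borders
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
            shifted = Sp[radius + dy: radius + dy + H,
                         radius + dx: radius + dx + W]
            R += w * shifted * T            # Gaussian-weighted coupling
    return R

def amplify(R, N, eta=0.8):
    """E(x,y) = R·(1 + η·N/max(N)): neural response synchronous
    amplification (S52)."""
    return R * (1.0 + eta * N / N.max())

rng = np.random.default_rng(3)
S = rng.uniform(0.0, 1.0, (24, 24))   # dynamically compensated signal
T = rng.uniform(0.0, 1.0, (24, 24))   # fingerprint texture features
N = rng.uniform(0.1, 1.0, (24, 24))   # neural signal intensity
R = interaction_enhance(S, T)
E = amplify(R, N)
```

Note that when the neural intensity is uniform, `amplify` reduces to a constant gain of (1 + η), which matches the formula's behaviour at N(x,y) = max(N).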
Optionally, the step S6 specifically includes:
S61, acquiring the electrostatic signal and the enhanced signal, and performing time-series alignment and normalization on both;
S62, constructing a multimodal collaborative network consisting of two branches, an electrostatic-signal feature extraction branch and an enhanced-signal feature extraction branch, wherein the electrostatic-signal branch extracts the local intensity distribution features of the electrostatic signal through convolution operations, and the enhanced-signal branch extracts the global features of the enhanced signal and generates a joint feature map through a feature fusion operation;
S63, inputting the joint feature map into a generative adversarial network comprising a generator and a discriminator, wherein the generator generates optimized texture distribution characteristics from the joint feature map and produces the optimized feature signal by learning the intrinsic relationship between the electrostatic signal and the enhanced signal;
S64, finally outputting the optimized feature signal.
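The two-branch fusion of S61–S62 can be sketched in NumPy; the adversarial optimization of S63 is deliberately omitted here, since a trainable GAN would go well beyond a sketch. The Sobel-style edge kernel for the local branch and the per-image standardisation for the global branch are illustrative assumptions, not the patent's design.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution for the electrostatic branch."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def joint_feature_map(static_sig, enhanced_sig):
    """Two-branch fusion of S62: a local-intensity branch on the
    electrostatic signal (small edge kernel) and a global branch on
    the enhanced signal (standardisation), stacked as a two-channel
    joint feature map."""
    edge = np.array([[-1.0, 0.0, 1.0],
                     [-2.0, 0.0, 2.0],
                     [-1.0, 0.0, 1.0]])    # Sobel-style kernel
    local = conv2d_valid(static_sig, edge)
    g = (enhanced_sig - enhanced_sig.mean()) / (enhanced_sig.std() + 1e-8)
    global_feat = g[1:-1, 1:-1]            # crop to the 'valid' size
    return np.stack([local, global_feat])  # shape: (2, H-2, W-2)

rng = np.random.default_rng(4)
static_sig = rng.uniform(0.0, 1.0, (16, 16))    # electrostatic signal
enhanced_sig = rng.uniform(0.0, 1.0, (16, 16))  # enhanced signal from S5
joint = joint_feature_map(static_sig, enhanced_sig)
```

In the patent's full pipeline this joint map would be the generator's input in S63; here it simply demonstrates how the two modalities end up aligned channel-wise.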
Optionally, the step S7 specifically includes:
S71, decoding the optimized characteristic signals by using a space-time characteristic decoding technology, wherein the decoding process comprises time decoding and space decoding, the time decoding is used for extracting dynamic characteristics of the optimized characteristic signals on a time sequence to generate time characteristic distribution, and the space decoding is used for extracting distribution characteristics of the optimized characteristic signals at different space positions to generate space characteristic mapping;
S72, carrying out joint processing on the time characteristic distribution and the space characteristic mapping to form space-time characteristic mapping, wherein the joint processing comprises characteristic alignment and weight balance;
S73, reconstructing the decoded space-time feature map to generate a final fingerprint image, wherein the reconstruction process comprises noise suppression, texture enhancement and boundary correction.
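The decode-and-reconstruct flow of S71–S73 can be sketched in NumPy. This is a minimal sketch under stated assumptions: temporal decoding is reduced to a mean over the time axis, noise suppression to a 3×3 box filter, texture enhancement to a min-max contrast stretch, and boundary correction to edge padding; the patent's Gaussian-multi-sample reconstruction model is not reproduced.

```python
import numpy as np

def decode_and_reconstruct(feat):
    """Sketch of S71-S73 for a feature signal of shape (T, H, W):
    collapse the time axis (temporal decoding), keep the per-pixel
    layout (spatial decoding), then smooth and contrast-stretch the
    result into a final image in [0, 1]."""
    temporal = feat.mean(axis=0)              # (H, W) time features
    H, W = temporal.shape
    p = np.pad(temporal, 1, mode="edge")      # boundary correction
    # Noise suppression: 3x3 box filter built from shifted slices.
    smooth = sum(p[i:i + H, j:j + W]
                 for i in range(3) for j in range(3)) / 9.0
    # Texture enhancement: min-max stretch to [0, 1].
    lo, hi = smooth.min(), smooth.max()
    return (smooth - lo) / (hi - lo + 1e-8)

rng = np.random.default_rng(5)
feat = rng.uniform(0.0, 1.0, (10, 32, 32))    # optimised feature signal
img = decode_and_reconstruct(feat)
```

The output is a single-channel map in [0, 1], i.e. the shape a downstream fingerprint matcher would expect for the final image.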
The beneficial effects of the invention are as follows:
Firstly, the invention solves the single-signal dependence problem of the prior art by fusing multi-source bioelectric signals, including neural, electrostatic and capacitance signals. In a complex acquisition scene, even if one signal source is limited due to wet hands, dry hands or skin-state changes, the other signal sources can still provide complementary information, ensuring the completeness and stability of fingerprint image acquisition. In addition, by dynamically modeling the multi-source signals with a higher-order variational autoencoder, the method generates an accurate spatial distribution model of the signals and adaptive enhancement parameters, achieving efficient fusion and feature extraction of the multimodal signals so that the signals accurately reflect the spatial characteristics of the fingerprint texture.
Secondly, the invention designs a dynamic compensation mechanism, which can correct texture deviation caused by finger sliding, rotation and pressure change in real time and generate a dynamically compensated signal distribution model. The mechanism is particularly suitable for scenes with low contact pressure or slight sliding and the like, and effectively solves the problems of image blurring and texture distortion caused by finger movement in the traditional method. By analyzing and compensating the space-time distribution data of finger movement, the invention ensures the stability and consistency of the fingerprint image under the condition of dynamic acquisition.
In addition, the neural-fingerprint characteristic interaction enhancement mechanism dynamically enhances the signal contrast of a key region through a neural response synchronous amplification technology. The mechanism not only can improve the feature definition of a complex texture region, but also can enhance the contrast effect of fingerprint textures under the condition of low signal intensity, thereby providing richer and more accurate feature data for high-precision fingerprint identification. Particularly in complex scenes such as wet hands, aged skin and the like, the mechanism significantly improves the signal quality and the image definition.
Furthermore, a multimodal collaborative network is constructed, and the completeness and consistency of the fingerprint texture distribution are further improved through the joint optimization of the electrostatic signal and the enhanced signal. The use of a generative adversarial network in texture optimization enables the invention to learn the deep correlation between the electrostatic signal and the enhanced signal, generating optimized feature signals with high resolution and strong contrast. This optimization overcomes the reduced recognition accuracy caused by incomplete texture or noise interference in conventional methods.
Finally, the invention decodes the optimized characteristic signals into unified fingerprint images by a space-time characteristic decoding technology. The decoding process is combined with a Gaussian-multi-sample reconstruction model, so that the texture details and the global distribution characteristics of the fingerprint image are further optimized. The generated fingerprint image has the characteristics of high resolution, clear details and global consistency, can accurately reflect the spatial texture distribution of the fingerprint, and meets the high-quality acquisition requirement in complex application scenes.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate the invention and, together with the embodiments, serve to explain it. In the drawings:
FIG. 1 is a flowchart of a fingerprint image acquisition method based on bioelectric signal enhancement according to the present invention;
FIG. 2 is a flowchart of the mechanism for generating and dynamically compensating the space-time distribution data of finger motion in the fingerprint image acquisition method based on bioelectric signal enhancement according to the present invention.
Detailed Description
The invention will now be described in further detail with reference to the accompanying drawings. The drawings are simplified schematic representations which merely illustrate the basic structure of the invention and therefore show only the structures which are relevant to the invention.
Referring to fig. 1 and 2, a fingerprint image acquisition method based on bioelectric signal enhancement includes the steps of:
S1, collecting multi-source bioelectric signals of a fingerprint contact area, wherein the multi-source bioelectric signals comprise nerve signals, electrostatic signals and capacitance signals, and constructing an original signal data set;
S2, dynamically modeling the original signal data set based on a high-order variational autoencoder to generate a spatial distribution model of the signals and adaptability enhancement parameters;
S3, monitoring the sliding, rotation and contact pressure changes of the finger in real time by using a contact force sensor and a displacement sensing module, and generating space-time distribution data of finger motion in combination with the spatial distribution model of the signals;
S4, compensating texture deviation caused by finger motion in real time according to the space-time distribution data of the finger motion, and generating a dynamically compensated signal distribution model;
S5, dynamically modeling the correspondence between the dynamically compensated signal distribution model and fingerprint texture features through a neural-fingerprint feature interaction enhancement mechanism, enhancing the signal contrast by a neural response synchronous amplification mechanism, and generating an enhanced signal;
S6, constructing a multi-modal cooperative network in which the electrostatic signals and the enhanced signals are jointly optimized, optimizing the texture distribution characteristics through a generative adversarial network, and generating optimized characteristic signals;
S7, decoding the optimized characteristic signals by a space-time feature decoding technique to generate a final fingerprint image.
In this embodiment, the step S2 specifically includes:
S21, decomposing the multi-source bioelectric signals in the original signal data set, and representing the time series of the nerve signals, the electrostatic signals and the capacitance signals as:
I_k(t) = Σ_{n=1}^{N_k} a_kn · sin(2π·f_kn·t + φ_kn);
wherein I_k(t) represents the signal strength of the k-th class signal at time t, N_k represents the number of frequency components of the k-th class signal, a_kn represents the amplitude of the n-th component of the k-th class signal, f_kn represents the frequency of the n-th component of the k-th class signal, and φ_kn represents the initial phase of the n-th component of the k-th class signal;
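As an illustration of the decomposition in S21, the sketch below synthesizes one class of signal as a sum of sinusoidal components, each with its own amplitude, frequency and initial phase; the concrete component values (10 Hz and 40 Hz) are hypothetical, not taken from the specification.

```python
import numpy as np

def multi_component_signal(t, amps, freqs, phases):
    """I_k(t) as a sum of sinusoids: each term contributes a_kn*sin(2*pi*f_kn*t + phi_kn)."""
    t = np.asarray(t, dtype=float)
    return sum(a * np.sin(2 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phases))

# Hypothetical neural-signal components sampled over one second
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
neural = multi_component_signal(t, amps=[1.0, 0.3], freqs=[10.0, 40.0],
                                phases=[0.0, np.pi / 4])
```

The same constructor serves for the electrostatic and capacitance classes by swapping in their own component lists.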
S22, performing spectrum analysis on I_k(t) of each class of signal to construct a spectrum feature matrix F_k;
wherein F_k(i,j) represents the complex amplitude of the component of frequency f_i in the k-th class signal on time slice j, T represents the sampling time window, f_i represents the frequency of the i-th component, and j represents the time slice index;
S23, performing dimension reduction on the spectrum feature matrix F_k and mapping it into a potential feature matrix:
Z_k = F_k · V_k;
wherein Z_k represents the potential feature matrix of the k-th class signal, and V_k represents the projection matrix obtained by principal component analysis;
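A minimal sketch of S22–S23 under stated assumptions: the spectrum feature matrix is built from per-slice FFTs (rows as frequency bins, columns as time slices), and the projection matrix is taken from a singular value decomposition of the magnitude spectrum; the slice count and component count are illustrative choices, not values from the specification.

```python
import numpy as np

def spectral_feature_matrix(signal, n_slices):
    """F_k: rows are frequency bins, columns are time slices (length must divide evenly)."""
    sig = np.asarray(signal, dtype=float).reshape(n_slices, -1)
    return np.stack([np.fft.rfft(s) for s in sig], axis=1)

def pca_project(F, n_components):
    """Z_k = (|F_k| - mean) @ V_k, with V_k the leading principal axes from an SVD."""
    X = np.abs(F)
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

F = spectral_feature_matrix(np.sin(2 * np.pi * 5 * np.arange(64) / 64), n_slices=4)
Z = pca_project(F, n_components=2)
```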
S24, constructing a dynamic distribution model based on the potential feature matrix:
p(Z_k) = Π_{i=1}^{M_k} (1 / (√(2π)·σ_ki)) · exp(−(Z_ki − μ_ki)² / (2σ_ki²));
wherein p(Z_k) represents the probability distribution of the potential feature matrix, M_k represents the number of latent variables, Z_ki represents the value of the i-th latent variable, μ_ki represents the mean of the i-th latent variable, and σ_ki represents the standard deviation of the i-th latent variable;
S25, mapping the latent variables into the spatial distribution characteristics of the signal by using the dynamic distribution model to generate the spatial distribution model of the signal:
S_k(x,y) = Σ_{i=1}^{M_k} Z_ki · Φ_i(x,y);
wherein S_k(x,y) represents the spatial distribution of the k-th class signal at the spatial coordinates (x,y), and Φ_i(x,y) represents the i-th orthogonal basis function at the spatial coordinates (x,y);
S26, extracting the adaptability enhancement parameters according to the spatial distribution model of the signal:
θ_k = {μ_ki, σ_ki, M_k};
wherein θ_k denotes the enhancement parameter set of the k-th class signal.
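The latent model of S24–S25 can be sketched as a factorized Gaussian over the latent variables together with a basis expansion for the spatial map; the cosine-mode basis used below is an illustrative stand-in for the orthogonal basis Φ_i, not a choice dictated by the specification.

```python
import numpy as np

def gaussian_log_density(z, mu, sigma):
    """log p(Z_k) for independent Gaussian latents (product of univariate densities)."""
    z, mu, sigma = (np.asarray(a, dtype=float) for a in (z, mu, sigma))
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                        - (z - mu) ** 2 / (2 * sigma ** 2)))

def spatial_distribution(z, basis_maps):
    """S_k(x, y) = sum_i z_i * Phi_i(x, y), each Phi_i given as a 2-D array."""
    return sum(zi * phi for zi, phi in zip(z, basis_maps))

# Two illustrative orthogonal basis maps on a 4x4 grid (DC mode and one cosine mode)
x = np.arange(4)
phi0 = np.ones((4, 4))
phi1 = np.cos(np.pi * (2 * x[None, :] + 1) / 8) * np.ones((4, 1))
S = spatial_distribution([0.5, 2.0], [phi0, phi1])
```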
In this embodiment, the step S3 specifically includes:
S31, collecting the pressure change of the finger contact surface in real time by using a contact force sensor:
P(t) = F(t) / A;
wherein P(t) represents the contact pressure at time t, F(t) represents the contact force at time t, and A represents the contact area between the finger and the contact surface;
S32, monitoring the two-dimensional displacement of the finger on the contact surface in real time through a displacement sensing module:
Δd(t) = √[(x(t) − x_0)² + (y(t) − y_0)²];
wherein Δd(t) represents the displacement of the finger on the plane at time t, x(t) and y(t) represent the two-dimensional position coordinates of the finger at time t, and x_0 and y_0 represent the initial two-dimensional position coordinates of the finger;
S33, monitoring the change of the rotation angle of the finger on the contact surface through a rotation sensor:
Δθ(t) = θ(t) − θ_0;
wherein Δθ(t) represents the change in the rotation angle of the finger at time t, θ(t) represents the angle value at time t, and θ_0 represents the initial angle;
S34, generating a preliminary finger movement feature matrix by utilizing a spatial distribution model of signals and combining the changes of finger pressure, displacement and rotation angle:
M(t)=[P(t) Δd(t) Δθ(t)]·S(x,y);
wherein M (t) represents a finger motion feature matrix at time t and S (x, y) represents a spatial distribution of signals on spatial coordinates (x, y);
S35, performing space-time feature fusion on the finger motion feature matrix to generate the space-time distribution data of the finger motion:
S(t,x,y) = ∫_{t_0}^{t} M(τ) · G(x,y) dτ, t_0 ≤ t ≤ t_n;
wherein S(t,x,y) represents the space-time distribution data of the finger motion at time t and spatial coordinates (x,y), G(x,y) represents the spatial weighting function at the spatial coordinates (x,y), t_0 represents the start time of the motion, and t_n represents the end time of the motion.
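The motion quantities of S31–S34 reduce to simple arithmetic. The sketch below assumes the product in S34 scales the spatial distribution map by each motion channel, which is one plausible reading of the row-vector form M(t) = [P(t) Δd(t) Δθ(t)]·S(x,y); that reading, and the sample values, are assumptions.

```python
import numpy as np

def contact_pressure(force, area):
    """P(t) = F(t) / A."""
    return force / area

def planar_displacement(x, y, x0, y0):
    """Delta-d(t): Euclidean displacement from the initial contact point."""
    return float(np.hypot(x - x0, y - y0))

def motion_feature(pressure, displacement, rotation, S):
    """M(t): one weighted copy of the spatial map S per motion channel (assumed reading)."""
    state = np.array([pressure, displacement, rotation])
    return state[:, None, None] * S[None, :, :]

S_map = np.ones((2, 2))
M = motion_feature(contact_pressure(2.0, 0.5),
                   planar_displacement(3.0, 4.0, 0.0, 0.0),
                   0.1, S_map)
```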
In this embodiment, the step S4 specifically includes:
S41, acquiring space-time distribution data S (t, x, y) of finger movement, sampling the space-time distribution data according to time t and space coordinates (x, y), and representing the movement offset of the finger at each sampling point as deltax (t, x, y) and deltay (t, x, y);
S42, dividing the original fingerprint texture map into grid areas of fixed size, wherein the texture data of each grid area is represented by a texture feature matrix;
S43, calculating the texture deviation value by using the space-time distribution data of the finger motion:
ΔT(t,x,y) = [V(x,y) − V_0(x,y)] + (∂V/∂x)·Δx(t,x,y) + (∂V/∂y)·Δy(t,x,y);
wherein ΔT(t,x,y) represents the texture deviation value, V(x,y) represents the current texture feature matrix, V_0(x,y) represents the initial reference texture feature matrix, ∂V/∂x represents the gradient of the current texture feature matrix in the x direction, and ∂V/∂y represents the gradient of the current texture feature matrix in the y direction;
S44, performing dynamic weight adjustment on the texture deviation value ΔT(t,x,y) to generate a compensation weight matrix for locally optimizing the deviations at different positions, wherein the adjustment process is controlled by the texture gradient and the deviation magnitude:
W(x,y) = exp(−κ·|∇V(x,y)| − λ·|ΔT(t,x,y)|);
wherein W(x,y) represents the dynamic compensation weight at the spatial coordinates (x,y), κ represents the texture gradient adjustment coefficient, exp represents the exponential function, |∇V(x,y)| represents the gradient magnitude of the current texture feature matrix V(x,y), λ represents the deviation weight adjustment coefficient, and |ΔT(t,x,y)| represents the absolute value of the texture deviation;
S45, calculating the dynamically compensated signal distribution model based on the compensation weight matrix W(x,y) and the texture deviation value ΔT(t,x,y):
S_f(x,y) = S_0(x,y) + W(x,y)·ΔT(t,x,y);
wherein S_f(x,y) represents the dynamically compensated signal distribution model, and S_0(x,y) represents the initial signal distribution model.
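A sketch of the compensation of S43–S45 under a first-order reading: the texture deviation combines the direct difference with the motion-induced gradient shift, and the S45 update is element-wise. The exact form of the weight W(x,y) is not fully specified in the text, so a bounded exponential weight is assumed here.

```python
import numpy as np

def texture_deviation(V, V0, dx, dy):
    """Assumed first-order ΔT: (V - V0) plus the gradient shift from motion offsets."""
    gy, gx = np.gradient(np.asarray(V, dtype=float))  # gy: rows (y), gx: columns (x)
    return (V - V0) + gx * dx + gy * dy

def compensation_weight(V, dT, kappa=1.0, lam=1.0):
    """Assumed weight: damped by texture gradient magnitude and deviation size, in (0, 1]."""
    gy, gx = np.gradient(np.asarray(V, dtype=float))
    return np.exp(-kappa * np.hypot(gx, gy) - lam * np.abs(dT))

def compensate(S0, W, dT):
    """Step S45: S_f(x, y) = S_0(x, y) + W(x, y) * ΔT(t, x, y), element-wise."""
    return S0 + W * dT

V = V0 = np.zeros((3, 3))                      # flat texture: zero deviation expected
dT = texture_deviation(V, V0, dx=0.2, dy=0.1)
Sf = compensate(np.ones((3, 3)), compensation_weight(V, dT), dT)
```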
In this embodiment, the step S5 specifically includes:
S51, dynamically modeling the dynamically compensated signal distribution model and fingerprint texture features based on a nerve-fingerprint feature interaction mechanism to generate an interaction enhancement matrix, wherein the nerve-fingerprint feature interaction mechanism comprises:
performing point-by-point correlation analysis on the dynamically compensated signal distribution model and the fingerprint texture characteristics in space, introducing spatial weight by using a Gaussian kernel function, and enhancing the influence between adjacent areas;
mapping the dynamic compensated signal distribution model and the local change of the fingerprint texture characteristics to an interaction space, and coupling the multi-scale characteristics together through integral operation;
calculating the weighted sum of the dynamically compensated signal distribution model and the fingerprint texture features in a local neighborhood:
R(x,y) = ∫∫ S(u,v) · T(x−u, y−v) · exp(−[(x−u)² + (y−v)²] / (2σ²)) du dv;
wherein R(x,y) represents the interaction enhancement matrix at the spatial coordinates (x,y), S(u,v) represents the spatial distribution of the signal at the spatial coordinates (u,v), T(x−u,y−v) represents the distribution of the fingerprint texture feature values at the offset (x−u,y−v), exp represents the exponential function, and σ represents the scale parameter of the Gaussian kernel function;
S52, introducing a neural response synchronous amplification mechanism to enhance the signal contrast, dynamically coupling the intensity distribution of the neural signal with the interaction enhancement matrix, and generating an enhanced signal:
E(x,y) = R(x,y) · [1 + η · N(x,y) / max(N(x,y))];
wherein E(x,y) represents the enhanced signal at the spatial coordinates (x,y), η represents the neural response amplification coefficient, N(x,y) represents the neural signal intensity distribution at the spatial coordinates (x,y), and max(N(x,y)) represents the maximum value of the neural signal intensity distribution.
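A sketch of S51–S52 under stated assumptions: R(x,y) is taken as a Gaussian-kernel-weighted local sum of the signal-texture product, and E(x,y) scales R by the normalized neural intensity with amplification factor η. Both readings are plausible interpretations of the text, not the only ones.

```python
import numpy as np

def interaction_enhancement(S, T, sigma=1.0):
    """R(x, y): Gaussian-weighted sum of S(u, v)*T(u, v) over the map (assumed form)."""
    H, W = S.shape
    ys, xs = np.mgrid[0:H, 0:W]
    R = np.empty_like(S, dtype=float)
    for y in range(H):
        for x in range(W):
            w = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
            R[y, x] = np.sum(S * T * w)
    return R

def neural_amplify(R, N, eta=0.5):
    """E(x, y) = R(x, y) * (1 + eta * N(x, y) / max N): assumed amplification form."""
    return R * (1.0 + eta * np.asarray(N, dtype=float) / np.max(N))

R = interaction_enhancement(np.ones((3, 3)), np.ones((3, 3)), sigma=1.0)
N = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 1.0], [1.0, 1.0, 1.0]])
E = neural_amplify(R, N, eta=1.0)
```

For uniform maps the center pixel collects the most Gaussian mass, so R peaks there; the neural peak at the center then doubles that response.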
In this embodiment, the step S6 specifically includes:
S61, acquiring the electrostatic signals and the enhanced signals, and performing time-series alignment and normalization on them;
S62, constructing a multi-modal cooperative network, wherein the multi-modal cooperative network consists of two parts, an electrostatic signal feature extraction branch and an enhanced signal feature extraction branch; the electrostatic signal feature extraction branch extracts local intensity distribution features of the electrostatic signals through convolution operations, and the enhanced signal feature extraction branch extracts global features of the enhanced signals and generates a joint feature map through a feature fusion operation;
S63, inputting the joint feature map into a generative adversarial network, wherein the generative adversarial network comprises a generator and a discriminator; the generator generates optimized texture distribution characteristics from the joint feature map and generates optimized characteristic signals by learning the internal relation between the electrostatic signals and the enhanced signals;
S64, finally outputting the optimized characteristic signals.
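The two branches of S62 can be sketched with simple 1-D stand-ins: a small convolution for local electrostatic intensity structure, coarse global statistics for the enhanced signal, and concatenation as the feature-fusion step. The adversarial training loop of S63 is omitted, and the branch design, kernel and statistics are assumptions for illustration.

```python
import numpy as np

def normalize(sig):
    """Normalization step of S61 (min-max scaling to [0, 1])."""
    sig = np.asarray(sig, dtype=float)
    return (sig - sig.min()) / (sig.max() - sig.min() + 1e-12)

def local_branch(sig, kernel=(0.25, 0.5, 0.25)):
    """Electrostatic branch: a small convolution captures local intensity structure."""
    return np.convolve(sig, np.asarray(kernel), mode="same")

def global_branch(sig):
    """Enhanced-signal branch: coarse global descriptors (mean, spread, range)."""
    return np.array([sig.mean(), sig.std(), sig.max() - sig.min()])

def joint_feature_map(electrostatic, enhanced):
    """Feature fusion: concatenate local and global features into one joint vector."""
    e, n = normalize(electrostatic), normalize(enhanced)
    return np.concatenate([local_branch(e), global_branch(n)])

J = joint_feature_map(np.arange(8.0), np.arange(8.0) ** 2)
```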
In this embodiment, the step S7 specifically includes:
S71, decoding the optimized characteristic signals by using a space-time characteristic decoding technology, wherein the decoding process comprises time decoding and space decoding, the time decoding is used for extracting dynamic characteristics of the optimized characteristic signals on a time sequence to generate time characteristic distribution, and the space decoding is used for extracting distribution characteristics of the optimized characteristic signals at different space positions to generate space characteristic mapping;
S72, carrying out joint processing on the time characteristic distribution and the space characteristic mapping to form space-time characteristic mapping, wherein the joint processing comprises characteristic alignment and weight balance;
S73, reconstructing the decoded space-time feature map to generate a final fingerprint image, wherein the reconstruction process comprises noise suppression, texture enhancement and boundary correction.
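A sketch of the decoding split in S71–S72, assuming the optimized characteristic signal arrives as a (frames, H, W) stack: time decoding summarizes each frame, spatial decoding averages across frames, and the joint step reweights the spatial map by the temporal profile. The stack layout and the weighting rule are assumptions.

```python
import numpy as np

def temporal_decode(feat):
    """Time decoding: one dynamic value per frame of the (frames, H, W) stack."""
    return feat.reshape(feat.shape[0], -1).mean(axis=1)

def spatial_decode(feat):
    """Spatial decoding: per-pixel distribution characteristics across all frames."""
    return feat.mean(axis=0)

def spatiotemporal_map(feat):
    """Joint processing (assumed form): spatial map scaled by the normalized temporal mean."""
    t_profile = temporal_decode(feat)
    weight = t_profile.mean() / (np.abs(t_profile).max() + 1e-12)
    return weight * spatial_decode(feat)

feat = np.ones((4, 2, 2))
fused = spatiotemporal_map(feat)
```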
Example 1:
To verify the feasibility of the invention in practice, the invention was applied to an identity authentication system. The test scenario was set in a high-traffic airport security check channel, simulating different finger states and environmental conditions to evaluate the acquisition accuracy, stability and adaptability of the system.
During the test, 200 volunteers were selected as test subjects, and the age distribution of the volunteers was 20 to 60 years, and the finger states included wet hands, dry hands, aged skin and normal states. The test equipment adopts a customized acquisition terminal supporting the method of the invention, and the terminal is provided with a multi-source bioelectric signal acquisition module, a high-order variation self-encoder, a dynamic compensation mechanism, a nerve-fingerprint characteristic interaction enhancement module and a multi-mode collaborative optimization network.
In a specific test process, volunteers need to collect fingerprints in different finger states as required, including direct collection after wetting hands, collection after wiping with paper towels, collection after contacting alcohol and collection in a natural state. The definition, texture integrity and system response time of the fingerprint image are recorded in real time in the acquisition process, and meanwhile the usability and accuracy of the image are evaluated through subsequent identity comparison.
In order to fully verify the superiority of the invention, the test is also compared with the existing optical fingerprint acquisition technology and single electrostatic signal acquisition technology, and the acquisition quality and robustness under the same condition are evaluated.
Under wet hand conditions, the image definition of the method of the invention reaches 97%, far higher than the 72% of the optical technique and the 65% of the electrostatic technique. The invention effectively counteracts the interference of wet hands with the electrostatic signals through the multi-source signal fusion and dynamic compensation mechanisms, and optimizes the contrast of texture features through the neural-fingerprint feature interaction enhancement technique. Under dry hand conditions, the texture integrity index of the invention is 96%, whereas those of the conventional optical and electrostatic techniques are 78% and 68%, respectively. The results show that the adaptability enhancement parameters of the invention significantly improve signal quality when the skin is dry. At low contact pressure, the identity comparison accuracy of the invention remains at 98%, while those of the conventional methods are 70% and 64%, respectively. In addition, the average response time of the invention is 0.8 seconds, a reduction of about 35% compared with the traditional methods.
Through data verification, the fingerprint image acquisition quality under a complex scene is obviously superior to that of the prior art, the problem of acquisition quality degradation under special conditions such as wet hands, dry hands, aged skin and the like can be effectively solved, and the fingerprint image acquisition method has strong instantaneity and stability.
TABLE 1 Comparative analysis of fingerprint acquisition effects in complex scenarios

Scenario               Metric                         Invention   Optical   Electrostatic
Wet hand               Image definition               97%         72%       65%
Dry hand               Texture integrity              96%         78%       68%
Aged skin              Image definition               95%         75%       63%
Aged skin              Texture integrity              94%         73%       61%
Low contact pressure   Identity comparison accuracy   98%         70%       64%
Normal state           Image definition               98%         -         -
Normal state           Texture integrity              97%         -         -
Normal state           Identity comparison accuracy   99%         -         -
Normal state           Response time                  0.7 s       1.1 s     1.3 s
As can be seen from Table 1 above, the fingerprint image acquisition performance of the invention in complex scenarios is significantly better than that of the conventional optical and electrostatic techniques. Under wet hand conditions, the image definition of the invention reaches 97%, significantly higher than the 72% of the optical technique and the 65% of the electrostatic technique. This is attributable to the multi-source bioelectric signal fusion technique combined with the dynamic compensation and neural-fingerprint feature interaction enhancement mechanisms, which effectively suppress the interference with the electrostatic signals under wet hand conditions and enhance the definition and contrast of the fingerprint textures.
In a dry hand scenario, the texture integrity of the present invention is 96%, which is also significantly better than 78% of optical techniques and 68% of electrostatic techniques. This is because conventional techniques generally cannot capture enough signal detail in the dry state of the skin, while the present invention ensures signal quality and texture stability in dry skin conditions through dynamic adjustment of the adaptation-enhancing parameters and optimization of the multimodal synergistic network.
For the aged skin test, the image clarity and texture integrity of the present invention were 95% and 94%, respectively, while the optical techniques were only 75% and 73%, and the electrostatic techniques were lower, only 63% and 61%. The result shows that under the condition of skin conductivity reduction or texture blurring treatment, the texture detail can be effectively captured through depth modeling and higher-order distribution optimization of the multi-source signals, and the acquired fingerprint image is ensured to be clear and complete.
In the low contact pressure scenario, the identity comparison accuracy of the invention is as high as 98%, while those of the optical and electrostatic techniques are 70% and 64%, respectively. The conventional methods are highly sensitive to contact pressure, which easily results in insufficient signal strength and failure to generate high-quality images, whereas the invention still ensures signal integrity and contrast at low contact pressure by means of the dynamic compensation mechanism and the texture deviation correction function.
In the normal state test, the method is superior to the traditional methods in all indexes: the image definition and texture integrity reach 98% and 97% respectively, the identity comparison accuracy is as high as 99%, and the response time is only 0.7 seconds, significantly faster than the 1.1 seconds of the optical technique and the 1.3 seconds of the electrostatic technique. This fully demonstrates the real-time performance and efficiency of the invention.
In a comprehensive view, the invention has excellent adaptability and robustness under wet hands, dry hands, aged skin, low contact pressure and normal state, effectively solves the problem of acquisition quality reduction of the traditional technology in complex scenes, has higher response speed and recognition efficiency, and provides innovation direction and practical value for the development of fingerprint image acquisition technology.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art, who is within the scope of the present invention, should make equivalent substitutions or modifications according to the technical scheme of the present invention and the inventive concept thereof, and should be covered by the scope of the present invention.

Claims (7)

1.一种基于生物电信号增强的指纹图像采集方法,其特征在于,包括如下步骤:1. A fingerprint image acquisition method based on bioelectric signal enhancement, characterized in that it includes the following steps: S1、采集指纹接触区域的多源生物电信号,所述多源生物电信号包括神经信号、静电信号和电容信号,构建原始信号数据集;S1. Collect multi-source bioelectric signals of the fingerprint contact area, wherein the multi-source bioelectric signals include neural signals, electrostatic signals and capacitive signals, and construct an original signal data set; S2、基于高阶变分自编码器对原始信号数据集进行动态建模,生成信号的空间分布模型和适应性增强参数;S2, dynamically modeling the original signal data set based on high-order variational autoencoders to generate the spatial distribution model and adaptive enhancement parameters of the signal; S3、利用接触力传感器和位移感知模块实时监测手指的滑动、旋转和接触压力变化,并结合信号的空间分布模型生成手指运动的时空分布数据;S3, using the contact force sensor and displacement sensing module to monitor the sliding, rotation and contact pressure changes of the finger in real time, and combining the spatial distribution model of the signal to generate the spatiotemporal distribution data of the finger movement; S4、根据手指运动的时空分布数据实时补偿手指运动引起的纹理偏差,生成动态补偿后的信号分布模型;S4, compensating the texture deviation caused by the finger movement in real time according to the spatiotemporal distribution data of the finger movement, and generating a signal distribution model after dynamic compensation; S5、通过神经-指纹特征交互增强机制,将动态补偿后的信号分布模型与指纹纹理特征进行动态对应建模,并利用神经响应同步放大机制增强信号对比度,生成增强信号;S5. Through the neural-fingerprint feature interactive enhancement mechanism, the signal distribution model after dynamic compensation is dynamically modeled with the fingerprint texture feature, and the neural response synchronous amplification mechanism is used to enhance the signal contrast and generate an enhanced signal; S6、构建静电信号与增强信号联合优化的多模态协同网络,通过生成对抗网络优化纹理分布特性,生成优化后的特征信号;S6. 
Construct a multimodal collaborative network for joint optimization of electrostatic signals and enhanced signals, optimize texture distribution characteristics through generative adversarial networks, and generate optimized feature signals; S7、采用时空特征解码技术对优化后的特征信号进行解码,生成最终指纹图像。S7. Use spatiotemporal feature decoding technology to decode the optimized feature signal to generate a final fingerprint image. 2.根据权利要求1所述的一种基于生物电信号增强的指纹图像采集方法,其特征在于,所述S2具体包括:2. According to the fingerprint image acquisition method based on bioelectric signal enhancement according to claim 1, it is characterized in that S2 specifically comprises: S21、对原始信号数据集中的多源生物电信号进行分解,将神经信号、静电信号和电容信号的时间序列表示为:S21. Decompose the multi-source bioelectric signals in the original signal data set, and express the time series of neural signals, electrostatic signals and capacitive signals as: 其中,Ik(t)表示第k类信号在时间t时的信号强度,Nk表示第k类信号的频率分量数量,akn表示第k类信号第n个分量的幅值,fkn表示第k类信号第n个分量的频率,表示第k类信号第n个分量的初相位;Where I k (t) represents the signal strength of the k-th signal at time t, N k represents the number of frequency components of the k-th signal, a kn represents the amplitude of the n-th component of the k-th signal, and f kn represents the frequency of the n-th component of the k-th signal. 
Represents the initial phase of the nth component of the kth type signal; S22、对每类信号的Ik(t)进行频谱分析,构建频谱特征矩阵FkS22, perform spectrum analysis on I k (t) of each type of signal and construct a spectrum feature matrix F k : 其中,Fk(i,j)表示第k类信号中频率为fi的分量在时间片j上的复数幅值,T表示采样时间窗口,fi表示第i个分量的频率,j表示时间片索引;Where F k (i, j) represents the complex amplitude of the component with frequency fi in the k-th signal at time slice j, T represents the sampling time window, fi represents the frequency of the i-th component, and j represents the time slice index; S23、对频谱特征矩阵Fk进行降维处理,并映射为潜在特征矩阵:S23, perform dimension reduction processing on the spectrum feature matrix Fk and map it into a potential feature matrix: 其中,Zk表示第k类信号的潜在特征矩阵,Vk表示主成分分析得到的投影矩阵;Among them, Z k represents the potential feature matrix of the k-th type of signal, and V k represents the projection matrix obtained by principal component analysis; S24、基于潜在特征矩阵构建动态分布模型:S24. Constructing a dynamic distribution model based on the potential feature matrix: 其中,p(Zk)表示潜在特征矩阵的概率分布,Mk表示潜变量的数量,Zki表示第i个潜变量的值,μki表示第i个潜变量的均值,σki表示第i个潜变量的标准差;Where p(Z k ) represents the probability distribution of the latent feature matrix, M k represents the number of latent variables, Z ki represents the value of the i-th latent variable, μ ki represents the mean of the i-th latent variable, and σ ki represents the standard deviation of the i-th latent variable; S25、利用动态分布模型,将潜变量映射为信号的空间分布特性,生成信号的空间分布模型:S25. Using the dynamic distribution model, the latent variables are mapped to the spatial distribution characteristics of the signal to generate the spatial distribution model of the signal: 其中,Sk(x,y)表示第k类信号在空间坐标(x,y)上的信号的空间分布,Φi(x,y)表示在空间坐标(x,y)上第i个正交基函数;Wherein, Sk (x,y) represents the spatial distribution of the k-th signal on the spatial coordinate (x,y), Φi (x,y) represents the i-th orthogonal basis function on the spatial coordinate (x,y); S26、根据信号的空间分布模型提取适应性增强参数:S26. 
Extracting adaptive enhancement parameters according to the spatial distribution model of the signal: θk={μkiki,Mk};θ k ={μ kiki ,M k }; 其中,θk表示第k类信号的增强参数集合。Wherein, θk represents the set of enhancement parameters for the k-th type of signal. 3.根据权利要求1所述的一种基于生物电信号增强的指纹图像采集方法,其特征在于,所述S3具体包括:3. According to the fingerprint image acquisition method based on bioelectric signal enhancement in claim 1, it is characterized in that said S3 specifically comprises: S31、利用接触力传感器实时采集手指接触表面的压力变化:S31. Using the contact force sensor to collect the pressure change of the finger contact surface in real time: 其中,P(t)表示时间t时的接触压力,F(t)表示时间t时的接触力,A表示手指与接触表面的接触面积;Wherein, P(t) represents the contact pressure at time t, F(t) represents the contact force at time t, and A represents the contact area between the finger and the contact surface; S32、通过位移感知模块实时监测手指在接触表面上的二维位移:S32, real-time monitoring of the two-dimensional displacement of the finger on the contact surface through the displacement sensing module: 其中,Δd(t)表示时间t时手指在平面上的位移,x(t)和y(t)表示时间t时的手指二维位置坐标,x0和y0表示初始手指二维位置坐标;Wherein, Δd(t) represents the displacement of the finger on the plane at time t, x(t) and y(t) represent the two-dimensional position coordinates of the finger at time t, and x0 and y0 represent the initial two-dimensional position coordinates of the finger; S33、通过旋转传感器监测手指在接触表面上的旋转角度变化:S33, monitoring the rotation angle change of the finger on the contact surface through the rotation sensor: Δθ(t)=θ(t)-θ0Δθ(t)=θ(t)-θ 0 ; 其中,Δθ(t)表示时间t时手指的旋转角度变化,θ(t)表示时间t时的角度值,θ0表示初始角度;Where Δθ(t) represents the rotation angle change of the finger at time t, θ(t) represents the angle value at time t, and θ 0 represents the initial angle; S34、利用信号的空间分布模型,结合手指压力、位移和旋转角度的变化,生成初步的手指运动特征矩阵:S34. 
Generate a preliminary finger motion feature matrix by using the spatial distribution model of the signal and combining the changes in finger pressure, displacement and rotation angle: M(t)=[P(t) Δd(t) Δθ(t)]·S(x,y);M(t)=[P(t) Δd(t) Δθ(t)]·S(x,y); 其中,M(t)表示时间t时的手指运动特征矩阵,S(x,y)表示在空间坐标(x,y)上的信号的空间分布;Where M(t) represents the finger motion feature matrix at time t, and S(x,y) represents the spatial distribution of the signal at the spatial coordinate (x,y); S35、对手指运动特征矩阵进行时空特征融合,生成手指运动的时空分布数据:S35, performing spatiotemporal feature fusion on the finger motion feature matrix to generate spatiotemporal distribution data of the finger motion: 其中,S(t,x,y)表示手指运动在时间t时和空间坐标(x,y)上的时空分布数据,G(x,y)表示空间坐标(x,y)上的空间权重函数,t0表示运动的起始时间,tn表示运动的结束时间。Among them, S(t,x,y) represents the spatiotemporal distribution data of the finger movement at time t and the spatial coordinates (x,y), G(x,y) represents the spatial weight function on the spatial coordinates (x,y), t0 represents the start time of the movement, and tn represents the end time of the movement. 4.根据权利要求1所述的一种基于生物电信号增强的指纹图像采集方法,其特征在于,所述S4具体包括:4. 
The fingerprint image acquisition method based on bioelectric signal enhancement according to claim 1, characterized in that S4 specifically comprises: S41、获取手指运动的时空分布数据S(t,x,y),按时间t和空间坐标(x,y)对时空分布数据进行采样,将手指在每个采样点的运动偏移量表示为Δx(t,x,y)和Δy(t,x,y);S41, obtaining the spatiotemporal distribution data S(t,x,y) of the finger movement, sampling the spatiotemporal distribution data according to time t and spatial coordinates (x,y), and expressing the movement offset of the finger at each sampling point as Δx(t,x,y) and Δy(t,x,y); S42、将原始指纹纹理映射分块为固定大小的网格区域,每个网格区域的纹理数据用纹理特征矩阵表示;S42, dividing the original fingerprint texture mapping into grid areas of fixed size, and the texture data of each grid area is represented by a texture feature matrix; S43、利用手指运动的时空分布数据计算纹理偏差值:S43, using the spatiotemporal distribution data of the finger movement to calculate the texture deviation value: 其中,ΔT(t,x,y)表示纹理偏差值,V(x,y)表示当前纹理特征矩阵,V0(x,y)表示初始参考纹理特征矩阵,表示当前纹理特征矩阵在x方向的梯度,表示当前纹理特征矩阵在y方向的梯度;Where ΔT(t,x,y) represents the texture deviation value, V(x,y) represents the current texture feature matrix, and V 0 (x,y) represents the initial reference texture feature matrix. Represents the gradient of the current texture feature matrix in the x direction, Represents the gradient of the current texture feature matrix in the y direction; S44、对纹理偏差值ΔT(t,x,y)进行动态权重调节,生成补偿权重矩阵,用于对不同位置的偏差进行局部优化,调节过程由纹理梯度和偏差大小控制:S44, dynamically adjust the weight of the texture deviation value ΔT(t,x,y) to generate a compensation weight matrix for local optimization of the deviations at different positions. The adjustment process is controlled by the texture gradient and the deviation size: 其中,W(x,y)表示空间坐标(x,y)上的动态补偿权重,κ表示纹理梯度调节系数,exp表示指数函数,表示当前纹理特征矩阵V(x,y)的梯度大小,λ表示偏差权重调节系数,|ΔT(t,x,y)|表示纹理偏差绝对值;Among them, W(x,y) represents the dynamic compensation weight on the spatial coordinate (x,y), κ represents the texture gradient adjustment coefficient, and exp represents the exponential function. 
represents the gradient size of the current texture feature matrix V(x,y), λ represents the deviation weight adjustment coefficient, and |ΔT(t,x,y)| represents the absolute value of the texture deviation; S45、基于补偿权重矩阵W(x,y)和纹理偏差值ΔT(t,x,y),计算动态补偿后的信号分布模型:S45. Based on the compensation weight matrix W(x, y) and the texture deviation value ΔT(t, x, y), the signal distribution model after dynamic compensation is calculated: Sf(x,y)=S0(x,y)+W(x,y)·ΔT(t,x,y);S f (x, y) = S 0 (x, y) + W (x, y)·ΔT (t, x, y); 其中,Sf(x,y)表示动态补偿后的信号分布模型,S0(x,y)表示初始信号分布模型。Wherein, S f (x, y) represents the signal distribution model after dynamic compensation, and S 0 (x, y) represents the initial signal distribution model. 5.根据权利要求1所述的一种基于生物电信号增强的指纹图像采集方法,其特征在于,所述S5具体包括:5. The fingerprint image acquisition method based on bioelectric signal enhancement according to claim 1, characterized in that S5 specifically comprises: S51、基于神经-指纹特征交互机制,对动态补偿后的信号分布模型和指纹纹理特征进行动态建模,生成交互增强矩阵,所述神经-指纹特征交互机制包括:S51. Based on the neural-fingerprint feature interaction mechanism, dynamically model the signal distribution model and fingerprint texture features after dynamic compensation to generate an interactive enhancement matrix. The neural-fingerprint feature interaction mechanism includes: 将动态补偿后的信号分布模型和指纹纹理特征在空间上进行逐点相关性分析,利用高斯核函数引入空间权重,增强相邻区域之间的影响;The signal distribution model after dynamic compensation and fingerprint texture features are subjected to point-by-point correlation analysis in space, and the Gaussian kernel function is used to introduce spatial weights to enhance the influence between adjacent areas. 
将动态补偿后的信号分布模型和指纹纹理特征的局部变化映射到交互空间,通过积分操作将多尺度特征耦合在一起;The signal distribution model after dynamic compensation and the local changes of fingerprint texture features are mapped to the interaction space, and the multi-scale features are coupled together through the integration operation; 计算动态补偿后的信号分布模型和指纹纹理特征在局部邻域内的加权和:Calculate the weighted sum of the signal distribution model and fingerprint texture features after dynamic compensation in the local neighborhood: 其中,R(x,y)表示在空间坐标(x,y)上的交互增强矩阵,S(u,v)表示在空间坐标(u,v)上的信号的空间分布,T(x-u,y-v)表示指纹纹理特征值在偏移量(x-u,y-v)处的分布,exp表示指数函数,σ表示高斯核函数的尺度参数;Where R(x,y) represents the interaction enhancement matrix at the spatial coordinates (x,y), S(u,v) represents the spatial distribution of the signal at the spatial coordinates (u,v), T(x-u,y-v) represents the distribution of the fingerprint texture feature value at the offset (x-u,y-v), exp represents the exponential function, and σ represents the scale parameter of the Gaussian kernel function; S52、引入神经响应同步放大机制增强信号对比度,将神经信号的强度分布与交互增强矩阵动态耦合,生成增强信号:S52. Introduce a neural response synchronous amplification mechanism to enhance signal contrast, dynamically couple the intensity distribution of neural signals with the interactive enhancement matrix, and generate an enhanced signal: 其中,E(x,y)表示在空间坐标(x,y)上的增强信号,η表示神经响应放大系数,N(x,y)表示在空间坐标(x,y)上的神经信号强度分布,max(N(x,y))表示神经信号强度分布的最大值。Among them, E(x,y) represents the enhanced signal at the spatial coordinates (x,y), η represents the neural response amplification factor, N(x,y) represents the neural signal intensity distribution at the spatial coordinates (x,y), and max(N(x,y)) represents the maximum value of the neural signal intensity distribution. 6.根据权利要求1所述的一种基于生物电信号增强的指纹图像采集方法,其特征在于,所述S6具体包括:6. 
The fingerprint image acquisition method based on bioelectric signal enhancement according to claim 1, characterized in that S6 specifically comprises:

S61. Acquire the electrostatic signal and the enhanced signal, and perform temporal alignment and normalization on both signals;

S62. Construct a multimodal collaborative network consisting of two branches: an electrostatic-signal feature extraction branch and an enhanced-signal feature extraction branch; the electrostatic-signal branch extracts local intensity distribution features of the electrostatic signal through convolution operations, the enhanced-signal branch extracts global features of the enhanced signal, and a joint feature map is generated through a feature fusion operation;

S63. Input the joint feature map into a generative adversarial network comprising a generator and a discriminator; the generator produces optimized texture distribution characteristics from the joint feature map and, by learning the intrinsic relationship between the electrostatic signal and the enhanced signal, generates an optimized feature signal; the discriminator evaluates the similarity between the texture distribution characteristics output by the generator and the actual texture features, guiding the optimization of the generator;

S64. Finally, output the optimized feature signal.

7.
The fingerprint image acquisition method based on bioelectric signal enhancement according to claim 1, characterized in that step S7 specifically comprises:

S71. Decode the optimized feature signal using spatiotemporal feature decoding, the decoding comprising temporal decoding and spatial decoding; the temporal decoding extracts the dynamic characteristics of the optimized feature signal along the time series to generate a temporal characteristic distribution; the spatial decoding extracts the distribution characteristics of the optimized feature signal at different spatial positions to generate a spatial characteristic map;

S72. Jointly process the temporal characteristic distribution and the spatial characteristic map to form a spatiotemporal feature map, the joint processing comprising feature alignment and weight balancing;

S73. Reconstruct the decoded spatiotemporal feature map to generate the final fingerprint image, the reconstruction comprising noise suppression, texture enhancement, and boundary correction.
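The dynamic compensation of step S45 reduces to an element-wise update of the initial signal model. A minimal NumPy sketch, assuming the compensation weight matrix W(x,y) has already been computed (its closed form is only partially reproduced in the claim text) and treating all quantities as 2-D arrays:

```python
import numpy as np

def dynamic_compensation(S0, W, dT):
    """S45: add the weighted texture deviation W(x,y)*dT(t,x,y)
    to the initial signal distribution model S0(x,y)."""
    return S0 + W * dT

# Toy 2x2 example with a fixed weight matrix
S0 = np.array([[1.0, 2.0], [3.0, 4.0]])   # initial signal model
W = np.array([[0.5, 0.5], [0.5, 0.5]])    # compensation weights (given)
dT = np.array([[2.0, -2.0], [0.0, 4.0]])  # texture deviation at time t
Sf = dynamic_compensation(S0, W, dT)      # compensated model Sf(x,y)
```

The update is purely local: each pixel of the deviation is scaled by its own weight before being added, so no neighbourhood information mixes at this stage.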
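Steps S51 and S52 can be sketched discretely. This is a sketch under assumptions: the double integral is replaced by a double sum over offsets that fall inside the texture array, and the S52 coupling is taken as the multiplicative form E = R·(1 + η·N/max(N)), which matches the listed variables but is an assumed reading of the claim:

```python
import numpy as np

def interaction_enhancement(S, T, sigma=1.0):
    """S51 (discrete sketch): R(x,y) = sum over (u,v) of
    S(u,v) * T(x-u, y-v) * exp(-((x-u)^2 + (y-v)^2) / (2*sigma^2)).
    Offsets (x-u, y-v) outside T's index range are skipped."""
    H, Wd = S.shape
    R = np.zeros_like(S, dtype=float)
    for x in range(H):
        for y in range(Wd):
            for u in range(H):
                for v in range(Wd):
                    dx, dy = x - u, y - v
                    if 0 <= dx < T.shape[0] and 0 <= dy < T.shape[1]:
                        w = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
                        R[x, y] += S[u, v] * T[dx, dy] * w
    return R

def amplify(R, N, eta=0.5):
    """S52 (assumed form): couple the normalised neural intensity
    N(x,y)/max(N) into R(x,y) with amplification coefficient eta."""
    return R * (1.0 + eta * N / N.max())

# Toy example: texture responds only at offset (0,0), so R = T[0,0] * S
S = np.array([[1.0, 2.0], [3.0, 4.0]])
T = np.array([[2.0, 0.0], [0.0, 0.0]])
R = interaction_enhancement(S, T)
N = np.array([[0.0, 0.0], [0.0, 1.0]])  # neural intensity peaks at (1,1)
E = amplify(R, N, eta=0.5)
```

With the single-offset texture above, only the zero-offset term survives, so R equals 2·S, and the amplification boosts E only where the neural intensity is non-zero.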
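Step S61's temporal alignment and normalization can be sketched with a cross-correlation lag estimate. Alignment by maximum cross-correlation and min-max scaling are assumed choices here, since the claim does not fix a method; both signals are assumed non-constant 1-D arrays of equal length:

```python
import numpy as np

def align_and_normalise(electro, enhanced):
    """S61 (sketch): shift the enhanced signal by the lag that
    maximises its cross-correlation with the electrostatic signal,
    then min-max normalise both to [0, 1]."""
    n = len(electro)
    corr = np.correlate(enhanced - enhanced.mean(),
                        electro - electro.mean(), mode="full")
    lag = corr.argmax() - (n - 1)   # positive lag: enhanced lags electro
    aligned = np.roll(enhanced, -lag)  # circular shift (edge samples wrap)

    def minmax(s):
        return (s - s.min()) / (s.max() - s.min())

    return minmax(electro), minmax(aligned)

# Toy spike signals: the enhanced spike trails the electrostatic one by 1
electro = np.array([0.0, 1.0, 0.0, 0.0])
enhanced = np.array([0.0, 0.0, 1.0, 0.0])
e_norm, a_norm = align_and_normalise(electro, enhanced)
```

After alignment the two spikes coincide, which is the precondition for the branch-wise feature extraction and fusion in S62.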
CN202510135726.4A 2025-02-07 2025-02-07 A fingerprint image acquisition method based on bioelectric signal enhancement Active CN120071407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510135726.4A CN120071407B (en) 2025-02-07 2025-02-07 A fingerprint image acquisition method based on bioelectric signal enhancement

Publications (2)

Publication Number Publication Date
CN120071407A true CN120071407A (en) 2025-05-30
CN120071407B CN120071407B (en) 2025-08-29

Family

ID=95788960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510135726.4A Active CN120071407B (en) 2025-02-07 2025-02-07 A fingerprint image acquisition method based on bioelectric signal enhancement

Country Status (1)

Country Link
CN (1) CN120071407B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10123331A1 (en) * 2001-05-14 2002-11-28 Infineon Technologies Ag Method for recognizing falsification during finger printing, involves determining the ratio of finger rills to finger lines in finger print images
WO2006026965A1 (en) * 2004-09-10 2006-03-16 Frank Bechtold Method and system for optimizing recognition or recognition reliability during identification or authentication of test objects
CN103324917A (en) * 2013-06-24 2013-09-25 中国科学技术大学 Handwriting chirography inputting device including finger information
CN106960191A (en) * 2017-03-23 2017-07-18 深圳汇通智能化科技有限公司 A kind of fingerprint recognition system
CN113204308A (en) * 2020-01-31 2021-08-03 华为技术有限公司 Touch method based on distorted fingerprints and electronic equipment
CN115188084A (en) * 2022-08-03 2022-10-14 成都理工大学 Multi-mode identity recognition system and method for non-contact voiceprint and palm print palm vein
CN116092134A (en) * 2023-02-22 2023-05-09 吉林化工学院 A Fingerprint Liveness Detection Method Based on Deep Learning and Feature Fusion
WO2024020743A1 (en) * 2022-07-25 2024-02-01 苏州中科天启遥感科技有限公司 Master-slave cluster task scheduling method for data production, and application thereof
CN117918950A (en) * 2023-11-22 2024-04-26 国科温州研究院(温州生物材料与工程研究所) Mechanical arm device for comprehensively diagnosing breast cancer
CN118609174A (en) * 2024-05-24 2024-09-06 浙江大学 A method for identifying and tracking low-power Bluetooth devices based on physical layer fingerprint
CN119296143A (en) * 2024-12-09 2025-01-10 山东承势电子科技有限公司 Fingerprint feature recognition and analysis method based on directional field guidance and spatial attention technology

Also Published As

Publication number Publication date
CN120071407B (en) 2025-08-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant