CN114913555B - Fingerprint feature point acquisition method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN114913555B CN114913555B CN202210596333.XA CN202210596333A CN114913555B CN 114913555 B CN114913555 B CN 114913555B CN 202210596333 A CN202210596333 A CN 202210596333A CN 114913555 B CN114913555 B CN 114913555B
- Authority
- CN
- China
- Prior art keywords
- point
- fingerprint
- pixel
- points
- ridge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
- G06V40/1359—Extracting features related to ridge properties; Determining the fingerprint type, e.g. whorl or loop
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/34—Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The embodiment of the application provides a fingerprint feature point acquisition method and device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring an initial fingerprint foreground of an initial fingerprint image, and compensating each pixel value of the initial fingerprint foreground according to the average pixel value of a preset neighborhood to obtain an enhanced fingerprint foreground; dividing the enhanced fingerprint foreground into a plurality of image blocks, and obtaining the gradient direction of each image block; performing smoothing filter processing on each image block in the gradient direction of each image block and sharpening filter processing on each image block in the target direction, to respectively obtain a plurality of ridge-valley contrast enhancement maps; determining ridge line tracking points according to the traversed pixel points, and acquiring key feature points of each ridge-valley contrast enhancement map according to the gradient direction of each image block and each ridge line tracking point; and determining fingerprint feature points according to the key feature points of each ridge-valley contrast enhancement map. In this way, ridge lines are tracked directly on the ridge-valley contrast enhancement maps and fingerprint feature points are detected on the ridge lines, which consumes little time and yields high accuracy.
Description
Technical Field
The present application relates to the field of fingerprint identification technologies, and in particular, to a method and apparatus for acquiring fingerprint feature points, an electronic device, and a storage medium.
Background
At present, because fingerprint features are unique, their security is very high, and fingerprint identification technology is widely applied in people's life and work. Fingerprint feature extraction is a key technology for realizing fingerprint identification. The main fingerprint features at present are fingerprint endpoints and fingerprint fork points; for large fingerprint images, extracting these features is time-consuming, and the speed and accuracy of fingerprint feature point extraction are relatively low.
Disclosure of Invention
In order to solve the technical problems, the embodiment of the application provides a fingerprint feature point acquisition method, a fingerprint feature point acquisition device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present application provides a method for acquiring a fingerprint feature point, where the method includes:
Acquiring an initial fingerprint foreground of an initial fingerprint image, and compensating each pixel value of the initial fingerprint foreground according to an average pixel value of a preset neighborhood to obtain an enhanced fingerprint foreground;
Dividing the enhanced fingerprint foreground into a plurality of image blocks, and obtaining the gradient direction of each image block;
Performing smoothing filter processing on each image block in the gradient direction of each image block and sharpening filter processing on each image block in the target direction to respectively obtain a plurality of ridge-valley contrast enhancement maps, wherein the target direction is the direction perpendicular to the gradient direction of each image block;
Traversing pixel points of each ridge-valley contrast enhancement map, determining ridge line tracking points according to the traversed pixel points, and acquiring key feature points of each ridge-valley contrast enhancement map according to the gradient direction of each image block and each ridge line tracking point;
and determining fingerprint feature points of the initial fingerprint image according to the key feature points of each ridge-valley contrast enhancement map.
In one embodiment, the step of determining the ridge tracking point according to the traversed pixel point includes:
taking the traversed pixel points as center points, and acquiring a pixel point set on the normal line of each corresponding pixel point;
judging whether a target pixel point exists in the pixel point set, wherein the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent previous pixel point of the target pixel point, and the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent next pixel point of the target pixel point;
If one target pixel point exists, determining the one target pixel point as the ridge line tracking point;
and if at least two target pixel points exist, determining the target pixel point closest to the center point from the at least two target pixel points as the ridge line tracking point.
In one embodiment, the key feature points include: a ridgeline end point and a ridgeline fork point;
the step of obtaining key feature points of each ridge-valley contrast enhancement map according to the gradient direction of each image block and each ridge line tracking point comprises the following steps:
Starting from each ridge tracking point according to the gradient direction of each image block, determining other tracking points from the ridge with a preset step length until the ridge end point is tracked;
And determining a pixel point that is marked twice as a tracking point to be the ridge fork point.
In an embodiment, the fingerprint feature points include fingerprint end points and fingerprint fork points, and the step of determining the fingerprint feature points of the initial fingerprint image according to the key feature points of each ridge-valley contrast enhancement map includes:
taking the ridge line end points of the image blocks as fingerprint end points of the initial fingerprint image;
and taking the ridge line fork point of each image block as the fingerprint fork point of the initial fingerprint image.
In one embodiment, the method further comprises:
if the pixel distance between two fingerprint endpoints is smaller than or equal to a first preset pixel distance and the direction error between the two fingerprint endpoints is within a preset direction error range, determining that the two fingerprint endpoints are fingerprint opposite head points.
In one embodiment, the method further comprises:
If the pixel distance between any two fingerprint feature points is smaller than or equal to a second preset pixel distance, calculating the quality of each fingerprint feature point according to a plurality of pixel gradient values of each fingerprint feature point;
and deleting target characteristic points with the quality smaller than or equal to a preset quality threshold value from the two fingerprint characteristic points.
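As an illustrative sketch of this quality-filtering step, the following minimal Python example abstracts the gradient-based quality measure into a caller-supplied function; the 8-pixel distance and 0.5 quality thresholds are hypothetical example values, not values given in the text:

```python
import math

def filter_close_points(points, quality_fn, min_dist=8, quality_thr=0.5):
    """Sketch of the close-point filtering step: when two feature points lie
    within the second preset pixel distance, each point's quality is computed
    (here via `quality_fn`, standing in for the gradient-based measure in the
    text) and points at or below the preset quality threshold are deleted."""
    kept = list(points)
    removed = set()
    for i in range(len(kept)):
        for j in range(i + 1, len(kept)):
            (x1, y1), (x2, y2) = kept[i], kept[j]
            if math.hypot(x1 - x2, y1 - y2) <= min_dist:
                # only points involved in a close pair are quality-checked
                for p in (kept[i], kept[j]):
                    if quality_fn(p) <= quality_thr:
                        removed.add(p)
    return [p for p in kept if p not in removed]
```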
In one embodiment, the step of acquiring the gradient direction of each image block includes:
acquiring a first direction gradient vector and a second direction gradient vector of each image block;
and calculating the gradient direction of each image block according to the first direction gradient vector and the second direction gradient vector of each image block.
In a second aspect, an embodiment of the present application provides a fingerprint feature point obtaining apparatus, including:
The compensation module is used for acquiring an initial fingerprint foreground of the initial fingerprint image, and compensating each pixel value of the initial fingerprint foreground according to the average pixel value of a preset neighborhood to obtain an enhanced fingerprint foreground;
The acquisition module is used for dividing the enhanced fingerprint foreground into a plurality of image blocks and acquiring the gradient direction of each image block;
The filtering module is used for carrying out smoothing filter processing on each image block in the gradient direction of each image block and sharpening filter processing on each image block in the target direction to respectively obtain a plurality of ridge-valley contrast enhancement maps, wherein the target direction is the direction perpendicular to the gradient direction of each image block;
The traversing processing module is used for traversing the pixel points of each ridge-valley contrast enhancement map, determining ridge line tracking points according to the traversed pixel points, and acquiring key feature points of each ridge-valley contrast enhancement map according to the gradient direction of each image block and each ridge line tracking point;
and the determining module is used for determining fingerprint feature points of the initial fingerprint image according to the key feature points of each ridge-valley contrast enhancement map.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is configured to store a computer program, and the processor is configured to run the computer program to perform the fingerprint feature point acquisition method provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium storing a computer program, which when run on a processor performs the fingerprint feature point acquisition method provided in the first aspect.
The fingerprint feature point acquisition method and device, the electronic equipment and the storage medium provided by the application acquire an initial fingerprint foreground of an initial fingerprint image, and compensate each pixel value of the initial fingerprint foreground according to the average pixel value of a preset neighborhood to obtain an enhanced fingerprint foreground; divide the enhanced fingerprint foreground into a plurality of image blocks and obtain the gradient direction of each image block; perform smoothing filter processing on each image block in the gradient direction of each image block and sharpening filter processing on each image block in the target direction to respectively obtain a plurality of ridge-valley contrast enhancement maps, wherein the target direction is the direction perpendicular to the gradient direction of each image block; traverse the pixel points of each ridge-valley contrast enhancement map, determine ridge line tracking points according to the traversed pixel points, and acquire key feature points of each ridge-valley contrast enhancement map according to the gradient direction of each image block and each ridge line tracking point; and determine fingerprint feature points of the initial fingerprint image according to the key feature points of each ridge-valley contrast enhancement map. In this way, ridge lines are tracked directly on the ridge-valley contrast enhancement maps, key feature points are detected on the ridge lines, and the fingerprint feature points are determined from the key feature points, so little time is consumed; the influence of burrs is largely avoided, and owing to the direction enhancement, breaks are better connected, the ridge line tracking effect is better, and the extraction accuracy of the fingerprint feature points is improved.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are required for the embodiments will be briefly described, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope of the present application. Like elements are numbered alike in the various figures.
Fig. 1 is a schematic flow chart of a fingerprint feature point obtaining method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a portion of a fingerprint image according to an embodiment of the present application;
FIG. 3 is another partial schematic view of a fingerprint image provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a fingerprint feature point obtaining device according to an embodiment of the present application.
Icon: 400-fingerprint feature point acquisition device; 401-compensation module; 402-acquisition module; 403-filtering module; 404-traversing processing module; 405-determining module; 301-first fingerprint endpoint; 302-second fingerprint endpoint.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments.
The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
The terms "comprises," "comprising," "including," or any other variation thereof, as used in various embodiments of the present application, are intended to cover a stated feature, number, step, operation, element, component, or combination of the foregoing, and do not exclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the application belong. The terms (such as those defined in commonly used dictionaries) will be interpreted as having a meaning that is the same as the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in connection with the various embodiments of the application.
Example 1
The embodiment of the disclosure provides a fingerprint feature point acquisition method.
Specifically, referring to fig. 1, the fingerprint feature point acquisition method includes:
Step S101, an initial fingerprint foreground of an initial fingerprint image is obtained, and all pixel values of the initial fingerprint foreground are compensated according to an average pixel value of a preset neighborhood, so that an enhanced fingerprint foreground is obtained.
In this embodiment, a fingerprint image may be collected by a camera, and filtering and denoising processing may be performed on the collected fingerprint image, so as to obtain an initial fingerprint image after filtering and denoising. Specifically, the filtering denoising process may include median filtering process and/or mean filtering process, or may be other filtering process, so long as the noise signal can be removed, which is not limited herein. Thus, the noise information of the initial fingerprint image is less, and the definition of the initial fingerprint image can be improved.
In this embodiment, the initial fingerprint image is divided into the initial fingerprint foreground and the initial fingerprint background by determining regional gradients, where the initial fingerprint foreground is a foreground image containing a large amount of fingerprint information, and the initial fingerprint background is an image not containing fingerprint information. In this embodiment, the initial fingerprint foreground obtained by segmentation is selected for subsequent pixel value compensation processing. The preset neighborhood may be the 8-neighborhood: in a 3×3 array of pixel values, the pixel value in row 2, column 2 is the central value, and the 8 surrounding pixel values form its 8-neighborhood. The average pixel value of the 8-neighborhood is calculated, the central value is compensated according to the pixel difference between the average pixel value and the central value, and this pixel value compensation is performed for each pixel value of the initial fingerprint foreground, so as to obtain the enhanced fingerprint foreground.
In this way, the enhanced fingerprint foreground is obtained by performing pixel compensation on each pixel value of the initial fingerprint foreground, so that the enhanced fingerprint foreground has clearer fingerprint lines.
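As a concrete illustration of this compensation step, the following is a minimal sketch in pure Python operating on a 2-D list of grey values. The gain parameter `alpha` is a hypothetical assumption, since the text specifies compensating by the difference between the neighborhood average and the central value but not the exact formula:

```python
def compensate_foreground(img, alpha=0.5):
    """Sketch of the 8-neighborhood pixel compensation described above.

    For every interior pixel, take the mean of its 8 neighbors in the 3x3
    window and shift the central value by a fraction `alpha` of the
    (mean - center) difference; `alpha` is an assumed gain parameter."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # border pixels are left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            avg = sum(neigh) / 8.0
            out[y][x] = img[y][x] + alpha * (avg - img[y][x])
    return out
```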
Step S102, dividing the enhanced fingerprint foreground into a plurality of image blocks, and obtaining the gradient direction of each image block.
In one embodiment, the step of acquiring the gradient direction of each image block in step S102 includes:
acquiring a first direction gradient vector and a second direction gradient vector of each image block;
and calculating the gradient direction of each image block according to the first direction gradient vector and the second direction gradient vector of each image block.
In this embodiment, the enhanced fingerprint foreground may be divided into a plurality of image blocks according to a preset size, which may be determined from empirical data. For example, the enhanced fingerprint foreground may be divided into image blocks of size w×w pixels, meaning that the pixel array of each image block has w rows and w columns, for w×w pixels in total. The following describes the calculation of the gradient direction for an image block of w×w pixels.
First, the gradient vectors dx and dy of each image block are obtained using the Sobel operator. The Sobel operator includes a horizontal operator S1 and a vertical operator S2, which may be given by the following formula 1 and formula 2, respectively:
Equation 1: horizontal operator S1 = [-1, 0, 1; -2, 0, 2; -1, 0, 1];
Equation 2: vertical operator S2 = [-1, -2, -1; 0, 0, 0; 1, 2, 1];
Calculating the horizontal gradient vx of each pixel point in each image block according to the horizontal operator S1, and calculating the vertical gradient vy of each pixel point in each image block according to the vertical operator S2; then calculating the gradient vectors of each image block according to the horizontal gradients vx and the vertical gradients vy of the pixel points in the image block. Specifically, the gradient vectors of each image block are calculated according to the following formulas 3 and 4.
Equation 3: dx = Σ vx;
Equation 4: dy = Σ vy;
wherein the sums run over all pixel points in the image block, dx represents the x-direction gradient vector of each image block, and dy represents the y-direction gradient vector of each image block.
Then the square gradient vectors of the image blocks are calculated from the gradient vectors dx and dy. Specifically, the square gradient vector of each image block is calculated according to the following formula 5 and formula 6.
Equation 5: GX = 2 × dx × dy;
Equation 6: GY = dx² − dy²;
GX and GY represent square gradient vectors of the image blocks.
And calculating the gradient direction of each image block according to the square gradient vector of each image block. Specifically, the gradient direction of each image block is calculated according to the following formula 7.
Equation 7: θ=0.5×arctan (GX/GY);
where θ represents the gradient direction of each image block.
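The per-block direction computation of formulas 1-7 can be sketched in pure Python as follows. The per-block sums for formulas 3 and 4 and the use of `atan2` in place of the bare arctan(GX/GY) of formula 7 (so that GY = 0 is handled) are assumptions of this sketch:

```python
import math

S1 = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel operator (formula 1)
S2 = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel operator (formula 2)

def block_gradient_direction(block):
    """Gradient direction theta of one w x w image block, following
    formulas 3-7 above (per-pixel Sobel gradients summed over the block)."""
    h, w = len(block), len(block[0])
    dx = dy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            vx = sum(S1[j][i] * block[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            vy = sum(S2[j][i] * block[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            dx += vx          # formula 3: sum of horizontal gradients
            dy += vy          # formula 4: sum of vertical gradients
    gx = 2.0 * dx * dy        # formula 5
    gy = dx * dx - dy * dy    # formula 6
    return 0.5 * math.atan2(gx, gy)   # formula 7, atan2 variant
```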
Step S103, performing smoothing filter processing on each image block in the gradient direction of each image block, and performing sharpening filter processing on each image block in the target direction, so as to obtain a plurality of ridge-valley contrast enhancement maps, wherein the target direction is the direction perpendicular to the gradient direction of each image block.
For example, the smoothing filter may be: [4, 8, 16, 23, 26, 23, 16, 8, 4], and the sharpening filter may be: [-11, -19, 0, 53, 82, 53, 0, -19, -11]. The smoothing filter is applied to the pixel points of each image block along the gradient direction of the image block, and the sharpening filter is applied along the direction perpendicular to the gradient direction of each image block. The specific filtering operation is a weighted sum of the pixel points along the respective direction, computed with the smoothing filter and with the sharpening filter, yielding each ridge-valley contrast enhancement map with increased ridge-valley contrast.
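A minimal sketch of this directional filtering follows, using the two 9-tap kernels given above. Nearest-neighbor sampling of the 9 pixels along the chosen direction and normalization by the kernel sum are both assumptions, since the text specifies only the kernels and the two directions:

```python
import math

SMOOTH = [4, 8, 16, 23, 26, 23, 16, 8, 4]          # smoothing filter from the text
SHARPEN = [-11, -19, 0, 53, 82, 53, 0, -19, -11]   # sharpening filter from the text

def directional_filter(img, y, x, theta, kernel, normalize=True):
    """Weighted sum of 9 samples taken along direction `theta` around (y, x),
    with nearest-neighbor sampling and edge clamping (assumed details)."""
    half = len(kernel) // 2
    acc, wsum = 0.0, 0.0
    for k, wgt in enumerate(kernel):
        t = k - half
        # sample position t steps along theta, clamped to the image
        yy = min(max(int(round(y + t * math.sin(theta))), 0), len(img) - 1)
        xx = min(max(int(round(x + t * math.cos(theta))), 0), len(img[0]) - 1)
        acc += wgt * img[yy][xx]
        wsum += wgt
    return acc / wsum if normalize and wsum else acc
```

The smoothing pass would use the block's gradient direction for `theta` and the sharpening pass the perpendicular direction.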
Step S104, traversing the pixel points of each ridge-valley contrast enhancement graph, determining ridge line tracking points according to the traversed pixel points, and acquiring key feature points of each ridge-valley contrast enhancement graph according to the gradient direction of each image block and each ridge line tracking point.
In one embodiment, the step of determining the ridge tracking point according to each pixel point traversed in step S104 includes:
taking the traversed pixel points as center points, and acquiring a pixel point set on the normal line of each corresponding pixel point;
judging whether a target pixel point exists in the pixel point set, wherein the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent previous pixel point of the target pixel point, and the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent next pixel point of the target pixel point;
If one target pixel point exists, determining the one target pixel point as the ridge line tracking point;
and if at least two target pixel points exist, determining the target pixel point closest to the center point from the at least two target pixel points as the ridge line tracking point.
Referring to fig. 2, fig. 2 is a partial schematic diagram of a ridge-valley contrast enhancement map. Each pixel point of the map shown in fig. 2 is traversed from left to right and from top to bottom, and it is judged whether the traversed pixel point is a point on a ridge line, i.e. whether it is a ridge line tracking point. The traversal process is described below with reference to fig. 2.
As shown in fig. 2, let pixel point s be the currently traversed pixel point of the ridge-valley contrast enhancement map. Since the ridge line width is generally 8-16 pixels, the gradient direction of the pixel point s is determined according to the gradient direction of the image block it belongs to, the normal line d1 perpendicular to the gradient direction of the pixel point s is determined, and the pixel point set x[i] is generated by selecting the pixel points within a preset radius of the pixel point s, taken as the center point, along the normal line d1. For example, the preset radius is 9 pixels, i.e. 18 pixel points are selected in sequence along the normal line d1 with the pixel point s as the center point, obtaining the pixel point set x[i], i = 1 to 18.
Traverse each pixel point in the pixel point set x[i]. Specifically, for the 18 pixel points, Gaussian filtering is first performed on the pixel values of the set x[i], and then it is judged whether there is a pixel point in x[i], i = 1 to 18, satisfying the following conditions: x[i] <= x[i-1] and x[i] <= x[i+1], where x[i] represents the pixel value of pixel point i, x[i-1] the pixel value of its adjacent previous pixel point, and x[i+1] the pixel value of its adjacent next pixel point. If a pixel point satisfies the above conditions, it is determined as a target pixel point. If there is exactly one target pixel point, for example pixel point 8, then pixel point 8 is determined as the ridge line tracking point p1. If there are at least two target pixel points, for example pixel point 3 and pixel point 7 both satisfy the conditions, the one closest to pixel point s is determined as the ridge line tracking point; if pixel point 3 is closest to pixel point s, pixel point 3 is determined as the ridge line tracking point p1. If no pixel point satisfies the conditions, no ridge line tracking point is found, and the above judgment is continued for the next pixel point after pixel point s.
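The candidate-selection rule above can be sketched as follows, operating on the 1-D list of pixel values sampled along the normal line; the Gaussian pre-filtering step is omitted for brevity:

```python
def find_ridge_tracking_point(values, center_index):
    """Pick the ridge tracking point from pixel values sampled along the
    normal through the traversed pixel (the set x[i] above).

    A target point satisfies x[i] <= x[i-1] and x[i] <= x[i+1]; with
    several candidates, the one nearest the center index wins."""
    targets = [i for i in range(1, len(values) - 1)
               if values[i] <= values[i - 1] and values[i] <= values[i + 1]]
    if not targets:
        return None        # no tracking point found on this normal
    return min(targets, key=lambda i: abs(i - center_index))
```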
In an embodiment, the step of obtaining key feature points of each ridge-valley contrast enhancement map according to the gradient direction of each image block and each ridge line tracking point in step S104 includes:
Determining the direction of each ridge line tracking point according to the gradient direction of each image block, and selecting candidate points from the ridge line tracking points according to the preset pixel step length according to the direction of each ridge line tracking point;
judging whether the candidate point is the next ridge tracking point or not;
and if the candidate point is the next ridge line tracking point, marking the ridge line according to the ridge line tracking point and the next ridge line tracking point, and tracking the next ridge line tracking point in sequence until the next ridge line tracking point cannot be determined, and taking the last ridge line tracking point of the ridge line as a ridge line endpoint.
For example, if the pixel point p1 is a ridge line tracking point, the direction of p1 is determined according to the gradient direction of the image block where p1 is located, and tracking proceeds from p1 along that direction with a preset pixel step size. For example, with a preset pixel step size d of 3 pixels, a pixel point p2 is selected 3 pixels away from p1 along the direction of p1, and it is judged whether p2 is a ridge line tracking point. If p2 is a ridge line tracking point, the ridge line is marked according to p1 and p2, and tracking continues in sequence until the ridge line cannot be marked any more. In fig. 2, the last ridge line tracking point of the ridge line is determined as a ridge line endpoint E, and if a ridge line tracking point is marked at least twice, it is marked as a ridge line fork point F.
Referring to fig. 2, when a ridge line tracking point is determined during traversal, for example when the pixel point s is determined as a ridge line tracking point, ridge line tracking is performed along the direction of the pixel point s, i.e. the direction d1 in fig. 2, up to the last pixel point E, and the ridge line is marked. Then, taking the pixel point s as the starting point, ridge line tracking is performed in the opposite direction, i.e. the direction d2, up to the last pixel point T, and the ridge line is marked. At this point, the whole ridge line on which the pixel point s lies has been marked. Therefore, during the traversal, if a pixel point has already been marked as a ridge line tracking point while a ridge line was previously tracked, the ridge line judgment need not be repeated for that pixel point.
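The tracking loop above may be sketched as follows. The image-dependent operations (direction lookup, stepping by the preset pixel step size, and tracking-point validation) are abstracted into caller-supplied functions, which is an assumption of this sketch:

```python
def trace_ridge(start, step_fn, is_tracking_point, max_steps=10000):
    """Sketch of the ridge tracing loop: from a tracking point, step a preset
    pixel distance along the local ridge direction, test whether the candidate
    is the next tracking point, and mark it.  The last accepted point is the
    ridge endpoint; a point marked at least twice is a fork point.

    `step_fn(p)` returns the next candidate point (or None) and
    `is_tracking_point(q)` validates it."""
    marks = {start: 1}     # point -> number of times it was marked
    ridge = [start]
    p = start
    for _ in range(max_steps):
        q = step_fn(p)
        if q is None or not is_tracking_point(q):
            break          # ridge can no longer be marked
        ridge.append(q)
        marks[q] = marks.get(q, 0) + 1
        p = q
    endpoint = ridge[-1]
    forks = [pt for pt, n in marks.items() if n >= 2]
    return ridge, endpoint, forks
```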
In this embodiment, the direction of the ridge line bifurcation point may be determined according to the following steps: determining an auxiliary circle according to a preset pixel radius by taking a ridge line fork point as a circle center, wherein the auxiliary circle is intersected with 3 corresponding ridge lines at 3 intersection points respectively;
And connecting each intersection point with the circle center to form 3 included angles, determining an acute angle in the 3 included angles, and determining the direction pointed by an angular bisector of the acute angle as the direction of the crotch point.
For example, the preset pixel radius may be 20 pixels.
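A sketch of this auxiliary-circle construction follows, assuming the 3 intersection points of the circle with the ridge lines are already known as (x, y) coordinates:

```python
import math

def fork_direction(center, intersections):
    """Direction of a ridge fork point from the auxiliary-circle construction
    above: the 3 circle/ridge intersections give 3 rays from the center; the
    bisector of the smallest (acute) included angle between adjacent rays is
    taken as the fork direction, returned as an angle in radians."""
    cx, cy = center
    angles = sorted(math.atan2(y - cy, x - cx) % (2 * math.pi)
                    for x, y in intersections)
    best = None
    for i in range(3):
        a, b = angles[i], angles[(i + 1) % 3]
        gap = (b - a) % (2 * math.pi)        # included angle from ray a to ray b
        if best is None or gap < best[0]:
            best = (gap, (a + gap / 2) % (2 * math.pi))
    return best[1]                           # bisector of the smallest angle
```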
In one embodiment, the key feature points include: a ridgeline end point and a ridgeline fork point; the step of obtaining key feature points of each ridge-valley contrast enhancement map according to the gradient direction of each image block and each ridge line tracking point in step S104 includes:
Starting from each ridge tracking point according to the gradient direction of each image block, determining other tracking points from the ridge with a preset step length until the ridge end point is tracked;
And determining a pixel point that is marked twice as a tracking point to be the ridge fork point.
Step S105, determining fingerprint feature points of the initial fingerprint image according to the key feature points of each ridge-valley contrast enhancement map.
In one embodiment, the fingerprint feature points include fingerprint end points and fingerprint fork points, and the step S105 includes the following steps:
taking the ridge line end points of the image blocks as fingerprint end points of the initial fingerprint image;
and taking the ridge line fork point of each image block as the fingerprint fork point of the initial fingerprint image.
Compared with existing fingerprint feature extraction schemes, which must strip the skeleton layer by layer before tracking the ridge line, the present application tracks the ridge line directly on the ridge-valley contrast enhancement map, so it is less time-consuming. Existing schemes also place high demands on image enhancement, binarization and thinning, otherwise holes, burrs and the like appear in the thinned image and produce pseudo feature points; by tracking the ridge line on the ridge-valley contrast enhancement map, the present application largely avoids the influence of burrs in the thinned image, and because of the directional enhancement, broken points are better connected and tracking can rely on pixel value differences.
In an embodiment, the fingerprint feature point obtaining method further includes:
if the pixel distance between two fingerprint endpoints is smaller than or equal to a first preset pixel distance and the direction error between the two fingerprint endpoints is within a preset direction error range, the two fingerprint endpoints are determined to be fingerprint opposing points.
In this embodiment, the first preset pixel distance is a user-defined value and may be set individually for different fingerprint images; for example, it may be 7-13 pixels. As a special case, if the pixel distance between two fingerprint endpoints is smaller than or equal to the first preset pixel distance and the directions of the two fingerprint endpoints are the same, the two fingerprint endpoints are determined to be fingerprint opposing points.
Referring to fig. 3, a first fingerprint endpoint 301 and a second fingerprint endpoint 302 are fingerprint opposing points.
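The opposing-point test can be sketched as a distance check plus a wrapped direction-error check. The thresholds here are illustrative: the patent gives 7-13 pixels for the distance, and leaves the direction error range user-defined:

```python
import math

def are_opposing_points(p1, d1, p2, d2,
                        max_dist=10.0, max_dir_err=math.radians(15)):
    # p1, p2: endpoint coordinates; d1, d2: endpoint directions in radians.
    if math.hypot(p2[0] - p1[0], p2[1] - p1[1]) > max_dist:
        return False                       # too far apart to be a pair
    err = (d1 - d2 + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return abs(err) <= max_dir_err         # direction error within range
```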
In an embodiment, the fingerprint feature point obtaining method further includes:
If the pixel distance between any two fingerprint feature points is smaller than or equal to a second preset pixel distance, calculating the quality of each fingerprint feature point according to a plurality of pixel gradient values of each fingerprint feature point;
and deleting target characteristic points with the quality smaller than or equal to a preset quality threshold value from the two fingerprint characteristic points.
It should be noted that the second preset pixel distance is a custom value, for example, the second preset pixel distance may be 4 pixels.
In this embodiment, the step of calculating the quality of each of the fingerprint feature points according to the plurality of pixel gradient values of each of the fingerprint feature points may include the steps of:
Determining a plurality of gradient directions of the fingerprint feature points, obtaining the difference value of the fingerprint feature points in the gradient directions, and calculating a difference average value according to the difference value in the gradient directions; and determining the quality of each fingerprint feature point according to the difference value and the difference average value in each gradient direction.
For example, if the fingerprint feature point is the pixel point p0, a 3×3 pixel array centered on p0 is determined, which may be represented as [p4, p5, p6; p1, p0, p2; p7, p8, p9], where p4 is the pixel in row 1, column 1, p5 the pixel in row 1, column 2, and p6 the pixel in row 1, column 3; p1 the pixel in row 2, column 1, p0 the pixel in row 2, column 2, and p2 the pixel in row 2, column 3; p7 the pixel in row 3, column 1, p8 the pixel in row 3, column 2, and p9 the pixel in row 3, column 3. The difference values in the left-right, up-down, upper-left-to-lower-right, and upper-right-to-lower-left directions are calculated according to formulas 8-11.
Equation 8: dx1 = abs(p0-p1) + abs(p0-p2);
Equation 9: dy1 = abs(p0-p5) + abs(p0-p8);
Equation 10: dm1 = abs(p0-p4) + abs(p0-p9);
Equation 11: dn1 = abs(p0-p6) + abs(p0-p7);
where abs denotes the absolute value, dx1 is the difference value in the left-right direction, dy1 the difference value in the up-down direction, dm1 the difference value in the upper-left-to-lower-right direction, and dn1 the difference value in the upper-right-to-lower-left direction.
The mean of the difference values in the left-right, up-down, upper-left-to-lower-right, and upper-right-to-lower-left directions is calculated according to equation 12.
Equation 12: mean = (dx1 + dy1 + dm1 + dn1)/4;
where mean is the difference mean, and dx1, dy1, dm1 and dn1 are the difference values in the four directions defined above.
The quality of each fingerprint feature point is then determined according to equation 13.
Equation 13:
Q = 50 × ((|dx1-mean| + |dy1-mean| + |dm1-mean| + |dn1-mean|)/mean)
where Q is the quality of the fingerprint feature point, mean is the difference mean, dx1, dy1, dm1 and dn1 are the difference values in the four directions, and "|" denotes the absolute value; for example, |dx1-mean| is the absolute error between the left-right difference value and the difference mean, and |dy1-mean| the absolute error between the up-down difference value and the difference mean.
In this embodiment, the larger the quality value, the better the fingerprint feature point; the smaller the quality value, the worse. The preset quality threshold may be determined empirically, for example 10: fingerprint feature points with quality smaller than or equal to 10 are treated as poor and deleted, while those with quality greater than 10 are treated as good and retained, improving the overall quality of the fingerprint feature points.
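Putting the four directional differences, the difference mean, and the quality formula together gives the sketch below. One assumption is labeled explicitly: the upper-right-to-lower-left difference pairs the top-right neighbor p6 with the bottom-left neighbor p7, the symmetric counterpart of the diagonal term:

```python
def minutia_quality(patch):
    """Quality of a feature point from its 3x3 neighborhood.

    patch: [[p4, p5, p6], [p1, p0, p2], [p7, p8, p9]], center p0.
    """
    (p4, p5, p6), (p1, p0, p2), (p7, p8, p9) = patch
    dx1 = abs(p0 - p1) + abs(p0 - p2)      # left-right difference
    dy1 = abs(p0 - p5) + abs(p0 - p8)      # up-down difference
    dm1 = abs(p0 - p4) + abs(p0 - p9)      # upper-left to lower-right
    dn1 = abs(p0 - p6) + abs(p0 - p7)      # upper-right to lower-left (assumed p7)
    mean = (dx1 + dy1 + dm1 + dn1) / 4     # difference mean
    if mean == 0:
        return 0.0                         # flat patch: no directional contrast
    return 50 * (abs(dx1 - mean) + abs(dy1 - mean)
                 + abs(dm1 - mean) + abs(dn1 - mean)) / mean
```

A point whose quality falls at or below the preset threshold (for example 10) would then be deleted.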
According to the fingerprint feature point acquisition method provided by this embodiment, an initial fingerprint foreground of an initial fingerprint image is acquired, and each pixel value of the initial fingerprint foreground is compensated according to the average pixel value of a preset neighborhood to obtain an enhanced fingerprint foreground; the enhanced fingerprint foreground is divided into a plurality of image blocks, and the gradient direction of each image block is obtained; smoothing filtering is performed on each image block in its gradient direction and sharpening filtering in the target direction, that is, the direction perpendicular to the gradient direction of the image block, to obtain a plurality of ridge-valley contrast enhancement maps; the pixel points of each ridge-valley contrast enhancement map are traversed, ridge line tracking points are determined from the traversed pixel points, and key feature points of each ridge-valley contrast enhancement map are acquired according to the gradient direction of each image block and each ridge line tracking point; and the fingerprint feature points of the initial fingerprint image are determined from the key feature points of each ridge-valley contrast enhancement map. In this way, by tracking the ridge line directly on the ridge-valley contrast enhancement map, detecting key feature points on the ridge line, and determining the fingerprint feature points from those key feature points, less time is consumed; the influence of burrs is better avoided, and thanks to the directional enhancement, broken points are better connected, the ridge line tracking effect is better, and the extraction accuracy of the fingerprint feature points is improved.
Example 2
In addition, the embodiment of the disclosure provides a fingerprint feature point acquisition device.
Specifically, referring to fig. 4, the fingerprint feature point obtaining apparatus 400 includes:
The compensation module 401 is configured to obtain an initial fingerprint foreground of an initial fingerprint image, compensate each pixel value of the initial fingerprint foreground according to an average pixel value of a preset neighborhood, and obtain an enhanced fingerprint foreground;
An obtaining module 402, configured to divide the enhanced fingerprint foreground into a plurality of image blocks, and obtain a gradient direction of each image block;
A filtering module 403, configured to perform smoothing filtering processing on each image block in a gradient direction of each image block, and perform sharpening filtering processing on each image block in a target direction, where the target direction is a vertical direction perpendicular to the gradient direction of each image block, to obtain a plurality of ridge-valley contrast enhancement graphs;
The traversal processing module 404 is configured to traverse the pixel points of the ridge-valley contrast enhancement graphs, determine ridge tracking points according to the traversed pixel points, and obtain key feature points of the ridge-valley contrast enhancement graphs according to the gradient direction of each image block and each ridge tracking point;
The determining module 405 is configured to determine fingerprint feature points of the initial fingerprint image according to key feature points of each ridge-valley contrast enhancement map.
In an embodiment, the traversal processing module 404 is further configured to obtain a set of pixel points on a normal line corresponding to each pixel point with the traversed pixel point as a center point;
judging whether a target pixel point exists in the pixel point set, wherein the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent previous pixel point of the target pixel point, and the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent next pixel point of the target pixel point;
If one target pixel point exists, determining the one target pixel point as the ridge line tracking point;
and if at least two target pixel points exist, determining the target pixel point closest to the center point from the at least two target pixel points as the ridge line tracking point.
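The target-point selection above can be sketched over a 1-D sample of pixel values taken along the normal. One assumption is that ridges are darker than valleys, so a ridge shows up as a grey-level minimum:

```python
def ridge_point_on_normal(values, center_idx):
    """Pick the ridge tracking point from pixel values sampled on the normal.

    A target point is a value no greater than both of its neighbours; if
    several exist, the one closest to the center point wins.
    """
    targets = [i for i in range(1, len(values) - 1)
               if values[i] <= values[i - 1] and values[i] <= values[i + 1]]
    if not targets:
        return None                         # no target pixel on this normal
    return min(targets, key=lambda i: abs(i - center_idx))
```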
In one embodiment, the key feature points include: a ridgeline end point and a ridgeline fork point; the traversal processing module 404 is further configured to determine, from each of the ridge tracking points, other tracking points from the ridge with a preset step size according to a gradient direction of each of the image blocks until the ridge endpoint is tracked;
And determining the ridge fork point by marking the pixel points twice as tracking points.
In an embodiment, the fingerprint feature points include fingerprint end points and fingerprint fork points, and the determining module 405 is further configured to use a ridge line end point of each image block as a fingerprint end point of the initial fingerprint image;
and taking the ridge line fork point of each image block as the fingerprint fork point of the initial fingerprint image.
In an embodiment, the determining module 405 is further configured to determine that two fingerprint endpoints are fingerprint opposing points if the pixel distance between the two fingerprint endpoints is less than or equal to a first preset pixel distance and the direction error between the two fingerprint endpoints is within a preset direction error range.
In one embodiment, the fingerprint feature point obtaining apparatus 400 further includes:
The processing module is further used for calculating the quality of each fingerprint feature point according to a plurality of pixel gradient values of each fingerprint feature point if the pixel distance between any two fingerprint feature points is smaller than or equal to a second preset pixel distance;
and deleting target characteristic points with the quality smaller than or equal to a preset quality threshold value from the two fingerprint characteristic points.
In an embodiment, the obtaining module 402 is further configured to obtain a first direction gradient vector and a second direction gradient vector of each of the image blocks;
and calculating the gradient direction of each image block according to the first direction gradient vector and the second direction gradient vector of each image block.
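A common way to combine the two directional gradient vectors into a single block orientation is the doubled-angle average shown below. The patent does not give its exact formula, so this is an assumption based on standard fingerprint orientation estimation:

```python
import numpy as np

def block_gradient_direction(block):
    """Dominant gradient angle (radians) of one image block.

    gy, gx play the role of the first and second direction gradient
    vectors; the doubled-angle sums make opposite gradients reinforce
    rather than cancel.
    """
    gy, gx = np.gradient(block.astype(float))
    vx = np.sum(2 * gx * gy)
    vy = np.sum(gx ** 2 - gy ** 2)
    return 0.5 * np.arctan2(vx, vy)
```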
The fingerprint feature point acquisition device provided by this embodiment acquires an initial fingerprint foreground of an initial fingerprint image and compensates each pixel value of the initial fingerprint foreground according to the average pixel value of a preset neighborhood to obtain an enhanced fingerprint foreground; divides the enhanced fingerprint foreground into a plurality of image blocks and obtains the gradient direction of each image block; performs smoothing filtering on each image block in its gradient direction and sharpening filtering in the target direction, that is, the direction perpendicular to the gradient direction of the image block, to obtain a plurality of ridge-valley contrast enhancement maps; traverses the pixel points of each ridge-valley contrast enhancement map, determines ridge line tracking points from the traversed pixel points, and acquires key feature points of each ridge-valley contrast enhancement map according to the gradient direction of each image block and each ridge line tracking point; and determines the fingerprint feature points of the initial fingerprint image from the key feature points of each ridge-valley contrast enhancement map. In this way, by tracking the ridge line directly on the ridge-valley contrast enhancement map, detecting key feature points on the ridge line, and determining the fingerprint feature points from those key feature points, less time is consumed; the influence of burrs is better avoided, and thanks to the directional enhancement, broken points are better connected, the ridge line tracking effect is better, and the extraction accuracy of the fingerprint feature points is improved.
Example 3
Further, an embodiment of the present disclosure provides an electronic device including a memory and a processor, the memory storing a computer program that, when executed by the processor, performs the fingerprint feature point acquisition method provided in embodiment 1.
The electronic device provided in this embodiment may implement the fingerprint feature point obtaining method provided in embodiment 1, and in order to avoid repetition, details are not repeated here.
Example 4
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when run on a processor, performs the fingerprint feature point acquisition method provided in embodiment 1.
The method for obtaining the fingerprint feature points according to embodiment 1 can be implemented by a computer readable storage medium, and is not described herein in detail for avoiding repetition.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general hardware platform, or by hardware alone, although in many cases the former is preferred. Based on such understanding, the technical solution of the present application, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
Claims (8)
1. A method for obtaining a fingerprint feature point, the method comprising:
Acquiring an initial fingerprint foreground of an initial fingerprint image, and compensating each pixel value of the initial fingerprint foreground according to an average pixel value of a preset neighborhood to obtain an enhanced fingerprint foreground;
Dividing the enhanced fingerprint foreground into a plurality of image blocks, and obtaining the gradient direction of each image block;
Performing smoothing filter processing on each image block in the gradient direction of each image block and sharpening filter processing on each image block in the target direction to respectively obtain a plurality of ridge-valley contrast enhancement graphs, wherein the target direction is a vertical direction perpendicular to the gradient direction of each image block;
Traversing pixel points of each ridge-valley contrast enhancement graph, determining ridge line tracking points according to the traversed pixel points, and acquiring key feature points of each ridge-valley contrast enhancement graph according to the gradient direction of each image block and each ridge line tracking point;
determining fingerprint feature points of the initial fingerprint image according to key feature points of each ridge-valley contrast enhancement graph;
the step of determining the ridge tracking point according to the traversed pixel point comprises the following steps:
taking the traversed pixel points as center points, and acquiring a pixel point set on the normal line of each corresponding pixel point;
judging whether a target pixel point exists in the pixel point set, wherein the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent previous pixel point of the target pixel point, and the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent next pixel point of the target pixel point;
If one target pixel point exists, determining the one target pixel point as the ridge line tracking point;
If at least two target pixel points exist, determining a target pixel point closest to the center point from the at least two target pixel points as the ridge line tracking point;
the key feature points include: a ridgeline end point and a ridgeline fork point;
the step of obtaining key feature points of each ridge-valley contrast enhancement map according to the gradient direction of each image block and each ridge line tracking point comprises the following steps:
Starting from each ridge tracking point according to the gradient direction of each image block, determining other tracking points from the ridge with a preset step length until the ridge end point is tracked;
And determining the ridge fork point by marking the pixel points twice as tracking points.
2. The method of claim 1, wherein the fingerprint feature points comprise fingerprint end points and fingerprint fork points, and wherein the step of determining the fingerprint feature points of the initial fingerprint image from key feature points of each of the ridge-valley contrast enhancement maps comprises:
taking the ridge line end points of the image blocks as fingerprint end points of the initial fingerprint image;
and taking the ridge line fork point of each image block as the fingerprint fork point of the initial fingerprint image.
3. The method according to claim 2, wherein the method further comprises:
if the pixel distance between two fingerprint endpoints is smaller than or equal to the first preset pixel distance and the direction error between the two fingerprint endpoints is within a preset direction error range, determining that the two fingerprint endpoints are fingerprint opposing points.
4. The method according to claim 1, wherein the method further comprises:
If the pixel distance between any two fingerprint feature points is smaller than or equal to a second preset pixel distance, calculating the quality of each fingerprint feature point according to a plurality of pixel gradient values of each fingerprint feature point;
and deleting target characteristic points with the quality smaller than or equal to a preset quality threshold value from the two fingerprint characteristic points.
5. The method of claim 1, wherein the step of acquiring the gradient direction of each of the image blocks comprises:
acquiring a first direction gradient vector and a second direction gradient vector of each image block;
and calculating the gradient direction of each image block according to the first direction gradient vector and the second direction gradient vector of each image block.
6. A fingerprint feature point acquisition apparatus, characterized by comprising:
The compensation module is used for acquiring an initial fingerprint foreground of the initial fingerprint image, and compensating each pixel value of the initial fingerprint foreground according to the average pixel value of a preset neighborhood to obtain an enhanced fingerprint foreground;
The acquisition module is used for dividing the enhanced fingerprint foreground into a plurality of image blocks and acquiring the gradient direction of each image block;
The filtering module is used for carrying out smoothing filtering treatment on each image block in the gradient direction of each image block and sharpening filtering treatment on each image block in the target direction to respectively obtain a plurality of ridge-valley contrast enhancement graphs, wherein the target direction is the vertical direction perpendicular to the gradient direction of each image block;
the traversing processing module is used for traversing the pixel points of each ridge-valley contrast enhancement graph, determining ridge line tracking points according to the traversed pixel points, and acquiring key feature points of each ridge-valley contrast enhancement graph according to the gradient direction of each image block and each ridge line tracking point;
The determining module is used for determining fingerprint feature points of the initial fingerprint image according to key feature points of each ridge-valley contrast enhancement graph;
the traversing processing module is also used for acquiring a pixel point set on the normal line of each corresponding pixel point by taking the traversed pixel point as a center point;
judging whether a target pixel point exists in the pixel point set, wherein the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent previous pixel point of the target pixel point, and the pixel value of the target pixel point is smaller than or equal to the pixel value of the adjacent next pixel point of the target pixel point;
If one target pixel point exists, determining the one target pixel point as the ridge line tracking point;
If at least two target pixel points exist, determining a target pixel point closest to the center point from the at least two target pixel points as the ridge line tracking point;
The key feature points include: a ridgeline end point and a ridgeline fork point; the traversal processing module is further used for determining other tracking points from the ridge line according to the gradient direction of each image block from each ridge line tracking point by a preset step length until the ridge line end points are tracked;
And determining the ridge fork point by marking the pixel points twice as tracking points.
7. An electronic device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, performs the method of fingerprint feature point acquisition of any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that it stores a computer program which, when run on a processor, performs the fingerprint feature point acquisition method of any one of claims 1 to 5.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210596333.XA (CN114913555B) | 2022-05-30 | 2022-05-30 | Fingerprint feature point acquisition method and device, electronic equipment and storage medium |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN114913555A | 2022-08-16 |
| CN114913555B | 2024-09-13 |
Family
ID=82767957
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |