
CN107220580B - Image identification random sampling consistent algorithm based on voting decision and least square method - Google Patents


Info

Publication number
CN107220580B
CN107220580B (application number CN201610167808.8A)
Authority
CN
China
Prior art keywords
voting
point
points
template
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610167808.8A
Other languages
Chinese (zh)
Other versions
CN107220580A (en)
Inventor
王丰
张靖恺
龙文勇
吕虹晓
郑邦雄
曾梦旭
曾艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FocalTech Systems Ltd
Original Assignee
FocalTech Systems Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FocalTech Systems Ltd filed Critical FocalTech Systems Ltd
Priority to CN201610167808.8A
Publication of CN107220580A
Application granted
Publication of CN107220580B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12: Fingerprints or palmprints
    • G06V40/1365: Matching; Classification
    • G06V40/1347: Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Character Input (AREA)
  • Image Analysis (AREA)

Abstract

A random sample consensus algorithm for image recognition based on voting decision and the least square method introduces a voting decision scheme to find the point pairs most likely to be correct, and applies the least square principle to compute the rotation-translation transformation matrix. The matrix computed by the method is accurate, fast, and stable, and the result does not fluctuate randomly from run to run. The invention not only optimizes the image comparison method but also supports the subsequent learning function of the data processing chip: the feature information of the reference image grows continuously, which effectively reduces the false rejection rate and makes the data processing chip more intelligent. Through the continuously updated rotation-translation transformation matrix, the invention also realizes image stitching more efficiently and accurately.

Description

Image identification random sampling consistent algorithm based on voting decision and least square method
Technical Field
The invention relates to a data processing method, in particular to a data processing method for image comparison and image stitching.
Background
To perform image comparison, and image stitching based on the comparison result, the prior art takes a stored base image as the template image and the image to be compared as the test image. Hamming matching and spatial outlier removal on the descriptors of the feature points of the two images yield a set of coarse matching point pairs, and a Random Sample Consensus algorithm (RANSAC for short) is then used to remove mismatches and compute the rotation-translation transformation matrix. The idea of conventional RANSAC is to estimate an H matrix from three point pairs selected at random from the coarse matching set, and then use the estimated H matrix to determine a consensus set over the two point sets: if, after mapping through the H matrix, the Euclidean distance between a point in one set and its corresponding point in the other set is below a threshold, the pair is considered to belong to the consensus set of that H matrix. Finally, the consensus set containing the most points is taken as the final set of precisely matched point pairs, and its H matrix is the final rotation-translation transformation matrix.
Because conventional RANSAC estimates the H matrix from randomly selected point pairs each time, it is highly random, computationally expensive, and inefficient: a suitable H matrix is found only after many repeated iterations. In particular, when the two matching point sets fed into RANSAC contain many erroneous pairs at a high ratio, the required number of iterations grows; otherwise the H matrix may not be found at all, while running time and storage grow in proportion to the iteration count. Moreover, the H matrix and consensus set computed by RANSAC are not exactly the same from run to run.
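For contrast with the invention, the conventional RANSAC loop criticized above can be sketched as follows. This is an illustrative minimal version only: the helper `estimate_h` fits a rotation-translation from just two sampled pairs (the text above mentions three), and the iteration count and distance threshold are placeholders, not values from the patent.

```python
import math
import random

def estimate_h(src, dst):
    # Fit a rotation + translation from two point pairs (a simplified
    # stand-in for the H-matrix estimation step).
    (x0, y0), (x1, y1) = src
    (u0, v0), (u1, v1) = dst
    theta = math.atan2(v1 - v0, u1 - u0) - math.atan2(y1 - y0, x1 - x0)
    c, s = math.cos(theta), math.sin(theta)
    tx = u0 - (c * x0 - s * y0)
    ty = v0 - (s * x0 + c * y0)
    return (c, s, tx, ty)

def apply_h(h, p):
    c, s, tx, ty = h
    x, y = p
    return (c * x - s * y + tx, s * x + c * y + ty)

def ransac(test_pts, ref_pts, iters=200, thresh=2.0, seed=0):
    """Classic RANSAC: repeatedly sample pairs, estimate H, keep the H
    whose consensus set (pairs within `thresh` after mapping) is largest."""
    rng = random.Random(seed)
    best_h, best_inliers = None, []
    n = len(test_pts)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        h = estimate_h((test_pts[i], test_pts[j]), (ref_pts[i], ref_pts[j]))
        inliers = [k for k in range(n)
                   if math.dist(apply_h(h, test_pts[k]), ref_pts[k]) < thresh]
        if len(inliers) > len(best_inliers):
            best_h, best_inliers = h, inliers
    return best_h, best_inliers
```

The random sampling is exactly what the patent objects to: the loop must run many times, and different seeds can yield different consensus sets.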
Disclosure of Invention
The technical problem to be solved by the invention is to provide a random sample consensus algorithm for image recognition based on voting decision and the least square method that avoids the defects of the prior art: a voting decision scheme is introduced to find the point pairs most likely to be correct, the rotation-translation transformation matrix is computed by the least square principle, and a new RANSAC-style algorithm is designed around them. After Hamming matching and spatial outlier removal on the descriptors of the feature points of the template image and the image under test, the method obtains two coarse matching point sets: the template image coarse matching point set (referenceset) and the test image coarse matching point set (testset). Each point carries a sequence label (index) within its set, and the coarse matching pairs across the two sets correspond one to one; the correctness of these coarse matches, however, still requires verification, i.e. outlier removal by the RANSAC algorithm of this method.
The technical problem to be solved by the invention can be realized by adopting the following technical scheme:
An image recognition random sample consensus algorithm based on voting decision and the least square method is implemented as follows: one of the stored image templates serves as the template image, an input image serves as the test image, and whether the test image matches the template image is decided by comparing the feature point information of the test image against that of each template image. Specifically, the feature point information of the test image and of each template image is processed in the following steps:
A. selecting characteristic points in the test image to form a test image characteristic point set, and selecting characteristic points in the template image to form a template image characteristic point set;
finding out coarse matching points for each feature point of the test image in the template image feature point set, so that two corresponding coarse matching points belonging to the test image feature point set and the template image feature point set become coarse matching point pairs;
if the number of the rough matching point pairs is larger than the set rough matching threshold, performing the step B; otherwise, performing the step J;
B. the rough matching points belonging to the test image feature point set in the rough matching point pair form a test image rough matching point set, and the rough matching points belonging to the template image feature point set form a template image rough matching point set;
C. finding two mutually corresponding pairs of points with highest votes in the coarse matching point set of the test image and the coarse matching point set of the template image by a voting decision method to form two optimal pairs of points;
D. calculating a rotation translation transformation matrix by using two optimal point pairs consisting of the test image rough matching point set and the template image rough matching point set based on a least square method principle;
E. screening point pairs from the coarse matching point set of the test image and the coarse matching point set of the template image by means of a rotation translation transformation matrix to form a consistent set;
if the number of the consistent set inner point pairs is larger than the set consistent point pair number threshold, performing step F; otherwise, performing the step J;
F. judging that the test image is matched with the template image;
J. the algorithm ends.
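Steps A through F/J above can be summarized as control flow. Everything in this sketch is scaffolding: the four helper callables and the consensus threshold default are hypothetical stand-ins, not values fixed by the patent (only the coarse matching threshold of 20 pairs appears later in the preferred embodiment).

```python
def compare(test_feats, template_feats,
            coarse_match, vote_best_pairs, fit_transform, build_consensus,
            coarse_thresh=20, consensus_thresh=13):
    # Steps A/B: coarse matching between the two feature point sets.
    pairs = coarse_match(test_feats, template_feats)
    if len(pairs) <= coarse_thresh:
        return False                      # step J: too few coarse pairs
    test_set = [t for t, _ in pairs]      # test image coarse matching point set
    ref_set = [r for _, r in pairs]       # template image coarse matching point set
    # Step C: voting decision picks the two most-voted corresponding pairs.
    (p0, p1), (q0, q1) = vote_best_pairs(test_set, ref_set)
    # Step D: least-squares rotation-translation matrix from the two best pairs.
    h = fit_transform([p0, p1], [q0, q1])
    # Step E: consensus set of pairs consistent with h.
    consensus = build_consensus(h, test_set, ref_set)
    # Steps F/J: match decision.
    return len(consensus) > consensus_thresh
```

Each helper corresponds to one lettered step; the voting and least-squares pieces are detailed in the sub-steps that follow.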
The invention also provides an image stitching method based on this random sample consensus RANSAC algorithm, to further improve image comparison precision and enlarge the template image coverage; the following steps are inserted between step F and step J:
G. calculating an updated rotation translation transformation matrix by using point pairs in a consistent set based on a least square method principle;
I. and mapping points in the test image feature point set into updated splicing feature points by means of the updated rotation translation transformation matrix, and adding the updated splicing feature points into the template image feature point set.
One possible solution adopted by the voting decision method is that the step C includes the following sub-steps:
C11. assigning serial numbers to the M points of the template image coarse matching point set and constructing a point pair distance voting matrix of M rows and M columns, in which the off-diagonal elements are the voting elements;
C12. obtaining the template coarse matching point distance between each point of the template image coarse matching point set and every other point, and the test coarse matching point distance between each point of the test image coarse matching point set and every other point, and putting the template coarse matching point distances in one-to-one correspondence with the test coarse matching point distances according to the serial number correspondence between the coarse matching points of the two sets;
C13. judging, one by one, whether the difference between a template coarse matching point distance and its corresponding test coarse matching point distance is smaller than a set coarse matching distance difference threshold; if so, setting as voting marks the two elements of the point pair distance voting matrix whose row and column numbers are the serial numbers of the two points defining that template coarse matching point distance; if not, setting those two elements as non-voting marks;
C14. selecting the template image points whose serial numbers label the two rows whose voting mark counts rank first and second, and forming two optimal point pairs with the corresponding points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
The voting decision method is further derived based on the sub-steps C11 to C14, and the step C comprises the following sub-steps:
C21. assigning serial numbers to the M points of the template image coarse matching point set and constructing a point pair distance voting matrix of M rows and M columns, in which the off-diagonal elements are the voting elements;
C22. obtaining the template coarse matching point distance between each point of the template image coarse matching point set and every other point, and the test coarse matching point distance between each point of the test image coarse matching point set and every other point, and putting the template coarse matching point distances in one-to-one correspondence with the test coarse matching point distances according to the serial number correspondence between the coarse matching points of the two sets;
C23. judging, one by one, whether the difference between a template coarse matching point distance and its corresponding test coarse matching point distance is smaller than a set coarse matching distance difference threshold; if so, setting as voting marks the two elements of the point pair distance voting matrix whose row and column numbers are the serial numbers of the two points defining that template coarse matching point distance; if not, setting those two elements as non-voting marks;
C24. selecting Z points represented by the line sequence numbers of the Z rows arranged at the front Z position in the voting mark number from the point pair distance voting matrix to form a template image initial judgment point set, wherein the template image initial judgment point set forms a test image initial judgment point set at the Z points corresponding to the test image rough matching point set, and Z is a natural number not less than 2;
C25. acquiring vectors between any two points in a template image initial judgment point set to form a template vector set with Y vectors, wherein Y is the combination number of the two points taken from Z points, correspondingly acquiring the vectors between any two points in a test image initial judgment point set to form a test vector set with Y vectors, and enabling the template vectors to correspond to the test vectors one by one according to the corresponding relation of the points between the template image initial judgment point set and the test image initial judgment point set;
C26. setting serial numbers for Y vectors of a template vector set, and sequentially setting and constructing a vector included angle voting matrix of Y rows and Y columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C27. calculating the included angle between any two vectors in the template vector set and, correspondingly, the included angle between any two vectors in the test vector set, and judging whether the difference between a template-set included angle and the test-set included angle with the same serial numbers is smaller than a set vector included angle difference threshold; if so, setting as voting marks the two elements of the vector included angle voting matrix whose row and column numbers are the serial numbers of the two vectors; if not, setting those two elements as non-voting marks;
C28. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
Specifically, the Z is a natural number not less than 2 and not more than one tenth of the number of points in the template image rough matching point set.
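The derived scheme C21 to C28 first distance-votes to keep the top-Z points and then votes on the included angles between the Y = C(Z, 2) vectors among them. A sketch under the same assumptions as before (Euclidean distances, placeholder thresholds):

```python
import math
from itertools import combinations

def refine_by_angle_vote(ref_pts, test_pts, z=4, dist_thresh=2.0, ang_thresh=0.1):
    """C21-C28: distance voting, then an angle vote over the top-Z points."""
    m = len(ref_pts)
    # C21-C23: tally of voting marks per row of the distance voting matrix.
    row_votes = [0] * m
    for i, j in combinations(range(m), 2):
        if abs(math.dist(ref_pts[i], ref_pts[j])
               - math.dist(test_pts[i], test_pts[j])) < dist_thresh:
            row_votes[i] += 1
            row_votes[j] += 1
    # C24: the top-Z rows form the initial judgment point sets.
    top = sorted(range(m), key=lambda r: row_votes[r], reverse=True)[:z]
    # C25: vectors between any two initial-judgment points; each vector
    # points from the smaller serial number to the larger one.
    pairs = list(combinations(top, 2))
    def vec(pts, i, j):
        return (pts[j][0] - pts[i][0], pts[j][1] - pts[i][1])
    ref_vecs = [vec(ref_pts, i, j) for i, j in pairs]
    test_vecs = [vec(test_pts, i, j) for i, j in pairs]
    def angle(u, v):
        # Included angle between two vectors, in [0, pi].
        return abs(math.atan2(u[0] * v[1] - u[1] * v[0],
                              u[0] * v[0] + u[1] * v[1]))
    # C26-C27: tally of voting marks per row of the Y x Y angle voting matrix.
    y = len(pairs)
    vec_votes = [0] * y
    for a, b in combinations(range(y), 2):
        if abs(angle(ref_vecs[a], ref_vecs[b])
               - angle(test_vecs[a], test_vecs[b])) < ang_thresh:
            vec_votes[a] += 1
            vec_votes[b] += 1
    # C28: the most-voted vector yields the two optimal point pairs.
    best_vec = max(range(y), key=lambda k: vec_votes[k])
    return pairs[best_vec]
```

Restricting the angle vote to Z points keeps Y = C(Z, 2) small, which is the point of the constraint on Z stated above.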
Another scheme that may be adopted by the voting decision method is that the step C includes the following sub-steps:
C31. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C32. assigning serial numbers to the X vectors of the template vector set and constructing a vector included angle voting matrix of X rows and X columns, in which the off-diagonal elements are the voting elements;
C33. calculating the included angle between any two vectors in the template vector set and, correspondingly, the included angle between any two vectors in the test vector set, and judging whether the difference between a template-set included angle and the test-set included angle with the same serial numbers is smaller than a set vector included angle difference threshold; if so, setting as voting marks the two elements of the vector included angle voting matrix whose row and column numbers are the serial numbers of the two vectors; if not, setting those two elements as non-voting marks;
C34. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
The voting decision method is further derived based on the sub-steps C31 to C34, and the step C comprises the following sub-steps:
C41. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C42. assigning serial numbers to the X vectors of the template vector set and constructing a vector included angle voting matrix of X rows and X columns, in which the off-diagonal elements are the voting elements;
C43. calculating an included angle between any two vectors in a template vector set, correspondingly, calculating an included angle between any two vectors in a test vector set, and judging whether the difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements of the two vector serial numbers which are a row number and a column number to each other in the vector included angle voting matrix as voting marks; if not, setting two elements in the vector included angle voting matrix, which use the serial numbers of the two vectors as row numbers and column numbers of each other, as non-voting marks;
C44. selecting end points of W vectors represented by the row sequence numbers of W rows arranged in the front W positions of the voting mark number in the vector included angle voting matrix as a template image initial judgment point set, wherein the template image initial judgment point set forms a test image initial judgment point set at the end points of the W vectors corresponding to the test image rough matching point set, and W is a natural number not less than 2;
C45. setting serial numbers for N points of a template image initial judgment point set, wherein N is less than or equal to 2 xW, sequentially setting a point pair distance voting matrix of N rows and N columns, and elements of non-main diagonal lines in the matrix are voting elements;
C46. acquiring template initial judgment point distances between any point in a template image initial judgment point set comprising N points and other points and between any point in a test image initial judgment point set comprising N points and other points, and enabling each template initial judgment point distance to correspond to the test initial judgment point distance one by one according to the corresponding relation of the sequence numbers of the template image initial judgment point set and each point of the test image initial judgment point set;
C47. judging whether the difference value between the corresponding template initial judgment point distance and the test initial judgment point distance is smaller than a set initial judgment distance difference threshold value one by one, if so, setting two elements of the serial numbers of two points corresponding to the template initial judgment point distance as a row number and a column number in the point pair distance voting matrix as voting marks; if not, setting two elements in the point pair distance voting matrix, which take the serial numbers of the two points as row numbers and column numbers, as non-voting marks;
C48. selecting the points of the template image with the serial numbers of two rows with the number of the voting marks arranged in the first two bits, and forming two pairs of optimal points with the points corresponding to the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
Specifically, the directions of the vectors in the template vector set and the test vector set in the above substeps point from a point with a smaller sequence number to a point with a larger sequence number.
Specifically, in step D, let the coordinates of the two optimal points found in step C in the template image point set be (x_r0, y_r0) and (x_r1, y_r1), and the coordinates of the corresponding two optimal points in the test image point set be (x_t0, y_t0) and (x_t1, y_t1);
Then, the rotational-translational transformation matrix is
Figure GDA0003588502180000061
Wherein,
Figure GDA0003588502180000062
Figure GDA0003588502180000063
Figure GDA0003588502180000064
and,
when
Figure GDA0003588502180000065
holds,
Figure GDA0003588502180000066
and when
Figure GDA0003588502180000067
holds,
Figure GDA0003588502180000068
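The patent's exact expressions for step D are given in the figures above and are not reproduced here; as a reference point, the standard least-squares closed form for a rotation plus translation determined by two point pairs looks like this (an assumption about the intended math, not a transcription of the patent's formulas):

```python
import math

def transform_from_two_pairs(t0, t1, r0, r1):
    """Least-squares rotation + translation mapping test points (t0, t1)
    onto template points (r0, r1)."""
    dxt, dyt = t1[0] - t0[0], t1[1] - t0[1]
    dxr, dyr = r1[0] - r0[0], r1[1] - r0[1]
    # Rotation angle between segment t0->t1 and segment r0->r1.
    theta = math.atan2(dxt * dyr - dyt * dxr, dxt * dxr + dyt * dyr)
    c, s = math.cos(theta), math.sin(theta)
    # Translation that makes the segment midpoints coincide; for a fixed
    # rotation this is the least-squares choice over the two pairs.
    mxt, myt = (t0[0] + t1[0]) / 2, (t0[1] + t1[1]) / 2
    mxr, myr = (r0[0] + r1[0]) / 2, (r0[1] + r1[1]) / 2
    tx = mxr - (c * mxt - s * myt)
    ty = myr - (s * mxt + c * myt)
    # 3x3 homogeneous rotation-translation matrix H.
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]
```

For exactly two pairs this coincides with the general least-squares rigid fit: the rotation aligns the centered points and the translation matches the centroids.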
the specific scheme of the step E for forming the consistent set and judging the image matching is that the step E comprises the following sub-steps,
E1. the following steps are performed one by one for all points in the set of coarse matching points of the test image,
E11. mapping the coordinates of the current point to the coordinates of the mapping point through a rotation translation transformation matrix,
calculating the sum of the absolute values of the deviations of the coordinates of the corresponding points of the current point in the rough matching point set of the template image and the coordinates of the mapping points as a point pair deviation value,
E12. if the point pair deviation value is smaller than the set consistency set threshold value, adding the point and a point in the template image rough matching point set corresponding to the point into a consistency set as a pair of point pairs; otherwise, not adding the consistent set;
E2. for the consistent set obtained through the substep E1, if the number of the point pairs in the consistent set is greater than the set threshold value of the number of the point pairs, performing a step F; otherwise, go to step J.
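Sub-steps E1 to E2 reduce to a single filtering pass; the sketch below uses the |dx| + |dy| deviation of sub-step E11 and a placeholder consensus threshold:

```python
def build_consensus(h, test_pts, ref_pts, consensus_thresh=2.0):
    """Sub-step E1: collect the point pairs consistent with H."""
    consensus = []
    for t, r in zip(test_pts, ref_pts):
        # E11: map the current test point through the rotation-translation H.
        mx = h[0][0] * t[0] + h[0][1] * t[1] + h[0][2]
        my = h[1][0] * t[0] + h[1][1] * t[1] + h[1][2]
        # E12: keep the pair if the sum of absolute coordinate deviations
        # between the mapped point and its template point is under threshold.
        if abs(r[0] - mx) + abs(r[1] - my) < consensus_thresh:
            consensus.append((t, r))
    return consensus
```

Step E2 then simply compares `len(consensus)` against the set point pair number threshold to choose between step F and step J.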
To obtain the updated rotation-translation transformation matrix that realizes image stitching, let the coordinates of the points of the consensus set that belong to the test image coarse matching point set be (x_t0, y_t0), (x_t1, y_t1), ..., (x_tN, y_tN); the coordinates of their corresponding points in the template image coarse matching point set are then (x_r0, y_r0), (x_r1, y_r1), ..., (x_rN, y_rN); that is, the consensus set contains N + 1 matching point pairs;
the updated roto-translational transformation matrix in step G is then
Figure GDA0003588502180000071
Wherein,
Figure GDA0003588502180000072
Figure GDA0003588502180000073
Figure GDA0003588502180000074
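Again the patent's expressions for step G are given in figures; the standard least-squares rotation-translation fit over all N + 1 consensus pairs (the two-dimensional Procrustes closed form) can be sketched as follows, as an assumption about the intended math:

```python
import math

def fit_transform(test_pts, ref_pts):
    """Least-squares rotation + translation over all consensus point pairs."""
    n = len(test_pts)
    # Centroids of the two point sets.
    mtx = sum(p[0] for p in test_pts) / n
    mty = sum(p[1] for p in test_pts) / n
    mrx = sum(p[0] for p in ref_pts) / n
    mry = sum(p[1] for p in ref_pts) / n
    # Accumulate cross and dot products of the centered coordinates.
    sc = sd = 0.0
    for (px, py), (rx, ry) in zip(test_pts, ref_pts):
        ax, ay = px - mtx, py - mty
        bx, by = rx - mrx, ry - mry
        sc += ax * by - ay * bx   # cross terms -> sine component
        sd += ax * bx + ay * by   # dot terms  -> cosine component
    theta = math.atan2(sc, sd)
    c, s = math.cos(theta), math.sin(theta)
    # Translation that maps the test centroid onto the template centroid.
    tx = mrx - (c * mtx - s * mty)
    ty = mry - (s * mtx + c * mty)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]
```

Because every consensus pair contributes, this updated matrix is more accurate than the two-pair estimate of step D, which is what makes the stitching of step I reliable.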
compared with the prior art, the technical effect of the image identification random sampling consistent algorithm based on voting decision and least square method of the invention is as follows:
the rotational translation transformation matrix calculated by the method is accurate, high in speed and stable, and the calculation result of each time cannot fluctuate randomly; the invention not only optimizes the image comparison method, but also is beneficial to the subsequent learning function of the data processing chip, namely, the characteristic information of the template image is continuously increased, thereby effectively improving the false rejection rate and enabling the data processing chip to be more intelligent; the invention also realizes image splicing more efficiently and accurately through the continuously updated rotation translation transformation matrix.
Drawings
FIG. 1 is a schematic flow chart of a preferred embodiment of the image recognition random sampling consensus algorithm based on voting decision and least square method in the invention.
Detailed Description
The following is a further detailed description of preferred embodiments with reference to the accompanying drawings.
The invention provides an image recognition random sample consensus algorithm based on voting decision and the least square method; the preferred embodiment explains the scheme using a fingerprint recognition flow for device unlocking as its example. The image recognition RANSAC algorithm of the present invention runs on hardware comprising a memory, a data processor, and an input device. Referring to step 101 in fig. 1, fingerprint templates capable of unlocking the hardware device are pre-recorded in the memory; a fingerprint is then input from the input device and compared against the fingerprint templates one by one using the RANSAC algorithm detailed below. When the fingerprint is judged to match a fingerprint template, the hardware device is unlocked; otherwise the fingerprint image under test cannot unlock it. A fingerprint template is a stored image template, and recording it stores the image feature points of a template image (Reference); the fingerprint image used in a given comparison is the template image Reference, one of the stored image templates. As shown in step 102 of fig. 1, the fingerprint input from the input device is an image serving as the Test image, and the Test image is represented as data by its feature points. Whether the Test image matches the template Reference image is decided by comparing the Test image data with the template image Reference data, i.e. whether the input fingerprint matches a fingerprint image in the fingerprint templates is decided by comparing the two.
The Test image data and the template Reference image data are processed as follows:
A. as shown in steps 101 and 102 of fig. 1, feature points are selected from the fingerprint image as an input of the Test image to form a Test image feature point set, and feature points are selected from the template Reference image to form a template Reference image feature point set.
As shown in step 103 of fig. 1, finding out coarse matching points in the template Reference image feature point set for each feature point of the Test image, so that two corresponding coarse matching points belonging to the Test image feature point set and the template Reference image feature point set become coarse matching point pairs;
the preferred embodiment of the invention uses the feature from accessed Segment Test high-speed feature detection algorithm, called FAST algorithm for short, to extract feature points and find out the x and y coordinates and main direction of each feature point and the 128-bit and 16-byte ultrashort binary descriptor, called USB descriptor for short. The method for solving the characteristic points, the coordinate main direction of the characteristic points and the USB descriptor is well planned by a program. For each feature point in the Test image, a corresponding matching point is searched in the template Reference image feature point set, that is, each feature point in the Test image and all feature points in the template Reference image respectively carry out 128-bit USB descriptor discrimination Hamming distance search on the feature points and the matching points. And screening out the final matching point pairs according to a set program. However, the matching point pair obtained in the above process is a coarse matching point pair, some mismatching points exist, and the correctness of the matching point pair cannot be predicted, so it is necessary to perform fine matching and false and true removing by the random sample consensus RANSAC algorithm for image recognition provided by the present invention.
As shown in fig. 1, if the number of the rough matching point pairs is greater than the set rough matching threshold, performing step B; otherwise, go to step J.
In the preferred embodiment of the invention, the coarse matching threshold is set at 20 pairs: if the number of coarse matching point pairs is less than 20, the two images are considered not to match and fingerprint unlocking fails; if it exceeds 20 pairs, the subsequent steps continue and the coarse matching point pairs are screened to remove the false and keep the true. Typically, the number of coarse matching point pairs between two 88 × 88-pixel images taken from the same finger can reach 150 to 500 or more.
B. As shown in step 104 in fig. 1, the coarse matching points in the coarse matching point pairs that belong to the Test image feature point set constitute the test image coarse matching point set testset, and the coarse matching points that belong to the template Reference image feature point set constitute the template image coarse matching point set refset.
For two images of 88 × 88 pixels whose refset and testset come from the same finger, the number of coarse matching point pairs entering the subsequent steps can often reach 150 to 500 or even more. The coordinates and main direction (x, y, θ) of each coarse matching point are already known from the extraction stage.
C. As shown in step 105 of fig. 1, the voting decision method is used to find the best points with the highest votes and corresponding to each other in the coarse matching point set of the test image and the coarse matching point set of the template image, and form the best point pairs. The point pairs are composed of points in the test image rough matching point set and points in the template image rough matching point set.
The voting decision method can adopt a distance voting method, a vector angle voting method, or a combination of distance voting and vector angle voting. When the two are combined, the order of distance voting and vector angle voting is in principle not limited. The preferred embodiment of the present invention combines them by performing distance voting first and then vector angle voting, and describes both in detail below; using either method alone, or the variant of performing vector angle voting first and then distance voting and linking the two, will be apparent to those skilled in the art from this detailed description.
And (C) adopting a distance voting method, wherein the step C comprises the following sub-steps:
C11. setting serial numbers for the M points of the template image coarse matching point set, and sequentially setting a point pair distance voting matrix of M rows and M columns, wherein the elements off the main diagonal in the matrix are voting elements;
C12. acquiring the template rough matching point distance between any point in the template image rough matching point set and other points and the test rough matching point distance between any point in the test image rough matching point set and other points, and enabling the template rough matching point distances and the test rough matching point distances to be in one-to-one correspondence according to the serial number corresponding relation of the template image rough matching point set and each rough matching point of the test image rough matching point set;
C13. judging one by one whether the difference value between the corresponding template rough matching point distance and test rough matching point distance is smaller than a set rough matching distance difference threshold; if so, setting the two elements in the point pair distance voting matrix whose row number and column number are the serial numbers of the two points corresponding to the template rough matching point distance as voting marks; if not, setting the two elements in the point pair distance voting matrix that take the serial numbers of the two points as row number and column number as non-voting marks;
C14. selecting the points in the template image coarse matching point set corresponding to the serial numbers of the two rows whose numbers of voting marks rank in the top two positions, and forming two pairs of best points with the corresponding points in the test image coarse matching point set;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
In the preferred embodiment of the invention, the importance ptImportance of each point pair between the template image coarse matching point set (refset for short) and the test image coarse matching point set testset input to the image recognition random sample consensus RANSAC algorithm is found, and the importance is obtained by a voting decision method. For each point pair, the Euclidean distances to all other point pairs are judged in turn, each time checking whether the Euclidean distances of the spatial positions of the four points are consistent: if the difference between the Euclidean distance of the two points in refset and the Euclidean distance of the two points in testset is smaller than a set rough matching distance difference threshold ransac_Dist, a vote is cast; in the preferred embodiment of the invention, the rough matching distance difference threshold ransac_Dist is set to 2. The "voting" is performed on a constructed matrix that measures importance according to distance, hereinafter referred to as the D matrix: the corresponding position in the voting matrix is labeled 1, and the row number and column number of that position are the indices of the two point pairs respectively.
For example: the distance between the ith and jth points in refset is Distance1, and the distance between the ith and jth points in testset is Distance2; if the absolute difference between Distance1 and Distance2 is less than the threshold 2, the element in row i, column j of the D matrix is set to 1, otherwise it is set to 0. After traversing each point pair, the D (distance) matrix is obtained, and the number of 1s in the ith row of the D matrix is denoted ptImportance[i], representing the importance of the ith point pair.
The distribution of the D matrix thus obtained in the preferred embodiment of the present invention may be as follows, where n is the number of matching point pairs between the point sets refset and testset input to the image recognition random sample consensus RANSAC algorithm; the matrix is symmetric about the main diagonal, and the main-diagonal elements take no values. In the preferred embodiment of the present invention, each matrix element value is 0 or 1, meaning that the voting flag is 1 and the non-voting flag is 0.
(The original drawings here show an example of the D matrix, with its rows and columns indexed by point pair serial number.)
The larger the number of 1s in the ith row of the matrix, i.e., the larger the value of ptImportance[i], the more point pairs vote for the ith point pair, the more important it is considered to be, and the more likely it is to be a correct point pair; in other words, the more voters a point pair has, the more important it is and the more likely it is correct. This is the distance voting method, i.e., the scheme of sub-steps C11 to C14 above. The two template-image points corresponding to the serial numbers of the two rows whose ptImportance[i] values rank in the top two positions, together with the two corresponding test-image points, form the two pairs of best points.
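The D-matrix construction and the ptImportance tally of sub-steps C11 to C14 can be sketched as follows (illustrative only; point sets are plain lists of (x, y) coordinates and the function name is ours, with ransac_Dist = 2 as in the preferred embodiment):

```python
import math


def distance_voting(refset, testset, ransac_dist=2.0):
    """Build the D matrix and the ptImportance array by distance voting.

    refset and testset are equal-length lists of (x, y) coordinates of
    corresponding coarse matching points.  D[i][j] = 1 (voting flag) when
    the distance between points i and j in refset and the distance between
    points i and j in testset differ by less than ransac_dist.
    """
    n = len(refset)
    D = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # main-diagonal elements take no value
            d_ref = math.dist(refset[i], refset[j])
            d_test = math.dist(testset[i], testset[j])
            if abs(d_ref - d_test) < ransac_dist:
                D[i][j] = 1
    pt_importance = [sum(row) for row in D]  # votes received by each pair
    return D, pt_importance
```

A pure translation between the two images preserves all pairwise distances, so every off-diagonal element votes; a mismatched point loses the votes in its row.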
However, in the actual fingerprint identification process the importance of the point pairs, i.e., the ptImportance values, does not show a clear bipolar differentiation; considering factors such as rotation of the feature points of fingerprint images, judging importance by Euclidean distance alone, i.e., by the distance voting method alone, is not sufficient. An A (angle) matrix therefore needs to be added on this basis: an importance matrix measured according to the included angle between two vectors, i.e., the vector angle voting method is added.
And C, combining distance voting and angle voting, wherein the step C comprises the following sub-steps:
C21. setting serial numbers for M points of a template image coarse matching point set, sequentially setting a point pair distance voting matrix of M rows and M columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C22. acquiring the template rough matching point distance between any point in the template image rough matching point set and other points and the test rough matching point distance between any point in the test image rough matching point set and other points, and enabling the template rough matching point distances and the test rough matching point distances to be in one-to-one correspondence according to the serial number corresponding relation of the template image rough matching point set and each rough matching point of the test image rough matching point set;
C23. judging one by one whether the difference value between the corresponding template rough matching point distance and test rough matching point distance is smaller than a set rough matching distance difference threshold; if so, setting the two elements in the point pair distance voting matrix whose row number and column number are the serial numbers of the two points corresponding to the template rough matching point distance as voting marks; if not, setting the two elements in the point pair distance voting matrix that take the serial numbers of the two points as row number and column number as non-voting marks;
C24. selecting Z points in a template image rough matching point set represented by the row sequence number of the Z rows with the voting mark number arranged at the front Z position from the point pair distance voting matrix to form a template image initial judgment point set, wherein the template image initial judgment point set forms a test image initial judgment point set at the Z points corresponding to the test image rough matching point set, and Z is a natural number not less than 2;
C25. acquiring vectors between any two points in a template image initial judgment point set to form a template vector set with Y vectors, wherein Y is the combination number of the two points taken from Z points, correspondingly acquiring the vectors between any two points in a test image initial judgment point set to form a test vector set with Y vectors, and enabling the template vectors to correspond to the test vectors one by one according to the corresponding relation of the points between the template image initial judgment point set and the test image initial judgment point set;
C26. setting serial numbers for Y vectors of a template vector set, and sequentially setting and constructing a vector included angle voting matrix of Y rows and Y columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C27. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector voting matrix, which use the two vector sequence numbers as a row number and a column number of each other, as a non-voting mark;
C28. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
The processes of sub-steps C21 to C24 constitute the distance voting method, and the processes of sub-steps C25 to C28 constitute the vector angle voting method.
The method of finding the point pairs with larger ptImportance[i] by using the D matrix, i.e., the distance voting method, has been described in detail above. On this basis, the 10 point pairs whose rows of the D matrix contain the most voting flags 1, i.e., the largest ptImportance[i] values, are found, and their importance is further judged using the A matrix; that is, Z equals 10 in steps C21 to C28. Z is a natural number not less than 2 and not more than one tenth of the number of points in the template image coarse matching point set. The 10 points with the largest ptImportance[i] values are taken out and reconstructed into two point sets: the template image initial judgment point set refset2 containing 10 points and the test image initial judgment point set testset2 containing 10 points. Every two of the 10 points in refset2 form a vector, the direction of the vector being defined as pointing from the point with the smaller index to the point with the larger index, giving 45 vectors, i.e., the number of combinations of 2 taken from 10; that is, Y in steps C21 to C28 is 45. Likewise, testset2 yields the corresponding 45 vectors, one for each vector of refset2. For each pair of vectors, angle judgments against all other pairs of vectors are performed in turn, each time judging whether the angular relationships of the spatial positions of the four vectors are consistent: if the difference between the angle between two vectors in refset2 and the angle between the two corresponding vectors in testset2 is smaller than a set vector included angle difference threshold ransac_angle, a vote is cast; in the preferred embodiment of the present invention, the vector included angle difference threshold ransac_angle is set to 2 degrees.
A vector included angle voting matrix, i.e., the A matrix, is constructed to measure importance according to angle; the row number and column number of the A matrix are the sequence indices of the two pairs of vectors respectively. For example: the angle between the ith vector and the jth vector in refset2 is angle1, and the angle between the ith vector and the jth vector in testset2 is angle2; if the absolute difference between angle1 and angle2 is less than the vector included angle difference threshold of 2 (degrees), the element in row i, column j of the A matrix is set to 1, otherwise it is set to 0. Traversing each pair of vectors yields the A matrix, and the number of 1s in the ith row of the A matrix is denoted vectorImportance[i], representing the importance of the ith pair of vectors.
The distribution of the A matrix thus obtained may be as follows, where n = 45 is the number of vector pairs constructed as described above; the matrix is symmetric about the main diagonal, and the main-diagonal elements take no values. In the preferred embodiment of the present invention, each matrix element value is 0 or 1, meaning that the voting flag is 1 and the non-voting flag is 0.
(The original drawings here show an example of the A matrix, with its rows and columns indexed by vector pair serial number.)
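The A-matrix construction can be sketched in the same style (illustrative only; vectors are (dx, dy) tuples, angles are compared in degrees, and ransac_angle = 2 degrees as in the preferred embodiment):

```python
import math


def angle_voting(ref_vectors, test_vectors, ransac_angle=2.0):
    """Build the A matrix and the vectorImportance array by angle voting.

    ref_vectors and test_vectors are equal-length lists of (dx, dy) vectors.
    A[i][j] = 1 when the angle between vectors i and j in the reference set
    and the angle between vectors i and j in the test set differ by less
    than ransac_angle degrees.
    """
    def angle_between(u, v):
        # unsigned angle between two vectors, in degrees (0..180)
        dot = u[0] * v[0] + u[1] * v[1]
        cross = u[0] * v[1] - u[1] * v[0]
        return abs(math.degrees(math.atan2(cross, dot)))

    n = len(ref_vectors)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # main-diagonal elements take no value
            a1 = angle_between(ref_vectors[i], ref_vectors[j])
            a2 = angle_between(test_vectors[i], test_vectors[j])
            if abs(a1 - a2) < ransac_angle:
                A[i][j] = 1
    vector_importance = [sum(row) for row in A]
    return A, vector_importance
```

Because a rigid rotation of the whole fingerprint preserves the angle between any two vectors, correctly matched vector pairs keep their votes even when the two images are rotated relative to each other, which is exactly the weakness of distance voting alone.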
Finding the pair of vectors corresponding to the largest element vectorImportance[i] in the vectorImportance array (that is, the corresponding two points in refset2 and two points in testset2), the H matrix calculated from these two point pairs is the theoretical optimal rotation-translation transformation matrix, i.e.,

H = | cosθ  -sinθ  dx |
    | sinθ   cosθ  dy |
    |  0      0     1 |
Obviously, the voting decision method can also adopt a vector angle voting method alone, and a method of voting at a vector angle first and then voting at a distance.
The voting decision method independently adopts a vector angle voting method, and the step C comprises the following sub-steps:
C31. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C32. setting serial numbers for X vectors of a template vector set, and sequentially setting and constructing vector angle voting matrixes of X rows and X columns, wherein elements of non-main diagonal lines in the matrixes are voting elements;
C33. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector included angle voting matrix, which use the serial numbers of the two vectors as row numbers and column numbers of each other, as non-voting marks;
C34. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
The voting decision method adopts a method of voting at a vector included angle first and then voting at a distance, and the step C comprises the following sub-steps:
C41. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C42. setting serial numbers for X vectors of a template vector set, and sequentially setting and constructing vector angle voting matrixes of X rows and X columns, wherein elements of non-main diagonal lines in the matrixes are voting elements;
C43. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector included angle voting matrix, which use the serial numbers of the two vectors as row numbers and column numbers of each other, as non-voting marks;
C44. selecting end points of W vectors represented by the row sequence numbers of W rows arranged in the front W positions of the voting mark number in the vector included angle voting matrix as a template image initial judgment point set, wherein the template image initial judgment point set forms a test image initial judgment point set at the end points of the W vectors corresponding to the test image rough matching point set, and W is a natural number not less than 2;
C45. setting a sequence number for N points of a template image initial judgment point set, wherein N is less than or equal to 2 xW because the end points of each vector are possibly coincident, sequentially setting a point pair distance voting matrix of N rows and N columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C46. acquiring template initial judgment point distances between any point in a template image initial judgment point set comprising N points and other points and between any point in a test image initial judgment point set comprising N points and other points, and enabling each template initial judgment point distance to correspond to the test initial judgment point distance one by one according to the corresponding relation of the sequence numbers of the template image initial judgment point set and each point of the test image initial judgment point set;
C47. judging whether the difference value between the corresponding template initial judgment point distance and the test initial judgment point distance is smaller than a set initial judgment distance difference threshold value one by one, if so, setting two elements of the serial numbers of two points corresponding to the template initial judgment point distance as a row number and a column number in the point pair distance voting matrix as voting marks; if not, setting two elements in the point pair distance voting matrix, which take the serial numbers of the two points as row numbers and column numbers, as non-voting marks;
C48. selecting the points of the template image corresponding to the serial numbers of the two rows whose numbers of voting marks rank in the top two positions, and forming two pairs of best points with the corresponding points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
The directions of the vectors in the template vector set and the test vector set are from the point with the smaller sequence number to the point with the larger sequence number.
After finding the optimal point pair through voting decision, obtaining a rotation-translation transformation matrix H through the optimal point pair, namely step D:
D. and calculating a rotation translation transformation matrix by using the optimal point pairs corresponding to the test image point set and the template image point set based on a least square method principle.
In the preferred embodiment of the present invention, the H matrix is calculated using the two best point pairs found, i.e., the coordinates of the four points: (x_r0, y_r0), (x_r1, y_r1), (x_t0, y_t0) and (x_t1, y_t1). The rotation-translation transformation matrix H is calculated using the least squares principle, where the 3 unknowns of the H matrix are to be solved: the rotation angle θ and the translation amounts dx and dy. The two points (x_t0, y_t0) and (x_t1, y_t1) in testset are rotated and translated by the H matrix to obtain the corresponding mapping points (x_t0', y_t0') and (x_t1', y_t1'), where

x_ti' = x_ti·cosθ - y_ti·sinθ + dx
y_ti' = x_ti·sinθ + y_ti·cosθ + dy    (i = 0, 1).

In order to make the deviation between the two mapping points (x_t0', y_t0'), (x_t1', y_t1') after H-matrix rotation-translation and the two points (x_r0, y_r0), (x_r1, y_r1) in refset minimum, i.e., to minimize

error = (x_t0·cosθ - y_t0·sinθ + dx - x_r0)^2 + (x_t0·sinθ + y_t0·cosθ + dy - y_r0)^2 + (x_t1·cosθ - y_t1·sinθ + dx - x_r1)^2 + (x_t1·sinθ + y_t1·cosθ + dy - y_r1)^2,

the partial derivatives of error with respect to the rotation angle θ and the translation amounts dx, dy should all be made zero.
From ∂error/∂dx = 0, it follows that

2·(x_t0·cosθ - y_t0·sinθ + dx - x_r0) + 2·(x_t1·cosθ - y_t1·sinθ + dx - x_r1) = 0,

which solves to

dx = [(x_r0 + x_r1) - (x_t0 + x_t1)·cosθ + (y_t0 + y_t1)·sinθ] / 2.

From ∂error/∂dy = 0, it follows that

2·(x_t0·sinθ + y_t0·cosθ + dy - y_r0) + 2·(x_t1·sinθ + y_t1·cosθ + dy - y_r1) = 0,

which solves to

dy = [(y_r0 + y_r1) - (x_t0 + x_t1)·sinθ - (y_t0 + y_t1)·cosθ] / 2.

From ∂error/∂θ = 0, it follows that

(x_t0·cosθ - y_t0·sinθ + dx - x_r0)·(-x_t0·sinθ - y_t0·cosθ) + (x_t0·sinθ + y_t0·cosθ + dy - y_r0)·(x_t0·cosθ - y_t0·sinθ) + (x_t1·cosθ - y_t1·sinθ + dx - x_r1)·(-x_t1·sinθ - y_t1·cosθ) + (x_t1·sinθ + y_t1·cosθ + dy - y_r1)·(x_t1·cosθ - y_t1·sinθ) = 0.

Substituting the expressions for dx and dy,

(2x_t0·cosθ - 2y_t0·sinθ - x_t0·cosθ + y_t0·sinθ + x_r1 - x_t1·cosθ + y_t1·sinθ - x_r0)·(-x_t0·sinθ - y_t0·cosθ) +
+ (2x_t0·sinθ + 2y_t0·cosθ - x_t0·sinθ - y_t0·cosθ + y_r1 - x_t1·sinθ - y_t1·cosθ - y_r0)·(x_t0·cosθ - y_t0·sinθ) +
+ (2x_t1·cosθ - 2y_t1·sinθ + x_r0 - x_t0·cosθ + y_t0·sinθ - x_t1·cosθ + y_t1·sinθ - x_r1)·(-x_t1·sinθ - y_t1·cosθ) +
+ (2x_t1·sinθ + 2y_t1·cosθ + y_r0 - x_t0·sinθ - y_t0·cosθ - x_t1·sinθ - y_t1·cosθ - y_r1)·(x_t1·cosθ - y_t1·sinθ) = 0,

which by reduction gives

[sinθ·(y_t1 - y_t0) + cosθ·(x_t0 - x_t1) + (x_r1 - x_r0)]·(-x_t0·sinθ - y_t0·cosθ) +
+ [sinθ·(x_t0 - x_t1) + cosθ·(y_t0 - y_t1) + (y_r1 - y_r0)]·(x_t0·cosθ - y_t0·sinθ) +
+ [sinθ·(y_t0 - y_t1) + cosθ·(x_t1 - x_t0) + (x_r0 - x_r1)]·(-x_t1·sinθ - y_t1·cosθ) +
+ [sinθ·(x_t1 - x_t0) + cosθ·(y_t1 - y_t0) + (y_r0 - y_r1)]·(x_t1·cosθ - y_t1·sinθ) = 0.

Merging, simplifying and using sin^2θ + cos^2θ = 1,

(x_r1 - x_r0)·(-x_t0·sinθ - y_t0·cosθ) + (y_r1 - y_r0)·(x_t0·cosθ - y_t0·sinθ) +
+ (x_r0 - x_r1)·(-x_t1·sinθ - y_t1·cosθ) + (y_r0 - y_r1)·(x_t1·cosθ - y_t1·sinθ) = 0,

which is finished to obtain

tanθ = [(y_r1 - y_r0)·(x_t1 - x_t0) - (x_r1 - x_r0)·(y_t1 - y_t0)] / [(x_r1 - x_r0)·(x_t1 - x_t0) + (y_r1 - y_r0)·(y_t1 - y_t0)],

namely,

θ = arctan{[(y_r1 - y_r0)·(x_t1 - x_t0) - (x_r1 - x_r0)·(y_t1 - y_t0)] / [(x_r1 - x_r0)·(x_t1 - x_t0) + (y_r1 - y_r0)·(y_t1 - y_t0)]}.

The special cases are:

① when (x_r1 - x_r0)·(x_t1 - x_t0) + (y_r1 - y_r0)·(y_t1 - y_t0) = 0 and the numerator is positive, θ = π/2;

② when (x_r1 - x_r0)·(x_t1 - x_t0) + (y_r1 - y_r0)·(y_t1 - y_t0) = 0 and the numerator is negative, θ = -π/2.
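The closed-form solution for θ, dx and dy from the two best point pairs, as derived above, can be sketched as follows (an illustrative sketch of the derivation; math.atan2 is used so that the quadrant, including the special cases where the denominator is zero, is resolved automatically):

```python
import math


def solve_h_two_pairs(ref_pts, test_pts):
    """Least-squares rotation angle and translation from two point pairs.

    ref_pts  = [(x_r0, y_r0), (x_r1, y_r1)]
    test_pts = [(x_t0, y_t0), (x_t1, y_t1)]
    Returns (theta, dx, dy) such that mapping each test point by
    (x·cosθ - y·sinθ + dx, x·sinθ + y·cosθ + dy) best matches ref_pts.
    """
    (xr0, yr0), (xr1, yr1) = ref_pts
    (xt0, yt0), (xt1, yt1) = test_pts
    # tanθ = numerator / denominator, as derived above
    num = (yr1 - yr0) * (xt1 - xt0) - (xr1 - xr0) * (yt1 - yt0)
    den = (xr1 - xr0) * (xt1 - xt0) + (yr1 - yr0) * (yt1 - yt0)
    theta = math.atan2(num, den)
    dx = ((xr0 + xr1) - (xt0 + xt1) * math.cos(theta)
          + (yt0 + yt1) * math.sin(theta)) / 2.0
    dy = ((yr0 + yr1) - (xt0 + xt1) * math.sin(theta)
          - (yt0 + yt1) * math.cos(theta)) / 2.0
    return theta, dx, dy
```

Applying a known rotation and translation to two test points and solving recovers θ, dx and dy exactly, which is a convenient sanity check on the closed form.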
The acquisition of the roto-translational transformation matrix enables the formation of a consistent set by the following step E.
E. As shown in step 106 of fig. 1, screening point pairs from the coarse matching point set of the test image and the coarse matching point set of the template image by using a rotation-translation transformation matrix to form a consistent set;
if the number of the point pairs in the consistent set is not less than the set threshold value of the number of the consistent point pairs, performing the step F; otherwise, performing the step J;
in the preferred embodiment of the present invention, the specific scheme for constructing the consistent set and determining the image matching in step E is that step E comprises the following sub-steps,
E1. the following steps are performed one by one for all points in the set of coarse matching points of the test image,
E11. mapping the coordinates of the current point to the coordinates of the mapping point through a rotation translation transformation matrix,
calculating the sum of the absolute values of the deviations of the coordinates of the corresponding points of the current point in the rough matching point set of the template image and the coordinates of the mapping points as a point pair deviation value,
E12. if the point pair deviation value is smaller than the set consistency set threshold value, adding the point and a point in the template image rough matching point set corresponding to the point into a consistency set as a pair of point pairs; otherwise, not adding the consistent set;
after all points in the set of coarse match points for the test image have passed through substeps E11 through E12, a consistent set is obtained for substep E2,
E2. for the consistent set obtained through the substep E1, if the number of the point pairs in the consistent set is greater than the set threshold value of the number of the point pairs, performing a step F; otherwise, go to step J.
In the preferred embodiment of the present invention, a consistent set (consensus set) is found according to the calculated rotation-translation transformation H matrix. For each point pair already matched between refset and testset, i.e., each point pair obtained by rough matching, the point on testset is mapped using the H matrix, and the sum of the absolute values of the deviations between the mapped coordinates and the coordinates of the corresponding point on refset is obtained as

err = |x_ri - x_ti·cosθ + y_ti·sinθ - dx| + |y_ri - x_ti·sinθ - y_ti·cosθ - dy|.
A consistency set threshold is set to 4, i.e. the ith point pair is added to the consistency set when err of the ith point pair is < 4.
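Sub-steps E11 and E12 can be sketched as follows (illustrative only; the err formula and the threshold of 4 are those of the preferred embodiment, while the function name is ours):

```python
import math


def build_consensus_set(refset, testset, theta, dx, dy, err_threshold=4.0):
    """Screen coarse matching point pairs through the H matrix.

    A pair joins the consistent set when the sum of absolute coordinate
    deviations between the mapped test point and its reference point,
    err = |x_r - (x_t·cosθ - y_t·sinθ + dx)| + |y_r - (x_t·sinθ + y_t·cosθ + dy)|,
    is below err_threshold.
    """
    c, s = math.cos(theta), math.sin(theta)
    consensus = []
    for (xr, yr), (xt, yt) in zip(refset, testset):
        err = abs(xr - (xt * c - yt * s + dx)) + abs(yr - (xt * s + yt * c + dy))
        if err < err_threshold:
            consensus.append(((xr, yr), (xt, yt)))
    return consensus
```

Pairs consistent with the estimated rotation-translation survive; mismatched pairs map far from their reference points and are discarded.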
Whether the images are matched can be judged through the acquired consistent set, and whether the fingerprints are matched can be judged for the preferred embodiment of the invention, namely the following steps F,
F. in step 107 shown in FIG. 1, it is determined that the test image matches the template image.
The end point of the image recognition random sample consensus RANSAC algorithm of the invention is the following step J,
J. the algorithm ends, as shown in step 110 of fig. 1.
The image recognition random sample consensus RANSAC algorithm of the present invention completes the removal of false coarse matching points and retention of true ones, i.e., the image comparison and discrimination process. As an extension, in order to implement image stitching, the accuracy of the rotation-translation transformation matrix is increased: an updated rotation-translation transformation matrix, i.e., the H′ matrix, is recomputed from all the point pairs in the consistent set. The updated rotation-translation transformation matrix further improves the image comparison precision, implements image stitching, and expands the template image range. Thus, as shown in FIG. 1, the following steps may be added between step F and step J above.
G. the updated roto-translational transformation matrix is computed using the point pairs in the consistent set, step 108 shown in fig. 1.
The preferred embodiment of the present invention recalculates the updated rotation-translation transformation matrix, i.e., the H′ matrix, using all point pairs in the consistent set. Let the number of matching point pairs in the consistent set be N+1, i.e., the subscripts of the point pairs run from 0 to N.
In order to make the N+1 mapping points (x_ti', y_ti'), i = 0, 1, …, N, after H′-matrix rotation-translation match the N+1 matching points (x_r0, y_r0), (x_r1, y_r1), …, (x_rN, y_rN) in refset with minimum deviation, the error function, i.e., the sum of the squares of the Euclidean distances,

error = (x_t0·cosθ - y_t0·sinθ + dx - x_r0)^2 + (x_t0·sinθ + y_t0·cosθ + dy - y_r0)^2 + (x_t1·cosθ - y_t1·sinθ + dx - x_r1)^2 + (x_t1·sinθ + y_t1·cosθ + dy - y_r1)^2 + … + (x_tN·cosθ - y_tN·sinθ + dx - x_rN)^2 + (x_tN·sinθ + y_tN·cosθ + dy - y_rN)^2,

is minimized, so that the partial derivatives of error with respect to the rotation angle θ and the translation amounts dx, dy are all made zero.
From ∂error/∂dx = 0, it follows that

Σ_{i=0}^{N} 2·(x_ti·cosθ - y_ti·sinθ + dx - x_ri) = 0,

which solves to

dx' = [Σ_{i=0}^{N} x_ri - cosθ·Σ_{i=0}^{N} x_ti + sinθ·Σ_{i=0}^{N} y_ti] / (N+1).

From ∂error/∂dy = 0, it follows that

Σ_{i=0}^{N} 2·(x_ti·sinθ + y_ti·cosθ + dy - y_ri) = 0,

which solves to

dy' = [Σ_{i=0}^{N} y_ri - sinθ·Σ_{i=0}^{N} x_ti - cosθ·Σ_{i=0}^{N} y_ti] / (N+1).

From ∂error/∂θ = 0, it follows that

Σ_{i=0}^{N} [(x_ti·cosθ - y_ti·sinθ + dx - x_ri)·(-x_ti·sinθ - y_ti·cosθ) + (x_ti·sinθ + y_ti·cosθ + dy - y_ri)·(x_ti·cosθ - y_ti·sinθ)] = 0.
Substituting the expressions for dx' and dy', merging and simplifying using sin^2θ + cos^2θ = 1, collecting like terms, and multiplying both sides of the equation by the number N+1 of point pairs in the consistent set, the equation is sorted so that the polynomial containing the sin function is placed to the left of the equal sign and the polynomial containing the cos function to the right:

sinθ·[(N+1)·Σ_{i=0}^{N}(x_ti·x_ri + y_ti·y_ri) - Σ_{i=0}^{N}Σ_{k=0}^{N}(x_ti·x_rk + y_ti·y_rk)] = cosθ·[(N+1)·Σ_{i=0}^{N}(x_ti·y_ri - y_ti·x_ri) - Σ_{i=0}^{N}Σ_{k=0}^{N}(x_ti·y_rk - y_ti·x_rk)].

Let

A = (N+1)·Σ_{i=0}^{N}(x_ti·x_ri + y_ti·y_ri),
A1 = Σ_{i=0}^{N}Σ_{k=0}^{N}(x_ti·x_rk + y_ti·y_rk),
B = (N+1)·Σ_{i=0}^{N}(x_ti·y_ri - y_ti·x_ri),
B1 = Σ_{i=0}^{N}Σ_{k=0}^{N}(x_ti·y_rk - y_ti·x_rk),

so that the coefficient of the sin function to the left of the equal sign is A - A1 and the coefficient of the cos function to the right is B - B1. The coefficients A, A1, B, B1 of the sin and cos functions may be implemented using two for loops, where A1 and B1 accumulate within the inner for loop (k looping from 0 to N), and A and B accumulate in the outer for loop (i looping from 0 to N), outside the inner for loop. Then

tanθ' = (B - B1) / (A - A1),

namely,

θ' = arctan[(B - B1) / (A - A1)].
Combining the previously calculated translation amounts dx ' and dy ' to construct an updated rotational-translational transformation H ' matrix,
Figure GDA0003588502180000225
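The least-squares update above admits a short sketch in code. The following Python function is only an illustration of the derivation, not the patent's implementation; the function and variable names are invented here, and math.atan2 is used in place of arctan(B / A) so that all quadrants of the rotation angle are handled:

```python
import math

def update_transform(test_pts, template_pts):
    """Least-squares rotation angle and translation mapping each test point
    test_pts[i] onto its matching template point template_pts[i]."""
    n1 = len(test_pts)  # N + 1 point pairs in the consistent set
    sxt = sum(x for x, _ in test_pts)
    syt = sum(y for _, y in test_pts)
    sxr = sum(x for x, _ in template_pts)
    syr = sum(y for _, y in template_pts)
    # A: coefficient of sin(theta); B: coefficient of cos(theta)
    # in the stationarity condition sin(theta)*A = cos(theta)*B
    a = n1 * sum(xt * xr + yt * yr
                 for (xt, yt), (xr, yr) in zip(test_pts, template_pts))
    a -= sxt * sxr + syt * syr
    b = n1 * sum(xt * yr - yt * xr
                 for (xt, yt), (xr, yr) in zip(test_pts, template_pts))
    b -= sxt * syr - syt * sxr
    theta = math.atan2(b, a)  # quadrant-aware arctan(B / A)
    # translation amounts dx', dy' from the column sums
    dx = (sxr - math.cos(theta) * sxt + math.sin(theta) * syt) / n1
    dy = (syr - math.sin(theta) * sxt - math.cos(theta) * syt) / n1
    return theta, dx, dy
```

The returned θ', dx', dy' populate the updated H' matrix row by row: [cosθ', −sinθ', dx'; sinθ', cosθ', dy'; 0, 0, 1].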
The image stitching can be realized by using the updated rotation-translation transformation H' matrix, namely the following step I,
I. Mapping points in the test image feature point set into updated splicing feature points by means of the updated rotation-translation transformation matrix, and adding the splicing feature points into the template image feature point set.
In the preferred embodiment of the present invention, as shown in step 109 in fig. 1, the feature points in the fingerprint image to be tested are spliced into the template fingerprint image through the following rotation-translation transformation,

| x' |   | cosθ'  −sinθ'  dx' |   | x_t0 |
| y' | = | sinθ'   cosθ'  dy' | · | y_t0 |
| 1  |   |   0       0     1  |   |  1   |

where (x', y', 1)ᵀ is the coordinate of the point (x_t0, y_t0) after the rotation-translation transformation H', and is added to the template fingerprint image as a splicing feature point coordinate.

Points whose coordinates, after the feature points in the image to be tested are mapped through the updated rotation-translation transformation H' matrix, deviate from their matching points by less than a certain threshold are not spliced.
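A minimal splicing sketch under the same conventions may look as follows. All names here are illustrative; note also a simplification relative to the text above: instead of comparing each mapped point only against its own matching point, this sketch skips any mapped point that lands within the deviation threshold (sum of absolute coordinate deviations) of any current template point:

```python
import math

def splice(test_pts, template_pts, theta, dx, dy, threshold=2.0):
    """Map test-image feature points through the updated H' (theta, dx, dy)
    and append them to a copy of the template feature point set."""
    c, s = math.cos(theta), math.sin(theta)
    spliced = list(template_pts)
    for xt, yt in test_pts:
        # rotation-translation mapping: (x', y', 1) = H' * (xt, yt, 1)
        xm = xt * c - yt * s + dx
        ym = xt * s + yt * c + dy
        # points already covered by the template (deviation below the
        # threshold) are not spliced
        if any(abs(xm - xr) + abs(ym - yr) < threshold
               for xr, yr in spliced):
            continue
        spliced.append((xm, ym))
    return spliced
```

With an identity transform and threshold 0.5, a test point at (1, 0) is appended next to a template point at (0, 0), while a test point at (0.1, 0) is skipped as already covered.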
Most fingerprint identification chips currently required in the mobile phone market have a small sensing area, so the fingerprint information recorded in the template each time usually covers only a small part of the finger. After two images are matched, if the matching score is high, i.e., the number N + 1 of matching point pairs in the consistent set of the present algorithm is large, the two images can be considered to come from the same real finger, and the calculated updated rotation-translation transformation H' matrix can be used to splice them together into a larger fingerprint image, containing more feature points of the real finger, to serve as the template. Meanwhile, the invention also benefits the subsequent learning function of the fingerprint identification chip: the fingerprint information of the real finger is continuously enriched, and the latest state information of the finger is used to update and replace old information as the finger undergoes subtle changes between winter and summer, wet and dry, thereby effectively reducing the false rejection rate and making the chip more intelligent.
Experiments on the Android system show that: for up to five hundred rough matching point pairs, removing the false matches, keeping the true ones, and accurately solving the H matrix takes less than two milliseconds. Experiments also show that the rotation-translation transformation H matrix calculated by this random sample consensus (RANSAC) algorithm for image identification is accurate, fast, and stable, and the calculation result does not fluctuate randomly.

Claims (11)

1. An image recognition random sampling consistent algorithm based on voting decision and least square method, in which one of the stored image templates is used as a template image, an input image is used as a test image, and whether the test image matches the template image is discriminated through comparison of the feature point information of the test image with the feature point information of each template image; characterized in that the feature point information of the test image and the feature point information of each template image are processed by the following steps:
A. selecting characteristic points in the test image to form a test image characteristic point set, and selecting characteristic points in the template image to form a template image characteristic point set;
finding out coarse matching points for each feature point of the test image in the template image feature point set, so that two corresponding coarse matching points belonging to the test image feature point set and the template image feature point set become coarse matching point pairs;
if the number of the rough matching point pairs is larger than the set rough matching threshold, performing the step B; otherwise, performing the step J;
B. the rough matching points belonging to the test image feature point set in the rough matching point pair form a test image rough matching point set, and the rough matching points belonging to the template image feature point set form a template image rough matching point set;
C. finding out the best points which have the highest votes and are corresponding to each other in the coarse matching point set of the test image and the coarse matching point set of the template image by a voting decision method to form a best point pair;
D. calculating a rotation translation transformation matrix by using an optimal point pair consisting of the test image rough matching point set and the template image rough matching point set based on a least square method principle;
E. screening point pairs from the coarse matching point set of the test image and the coarse matching point set of the template image by means of a rotation translation transformation matrix to form a consistent set;
if the number of the consistent set inner point pairs is larger than the set consistent point pair number threshold, performing step F; otherwise, performing the step J;
F. judging that the test image is matched with the template image;
J. the algorithm ends.
2. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the following steps are added between the step F and the step J,
G. calculating an updated rotation translation transformation matrix by using point pairs in a consistent set based on a least square method principle;
I. mapping points in the test image feature point set into updated splicing feature points by means of the updated rotation translation transformation matrix, and adding the updated splicing feature points into the template image feature point set.
3. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the step C comprises the following sub-steps,
C11. setting serial numbers for M points of a template image coarse matching point set, sequentially setting a point pair distance voting matrix of M rows and M columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C12. acquiring the template rough matching point distance between any point in the template image rough matching point set and other points and the test rough matching point distance between any point in the test image rough matching point set and other points, and enabling the template rough matching point distances and the test rough matching point distances to be in one-to-one correspondence according to the serial number corresponding relation of the template image rough matching point set and each rough matching point of the test image rough matching point set;
C13. judging one by one whether the difference value between the corresponding template coarse matching point distance and the test coarse matching point distance is smaller than a set coarse matching distance difference threshold value; if so, setting the two elements in the point pair distance voting matrix whose row number and column number are the serial numbers of the two points corresponding to that template coarse matching point distance as voting marks; if not, setting the two elements in the point pair distance voting matrix which take the serial numbers of the two points as row number and column number as non-voting marks;
C14. selecting the points of the template image with the serial numbers of two rows with the number of the voting marks arranged in the first two bits, and forming two pairs of optimal points with the points corresponding to the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
4. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the step C comprises the following sub-steps,
C21. setting serial numbers for M points of a template image coarse matching point set, sequentially setting a point pair distance voting matrix of M rows and M columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C22. acquiring the template rough matching point distance between any point in the template image rough matching point set and other points and the test rough matching point distance between any point in the test image rough matching point set and other points, and enabling the template rough matching point distances and the test rough matching point distances to be in one-to-one correspondence according to the serial number corresponding relation of the template image rough matching point set and each rough matching point of the test image rough matching point set;
C23. judging one by one whether the difference value between the corresponding template coarse matching point distance and the test coarse matching point distance is smaller than a set coarse matching distance difference threshold value; if so, setting the two elements in the point pair distance voting matrix whose row number and column number are the serial numbers of the two points corresponding to that template coarse matching point distance as voting marks; if not, setting the two elements in the point pair distance voting matrix which take the serial numbers of the two points as row number and column number as non-voting marks;
C24. selecting Z points represented by the line sequence numbers of the Z rows arranged at the front Z position in the voting mark number from the point pair distance voting matrix to form a template image initial judgment point set, wherein the template image initial judgment point set forms a test image initial judgment point set at the Z points corresponding to the test image rough matching point set, and Z is a natural number not less than 2;
C25. acquiring vectors between any two points in a template image initial judgment point set to form a template vector set with Y vectors, wherein Y is the combination number of the two points taken from Z points, correspondingly acquiring the vectors between any two points in a test image initial judgment point set to form a test vector set with Y vectors, and enabling the template vectors to correspond to the test vectors one by one according to the corresponding relation of the points between the template image initial judgment point set and the test image initial judgment point set;
C26. setting serial numbers for Y vectors of a template vector set, and sequentially setting and constructing a vector included angle voting matrix of Y rows and Y columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C27. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector voting matrix, which use the two vector sequence numbers as a row number and a column number of each other, as a non-voting mark;
C28. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
5. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 4, wherein:
and Z is a natural number not less than 2 and not more than one tenth of the number of points in the template image rough matching point set.
6. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the step C comprises the following sub-steps,
C31. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C32. setting serial numbers for X vectors of a template vector set, and sequentially setting and constructing vector angle voting matrixes of X rows and X columns, wherein elements of non-main diagonal lines in the matrixes are voting elements;
C33. calculating an included angle between any two vectors in a template vector set, correspondingly, calculating an included angle between any two vectors in a test vector set, and judging whether the difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements of the two vector serial numbers which are a row number and a column number to each other in the vector included angle voting matrix as voting marks; if not, setting two elements in the vector voting matrix, which use the two vector sequence numbers as a row number and a column number of each other, as a non-voting mark;
C34. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
7. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the step C comprises the following sub-steps:
C41. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C42. setting serial numbers for X vectors of a template vector set, and sequentially setting and constructing vector angle voting matrixes of X rows and X columns, wherein elements of non-main diagonal lines in the matrixes are voting elements;
C43. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector angle voting matrix, which use the two vector sequence numbers as a row number and a column number, as non-voting marks;
C44. selecting end points of W vectors represented by the row sequence numbers of W rows arranged in the front W positions of the voting mark number in the vector included angle voting matrix as a template image initial judgment point set, wherein the template image initial judgment point set forms a test image initial judgment point set at the end points of the W vectors corresponding to the test image rough matching point set, and W is a natural number not less than 2;
C45. setting serial numbers for N points of a template image initial judgment point set, wherein N is less than or equal to 2 xW, sequentially setting a point pair distance voting matrix of N rows and N columns, and elements of non-main diagonal lines in the matrix are voting elements;
C46. acquiring template initial judgment point distances between any point in a template image initial judgment point set comprising N points and other points and between any point in a test image initial judgment point set comprising N points and other points, and enabling each template initial judgment point distance to correspond to the test initial judgment point distance one by one according to the corresponding relation of the sequence numbers of the template image initial judgment point set and each point of the test image initial judgment point set;
C47. judging whether the difference value between the corresponding template initial judgment point distance and the test initial judgment point distance is smaller than a set initial judgment distance difference threshold value one by one, if so, setting two elements of the serial numbers of two points corresponding to the template initial judgment point distance as a row number and a column number in the point pair distance voting matrix as voting marks; if not, setting two elements in the point pair distance voting matrix, which take the serial numbers of the two points as row numbers and column numbers, as non-voting marks;
C48. selecting the points of the template image with the serial numbers of two rows with the number of the voting marks arranged in the first two bits, and forming two pairs of optimal points with the points corresponding to the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
8. A voting decision and least squares based image recognition random sample consensus algorithm according to any one of claims 4, 6 or 7, wherein:
the direction of each vector in the template vector set and the test vector set is from a point with a smaller sequence number to a point with a larger sequence number.
9. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
in the step D, the coordinates of two optimal points in the template image point set found in the step C are set as (x) r0 ,y r0 ) And (x) r1 ,y r1 ) The corresponding two optimal point coordinates in the test image point set are (x) t0 ,y t0 ) And (x) t1 ,y t1 );
Then, the rotational-translational transformation matrix is
Figure FDA0003588502170000051
Wherein,
Figure FDA0003588502170000052
Figure FDA0003588502170000053
Figure FDA0003588502170000054
and,
when in use
Figure FDA0003588502170000061
When the temperature of the water is higher than the set temperature,
Figure FDA0003588502170000062
when in use
Figure FDA0003588502170000063
When the temperature of the water is higher than the set temperature,
Figure FDA0003588502170000064
10. the voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
said step E comprises the sub-steps of,
E1. the following steps are performed one by one for all points in the set of coarse matching points of the test image,
E11. mapping the coordinates of the current point to the coordinates of the mapping point through a rotation translation transformation matrix,
calculating the sum of the absolute values of the deviations of the coordinates of the corresponding points of the current point in the rough matching point set of the template image and the coordinates of the mapping points as a point pair deviation value,
E12. if the point pair deviation value is smaller than the set consistency set threshold value, adding the point and a point in the template image rough matching point set corresponding to the point into a consistency set as a pair of point pairs; otherwise, not adding the consistent set;
E2. for the consistent set obtained through the substep E1, if the number of the point pairs in the consistent set is greater than the set threshold value of the number of the point pairs, performing a step F; otherwise, go to step J.
11. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 2, wherein:
let the coordinates of the points in the consistent set that belong to the coarse matching point set of the test image be (x_t0, y_t0), (x_t1, y_t1), ……, (x_tN, y_tN); then the coordinates of their corresponding points in the coarse matching point set of the template image are (x_r0, y_r0), (x_r1, y_r1), ……, (x_rN, y_rN), that is, the number of matching point pairs in the consistent set is N + 1;

the updated rotation-translation transformation matrix in step G is then

H' = | cosθ'  −sinθ'  dx' |
     | sinθ'   cosθ'  dy' |
     |   0       0     1  |

wherein, with all sums Σ taken over i = 0 to N,

θ' = arctan( [ (N+1)·Σ(x_ti·y_ri − y_ti·x_ri) − Σx_ti·Σy_ri + Σy_ti·Σx_ri ] / [ (N+1)·Σ(x_ti·x_ri + y_ti·y_ri) − Σx_ti·Σx_ri − Σy_ti·Σy_ri ] ),

dx' = ( Σx_ri − cosθ'·Σx_ti + sinθ'·Σy_ti ) / (N+1),

dy' = ( Σy_ri − sinθ'·Σx_ti − cosθ'·Σy_ti ) / (N+1).
CN201610167808.8A 2016-03-22 2016-03-22 Image identification random sampling consistent algorithm based on voting decision and least square method Active CN107220580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610167808.8A CN107220580B (en) 2016-03-22 2016-03-22 Image identification random sampling consistent algorithm based on voting decision and least square method


Publications (2)

Publication Number Publication Date
CN107220580A CN107220580A (en) 2017-09-29
CN107220580B true CN107220580B (en) 2022-08-09

Family

ID=59928272


Country Status (1)

Country Link
CN (1) CN107220580B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288511B (en) 2019-05-10 2023-04-07 台州宏达电力建设有限公司台州经济开发区运检分公司 Minimum error splicing method and device based on double camera images and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310122A (en) * 2013-07-10 2013-09-18 北京航空航天大学 Parallel random sampling consensus method and device
CN104040590A (en) * 2011-12-19 2014-09-10 三菱电机株式会社 Method for estimating pose of object
CN104077769A (en) * 2014-06-06 2014-10-01 华南理工大学 Error matching point pair removing algorithm in image registration
CN104077596A (en) * 2014-06-18 2014-10-01 河海大学 Landmark-free tracking registering method
CN105046235A (en) * 2015-08-03 2015-11-11 百度在线网络技术(北京)有限公司 Lane line recognition modeling method and apparatus and recognition method and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8676030B2 (en) * 2008-04-15 2014-03-18 Shlomo Selim Rakib Methods and systems for interacting with viewers of video content


Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
A Probabilistic Criterion to Detect Rigid Point Matches Between Two Images and Estimate the Fundamental Matrix;LIONEL MOISAN等;《International Journal of Computer Vision》;20041231;第57卷(第3期);第201-218页 *
SIFT Feature Point Matching Based on Improved RANSAC Algorithm;Guangjun Shi等;《2013 Fifth International Conference on Intelligent Human-Machine Systems and Cybernetics》;20131231;第474-477页 *
SIFT算法在点云配准中的应用;王程冬等;《传感器与微系统》;20121231;第31卷(第2期);第149-152页 *
一种基于L-M算法的RANSAC图像拼接算法;姚佳宝等;《浙江理工大学学报(自然科学版)》;20150731;第33卷(第4期);第552-557页 *
一种带预处理的RANSAC图像拼接算法;付倩文等;《电子设计工程》;20130831;第21卷(第15期);第183-186、190页 *
一种投票式并行RANSAC算法及其FPGA实现;江洁等;《电子与信息学报》;20140531;第36卷(第5期);第1145-1150页 *
基于区域分块的SIFT图像匹配技术研究与实现;杜京义等;《光电工程》;20130831;第40卷(第8期);第52-58页 *
基于双向异步投票策略解点匹配的X线医学图像拼接;王双玲;《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》;20120415(第04期);第I138-1669页 *
基于特征点的全自动无缝图像拼接方法;李寒等;《计算机工程与设计》;20070531;第28卷(第9期);第2083-2085页 *
基于角点特征的图像配准技术研究;钱为;《中国优秀硕士学位论文全文数据库》;20091115(第11期);第I138-1057页 *

Also Published As

Publication number Publication date
CN107220580A (en) 2017-09-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant