CN113343839B - A target texture recognition method, device, recognition equipment and storage medium (Google Patents)

Info

Publication number
CN113343839B
Authority
CN
China
Prior art keywords
feature
target
data
point cloud
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110620074.5A
Other languages
Chinese (zh)
Other versions
CN113343839A (en)
Inventor
Guo Fei (郭飞)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huanchuang Technology Co., Ltd.
Original Assignee
Shenzhen Huanchuang Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huanchuang Technology Co., Ltd.
Priority to CN202110620074.5A
Publication of CN113343839A
Application granted
Publication of CN113343839B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 - Preprocessing
    • G06F 2218/08 - Feature extraction
    • G06F 2218/12 - Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

An embodiment of the invention relates to a target texture recognition method, a device, recognition equipment and a storage medium. The method comprises: acquiring brightness data and extracting a feature vector from the brightness data; evaluating the feature vector with an offline module to obtain an approximation coefficient; performing spatial frequency filtering on the approximation coefficient; judging, based on the filtered approximation coefficient, whether the feature vector is a target feature, the target feature corresponding to a feature template of a target; and if so, calculating the position and direction of the target feature in the brightness data. Working states such as the radar's sampling rate and rotation speed need not be changed on the fly, so data interference caused by switching states is avoided, target textures can be recognized effectively, and the processing cost of targets is reduced.

Description

Target texture recognition method, device, recognition equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of identification equipment, in particular to a target texture identification method, a target texture identification device, identification equipment and a storage medium.
Background
With the development of SLAM (Simultaneous Localization and Mapping) technology, lidar has been widely used in the control and navigation systems of mobile robots. To sense the surrounding environment comprehensively and in multiple dimensions, these systems are also paired with different types of sensors such as infrared rangefinders, cameras, IMUs and GPS, so as to improve the accuracy of the overall system.
However, consumer-grade robot products (such as intelligent sweeping robots and unmanned aerial vehicles) are often constrained by mold design, platform computing power and the like when adding extra sensors. By using the additional brightness information already output by the existing lidar for target recognition, similar functions can be realized without adding equipment, and production cost can be controlled effectively.
Disclosure of Invention
An object of embodiments of the invention is to provide a target texture recognition method, device, recognition equipment and storage medium that do not require temporarily changing working states such as the radar's sampling rate and rotation speed, thereby avoiding data interference caused by switching states, recognizing target textures effectively, and reducing the processing cost of targets.
In a first aspect, an embodiment of the present invention provides a target texture identifying method, where the method includes:
Acquiring brightness data and extracting feature vectors from the brightness data;
Evaluating the feature vector by using an offline module to obtain an approximation coefficient;
performing spatial frequency filtering processing on the approximation coefficient;
Judging whether the feature vector is a target feature or not based on the approximation coefficient after the spatial frequency filtering processing, wherein the target feature corresponds to a feature template of a target;
If yes, calculating the position and the direction of the target feature in the brightness data.
In some embodiments, the evaluating the feature vector with an offline module to obtain an approximation coefficient includes:
Calculating the Euclidean distance or direction cosine between the feature vector and the feature template to obtain an original evaluation parameter;
and normalizing the original evaluation parameters to obtain the first approximation coefficient.
In some embodiments, the evaluating the feature vector with an offline module to obtain an approximation coefficient includes:
evaluating the feature vector by using a pre-trained feature classifier to obtain an original cost coefficient;
and normalizing the original cost coefficient to obtain a second approximation coefficient.
In some embodiments, the method further comprises:
Moving the identification equipment to a preset sampling position, and collecting radar point cloud data of the preset sampling position, wherein the preset sampling position is distributed in a preset sampling area, and the radar point cloud data comprises position and brightness information;
determining radar point cloud data corresponding to the target position in the radar point cloud data according to the position information in the point cloud data and the position information of the target;
performing feature extraction based on brightness information in the radar point cloud data corresponding to the target position to obtain a feature positive sample;
obtaining negative sample data except the radar point cloud data corresponding to the target position in the radar point cloud data, and extracting features of the negative sample data to obtain a plurality of feature negative samples;
and obtaining the characteristic template according to the characteristic positive sample and the characteristic negative sample.
In some embodiments, after the obtaining the feature template, the method further comprises:
And training a classifier model based on the feature positive samples and their corresponding labels and the feature negative samples and their corresponding labels to obtain the feature classifier.
In some embodiments, the classifier model is W^T x - b, and the training of the classifier model based on the feature positive samples and their corresponding labels and the feature negative samples and their corresponding labels to obtain the feature classifier includes:
inputting the feature positive samples with their labels and the feature negative samples with their labels into the classifier model one by one, and obtaining the parameters W and b of the classifier model by optimizing a cost function;
wherein the cost function is Σ_k [1 - y_k(W^T x_k - b)].
In some embodiments, after the moving the identification device to a preset sampling location and collecting radar point cloud data of the preset sampling location, the method further comprises:
correcting the position in the radar point cloud data based on a preset map;
and intercepting a data segment of interest from the radar point cloud data.
In some embodiments, after the acquiring the luminance data, the method further comprises:
and carrying out sliding window processing of dynamic length on the real-time brightness data to intercept a section of local brightness data.
In a second aspect, an embodiment of the present invention further provides a target texture identifying apparatus, where the apparatus includes:
a feature vector acquisition module for acquiring brightness data, extracting a feature vector from the brightness data;
The evaluation module is used for evaluating the feature vector by using the offline module to obtain an approximation coefficient;
the filtering module is used for carrying out spatial frequency filtering processing on the approximation coefficient;
The judging module is used for judging whether the feature vector is a target feature or not based on the approximation coefficient after the spatial frequency filtering processing, wherein the target feature corresponds to a feature template of a target;
and a calculating module for calculating, if the feature vector is a target feature, the position and direction of the target feature in the brightness data.
In a third aspect, an embodiment of the present invention further provides an identification device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
In a fourth aspect, embodiments of the present invention also provide a non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by an identification device, cause the identification device to perform a method as described above.
In a fifth aspect, embodiments of the present invention also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by an identification device, cause the identification device to perform the above-described method.
Compared with the prior art, the target texture recognition method, device, recognition equipment and storage medium of the embodiments have at least the following advantages. Brightness data are acquired without temporarily changing the sampling rate at which the radar collects them, and a feature vector is extracted from the brightness data, which describes the target's features better and improves recognition stability. An offline module evaluates the feature vector to obtain an approximation coefficient, from which the recognition result can be judged preliminarily. To reduce the influence of noise points and abnormal data, spatial frequency filtering is applied to the approximation coefficient, and whether the feature vector is a target feature is judged from the filtered coefficient, where the target feature corresponds to a feature template of a target. When the feature vector is a target feature, the position and direction of the target feature in the brightness data are calculated. Disturbances caused by particular data are thus effectively suppressed, and the accuracy of target texture recognition is improved.
Drawings
One or more embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings, in which like references indicate similar elements; unless otherwise indicated, the figures are not to be taken as limiting.
FIG. 1 is a schematic diagram of one embodiment of an identification device of the present invention;
FIG. 2 is a flow chart of one embodiment of a target texture recognition method of the present invention;
FIG. 3 is a schematic diagram of preset sampling locations of one embodiment of a target texture recognition method of the present invention;
FIG. 4 is a schematic representation of a feature template of a target of one embodiment of the target texture recognition method of the present invention;
FIG. 5A is a schematic diagram of a target of one embodiment of a target texture recognition method of the present invention;
FIG. 5B is a schematic view of the brightness corresponding to FIG. 5A;
FIG. 6 is a waveform diagram of approximation coefficients of one embodiment of a target texture recognition method of the present invention;
FIG. 7 is a comparison of radar point cloud data at a preset sampling location before and after alignment, according to one embodiment of the target texture recognition method of the present invention;
FIG. 8 is a graph comparing waveforms of approximation coefficients before and after a filtering process for one embodiment of a target texture recognition method of the present invention;
FIG. 9 is a schematic diagram illustrating the structure of an embodiment of the object texture recognition apparatus of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The identification device can be a robot, such as an intelligent sweeping robot, an unmanned aerial vehicle and the like. The identification equipment is provided with a laser radar. The working states such as the sampling rate and the rotating speed of the radar do not need to be changed temporarily, data interference caused by switching states is avoided, target textures can be identified effectively, and the processing cost of targets is reduced.
The identification device may be a device with a vision-based ranging navigation system, without limitation.
Fig. 1 schematically shows a hardware structure of an identification device 100, and as shown in fig. 1, the identification device 100 includes a memory 11 and a processor 12, and the processor 12 is connected to the memory 11. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is not limiting of the identification device, and the identification device may include more or fewer components than shown, or may combine certain components, or split certain components, or may be a different arrangement of components.
The memory 11 is used as a nonvolatile computer-readable storage medium for storing a nonvolatile software program, a nonvolatile computer-executable program, and a module. The memory 11 may include a storage program area that may store an operating system, an application program required for at least one function, and a storage data area that may store data created according to the use of the identification device, etc. In addition, memory 11 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 11 may optionally include memory located remotely from processor 12, which may be connected to the identification device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 12 is a control center of the recognition device, and connects various parts of the entire recognition device using various interfaces and lines, and executes various functions and processes data of the recognition device by running or executing software programs and/or modules stored in the memory 11 and calling data stored in the memory 11, thereby performing overall monitoring of the recognition device, for example, implementing the target texture recognition method according to any embodiment of the present invention.
The number of processors 12 may be one or more, one processor 12 being illustrated in fig. 1. The processor 12 and the memory 11 may be connected by a bus or otherwise, which is illustrated in fig. 1 as a bus connection. Processor 12 may include a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a controller, a Field Programmable Gate Array (FPGA) device, or the like. Processor 12 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The method may be performed by the recognition device 100, and in particular, in some embodiments, the method may be performed by the processor 12.
Fig. 2 is a flow chart of a target texture recognition method according to an embodiment of the present invention, as shown in fig. 2, where the method includes:
And 101, acquiring brightness data and extracting a characteristic vector from the brightness data.
When the identification equipment carries out target texture identification, the real-time brightness data can be obtained through a laser radar, and the real-time brightness data is intercepted to obtain a section of local brightness data.
In this scheme, the surface structure of the target texture is not limited. The method applies wherever lidar sampling of a specific texture structure yields brightness information with distinct frequency characteristics, so the texture design can be adjusted as appropriate provided the spatial frequency characteristics are preserved, for example by using black-and-white stripes, gradual color changes, or variations in texture size or shape.
In some embodiments, in acquiring the luminance data, the method may further include:
and carrying out sliding window processing of dynamic length on the real-time brightness data to intercept a section of local brightness data.
Specifically, when the recognition device recognizes the target texture online, it must cope with a continuous data stream. A buffer of a certain length and a data sliding window are therefore set up: the buffer records historical brightness data, and the sliding window intercepts the brightness samples that may correspond to a potential target. That is, sliding-window processing of dynamic length is performed on the real-time brightness data to obtain a section of local brightness data.
It can be appreciated that, to cope with the requirements of different application scenarios, the buffer length should be adjusted dynamically according to the current radar running state, ensuring that the window length can cover the data segment corresponding to an entire potential target, as the sketch below illustrates.
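As a concrete illustration of the buffer and dynamic sliding window, here is a minimal Python sketch; the class name, the margin factor, and the way the window length is derived from the radar state are hypothetical choices, not specified by the description.

```python
import math
from collections import deque

class LuminanceWindow:
    """Ring buffer plus sliding window over streaming luminance samples."""

    def __init__(self, sample_rate_hz, rotation_hz, target_angle_rad, margin=1.5):
        # Samples expected to fall on the target per revolution, padded by
        # a margin so one window can cover an entire potential target.
        samples_per_rev = sample_rate_hz / rotation_hz
        self.window_len = max(1, int(margin * samples_per_rev
                                     * target_angle_rad / (2 * math.pi)))
        self.history = deque(maxlen=4 * self.window_len)  # history buffer

    def push(self, sample):
        self.history.append(sample)

    def window(self):
        """Most recent local luminance segment, or None until enough data."""
        if len(self.history) < self.window_len:
            return None
        return list(self.history)[-self.window_len:]
```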
After a section of local brightness data is obtained, it is preprocessed and features are extracted from it to obtain a group of feature vectors.
Correspondingly, the local brightness data is preprocessed first, and then a common sequence feature extraction algorithm can be adopted to calculate the feature vector. For example, in one embodiment, the feature vectors may be constructed using collocated and rearranged image moments.
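For illustration, a minimal sketch of one possible sequence-feature extractor follows; the moment construction against a normalized coordinate axis is an assumption, since the description only calls for a common sequence feature such as rearranged image moments.

```python
import numpy as np

def moment_features(luma, n_moments=6):
    """Feature vector from a 1-D luminance window: its first few moments
    against a normalized coordinate (an illustrative sequence feature)."""
    x = np.asarray(luma, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-9)   # zero mean, unit variance
    t = np.linspace(-1.0, 1.0, x.size)      # normalized sample coordinate
    return np.array([np.mean(t ** k * x) for k in range(1, n_moments + 1)])
```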
And 102, evaluating the feature vector by using an offline module to obtain an approximation coefficient.
When recognizing the target texture, the feature vector can be evaluated with an offline module, which comprises a feature template and a trained feature classifier, both obtained during an offline processing stage. Evaluation can follow a low-compute mode, in which the feature vector is compared against the feature template to obtain a first approximation coefficient; the feature template itself is learned offline from a small sample set. Alternatively, a high-compute mode can be used, in which the feature vector is evaluated with a pre-trained feature classifier to obtain an original cost coefficient, which is then normalized into a second approximation coefficient.
In addition, when the real-time brightness data is intercepted, an appropriate interception interval needs to be set according to a sampling position, and in some embodiments, the obtaining of the feature template may include:
Moving the identification equipment to a preset sampling position, and collecting radar point cloud data of the preset sampling position, wherein the preset sampling position is distributed in a preset sampling area, and the radar point cloud data comprises position and brightness information;
determining radar point cloud data corresponding to the target position in the radar point cloud data according to the position information in the point cloud data and the position information of the target;
performing feature extraction based on brightness information in the radar point cloud data corresponding to the target position to obtain a feature positive sample;
obtaining negative sample data except the radar point cloud data corresponding to the target position in the radar point cloud data, and extracting features of the negative sample data to obtain a plurality of feature negative samples;
and obtaining the characteristic template according to the characteristic positive sample and the characteristic negative sample.
Specifically, the recognition device is moved to a preset sampling position, and radar point cloud data are collected there. The preset sampling positions may be distributed uniformly or non-uniformly within the preset sampling area; their placement is not limited. The radar point cloud data include position and brightness information. The preset sampling area is the region in which a high detection rate of the target by the lidar is expected; it can be adapted to the application scenario, may be a regular, symmetric region such as a sector or rectangle, and may be open or closed, without limitation.
There is likewise no particularly strict requirement on the preset sampling positions; any layout covering different distances and angles will do. Depending on the distance to the target and the radar's sampling rate, the number of sample points collected on the same target differs. As shown in fig. 3, sampling positions A, B and C are illustrated, where the number of lines represents the number of sample points, i.e. the number of scan lines. The radar acquires data scan line by scan line, and at a given rotation speed the target area is scanned more densely the closer it is, and more sparsely the farther away it is. Sampling position A is therefore farthest from the target and yields the sparsest data, while sampling position C is closest and yields the densest.
Radar point cloud data can thus be collected at sampling positions A, B and C respectively, and the differing numbers of points are interpolated and resampled into brightness information of fixed length; see the sketch below. For example, the radar point cloud data at the distant sampling position A may have as few as 24 points and those at the near sampling position C as many as 52; after interpolation and resampling, both are normalized to 32 points, making the information at different distances comparable.
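A few lines of Python sketch this fixed-length resampling; linear interpolation and the helper name are illustrative assumptions, with 32 output points as in the example above.

```python
import numpy as np

def resample_fixed_length(luma, n_out=32):
    """Map a variable-length luminance segment (e.g. ~24 points at the far
    position A, ~52 at the near position C) onto a fixed 32-point grid so
    samples taken at different distances become comparable."""
    src = np.linspace(0.0, 1.0, len(luma))
    dst = np.linspace(0.0, 1.0, n_out)
    return np.interp(dst, src, np.asarray(luma, dtype=float))
```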
Then, the radar point cloud data corresponding to the target position are determined within the radar point cloud data according to the position information in the point cloud data and the position information of the target. That is, the radar point cloud data at a preset sampling position also include points that do not belong to the target, and which radar points belong to the target can be judged preliminarily from the positions of the collected points and the position of the target.
Finally, feature extraction is performed on the brightness information in the radar point cloud data corresponding to the target position to obtain feature positive samples; negative sample data, i.e. the radar point cloud data other than those corresponding to the target position, are obtained, and features are extracted from them to obtain a number of feature negative samples; the feature template is then obtained from the feature positive samples and the feature negative samples. Fig. 4 is a schematic diagram of a feature template of a target.
The target may be a structure corresponding to the target texture, such as a black-white equidistant stripe structure, as shown in fig. 5A, fig. 5A is a schematic diagram of the target, and fig. 5B is a sample schematic diagram of brightness corresponding to fig. 5A. The feature templates are used for subsequent comparison calculations to obtain approximation coefficients.
In a low-compute example, the calculation of the approximation coefficient comprises first computing the Euclidean distance between the feature vector and the feature template, then mapping that distance to a value between 0 and 1 with an exponential function; this value is taken as the first approximation coefficient. The exponential function is not the only possible model: any one-to-one monotonic mapping achieves a similar effect.
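A minimal sketch of this low-compute evaluation follows; the scale parameter and the function name are assumptions, and exp(-d/scale) stands in for any monotonic one-to-one mapping.

```python
import numpy as np

def first_approximation_coefficient(feature, template, scale=1.0):
    """Euclidean distance to the feature template, mapped into (0, 1] by
    an exponential; 1.0 indicates a perfect match. `scale` is an assumed
    tuning parameter, not taken from the description."""
    d = np.linalg.norm(np.asarray(feature, float) - np.asarray(template, float))
    return float(np.exp(-d / scale))
```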
The feature vector is compared with the feature template, and the waveform of the resulting approximation coefficients is shown in fig. 6, where the waveforms over the two segments 100-200 and 600-800 are clearly regular and the target feature corresponding to the target under test can essentially be determined.
Besides the low-compute mode of comparing a group of feature vectors against a pre-obtained feature template of the target to get the raw approximation coefficients, the approximation coefficient can also be obtained in a high-compute mode, where its calculation is determined mainly by the classifier model. In a typical SVM (Support Vector Machine) based example, the classifier output can be normalized from the distance of the feature vector's corresponding high-dimensional data point to the separating hyperplane. That distance may likewise be mapped onto the value interval 0-1 as the expression of the approximation coefficient.
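A sketch of this normalization, assuming a scikit-learn-style classifier exposing decision_function, follows; the logistic squashing is one possible monotonic map onto (0, 1), not mandated by the description.

```python
import numpy as np

def second_approximation_coefficient(clf, feature):
    """Signed distance from the separating hyperplane, taken as the raw
    cost coefficient and squashed into (0, 1) with a logistic map."""
    raw = float(clf.decision_function(np.asarray(feature, float).reshape(1, -1))[0])
    return 1.0 / (1.0 + np.exp(-raw))
```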
In some of these embodiments, the method may further comprise:
Evaluating the feature vector with the feature classifier to obtain an original cost coefficient;
and normalizing the original cost coefficient to obtain a second approximation coefficient.
When computing in the high-compute classifier mode, real-time brightness data are acquired and dynamically intercepted to obtain a section of local brightness data; the local brightness data are preprocessed and features extracted to obtain a group of feature vectors; the feature vectors are evaluated with the feature classifier to obtain original cost coefficients; spatial frequency filtering is applied to the resulting approximation coefficients; whether a feature vector is a target feature is judged from the filtered approximation coefficient, the target feature corresponding to the feature template of the target; and if so, the position and direction of the target feature in the brightness data are calculated.
Compared with the low-compute mode, the high-compute mode uses an SVM classifier model for recognition and adds the spatial filtering processing, which effectively suppresses disturbances caused by particular data and improves the accuracy of target texture recognition.
Of course, the invention is not limited to evaluating the feature vector in either the high-compute or the low-compute mode alone; the feature vector may also be evaluated with both.
In some of these embodiments, to obtain the feature classifier, after the obtaining the feature template, the method may further include:
And training a classifier model based on the feature positive samples and their corresponding labels and the feature negative samples and their corresponding labels to obtain the feature classifier.
To obtain the feature classifier, the recognition device is first moved to preset sampling positions and radar point cloud data are collected there; the preset sampling positions are distributed in a preset sampling area (their distribution is again not limited), and the radar point cloud data include position and brightness information. The radar point cloud data belonging to the target are determined from the point positions and the target's position. Feature extraction is then performed on the brightness information of the target's radar point cloud data to obtain feature positive samples, and negative sample data, i.e. the radar point cloud data other than the target's, are obtained and features extracted from them to obtain a number of feature negative samples; a common sequence feature extraction algorithm may be used. The radar point cloud data at target positions yield the feature positive samples x_P = {x_Pi}, i = 1..N, and the data at non-target positions yield the feature negative samples x_N = {x_Nj}, j = 1..M. Unlike the procedure for obtaining the target's feature template, the feature classifier is obtained by training a classifier model on the feature positive samples with their labels and the feature negative samples with their labels. The label of a feature positive sample is set to 1 and that of a feature negative sample to -1, and each feature vector is combined with its label into a pair {x_Ck, y_Ck}, C ∈ {P, N}, k ∈ {i, j}. The classifier model is W^T x - b. Specifically, the feature positive samples with their labels and the feature negative samples with their labels are input into the classifier model one by one, and the parameters W and b of the classifier model are obtained by optimizing the cost function Σ_k [1 - y_k(W^T x_k - b)].
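As an illustration of this training step, the following Python sketch fits W and b by subgradient descent on the hinge-style form of the cost above; the clamp at zero (the standard hinge form, which the description writes without), the learning rate, and the epoch count are assumptions.

```python
import numpy as np

def train_feature_classifier(x_pos, x_neg, lr=0.01, epochs=200):
    """Fit W and b of the model W^T x - b by subgradient descent on
    sum_k max(0, 1 - y_k (W^T x_k - b)). Positive samples are labeled
    +1, negative samples -1."""
    X = np.vstack([x_pos, x_neg]).astype(float)
    y = np.concatenate([np.ones(len(x_pos)), -np.ones(len(x_neg))])
    W, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xk, yk in zip(X, y):
            if yk * (W @ xk - b) < 1.0:   # margin violated -> update
                W += lr * yk * xk
                b -= lr * yk
    return W, b
```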
In some implementations of the classifier, additional processing steps may be introduced, such as applying various transformations to the sample data to augment the dataset, projecting the features with PCA, or introducing kernel functions for nonlinear separation; of course, these steps should be regarded as incremental improvements on the basic classifier rather than differences in principle.
In some embodiments, after the moving the identification device to a preset sampling location and collecting radar point cloud data of the preset sampling location, the method further includes:
correcting the position in the radar point cloud data based on a preset map;
and intercepting a data segment of interest from the radar point cloud data.
Specifically, in the offline processing stage, preprocessing resembles the preprocessing of local brightness data and consists of aligning and intercepting the raw data. After the recognition device has been moved to a preset sampling position and radar point cloud data collected there, the positions in the radar point cloud data are corrected based on a preset map. For example, if the radar point cloud data at a preset sampling position are not aligned with the corresponding position in the preset map, they can be corrected and aligned. Fig. 7 compares the radar point cloud data at a preset sampling position before and after alignment: the scaled outer frame is the coordinate system of the preset map, and correcting the positions based on the preset map aligns the point cloud data in that coordinate system with the actual scene. A data segment of interest is then intercepted from the radar point cloud data; in the figure this is the indented data segment at the top.
The interception can be done by manually setting a range, or adaptively from the map position information; since feature extraction and other operations follow, the interception precision need not be set very high. As shown in fig. 7, the radar point cloud data of interest are the upper concave portion, which is cut out as the data of interest and contains the brightness information corresponding to the target texture.
And 103, performing spatial frequency filtering processing on the approximation coefficient.
And 104, judging whether the feature vector is a target feature based on the approximation coefficient after the spatial frequency filtering processing, wherein the target feature corresponds to a feature template of a target.
And 105, if yes, calculating the position and the direction of the target feature in the brightness data.
Once the waveform of the approximation coefficients is obtained, the presence of the target feature corresponding to the target under test can be judged preliminarily. However, owing to noise and abnormal data, judging the presence of the target directly from the raw approximation coefficients lacks stability, so the approximation coefficients are further subjected to spatial frequency filtering.
Specifically, with a specified target texture, the waveform of the approximation coefficients forms a specific frequency pattern near the target signal, and this part of the signal produces a more stable response after band-pass filtering; judging the presence of the target from the band-passed response therefore effectively suppresses signal disturbances and improves the accuracy of the decision. The band-pass filter parameters should be calculated from the texture's physical dimensions, or can be estimated directly from the sampled data.
In one embodiment, the target texture is set to an equal-length interleaved pattern of three white segments and two black segments, so that its raw luminance signal forms a "mountain"-shaped waveform. As the sliding window passes over this signal, the raw approximation coefficients between the feature vectors of the intercepted signal and the feature template form a high-low interleaved waveform: the two signals produce a larger approximation response when their segments align and a smaller one when they are staggered. Since the texture segments are of equal length, the intervals between large and small responses are also of equal length; viewed as a continuous signal, the approximation waveform oscillates at a specific frequency over this segment. Fig. 8 compares the waveform of the approximation coefficients before and after filtering: the waveform at the target feature corresponding to the target under test is clearly reinforced after filtering, so disturbances from particular data are effectively suppressed and recognition accuracy improves.
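As a sketch of the band-pass stage, assuming SciPy's Butterworth design: the filter order and the way the pass band is specified are illustrative, since the description only requires that the parameters follow from the texture's physical size or be estimated from sampled data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_coefficients(coeffs, f_lo, f_hi, fs, order=3):
    """Band-pass the approximation-coefficient sequence so the regular
    oscillation produced by the equal-pitch stripes is reinforced while
    isolated noise spikes are attenuated. f_lo/f_hi would in practice
    derive from the stripe pitch and the radar sampling geometry."""
    nyq = fs / 2.0
    b, a = butter(order, [f_lo / nyq, f_hi / nyq], btype="band")
    return filtfilt(b, a, np.asarray(coeffs, dtype=float))
```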
After the target feature is determined, its position and direction in the brightness data are calculated: from the position of the intercepted local brightness segment within the real-time brightness data, the position and direction on the map of the target texture corresponding to that segment are derived. For a planar target, the position and direction are specifically the coordinates of the target's center point and its tangent/normal vector.
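One straightforward way to realize this position-and-direction computation for a planar target is a least-squares line fit over the matched points; the following sketch is an assumption of this kind, not a formula prescribed by the description.

```python
import numpy as np

def planar_target_pose(points_xy):
    """Estimate a planar target's center and tangent/normal directions
    from the 2-D radar points matched to the target texture, via a
    principal-direction (SVD) line fit."""
    P = np.asarray(points_xy, dtype=float)
    center = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - center)   # principal axes of the points
    tangent = Vt[0]                        # along the target surface
    normal = np.array([-tangent[1], tangent[0]])  # 90-degree rotation
    return center, tangent, normal
```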
In the target texture recognition method of the embodiment of the invention, brightness data are acquired without temporarily changing the radar's sampling rate, and a feature vector is extracted from the brightness data to describe the target's features better and improve recognition stability. An offline module evaluates the feature vector to obtain an approximation coefficient, from which the recognition result can be judged preliminarily. Spatial frequency filtering is applied to the approximation coefficient to reduce the influence of noise and abnormal data, and whether the feature vector is a target feature is judged from the filtered coefficient, the target feature corresponding to a feature template of a target. When it is, the position and direction of the target feature in the brightness data are calculated. Disturbances caused by particular data are thereby effectively suppressed, and the accuracy of target texture recognition is improved.
Compared with the prior art, the application has the following advantages:
1. the extracted features can better describe the feature space of the target by setting a plurality of sampling positions, so that the recognition stability is improved.
2. The setting of the sampling position has no particularly strict requirement, and is convenient for the design of the production test flow.
3. The robot's rotation speed, direction and sampling rate need not be specially adjusted according to the target's position, which simplifies the control flow and makes the method better suited to small embedded systems.
4. Aiming at a system with stronger computing capability, a classifier interface based on small sample learning is provided, and the identification capability of target features can be further enhanced, so that the overall identification accuracy is improved.
5. The texture of the target is controlled, so that the matching degree coefficient can form a specific frequency response, and disturbance caused by abnormal data can be effectively restrained after filtering processing.
6. The texture space frequency and the target structure have no particularly strong dependency, so that the texture frequency characteristics are ensured, and meanwhile, the structure still maintains a large degree of freedom in processing, thereby being convenient for controlling the manufacturing cost.
Accordingly, as shown in fig. 9, the embodiment of the present invention further provides a target texture recognition apparatus, which may be applied to a recognition device, for example, the recognition device 100 shown in fig. 1, where the target texture recognition apparatus 800 includes:
a feature vector obtaining module 801, configured to obtain luminance data, and extract a feature vector from the luminance data;
the evaluation module 802 is configured to evaluate the feature vector by using an offline module to obtain an approximation coefficient;
A filtering module 803, configured to perform spatial frequency filtering on the approximation coefficient;
A judging module 804, configured to judge whether the feature vector is a target feature based on the approximation coefficient after the spatial frequency filtering process, where the target feature corresponds to a feature template of a target;
and a calculating module 805, configured to calculate, if the feature vector is a target feature, the position and direction of the target feature in the luminance data.
With the target texture recognition method, device, recognition equipment and storage medium of the embodiments of the invention, brightness data are acquired without temporarily changing the radar's sampling rate, and a feature vector is extracted from the brightness data to describe the target's features better and improve recognition stability. An offline module evaluates the feature vector to obtain an approximation coefficient, from which the recognition result can be judged preliminarily. Spatial frequency filtering is applied to the approximation coefficient to reduce the influence of noise and abnormal data, and whether the feature vector is a target feature is judged from the filtered coefficient, the target feature corresponding to a feature template of a target. When it is, the position and direction of the target feature in the brightness data are calculated, so disturbances caused by particular data are effectively suppressed and the accuracy of target texture recognition is improved.
In some embodiments supporting the low-compute mode, the evaluation module 802 is further configured to:
and comparing and calculating the feature vector with the feature template to obtain a first approximation coefficient.
In some embodiments supporting the low-compute mode, the evaluation module 802 is further configured to:
Calculating the Euclidean distance or direction cosine between the feature vector and the feature template to obtain an original evaluation parameter;
and normalizing the original evaluation parameters to obtain the first approximation coefficient.
In some embodiments supporting the high-compute mode, the evaluation module 802 is further configured to:
evaluating the feature vector by using a pre-trained feature classifier to obtain an original cost coefficient;
and normalizing the original cost coefficient to obtain a second approximation coefficient.
In some embodiments, the target texture recognition device 800 further comprises:
A feature template acquisition module 806, configured to:
Moving the identification equipment to a preset sampling position, and collecting radar point cloud data of the preset sampling position, wherein the preset sampling position is distributed in a preset sampling area, and the radar point cloud data comprises position and brightness information;
determining radar point cloud data corresponding to the target position in the radar point cloud data according to the position information in the point cloud data and the position information of the target;
performing feature extraction based on brightness information in the radar point cloud data corresponding to the target position to obtain a feature positive sample;
obtaining negative sample data except the radar point cloud data corresponding to the target position in the radar point cloud data, and extracting features of the negative sample data to obtain a plurality of feature negative samples;
and obtaining the characteristic template according to the characteristic positive sample and the characteristic negative sample.
In some embodiments, after the feature template acquisition module 806 performs the obtaining the feature template, the target texture recognition device 800 further includes:
the feature classifier acquisition module 807 is further configured to:
And training a classifier model based on the feature positive samples and their corresponding labels and the feature negative samples and their corresponding labels to obtain the feature classifier.
In some embodiments, the classifier model is W^T x - b, and the feature classifier acquisition module 807 is further configured to:
Inputting the characteristic positive sample, the label corresponding to the characteristic positive sample and the label corresponding to the characteristic negative sample into the classifier model one by one, and obtaining parameters W and b of the classifier model by taking an optimized cost function as a target;
Wherein the cost function to be optimized is Σ_k [1 - y_k(W^T x_k - b)].
In some embodiments, the target texture recognition apparatus 800 further comprises a preprocessing module 808 for:
correcting the position in the radar point cloud data based on a preset map;
and intercepting a data segment of interest from the radar point cloud data.
In some embodiments, after performing the acquisition of the luminance data, the feature vector acquisition module 801 is further configured to:
And carrying out sliding-window processing of dynamic length on the real-time brightness data to intercept a section of local brightness data.
The device has the corresponding function modules and beneficial effects of the method. Technical details which are not described in detail in the device embodiments may be found in the methods provided by the embodiments of the present application.
Embodiments of the present application also provide a non-transitory computer readable storage medium storing computer executable instructions for execution by one or more processors, such as one of the processors 12 of fig. 1, to cause the one or more processors to perform the target texture recognition method of any of the method embodiments described above, such as performing method steps 101 through 105 of fig. 2 described above, to implement the functions of blocks 801-808 of fig. 9.
The apparatus embodiments described above are merely illustrative: units described as separate may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment's solution.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, but may also be implemented by means of hardware. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and where the program may include processes implementing the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random-access Memory (Random Access Memory, RAM), or the like.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. The technical features of the above embodiments, or of different embodiments, may be combined in any order, and many other variations of the different aspects of the invention exist that are not detailed here for brevity. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, without such modifications or substitutions departing from the essence and scope of the technical solutions of the embodiments of the present invention.

Claims (16)

1. A method of target texture recognition, the method comprising:
Acquiring brightness data and extracting feature vectors from the brightness data;
Evaluating the feature vector by using an offline module to obtain an approximation coefficient;
The step of evaluating the feature vector by using an offline module to obtain an approximation coefficient comprises the following steps:
calculating the Euclidean distance or direction cosine between the feature vector and the feature template to obtain an original evaluation parameter;
Normalizing the original evaluation parameters to obtain an approximation coefficient;
performing spatial frequency filtering processing on the approximation coefficient;
Judging whether the feature vector is a target feature or not based on the approximation coefficient after the spatial frequency filtering processing, wherein the target feature corresponds to a feature template of a target;
If yes, calculating the position and the direction of the target feature in the brightness data.
2. The method according to claim 1, wherein the method further comprises:
Moving the identification equipment to a preset sampling position, and collecting radar point cloud data of the preset sampling position, wherein the preset sampling position is distributed in a preset sampling area, and the radar point cloud data comprises position and brightness information;
determining radar point cloud data corresponding to the target position in the radar point cloud data according to the position information in the point cloud data and the position information of the target;
performing feature extraction based on brightness information in the radar point cloud data corresponding to the target position to obtain a feature positive sample;
obtaining negative sample data except the radar point cloud data corresponding to the target position in the radar point cloud data, and extracting features of the negative sample data to obtain a plurality of feature negative samples;
and obtaining the characteristic template according to the characteristic positive sample and the characteristic negative sample.
3. The method of claim 2, wherein after the obtaining the feature template, the method further comprises:
And training a classifier model based on the feature positive sample and the label corresponding to the feature positive sample and the feature negative sample and the label corresponding to the feature negative sample to obtain a feature classifier.
4. The method of claim 3, wherein the classifier model is W^T x - b, and wherein the training the classifier model based on the feature positive samples and the labels corresponding to the feature positive samples, and the feature negative samples and the labels corresponding to the feature negative samples to obtain the feature classifier comprises:
Inputting the characteristic positive sample, the label corresponding to the characteristic positive sample and the label corresponding to the characteristic negative sample into the classifier model one by one, and obtaining parameters W and b of the classifier model by taking an optimized cost function as a target;
wherein the cost function to be optimized is Σ_k [1 - y_k(W^T x_k - b)].
5. The method of claim 2, wherein after said moving the identification device to a preset sampling location and collecting radar point cloud data for the preset sampling location, the method further comprises:
correcting the position in the radar point cloud data based on a preset map;
and intercepting the data segment of interest from the radar point cloud data.
6. The method of claim 1, wherein after the acquiring the luminance data, the method further comprises:
and carrying out sliding window processing of dynamic length on the real-time brightness data to intercept a section of local brightness data.
7. A method of target texture recognition, the method comprising:
Acquiring brightness data and extracting feature vectors from the brightness data;
Evaluating the feature vector by using an offline module to obtain an approximation coefficient;
The step of evaluating the feature vector by using an offline module to obtain an approximation coefficient comprises the following steps:
evaluating the feature vector by using a pre-trained feature classifier to obtain an original cost coefficient;
Normalizing the original cost coefficient to obtain an approximation coefficient;
performing spatial frequency filtering processing on the approximation coefficient;
Judging whether the feature vector is a target feature or not based on the approximation coefficient after the spatial frequency filtering processing, wherein the target feature corresponds to a feature template of a target;
If yes, calculating the position and the direction of the target feature in the brightness data.
8. The method of claim 7, wherein the method further comprises:
moving the identification device to a preset sampling position and collecting radar point cloud data of the preset sampling position, wherein the preset sampling positions are distributed in a preset sampling area and the radar point cloud data comprises position and brightness information;
determining, according to the position information in the point cloud data and the position information of the target, the radar point cloud data corresponding to the target position;
performing feature extraction based on the brightness information in the radar point cloud data corresponding to the target position to obtain a feature positive sample;
obtaining negative sample data from the radar point cloud data, excluding the radar point cloud data corresponding to the target position, and performing feature extraction on the negative sample data to obtain a plurality of feature negative samples;
and obtaining the feature template according to the feature positive sample and the feature negative samples.
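A Python sketch of the template-building flow of claim 8 (and of claim 2), under stated assumptions: points within a fixed radius of the known target position form the positive region, the remainder of each scan is split into negative segments, and the template is simply the mean positive feature; the claim does not fix how the positive and negative samples are combined. feature_fn is assumed to map a brightness segment to a fixed-length vector.

    import numpy as np

    def build_feature_template(scans, target_xy, radius, feature_fn, n_neg=8):
        # scans: list of (N, 3) arrays of (x, y, brightness), one per sampling position.
        positives, negatives = [], []
        for cloud in scans:
            dist = np.linalg.norm(cloud[:, :2] - np.asarray(target_xy), axis=1)
            on_target = dist <= radius
            positives.append(feature_fn(cloud[on_target, 2]))   # feature positive sample
            for seg in np.array_split(cloud[~on_target, 2], n_neg):
                if seg.size:
                    negatives.append(feature_fn(seg))           # feature negative samples
        template = np.mean(positives, axis=0)  # one simple choice of template
        return template, positives, negatives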
9. The method of claim 8, wherein after obtaining the feature template, the method further comprises:
training a classifier model based on the feature positive sample and its corresponding label, and the feature negative samples and their corresponding labels, to obtain a feature classifier.
10. The method of claim 9, wherein the classifier model is W^T x − b, and wherein training the classifier model based on the feature positive sample and its corresponding label and the feature negative samples and their corresponding labels to obtain the feature classifier comprises:
inputting the feature positive sample with its corresponding label and the feature negative samples with their corresponding labels into the classifier model one by one, and obtaining the parameters W and b of the classifier model by taking the optimization of a cost function as the objective;
wherein the optimization cost function is Σ_k (1 − y_k (W^T x_k − b)).
11. The method of claim 8, wherein after moving the identification device to the preset sampling position and collecting the radar point cloud data of the preset sampling position, the method further comprises:
correcting the positions in the radar point cloud data based on a preset map;
and intercepting a data segment of interest from the radar point cloud data.
12. The method of claim 7, wherein after acquiring the brightness data, the method further comprises:
performing dynamic-length sliding-window processing on the real-time brightness data to intercept a segment of local brightness data.
13. A target texture recognition apparatus, the apparatus comprising:
a feature vector acquisition module, configured to acquire brightness data and extract a feature vector from the brightness data;
an evaluation module, configured to evaluate the feature vector by using an offline module to obtain an approximation coefficient, the evaluation module being further configured to calculate the Euclidean distance or the direction cosine between the feature vector and the feature template to obtain an original evaluation parameter;
a filtering module, configured to perform spatial frequency filtering processing on the approximation coefficient;
a judging module, configured to judge, based on the approximation coefficient after the spatial frequency filtering processing, whether the feature vector is a target feature, wherein the target feature corresponds to a feature template of a target;
and a calculating module, configured to calculate, if the feature vector is a target feature, the position and the direction of the target feature in the brightness data.
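The evaluation module of claim 13 matches against the template directly rather than through a classifier; its two recited measures are standard, as in this short Python sketch (the mode switch and the epsilon guard are added here for illustration):

    import numpy as np

    def evaluate_against_template(x, template, mode="cosine"):
        # Original evaluation parameter: Euclidean distance (smaller = closer)
        # or direction cosine (larger = closer) between vector and template.
        x, t = np.asarray(x, float), np.asarray(template, float)
        if mode == "euclidean":
            return float(np.linalg.norm(x - t))
        return float(x @ t / (np.linalg.norm(x) * np.linalg.norm(t) + 1e-12))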
14. A target texture recognition apparatus, the apparatus comprising:
a feature vector acquisition module, configured to acquire brightness data and extract a feature vector from the brightness data;
an evaluation module, configured to evaluate the feature vector by using an offline module to obtain an approximation coefficient, the evaluation module being further configured to evaluate the feature vector by using a pre-trained feature classifier to obtain an original cost coefficient;
a filtering module, configured to perform spatial frequency filtering processing on the approximation coefficient;
a judging module, configured to judge, based on the approximation coefficient after the spatial frequency filtering processing, whether the feature vector is a target feature, wherein the target feature corresponds to a feature template of a target;
and a calculating module, configured to calculate, if the feature vector is a target feature, the position and the direction of the target feature in the brightness data.
15. An identification device, characterized in that the identification device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 12.
16. A non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by an identification device, cause the identification device to perform the method of any one of claims 1 to 12.
CN202110620074.5A 2021-06-03 2021-06-03 A target texture recognition method, device, recognition equipment and storage medium Active CN113343839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110620074.5A CN113343839B (en) 2021-06-03 2021-06-03 A target texture recognition method, device, recognition equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113343839A CN113343839A (en) 2021-09-03
CN113343839B (en) 2025-03-28

Family

ID=77473514

Country Status (1)

Country Link
CN (1) CN113343839B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004935A (en) * 2021-11-08 2022-02-01 优奈柯恩(北京)科技有限公司 Method and device for three-dimensional modeling through three-dimensional modeling system
CN115966032A (en) * 2022-12-29 2023-04-14 杭州海康威视数字技术股份有限公司 Biological identification method and device and electronic equipment

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103745236A (en) * 2013-12-20 2014-04-23 清华大学 Texture image identification method and texture image identification device
CN109816626A (en) * 2018-12-13 2019-05-28 深圳高速工程检测有限公司 Road surface crack detection method, device, computer equipment and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8786437B2 (en) * 2000-09-08 2014-07-22 Intelligent Technologies International, Inc. Cargo monitoring method and arrangement
CN105403883A (en) * 2015-10-29 2016-03-16 河南工业大学 Ground penetrating radar underground target position detection method
CN110110738A (en) * 2019-03-20 2019-08-09 西安电子科技大学 A kind of Recognition Method of Radar Emitters based on multi-feature fusion



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000, Floor 1801, Block C, Minzhi Stock Commercial Center, North Station Community, Minzhi Street, Longhua District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Huanchuang Technology Co.,Ltd.

Address before: 518000 2407-2409, building 4, phase II, Tian'an Yungu Industrial Park, Gangtou community, Bantian street, Longgang District, Shenzhen, Guangdong

Applicant before: SHENZHEN CAMSENSE TECHNOLOGIES Co.,Ltd.

GR01 Patent grant