
CN116311203A - Character recognition method and system for steel coil end face based on lightweight feature extraction network - Google Patents


Info

Publication number
CN116311203A
CN116311203A
Authority
CN
China
Prior art keywords
steel coil
image
circular ring
area
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310205677.8A
Other languages
Chinese (zh)
Inventor
王晓晨
刘瑾妍
闫书宗
杨荃
徐冬
孙友昭
何海楠
赵剑威
彭功状
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing (USTB)
Priority to CN202310205677.8A
Publication of CN116311203A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a steel coil end face character recognition method and system based on a lightweight feature extraction network, and relates to the technical field of steel coil production automation. The method comprises the following steps: acquiring the accurate edge area of each steel coil in the image, and segmenting the steel coil image to obtain a sub-pixel-level steel coil image; for the sub-pixel-level steel coil image, calculating the circle center position coordinates of the steel coil and the radius values of the circular ring boundary, and determining the circular ring area of the steel coil; flattening the steel coil circular ring area into a rectangular area and determining the pixel values of the rectangular area, to obtain a rectangular image of the unfolded steel coil ring; and adopting a lightweight feature extraction network model to recognize the steel coil end face characters in the rectangular image. The method and system directly capture steel coil images from on-site monitoring, restore pixel information with high precision after flattening, and recognize the characters on the steel coil label end face, with strong reliability and a high recognition rate.

Description

Steel coil end face character recognition method and system based on lightweight characteristic extraction network
Technical Field
The invention relates to the technical field of steel coil production automation, in particular to a steel coil end face character recognition method and system based on a lightweight characteristic extraction network.
Background
In the strip steel production process, the strip is coiled in the last production step to facilitate subsequent transportation. Each coil corresponds to a number that encodes information such as the production batch, type and model of the steel coil, which is key to tracking product quality and tracing information across the whole production flow. At present, most domestic steel plants obtain character information such as steel coil numbers by manually checking monitoring videos or by direct inspection on site. When finished steel coils are stacked, the orientation of the end face characters is random and inconvenient to read, the imaging background is complex, errors are unavoidable when monitoring videos are checked manually over long periods, and going on site to read the characters directly is cumbersome. Some steel plants currently use traditional machine vision for recognition, which relies heavily on hardware, incurs high equipment maintenance costs, and suffers from poor algorithm accuracy and low recognition speed; naively flattening the curved end face characters causes heavy pixel loss, which greatly affects recognition accuracy.
The end face character recognition of the steel coil is one of end face character recognition in industrial scenes, and usually has image defects caused by complex imaging environment, disordered background, bending deformation of the end face characters and uneven printing surfaces.
Disclosure of Invention
Aiming at the problems, the invention provides a method and a system for directly intercepting a steel coil image from site monitoring, restoring pixel information with high precision after flattening and carrying out character recognition on the label end face of the steel coil. In addition, the method has the advantages of accurate extraction of the position information of the steel coil, small pixel loss in flattening treatment, high accuracy, high speed and strong generalization capability of the end face character recognition algorithm, greatly improves the detection accuracy, and reduces the labor cost and the equipment maintenance cost.
According to a first aspect of the technical scheme of the invention, a steel coil end face character recognition method based on a lightweight characteristic extraction network is provided, and the recognition method comprises the following steps:
s1: acquiring accurate edge areas of all steel coils in the image, and dividing the steel coil image to obtain a sub-pixel level steel coil image;
s2: aiming at the sub-pixel level steel coil image, calculating the coordinates of the circle center position of the steel coil and the radius value information of the circular ring boundary, and determining the circular ring area of the steel coil;
s3: flattening the steel coil circular ring area into a rectangular area, determining pixel values of the rectangular area, and obtaining a rectangular image of the steel coil after the steel coil circular ring area is unfolded;
s4: adopting a lightweight feature extraction network model to recognize the steel coil end face characters in the rectangular image of the steel coil.
Further, the step S1 specifically includes:
s11: acquiring a steel coil stacking image;
s12: dividing the steel coil from the background by adopting a dynamic threshold segmentation mode, extracting all edges of the steel coil, and obtaining an original steel coil image;
s13: for the original steel coil image, combined with a sub-pixel edge detection algorithm, performing coarse positioning of edge points with an improved filtering operator, and performing interpolation and edge refinement on the image with cubic spline interpolation, obtaining a sub-pixel-level steel coil image.
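As a concrete illustration of the sub-pixel refinement in step S13, the sketch below fits a parabola through three gradient-magnitude samples around a coarsely located edge peak and returns the fractional offset of the true peak. It is a minimal stand-in only: the patent's improved filtering operator and cubic spline interpolation are not specified here, and the function name and the parabolic fit are assumptions.

```python
def subpixel_edge_offset(g_prev: float, g_mid: float, g_next: float) -> float:
    """Return the sub-pixel offset of an edge peak relative to the
    central sample, in (-0.5, 0.5).

    g_prev, g_mid, g_next are gradient magnitudes at pixels i-1, i, i+1,
    where i is the coarse edge location (the discrete maximum). A
    parabola is fitted through the three samples and the offset of its
    vertex is returned.
    """
    denom = g_prev - 2.0 * g_mid + g_next
    if denom == 0.0:  # flat neighborhood: keep the integer location
        return 0.0
    return 0.5 * (g_prev - g_next) / denom

# A symmetric peak stays put; an asymmetric one shifts toward the
# stronger neighbor.
print(subpixel_edge_offset(1.0, 5.0, 1.0))  # symmetric -> 0.0
print(subpixel_edge_offset(2.0, 5.0, 4.0))  # shifts right, toward g_next
```

A cubic spline through more samples, as the patent names, refines the same idea; the three-point parabola is just the smallest interpolant that yields a sub-pixel vertex.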
Further, the step S2 specifically includes: for the sub-pixel level steel coil image, extracting a complete ring in the sub-pixel level steel coil image by adopting Hough transformation, obtaining the circle center position coordinates of the complete ring and the radius information of the maximum and minimum circles corresponding to the ring boundary, and determining the steel coil ring area.
Further, the step S2 specifically includes:
s21: aiming at the sub-pixel level steel coil image, extracting a complete circular ring in the sub-pixel level steel coil image by adopting Hough transformation;
s22: calculating gradient information of all points from the sub-pixel edge detection points, drawing edge gradient lines along the gradient direction, accumulating all normal lines with an accumulator, and taking the point with the largest summation value as the circle center position coordinates (x₀, y₀);
S23: calculating the distances from all edge points to the circle center from the center position, taking the most frequent distance value as the radius, and discarding incomplete circle regions by setting a completeness threshold for the circle to be detected; obtaining the maximum and minimum circle radii corresponding to the circular ring boundary, denoted R₂ and R₁ respectively, and thereby determining the circular ring area of the steel coil.
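The radius vote of S22/S23 can be sketched as follows: given a center, the most frequent rounded center-to-edge distance wins, and a ring supported by too few edge points is rejected. The 0.5 completeness value and the function name are illustrative assumptions; the patent only states that a completeness threshold is set.

```python
import math
from collections import Counter

def estimate_radius(edge_points, center, completeness=0.5):
    """Most-frequent center-to-edge distance, mirroring steps S22/S23.

    edge_points: iterable of (x, y) edge coordinates.
    center:      (x0, y0) circle center from the gradient-line vote.
    Returns the winning radius, or None when it is supported by fewer
    than `completeness` of the edge points (incomplete ring).
    """
    cx, cy = center
    dists = [round(math.hypot(x - cx, y - cy)) for x, y in edge_points]
    radius, votes = Counter(dists).most_common(1)[0]
    return radius if votes / len(dists) >= completeness else None

# Synthetic ring: ~628 points on a circle of radius 40 around (100, 100).
pts = [(100 + 40 * math.cos(t / 100), 100 + 40 * math.sin(t / 100))
       for t in range(628)]
print(estimate_radius(pts, (100, 100)))  # -> 40
```

Rounding the distances is what turns the vote into a histogram; a real implementation would bin at sub-pixel resolution, since the edges themselves are sub-pixel.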
Further, the step S3 specifically includes:
s31: flattening the circular ring area of the steel coil into a rectangular area;
s32: taking any point A in the rectangular area, wherein the coordinates of the point A are (x, y), the corresponding point in the circular area of the steel coil is A ', calculating the angle value and the length value of the point A' under the polar coordinates, and then obtaining the floating point coordinates of the circular area through a polar coordinate transformation formula;
s33: calculating the pixel value of the point A' through bilinear interpolation algorithm;
s34: according to the corresponding relation between the pixel points of the circular ring area and the rectangular area of the steel coil in the space position and the pixel value, calculating to obtain the pixel value of the point A in the rectangular area; and traversing each point of the rectangular area to obtain all pixel values of the rectangular area to be output, and finally obtaining the rectangular image of the steel coil after the circular ring area of the steel coil is unfolded.
Further, in S31, the correspondence between the flattened rectangular area and the steel coil circular ring area before transformation is:
the length of the rectangle equals the outer arc length of the ring, l = (θ₁ + θ₂)·R₂, where θ₁ is the radian between the left radius of the arc and the initial line, θ₂ is the radian between the right radius of the arc and the initial line, and R₂ is the maximum circle radius corresponding to the ring boundary; the height is the difference between the maximum and minimum circle radii of the ring boundary, R₂ - R₁.
Further, the step S32 specifically includes:
let the angle corresponding to each unit pixel length be Δθ = 1/R₂, where Δθ is the radian corresponding to a unit pixel, R₂ is the maximum circle radius corresponding to the ring boundary, and y is the ordinate of the pixel point in the rectangular area; the angle value θ_A of A′ under polar coordinates in the circular ring area is:
θ_A = x·Δθ (1)
The length value R_A of point A′ in polar coordinates is:
R_A = R₂ - y (2)
The floating-point coordinates (x′, y′) of point A′ in the circular ring area are obtained through the polar coordinate transformation:
x′ = x₀ + R_A·cos θ_A (3)
y′ = y₀ + R_A·sin θ_A (4)
Further, the step S33 specifically includes:
calculating the pixel value f(x′, y′) of point A′ from its 4-neighborhood pixel values via the bilinear interpolation formula of two-dimensional space;
let x′ = i + u, (5)
y′ = j + v (6)
where u, v ∈ (0, 1) denote the fractional parts of the coordinates and i, j denote the integer parts;
then:
f(x′, y′) = (1-u)(1-v)·f(i, j) + (1-u)v·f(i, j+1) + u(1-v)·f(i+1, j) + uv·f(i+1, j+1) (7)
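Formulas (1)-(7) combine into the following sketch, which unwraps a ring into a rectangle and fills each rectangle pixel by bilinear interpolation. It is a minimal pure-Python illustration under stated assumptions (images as nested row-major lists, a full 2π ring, a ring that fits inside the image with a 1-pixel margin), not the patent's implementation.

```python
import math

def bilinear(img, xf, yf):
    """Formula (7): interpolate img (row-major, img[y][x]) at the
    floating-point location (xf, yf) from its 4 integer neighbors."""
    i, j = int(xf), int(yf)  # (5)-(6): integer parts
    u, v = xf - i, yf - j    # fractional parts
    return ((1 - u) * (1 - v) * img[j][i]
            + (1 - u) * v * img[j + 1][i]
            + u * (1 - v) * img[j][i + 1]
            + u * v * img[j + 1][i + 1])

def unwrap_ring(img, x0, y0, r1, r2):
    """Flatten the ring centered at (x0, y0) with radii r1 < r2 into a
    rectangle of width round(2*pi*r2) and height r2 - r1, applying
    formulas (1)-(4) for each rectangle pixel (x, y)."""
    width, height = round(2 * math.pi * r2), r2 - r1
    rect = []
    for y in range(height):
        row = []
        for x in range(width):
            theta = x / r2                   # (1): theta_A = x * dtheta
            r_a = r2 - y                     # (2): outer edge at y = 0
            xf = x0 + r_a * math.cos(theta)  # (3)
            yf = y0 + r_a * math.sin(theta)  # (4)
            row.append(bilinear(img, xf, yf))
        rect.append(row)
    return rect
```

Because y = 0 maps to the outer radius R₂, the top row of the rectangle corresponds to the outermost circle, exactly as equation (2) prescribes.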
further, the step S4 specifically includes:
s41: extracting end face character features in the rectangular image of the steel coil with the CSPDarknet53 network in YOLOv4, and outputting a low-level global feature map;
s42: performing character-region enhanced feature extraction on the low-level global feature map through the spatial pyramid pooling (SPP) structure and the PANet structure in the feature pyramid, to obtain character prediction output feature maps;
s43: converting the extracted enhanced features into character recognition prediction results through the YOLO Head.
Further, the CSPDarknet53 network in step S41 is modified as follows: a cross stage partial network (CSPNet) structure is applied to the stacked residual unit (Resblock_body) structure; the original residual block stack is split into two parts, one part keeps the stacked form unchanged, the other part serves as a residual edge and passes through a convolution layer, and the two parts are then concatenated. The improved structure effectively reduces computational complexity, further improving the detection efficiency on rectangular steel coil images. The activation function of the convolution block DarknetConv2D is changed from LeakyReLU to Mish, avoiding the saturation caused by a capped activation function and yielding a more stable network gradient flow; as a smooth activation function, Mish lets image information penetrate deeper into the network structure, improving the accuracy and generalization ability of character recognition in images.
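For reference, Mish is defined as x·tanh(softplus(x)); a minimal sketch (pure Python, not the patent's code; the overflow guard threshold is an implementation assumption):

```python
import math

def mish(x: float) -> float:
    """Mish activation: x * tanh(softplus(x)), softplus(x) = ln(1 + e^x).

    Smooth everywhere (unlike LeakyReLU's kink at 0) and unbounded
    above, so large positive activations do not saturate; small
    negative inputs still pass a damped, non-zero signal.
    """
    if x > 20.0:  # softplus(x) ~= x here; avoids exp overflow
        return x * math.tanh(x)
    return x * math.tanh(math.log1p(math.exp(x)))

print(mish(0.0))  # exactly 0: 0 * tanh(ln 2)
```

The lack of an upper bound is what the description credits with avoiding saturation: for large x, mish(x) ≈ x, so gradients keep flowing.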
Further, the step S41 specifically includes:
s411: performing size normalization on the rectangular image of the steel coil to obtain a three-channel picture, and inputting the three-channel picture into a CSPDarknet53 network;
s412: a Darknet convolution is performed, followed by five Resblock_body operations (essentially large convolution blocks composed of a series of residual networks), to output a low-level global feature map.
Further, the step S42 specifically includes:
s421: accessing the low-level global feature map into an SPP structure, separating remarkable context features by utilizing the largest pooled kernels with different sizes, stacking pooled results, and carrying out multi-scale feature fusion;
s422: a top-down pyramid is added by adopting a PANet structure, extraction of semantic information and low-layer strong positioning information is enhanced, parameter aggregation is carried out on three feature images with different scales from three enhanced feature layers, and three character prediction output feature images with different scales are output.
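The SPP idea in S421, max-pooling one feature map with several kernel sizes at stride 1 so all outputs keep the same spatial size and can be stacked, can be sketched as follows. This is a pure-Python, single-channel toy; the (11, 9, 5, 1) kernel set follows the sizes listed later in the description, and border handling by window clamping is an assumption.

```python
def maxpool_same(img, k):
    """Stride-1 k x k max pooling with the window clamped at the
    borders, so the output has the same spatial size as the input."""
    h, w, r = len(img), len(img[0]), k // 2
    return [[max(img[cy][cx]
                 for cy in range(max(0, y - r), min(h, y + r + 1))
                 for cx in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)]
            for y in range(h)]

def spp(feature_map, kernels=(11, 9, 5, 1)):
    """Toy SPP block: pool the same map at several scales and return
    the results as a list of same-sized channels ready for stacking."""
    return [maxpool_same(feature_map, k) for k in kernels]
```

Because every pooled map keeps the input size, concatenating them along the channel axis is what fuses the multi-scale context the text describes; the 1×1 branch passes the map through unchanged.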
Further, the step S43 specifically includes:
s431: processing the three character prediction output feature maps of different scales separately; setting three prior boxes for each feature point of each feature layer to extract accurate single-character regions; decoding the boxes from the prior boxes and the prediction output feature map; and performing confidence decoding on the characters in the boxes with a sigmoid function, so that the confidence values lie in the interval [0, 1];
s432: decoding the character category to obtain the category and confidence corresponding to the characters of each frame; setting a confidence threshold, screening out frames lower than the threshold, and outputting a character recognition prediction result of the whole network through non-maximum suppression.
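The decode-and-filter stage of S431/S432 can be sketched as sigmoid confidence decoding followed by greedy non-maximum suppression. The 0.5/0.45 thresholds, the (x1, y1, x2, y2) box format, and the function names are illustrative assumptions; the patent only fixes that confidences are sigmoid-decoded into [0, 1] and that NMS produces the final output.

```python
import math

def sigmoid(z: float) -> float:
    """Confidence decoding into (0, 1), as in S431."""
    return 1.0 / (1.0 + math.exp(-z))

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def decode_and_nms(boxes, raw_scores, conf_th=0.5, iou_th=0.45):
    """Keep boxes whose decoded confidence reaches conf_th, then
    greedily suppress any box overlapping an already-kept box."""
    cand = sorted(((sigmoid(s), b) for s, b in zip(raw_scores, boxes)
                   if sigmoid(s) >= conf_th), reverse=True)
    kept = []
    for score, box in cand:
        if all(iou(box, k) < iou_th for _, k in kept):
            kept.append((score, box))
    return kept
```

With per-character boxes this greedy pass keeps exactly one box per printed character, which is what makes the downstream string assembly reliable.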
According to a second aspect of the present invention, there is provided a steel coil end face character recognition device based on a lightweight feature extraction network, the recognition device comprising:
the sub-pixel level steel coil image acquisition unit is used for acquiring the accurate edge area of each steel coil in the image, and dividing the steel coil image to obtain a sub-pixel level steel coil image;
the steel coil circular ring area determining unit is used for calculating the coordinates of the circle center position of the steel coil and the radius value information of the circular ring boundary aiming at the sub-pixel level steel coil image to determine the steel coil circular ring area;
the steel coil rectangular image acquisition unit is used for flattening the steel coil circular ring area into a rectangular area and determining pixel values of the rectangular area to obtain a steel coil rectangular image after the steel coil circular ring area is unfolded;
and the steel coil rectangular image recognition unit is used for recognizing the steel coil end face characters in the steel coil rectangular image by adopting a lightweight feature extraction network model.
According to a third aspect of the present invention, there is provided a steel coil end face character recognition system based on a lightweight feature extraction network, the system comprising: a processor and a memory for storing executable instructions; wherein the processor is configured to execute the executable instructions to perform the steel coil end face character recognition method based on the lightweight feature extraction network as described in any one of the above aspects.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, implements the steel coil end face character recognition method based on the lightweight feature extraction network as described in any one of the above aspects.
The technical scheme of the invention has the following beneficial effects:
according to the technical scheme, the steel coil image is extracted by utilizing the video monitoring image picture in the factory, no new hardware equipment is needed, interference of a non-steel coil area is effectively eliminated, and a sub-pixel level accurate steel coil edge area is obtained, so that the center position coordinate and the radius of a circular ring can be extracted by using accurate edge information, the pixel loss generated by flattening end face characters is reduced, an improved end face character recognition deep learning algorithm is adopted, a light-weight feature extraction network is adopted, more shallow feature information is reserved, the end face character detection accuracy is guaranteed, and meanwhile, the detection speed is effectively improved.
The invention accords with the characteristics of automation and digital intelligence of the production site, does not need to add image acquisition equipment, saves detection cost, can accurately and rapidly identify the end face characters of the label of the steel coil with different fonts, sizes and imaging definition, reduces manual detection errors and saves label information acquisition time. The method is beneficial to realizing the whole process traceable target of the steel production information, and reduces the loss caused by the fact that the steel production information cannot be inquired or is inquired by mistake.
Drawings
Fig. 1A is a step diagram of a steel coil end face character recognition processing method of the invention; fig. 1B is a flowchart of the process for recognizing characters on the end face of a steel coil according to the invention;
FIG. 2A is a schematic view of a polar-transformed torus region; FIG. 2B is a schematic diagram of a rectangular region after polar coordinate conversion;
FIG. 3 is a schematic diagram of SPP structure composition;
fig. 4 is a diagram of the improved YOLOv4 network structure (Conv: convolutional layer; Upsampling: upsampling; Downsampling: downsampling; Concat: concatenation).
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein, for example.
Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
"A plurality" means two or more.
The term "and/or" used in this disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone.
The technical scheme of the invention provides a light-weight feature extraction network steel coil end face character recognition method with high reliability and high recognition rate, which is shown in fig. 1A and 1B and comprises the following steps:
s1: and acquiring the accurate edge area of each steel coil in the image, and dividing the image of the steel coil.
For a steel coil stacking image of a steel coil storage area, segmenting a background and steel coils in a dynamic threshold segmentation mode, extracting all edges of the steel coils in the image, and obtaining complete steel coil image information; performing coarse positioning of edge points by utilizing an improved filtering operator in combination with a sub-pixel edge detection algorithm, and performing interpolation operation and edge refinement on the image by using a cubic spline interpolation method to obtain all sub-pixel-level steel coil edge images;
s2: and calculating the circle center coordinates and the radius of the circular ring of the steel coil through Hough transformation.
Edge gradient lines are drawn along the gradient direction to determine the circle center position. The distances from all edge points to the circle center are calculated, the most frequent distance value is taken as the radius R, and incomplete circle regions are discarded by setting a completeness threshold for the circle to be detected. This yields the center position coordinates (x₀, y₀) of the complete steel coil ring and the maximum and minimum circle radii corresponding to the ring boundary, denoted R₂ and R₁ respectively.
S3: performing polar coordinate transformation on the obtained center coordinates and radius value information, flattening the circular ring area into a rectangular area, and simultaneously determining pixel values of the rectangular area by using bilinear interpolation to obtain a rectangular image after the circular ring area is unfolded;
s4: and adopting a lightweight characteristic extraction network model to identify characters in the flattened rectangular image.
Wherein, the specific steps of S3 are as follows:
s31: dividing the region where the detected ring is located according to the correspondence between the transformed rectangular region and the ring region before transformation, and obtaining the center coordinates and radius of the detected ring;
the length of the rectangle equals the outer arc length of the ring, l = (θ₁ + θ₂)·R₂, where θ₁ is the radian between the left radius of the arc and the initial line, θ₂ is the radian between the right radius of the arc and the initial line, and R₂ is the maximum radius corresponding to the ring boundary; the height corresponds to the radius difference R₂ - R₁. The Hough transform of S2 segments the region where the detected ring lies (i.e. the ROI region) and yields the center coordinates (x₀, y₀) and the radius R. For a point A in the unfolded area, let its coordinates be (x, y) and its corresponding point in the annular area be A′.
S32: calculating to obtain an angle value and a length value of a corresponding point of the circular ring area under a polar coordinate, and then obtaining a floating point coordinate of the circular ring area through a polar coordinate transformation formula;
let the angle corresponding to each unit pixel length be Δθ = 1/R₂, where Δθ is the radian corresponding to a unit pixel, R₂ is the maximum circle radius corresponding to the ring boundary, and y is the ordinate of the pixel point in the rectangular area; the angle value θ_A of A′ under polar coordinates in the circular ring area is:
θ_A = x·Δθ (1)
The length R_A of point A′ in polar coordinates is:
R_A = R₂ - y (2)
The floating-point coordinates (x′, y′) of point A′ in the circular ring area are then obtained through the polar coordinate transformation:
x′ = x₀ + R_A·cos θ_A (3)
y′ = y₀ + R_A·sin θ_A (4)
S33: the pixel value of A′ is calculated by a bilinear interpolation algorithm.
The coordinates (x′, y′) of point A′ are usually non-integer, so the pixel value of A′ cannot be read directly; it is computed from the 4-neighborhood pixel values via the bilinear interpolation formula of two-dimensional space;
let x′ = i + u, (5)
y′ = j + v (6)
where u, v ∈ (0, 1) denote the fractional parts of the coordinates and i, j denote the integer parts;
then:
f(x′, y′) = (1-u)(1-v)·f(i, j) + (1-u)v·f(i, j+1) + u(1-v)·f(i+1, j) + uv·f(i+1, j+1) (7)
s34: and according to the corresponding relation between the spatial position and the pixel value of the pixel point of the circular ring area and the rectangular area, calculating to obtain the pixel value of the pixel point in the rectangular area. And traversing each point of the rectangular area to obtain all pixel values of the rectangular area to be output, and finally obtaining a rectangular image, namely a character picture of the end face of the steel coil, after the corresponding annular area of the steel coil is unfolded.
Wherein, the specific steps of S4 are as follows:
s41: end face character feature extraction is performed by the CSPDarknet53 network.
The flattened steel coil end face character pictures are size-normalized into three-channel pictures of size 416 × 416. A Darknet convolution is performed first, followed by five Resblock_body operations (essentially large convolution blocks composed of a series of residual networks), which compress the height and width of the feature layer and expand the number of channels, retaining higher-level character semantic information and outputting a 13 × 13 × 1024 low-level global feature map;
s42: character region enhanced feature extraction is performed on the low-level global feature map by a spatial pyramid pooling structure (spatial pyramid pooling, SPP) and a PANet structure in the feature pyramid.
The 13 × 13 × 1024 feature map is fed into the SPP structure, where maximum pooling kernels of different sizes separate salient context features, and the pooled results are stacked for multi-scale feature fusion. A top-down pyramid is then added with the PANet structure to enhance the extraction of semantic information and low-level strong positioning information; parameter aggregation is performed on three feature maps of different scales from the three enhanced feature layers, and three character prediction output feature maps of different scales are output;
S43: The extracted enhanced features are converted into prediction results by the YOLO Head.
The three enhanced feature layers obtained in the previous step are processed separately: three prior boxes are set for each feature point of each feature layer to extract accurate single-character regions, and the prior boxes are adjusted according to the predicted output features. Confidence decoding is applied to the characters in each box using a sigmoid function, so the decoded confidence values lie in the interval [0, 1]. The character category is then decoded to obtain the category and confidence corresponding to the characters of each box. Finally, a confidence threshold is set, boxes below the threshold are screened out, and the character recognition prediction result of the whole network is output after non-maximum suppression.
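The decoding and screening described in S43 can be sketched as follows; a minimal NumPy illustration in which the corner-format boxes, the 0.5 confidence threshold and the 0.45 IoU threshold are assumptions, not values fixed by the method:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def decode_and_filter(boxes, raw_conf, conf_thresh=0.5, iou_thresh=0.45):
    """Sigmoid-decode raw confidences into [0, 1], drop boxes below the
    confidence threshold, then apply non-maximum suppression."""
    conf = sigmoid(np.asarray(raw_conf, dtype=float))
    keep = sorted((i for i in range(len(boxes)) if conf[i] >= conf_thresh),
                  key=lambda i: conf[i], reverse=True)
    out = []
    while keep:
        best = keep.pop(0)
        out.append(best)
        keep = [i for i in keep if iou(boxes[best], boxes[i]) < iou_thresh]
    return out

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (30, 30, 40, 40)]
picked = decode_and_filter(boxes, raw_conf=[2.0, 1.0, 1.5])  # keeps 0 and 2
```

The two heavily overlapping boxes collapse to the higher-confidence one, while the distant box survives.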
The improvement of the network structure in step S41 is specifically:
CSPDarknet53 in YOLOv4 is modified in two ways: 1. A cross-stage partial network (CSPNet) structure is used in place of the Resblock_body structure: the original residual block stack is split into two parts, one kept unchanged in stacked form and the other used as a residual edge, and the two parts are connected after each passes through a convolution layer. 2. The activation function of the convolution block DarknetConv2D is changed from Leaky ReLU to Mish. Optimizing the network structure by splitting reduces the number of residual block stackings, avoiding the computation of unnecessary feature parameters, improving the accuracy and speed of feature extraction, and reducing memory consumption; the Mish activation function has no upper bound, avoiding saturation, while its smooth curve favors the extraction of deep information.
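For reference, the Mish activation adopted above, next to the Leaky ReLU it replaces; a small NumPy sketch:

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

def mish(x):
    """Mish: x * tanh(softplus(x)); smooth and unbounded above."""
    return x * np.tanh(softplus(x))

def leaky_relu(x, alpha=0.1):
    """The activation Mish replaces in DarknetConv2D."""
    return np.where(x > 0, x, alpha * x)
```

For large positive inputs mish(x) approaches x (no saturation), while near zero it stays smooth instead of kinked like Leaky ReLU.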
The specific steps of step S42 are:
An SPP structure is added between two DS convolution structures (each formed by three convolution, normalization and activation functions) so that the input image size is not limited; depthwise separable convolutions replace the continuous convolution structure to reduce the number of parameters, and maximum pooling layers of 4 different scales (11×11, 9×9, 5×5, 1×1) enlarge the receptive field and separate the most salient contextual features while preserving operation speed. Meanwhile, PANet repeatedly extracts features, and the bottom-up pyramid transfers the strong low-level localization features of the characters, fusing multi-level semantic and localization information and improving the detection of multi-scale character targets.
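The parameter saving from replacing a standard convolution with a depthwise separable one can be verified with simple arithmetic; the 3×3, 512-to-1024-channel layer below is an illustrative choice:

```python
def conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def ds_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel + 1x1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 512, 1024)    # 4718592 weights
ds = ds_conv_params(3, 512, 1024)  # 528896 weights
```

For this layer the separable form needs roughly a ninth of the weights, which is where the speed and memory gains come from.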
Thus, the recognition of the characters on the end face of the steel coil based on the lightweight characteristic extraction network is completed.
Examples
The embodiment comprises the following steps:
S1: acquiring the accurate edge areas of each steel coil in the image, and segmenting the steel coil image;
S2: calculating the circle center coordinates of the steel coil and the corresponding maximum and minimum circle radii of the circular ring boundary through Hough transformation;
S3: performing a polar coordinate transformation on the obtained circle center coordinates and radius value information, flattening the circular ring area into a rectangular area, and determining the pixel values of the rectangular area with bilinear interpolation to obtain the rectangular image after the circular ring area is unfolded;
S4: applying a lightweight feature extraction network model to the rectangular image generated in S3, and recognizing the steel coil end face characters in the flattened rectangular image.
The following description is provided in connection with the specific embodiments shown in fig. 2A and 2B.
In one embodiment, the specific process of S1 is: selecting a steel coil storage area from a user interface through a monitoring camera arranged in a steel coil storage area to intercept steel coil images, determining a proper threshold value through a smoothing filter operator (binominal filter) by combining a dynamic threshold segmentation algorithm to segment the background and the steel coil because the steel coil images are affected by shooting field environment and the background is uneven, and simultaneously performing rough positioning of edge points and edge refinement on the images by utilizing an improved mathematical morphological gradient filter operator by combining a sub-pixel edge detection algorithm to extract all the steel coil edge images at a sub-pixel level;
in a specific embodiment, the specific process of S2 is: extracting a complete ring in an image by Hough transformation, and obtaining the center position coordinate of the ring and the radius information of the ring, wherein the specific calculation process is as follows:
Gradient information of all points is calculated from the sub-pixel edge detection points of step S1, and edge gradient lines, i.e. the normals of the circles, are drawn along the gradient directions. All normals are accumulated by an accumulator: the more normals intersect at a point, the larger its accumulated value and the closer it is to the circle center, so the center position is finally determined through accumulation and threshold setting. The distances from all edge points to the center are then calculated, the distance value occurring with the highest frequency is taken as the radius, and incomplete circular regions are discarded by setting a completeness threshold for the circles to be detected.
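The normal-line voting can be sketched on a synthetic circle; here the edge points and their radial gradients are supplied directly rather than measured from an image, and the marching step count is an assumption:

```python
import numpy as np

def hough_center(edge_pts, grads, shape, steps=60):
    """Vote along each edge point's gradient line (the circle normal);
    the accumulator peak is taken as the circle center."""
    acc = np.zeros(shape, dtype=int)
    h, w = shape
    for (x, y), (gx, gy) in zip(edge_pts, grads):
        n = np.hypot(gx, gy)
        ux, uy = gx / n, gy / n
        for t in range(1, steps):
            for s in (1, -1):          # march both ways along the normal
                col = int(round(x + s * t * ux))
                row = int(round(y + s * t * uy))
                if 0 <= col < w and 0 <= row < h:
                    acc[row, col] += 1
    row, col = np.unravel_index(acc.argmax(), acc.shape)
    return col, row                     # (cx, cy)

# synthetic ring: 90 edge points on a circle of radius 20 around (50, 50)
theta = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
pts = [(50.0 + 20.0 * np.cos(t), 50.0 + 20.0 * np.sin(t)) for t in theta]
grads = [(np.cos(t), np.sin(t)) for t in theta]   # radial gradients
cx, cy = hough_center(pts, grads, shape=(100, 100))
# radius: the most frequent center-to-edge distance
dists = [int(round(np.hypot(px - cx, py - cy))) for px, py in pts]
radius = int(np.bincount(dists).argmax())
```

Every normal of a circle passes through its center, so the accumulator peaks there; the radius then falls out as the modal edge-to-center distance, exactly the two-stage scheme described above.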
In a specific embodiment, since the printing of the steel coil end face characters is affected by the flatness of the coiled end face, the characters may exhibit missing print, the character recognition error rate on the coiled image is high, and subsequent steel coil quality tracking is inconvenient. The circular ring is therefore flattened according to the circle center coordinates and radii obtained in step S2; the specific process of step S3 is as follows:
S31: In order to transform the end face characters within the ring into a horizontal arrangement, the ring area needs to be flattened into a rectangular area. The transformed rectangular area and the circular ring area before transformation have a corresponding relationship, as shown in fig. 2 (a) and (b): the length of the rectangle is equal to the outer arc length of the ring,

length = (φ₁ − φ₂) · R₂

where φ₁ is the radian between the left-side radius of the arc and the initial line, φ₂ is the radian between the right-side radius of the arc and the initial line, and R₂ is the maximum circle radius corresponding to the circular ring boundary; the height is the difference between the maximum and minimum circle radii of the ring boundary, R₂ − R₁.

In this example R₂ = 9.5, R₁ = 7, and π is taken as 3.14.
For a certain point a in the expansion area, the coordinate thereof is (x, y) = (6, 8), and the corresponding point in the annular area is a ', and the coordinate thereof is (x ', y ').
S32: calculating to obtain an angle value and a length value of a corresponding point of the circular ring area under a polar coordinate, and then obtaining a floating point coordinate of the circular ring area through a polar coordinate transformation formula;
Let the angle corresponding to each unit pixel length be Δθ = 1/R₂, where Δθ is the radian corresponding to a unit pixel and R₂ is the maximum circle radius corresponding to the circular ring boundary. The angle value of A′ under polar coordinates in the circular ring area is then:

θ_A = x · Δθ = x / R₂   (1)

By calculation: θ_A = 3.843°.

With y the ordinate value of the pixel point in the rectangular area, the length R_A of point A′ in polar coordinates is:

R_A = R₂ − y   (2)

By calculation: R_A = 9.5 × 3.14 − 8 = 21.83.

At this time, the floating-point coordinates (x′, y′) of circular ring area point A′ are obtained by the polar coordinate transformation, where (x₀, y₀) is (0, 0):

x′ = x₀ + R_A · cos θ_A   (3)
y′ = y₀ + R_A · sin θ_A   (4)

By calculation:
x′ = 0 + 21.83 · cos 3.843° = 21.78
y′ = 0 + 21.83 · sin 3.843° = 1.46
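Relations (1) to (4) can be collected into a single mapping; this sketch keeps only the symbolic form, since the document's worked numbers use a scaled radius:

```python
import math

def rect_to_ring(x, y, r2, x0=0.0, y0=0.0):
    """Map point (x, y) of the flattened rectangle back into the ring:
    the column fixes the angle along the outer arc (eq. 1), the row fixes
    the distance from the outer radius (eq. 2); eqs. (3)-(4) then give
    the Cartesian point inside the ring."""
    theta = x / r2            # eq. (1): 1/R2 radian per unit pixel
    r = r2 - y                # eq. (2): row 0 lies on the outer circle
    return x0 + r * math.cos(theta), y0 + r * math.sin(theta)
```

Scanning x over [0, 2πR₂) and y over [0, R₂ − R₁) visits every ring pixel once, which is exactly the traversal of S34.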
S33: The pixel value of A′ is calculated by a bilinear interpolation algorithm.

The coordinates (x′, y′) of point A′ are usually non-integer, so the pixel value of A′ cannot be read directly; it is obtained from the 4-neighborhood pixel values through the bilinear interpolation formula in two-dimensional space, giving f(x′, y′) = f(21.78, 1.46).

Let x′ = i + u,   (5)
y′ = j + v   (6)

where i, j are the integer parts of the coordinates and u, v ∈ (0, 1) are the fractional parts. By substitution: x′ = 21 + 0.78, y′ = 1 + 0.46.

Then:

f(i + u, j + v) = (1 − u)(1 − v) · f(i, j) + (1 − u) · v · f(i, j + 1) + u · (1 − v) · f(i + 1, j) + u · v · f(i + 1, j + 1)   (7)

By calculation:
f(21.78, 1.46) = f(21 + 0.78, 1 + 0.46)
= (1 − 0.78) × (1 − 0.46) × f(21, 1) + (1 − 0.78) × 0.46 × f(21, 2) + 0.78 × (1 − 0.46) × f(22, 1) + 0.78 × 0.46 × f(22, 2)
= 0.12 × f(21, 1) + 0.10 × f(21, 2) + 0.42 × f(22, 1) + 0.36 × f(22, 2)
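Equation (7) in code; a NumPy sketch in which indexing the array as img[x, y], matching the text's f(x, y), is a convention assumption:

```python
import numpy as np

def bilinear(img, xp, yp):
    """Interpolate img at non-integer (xp, yp) from the 4 neighbours,
    weighted by the fractional parts u, v (equation (7)); the array is
    indexed img[x, y] to match the text's f(x, y) convention."""
    i, j = int(np.floor(xp)), int(np.floor(yp))
    u, v = xp - i, yp - j
    return ((1 - u) * (1 - v) * img[i, j]
            + (1 - u) * v * img[i, j + 1]
            + u * (1 - v) * img[i + 1, j]
            + u * v * img[i + 1, j + 1])
```

At an exactly integer point the weights collapse to a single pixel, so the interpolation reduces to a plain lookup.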
S34: According to the correspondence between the pixel points of the circular ring area and the rectangular area in spatial position and pixel value, the f(x′, y′) obtained above is the pixel value of point A in the rectangular area. Each point of the rectangular area is traversed to obtain all pixel values to be output, finally yielding the rectangular image after the corresponding circular ring area of the steel coil is unfolded, namely the steel coil end face character picture.
In a specific implementation example, the specific process of S4 is:
S41: End face character feature extraction is performed with the CSPDarknet53 network. The flattened steel coil end face character pictures are size-normalized into three-channel pictures of 416×416; a Darknet convolution is performed first, followed by five Resblock_body operations (essentially large convolution blocks formed by a series of residual networks) that compress the height and width of the feature layers and expand the number of channels; finally the feature layers of the last three shapes are used for subsequent operations, retaining higher-level character semantic information;
S42: Enhanced feature extraction is performed by the spatial pyramid pooling (SPP) structure and the PANet structure in the feature pyramid: three convolution operations are applied to the 13×13×1024 feature layer, the SPP structure is then connected (the SPP has four branches that apply maximum pooling with kernels of different sizes and stack the pooled results, as shown in fig. 3), and three further convolution operations follow;
S43: The extracted features are converted into a prediction result by the YOLO Head. The feature layer after the three convolutions is 2× up-sampled to obtain deep features; the up-sampled 26×26 feature layer is stacked with the 26×26×512 feature layer obtained in the backbone network, then further up-sampled and stacked with the 52×52×256 feature layer to complete the feature pyramid structure. Meanwhile, a top-down pyramid is added via the PANet structure to strengthen the extraction of semantic information and low-level strong localization information; the image size is reduced through convolution and down-sampling, and shallow information is extracted. Predictions are made on the feature maps of different scales, decoding and post-processing operations are finally applied to the prediction results, and the character category is obtained through a 1×1 convolution to produce the final recognition result.
The improvement of the network structure in step S41 specifically includes:
CSPDarknet53 in YOLOv4 (as shown in FIG. 4) is modified in two ways: 1. A cross-stage partial network (CSPNet) structure is used in place of the Resblock_body structure: the original residual block stack is split into two parts, one kept unchanged in stacked form and the other used as a residual edge, and the two parts are connected after each passes through a convolution layer. 2. The activation function of the convolution block DarknetConv2D is changed from Leaky ReLU to Mish. Optimizing the network structure by splitting reduces the number of residual block stackings, avoiding the computation of unnecessary feature parameters, improving the accuracy and speed of feature extraction, and reducing memory consumption; the Mish activation function has no upper bound, avoiding saturation, while its smooth curve favors deep information extraction.
The specific process of step S42 is as follows:
The SPP structure is added between the two blocks of three convolution, normalization and activation functions, so that the input image size is not limited; internally, depthwise separable convolutions replace the continuous convolution structure to reduce the number of parameters, and maximum pooling layers of 4 different scales (11×11, 9×9, 5×5, 1×1) are used to enlarge the receptive field and separate the most salient contextual features while preserving operation speed. Meanwhile, PANet repeatedly extracts features, and the bottom-up pyramid transfers the strong low-level localization features, fusing multi-level semantic and localization information and improving the detection of multi-scale targets.
The improved lightweight network structure effectively balances the accuracy and speed required at an industrial site. Reducing the number of residual block stacks and replacing the continuous convolution structure with depthwise separable convolutions reduces the number of parameters and improves the operation speed, while avoiding the disappearance of feature information from overly deep convolution layers; the PANet structure repeatedly extracts features, better exploiting shallow feature information and effectively fusing the features of each level. Multi-frame image fusion further improves recognition accuracy: multiple frames are captured and recognized, and voting on the recognition results effectively reduces the influence of camera shake on the result during strip steel production.
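The multi-frame voting mentioned above reduces to a majority count; a minimal sketch, with the coil number string invented for illustration:

```python
from collections import Counter

def vote(readings):
    """Majority-vote the character strings recognized across frames,
    suppressing a single blurred or shaken frame's misread."""
    return Counter(readings).most_common(1)[0][0]

# hypothetical coil numbers read from five frames of the same coil
frames = ["H21C0453", "H21C0453", "H2lC0453", "H21C0453", "H21C0459"]
label = vote(frames)  # -> "H21C0453"
```

Voting could equally be done per character position, which tolerates different frames misreading different characters.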
In summary, the invention provides a steel coil end face character recognition method based on a lightweight feature extraction network, belonging to the technical field of steel coil production automation. The character number on the end face of a steel coil corresponds to the coil's production information and is key to tracing the production process and measuring quality. The method uses steel coil stacking images obtained from the steel coil storage area: first, a sub-pixel edge detection algorithm combined with dynamic threshold segmentation identifies the complete steel coil rings and incomplete arcs in the image; then the gradient of the steel coil image is computed on each complete ring through Hough transformation to obtain the circle center coordinates and circle radii; the ring is then flattened based on the center coordinates so that the end face characters are horizontally arranged, the pixel values of the rectangular area are determined through the correspondence between pixels and positions, and a deep learning neural network model locates and recognizes the flattened end face characters. The method accurately extracts the edge features of the steel coil contours in the image, flattens the annular area while maximally preserving the pixel-position correspondence, and recognizes the end face characters with a lightweight feature extraction network model, achieving high recognition rate and high reliability.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be apparent to those skilled in the art that the above implementation may be implemented by means of software plus necessary general purpose hardware platform, or of course by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.

Claims (10)

1. A steel coil end face character recognition method based on a lightweight characteristic extraction network is characterized by comprising the following steps:
S1: acquiring accurate edge areas of all steel coils in the image, and segmenting the steel coil image to obtain a sub-pixel level steel coil image;
S2: for the sub-pixel level steel coil image, calculating the circle center position coordinates of the steel coil and the radius value information of the circular ring boundary, and determining the circular ring area of the steel coil;
S3: flattening the steel coil circular ring area into a rectangular area, determining the pixel values of the rectangular area, and obtaining the rectangular image of the steel coil after the circular ring area is unfolded;
S4: recognizing the steel coil end face characters in the rectangular image of the steel coil by means of a lightweight feature extraction network model.
2. The method for recognizing the end face character of the steel coil according to claim 1, wherein S1 specifically comprises:
S11: acquiring a steel coil stacking image;
S12: dividing the steel coil from the background by adopting a dynamic threshold segmentation mode, extracting all edges of the steel coil, and obtaining an original steel coil image;
S13: for the original steel coil image, combining a sub-pixel edge detection algorithm, performing rough positioning of edge points with an improved filtering operator, and performing interpolation and edge refinement on the image with a cubic spline interpolation method, so as to obtain the sub-pixel level steel coil image.
3. The method for recognizing the end face character of the steel coil according to claim 1, wherein the step S2 specifically comprises:
S21: aiming at the sub-pixel level steel coil image, extracting a complete circular ring in the sub-pixel level steel coil image by adopting Hough transformation;
S22: calculating gradient information of all points according to the sub-pixel edge detection points, drawing edge gradient lines along the gradient direction, accumulating all normal lines by an accumulator, and determining the point with the largest accumulated value as the circle center position coordinates (x₀, y₀);
S23: calculating the distances from all edge points to the circle center according to the circle center position, determining the distance value with the highest frequency as the radius, and discarding incomplete circular regions by setting a completeness threshold for the circles to be detected; obtaining the maximum and minimum circle radii corresponding to the circular ring boundary, respectively denoted R₂ and R₁, and thereby determining the circular ring area of the steel coil.
4. The method for recognizing the end face character of the steel coil according to claim 1, wherein the step S3 specifically comprises:
S31: flattening the circular ring area of the steel coil into a rectangular area;
S32: taking any point A in the rectangular area with coordinates (x, y), whose corresponding point in the circular ring area of the steel coil is A′; calculating the angle value and length value of point A′ under polar coordinates, and obtaining the floating-point coordinates of the circular ring area through a polar coordinate transformation formula;
S33: calculating the pixel value of point A′ through a bilinear interpolation algorithm;
S34: obtaining the pixel value of point A in the rectangular area according to the correspondence between the pixel points of the circular ring area and the rectangular area of the steel coil in spatial position and pixel value; traversing each point of the rectangular area to obtain all pixel values of the rectangular area to be output, and finally obtaining the rectangular image of the steel coil after the circular ring area of the steel coil is unfolded.
5. The method for recognizing characters on an end face of a steel coil according to claim 4, wherein in S31, the correspondence between the flattened rectangular area and the circular ring area of the steel coil before transformation is:
the length of the rectangle is equal to the outer arc length of the ring, length = (φ₁ − φ₂) · R₂, where φ₁ is the radian between the left radius of the arc and the initial line, φ₂ is the radian between the right radius of the arc and the initial line, and R₂ is the maximum radius corresponding to the circular ring boundary; the height is the difference between the maximum and minimum circle radii of the ring boundary, R₂ − R₁.
6. The method for recognizing the end face character of the steel coil according to claim 1, wherein the step S4 specifically comprises:
S41: extracting end face character features in the rectangular image of the steel coil by using the CSPDarknet53 network in YOLOv4, and outputting a low-level global feature map;
S42: performing character region enhanced feature extraction on the low-level global feature map by using the spatial pyramid pooling structure and the PANet structure in the feature pyramid to obtain character prediction output feature maps;
S43: converting the extracted enhanced features into character recognition prediction results through the YOLO Head.
7. The method of claim 6, wherein in the CSPDarknet53 network of step S41: the Resblock_body structure is changed to a cross-stage partial network structure, the original residual block stack being split into two parts, one kept unchanged in stacked form and the other used as a residual edge connected after passing through a convolution layer; and the activation function of the convolution block DarknetConv2D is Mish.
8. A steel coil end face character recognition device based on a lightweight feature extraction network, characterized in that the steel coil end face character recognition device is processed based on the steel coil end face character recognition method based on a lightweight feature extraction network according to any one of claims 1 to 7, the steel coil end face character recognition device comprising:
the sub-pixel level steel coil image acquisition unit is used for acquiring the accurate edge area of each steel coil in the image, and dividing the steel coil image to obtain a sub-pixel level steel coil image;
the steel coil circular ring area determining unit is used for calculating the coordinates of the circle center position of the steel coil and the radius value information of the circular ring boundary aiming at the sub-pixel level steel coil image to determine the steel coil circular ring area;
the steel coil rectangular image acquisition unit is used for flattening the steel coil circular ring area into a rectangular area and determining pixel values of the rectangular area to obtain a steel coil rectangular image after the steel coil circular ring area is unfolded;
and the steel coil rectangular image recognition unit is used for extracting a network model by adopting light-weight characteristics and recognizing the characters of the end face of the steel coil in the steel coil rectangular image.
9. A steel coil end face character recognition system based on a lightweight feature extraction network, the system comprising: a processor and a memory for storing executable instructions; wherein the processor is configured to execute the executable instructions to perform the steel coil end face character recognition method based on the lightweight feature extraction network according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which when executed by a processor implements the steel coil end face character recognition method based on a lightweight feature extraction network according to any one of claims 1 to 7.
CN202310205677.8A 2023-03-06 2023-03-06 Character recognition method and system for steel coil end face based on lightweight feature extraction network Pending CN116311203A (en)

Publications (1)

Publication Number Publication Date
CN116311203A 2023-06-23




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination