CN111476054B - Decoding method and electronic equipment - Google Patents
- Publication number
- CN111476054B CN111476054B CN202010378011.9A CN202010378011A CN111476054B CN 111476054 B CN111476054 B CN 111476054B CN 202010378011 A CN202010378011 A CN 202010378011A CN 111476054 B CN111476054 B CN 111476054B
- Authority
- CN
- China
- Prior art keywords
- symbol
- line
- determining
- sub
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1417—2D bar codes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1439—Methods for optical code recognition including a method step for retrieval of the optical code
- G06K7/1452—Methods for optical code recognition including a method step for retrieval of the optical code detecting bar code edges
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/146—Methods for optical code recognition the method including quality enhancement steps
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Toxicology (AREA)
- Electromagnetism (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Image Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
The invention relates to a decoding method and electronic equipment in the field of computer graphics and image processing. The method comprises the following steps: determining a plurality of row-direction pixel lines of a decoded image according to a plurality of pixel points of the decoded image containing a symbol region; determining, from the pixel points of the plurality of row-direction pixel lines, a column boundary between each column of symbols in the symbol region, and determining, from the plurality of row-direction pixel lines, a row boundary between each row of symbols in the symbol region; and determining the position of each symbol in the symbol region according to the row boundary and the column boundary, and decoding each symbol according to the determined position. Because the row-direction pixel lines are obtained from the pixel points of the decoded image, the column and row boundaries change together with the shape of the image, so each symbol can be located more precisely and decoding accuracy is improved.
Description
Technical Field
The present invention relates to the field of computer graphics and image processing, and in particular, to a decoding method and an electronic device.
Background
PDF417 (Portable Data File 417; hereinafter referred to as PDF417) is a stacked two-dimensional bar code with high density and high information capacity. A PDF417 symbol is composed of multiple rows of bar codes, each row consisting of black bars and white spaces, and it can store a large amount of data without requiring a connected database. Owing to these advantages, PDF417 is widely used in hospitals, driver's licenses, material management, freight transportation, and other fields.
To obtain the information in a PDF417 symbol, an image of the symbol is generally acquired with a scanner or a camera, and the PDF417 is then decoded from that image, thereby recovering the information it carries.
In the prior art, edge detection is applied to the PDF417 image: the edge points are projected in the horizontal and vertical directions, the image is divided into symbols along the projected edge lines, and the symbols are then decoded.
In practice, however, when the PDF417 image is acquired with a scanner or a camera, lens abnormalities may blur the black-and-white contrast of the scanned image or distort it geometrically. When the image is distorted, the edges between symbols become nonlinear, while the edge lines obtained by horizontal and vertical projection in the edge detection method remain straight and no longer match the actual curved edges; this causes cutting errors and lowers decoding accuracy.
In summary, when the PDF417 image is acquired under abnormal conditions, the conventional decoding method has a relatively high error rate.
Disclosure of Invention
The invention provides a decoding method and electronic equipment that determine the column boundary and the row boundary on row-direction pixel lines derived from the pixel points of the image, so that the boundaries change together with the shape of the decoded image; each symbol can thus be located more precisely and decoding accuracy is improved.
In a first aspect, a decoding method provided in an embodiment of the present invention is applied to an electronic device, and includes:
determining a plurality of row-direction pixel lines of a decoded image according to a plurality of pixel points of the decoded image containing a symbol region;
determining a column boundary between each column of symbols in the symbol region according to the pixel points of the plurality of row-direction pixel lines, and determining a row boundary between each row of symbols in the symbol region from the plurality of row-direction pixel lines;
and determining the position of each symbol in the symbol region according to the row boundary and the column boundary, and decoding each symbol according to the determined position.
In the above method, a plurality of row-direction pixel lines of the decoded image containing the symbol region are determined from the pixel points of the decoded image; the column boundary between each column of symbols in the symbol region is determined from the pixel points of the row-direction pixel lines, and the row boundary is selected from among the row-direction pixel lines. Because the pixel lines follow the shape of the image, the resulting boundaries locate each symbol more accurately.
In a possible implementation manner, determining the column boundary between each column of symbols in the symbol region according to the pixel points of the plurality of row-direction pixel lines includes:
determining the width of a target symbol in the decoded image, the target symbol being a symbol outside the symbol region of the decoded image;
determining a reference width range of each symbol in the symbol region according to the width of the target symbol;
and searching, among the pixel points on the plurality of row-direction pixel lines that lie within the reference width range, for pixel points satisfying the boundary characteristics between columns of symbols, and determining the column boundary line from the found pixel points.
In the above method, the target symbol is selected from the symbols outside the symbol region and its width is determined; the reference width range of each symbol is derived from that width, candidate boundary points are then verified against the boundary characteristics between columns of symbols, and the pixel points within the reference width range on each row-direction pixel line that pass the verification are used to determine the column boundary line.
In one possible implementation, the determining the width of the target symbol in the decoded image includes:
selecting one of the plurality of row-direction pixel lines, and finding, on the selected line, a line segment whose pixel gray values match those of a preset pixel line;
and taking the length of that line segment as the width of the target symbol.
In the above method, the gray values of the preset pixel line are matched against one row-direction pixel line selected from the plurality of lines, and the length of the matched line segment gives the width of the target symbol; using a preset pixel line allows the target symbol width to be locked quickly.
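As an illustration of this step, the sketch below locates the start symbol on one binarized row-direction pixel line by run-length encoding it and matching the run ratios against a preset pattern. The 8-1-1-1-1-1-1-3 module sequence is the published PDF417 start pattern; the function names and the tolerance are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: find the width of the start symbol on one row-direction
# pixel line. The preset pixel line is represented here by the module-width
# sequence of the PDF417 start pattern (an assumption based on the symbology).

START_PATTERN = [8, 1, 1, 1, 1, 1, 1, 3]  # bar/space module widths, 17 modules

def run_lengths(row):
    """Run-length encode a binarized row (1 = bar/black, 0 = space/white)."""
    runs, count = [], 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((row[-1], count))
    return runs

def start_symbol_width(row, tol=0.4):
    """Return the pixel width of the start symbol on this line, or None."""
    runs = run_lengths(row)
    n = len(START_PATTERN)
    for i in range(len(runs) - n + 1):
        window = runs[i:i + n]
        if window[0][0] != 1:          # the start pattern begins with a bar
            continue
        total = sum(w for _, w in window)
        module = total / 17.0          # 17 modules per symbol
        # accept the window if every run width is close to its ideal width
        if all(abs(w - m * module) <= tol * module + 1
               for (_, w), m in zip(window, START_PATTERN)):
            return total
    return None

# start pattern rendered at 2 pixels per module, followed by other bars
row = ([1] * 16 + [0] * 2 + [1] * 2 + [0] * 2 + [1] * 2 + [0] * 2
       + [1] * 2 + [0] * 6 + [1] * 4)
print(start_symbol_width(row))  # 34 (17 modules x 2 pixels)
```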
In a possible implementation manner, the determining a reference width range of each symbol according to the width of the target symbol includes:
if the target symbol is the start symbol of the decoded image, determining the reference width range of each symbol in the symbol region according to the width of the start symbol and a preset error value; or
if the target symbol is the stop symbol of the decoded image, determining the reference width range of each symbol in the symbol region according to the width of the stop symbol, the preset width ratio of the stop symbol to the symbols in the symbol region, and a preset error value.
In the above method, because the start symbol of the decoded image has the same width as the symbols in the symbol region, when the target symbol is the start symbol the reference width range of each symbol is determined from the width of the start symbol and a preset error value. Because the width of the stop symbol is proportional to that of the symbols in the symbol region, when the target symbol is the stop symbol the reference width range is determined from the width of the stop symbol, the preset width ratio of the stop symbol to the symbols in the symbol region, and a preset error value.
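A minimal sketch of this reference-width computation, assuming (as in the PDF417 symbology) that a data symbol spans 17 modules and the stop pattern 18, so the preset width ratio is 18:17; the error value is an illustrative tuning parameter, not specified by the patent.

```python
SYMBOL_MODULES = 17   # modules in a data (or start) symbol
STOP_MODULES = 18     # assumption: the stop pattern spans 18 modules

def reference_width_range(target_width, is_start, error=2.0):
    """Return (min_width, max_width) expected for symbols in the symbol region."""
    if is_start:
        # start symbol has the same width as a data symbol
        w = float(target_width)
    else:
        # stop symbol is wider; scale its width down by the preset ratio
        w = target_width * SYMBOL_MODULES / STOP_MODULES
    return (w - error, w + error)

print(reference_width_range(34, is_start=True))   # (32.0, 36.0)
print(reference_width_range(36, is_start=False))  # (32.0, 36.0)
```

With either anchor symbol the same data-symbol width (34 pixels here) is recovered, so the two branches are interchangeable depending on which end of the code is easier to locate.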
In one possible implementation, determining the row boundary between each row of symbols within the symbol region from the plurality of row-direction pixel lines includes:
taking, from among the plurality of row-direction pixel lines between the upper edge line and the lower edge line of the symbol region, a row-direction pixel line satisfying the boundary characteristics as the row boundary between each row of symbols in the symbol region.
In the above method, because the decoded image may also contain regions outside the symbol region, the formed row-direction pixel lines include pixel points that do not belong to the symbol region; to increase the computation speed, only the row-direction pixel lines between the upper edge line and the lower edge line of the symbol region are examined, and those satisfying the boundary characteristics are taken as the row boundaries between the rows of symbols.
In one possible implementation, the decoding each symbol according to the determined position includes:
dividing the region where the symbol is located into a preset number of sub-regions along the column direction;
determining the gray value of each sub-region according to the gray values of its pixel points;
for any one of a plurality of preset code words, determining the actual contrast value corresponding to the code word according to the code word and the gray values of the preset number of sub-regions;
comparing the actual contrast value of the code word with its preset reference contrast value to obtain a similarity value for the code word;
and determining, from the preset code words, the code word corresponding to the symbol according to the similarity values.
In practice, decoding may be affected by uneven illumination; therefore the gray values of the region where the symbol is located are compared against a plurality of code words, and the code word with the highest similarity is taken as the code word of the symbol, which mitigates the loss of decoding accuracy caused by uneven illumination.
In a possible implementation manner, the determining, according to the codeword and the grayscale values of the preset number of sub-regions, an actual contrast value corresponding to the codeword includes:
determining a first ratio according to the gray values of the first-type sub-regions among the preset number of sub-regions and the number of first-type sub-regions, where a first-type sub-region is a sub-region of the symbol whose position corresponds to a space in the code word;
determining a second ratio according to the gray values of the second-type sub-regions among the preset number of sub-regions and the number of second-type sub-regions, where a second-type sub-region is a sub-region of the symbol whose position corresponds to a bar in the code word;
and taking the difference between the first ratio and the second ratio as an actual contrast value corresponding to the code word.
In the above method, because the symbol is affected by lighting, the gray values of its regions no longer differ sharply from those of the region corresponding to the standard code word. When calculating the actual contrast value for a code word, a first ratio is therefore determined from the gray values and the number of the first-type sub-regions (the sub-regions of the symbol whose positions correspond to spaces in the code word); similarly, a second ratio is determined from the gray values and the number of the second-type sub-regions (the sub-regions of the symbol whose positions correspond to bars in the code word); the difference between the first ratio and the second ratio is taken as the actual contrast value corresponding to the code word.
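The contrast and similarity computation above can be sketched as follows. The codeword masks, reference contrast values, and helper names are toy assumptions; a real PDF417 decoder would use the full codeword tables and 17 sub-regions per symbol.

```python
# Hedged sketch: score each candidate codeword by the contrast between the
# mean gray of its space positions and the mean gray of its bar positions,
# then pick the codeword whose actual contrast is closest to its reference.

def contrast_value(grays, codeword_mask):
    """grays: mean gray per sub-region; codeword_mask: 1 = bar, 0 = space.

    Returns mean gray over space positions minus mean gray over bar
    positions (spaces are light, so a good match yields a large value).
    Assumes every mask contains at least one bar and one space.
    """
    spaces = [g for g, m in zip(grays, codeword_mask) if m == 0]
    bars = [g for g, m in zip(grays, codeword_mask) if m == 1]
    return sum(spaces) / len(spaces) - sum(bars) / len(bars)

def best_codeword(grays, codewords):
    """Pick the codeword whose actual contrast is closest to its reference."""
    best, best_score = None, None
    for name, (mask, ref_contrast) in codewords.items():
        # higher similarity = smaller deviation from the reference contrast
        similarity = -abs(contrast_value(grays, mask) - ref_contrast)
        if best_score is None or similarity > best_score:
            best, best_score = name, similarity
    return best

# two toy 6-module codewords, each with a reference contrast of 200
codewords = {
    "cw_a": ([1, 1, 0, 0, 1, 0], 200),
    "cw_b": ([0, 1, 0, 1, 1, 0], 200),
}
grays = [20, 20, 220, 220, 20, 220]  # dark exactly where cw_a has bars
print(best_codeword(grays, codewords))  # cw_a
```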
In a second aspect, an embodiment of the present invention provides an electronic device, including:
a processor and a memory for storing processor-executable instructions;
wherein the processor is configured to:
determining a plurality of row-direction pixel lines of a decoded image according to a plurality of pixel points of the decoded image containing a symbol region;
determining a column boundary between each column of symbols in the symbol region according to the pixel points of the plurality of row-direction pixel lines, and determining a row boundary between each row of symbols in the symbol region from the plurality of row-direction pixel lines;
and determining the position of each symbol in the symbol region according to the row boundary and the column boundary, and decoding each symbol according to the determined position.
In one possible implementation, the processor is specifically configured to:
determining the width of a target symbol in the decoded image, the target symbol being a symbol outside the symbol region of the decoded image;
determining a reference width range of each symbol in the symbol region according to the width of the target symbol;
and searching, among the pixel points on the plurality of row-direction pixel lines that lie within the reference width range, for pixel points satisfying the boundary characteristics between columns of symbols, and determining the column boundary line from the found pixel points.
In one possible implementation, the processor is specifically configured to:
selecting one of the plurality of row-direction pixel lines, and finding, on the selected line, a line segment whose pixel gray values match those of a preset pixel line;
and taking the length of that line segment as the width of the target symbol.
In one possible implementation, the processor is specifically configured to:
if the target symbol is the start symbol of the decoded image, determining the reference width range of each symbol in the symbol region according to the width of the start symbol and a preset error value; or
if the target symbol is the stop symbol of the decoded image, determining the reference width range of each symbol in the symbol region according to the width of the stop symbol, the preset width ratio of the stop symbol to the symbols in the symbol region, and a preset error value.
In one possible implementation, the processor is specifically configured to:
taking, from among the plurality of row-direction pixel lines between the upper edge line and the lower edge line of the symbol region, a row-direction pixel line satisfying the boundary characteristics as the row boundary between each row of symbols in the symbol region.
In one possible implementation, the processor is specifically configured to:
dividing the region where the symbol is located into a preset number of sub-regions along the column direction;
determining the gray value of each sub-region according to the gray values of its pixel points;
for any one of a plurality of preset code words, determining the actual contrast value corresponding to the code word according to the code word and the gray values of the preset number of sub-regions;
comparing the actual contrast value of the code word with its preset reference contrast value to obtain a similarity value for the code word;
and determining, from the preset code words, the code word corresponding to the symbol according to the similarity values.
In one possible implementation, the processor is specifically configured to:
determining a first ratio according to the gray values of the first-type sub-regions among the preset number of sub-regions and the number of first-type sub-regions, where a first-type sub-region is a sub-region of the symbol whose position corresponds to a space in the code word;
determining a second ratio according to the gray values of the second-type sub-regions among the preset number of sub-regions and the number of second-type sub-regions, where a second-type sub-region is a sub-region of the symbol whose position corresponds to a bar in the code word;
and taking the difference between the first ratio and the second ratio as an actual contrast value corresponding to the code word.
In a third aspect, the present application also provides a computer storage medium having a computer program stored thereon which, when executed by a processing unit, performs the steps of the decoding method of the first aspect.
In addition, for the technical effects brought by any implementation of the second aspect and the third aspect, reference may be made to the technical effects of the corresponding implementations of the first aspect; details are not repeated here.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention and are not to be construed as limiting the invention.
FIG. 1 is a schematic diagram of the edge-detection projection method in the background art;
FIG. 2 is a block diagram of a symbol according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an actual edge line and an edge line obtained by edge detection when an image is distorted in the prior art;
fig. 4 is a flowchart illustrating a decoding method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of forming pixel lines by pixel points in a row direction during decoding according to an embodiment of the present invention;
fig. 6 is a schematic diagram of symbol 1 and symbol 3 portions in a decoded image formed according to pixel points according to an embodiment of the present invention;
fig. 7 is a schematic diagram of finding a line segment matching a preset pixel line in a row direction pixel line according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of determining a line boundary line between symbol 1 and symbol 2 provided by an embodiment of the present invention;
FIG. 9 is a flowchart of decoding according to the position of a character according to an embodiment of the present invention;
fig. 10 is a flowchart of an overall PDF417 decoding process provided by an embodiment of the present invention;
fig. 11 is a schematic decoding diagram of a boundary point and a boundary line of PDF417 in a decoding process according to an embodiment of the present invention;
fig. 12 is a block diagram of an electronic device according to an embodiment of the present invention;
fig. 13 is a block diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The application scenarios described in the embodiments of the present invention are intended to illustrate the technical solutions more clearly and do not limit them; as a person skilled in the art will appreciate, with the emergence of new application scenarios the technical solutions provided in the embodiments of the invention remain applicable to similar technical problems. In the description of the present invention, unless otherwise indicated, "a plurality" means two or more.
The present invention is applicable to the PDF417 decoding process. As shown in fig. 1, a PDF417 code includes a start symbol a, a symbol region b, and a stop symbol c, where each symbol in the PDF417 is composed of 4 bars (the black parts in fig. 2) and 4 spaces (the white parts in fig. 2). Each symbol spans 17 module regions, each region being either a bar or a space; the symbol in fig. 2 can be represented as 51111125: regions 1 to 5 are bars, region 6 is a space, region 7 is a bar, region 8 is a space, region 9 is a bar, region 10 is a space, regions 11 and 12 are bars, and regions 13 to 17 are spaces. For convenience, each region may be represented by one pixel in the row direction and 4 pixels in the column direction, as shown in fig. 5.
Currently, when decoding an image, edge detection is generally used to project edge points in the horizontal and vertical directions. As shown in fig. 1, taking the two rows of symbols inside the box of the decoded image as an example, a horizontally projected edge line (C1) divides the first row from the second row; the image is divided into symbols in this way, and the symbols are then decoded.
In practice, lens abnormalities and similar conditions distort the decoded image, turning the originally straight boundaries into curves. With the edge detection method, as shown in fig. 3, the horizontally projected edge line (C1) is straight and does not match the actual curve (C2), causing cutting errors and thus low decoding accuracy.
The embodiment of the invention provides a decoding method and an electronic device: a plurality of row-direction pixel lines of the decoded image are determined from the pixel points of the decoded image containing the symbol region, yielding pixel lines that follow the shape of the image; the column and row boundaries are then determined from these pixel lines, so that the position of each symbol is determined and decoding is performed.
The following detailed description is made with reference to the accompanying drawings.
As shown in fig. 4, an embodiment of the present invention provides a decoding method, including the following steps:
s400: and determining a plurality of row direction pixel lines of the decoded image according to a plurality of pixel points of the decoded image containing the symbol area.
For example, as shown in fig. 5, the first row of pixel points in the figure forms the first row-direction pixel line L1, the second row forms L2, the third row forms L3, the fourth row forms L4, and so on: the fifth to eighth rows form L5, L6, L7, and L8, respectively.
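A toy version of step S400, under the simplifying assumption that each image row directly yields one row-direction pixel line (L1, L2, ... in the example); with curved distortion, a real implementation would trace each line along the local shape of the symbol rows. The function name is illustrative.

```python
# Illustrative sketch: take each row of the decoded image (a 2-D grid of
# gray values) as one row-direction pixel line.

def row_direction_pixel_lines(image):
    """image: 2-D list of gray values; returns one pixel line per image row."""
    return [list(row) for row in image]

image = [
    [0, 0, 255, 255],     # -> pixel line L1
    [0, 0, 255, 255],     # -> pixel line L2
    [255, 255, 0, 0],     # -> pixel line L3
]
lines = row_direction_pixel_lines(image)
print(len(lines), lines[0])  # 3 [0, 0, 255, 255]
```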
S401: and determining a column boundary between each column of symbols in the symbol region according to the pixel points of the plurality of row direction pixel lines, and determining a row boundary between each row of symbols in the symbol region from the plurality of row direction pixel lines.
S402: and determining the position of each symbol in the symbol area according to the line boundary and the column boundary, and decoding each symbol according to the determined position.
For example, fig. 5 shows a decoded image formed by pixel points; connecting the pixel points with lines yields a plurality of row-direction pixel lines, namely L1 to L8. Boundary pixel points of each symbol are determined on the corresponding row-direction pixel lines from the pixel points in L1 to L8; the boundary points determined on the 8 lines are then connected in the column direction to obtain five column boundary lines S1 to S5. A row boundary line is determined from among L1 to L8: since L1 to L4 belong to one row of symbols and L5 to L8 to another, the boundary line between the symbol rows is L4. Once L4 and the five column boundary lines S1 to S5 are determined, the region between S1, S2, L1 and L4 is one symbol, symbol 1, in which each of the 17 regions spans 4 pixels in the column direction and 1 pixel in the row direction (the number of pixels per region grows with the resolution). Likewise, the region between S1, S2, L5 and L8 is symbol 2; between S2, S3, L1 and L4 is symbol 3; between S2, S3, L5 and L8 is symbol 4; between S3, S4, L1 and L4 is symbol 5; between S3, S4, L5 and L8 is symbol 6; between S4, S5, L1 and L4 is symbol 7; and between S4, S5, L5 and L8 is symbol 8. The 8 symbols contained in the decoded image are thus determined and are then decoded according to their positions.
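The grid cutting just described can be sketched as below, with the column boundary lines reduced to x coordinates and the row boundary lines to y coordinates; the coordinates and the function name are illustrative, and the column-major ordering mirrors the symbol numbering of the example.

```python
# Hedged sketch of step S402: once the row boundaries (here y coordinates
# for L1/L4/L8) and column boundaries (x coordinates for S1..S5) are known,
# every symbol position falls out of the grid.

def symbol_boxes(col_xs, row_ys):
    """Return (left, top, right, bottom) per symbol, column-major order."""
    boxes = []
    for left, right in zip(col_xs, col_xs[1:]):
        for top, bottom in zip(row_ys, row_ys[1:]):
            boxes.append((left, top, right, bottom))
    return boxes

# S1..S5 at x = 0, 17, 34, 51, 68; rows L1-L4 and L5-L8 -> y = 0, 4, 8
boxes = symbol_boxes([0, 17, 34, 51, 68], [0, 4, 8])
print(len(boxes), boxes[0], boxes[1])  # 8 (0, 0, 17, 4) (0, 4, 17, 8)
```

Here `boxes[0]` and `boxes[1]` correspond to symbol 1 and symbol 2 of the example (the two symbols stacked in the first column).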
In summary, with the pixel lines determined by the pixel points as support, the determined column and row boundaries locate each symbol in the symbol region more accurately and improve decoding accuracy.
In the embodiment of the present invention, the boundary pixel points determined on the plurality of row-direction pixel lines are connected along the column direction, and the column boundary lines may be determined as follows:
the boundary pixel points determined on each row-direction pixel line are ordered from left to right (or from right to left); the leftmost (or rightmost) boundary pixel point of each row-direction pixel line is fed to a RANSAC (random sample consensus) algorithm to determine one column boundary line, the second boundary pixel point from the left (or from the right) on each line determines the next column boundary line, and so on until all column boundary lines are determined.
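A compact sketch of this RANSAC step: boundary pixel points gathered from the row-direction pixel lines are fitted to one column boundary line while tolerating outliers. The parameterization x = a*y + b (by y, since the boundary is near-vertical) and all thresholds are illustrative choices, not taken from the patent.

```python
import random

def ransac_column_line(points, iters=200, thresh=1.0, seed=0):
    """Fit x = a*y + b to (x, y) boundary points; returns (a, b).

    Repeatedly samples two points, builds a candidate line, and keeps the
    line with the most inliers (points within `thresh` of the line).
    """
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if y1 == y2:
            continue
        a = (x2 - x1) / (y2 - y1)
        b = x1 - a * y1
        inliers = sum(1 for x, y in points if abs(a * y + b - x) <= thresh)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best

# boundary points of a slightly tilted column edge, with one outlier at y=4
pts = [(17.0, 0), (17.2, 1), (17.4, 2), (17.6, 3), (25.0, 4), (18.0, 5)]
a, b = ransac_column_line(pts)
print(abs(a * 5 + b - 18.0) < 1.0)  # True: the outlier is rejected
```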
To lock the column boundary lines between columns of symbols more quickly, the embodiment of the invention uses the width of a target symbol to determine the reference width range of the symbols in the symbol region and then verifies candidates against the boundary characteristics in the column direction, so that the correct boundary pixel points can be found and the column boundary lines determined. Specifically, the method comprises the following steps:
determining a width of a target symbol in a decoded image; the target symbol is a symbol outside the symbol area in the decoded image;
determining a reference width range of each symbol in the symbol area according to the width of the target symbol;
and searching, among the pixel points located within the reference width range on the plurality of row-direction pixel lines, for pixel points satisfying the boundary feature between columns of symbols, and determining the column boundary lines from the pixel points found.
Because the first sub-region of a symbol is a bar and the last sub-region of a symbol is a space, the boundary feature between columns of symbols is that the adjacent pixel points across adjacent symbols change from white to black; that is, the place where a space changes to a bar is the boundary.
Fig. 6 shows a partial view of the column boundary line S2 and a plurality of row-direction pixel lines. The reference width range is determined as the width of the target symbol plus a margin of two pixel points (the two pixel points in the brackets). According to the white-to-black boundary feature, the boundary line between symbol 1 and symbol 3 lies where, within the brackets, a white pixel point is followed from left to right by a black pixel point.
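The search just described, stepping by roughly one symbol width and then looking for a space-to-bar transition inside a small window, can be sketched as follows on a single binarized row-direction pixel line. The interface and the 0/1 pixel convention (0 = white/space, 1 = black/bar) are assumptions for illustration.

```python
def find_column_boundaries(row, ref_width, slack=2):
    """Scan one binarized row-direction pixel line and return the indices
    of white-to-black transitions that fall within the expected
    symbol-width window around each previous boundary."""
    boundaries, pos = [], 0
    while pos + ref_width - slack < len(row):
        lo = max(pos + ref_width - slack, 1)
        hi = min(pos + ref_width + slack, len(row) - 1)
        for i in range(lo, hi + 1):
            if row[i - 1] == 0 and row[i] == 1:  # space followed by bar
                boundaries.append(i)
                pos = i
                break
        else:  # no boundary found inside the window: stop scanning
            break
    return boundaries

# Two toy 5-module symbols; the column boundary sits at index 5.
row = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
bounds = find_column_boundaries(row, ref_width=5, slack=1)
```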
Taking PDF417 as an example: since the width of the start symbol in PDF417 is the same as the width of each symbol in the symbol region, when the target symbol is the start symbol, the reference width range of each symbol in the symbol region is determined according to the width of the start symbol and a preset error value.
Since the width of the terminator in PDF417 stands in a fixed ratio to the width of each symbol in the symbol region, when the target symbol is the terminator, the reference width range of each symbol in the symbol region is determined according to the width of the terminator, the preset ratio of the terminator width to the symbol width in the symbol region, and a preset error value.
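Under the assumption that a PDF417 data symbol is 17 modules wide and the terminator 18 modules (so their widths stand in a fixed 17:18 ratio), the two cases above can be sketched as follows; the error value `err` is a placeholder for the preset error value.

```python
def ref_width_range(target_width, target_kind, err=2):
    """Reference width range for symbols in the symbol region, derived
    from a target symbol outside it. Start symbol: same width as a data
    symbol. Terminator: scaled by the assumed 17/18 module ratio."""
    if target_kind == "start":
        w = target_width              # start symbol width equals symbol width
    else:
        w = target_width * 17 / 18    # terminator is assumed 18 modules wide
    return (w - err, w + err)

lo, hi = ref_width_range(34, "start")          # start symbol measured at 34 px
lo2, hi2 = ref_width_range(36, "terminator")   # terminator measured at 36 px
```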
Wherein determining the width of the target symbol in the decoded image comprises:
selecting one of the plurality of row-direction pixel lines, and finding, on the selected row-direction pixel line, a line segment whose pixel-point gray values match those of a preset pixel line;
the length of the line segment is taken as the width of the target symbol.
Taking PDF417 as an example: since the width of the start symbol in PDF417 is the same as the width of each symbol in the symbol region, when the target symbol is the start symbol, the preset pixel line is determined from the start symbol. The arrangement of bars and spaces in the start symbol is 81111113, i.e., the 1st to 8th sub-regions are bars, the 9th is a space, the 10th a bar, the 11th a space, the 12th a bar, the 13th a space, the 14th a bar, and the 15th to 17th sub-regions are spaces. Taking one pixel per sub-region as an example, this forms the 17 pixels to the left of S1 in fig. 5. When L1 is selected from the plurality of row-direction pixel lines, if the 1st to 17th pixel points from the left in L1 satisfy 81111113 (e.g., the line segment in brackets in fig. 7), that segment is determined to be the line segment matching the gray values of the preset pixel line, and its length is taken as the width of the start symbol. The same applies when L2, L3, L4, L5, L6, L7 or L8 is selected, in each case starting from the 1st to the 17th pixel point from the left.
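The matching of a row-direction pixel line against the 81111113 pattern can be sketched with a run-length comparison. This toy version assumes a binarized line with exactly one pixel per module; a practical reader would also tolerate scaling and noise in the run widths.

```python
def run_lengths(row):
    """Run-length encode a binarized pixel line into [value, length] pairs."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

START = (8, 1, 1, 1, 1, 1, 1, 3)  # bar/space widths of the 81111113 pattern

def match_start(row):
    """Return (begin, width) of the first run sequence matching the start
    pattern at one pixel per module, or None if no match is found."""
    runs = run_lengths(row)
    for k in range(len(runs) - len(START) + 1):
        if runs[k][0] == 1 and tuple(r[1] for r in runs[k:k + len(START)]) == START:
            begin = sum(r[1] for r in runs[:k])   # pixels before the segment
            return begin, sum(START)              # segment start and its width
    return None

# A quiet zone, the 17-module start pattern, then the first bar afterwards.
line = [0, 0] + [1] * 8 + [0, 1, 0, 1, 0, 1] + [0] * 3 + [1, 1]
hit = match_start(line)
```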
Similarly, it can be verified that the terminator can be 711311121, i.e., the 1st to 7th sub-regions are bars, the 8th is a space, the 9th a bar, the 10th to 12th spaces, the 13th a bar, the 14th a space, the 15th a bar, the 16th to 17th spaces, and the 18th a bar. The pixel points of the bar sub-regions are black. The preset pixel line for the terminator, 711311121, is handled in the same way as for the start symbol to obtain the length of the terminator's line segment.
Further, in order to improve the accuracy of the line segments, a line segment matching the gray values of the preset pixel line can be found on every row-direction pixel line; the initial points of those line segments are then fitted with a RANSAC algorithm into the initial boundary of the start symbol in the decoded image, and their end points are fitted with RANSAC into the end boundary of the start symbol, so that the width of the start symbol in the decoded image is determined from the initial boundary and the end boundary. This avoids deviations in the gray-value comparison caused by uneven illumination, and thus deviations in line-segment selection. The same applies to the terminator; since its handling is the same as for the start symbol, the description is not repeated here.
In order to avoid performing pixel-line operations on regions not belonging to the symbol region, the invention provides a method that takes, from among the plurality of row-direction pixel lines between the upper edge line and the lower edge line of the symbol region, the row-direction pixel lines satisfying the boundary feature as the row boundary lines between rows of symbols in the symbol region.
In the process of extracting the row boundaries, the row-direction pixel lines not belonging to the symbol region are first removed; from the remaining row-direction pixel lines, those satisfying the boundary feature are taken as the row boundaries between rows of symbols in the symbol region.
A row-direction pixel line satisfying the boundary feature is one whose pixel points have the same gray values as the corresponding pixel points of one of its two adjacent row-direction pixel lines, but different gray values from the corresponding pixel points of the other.
With reference to fig. 8: symbol 1, formed by S1, S2, L1 and L4, may be represented as 41111144, and symbol 2, formed by S1, S2, L5 and L8, may be represented as 41111315. Consider the two row-direction pixel lines adjacent to the boundary line L4. The gray values of the pixel points in the 17 sub-regions of L3 form 41111144, and the pixel points at the corresponding positions of L4 have the same gray values: the pixel points in L3 connected by the two arrows and the pixel points at the corresponding positions in L4 are identical. L5, however, forms 41111315, so after L4 the gray values at corresponding positions no longer agree: the pixel points in L4 connected by the two arrows differ from the pixel points at the corresponding positions in L5 at the 10th and 11th pixel points (L4 black, L5 white) and at the 13th pixel point (L4 black, L5 white), while the gray values at the 1st to 9th, the 12th, and the 14th to 17th pixel points are the same. Since the gray values of L4 and L5 do not agree at every corresponding position, the bar-and-space pattern changes from one row-direction pixel line to the next, that is, the symbol changes, so L4 is a row boundary line.
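A sketch of this row-boundary test on binarized scan lines, under the reading that a row boundary line matches the previous line pixel-for-pixel but differs from the next (as L4 matches L3 and differs from L5); the interface is hypothetical.

```python
def is_row_boundary(prev_row, row, next_row):
    """A row-direction pixel line whose gray values match the previous
    line pixel-for-pixel but differ from the next line is the last line
    of a symbol row, i.e. a row boundary."""
    return prev_row == row and row != next_row

def row_boundaries(rows):
    # Indices of the row-direction pixel lines acting as row boundaries.
    return [i for i in range(1, len(rows) - 1)
            if is_row_boundary(rows[i - 1], rows[i], rows[i + 1])]

# Two symbol rows of four identical scan lines each, like L1-L4 / L5-L8;
# the boundary is the fourth line (index 3), mirroring L4 in fig. 8.
rows = [[1, 0, 1]] * 4 + [[0, 1, 1]] * 4
idx = row_boundaries(rows)
```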
Furthermore, when the resolution of the image is too high, the pixel points on a pixel line can be sampled uniformly for the comparison.
Similarly, when the resolution of the image is too high and the number of pixel lines formed is large, the pixel lines can be sampled uniformly to improve efficiency: for example, if 100 pixel lines are formed in the decoded image, 10 of them may be taken uniformly from the 100 and used as the row-direction pixel lines.
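The uniform selection of 10 pixel lines out of 100 can be sketched as follows; the helper name is an illustration, not from the patent.

```python
def sample_uniform(n_total, n_keep):
    """Uniformly pick n_keep line indices out of n_total, e.g. keep 10
    row-direction pixel lines out of the 100 a high-resolution decoded
    image would otherwise yield."""
    step = n_total / n_keep
    return [int(i * step) for i in range(n_keep)]

kept = sample_uniform(100, 10)
```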
In practical applications, the decoding process determines whether each of the 17 sub-regions in a symbol is a bar (black) or a space (white), and decoding is performed according to the different arrangements of bars and spaces. However, under uneven illumination, originally black parts may appear too bright in the captured picture, so that black and white are not clearly distinguished, which easily causes decoding errors. In the embodiment of the invention, the symbol is therefore compared with the standard codewords by gray value, and the codeword whose contrast matches best is determined to be the codeword corresponding to the symbol.
Specifically, an embodiment of the present invention provides a decoding method, which is shown in fig. 9 and includes:
s900: and dividing the region where the symbol is located into a preset number of sub-regions according to the column direction. I.e. the area where the symbol is located is divided into 17 sub-areas in the column direction, as shown in fig. 2.
S901: and determining the gray value of each sub-region according to the gray value of the pixel point of each sub-region. When the resolution of the decoded image is relatively high, more than one pixel point in each sub-region is needed, and the average value of the gray values corresponding to all the pixel points in the sub-region can be calculated to serve as the gray value of the sub-region. Or randomly selecting one gray value from the gray values corresponding to all the pixel points in the sub-region as the gray value of the sub-region. These are, of course, merely exemplary and are not intended to be limiting.
S902: and aiming at any one of a plurality of preset code words, determining an actual contrast value corresponding to the code word according to the code word and the gray values of the sub-regions with the preset number.
The calculation is as follows: a first ratio is determined from the gray values of the first-type sub-regions among the preset number of sub-regions and the number of first-type sub-regions; a first-type sub-region is a sub-region of the symbol whose position corresponds to a space sub-region of the codeword;
a second ratio is determined from the gray values of the second-type sub-regions among the preset number of sub-regions and the number of second-type sub-regions; a second-type sub-region is a sub-region of the symbol whose position corresponds to a bar sub-region of the codeword;
and taking the difference between the first ratio and the second ratio as an actual contrast value corresponding to the code word.
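Steps S900 to S902 amount to the following computation; the 'b'/'s' pattern encoding and the function interface are assumptions for illustration.

```python
def actual_contrast(pattern, grays):
    """Actual contrast of a symbol against one codeword: the mean gray of
    the sub-regions the codeword marks as space ('s') minus the mean gray
    of those it marks as bar ('b'). Spaces are bright and bars dark, so
    the better the match, the larger the value."""
    space = [g for c, g in zip(pattern, grays) if c == 's']
    bar = [g for c, g in zip(pattern, grays) if c == 'b']
    return sum(space) / len(space) - sum(bar) / len(bar)

# The 31111136 arrangement written out per sub-region as bars and spaces.
pattern = "bbbsbsbsbbbssssss"
grays = [20 if c == 'b' else 220 for c in pattern]  # ideal dark/bright symbol
value = actual_contrast(pattern, grays)
```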
S903: and comparing the actual contrast value corresponding to the code word with the preset reference contrast value of the code word to obtain the similarity value of the code word.
The calculation method of the reference contrast value of the preset code word is as follows:
a first ratio is determined from the gray values of the first-type sub-regions and the number of first-type sub-regions, a second ratio is determined from the gray values of the second-type sub-regions and the number of second-type sub-regions, and the difference between the first ratio and the second ratio is taken as the reference contrast value corresponding to the codeword.
S904: and determining the code word corresponding to the symbol from the preset code words according to the similarity value of the code words.
For example, table 1 shows some symbols and their corresponding codewords, where b in table 1 denotes a bar and s denotes a space.
TABLE 1
Taking the codeword 0 as an example, the corresponding symbol is 31111136, that is, the 1st to 3rd sub-regions are bars, the 4th sub-region is a space, the 5th a bar, the 6th a space, the 7th a bar, the 8th a space, the 9th to 11th sub-regions are bars, and the 12th to 17th sub-regions are spaces.
In the actually obtained symbol, the sub-regions at the positions of all the bar sub-regions of the codeword are identified and the sum of their gray values is taken; the number of bar sub-regions here is 8. Likewise, the sub-regions at the positions of all the space sub-regions of the codeword are identified and the sum of their gray values is taken; the number of space sub-regions is 9. The sum of the gray values of the space sub-regions is divided by 9, the sum of the gray values of the bar sub-regions is divided by 8, and the second quotient is subtracted from the first; this gives the actual contrast value of this codeword for the symbol in the decoded image, which is then compared with the reference contrast value of codeword 0 to obtain the similarity value. The formula can be further expressed as follows:
actual contrast value = spaceGraySum / spaceNum - barGraySum / barNum, wherein spaceGraySum is the sum of the gray values of the space sub-regions, spaceNum is the number of space sub-regions, barGraySum is the sum of the gray values of the bar sub-regions, and barNum is the number of bar sub-regions.
By analogy, all the codewords are processed according to the above procedure to obtain their similarity values, and the codeword with the largest similarity value is then taken as the codeword of the symbol.
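The selection over all codewords can be sketched as below with a toy two-codeword table; the similarity metric (negative absolute difference between actual and reference contrast) is an assumption, since the text does not fix it.

```python
def actual_contrast(pattern, grays):
    # Mean space gray minus mean bar gray for this codeword pattern (S902).
    sp = [g for c, g in zip(pattern, grays) if c == 's']
    br = [g for c, g in zip(pattern, grays) if c == 'b']
    return sum(sp) / len(sp) - sum(br) / len(br)

def best_codeword(grays, table, ref_contrast):
    """Pick the codeword whose actual contrast is closest to its reference
    contrast; the similarity metric here is -|actual - reference|."""
    return max(table, key=lambda cw: -abs(actual_contrast(table[cw], grays)
                                          - ref_contrast[cw]))

# Toy three-sub-region codeword set (not real PDF417 patterns).
table = {0: "bsb", 1: "sbs"}
ref = {0: 200.0, 1: 200.0}
grays = [30, 210, 40]            # dark, bright, dark: looks like "bsb"
chosen = best_codeword(grays, table, ref)
```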
Since the number of codewords is 2787, up to 2787 actual contrast values can be obtained, and the codeword with the highest similarity is selected from the 2787 codewords as the codeword of the symbol.
Based on the above description of the decoding method, the embodiment of the present invention provides an overall PDF417 decoding process, which is described in detail below with reference to fig. 10 and 11. Fig. 10 shows an overall decoding process of the PDF417, and fig. 11 shows a decoding schematic diagram of boundary points and boundary lines of the PDF417 in the decoding process.
As shown in fig. 10, the method includes:
s1010: and determining a plurality of line direction pixel lines according to a plurality of pixel points in the PDF417 positioning frame. As shown in fig. 11, the line direction pixel lines are P1 to P19.
S1020: the method comprises the steps of matching pixel lines in a preset start symbol template with a plurality of line direction pixel lines to obtain an initial point and an end point of a start symbol in each line direction pixel line, and matching pixel lines in a preset end symbol template with the plurality of line direction pixel lines to obtain an initial point and an end point of an end symbol in each line direction pixel line.
For example, as shown in fig. 11, the initial point of the start symbol in the PDF417 is determined according to the pixel line in the preset start-symbol template: the initial points of the start symbol are the intersections of P1 to P19 with the line segment EF, and the end points of the start symbol are the intersections of P1 to P19 with the line segment AD.
Similarly, the initial point of the terminator in the PDF417 is determined according to the pixel line in the preset terminator template: the initial points of the terminator are the intersections of P1 to P19 with the line segment BC, and the end points of the terminator are the intersections of P1 to P19 with the line segment GH.
S1030: a start boundary A1 of the symbol region of the PDF417 is obtained by fitting the end points of the start symbol on each row-direction pixel line, and an end boundary A5 of the symbol region of the PDF417 is obtained by fitting the initial points of the terminator on each row-direction pixel line.
As shown in fig. 11, the line segment AD is the start boundary A1, and the line segment BC is the end boundary A5.
S1040: the width of the start character is determined from the initial point and the end point of the start character in each line direction pixel line, and the reference range of the width of each symbol of the symbol area of the PDF417 is determined.
As shown in fig. 11, one of the plurality of row-direction pixel lines is selected, and the segment of that line between its intersection with the line segment EF and its intersection with the line segment AD is the width of the start symbol. For example, on the first row-direction pixel line P1, the pixel point O1 is the intersection with the line segment EF and the pixel point O2 is the intersection with the line segment AD; the distance between the pixel point O1 and the pixel point O2 on the first row-direction pixel line is the width of the start symbol.
S1050: and searching pixel points meeting the boundary characteristics between each column of symbols in a plurality of pixel points which are positioned in the reference width range on the plurality of row direction pixel lines, and determining a column boundary line according to the searched pixel points.
Referring to fig. 11: starting after the pixel point O2, the first pixel point satisfying the boundary feature between columns of symbols is sought within the reference width range along the first row-direction pixel line P1; once found, the next pixel point satisfying the boundary feature is sought within the reference width range beyond it, and so on, until all pixel points satisfying the boundary feature between columns of symbols have been determined on P1. The second row-direction pixel line P2 is then searched in the same way with P1 as a reference, and so on down to the last row-direction pixel line P19. After the pixel points satisfying the boundary feature between columns of symbols have been found on P1 to P19, the column boundary lines A2, A3, A4 and A5 are determined from those pixel points.
S1060: and searching upwards at intersections of the line direction pixel lines and the plurality of column boundary lines one by one, and if the pixel points adjacent upwards at the intersection in the column direction and the pixel points adjacent to the line direction do not meet the boundary characteristics between the column symbols, determining the line direction pixel lines as upper edge lines.
Referring to fig. 11: after the pixel point O2, the first pixel point satisfying the boundary feature between columns of symbols found within the reference width range on any row-direction pixel line is the intersection of that line with the column boundary line A2. Searching upward from each such intersection, it is checked whether the adjacent pixel points satisfy the white-to-black feature, until they do not; the pixel point adjacent to the one that fails the white-to-black feature is taken as a point of the upper edge line. For example, for the intersection O3, the pixel point adjacent to O3 in the row direction is O6; O3 is white and O6 is black, so the white-to-black feature is satisfied. O4 is the pixel point adjacent to O3 upward in the column direction, and O5 is the pixel point adjacent to O4 in the row direction; O4 and O5 do not satisfy the white-to-black feature, so O3 is determined to be a point of the upper edge line. Points of the upper edge line are determined in the same way from the column boundary lines A3, A4 and A5, and the upper edge line is determined from these points; as shown in fig. 11, the upper edge line is P1.
S1070: and searching downwards at intersections of the line direction pixel lines and the plurality of column boundary lines one by one, and if the pixel points of the intersections which are adjacent downwards in the column direction and the corresponding pixel points of the rows which are adjacent do not meet the boundary characteristics between the column symbols, determining the line direction pixel lines as the lower edge lines.
Similarly, for the lower edge line: searching downward along the column boundary line A2, a pixel point O7 is found in the column direction, with a pixel point O8 adjacent to O7 in the row direction; the white-to-black feature is satisfied between O7 and O8. The pixel point adjacent to O7 in the column direction is O9, and the pixel point adjacent to O9 in the row direction is O10; O9 and O10 do not satisfy the white-to-black feature, so O7 and O8 are determined to be points of the lower edge line. Points of the lower edge line are determined in the same way from the column boundary lines A3, A4 and A5, and the lower edge line is determined from these points; as shown in fig. 11, the lower edge line is P19.
S1080: from among a plurality of line-direction pixel lines between upper and lower edge lines of the symbol area, a line-direction pixel line satisfying a boundary feature is taken as a line boundary between each line of symbols within the symbol area.
Referring to fig. 11, the pixel lines between the upper edge line Lu and the lower edge line Ld are P1 to P19. It can be seen that the PDF417 consists of 6 rows of symbols, so 5 row boundary lines are required. From the existing pixel lines it can be determined that P4, P7, P10, P13 and P16 satisfy the boundary feature (for the boundary feature, refer to the process described above). P4 separates the row 1 symbols from the row 2 symbols, P7 separates row 2 from row 3, P10 separates row 3 from row 4, P13 separates row 4 from row 5, and P16 separates row 5 from row 6. To make the boundary lines more accurate, as shown in fig. 11, more row-direction pixel lines may be added.
S1090: and decoding the position of the symbol determined according to the line boundary and the column boundary.
Fig. 12 is a block diagram of an electronic device according to an embodiment of the present invention, where the electronic device may be a camera, a scanner, a mobile phone, a computer, or the like, and the electronic device 1200 includes: a processor 1210 and a memory 1220 for storing the processor-executable instructions;
wherein the processor 1210 is configured to:
determining a plurality of row direction pixel lines of a decoded image according to a plurality of pixel points of the decoded image containing a symbol area;
determining a column boundary between each column of symbols in the symbol region according to the pixel points of the plurality of row direction pixel lines; and determining a line boundary between each line of symbols within the symbol area from the plurality of line direction pixel lines;
and determining the position of each symbol in the symbol area according to the row boundary and the column boundary, and decoding each symbol according to the determined position.
Optionally, the processor 1210 is specifically configured to:
determining a width of a target symbol in the decoded image; the target symbol is a symbol outside a symbol area in the decoded image;
determining a reference width range of each symbol in the symbol area according to the width of the target symbol;
and searching, among the pixel points located within the reference width range on the plurality of row-direction pixel lines, for pixel points satisfying the boundary feature between columns of symbols, and determining the column boundary lines from the pixel points found.
Optionally, the processor 1210 is specifically configured to:
selecting one of a plurality of line direction pixel lines, and finding a line segment matched with the gray value of a pixel point of a preset pixel line on the selected line direction pixel line;
and taking the length of the line segment as the width of the target symbol.
Optionally, the processor 1210 is specifically configured to:
if the target symbol is the start symbol in the decoded image, determining the reference width range of each symbol in the symbol region according to the width of the start symbol and a preset error value; or
if the target symbol is the terminator in the decoded image, determining the reference width range of each symbol in the symbol region according to the width of the terminator, the preset ratio of the terminator width to the symbol width in the symbol region, and a preset error value.
Optionally, the processor 1210 is specifically configured to:
and taking a line of row-direction pixels satisfying the boundary characteristics as a line boundary between each row of symbols in the symbol area from among a plurality of line-direction pixels between an upper edge line and a lower edge line of the symbol area.
Optionally, the processor 1210 is specifically configured to:
dividing the region where the symbol is located into sub-regions with preset number according to the column direction;
determining the gray value of each sub-region according to the gray value of the pixel point of each sub-region;
aiming at any one of the preset multiple code words, determining an actual contrast value corresponding to the code word according to the code word and the gray value of the sub-regions with the preset number;
comparing the actual contrast value corresponding to the code word with a preset reference contrast value of the code word to obtain a similarity value of the code word;
and determining the code word corresponding to the symbol from preset code words according to the similarity value of the code word.
Optionally, the processor 1210 is specifically configured to:
determining a first ratio according to the gray value of the first type sub-area in the preset number of sub-areas and the number of the first type sub-areas; the first type sub-region is a sub-region of which the position of the blank sub-region in the code word corresponds to the same position in the symbol;
determining a second ratio according to the gray values of the second-type sub-regions among the preset number of sub-regions and the number of second-type sub-regions; a second-type sub-region is a sub-region of the symbol whose position corresponds to a bar sub-region of the codeword;
and taking the difference between the first ratio and the second ratio as an actual contrast value corresponding to the code word.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as a memory comprising instructions, executable by a first processor of an intelligent appliance to perform the method described above. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In the embodiment of the present invention, in addition to the electronic device described in fig. 12, the electronic device may also have a structure as shown in fig. 13, where the electronic device 1300 includes: a camera 1310, a Radio Frequency (RF) circuit 1320, a Wireless Fidelity (Wi-Fi) module 1330, a communication interface 1340, a display unit 1350, a power supply 1360, a processor 1210, a memory 1220, and the like. Those skilled in the art will appreciate that the configuration of the electronic device shown in fig. 13 does not constitute a limitation of the electronic device, and the electronic device provided in the embodiments of the present application may include more or less components than those shown, or may combine some components, or may be arranged in different components.
The following describes each component of the electronic device 1300 in detail with reference to fig. 13:
the camera 1310 is configured to implement a shooting function of the electronic device 1300, and take pictures or videos. The camera 1310 may also be used to implement a scanning function of the electronic device 1300, and scan a scanned object (two-dimensional code/barcode).
The electronic device 1300 of the present invention may capture a decoded image using the camera 1310, and transmit the decoded image to the processor 1210, so that the processor 1210 can decode the decoded image.
For the present invention, the decoded image may also be acquired by another device that connects to the electronic device of the present invention and sends the decoded image to it. Based on this, the present invention further comprises:
the RF circuit 1320 may be used for receiving and transmitting data during communication. In particular, the RF circuit 1320 sends downlink data of a base station to the processor 1210 for processing; and in addition, sending the uplink data to be sent to the base station. Generally, the RF circuit 1320 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
In addition, the RF circuitry 1320 may also communicate with networks and other electronic devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The Wi-Fi technology belongs to a short-distance wireless transmission technology, and the electronic device 1300 may connect to an Access Point (AP) through a Wi-Fi module 1330, thereby implementing Access to a data network. The Wi-Fi module 1330 may be used for receiving and transmitting data during communication.
The electronic device 1300 may be physically connected to other electronic devices via the communication interface 1340. Optionally, the communication interface 1340 is connected to the communication interfaces of the other electronic devices through a cable, so as to implement data transmission between the electronic device 1300 and the other electronic devices.
In this embodiment of the application, the electronic device 1300 is capable of implementing a communication service to send information to other contacts, so that the electronic device 1300 needs to have a data transmission function, that is, the electronic device 1300 needs to include a communication module inside. Although fig. 13 illustrates communication modules such as the RF circuit 1320, the Wi-Fi module 1330, and the communication interface 1340, it is to be appreciated that at least one of the above components or other communication modules (e.g., bluetooth modules) for enabling communication may be present in the electronic device 1300 for data transmission.
For example, when the electronic device 1300 is a mobile phone, the electronic device 1300 may include the RF circuit 1320 and may also include the Wi-Fi module 1330; when the electronic device 1300 is a computer, the electronic device 1300 may include the communication interface 1340, and may also include the Wi-Fi module 1330; when the electronic device 1300 is a tablet computer, the electronic device 1300 may include the Wi-Fi module.
In order to display the codewords decoded from the image to the user, the electronic device further includes a display unit 1350.
The display unit 1350 may be used to display information input by or provided to the user and various menus of the electronic device 1300. The display unit 1350 is a display system of the electronic device 1300, and is configured to present an interface to implement human-computer interaction.
The display unit 1350 may include a display panel 1351. Optionally, the display panel 1351 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The memory 1220 may be used to store software programs and modules. The processor 1210 executes various functional applications and data processing of the electronic device 1300 by running the software programs and modules stored in the memory 1220; after the processor 1210 executes the program code in the memory 1220, some or all of the processes in FIG. 11 of the embodiments of the present invention can be implemented.
Optionally, the memory 1220 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, various application programs (such as a communication application), a face recognition module, and the like; the data storage area may store data created during use of the electronic device (such as multimedia files like pictures and video files, and face information templates), and the like.
Further, the memory 1220 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1210 is a control center of the electronic device 1300, connects various components using various interfaces and lines, and executes various functions and processes data of the electronic device 1300 by running or executing software programs and/or modules stored in the memory 1220 and calling data stored in the memory 1220, thereby implementing various services based on the electronic device.
Optionally, the processor 1210 may include one or more processing units. Optionally, the processor 1210 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application program, and the like, and the modem processor mainly processes wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 1210.
The electronic device 1300 also includes a power source 1360 (e.g., a battery) for powering the various components. Optionally, the power source 1360 may be logically connected to the processor 1210 through a power management system, so as to implement functions of managing charging, discharging, power consumption, and the like through the power management system.
An embodiment of the present invention further provides a computer program product, which, when running on an electronic device, enables the electronic device to execute any one of the decoding methods described above in the embodiments of the present invention.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (10)
1. A decoding method, applied to an electronic device, comprising:
determining a plurality of row-direction pixel lines of a decoded image according to a plurality of pixel points of the decoded image containing a symbol region;
determining a column boundary between columns of symbols in the symbol region according to pixel points of the plurality of row-direction pixel lines, and determining a row boundary between rows of symbols in the symbol region according to the plurality of row-direction pixel lines; and
determining a position of each symbol in the symbol region according to the row boundaries and the column boundaries, and decoding each symbol according to the determined position;
wherein the determining a column boundary between columns of symbols in the symbol region according to pixel points of the plurality of row-direction pixel lines comprises:
determining a width of a target symbol in the decoded image, wherein the target symbol is a symbol outside the symbol region in the decoded image;
determining a reference width range of each symbol in the symbol region according to the width of the target symbol; and
searching, among pixel points located within the reference width range on the plurality of row-direction pixel lines, for pixel points satisfying a boundary feature between columns of symbols, and determining a column boundary line according to the found pixel points.
2. The decoding method according to claim 1, wherein the determining a width of a target symbol in the decoded image comprises:
selecting one of the plurality of row-direction pixel lines, and finding, on the selected row-direction pixel line, a line segment whose pixel-point gray values match those of a preset pixel line; and
taking a length of the line segment as the width of the target symbol.
3. The decoding method according to claim 1, wherein the determining a reference width range of each symbol according to the width of the target symbol comprises:
if the target symbol is a start symbol in the decoded image, determining the reference width range of each symbol in the symbol region according to the width of the start symbol and a preset error value; or
if the target symbol is a stop symbol in the decoded image, determining the reference width range of each symbol in the symbol region according to the width of the stop symbol, a preset width ratio of the stop symbol to the symbols in the symbol region, and the preset error value.
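A minimal sketch of claim 3's two branches; the function signature and the use of a simple ± tolerance around the rescaled width are assumptions made for illustration.

```python
def reference_width_range(target_width, err, stop_to_symbol_ratio=None):
    """Reference width range of data symbols derived from a target symbol.
    For a start symbol the measured width is used directly; for a stop
    symbol it is first rescaled by the preset stop-to-symbol width ratio."""
    if stop_to_symbol_ratio is None:
        w = target_width                       # start-symbol branch
    else:
        w = target_width / stop_to_symbol_ratio  # stop-symbol branch
    return (w - err, w + err)
```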
4. The decoding method according to claim 1, wherein the determining a row boundary between rows of symbols in the symbol region according to the plurality of row-direction pixel lines comprises:
taking, from among a plurality of row-direction pixel lines located between an upper edge line and a lower edge line of the symbol region, a row-direction pixel line satisfying a boundary feature as a row boundary between rows of symbols in the symbol region.
5. The decoding method according to any one of claims 1 to 4, wherein the decoding each symbol according to the determined position comprises:
dividing a region where the symbol is located into a preset number of sub-regions in the column direction;
determining a gray value of each sub-region according to gray values of pixel points in the sub-region;
for any one of a plurality of preset code words, determining an actual contrast value corresponding to the code word according to the code word and the gray values of the preset number of sub-regions;
comparing the actual contrast value corresponding to the code word with a preset reference contrast value of the code word to obtain a similarity value of the code word; and
determining, from the preset code words, a code word corresponding to the symbol according to the similarity values of the code words;
wherein the determining an actual contrast value corresponding to the code word according to the code word and the gray values of the preset number of sub-regions comprises:
determining a first ratio according to gray values of first-type sub-regions among the preset number of sub-regions and the number of the first-type sub-regions, wherein a first-type sub-region is a sub-region whose position in the symbol corresponds to the position of a space in the code word;
determining a second ratio according to gray values of second-type sub-regions among the preset number of sub-regions and the number of the second-type sub-regions, wherein a second-type sub-region is a sub-region whose position in the symbol corresponds to the position of a bar in the code word; and
taking a difference between the first ratio and the second ratio as the actual contrast value corresponding to the code word.
6. An electronic device, comprising:
a processor and a memory for storing processor-executable instructions;
wherein the processor is configured to:
determine a plurality of row-direction pixel lines of a decoded image according to a plurality of pixel points of the decoded image containing a symbol region;
determine a column boundary between columns of symbols in the symbol region according to pixel points of the plurality of row-direction pixel lines, and determine a row boundary between rows of symbols in the symbol region according to the plurality of row-direction pixel lines; and
determine a position of each symbol in the symbol region according to the row boundaries and the column boundaries, and decode each symbol according to the determined position;
wherein the processor is specifically configured to:
determine a width of a target symbol in the decoded image, wherein the target symbol is a symbol outside the symbol region in the decoded image;
determine a reference width range of each symbol in the symbol region according to the width of the target symbol; and
search, among pixel points located within the reference width range on the plurality of row-direction pixel lines, for pixel points satisfying a boundary feature between columns of symbols, and determine a column boundary line according to the found pixel points.
7. The electronic device of claim 6, wherein the processor is specifically configured to:
select one of the plurality of row-direction pixel lines, and find, on the selected row-direction pixel line, a line segment whose pixel-point gray values match those of a preset pixel line; and
take a length of the line segment as the width of the target symbol.
8. The electronic device of claim 6, wherein the processor is specifically configured to:
if the target symbol is a start symbol in the decoded image, determine the reference width range of each symbol in the symbol region according to the width of the start symbol and a preset error value; or
if the target symbol is a stop symbol in the decoded image, determine the reference width range of each symbol in the symbol region according to the width of the stop symbol, a preset width ratio of the stop symbol to the symbols in the symbol region, and the preset error value.
9. The electronic device of claim 6, wherein the processor is specifically configured to:
and taking a line of row-direction pixels satisfying the boundary characteristics as a line boundary between each row of symbols in the symbol area from among a plurality of line-direction pixels between an upper edge line and a lower edge line of the symbol area.
10. The electronic device according to any one of claims 6 to 9, wherein the processor is specifically configured to:
divide a region where the symbol is located into a preset number of sub-regions in the column direction;
determine a gray value of each sub-region according to gray values of pixel points in the sub-region;
for any one of a plurality of preset code words, determine an actual contrast value corresponding to the code word according to the code word and the gray values of the preset number of sub-regions;
compare the actual contrast value corresponding to the code word with a preset reference contrast value of the code word to obtain a similarity value of the code word; and
determine, from the preset code words, a code word corresponding to the symbol according to the similarity values of the code words;
wherein the processor is specifically configured to:
determine a first ratio according to gray values of first-type sub-regions among the preset number of sub-regions and the number of the first-type sub-regions, wherein a first-type sub-region is a sub-region whose position in the symbol corresponds to the position of a space in the code word;
determine a second ratio according to gray values of second-type sub-regions among the preset number of sub-regions and the number of the second-type sub-regions, wherein a second-type sub-region is a sub-region whose position in the symbol corresponds to the position of a bar in the code word; and
take a difference between the first ratio and the second ratio as the actual contrast value corresponding to the code word.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010378011.9A CN111476054B (en) | 2020-05-07 | 2020-05-07 | Decoding method and electronic equipment |
| KR1020227040154A KR102895245B1 (en) | 2020-05-07 | 2021-05-06 | System and method for barcode decoding |
| JP2022566603A JP7481494B2 (en) | 2020-05-07 | 2021-05-06 | Systems and methods for barcode decoding |
| PCT/CN2021/091910 WO2021223709A1 (en) | 2020-05-07 | 2021-05-06 | Systems and methods for barcode decoding |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010378011.9A CN111476054B (en) | 2020-05-07 | 2020-05-07 | Decoding method and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111476054A CN111476054A (en) | 2020-07-31 |
| CN111476054B true CN111476054B (en) | 2022-03-08 |
Family
ID=71757288
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010378011.9A Active CN111476054B (en) | 2020-05-07 | 2020-05-07 | Decoding method and electronic equipment |
Country Status (4)
| Country | Link |
|---|---|
| JP (1) | JP7481494B2 (en) |
| KR (1) | KR102895245B1 (en) |
| CN (1) | CN111476054B (en) |
| WO (1) | WO2021223709A1 (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111476054B (en) * | 2020-05-07 | 2022-03-08 | 浙江华睿科技股份有限公司 | Decoding method and electronic equipment |
| TWI790783B (en) * | 2021-10-20 | 2023-01-21 | 財團法人工業技術研究院 | Encoded substrate, coordinate-positioning system and method thereof |
| CN119374445B (en) * | 2024-12-27 | 2025-04-04 | 南京市标准化研究院(南京市组织机构代码管理中心) | Commodity barcode comparison device and comparison method |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101833640A (en) * | 2010-06-01 | 2010-09-15 | 福建新大陆电脑股份有限公司 | Module for calculating bar space boundary pixel points and calculating method thereof |
| CN101908122A (en) * | 2010-06-01 | 2010-12-08 | 福建新大陆电脑股份有限公司 | Bar space margin processing module, bar code identifying device and method thereof |
| CN101908126A (en) * | 2010-06-01 | 2010-12-08 | 福建新大陆电脑股份有限公司 | PDF417 barcode decoding chip |
| CN103034831A (en) * | 2011-09-30 | 2013-04-10 | 无锡爱丁阁信息科技有限公司 | Method and system for identifying linear bar code |
| CN106446750A (en) * | 2016-07-07 | 2017-02-22 | 深圳市华汉伟业科技有限公司 | Bar code reading method and device |
| CN109388999A (en) * | 2017-08-11 | 2019-02-26 | 杭州海康威视数字技术股份有限公司 | A kind of barcode recognition method and device |
Family Cites Families (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5635697A (en) * | 1989-03-01 | 1997-06-03 | Symbol Technologies, Inc. | Method and apparatus for decoding two-dimensional bar code |
| EP0565738A1 (en) * | 1990-01-05 | 1993-10-20 | Symbol Technologies, Inc. | System for encoding and decoding data in machine readable graphic form |
| KR100393423B1 (en) * | 2001-03-21 | 2003-08-02 | 김지영 | A method for recognizing 2D barcode information |
| KR100414524B1 (en) * | 2002-10-31 | 2004-01-16 | 주식회사 아이콘랩 | Two-dimensional Code having superior decoding property which is possible to control the level of error correcting codes, and method for encoding and decoding the same |
| JP2005174128A (en) * | 2003-12-12 | 2005-06-30 | Tohken Co Ltd | Code reader |
| CN100507939C (en) * | 2004-04-02 | 2009-07-01 | 西尔弗布鲁克研究有限公司 | Surfaces having coded data disposed therein or thereon |
| US8313029B2 (en) * | 2008-01-31 | 2012-11-20 | Seiko Epson Corporation | Apparatus and methods for decoding images |
| JP5246146B2 (en) * | 2009-12-01 | 2013-07-24 | コニカミノルタビジネステクノロジーズ株式会社 | Image forming apparatus and image reading apparatus |
| CN102184378B (en) * | 2011-04-27 | 2014-10-29 | 茂名职业技术学院 | Method for cutting portable data file (PDF) 417 standard two-dimensional bar code image |
| CN102521559B (en) * | 2011-12-01 | 2014-01-01 | 四川大学 | A 417 barcode recognition method based on sub-pixel edge detection |
| JP6095194B2 (en) * | 2013-03-28 | 2017-03-15 | 日本電産サンキョー株式会社 | Stack bar code reading apparatus and stack bar code reading method |
| CN111476054B (en) * | 2020-05-07 | 2022-03-08 | 浙江华睿科技股份有限公司 | Decoding method and electronic equipment |
2020
- 2020-05-07: CN application CN202010378011.9A, patent CN111476054B (active)

2021
- 2021-05-06: KR application KR1020227040154A, patent KR102895245B1 (active)
- 2021-05-06: JP application JP2022566603A, patent JP7481494B2 (active)
- 2021-05-06: WO application PCT/CN2021/091910, publication WO2021223709A1 (ceased)
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101833640A (en) * | 2010-06-01 | 2010-09-15 | 福建新大陆电脑股份有限公司 | Module for calculating bar space boundary pixel points and calculating method thereof |
| CN101908122A (en) * | 2010-06-01 | 2010-12-08 | 福建新大陆电脑股份有限公司 | Bar space margin processing module, bar code identifying device and method thereof |
| CN101908126A (en) * | 2010-06-01 | 2010-12-08 | 福建新大陆电脑股份有限公司 | PDF417 barcode decoding chip |
| CN103034831A (en) * | 2011-09-30 | 2013-04-10 | 无锡爱丁阁信息科技有限公司 | Method and system for identifying linear bar code |
| CN106446750A (en) * | 2016-07-07 | 2017-02-22 | 深圳市华汉伟业科技有限公司 | Bar code reading method and device |
| CN109388999A (en) * | 2017-08-11 | 2019-02-26 | 杭州海康威视数字技术股份有限公司 | A kind of barcode recognition method and device |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021223709A1 (en) | 2021-11-11 |
| JP7481494B2 (en) | 2024-05-10 |
| KR20230002813A (en) | 2023-01-05 |
| KR102895245B1 (en) | 2025-12-05 |
| JP2023525500A (en) | 2023-06-16 |
| CN111476054A (en) | 2020-07-31 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111476054B (en) | Decoding method and electronic equipment | |
| US11468255B2 (en) | Two-dimensional code and method, terminal, and apparatus for recognizing two-dimensional code | |
| CN109977935B (en) | Text recognition method and device | |
| US12380302B2 (en) | Method, device, and system for generating, repairing, and identifying an incomplete QR code | |
| CN101882227B (en) | Recognition method and system based on image matching and network searching | |
| US8908975B2 (en) | Apparatus and method for automatically recognizing a QR code | |
| EP3882822A1 (en) | Encoded pattern processing method and device , storage medium and electronic device | |
| CN112613348B (en) | Character recognition method and electronic equipment | |
| EP1841215A2 (en) | Apparatus and method for out-of-focus shooting using portable terminal | |
| US6902113B2 (en) | Selection of colors for color bar codes | |
| CN102799920A (en) | Two-dimensional code generation system and method and two-dimensional code identification system and method in combination with image | |
| US12423773B2 (en) | Image data processing method for laser imaging, computer device, and computer-readable storage medium | |
| US20120051633A1 (en) | Apparatus and method for generating character collage message | |
| CN113496133B (en) | Two-dimensional code identification method and device, electronic equipment and storage medium | |
| CN117540762A (en) | Bar code identification method, device, equipment and readable storage medium | |
| CN112866797A (en) | Video processing method and device, electronic equipment and storage medium | |
| CN112733568B (en) | One-dimensional bar code recognition method, device, equipment and storage medium | |
| JP4415010B2 (en) | Two-dimensional code region extraction method, two-dimensional code region extraction device, electronic device, two-dimensional code region extraction program, and recording medium recording the program | |
| KR20190014223A (en) | Qr code, and terminal using the same | |
| CN113822280B (en) | Text recognition method, device, system and nonvolatile storage medium | |
| EP3345126B1 (en) | Method and system for correction of an image from a hand-held scanning device | |
| US20170061182A1 (en) | Method for processing information from a hand-held scanning device | |
| CN113537218A (en) | Image recognition method and device | |
| US9692929B2 (en) | Method and system for correction of an image from a hand-held scanning device | |
| CN117636319A (en) | License plate recognition method and device, nonvolatile storage medium and electronic equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| CB02 | Change of applicant information | Address after: C10, No. 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province. Applicant after: Zhejiang Huarui Technology Co.,Ltd. Address before: C10, No. 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province. Applicant before: ZHEJIANG HUARAY TECHNOLOGY Co.,Ltd. |
| CB02 | Change of applicant information | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |