
CN120449914A - Pattern encoding method, pattern decoding method, device, medium and equipment - Google Patents

Pattern encoding method, pattern decoding method, device, medium and equipment

Info

Publication number
CN120449914A
CN120449914A (Application CN202510535769.1A)
Authority
CN
China
Prior art keywords
coding
target
matrix
coding matrix
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510535769.1A
Other languages
Chinese (zh)
Inventor
Li Yulong (李玉龙)
Qian Feng (钱烽)
Xu Shiqi (许诗起)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Blockchain Technology Shanghai Co Ltd
Original Assignee
Ant Blockchain Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ant Blockchain Technology Shanghai Co Ltd filed Critical Ant Blockchain Technology Shanghai Co Ltd
Priority to CN202510535769.1A priority Critical patent/CN120449914A/en
Publication of CN120449914A publication Critical patent/CN120449914A/en


Landscapes

  • Image Processing (AREA)

Abstract

The application provides a pattern encoding method, a pattern decoding method, a device, a medium, and equipment. The pattern encoding method comprises: obtaining an original pattern and identification information corresponding to a target object; generating a first coding matrix by encrypting the identification information; determining a first target area corresponding to the first coding matrix in the original pattern; and performing color increment adjustment on the first target area to obtain a target pattern, wherein the target pattern comprises a second target area corresponding to the first target area, and the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the second target area differ in color. Based on this scheme, the concealment of the identification information can be improved.

Description

Pattern encoding method, pattern decoding method, device, medium and apparatus
Technical Field
The present specification relates to the field of computer technology, and more particularly to a pattern encoding method, a pattern decoding method, an apparatus, a medium, and a device.
Background
Channeling (unauthorized product diversion) refers to transferring goods from their designated sales territory to other regions for sale, which both disturbs market order and harms brand interests. In anti-channeling scenarios, the identification information of goods is particularly important: it is used to track the circulation path of goods so that channeling behavior can be effectively identified and curbed. However, the identification information of an article is often carried on a pattern on its packaging, and related parties may disable or illegally read it by painting over, covering, or replacing the packaging. There is therefore a need to improve the concealment of the identification information of articles.
Disclosure of Invention
The specification provides a pattern encoding method, a pattern decoding method, a device, a medium, and equipment; the methods can improve the concealment of identification information without depending on specialized equipment for recognition.
In a first aspect, a pattern encoding method is provided, including:
Acquiring an original pattern and identification information corresponding to a target object;
Encrypting according to the identification information to generate a first coding matrix, wherein the first coding matrix comprises a first coding element and a second coding element with different binary logic states;
Determining a first target area corresponding to the first coding matrix in the original pattern;
And performing color increment adjustment on the first target area to obtain a target pattern, wherein the target pattern comprises a second target area corresponding to the first target area, and the colors of a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the second target area are different.
In a second aspect, there is provided a pattern decoding method, including:
Acquiring a captured image of a target object, wherein the captured image comprises a target pattern, the target pattern comprises a second target area corresponding to a first coding matrix, the first coding matrix comprises a first coding element and a second coding element with different binary logic states, and the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the second target area differ in color;
Performing binarization processing on the captured image to obtain a binarized image, and determining a third target area corresponding to the second target area from the binarized image, wherein the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the third target area differ in gray value;
Acquiring the first coding matrix according to the third target area;
Decrypting the first coding matrix to obtain the identification information corresponding to the target object.
In a third aspect, there is provided a pattern encoding apparatus comprising:
An acquisition unit, configured to acquire the original pattern and identification information corresponding to the target object;
A generating unit, configured to generate a first coding matrix by encrypting the identification information, wherein the first coding matrix comprises a first coding element and a second coding element with different binary logic states;
A determining unit, configured to determine a first target area corresponding to the first coding matrix in the original pattern;
An adjusting unit, configured to perform color increment adjustment on the first target area to obtain a target pattern, wherein the target pattern comprises a second target area corresponding to the first target area, and the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the second target area differ in color.
In a fourth aspect, there is provided a pattern decoding apparatus including:
A first acquisition unit, configured to acquire a captured image of a target object, wherein the captured image comprises a target pattern, the target pattern comprises a second target area corresponding to a first coding matrix, the first coding matrix comprises a first coding element and a second coding element with different binary logic states, and the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the second target area differ in color;
A processing unit, configured to perform binarization processing on the captured image to obtain a binarized image, and to determine a third target area corresponding to the second target area from the binarized image, wherein the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the third target area differ in gray value;
A second acquisition unit, configured to acquire the first coding matrix according to the third target area;
A decryption unit, configured to decrypt the first coding matrix to obtain the identification information corresponding to the target object.
In a fifth aspect, a computer readable storage medium is provided, the computer readable storage medium storing computer program code which, when executed, implements the method described above.
In a sixth aspect, an electronic device is provided comprising a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of the method described above.
In a seventh aspect, a computer program product is provided, which stores at least one instruction that when executed by a processor implements the steps of the method described above.
In the embodiments of this specification, on the one hand, the original pattern and identification information corresponding to the target object are obtained and a first coding matrix is generated by encrypting the identification information; on the other hand, a first target area corresponding to the first coding matrix is determined in the original pattern and color increment adjustment is performed on it to obtain the target pattern, so that the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the second target area differ in color. Because the first coding matrix is generated through encryption, it is tamper-resistant and not easily cracked; because the color increment adjustment leaves only a small difference between the second target area and its surroundings, sufficient concealment is provided without requiring specialized equipment for recognition. The problems in the related art of insufficient concealment of identification information, or of dependence on specialized recognition equipment, are thus effectively solved.
On the decoding side, a captured image of the target object is obtained, providing reliable input for subsequent processing; the captured image contains a second target area corresponding to the first coding matrix. Binarization processing of the captured image yields a third target area corresponding to the second target area, in which the sub-regions corresponding to the first coding element can be distinguished from those corresponding to the second coding element. The first coding matrix is then obtained from the third target area and decrypted to recover the identification information corresponding to the target object. Accurate extraction of the first coding matrix is thus ensured, the identification information is recovered through decryption, and reliable data is provided for the subsequent anti-channeling verification process.
Drawings
FIG. 1 is a schematic diagram of an anti-channeling scenario provided by an embodiment of the present specification;
FIG. 2 is a schematic flow chart of a pattern encoding method according to an embodiment of the present specification;
FIG. 3 is an exemplary schematic diagram of a first coding matrix provided in an embodiment of the present specification;
FIG. 4 is an exemplary schematic diagram of converting an original pattern into a target pattern according to an embodiment of the present specification;
FIG. 5 is an exemplary schematic diagram of sub-regions corresponding to coding elements according to an embodiment of the present specification;
FIG. 6 is an exemplary schematic diagram of a trace-source pattern provided in an embodiment of the present specification;
FIG. 7 is a schematic flow chart of a color increment adjustment according to an embodiment of the present specification;
FIG. 8 is a schematic flow chart of another color increment adjustment according to an embodiment of the present specification;
FIG. 9 is a schematic flow chart of encrypting a first coding matrix according to an embodiment of the present specification;
FIG. 10 is a schematic flow chart of a redundancy encoding process provided in an embodiment of the present specification;
FIG. 11 is an exemplary schematic diagram of a positioning coding matrix provided in an embodiment of the present specification;
FIG. 12 is a schematic flow chart of a pattern decoding method according to an embodiment of the present specification;
FIG. 13 is a schematic flow chart of a binarization process according to an embodiment of the present specification;
FIG. 14 is a schematic flow chart of determining a third target area according to an embodiment of the present specification;
FIG. 15 is a schematic flow chart of decrypting identification information according to an embodiment of the present specification;
FIG. 16 is a schematic structural diagram of a pattern encoding apparatus according to an embodiment of the present specification;
FIG. 17 is a schematic structural diagram of a pattern decoding apparatus according to an embodiment of the present specification;
FIG. 18 is a schematic structural diagram of an electronic device according to an embodiment of the present specification.
Detailed Description
The technical solutions in this specification will be described clearly and completely with reference to the accompanying drawings. In the description of the embodiments of this specification, unless otherwise indicated, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" covers three cases: A alone, both A and B, and B alone. Further, in the description of the embodiments of this specification, "a plurality of" means two or more.
The terms "first," "second," and the like are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features concerned. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features.
Channeling (unauthorized product diversion) refers to transferring goods from their designated sales territory to other regions for sale, which both disturbs market order and harms brand interests. In anti-channeling scenarios, the identification information of goods is particularly important: it is used to track the circulation path of goods so that channeling behavior can be effectively identified and curbed. However, the identification information of an article is often carried on a pattern on its packaging, and related parties may disable or illegally read it by painting over, covering, or replacing the packaging. There is therefore a need to improve the concealment of the identification information of articles.
The identification information of an article is information uniquely associated with that article, used to identify and track its identity and circulation path. The identification information may be unique, i.e., each article corresponds to its own identification information (such as a unique serial number per item), or a type or batch of articles may share identification information (such as the batch number of a production lot). Illustratively, the identification information may include a unique article ID, the date of production, the production lot, the sales area, and so on, and may be used to track the entire circulation process from production to sale.
Referring to fig. 1, fig. 1 is a schematic diagram of an anti-channeling scenario provided in an embodiment of the present specification. The user side and the server side communicate to acquire and analyze the identification information. The user side may be a smartphone, tablet computer, or other portable device; the server side may be a server of the brand owner or a related service platform, or a cloud platform. The packaging pattern of the target article includes an area carrying identification information, which may or may not be encrypted. By photographing the target article, the user side obtains a captured image containing the identification information and uploads it to the server side, which receives and analyzes the captured image to extract the identification information of the target article. The server side can then use the identification information to perform various operations, such as verifying the authenticity of the article, tracking its circulation path, and identifying channeling behavior.
In one related art, the identification information of an article is recorded using an explicit two-dimensional code. The two-dimensional code has the advantages of large information capacity and ease of generation and scanning, but the disadvantage of being too conspicuous and thus easily found and tampered with. For example, related parties can evade tracking by painting over or covering the two-dimensional code, rendering it invalid or unreadable. In addition, its conspicuousness makes the two-dimensional code easy to forge, further reducing its effectiveness against channeling.
In another related art, digital watermarking is used to record the identification information of an article. Digital watermarking is a technique for embedding information into an image; it offers a degree of concealment and is not easily perceived by the naked eye. However, its identification information is still directly embedded and can be extracted and tampered with by professionals. For example, related parties can extract the identification information from the digital watermark through image processing techniques and tamper with or forge it, thereby evading brand tracking.
In yet another related art, micron-sized code dots are used to record the identification information of an article. These are very small carriers of identification information, invisible to the human eye and therefore offering a degree of concealment. However, they require specialized equipment to read, adding complexity and cost. In addition, the reading accuracy of micron-sized code dots is affected by environmental factors such as illumination and packaging materials, further reducing their reliability in practice.
In summary, the above related technologies all have problems in anti-channeling scenarios: two-dimensional codes are overly conspicuous, digital watermarks offer limited concealment, and micron-sized code dots require specialized equipment. To solve these problems, the present application provides a pattern encoding method and a pattern decoding method based on color invisibility.
The pattern encoding method first obtains the original pattern and identification information corresponding to the target object; second, generates a first coding matrix by encrypting the identification information; and third, determines a first target area corresponding to the first coding matrix in the original pattern and performs color increment adjustment on it to obtain the target pattern, so that the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the second target area differ in color. Because the first coding matrix is generated through encryption, it is tamper-resistant and not easily cracked; because the difference between the color-adjusted second target area and its surroundings is small, sufficient concealment is provided without requiring specialized equipment for recognition. Through the combination of these steps, a pattern encoding method is provided that is concealed and does not depend on specialized recognition equipment, effectively solving the related-art problems of insufficient concealment of identification information or dependence on specialized equipment.
The pattern decoding method first obtains a captured image of the target object, providing reliable input for subsequent processing. The captured image contains a second target area corresponding to the first coding matrix, in which the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element differ in color; this color difference provides the basic feature for decoding. The method then performs binarization processing on the captured image to obtain a binarized image and determines from it a third target area corresponding to the second target area. Finally, the first coding matrix is obtained from the third target area and decrypted to recover the identification information corresponding to the target object. Through the combination of these steps, accurate extraction of the first coding matrix is ensured and the identification information is recovered through decryption, providing reliable data for subsequent anti-channeling verification.
Based on the scenario shown in fig. 1, the pattern encoding method and the pattern decoding method provided in the embodiments of the present disclosure will be described in detail below with reference to fig. 2 to 15.
Referring to fig. 2, a schematic flow chart of a pattern encoding method is provided in the embodiment of the present disclosure. As shown in fig. 2, the method of the embodiment of the present specification may include the following steps S102 to S108.
S102, acquiring an original pattern corresponding to the target object and identification information.
Specifically, the target object in this embodiment may be any article, such as food, medicine, or an electronic product. The original pattern corresponding to the target object is a pattern that has not undergone encoding processing, and the identification information corresponding to the target object is information uniquely associated with it, mainly used to uniquely identify the article. For example, the identification information may include a unique ID of the target article, and may further include auxiliary information such as its production date, production lot, and sales area.
Regarding the process of obtaining the original pattern and identification information corresponding to the target object: in some possible implementations, both are pre-stored and can be obtained directly by querying a database or file. In some possible implementations, the packaging of the target article may be scanned to acquire the original pattern, and the identification information read from a database or from a tag on the article. In some possible implementations, the original pattern and identification information entered by a user may be received.
S104, encrypting according to the identification information to generate a first coding matrix, wherein the first coding matrix comprises a first coding element and a second coding element with different binary logic states.
Specifically, the first coding matrix in this embodiment is a matrix structure formed by arranging a plurality of coding elements according to a specific rule, used to carry the encrypted identification information. The first coding matrix comprises a first coding element and a second coding element with different binary logic states; that is, the two kinds of elements represent different binary logic states. For example, the first coding element may represent the binary logic state "0" and the second coding element "1", or the first coding element may represent "1" and the second "0".
For an understanding of the first coding matrix, refer to fig. 3, an exemplary schematic diagram of the first coding matrix in this embodiment. The first coding matrix is a matrix of binary 0s and 1s; each element of the matrix is one coding element, and the arrangement order and positions of the coding elements are determined by the encryption rule. Here the first coding element represents the binary logic state "1" and the second coding element represents "0". Note that the size of the first coding matrix may be adjusted to practical needs; for example, a 13×13, 8×8, 16×16, or other matrix size may be used.
Regarding the process of generating the first coding matrix by encrypting the identification information: in some possible implementations, the identification information may be encrypted with a preset key to produce encrypted data, from which the first coding matrix is generated. In some possible implementations, the identification information may be encrypted without a key. In some possible implementations, other methods such as a hash algorithm, a symmetric encryption algorithm, or an asymmetric encryption algorithm may be used to encrypt the identification information and generate the first coding matrix from the encrypted data.
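The keyed variant above can be made concrete. The sketch below is an assumption of this edit rather than the patent's algorithm (the patent names hash, symmetric, and asymmetric options without fixing one): it XORs the identification information with a SHA-256 counter-mode keystream derived from a preset key, prefixes a one-byte length so padding can be stripped at decode time, and lays the ciphertext bits out as a 13×13 binary matrix of first/second coding elements.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # SHA-256 counter-mode keystream; a stand-in for the keyed cipher,
    # which the patent does not specify.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encode_matrix(identification: str, key: bytes, size: int = 13) -> list[list[int]]:
    """Encrypt the identification information and arrange the ciphertext
    bits as a size x size binary coding matrix (0/1 = the two coding
    elements). The one-byte length prefix is an assumption of this sketch."""
    payload = identification.encode("utf-8")
    payload = bytes([len(payload)]) + payload
    assert len(payload) * 8 <= size * size, "identification too long for matrix"
    cipher = bytes(p ^ s for p, s in zip(payload, _keystream(key, len(payload))))
    bits = [(byte >> (7 - i)) & 1 for byte in cipher for i in range(8)]
    bits += [0] * (size * size - len(bits))  # zero-pad to fill the grid
    return [bits[r * size:(r + 1) * size] for r in range(size)]

matrix = encode_matrix("LOT-42", b"demo-key")
```

A 13×13 grid holds 169 bits, i.e. at most 21 ciphertext bytes, so longer identifiers would need a larger matrix or the redundancy encoding the patent's drawings allude to.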
S106, determining a first target area corresponding to the first coding matrix in the original pattern.
Specifically, the first target area in this embodiment is the area of the original pattern that carries the first coding matrix; its size and shape match the first coding matrix so that the matrix can be embedded in it completely. The first coding matrix and the first target area are associated in that the size of the matrix determines how the area is divided. For example, if the first coding matrix is a 13×13 matrix, the first target area is equally divided into a 13×13 grid, each cell corresponding to one coding element, so that every coding element of the first coding matrix can be accurately mapped to a corresponding sub-region of the first target area.
Regarding the process of determining the first target area corresponding to the first coding matrix in the original pattern: in some possible implementations, a rectangular area whose size is proportional to the size of the first coding matrix may be selected in the original pattern by a preset rule. For example, for a 13×13 matrix the first target area may be a 13×13 mm rectangle, with the specific size adjusted to actual requirements. In some possible implementations, an image processing algorithm may automatically identify a suitable region (uniform in color, simple in texture, and so on) as the first target area, to ensure that the subsequent color increment adjustment can succeed. In some possible implementations, the user may manually select a region of the original pattern as the first target area through an interactive interface, sizing and shaping it according to the first coding matrix.
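The grid division implied by the preset-rule option above can be sketched as follows; the (x, y, width, height) rectangle convention and the function name are illustrative choices, not details from the patent.

```python
def cell_rects(region: tuple[int, int, int, int], n: int) -> list[list[tuple[int, int, int, int]]]:
    """Split a rectangular first target area (x, y, w, h in pixels) into an
    n x n grid, returning the pixel rectangle for each coding element so
    that matrix element (r, c) maps to exactly one sub-region."""
    x, y, w, h = region
    rects = []
    for r in range(n):
        row = []
        for c in range(n):
            # Integer cell edges; rounding differences are absorbed between cells.
            x0, x1 = x + c * w // n, x + (c + 1) * w // n
            y0, y1 = y + r * h // n, y + (r + 1) * h // n
            row.append((x0, y0, x1 - x0, y1 - y0))
        rects.append(row)
    return rects

grid = cell_rects((100, 50, 130, 130), 13)  # a 130 x 130 px region for a 13 x 13 matrix
```

The returned rectangles tile the region exactly, which is what lets encoding and decoding agree on which pixels belong to which coding element.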
S108, performing color increment adjustment on the first target area to obtain a target pattern, wherein the target pattern comprises a second target area corresponding to the first target area, and the colors of the sub-areas corresponding to the first coding elements and the sub-areas corresponding to the second coding elements in the second target area are different.
Specifically, the color increment in this embodiment is a value used to adjust the color values of each sub-region of the first target area; it may be positive, negative, or zero, and changes a color attribute of the sub-region such as brightness, hue, or saturation. Note that the adjustment range of the color increment is generally small, so that the visual difference between the second target area and the original pattern remains inconspicuous, enhancing concealment.
The target pattern is the pattern after color increment adjustment: it retains the overall visual effect of the original pattern while embedding the information of the first coding matrix in the second target area. The target pattern includes a second target area corresponding to the first target area, namely the area formed by applying the color increment adjustment to the first target area; the size and shape of the second target area are consistent with those of the first target area, but its color attributes are changed. The first target area is the area before the adjustment and the second target area the area after it; they correspond in spatial position but differ in color attributes. That the sub-regions corresponding to the first coding element and to the second coding element in the second target area differ in color means that, within the second target area, the two kinds of sub-regions differ in a color attribute such as brightness, hue, or saturation, which provides the basic feature for the subsequent decoding process.
In some possible implementations, the color increment adjustment may obtain a first color increment corresponding to the first coding element and a second, different color increment corresponding to the second coding element, then adjust the sub-regions corresponding to the first coding element in the first target area by the first increment and the sub-regions corresponding to the second coding element by the second increment, to obtain the target pattern. In some possible implementations, a third color increment corresponding to a target coding element (one of the first and second coding elements) may be obtained, and only the sub-regions corresponding to that target coding element adjusted, to obtain the target pattern.
Note that the above-described color increment adjustment may be performed in an RGB color space, a YUV color space, or other color spaces, which is not limited thereto. The target pattern according to the present embodiment may be applied to the package of the target object in various forms such as spraying, pasting, printing, etc., and the expression form may be text, graphics, symbols, or a combination thereof, which is not limited.
For easy understanding of the embodiment, please refer to fig. 4-5.
Fig. 4 is a schematic diagram illustrating an example of converting an original pattern into a target pattern according to an embodiment of the present disclosure. The first target area in the original pattern is a rectangular area, the target pattern can be obtained after the processing of the related steps, the target pattern comprises a second target area with the same size as the first target area, and the color difference is reflected after the color increment adjustment of each sub-area in the second target area.
Fig. 5 is an exemplary schematic diagram of a sub-region corresponding to a coding element according to an embodiment of the present disclosure. The sub-region corresponding to the first coding element is a region with darker color, and the sub-region corresponding to the second coding element is a region with lighter color, so that the color difference between the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the second target region is embodied.
In the embodiment, the original pattern and the identification information corresponding to the target object are first obtained, the first coding matrix is then generated through encryption according to the identification information, the first target area corresponding to the first coding matrix is determined in the original pattern, and the color increment of the first target area is adjusted to obtain the target pattern, so that the colors of the sub-area corresponding to the first coding element and the sub-area corresponding to the second coding element in the second target area are different. The encryption process makes the first coding matrix tamper-resistant and not easily cracked, and the difference between the second target area and the peripheral area after the color increment adjustment is small, providing sufficient concealment while remaining identifiable without professional equipment. Thus, through the combination of the above steps, a pattern coding method is provided that is concealed and does not depend on professional equipment for identification, effectively solving the problem in the related art that identification information is either insufficiently concealed or depends on professional equipment for identification.
In an embodiment, the target pattern includes a two-dimensional code region, and the two-dimensional code region and the second target region do not overlap each other.
Specifically, an explicit two-dimensional code can be carried in the two-dimensional code area, and the two-dimensional code is used for guiding a user to shoot.
For example, please refer to fig. 6, which is a schematic diagram illustrating an exemplary tracing pattern according to an embodiment of the present disclosure. The two-dimensional code area can bear a traceability two-dimensional code, wherein the traceability two-dimensional code refers to a two-dimensional code uniquely associated with a target object and contains traceability information of the target object, such as production information, circulation information, sales information and the like. The user terminal can be a smart phone, a tablet personal computer or other portable equipment, and the user can operate the user terminal to shoot the target object to obtain a shooting image, wherein the shooting image contains the target pattern. The user side uploads the shot image to the server side so that the server side can analyze the tracing information of the target object by using the tracing two-dimensional code, and the tracing information is fed back to the user side so that the user can check tracing details of the target object.
Meanwhile, the shot image also includes the second target area of the target pattern, so the server side can restore the identification information corresponding to the target commodity by using the content of the second target area, and use the identification information to realize anti-channel conflict verification. It should be noted that the traceability two-dimensional code may be replaced by an anti-counterfeit two-dimensional code or another two-dimensional code for guiding the user to shoot, which is not limited.
It will be appreciated that the average user is generally not concerned with channel conflict prevention, but is concerned with tracing items. According to the embodiment, the two-dimensional code area is arranged in the target pattern, and the traceability two-dimensional code is borne in the two-dimensional code area, so that a common user can actively provide information for anti-channel conflict verification. Specifically: the common user, motivated by item tracing, actively operates the user terminal to shoot the target object and uploads the shot image to the server side; while parsing the tracing two-dimensional code, the server side can also acquire the second target area in the shot image, restore the identification information corresponding to the target commodity, and thereby realize anti-channel conflict verification. Therefore, combining the tracing two-dimensional code with the second target area both meets the demand of common users for tracing items and provides a necessary information source for anti-channel conflict verification, achieving both initiative and concealment in that verification. In addition, because the two-dimensional code area and the second target area do not overlap, interference of the two-dimensional code area with the second target area is avoided, the integrity and identifiability of the first coding matrix in the second target area are ensured, and the accuracy and reliability of anti-channel conflict verification are further improved. Meanwhile, the introduction of the traceability two-dimensional code enhances the practicality of the target pattern: the target pattern can both bear identification information and provide a tracing function, further increasing its application value.
Referring to fig. 7, a flowchart of color increment adjustment is provided in the embodiment of the present disclosure. As shown in fig. 7, the method of the embodiment of the present disclosure may include the following steps S202 to S204, and steps S202 to S204 may be used as refinement steps for step S108 of the embodiment shown in fig. 2.
S202, acquiring a first color increment corresponding to a first coding element and a second color increment corresponding to a second coding element, wherein the first color increment and the second color increment are different;
S204, performing color increment adjustment on the sub-region corresponding to the first coding element in the first target region according to the first color increment, and performing color increment adjustment on the sub-region corresponding to the second coding element in the first target region according to the second color increment, so as to obtain a target pattern.
Specifically, the first color increment and the second color increment related to the present embodiment are numerical values for performing color adjustment on sub-areas corresponding to the first coding element and the second coding element, respectively. The first color increment and the second color increment are different, which means that the two color increments are different in value so as to ensure that the sub-areas corresponding to the first coding element and the second coding element can be distinguished in color attribute.
Regarding the process of obtaining the first color increment corresponding to the first coding element and the second color increment corresponding to the second coding element, in some possible implementations, the first color increment and the second color increment may be directly specified according to a preset rule. For example, the preset rule may specify that each color component of the sub-region corresponding to the first coding element increases by 5 units in the RGB color space, and each color component of the sub-region corresponding to the second coding element decreases by 5 units in the RGB color space. In some possible implementations, the first color increment and the second color increment may be generated from identification information of the target item. In some possible implementations, the first color increment and the second color increment may be obtained by way of user input, e.g., a user may input specific values of the first color increment and the second color increment through an interactive interface.
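For the implementation that generates the increments from the identification information, a minimal sketch is given below; the SHA-256-based rule, the function name and the bound on the magnitude are illustrative assumptions, not requirements of this embodiment:

```python
import hashlib

def increments_from_id(identification: bytes) -> tuple[int, int]:
    """Derive a symmetric pair of color increments from the item's
    identification information (one possible preset rule)."""
    digest = hashlib.sha256(identification).digest()
    magnitude = (digest[0] % 10) + 1   # keep the magnitude within 1..10 units
    return magnitude, -magnitude       # first and second color increments

d1, d2 = increments_from_id(b"item-0001")
```

Deriving both increments from the same hash keeps the pair reproducible at the server side given the identification information.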
It should be noted that performing color increment adjustment on the sub-area corresponding to the first coding element in the first target area according to the first color increment means adding the first color increment to the original color value of that sub-area. The original color value of the sub-area corresponding to the first coding element may be the mean or mode of the original color values of the pixels in the sub-area, the mean or mode of the original color values of the pixels in the whole first target area, or a representative color selected by other means. Similarly, performing color increment adjustment on the sub-area corresponding to the second coding element in the first target area according to the second color increment means adding the second color increment to the original color value of that sub-area, where the original color value may likewise be the mean or mode of the original color values of the pixels in the sub-area or in the first target area, or a representative color selected by other means.
Regarding the process of performing color increment adjustment on the sub-region corresponding to the first coding element in the first target region according to the first color increment and performing color increment adjustment on the sub-region corresponding to the second coding element in the first target region according to the second color increment, in some possible implementations, the sub-region in the first target region may be color adjusted by an image processing algorithm. For example, color delta adjustment may be achieved by adjusting the RGB values, YUV values, or other color space values of the sub-regions. In some possible implementations, the sub-region in the first target region may be color adjusted by invoking an image processing library or software. For example, a color adjustment function in the OpenCV library may be invoked to make color delta adjustments to the sub-regions.
Illustratively, for the RGB color space, the color delta adjustment can be achieved by the following formula:
for the sub-region corresponding to the first coding element, the RGB values thereof are adjusted to be:
R'=R+ΔR1
G'=G+ΔG1
B'=B+ΔB1
wherein R, G and B are the original RGB values of the sub-region corresponding to the first coding element, ΔR1, ΔG1 and ΔB1 are the first color increment, and |ΔR1|, |ΔG1| and |ΔB1| are each less than or equal to 10.
For the sub-region corresponding to the second coding element, the RGB values thereof are adjusted to:
R'=R+ΔR2
G'=G+ΔG2
B'=B+ΔB2
wherein, deltaR 2, deltaG 2 and DeltaB 2 are the second color increment, and |DeltaR 2|, |DeltaG2|, and |DeltaB 2| are less than or equal to 10.
Through the above adjustment, the sub-areas corresponding to the first coding element and the second coding element are ensured to have a distinguishable color difference in the RGB color space, while the adjustment amplitude is controlled within 10 units, avoiding an excessive visual impact on the original pattern.
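The RGB formulas above can be sketched in a few lines; the pixel values, increments, region layout and function name are illustrative, and each channel is clamped to the valid 0..255 range (a practical detail the formulas leave implicit):

```python
def adjust_rgb(pixel, delta):
    """Add a per-channel color increment, clamping each channel to 0..255."""
    return tuple(min(255, max(0, c + d)) for c, d in zip(pixel, delta))

# Hypothetical increments within the +/-10-unit bound described above.
delta1 = (5, 5, 5)     # first color increment (first coding element)
delta2 = (-5, -5, -5)  # second color increment (second coding element)

# A toy 2x2 first target area and its matching coding matrix
# (1 = first coding element, 0 = second coding element).
region = [[(120, 130, 140), (120, 130, 140)],
          [(120, 130, 140), (120, 130, 140)]]
matrix = [[1, 0],
          [0, 1]]

adjusted = [[adjust_rgb(px, delta1 if bit else delta2)
             for px, bit in zip(row, bits)]
            for row, bits in zip(region, matrix)]
```

The same structure applies to the YUV variant below, with the channels reinterpreted as Y, U and V.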
Illustratively, for the YUV color space, the color delta adjustment can be achieved by the following formula:
for the sub-region corresponding to the first coding element, its YUV value is adjusted to:
Y'=Y+ΔY1
U'=U+ΔU1
V'=V+ΔV1
Wherein Y, U and V are the original YUV values of the sub-region corresponding to the first coding element, ΔY1, ΔU1 and ΔV1 are the first color increment, and |ΔY1|, |ΔU1| and |ΔV1| are each less than or equal to 10.
For the sub-region corresponding to the second coding element, its YUV value is adjusted to:
Y'=Y+ΔY2
U'=U+ΔU2
V'=V+ΔV2
wherein, deltaY 2, deltaU 2 and DeltaV 2 are second color increment, and |DeltaY 2|, |DeltaU 2|, and |DeltaV 2| are less than or equal to 10.
Through the above adjustment, the sub-areas corresponding to the first coding element and the second coding element are ensured to have a distinguishable color difference in the YUV color space, while the adjustment amplitude is controlled within 10 units, avoiding an excessive visual impact on the original pattern.
In the embodiment, firstly, a first color increment corresponding to a first coding element and a second color increment corresponding to a second coding element are obtained to ensure that the two are different in numerical value, secondly, color increment adjustment is carried out on a subarea corresponding to the first coding element in a first target area according to the first color increment, and color increment adjustment is carried out on a subarea corresponding to the second coding element in the first target area according to the second color increment, so that the colors of the subarea corresponding to the first coding element and the subarea corresponding to the second coding element in the second target area are different. Wherein the magnitude of the color delta adjustment is typically small to ensure that the second target area is not visually distinct from the original pattern, thereby enhancing concealment.
Referring to fig. 8, a flowchart of color increment adjustment is provided in the embodiment of the present disclosure. As shown in fig. 8, the method of the embodiment of the present disclosure may include the following steps S302 to S304, and steps S302 to S304 may be used as refinement steps to step S108 of the embodiment shown in fig. 2.
S302, acquiring a third color increment corresponding to a target coding element, wherein the target coding element is one of a first coding element and a second coding element;
S304, performing color increment adjustment on the subarea corresponding to the target coding element in the first target area according to the third color increment to obtain a target pattern.
Specifically, the target coding element according to this embodiment is one of the first coding element and the second coding element, and the third color increment is a value for performing color adjustment on the sub-region corresponding to the target coding element. The value of the third color increment may be positive, negative or zero for changing a color attribute of the sub-area, such as brightness, hue or saturation.
With respect to the process of obtaining the third color increment corresponding to the target encoding element, in some possible implementations, the third color increment may be directly specified according to a preset rule. For example, the preset rule may specify that each color component of the sub-region corresponding to the target coding element in the RGB color space is increased by 5 units or decreased by 5 units. In some possible implementations, the third color increment may be generated based on identification information of the target item. In some possible implementations, the third color increment may be obtained by way of user input, e.g., a user may input a specific value of the third color increment through an interactive interface.
It should be noted that performing color increment adjustment on the sub-region corresponding to the target coding element in the first target region according to the third color increment means adding the third color increment to the original color value of that sub-region. The original color value of the sub-region corresponding to the target coding element may be the mean or mode of the original color values of the pixels in the sub-region, the mean or mode of the original color values of the pixels in the whole first target region, or a representative color selected by other means.
Regarding the process of performing color increment adjustment on the sub-region corresponding to the target coding element in the first target region according to the third color increment, in some possible implementations, the sub-region in the first target region may be subjected to color adjustment by an image processing algorithm. For example, color delta adjustment may be achieved by adjusting the RGB values, YUV values, or other color space values of the sub-regions. In some possible implementations, the sub-region in the first target region may be color adjusted by invoking an image processing library or software. For example, a color adjustment function in the OpenCV library may be invoked to make color delta adjustments to the sub-regions.
Illustratively, for the RGB color space, the color delta adjustment can be achieved by the following formula:
for the sub-region corresponding to the target coding element, the RGB values are adjusted to be:
R'=R+ΔR3
G'=G+ΔG3
B'=B+ΔB3
Wherein R, G and B are the original RGB values of the sub-region corresponding to the target coding element, ΔR3, ΔG3 and ΔB3 are the third color increment, and |ΔR3|, |ΔG3| and |ΔB3| are each less than or equal to 10.
Illustratively, for the YUV color space, the color delta adjustment can be achieved by the following formula:
For the sub-region corresponding to the target coding element, its YUV value is adjusted to be:
Y'=Y+ΔY3
U'=U+ΔU3
V'=V+ΔV3
Wherein Y, U and V are the original YUV values of the sub-region corresponding to the target coding element, ΔY3, ΔU3 and ΔV3 are the third color increment, and |ΔY3|, |ΔU3| and |ΔV3| are each less than or equal to 10.
Through the above adjustment, the sub-region corresponding to the target coding element is ensured to have a distinguishable color change in the RGB or YUV color space, while the adjustment amplitude is controlled within 10 units, avoiding an excessive visual impact on the original pattern.
In the embodiment, firstly, the color attribute of the sub-region is adjusted by acquiring a third color increment corresponding to the target coding element, and secondly, the color increment adjustment is performed on the sub-region corresponding to the target coding element in the first target region according to the third color increment, so that the colors of the sub-region corresponding to the target coding element in the second target region and the sub-region which is not adjusted are different. The color increment adjustment is only performed on the subarea corresponding to the target coding element, but not the subareas corresponding to the first coding element and the second coding element, so that the visual influence on the original pattern can be further reduced, and the concealment is improved.
Referring to fig. 9, a flowchart of encrypting a first encoding matrix is provided in an embodiment of the present disclosure. As shown in fig. 9, the method of the embodiment of the present disclosure may include the following steps S402 to S406, and the steps S402 to S406 may be the refinement step of step S104 of the embodiment shown in fig. 2.
S402, generating initial data according to the identification information;
S404, encrypting the initial data according to a preset secret key to obtain encrypted data;
S406, generating a first coding matrix according to the encrypted data.
Specifically, the initial data related to the present embodiment refers to the original data generated from the identification information for the subsequent encryption processing. The generation of the initial data may include formatting, encoding or other preprocessing of the identification information to ensure that it is suitable for encryption processing. The encrypted data refers to data obtained after the initial data is processed by an encryption algorithm, and has tamper resistance and confidentiality.
Regarding the process of generating the initial data from the identification information, in some possible implementations, the initial data may be generated by formatting the identification information. For example, the identification information may be converted into a character string or binary data in a specific format. In some possible implementations, the initial data may be generated by encoding the identification information. For example, base64 encoding, ASCII encoding, or other encoding means may be used to convert the identification information into the initial data. In some possible implementations, the initial data may be generated by hashing the identification information. For example, the identification information may be hashed using MD5, SHA-1, or other hashing algorithm to generate the initial data.
Regarding the process of encrypting the initial data according to the preset key to obtain the encrypted data, in some possible implementations, the initial data may be encrypted using a symmetric encryption algorithm. For example, the initial data may be encrypted using AES, DES, or other symmetric encryption algorithms in combination with a preset key to generate encrypted data. In some possible implementations, the initial data may be encrypted using an asymmetric encryption algorithm. For example, the initial data may be encrypted using RSA, ECC, or other asymmetric encryption algorithms in combination with a preset key to generate encrypted data. In some possible implementations, the initial data may be encrypted using a hybrid encryption approach. For example, the initial data may be encrypted by combining a symmetric encryption algorithm and an asymmetric encryption algorithm to generate encrypted data.
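As a self-contained sketch of the encryption step, the function below XORs the initial data with a SHA-256-derived keystream as a stand-in for AES, so the example runs with only the standard library; a real implementation would use an actual AES primitive (e.g. from a cryptography library) with the preset key:

```python
import hashlib

def keystream_encrypt(data: bytes, key: bytes) -> bytes:
    """Stand-in cipher: XOR the data with a SHA-256-derived keystream.
    XOR is its own inverse, so the same call with the same key decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

ciphertext = keystream_encrypt(b"initial data", b"preset key")
```

Because the keystream depends only on the key, applying the same function with the same key recovers the initial data, mirroring symmetric decryption.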
Regarding the process of generating the first encoding matrix from the encrypted data, in some possible implementations, the first encoding matrix may be generated by converting the encrypted data into binary data. For example, each byte of encrypted data may be converted into 8-bit binary data and arranged in a specific rule to generate a first encoding matrix. In some possible implementations, the first encoding matrix may be generated by redundantly encoding the encrypted data. For example, the encrypted data may be encoded using a hamming code, a Reed-solomon (RS) code, or other redundant encoding scheme to generate the first encoding matrix. In some possible implementations, the first encoding matrix may be generated by matrix transforming the encrypted data. For example, the encrypted data may be processed using matrix multiplication, matrix transposition, or other matrix transformation to generate the first encoding matrix.
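The binary-conversion implementation can be sketched as follows; the 2×8 size, the MSB-first bit order and the sample ciphertext are illustrative assumptions:

```python
def bytes_to_bit_matrix(data: bytes, rows: int, cols: int):
    """Unpack each byte into 8 bits (MSB first) and lay the bits out
    row-major into a rows x cols coding matrix."""
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    assert len(bits) >= rows * cols, "not enough bits for the requested size"
    return [bits[r * cols:(r + 1) * cols] for r in range(rows)]

encrypted = bytes.fromhex("a5f0")  # stand-in for real encrypted data
coding_matrix = bytes_to_bit_matrix(encrypted, 2, 8)
```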
Illustratively, the encryption process may be represented as the following relationship:
Cencoded=AES(Iinput,Kkey)
Where Cencoded denotes the encrypted data, Iinput denotes the initial data, Kkey denotes the preset key, and AES() denotes the AES encryption algorithm function.
In the embodiment, initial data is first generated according to the identification information to provide a basis for subsequent encryption processing; the initial data is then encrypted according to a preset secret key to generate encrypted data, ensuring the tamper resistance and confidentiality of the data; finally, a first coding matrix is generated according to the encrypted data to provide a carrier for embedding the identification information. The encryption processing makes the first coding matrix tamper-resistant and not easy to crack, and provides a reliable data basis for the subsequent anti-channel conflict verification process.
In an embodiment, further refining step S402 of the embodiment shown in fig. 9, the following steps may be included:
acquiring random identification information corresponding to a target object;
and generating initial data according to the identification information, the random identification information and the preset secret key.
Specifically, the random identification information related to the present embodiment refers to randomly generated data associated with the target item, for enhancing the complexity and security of the initial data. The random identification information may be a random number, a random string, or other randomly generated data, which may be generated based on a time stamp, hardware information, or other random source. The initial data refers to original data generated according to the identification information, the random identification information and the preset key, and is used for subsequent encryption processing.
With respect to the process of obtaining random identification information corresponding to the target item, in some possible implementations, the random identification information may be generated by a random number generator. For example, the random identification information may be generated using a pseudo-random number generator or a true random number generator. In some possible implementations, the random identification information may be generated by hardware information. For example, the random identification information may be generated based on a hardware serial number, a MAC address, or other hardware information of the target item. In some possible implementations, the random identification information may be generated by a timestamp. For example, random identification information may be generated based on the current timestamp to ensure its uniqueness and randomness.
Regarding the process of generating the initial data from the identification information, the random identification information, and the preset key, in some possible implementations, the initial data may be generated by concatenating the identification information, the random identification information, and the preset key. For example, the identification information, the random identification information, and the preset key may be spliced into a character string or binary data in a specific order, generating the initial data. In some possible implementations, the initial data may be generated by hashing the identification information, the random identification information, and the preset key. For example, the identification information, the random identification information, and the preset key may be hashed using MD5, SHA-1, or other hashing algorithm to generate the initial data. In some possible implementations, the initial data may be generated by encrypting the identification information, the random identification information, and the preset key. For example, the identification information, the random identification information, and the preset key may be encrypted using AES, DES, or other encryption algorithms to generate the initial data. In some possible implementations, the initial data may be generated by weighted summing the identification information, the random identification information, and the preset key. For example, weight values may be respectively assigned to the identification information, the random identification information, and the preset key, and the three may be weighted and summed according to the weight values to generate the initial data. The weight value may be a fixed value or a dynamic value, the fixed value may be determined based on a preset rule, and the dynamic value may be determined based on an attribute of the target item, a time stamp, or other dynamic factors.
Illustratively, the process of generating the initial data may be expressed as the following relationship:
Iinput=f(Iproduct,Kkey,Irandom)
Where Iinput denotes the initial data, f() denotes the function that generates the initial data, Iproduct denotes the identification information of the target commodity, Kkey denotes the preset key, and Irandom denotes the random identification information.
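A hash-based instance of the relation above might look like the following; SHA-256 and the concatenation order are illustrative choices, not mandated by this embodiment:

```python
import hashlib

def make_initial_data(identification: bytes, random_id: bytes, key: bytes) -> bytes:
    """Combine the identification information, the random identification
    information and the preset key into fixed-length initial data."""
    return hashlib.sha256(identification + random_id + key).digest()

initial = make_initial_data(b"item-0001", b"\x12\x34\x56\x78", b"preset key")
```

Because the hash output is fixed-length, the initial data has a uniform size regardless of how long the identification information is.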
In the embodiment, the method provides additional randomness and complexity for generating the initial data by acquiring the random identification information corresponding to the target object, and generates the initial data according to the identification information, the random identification information and the preset key, thereby ensuring the uniqueness and the safety of the initial data. The introduction of the random identification information enhances the anti-cracking capability of the initial data, and the use of the preset key further improves the safety of the data, thereby providing a reliable basis for the subsequent encryption processing.
Referring to fig. 10, a schematic flow chart of a redundancy encoding process is provided in the embodiment of the present disclosure. As shown in fig. 10, the method of the embodiment of the present disclosure may include the following steps S502 to S506, and the steps S502 to S506 may be refinement steps of step S406 of the embodiment shown in fig. 9.
S502, generating a second coding matrix according to the encrypted data;
S504, performing redundancy coding processing on the second coding matrix to obtain a third coding matrix;
S506, extracting the third coding matrix based on the preset first size to obtain a first coding matrix.
Specifically, the second coding matrix according to the present embodiment refers to a coding matrix generated from encrypted data for subsequent redundant coding processing. The third coding matrix refers to a coding matrix obtained by performing redundancy coding processing on the second coding matrix, and has error correction capability and anti-interference performance. The first coding matrix refers to a final coding matrix obtained by extracting the third coding matrix based on a preset first size, and is used for being embedded in the target pattern.
Regarding the process of generating the second encoding matrix from the encrypted data, in some possible implementations, the second encoding matrix may be generated by converting the encrypted data into binary data. For example, each byte of encrypted data may be converted into 8-bit binary data and arranged in a specific rule to generate a second encoding matrix. In some possible implementations, the second encoding matrix may be generated by matrix transforming the encrypted data. For example, the encrypted data may be processed using matrix multiplication, matrix transposition, or other matrix transformation to generate the second encoding matrix. In some possible implementations, the second encoding matrix may be generated by performing a blocking process on the encrypted data. For example, the encrypted data may be divided into a plurality of data blocks, and each data block is converted into a matrix form, generating the second encoding matrix.
Regarding the process of performing redundancy encoding on the second encoding matrix to obtain the third encoding matrix, in some possible implementations, the second encoding matrix may be subjected to redundancy encoding using hamming codes. For example, a third encoding matrix with error correction capability may be generated by adding check bits in the second encoding matrix. In some possible implementations, the second coding matrix may be redundantly encoded using RS codes. For example, a third encoding matrix with interference immunity may be generated by adding redundant data in the second encoding matrix. In some possible implementations, the second encoding matrix may be redundantly encoded using convolutional codes. For example, the third encoding matrix having error correction capability may be generated by performing a convolution operation on the second encoding matrix.
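As a minimal illustration of the Hamming-code option, the sketch below encodes 4 data bits into a Hamming(7,4) codeword with the standard parity-bit placement; production systems would more likely use the RS codes also mentioned above for stronger burst-error resilience:

```python
def hamming74_encode(nibble):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming(7,4)
    codeword [p1, p2, d1, p3, d2, d3, d4] with 3 parity bits."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

codeword = hamming74_encode([1, 0, 1, 1])
```

Applying this per 4-bit group expands the second coding matrix into a larger third coding matrix that can correct any single-bit error per codeword.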
Regarding the process of extracting the third coding matrix based on the preset first size to obtain the first coding matrix, in some possible implementations, the first coding matrix may be generated by clipping the third coding matrix. For example, a submatrix meeting the size requirement may be cut out from the third coding matrix according to a preset first size, and used as the first coding matrix. In some possible implementations, the first encoding matrix may be generated by compressing the third encoding matrix. For example, the third coding matrix may be compressed according to a preset first size to generate a first coding matrix meeting the size requirement. The predetermined first size may be 13×13, and the size of the third coding matrix may be different according to the specific application scenario and the coding requirement, for example, 16×8, 15×15, 20×20, or 25×25, etc., which is not limited.
Illustratively, generating the second encoding matrix from the encrypted data may be represented as the following relationship:
Mencode=reshape(Cencoded,16,8)
Where Mencode denotes the second encoding matrix, Cencoded denotes the encrypted data, 16 and 8 are the specified sizes, and reshape() denotes a matrix reshaping function. It follows that the size of the second coding matrix is 16×8.
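The relation Mencode = reshape(Cencoded, 16, 8) can be sketched with NumPy: 16 bytes of encrypted data unpack into 128 bits, which are arranged row-major into a 16×8 binary matrix. The sample ciphertext bytes below are placeholders, not real encrypted data.

```python
import numpy as np

# Sketch of Mencode = reshape(Cencoded, 16, 8): 16 bytes of encrypted
# data become 128 bits arranged row-major into a 16x8 binary matrix.
c_encoded = bytes(range(16))                   # placeholder for encrypted data

bits = np.unpackbits(np.frombuffer(c_encoded, dtype=np.uint8))  # MSB-first bits
m_encode = bits.reshape(16, 8)                 # second encoding matrix

assert m_encode.shape == (16, 8)
assert m_encode[1].tolist() == [0, 0, 0, 0, 0, 0, 0, 1]   # byte 0x01
```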
And performing redundancy coding on the second coding matrix to obtain a third coding matrix, wherein the third coding matrix can be expressed as the following relation:
Mrs=rs(Mencode,k,n)
Wherein Mrs refers to the third coding matrix, Mencode refers to the second coding matrix, k is the data dimension before redundancy coding, n is the data dimension after redundancy coding, and rs() is the redundancy coding processing function.
Extracting the third coding matrix based on a preset first size, and obtaining a first coding matrix which can be expressed as the following relation:
Munit=extract(Mrs,13,13)
Wherein Mrs denotes the third coding matrix, 13 and 13 denote the preset first size, Munit denotes the first coding matrix, and extract() denotes an extraction function. It follows that the size of the first coding matrix is 13×13.
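The clipping implementation of Munit = extract(Mrs, 13, 13) can be sketched as a submatrix slice. Taking the top-left corner is an assumption made for the example; the disclosure only requires that the result meet the preset first size. A 15×15 placeholder stands in for the third encoding matrix.

```python
import numpy as np

# Sketch of Munit = extract(Mrs, 13, 13): cut a 13x13 submatrix out of
# a larger redundantly-encoded matrix (here 15x15, one of the example
# third-matrix sizes). The top-left-corner choice is an assumption.
m_rs = np.arange(15 * 15).reshape(15, 15)      # placeholder third matrix

def extract(m, rows, cols):
    return m[:rows, :cols]

m_unit = extract(m_rs, 13, 13)                 # first encoding matrix
assert m_unit.shape == (13, 13)
```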
In this embodiment, a second coding matrix is first generated from the encrypted data, providing a basis for the redundancy coding process; next, redundancy coding is performed on the second coding matrix to generate a third coding matrix, enhancing the error correction capability and anti-interference performance of the coding matrix; finally, the third coding matrix is extracted based on the preset first size to obtain the first coding matrix, so that the size of the coding matrix meets the requirements. The redundancy coding process gives the first coding matrix stronger anti-interference performance and error correction capability, and provides a reliable data basis for the subsequent anti-channel-conflict verification process.
In an embodiment, step S506 of the embodiment shown in fig. 10 may be further refined to include the following steps:
Extracting the third coding matrix based on a preset first size to obtain a fourth coding matrix;
Acquiring a first positioning coding matrix with a preset second size and a second positioning coding matrix with a preset third size, wherein the preset second size and the preset third size are different and smaller than the preset first size;
Embedding a first positioning coding matrix and a second positioning coding matrix into the fourth coding matrix to obtain a first coding matrix, wherein the first positioning coding matrix and the second positioning coding matrix are distributed diagonally in the first coding matrix.
Specifically, the fourth coding matrix according to this embodiment refers to a coding matrix obtained by extracting the third coding matrix based on a preset first size, where the size of the coding matrix is consistent with the preset first size. The first and second positioning coding matrices refer to coding matrices having a preset second size and a preset third size, respectively, for providing positioning information in the fourth coding matrix for a subsequent decoding process. The preset second size and the preset third size are different and are smaller than the preset first size, so that the first positioning coding matrix and the second positioning coding matrix can be embedded into the fourth coding matrix, and meanwhile main body information of the fourth coding matrix is not influenced. Illustratively, the above-mentioned preset first size may be 13×13, the preset second size may be 5×5, and the preset third size may be 3×3.
The first coding matrix refers to a final coding matrix obtained by embedding a first positioning coding matrix and a second positioning coding matrix in a fourth coding matrix, the size of the final coding matrix is consistent with a preset first size, and the first positioning coding matrix and the second positioning coding matrix are distributed diagonally in the first coding matrix so as to facilitate rapid positioning in a subsequent decoding process.
Regarding the process of extracting the third coding matrix based on the preset first size to obtain the fourth coding matrix, in some possible implementations, the fourth coding matrix may be generated by clipping the third coding matrix. For example, a sub-matrix meeting the size requirement can be cut out from the third coding matrix according to the preset first size, and the sub-matrix is used as a fourth coding matrix. In some possible implementations, the fourth encoding matrix may be generated by compressing the third encoding matrix. For example, the third coding matrix may be compressed according to a preset first size to generate a fourth coding matrix meeting the size requirement.
Regarding the process of obtaining the first positioning code matrix of the preset second size and the second positioning code matrix of the preset third size, in some possible implementations, the first positioning code matrix and the second positioning code matrix may be generated by preset rules. For example, the preset rule may specify that the first and second positioning coding matrices are each composed of a specific binary code to ensure that they are unique and identifiable.
Regarding the process of embedding the first and second positioning coding matrices in the fourth coding matrix to obtain the first coding matrix, in some possible implementations, the first and second positioning coding matrices may be embedded in the fourth coding matrix by way of matrix replacement. For example, elements at specific positions in the fourth encoding matrix may be replaced with elements of the first positioning encoding matrix and the second positioning encoding matrix to generate the first encoding matrix.
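The matrix-replacement embedding can be sketched as follows, using the example sizes given above (13×13 fourth matrix, 5×5 first locator at the top-left, 3×3 second locator at the bottom-right, matching the diagonal layout). The all-ones locator patterns are placeholders, not real positioning codes.

```python
import numpy as np

# Sketch of embedding the positioning matrices by matrix replacement:
# a 5x5 block at the top-left and a 3x3 block at the bottom-right of a
# 13x13 fourth matrix. The all-ones patterns stand in for real locators.
m4 = np.zeros((13, 13), dtype=np.uint8)        # fourth encoding matrix
loc1 = np.ones((5, 5), dtype=np.uint8)         # first positioning matrix (5x5)
loc2 = np.ones((3, 3), dtype=np.uint8)         # second positioning matrix (3x3)

m1 = m4.copy()
m1[:5, :5] = loc1                              # replace top-left elements
m1[-3:, -3:] = loc2                            # replace bottom-right elements

assert m1.shape == (13, 13)
assert (m1[:5, :5] == 1).all() and (m1[-3:, -3:] == 1).all()
```

The diagonal placement leaves the remaining cells for the main-body coding information while giving a decoder two distinguishable anchors.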
In this embodiment, the fourth coding matrix is first obtained by extracting the third coding matrix based on the preset first size, so that the size of the coding matrix meets the requirements; next, the first positioning coding matrix of the preset second size and the second positioning coding matrix of the preset third size are acquired, providing positioning information for the fourth coding matrix; finally, the first and second positioning coding matrices are embedded in the fourth coding matrix to obtain the first coding matrix, ensuring that the first coding matrix carries definite positioning information. The diagonal distribution of the two positioning coding matrices in the first coding matrix facilitates rapid positioning in the subsequent decoding process, improving decoding efficiency and accuracy.
In an embodiment, the step of obtaining the first positioning code matrix of the preset second size and the second positioning code matrix of the preset third size in the above embodiment may be further refined to include the following steps:
Acquiring a third positioning coding matrix with a preset second size and a fourth positioning coding matrix with a preset third size;
Acquiring first description information and second description information of a target object;
Adding preset first positioning information in a fixed coding region of a third positioning coding matrix, and adding first description information in a non-fixed coding region of the third positioning coding matrix to obtain a first positioning coding matrix with a preset second size;
Adding preset second positioning information in a fixed coding region of the fourth positioning coding matrix, and adding second description information in a non-fixed coding region of the fourth positioning coding matrix to obtain a second positioning coding matrix with a preset third size.
Specifically, the third positioning coding matrix and the fourth positioning coding matrix related to this embodiment refer to initial coding matrices having a preset second size and a preset third size, respectively, for subsequent positioning information addition. The first descriptive information and the second descriptive information of the target item refer to descriptive information associated with the target item. The preset first positioning information and the preset second positioning information refer to fixed information for identifying the positioning coding matrix, such as a specific binary code or symbol, respectively, so as to ensure that the positioning coding matrix has uniqueness and identifiability. The fixed coding region refers to a region for carrying preset positioning information in the positioning coding matrix, and the non-fixed coding region refers to a region for carrying description information in the positioning coding matrix.
Regarding the process of obtaining the third positioning code matrix of the preset second size and the fourth positioning code matrix of the preset third size, in some possible implementations, the third positioning code matrix and the fourth positioning code matrix may be generated by preset rules. For example, the preset rule may specify that the third positioning coding matrix and the fourth positioning coding matrix are each composed of a specific binary code. In some possible implementations, the third and fourth positioning coding matrices may be generated by means of random generation. For example, the third and fourth positioning coding matrices may be generated using a random number generator.
With respect to the process of obtaining the first descriptive information and the second descriptive information of the target item, in some possible implementations, the first descriptive information and the second descriptive information may be obtained by querying a database of the target item. For example, the first descriptive information may be a production date of the target item and the second descriptive information may be a production lot of the target item. In some possible implementations, the first description information and the second description information may be acquired by way of user input. For example, the user may input specific contents of the first description information and the second description information through the interactive interface.
Regarding the process of adding the preset first positioning information in the fixed coding region of the third positioning coding matrix and adding the first description information in the non-fixed coding region of the third positioning coding matrix to obtain the first positioning coding matrix with the preset second size, in some possible implementations, the preset first positioning information may be added in the fixed coding region of the third positioning coding matrix by a matrix replacement manner. For example, elements of the fixed coding region in the third positioning coding matrix may be replaced with elements of the preset first positioning information.
Regarding the process of adding the preset second positioning information in the fixed coding region of the fourth positioning coding matrix and adding the second description information in the non-fixed coding region of the fourth positioning coding matrix to obtain the second positioning coding matrix with the preset third size, in some possible implementations, the preset second positioning information may be added in the fixed coding region of the fourth positioning coding matrix by a matrix replacement manner. For example, elements of the fixed coding region in the fourth positioning coding matrix may be replaced with elements of the preset second positioning information.
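The fixed-region/non-fixed-region filling described above can be sketched for a 5×5 positioning matrix. Treating the border cells as the fixed coding region and the 3×3 interior as the non-fixed region is an assumption made for illustration; the disclosure does not pin down the exact layout.

```python
import numpy as np

# Sketch of filling a 5x5 positioning matrix: border cells as the fixed
# coding region (preset positioning information), the 3x3 interior as
# the non-fixed region (description bits). Layout is an assumption.
loc = np.zeros((5, 5), dtype=np.uint8)

loc[0, :] = loc[-1, :] = loc[:, 0] = loc[:, -1] = 1   # fixed region: border pattern

desc_bits = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=np.uint8)
loc[1:4, 1:4] = desc_bits.reshape(3, 3)               # non-fixed region: description info

assert (loc[0, :] == 1).all() and (loc[:, 0] == 1).all()
assert loc[1:4, 1:4].flatten().tolist() == desc_bits.tolist()
```

The same procedure applies to the 3×3 second positioning matrix with a smaller interior.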
For ease of understanding of this embodiment, please refer to fig. 11, which is a schematic diagram of a positioning coding matrix according to an embodiment of the present disclosure. The preset second size is 5×5; the area corresponding to the first positioning coding matrix is distributed at the upper left corner of the second target area, with its fixed coding area carrying the preset first positioning information and its non-fixed coding area carrying the first description information. The preset third size is 3×3; the area corresponding to the second positioning coding matrix is distributed at the lower right corner of the second target area, with its fixed coding area carrying the preset second positioning information and its non-fixed coding area carrying the second description information.
In this embodiment, a basis is first provided for adding positioning information by acquiring a third positioning coding matrix of the preset second size and a fourth positioning coding matrix of the preset third size; next, the first and second description information of the target object are acquired, providing additional descriptive information for the positioning coding matrices; finally, the preset first positioning information is added in the fixed coding region of the third positioning coding matrix and the first description information in its non-fixed coding region to obtain the first positioning coding matrix of the preset second size, while the preset second positioning information is added in the fixed coding region of the fourth positioning coding matrix and the second description information in its non-fixed coding region to obtain the second positioning coding matrix of the preset third size. The preset positioning information in the fixed coding regions ensures the uniqueness and identifiability of the positioning coding matrices, and the descriptive information in the non-fixed coding regions enhances their complexity and security, providing a reliable basis for the subsequent decoding process.
Referring to fig. 12, a flowchart of a pattern decoding method is provided in the embodiment of the present disclosure. As shown in fig. 12, the method of the embodiment of the present specification may include the following steps S602 to S608.
S602, acquiring a shooting image of a target object, wherein the shooting image comprises a target pattern, the target pattern comprises a second target area corresponding to a first coding matrix, the first coding matrix comprises a first coding element and a second coding element with different binary logic states, and the colors of a subarea corresponding to the first coding element and a subarea corresponding to the second coding element in the second target area are different.
Specifically, the captured image of the target object according to this embodiment refers to an image obtained by photographing the target object with a user terminal, which may be a smart phone, a tablet computer, or another portable device. For the definitions of terms such as the target pattern, the first encoding matrix, the second target area, the binary logic state, the first encoding element, and the second encoding element, please refer to the embodiments of the pattern encoding method, which are not repeated here.
With respect to the process of acquiring a captured image of a target item, in some possible implementations, a captured image of a target item may be acquired by receiving a captured image uploaded by a user. For example, a user may upload a captured image to a server through an interactive interface, and the server receives and stores the captured image. In some possible implementations, the captured image of the target item may be obtained by querying a pre-stored database or file. For example, a database or file stored in advance may be queried to acquire a captured image of the target object.
S604, binarizing the shot image to obtain a binarized image, determining a third target area corresponding to the second target area from the binarized image, wherein the gray values of the subareas corresponding to the first coding elements and the subareas corresponding to the second coding elements in the third target area are different.
Specifically, the binarization processing related to the present embodiment refers to converting the captured image into an image containing only two gray values, such as black (gray value 0) and white (gray value 255); the binarized image refers to the image after binarization processing, which facilitates subsequent image processing and analysis. The third target region refers to the region in the binarized image corresponding to the second target region; its size and shape are identical to those of the second target region, but its gray-value attribute is changed. The difference in gray value between the sub-region corresponding to the first coding element and the sub-region corresponding to the second coding element in the third target region means that, within the third target region, the two kinds of sub-regions differ in gray-value attribute, such as black versus white, which provides a basic feature for the subsequent decoding process.
Regarding the process of binarizing the captured image to obtain a binarized image, in some possible implementations, a pre-trained semantic segmentation model may be invoked to perform binarization on the pre-processed image to obtain a binarized image. In some possible implementations, the captured image may be binarized by an image processing algorithm. For example, the captured image may be binarized using a threshold segmentation algorithm to generate a binarized image. In some possible implementations, the captured image may be binarized by invoking an image processing library or software. For example, a binarization processing function in the OpenCV library may be called to perform binarization processing on the captured image, and a binarized image may be generated.
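The threshold-segmentation alternative mentioned above can be sketched in a few lines: pixels above a global threshold map to white (255), the rest to black (0). The synthetic 4×4 "image" and the threshold value are illustrative only; a real pipeline would apply this (or an OpenCV threshold call, or the segmentation model) to the full preprocessed image.

```python
import numpy as np

# Sketch of global-threshold binarization as an alternative to the
# semantic-segmentation model. Synthetic data and threshold are
# placeholders for illustration.
image = np.array([[ 10, 200,  30, 220],
                  [240,  20, 210,  40],
                  [ 15, 230,  25, 205],
                  [250,  35, 215,  45]], dtype=np.uint8)

threshold = 128
binary = np.where(image > threshold, 255, 0).astype(np.uint8)

assert set(np.unique(binary).tolist()) <= {0, 255}
assert binary[0, 0] == 0 and binary[0, 1] == 255
```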
With respect to the process of determining the third target region corresponding to the second target region from the binarized image, in some possible implementations, a positioning encoding matrix may be used to determine the third target region corresponding to the second target region from the binarized image. In some possible implementations, the third target region corresponding to the second target region may be automatically identified in the binarized image by an image processing algorithm. For example, an edge detection algorithm or region growing algorithm may be used to identify the third target region in the binarized image. In some possible implementations, one region in the binarized image may be manually selected by a user as the third target region. For example, the user may frame an area in the binarized image through the interactive interface, the size and shape of the area being set by the user according to the size and shape of the second target area.
S606, acquiring a first coding matrix according to the third target area.
In particular, regarding the process of acquiring the first coding matrix according to the third target area, in some possible implementations, a plurality of coding elements may be determined based on the pixel gray value statistics, and the first coding matrix may be determined according to the plurality of coding elements. In some possible implementations, the first encoding matrix may be extracted in the third target region by an image processing algorithm. For example, a matrix extraction algorithm may be used to extract the first encoding matrix in the third target region. In some possible implementations, the first encoding matrix may be extracted in the third target region by invoking an image processing library or software. For example, a matrix extraction function in the OpenCV library may be invoked to extract the first encoding matrix in the third target area.
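The pixel-gray-value-statistics approach can be sketched as follows: split the third target region into a grid of cells, average each cell's gray values, and threshold the mean to recover one coding element per cell. The 2×2 grid and the cell size are illustrative; a real region would be divided into 13×13 cells to match the first encoding matrix.

```python
import numpy as np

# Sketch of recovering coding elements from the third target region by
# per-cell gray-value statistics. Grid size here is illustrative (2x2);
# a real region would use 13x13 cells.
region = np.block([[np.full((4, 4), 250), np.zeros((4, 4))],
                   [np.zeros((4, 4)),     np.full((4, 4), 240)]]).astype(np.uint8)

def read_matrix(img, rows, cols):
    h, w = img.shape[0] // rows, img.shape[1] // cols
    return np.array([[1 if img[r*h:(r+1)*h, c*w:(c+1)*w].mean() > 127 else 0
                      for c in range(cols)] for r in range(rows)])

m = read_matrix(region, 2, 2)
assert m.tolist() == [[1, 0], [0, 1]]
```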
S608, decrypting the first coding matrix to obtain the identification information corresponding to the target object.
In particular, regarding the process of decrypting the first encoding matrix to obtain the identification information corresponding to the target object, in some possible implementations, the first encoding matrix may be decrypted using a symmetric decryption algorithm. For example, AES, DES or other symmetric decryption algorithms may be used to decrypt the first encoding matrix with a preset key to obtain the identification information corresponding to the target article. In some possible implementations, the first encoding matrix may be decrypted using an asymmetric decryption algorithm. For example, RSA, ECC, or other asymmetric decryption algorithms may be used to decrypt the first encoding matrix with a preset key to obtain the identification information corresponding to the target article. In some possible implementations, the first encoding matrix may be decrypted using a hybrid decryption approach. For example, the first encoding matrix may be decrypted by combining a symmetric decryption algorithm and an asymmetric decryption algorithm, so as to obtain the identification information corresponding to the target object.
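The decryption round trip can be sketched with a deliberately simplified stand-in cipher. A real implementation would use AES, RSA, or another algorithm from a cryptographic library; the keyed XOR stream below (key expanded via SHA-256) is not the disclosure's cipher and is shown only to illustrate that decrypting the data read out of the first encoding matrix with the preset key recovers the identification information. The key and identification string are placeholders.

```python
import hashlib

# Illustrative stand-in for the decryption step -- NOT AES/RSA.
# A keyed XOR stream (SHA-256-expanded) shows the round trip only.
def xor_stream(data: bytes, key: bytes) -> bytes:
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(b ^ s for b, s in zip(data, stream))

key = b"preset-key"                   # placeholder preset key
identification = b"ITEM-0001"         # placeholder identification info
ciphertext = xor_stream(identification, key)

# XOR is its own inverse, so applying the stream again decrypts.
assert xor_stream(ciphertext, key) == identification
```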
In this embodiment, a captured image of the target object is acquired, binarization processing is performed on the captured image, a third target area corresponding to the second target area is determined from the binarized image, a first encoding matrix is obtained according to the third target area, and finally the first encoding matrix is decrypted to obtain the identification information corresponding to the target object. It can be understood that, through color increment adjustment and encryption processing, the identification information has high concealment within the target pattern and is not easily perceived or tampered with by the naked eye, while binarization processing and matrix extraction ensure the accuracy and efficiency of the decoding process. In addition, this embodiment does not depend on professional equipment: capturing can be completed with an ordinary photographing device on the user side, and decoding is completed on the server side, reducing the cost and complexity of use. Through the combination of the above steps, the problems in the related art of insufficient concealment of identification information or dependence on professional equipment are effectively solved, and reliable data support is provided for anti-channel-conflict verification.
Referring to fig. 13, a flowchart of a binarization process is provided in the embodiment of the present disclosure. As shown in fig. 13, the method according to the embodiment of the present disclosure may include the following steps S702 to S704, and the steps S702 to S704 may be used as refinement steps for "binarizing the captured image to obtain a binarized image" in step S604 shown in fig. 12.
S702, preprocessing a shot image to obtain a preprocessed image;
S704, invoking a pre-trained semantic segmentation model to perform binarization processing on the preprocessed image to obtain a binarized image.
Specifically, the preprocessing related to this embodiment refers to performing a series of processing operations, such as denoising, graying, and resizing, on the captured image to improve image quality and facilitate subsequent binarization processing. The preprocessed image refers to the image after preprocessing, whose quality is better than that of the original captured image. The semantic segmentation model refers to a model obtained through machine learning training, used to classify images at the pixel level so as to realize binarization processing. The binarized image refers to the image obtained by processing with the semantic segmentation model; its pixel values comprise only two gray values, such as black (gray value 0) and white (gray value 255), to facilitate subsequent image processing and analysis. It can be understood that, because the colors of the sub-regions corresponding to the first coding elements and the sub-regions corresponding to the second coding elements in the target pattern are different, this color difference provides an explicit classification basis for the semantic segmentation model, enabling it to accurately identify and distinguish the different coding elements in the target pattern and thereby generate a high-quality binarized image.
Regarding the process of preprocessing the captured image to obtain a preprocessed image, in some possible implementations, the captured image may be denoised by an image processing algorithm. For example, the captured image may be denoised using gaussian filtering, median filtering, or other denoising algorithms to remove noise interference from the image. In some possible implementations, the captured image may be converted to a grayscale image by an image processing algorithm. For example, a weighted average or other graying algorithm may be used to convert the captured image to a gray scale image to simplify the image processing. In some possible implementations, the captured image may be resized by an image processing algorithm. For example, the captured image may be resized using bilinear interpolation, bicubic interpolation, or other interpolation algorithms to ensure that its dimensions meet the requirements of subsequent processing.
Regarding the process of invoking a pre-trained semantic segmentation model to binarize the preprocessed image to obtain a binarized image, in some possible implementations, the preprocessed image may be processed by loading a pre-trained semantic segmentation model. For example, a deep-learning-based semantic segmentation model, such as U-Net, DeepLab, or another semantic segmentation model, may be loaded to classify the preprocessed image at the pixel level and generate the binarized image.
In some possible implementations, the training process of the semantic segmentation model may be realized as follows: first, a large amount of image data containing target patterns is collected and each image is labeled, the labeling content including the position and category information of the target pattern; second, the labeled image data is divided into a training set, a validation set, and a test set for training, validating, and testing the model; third, a suitable semantic segmentation model architecture, such as U-Net, DeepLab, or another architecture, is selected and the model parameters are initialized; then, the model is trained with the training set, and the model parameters are optimized through a back-propagation algorithm to minimize the loss function; finally, the trained model is validated and tested with the validation set and the test set, the model's performance is evaluated, and the model is optimized according to the evaluation results.
Illustratively, the binarization process can be expressed as the following relation:
Ibinary=binarize(Iresized)
Where Ibinary denotes the binarized image, Iresized denotes the preprocessed image, and binarize() denotes the binarization processing function of the semantic segmentation model.
In this embodiment, the captured image is first preprocessed to improve image quality and facilitate subsequent binarization processing; then a pre-trained semantic segmentation model is invoked to binarize the preprocessed image, generating a binarized image and providing a reliable basis for the subsequent decoding process. The semantic segmentation model ensures the accuracy and efficiency of the binarization processing through pixel-level classification, its generalization capability and robustness are enhanced through deep learning, and it provides reliable data support for the subsequent anti-channel-conflict verification process.
In an embodiment, further refinement of step S702 of the embodiment shown in fig. 13 may include the following steps:
Denoising the shot image to obtain a denoised image;
converting the denoising image into a gray level image;
and carrying out size adjustment on the gray level image according to a preset fourth size to obtain a preprocessed image.
Specifically, the denoising process according to the present embodiment refers to removing noise interference in a captured image by an image processing algorithm to improve image quality. The denoising image refers to an image subjected to denoising treatment, noise interference of the denoising image is obviously reduced, and details of the image are clearer. A grayscale image refers to converting a color image into an image containing only grayscale information, the grayscale value of each pixel of which represents the luminance information of that pixel. The preset fourth size refers to a preset image size for ensuring that the image size meets the requirements of subsequent processing. The preprocessed image refers to an image subjected to noise removal, graying and size adjustment, and has quality superior to that of the original photographed image, so that subsequent binarization processing is facilitated.
Regarding the process of denoising the captured image to obtain a denoised image, in some possible implementations, the captured image may be denoised by an image processing algorithm. For example, a gaussian filter algorithm may be used to denoise the captured image, and each pixel in the image is weighted averaged by a convolution operation to remove noise interference in the image. In some possible implementations, a median filtering algorithm may be used to denoise the captured image, and the median value of the pixel neighborhood is taken to replace the current pixel value to remove salt-and-pepper noise in the image. In some possible implementations, other denoising algorithms, such as bilateral filtering, non-local mean filtering, etc., may be used to denoise the captured image to improve image quality.
Regarding the process of converting the denoised image to a grey scale image, in some possible implementations the denoised image may be converted to a grey scale image by an image processing algorithm. For example, the denoised image may be converted to a grey scale image using a weighted average method, and the grey scale values may be obtained by multiplying the red, green and blue channels of the colour image by different weight coefficients respectively, and summing. In some possible implementations, other graying algorithms, such as maximum value, average value, etc., may be used to convert the denoised image to a gray image. It will be appreciated that conversion to a grey scale image may reduce the complexity of subsequent image processing.
Regarding the process of resizing the gray image according to the preset fourth size to obtain the pre-processed image, in some possible implementations, the gray image may be resized by an image processing algorithm. For example, a bilinear interpolation algorithm may be used to resize the gray image, and the gray value of the target pixel may be obtained by calculating the position of the target pixel in the original image and performing weighted average according to the gray values of four pixels around the target pixel. In some possible implementations, the gray-scale image may be resized using a bicubic interpolation algorithm, and the gray-scale value of the target pixel is obtained by calculating the position of the target pixel in the original image and performing weighted average according to the gray-scale values of sixteen pixels around the target pixel. In some possible implementations, other interpolation algorithms, such as nearest neighbor interpolation, lanczos interpolation, etc., may be used to resize the gray scale image to ensure that its dimensions meet the requirements of subsequent processing.
Illustratively, the gray scale image is resized according to the preset fourth size, and the obtained preprocessed image may be represented as the following relation:
I_resized = resize(I_image, w, h)
Where I_image refers to the gray image, I_resized refers to the pre-processed image, w is the specified width, h is the specified height, w and h together define the preset fourth size, and resize() refers to the resizing function.
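A minimal bilinear-interpolation version of the resize relation above might look like the following; it is an illustrative sketch (small hand-made input, no edge-case tuning), not the patented implementation:

```python
import numpy as np

def resize_bilinear(img: np.ndarray, w: int, h: int) -> np.ndarray:
    """Resize a grayscale image to (h, w): each target pixel is a
    weighted average of the four surrounding source pixels."""
    src_h, src_w = img.shape
    ys = np.linspace(0, src_h - 1, h)
    xs = np.linspace(0, src_w - 1, w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, src_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, src_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    f = img.astype(np.float64)
    top = f[y0][:, x0] * (1 - wx) + f[y0][:, x1] * wx
    bot = f[y1][:, x0] * (1 - wx) + f[y1][:, x1] * wx
    return ((1 - wy) * top + wy * bot).round().astype(np.uint8)

src = np.array([[0, 100], [100, 200]], dtype=np.uint8)
resized = resize_bilinear(src, 3, 3)   # 2x2 upsampled to 3x3
```

The corners of the source survive unchanged and the new center pixel is the average of the four source pixels, which is exactly the four-neighbor weighting the text describes.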
In this embodiment, noise interference in the image is first removed by denoising the shot image, improving image quality; next, the denoised image is converted into a gray image, simplifying the image processing; finally, the gray image is resized according to the preset fourth size, ensuring that the image size meets the requirements of subsequent processing. Through these steps, the preprocessed image is obtained, providing high-quality input data for subsequent binarization and thereby improving the accuracy and efficiency of the binarization processing.
Referring to fig. 14, a flowchart of determining a third target area is provided in the embodiment of the present disclosure. As shown in fig. 14, the method of the embodiment of the present specification may include the following steps S802 to S806, and steps S802 to S806 may be regarded as refinement steps of "determining the third target area corresponding to the second target area from the binarized image" in step S604 shown in fig. 12.
S802, acquiring a first positioning coding matrix with a preset second size and a second positioning coding matrix with a preset third size;
S804, determining a first mask area from the binarized image according to the structural information corresponding to the first coding matrix, the first positioning coding matrix and the second positioning coding matrix;
S806, determining a third target area corresponding to the second target area based on the first mask area.
Specifically, the first positioning coding matrix and the second positioning coding matrix related to this embodiment refer to coding matrices having a preset second size and a preset third size, respectively, for providing positioning information in the binarized image so as to determine the third target area. The preset second size and the preset third size are different and are smaller than the preset first size, so that the first positioning coding matrix and the second positioning coding matrix can be embedded into the first coding matrix, and main body information of the first coding matrix is not influenced. The structure information corresponding to the first coding matrix refers to information such as the size and shape of the first coding matrix and the arrangement rule of the coding elements, and is used for guiding the process of extracting the first coding matrix from the binarized image. The first mask region refers to a region determined according to the first positioning coding matrix and the second positioning coding matrix in the binarized image, and is used for further extracting a third target region. The third target region refers to a region corresponding to the second target region in the binarized image, and the size and shape thereof are identical to those of the second target region, but the gradation value attribute is changed.
Regarding the process of obtaining the first positioning code matrix of the preset second size and the second positioning code matrix of the preset third size, in some possible implementations, the first positioning code matrix and the second positioning code matrix may be generated by preset rules. For example, the preset rule may specify that the first and second positioning coding matrices are each composed of a specific binary code to ensure that they are unique and identifiable. In some possible implementations, the first and second location encoding matrices may be obtained by querying a pre-stored database or file. For example, a pre-stored database or file may be queried to obtain specific contents of the first and second positioning encoding matrices.
Regarding the process of determining the first mask region from the binarized image based on the corresponding structural information of the first encoding matrix, the first positioning encoding matrix, and the second positioning encoding matrix, in some possible implementations, the positions of the first positioning encoding matrix and the second positioning encoding matrix may be identified in the binarized image by an image processing algorithm. For example, a template matching algorithm may be used to search the binarized image for areas that match the first and second positioning coding matrices, determining their locations. In some possible implementations, the locations of the first and second positioning coding matrices may be identified in the binarized image by invoking an image processing library or software. For example, a template matching function in the OpenCV library may be invoked to search the binarized image for areas matching the first and second positioning coding matrices, and determine their locations. In some possible implementations, the first mask region may be determined according to structural information corresponding to the first encoding matrix, in combination with positions of the first positioning encoding matrix and the second positioning encoding matrix. For example, an area may be framed in the binarized image as a first mask area in accordance with the size and shape of the first encoding matrix, in combination with the positions of the first positioning encoding matrix and the second positioning encoding matrix.
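A minimal stand-in for the template-matching step scores every offset by the number of matching cells and keeps the best one; a real system would more likely call OpenCV's `matchTemplate`, and the positioning pattern and image below are hypothetical:

```python
import numpy as np

def locate_pattern(binary: np.ndarray, pattern: np.ndarray):
    """Slide `pattern` over `binary` and return the (row, col) offset
    with the highest count of matching cells -- a toy analogue of
    template matching on a binarized image."""
    bh, bw = binary.shape
    ph, pw = pattern.shape
    best, best_pos = -1, (0, 0)
    for y in range(bh - ph + 1):
        for x in range(bw - pw + 1):
            score = int(np.sum(binary[y:y + ph, x:x + pw] == pattern))
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

# Embed a 2x2 positioning pattern into a 6x6 binarized image at (3, 1).
img = np.zeros((6, 6), dtype=np.uint8)
pat = np.array([[1, 0], [0, 1]], dtype=np.uint8)
img[3:5, 1:3] = pat
pos = locate_pattern(img, pat)
```

Once the offsets of both positioning matrices are found this way, the first mask region can be framed around them using the known size and shape of the first coding matrix.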
Regarding the process of determining a third target region corresponding to the second target region based on the first mask region, in some possible implementations, the third target region may be extracted in the first mask region by an image processing algorithm. For example, an edge detection algorithm or region growing algorithm may be used to identify a third target region in the first mask region. In some possible implementations, the third target region may be extracted in the first mask region by invoking an image processing library or software. For example, an edge detection function or region growing function in the OpenCV library may be invoked to identify a third target region in the first mask region. In some possible implementations, one of the first mask regions may be manually selected by a user as the third target region. For example, the user may frame a region in the first mask region through the interactive interface, the size and shape of the region being set by the user according to the size and shape of the second target region.
Illustratively, the process of determining the first mask region may be expressed as the following relationship:
mask_unit = detect_mask(I_binary, M_unit, P_pattern)
Wherein mask_unit denotes the first mask region, I_binary denotes the binarized image, M_unit denotes the structural information corresponding to the first encoding matrix, P_pattern denotes the first positioning encoding matrix and the second positioning encoding matrix, and detect_mask() denotes the processing function for determining the first mask region.
In this embodiment, positioning information for determining the third target area is first provided by acquiring the first positioning coding matrix of the preset second size and the second positioning coding matrix of the preset third size; next, the first mask area is determined from the binarized image according to the structural information corresponding to the first coding matrix, the first positioning coding matrix and the second positioning coding matrix, providing a basis for extracting the third target area; finally, the third target area corresponding to the second target area is determined based on the first mask area, ensuring its accuracy and completeness. The use of the two positioning coding matrices enhances positioning accuracy, and the determination of the first mask region narrows the search range, improving the extraction efficiency of the third target area and providing a reliable basis for the subsequent decoding process.
In an embodiment, the step S806 of the embodiment shown in fig. 14 is further refined, and may include the following steps:
performing perspective transformation correction on the first mask region to obtain a second mask region;
The second mask region is determined as a third target region corresponding to the second target region.
Specifically, the perspective transformation correction according to the present embodiment refers to performing geometric transformation on the first mask region by using an image processing algorithm to correct image distortion caused by photographing angle or perspective deformation, thereby obtaining the second mask region. The second mask region refers to a region which is subjected to perspective transformation and correction, and the shape and the size of the second mask region are more matched with those of the second target region, so that subsequent decoding processing is facilitated.
Regarding the process of perspective transformation rectifying the first mask region to obtain the second mask region, in some possible implementations, the perspective transformation rectifying the first mask region may be performed by an image processing algorithm. For example, the first mask region may be geometrically transformed using a perspective transformation algorithm, and the second mask region may be generated by calculating four corner coordinates of the first mask region and mapping them to target coordinates. In some possible implementations, perspective transformation rectification may be performed on the first mask region by invoking an image processing library or software. For example, a perspective transformation function in the OpenCV library may be invoked to geometrically transform the first mask region, generating a second mask region. In some possible implementations, perspective transformation correction may be performed on the first mask region by manually inputting the corner coordinates and the target coordinates of the first mask region by a user. For example, the user may input corner coordinates and target coordinates of the first mask region through the interactive interface, and generate the second mask region.
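The correction rests on a 3×3 homography mapping the four detected corners of the first mask region onto an upright rectangle. The sketch below solves for that matrix directly (the same math behind OpenCV's `getPerspectiveTransform`); the corner coordinates are hypothetical:

```python
import numpy as np

def homography(src_pts, dst_pts):
    """Solve for the 3x3 perspective matrix H (with H[2,2] fixed to 1)
    that maps four source corners to four target corners."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply H to a point in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Skewed quadrilateral (detected mask corners) -> upright 100x100 square.
src = [(10, 10), (120, 20), (130, 140), (5, 120)]
dst = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = homography(src, dst)
```

Warping every pixel of the first mask region through H (or through a library routine such as `warpPerspective`) yields the rectified second mask region.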
Regarding the process of determining the second mask region as a third target region corresponding to the second target region, in some possible implementations, the third target region may be extracted in the second mask region by an image processing algorithm. For example, an edge detection algorithm or region growing algorithm may be used to identify a third target region in the second mask region. In some possible implementations, the third target region may be extracted in the second mask region by invoking an image processing library or software. For example, an edge detection function or region growing function in the OpenCV library may be invoked to identify a third target region in the second mask region. In some possible implementations, one of the second mask regions may be manually selected by the user as the third target region. For example, the user may frame a region in the second mask region through the interactive interface, and the size and shape of the region may be set by the user according to the size and shape of the second target region.
In this embodiment, the second mask region is obtained by performing perspective transformation correction on the first mask region, correcting the image distortion caused by shooting angle or perspective deformation so that the shape and size of the second mask region better match those of the second target region; the second mask region is then determined as the third target region corresponding to the second target region, ensuring the accuracy and integrity of the third target region. The perspective transformation correction improves the extraction precision of the third target region, and the use of the second mask region further optimizes its shape and size, providing a reliable basis for the subsequent decoding process.
In an embodiment, the step S606 of the embodiment shown in fig. 12 is further refined, and may include the following steps:
Dividing a third target area into a plurality of grid areas according to the structure information corresponding to the first coding matrix;
Carrying out pixel gray value statistics on each grid region in the grid regions to obtain a pixel gray value statistics result of each grid region;
determining the coding elements corresponding to each grid region according to the pixel gray value statistical result of each grid region, wherein the coding elements corresponding to the grid region are the first coding elements or the second coding elements;
and determining a first coding matrix according to the coding elements corresponding to each grid region.
The grid area according to the embodiment refers to a plurality of sub-areas that divide the third target area into according to the structure information corresponding to the first coding matrix, and each grid area corresponds to one coding element in the first coding matrix. Pixel gray value statistics refer to the calculation and analysis of pixel gray values within each grid region to determine gray features for that grid region. The coding elements refer to basic units in the first coding matrix, and the values of the basic units are the first coding elements or the second coding elements, which respectively represent different binary information. The first coding matrix refers to a matrix composed of a plurality of coding elements for storing and delivering specific information.
Regarding the process of dividing the third target area into a plurality of grid areas according to the structure information corresponding to the first encoding matrix, in some possible implementations, the third target area may be divided into a plurality of grid areas by an image processing algorithm. For example, the third target area may be divided into a plurality of grid areas according to a preset grid size according to the size and shape of the first coding matrix, and the size and shape of each grid area corresponds to the coding elements in the first coding matrix one by one. In some possible implementations, the third target region may be divided into multiple grid regions by invoking an image processing library or software. For example, an image segmentation function in the OpenCV library may be invoked to divide the third target area into a plurality of mesh areas according to a preset mesh size. In some possible implementations, the third target area may be divided into a plurality of grid areas by the user manually setting the grid size. For example, the user may set a mesh size through the interactive interface, dividing the third target area into a plurality of mesh areas.
Regarding the process of performing pixel gray value statistics on each of the plurality of grid areas to obtain a pixel gray value statistics result for each grid area, in some possible implementations, the number of pixels with gray value of 0 and the number of pixels with gray value of 255 in each grid area may be counted as the pixel gray value statistics result for the grid area. For example, a loop structure may be used to traverse all pixels within each grid region, determine whether the gray value of each pixel is 0 or 255, and count separately. In some possible implementations, pixel gray value statistics may be performed for each grid region by invoking an image processing library or software. For example, a statistical function in the OpenCV library may be invoked to calculate the average, median, or mode of all pixel gray values in each grid region as a pixel gray value statistic for that grid region.
Regarding the process of determining the coding elements corresponding to each grid region according to the pixel gray value statistics result of each grid region, in some possible implementations, the coding elements corresponding to the grid region having the number of pixels with the gray value of 0 greater than the number of pixels with the gray value of 255 may be determined as the first coding elements, and the coding elements corresponding to the grid region having the number of pixels with the gray value of 0 less than or equal to the number of pixels with the gray value of 255 may be determined as the second coding elements. Regarding the use of an average, median, or mode as the pixel gray value statistic, in some possible implementations, the average, median, or mode of all pixel gray values within each grid region may be calculated as the pixel gray value statistic for that grid region. For example, if the average value of the pixel gray values in the grid area is greater than the preset gray threshold, determining the coding element corresponding to the grid area as the first coding element, and otherwise, determining the coding element as the second coding element. Similarly, the coding elements corresponding to the grid region may also be determined by classification based on the median or mode statistics.
Regarding the process of determining the first coding matrix from the coding elements corresponding to each grid region, in some possible implementations, the first coding matrix may be generated by arranging the coding elements corresponding to each grid region according to their positions in the third target region. For example, the coding elements corresponding to each mesh region may be arranged in the row-column order in the third target region, so as to generate the first coding matrix. In some possible implementations, the first encoding matrix may be generated by calling an image processing library or software to arrange the encoding elements corresponding to each grid region. For example, a matrix generation function in the OpenCV library may be called to arrange the coding elements corresponding to each grid region to generate a first coding matrix. In some possible implementations, the first encoding matrix may be generated by a user manually arranging the encoding elements corresponding to each grid region. For example, the user may arrange the coding elements corresponding to each grid region through the interactive interface to generate a first coding matrix.
Illustratively, assume that the structure information corresponding to the first coding matrix indicates that the first coding matrix is 13×13 and the third target area is M×N. The third target area is divided into 13×13 grid areas according to this structure information, each grid area having a size of (M/13)×(N/13). Pixel gray value statistics are performed for each grid area, counting the number of pixels C0(i, j) with gray value 0 and the number of pixels C255(i, j) with gray value 255, where i and j denote the row and column index of the grid area. According to a preset classification rule, if C0(i, j) > C255(i, j), the coding element corresponding to the grid area is determined as a first coding element; if C0(i, j) ≤ C255(i, j), it is determined as a second coding element. Finally, the coding elements corresponding to each grid area are arranged according to their positions in the third target area to generate a 13×13 first coding matrix.
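The grid-decoding procedure above can be sketched at a smaller scale as follows. Mapping a majority-black cell to bit 1 and a majority-white cell to bit 0 is an assumption for illustration; the text only fixes the C0-versus-C255 majority rule, not the bit values:

```python
import numpy as np

def decode_grid(region: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Split a binarized region into rows x cols cells; a cell whose
    count of 0-valued (black) pixels exceeds its count of 255-valued
    pixels decodes to 1 (first coding element), otherwise to 0."""
    h, w = region.shape
    ch, cw = h // rows, w // cols
    bits = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            cell = region[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            c0 = int(np.sum(cell == 0))
            c255 = int(np.sum(cell == 255))
            bits[i, j] = 1 if c0 > c255 else 0
    return bits

# 4x4-cell region, each cell 3x3 pixels; black out two cells.
region = np.full((12, 12), 255, dtype=np.uint8)
region[0:3, 0:3] = 0      # cell (0, 0) decodes to 1
region[3:6, 6:9] = 0      # cell (1, 2) decodes to 1
bits = decode_grid(region, 4, 4)
```

The majority vote per cell makes the decoding robust to a few stray pixels inside each grid area.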
In this embodiment, the third target area is first divided into a plurality of grid areas according to the structure information corresponding to the first coding matrix, providing a basis for subsequent pixel gray value statistics and coding element determination; pixel gray value statistics are then computed for each grid area, providing a basis for determining the coding element corresponding to each grid area; the coding element corresponding to each grid area is determined from its pixel gray value statistics, ensuring the accuracy of the coding elements; finally, the first coding matrix is determined from the coding elements corresponding to each grid area, completing the conversion from the third target area to the first coding matrix. Dividing into grid areas improves the extraction precision of the first coding matrix, and the pixel gray value statistics and coding element determination further optimize the generation of the first coding matrix, providing a reliable basis for the subsequent decoding process.
Referring to fig. 15, a schematic flow chart of decryption to obtain identification information is provided in the embodiment of the present disclosure. As shown in fig. 15, the method of the embodiment of the present disclosure may include the following steps S902 to S906, and steps S902 to S906 may be used as refinement steps for step S608 of the embodiment shown in fig. 12.
S902, obtaining encrypted data from a first coding matrix;
S904, decrypting the encrypted data according to a preset key to obtain initial data;
S906, acquiring identification information corresponding to the target object from the initial data.
In particular, regarding the process of obtaining the encrypted data from the first encoding matrix, in some possible implementations, the encrypted data may be extracted from the first encoding matrix by an image processing algorithm. For example, the encrypted data may be extracted from the first encoding matrix according to a preset extraction rule according to the size and shape of the first encoding matrix. In some possible implementations, the encrypted data may be extracted from the first encoding matrix by invoking an image processing library or software. For example, a matrix extraction function in the OpenCV library may be invoked to extract encrypted data from the first encoding matrix. In some possible implementations, the encrypted data may be extracted from the first encoding matrix by a user manually setting an extraction rule. For example, the user may set extraction rules through the interactive interface to extract encrypted data from the first encoding matrix.
Regarding the process of decrypting the encrypted data according to the preset key to obtain the initial data, in some possible implementations, a symmetric decryption algorithm may be used to decrypt the encrypted data. For example, the encrypted data may be decrypted using AES, DES, or other symmetric decryption algorithm, in combination with a predetermined key, to obtain the initial data. In some possible implementations, the encrypted data may be decrypted using an asymmetric decryption algorithm. For example, the encrypted data may be decrypted using RSA, ECC, or other asymmetric decryption algorithms, in combination with a predetermined key, to obtain the initial data. In some possible implementations, the encrypted data may be decrypted using a hybrid decryption approach. For example, the encrypted data may be decrypted by combining a symmetric decryption algorithm and an asymmetric decryption algorithm to obtain the initial data.
Regarding the process of obtaining the identification information corresponding to the target object from the initial data, in some possible implementations, the identification information corresponding to the target object may be obtained by parsing the initial data. For example, the initial data may be parsed according to a preset parsing rule, and the identification information corresponding to the target object extracted. In some possible implementations, the initial data may be parsed by invoking a data processing library or software. For example, a parsing function in the data processing library may be called to parse the initial data and extract the identification information corresponding to the target object. In some possible implementations, the initial data may be parsed by a user manually setting parsing rules. For example, a user can set a parsing rule through an interactive interface, parse the initial data, and extract the identification information corresponding to the target object.
Illustratively, the decryption process and the identification information acquisition process may be expressed as the following relationship:
I_product = AES_decrypt(M_rs, K_key)
Wherein I_product denotes the identification information corresponding to the target object, K_key denotes the preset key, M_rs denotes the encrypted data, and AES_decrypt() denotes the AES decryption algorithm function.
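To illustrate the symmetric encrypt/decrypt round trip without a cryptography dependency, the sketch below uses a SHA-256-derived XOR keystream as an explicit stand-in for AES; it is NOT AES and not cryptographically secure, and the key and payload are hypothetical:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key by counter-chained
    SHA-256 -- a stand-in for a real cipher's keystream."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Symmetric: the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"preset-key"
initial = b"PRODUCT-ID:8421"          # hypothetical identification payload
encrypted = xor_crypt(initial, key)    # analogue of M_rs from the matrix
decrypted = xor_crypt(encrypted, key)  # recover the initial data
```

In a real deployment the decrypt step would use the same AES (or RSA/ECC) primitive and preset key as the encoding side, for example via a maintained cryptography library.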
In this embodiment, a basis for subsequent decryption is first provided by acquiring the encrypted data from the first coding matrix; the encrypted data is then decrypted according to the preset key to obtain the initial data, ensuring the integrity and accuracy of the data; finally, the identification information corresponding to the target object is acquired from the initial data, completing the conversion from the first coding matrix to the identification information. The acquisition and decryption of the encrypted data ensure that the data is tamper-proof and confidential, and the parsing of the initial data further optimizes the extraction of the identification information, providing reliable data support for the subsequent anti-channel-conflict verification process.
In an embodiment, further refining step S902 of the embodiment shown in fig. 15, the following steps may be included:
Performing redundant decoding processing on the first coding matrix to obtain encrypted data.
Specifically, the redundant decoding processing related to the present embodiment refers to a process of removing redundant information by performing a decoding operation on the first encoding matrix to extract encrypted data. The purpose of the redundant decoding process is to recover the original encrypted data and ensure the integrity and accuracy of the data. The encrypted data refers to data obtained by the redundant decoding process for subsequent decryption processes.
Regarding the process of redundantly decoding the first encoding matrix to obtain encrypted data, in some possible implementations, Hamming codes may be used to redundantly decode the first encoding matrix. For example, the original encrypted data may be recovered by removing the check bits in the first encoding matrix. In some possible implementations, the first encoding matrix may be redundantly decoded using Reed-Solomon codes. For example, the original encrypted data may be recovered by removing redundant data in the first encoding matrix. In some possible implementations, the first encoding matrix may be redundantly decoded using convolutional codes. For example, the original encrypted data may be restored by performing the inverse of the convolution operation on the first encoding matrix.
Regarding the redundant decoding process of the hamming code, in some possible implementations, this may be achieved by first determining the check bit positions in the first encoding matrix according to the encoding rules of the hamming code, secondly correcting the data in the first encoding matrix by the check bits, removing the erroneous data, and finally removing the check bits, recovering the original encrypted data. Regarding the redundant decoding process of the reed-solomon code, in some possible implementations, this may be achieved by first determining the positions of the redundant data in the first encoding matrix according to the encoding rules of the reed-solomon code, second correcting the data in the first encoding matrix by the redundant data to remove the erroneous data, and finally removing the redundant data to recover the original encrypted data. Regarding the redundant decoding process of the convolutional code, in some possible implementations, the method may be implemented by firstly determining a convolutional operation parameter in a first encoding matrix according to an encoding rule of the convolutional code, secondly decoding data in the first encoding matrix through an inverse operation of the convolutional operation to remove redundant information, and finally recovering original encrypted data.
Regarding the specific implementation of the redundancy decoding process, in some possible implementations, the redundancy decoding process may be performed on the first encoding matrix by an image processing algorithm. For example, the first encoding matrix may be subjected to a redundant decoding process using a decoding algorithm of hamming code, reed-solomon code or convolutional code, to recover the original encrypted data. In some possible implementations, the first encoding matrix may be subjected to a redundant decoding process by invoking an image processing library or software. For example, a decoding function in the OpenCV library may be called to perform redundancy decoding processing on the first encoding matrix, and original encrypted data is recovered. In some possible implementations, the redundant decoding process may be performed on the first encoding matrix by a user manually setting a decoding rule. For example, the user may set a decoding rule through the interactive interface, perform redundancy decoding processing on the first encoding matrix, and recover the original encrypted data.
Illustratively, the redundancy decoding process may be expressed as the following relationship:
M_rs = rs_decode(M_unit)
Where M_unit denotes the first coding matrix, M_rs denotes the encrypted data, and rs_decode() denotes the redundant decoding processing function.
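A full Reed-Solomon decoder is too involved for a short sketch, but the Hamming-code option mentioned above shows the same remove-redundancy idea compactly. The codeword layout [p1, p2, d1, p3, d2, d3, d4] is the standard Hamming(7,4) convention, assumed here for illustration:

```python
def hamming74_decode(code):
    """Decode one Hamming(7,4) codeword [p1, p2, d1, p3, d2, d3, d4]:
    recompute the three parity checks, correct a single-bit error at
    the syndrome position, then strip the check bits -- the same
    remove-redundancy step the text applies to the first coding
    matrix."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:                      # flip the erroneous bit (1-indexed)
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # data bits d1..d4

# Codeword for data 1011 is 0110011; corrupt one bit and still recover.
clean = hamming74_decode([0, 1, 1, 0, 0, 1, 1])
fixed = hamming74_decode([0, 1, 1, 0, 1, 1, 1])  # bit 5 flipped
```

The single-bit correction before stripping the check bits is what gives the decoding step its error-tolerance; Reed-Solomon plays the same role at the symbol level with a larger correction budget.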
In this embodiment, redundant information is removed by performing redundant decoding processing on the first coding matrix to recover the original encrypted data, ensuring the integrity and accuracy of the data. The redundant decoding process restores the original encrypted data by removing the check bits, redundant data, or convolution redundancy, providing a reliable basis for the subsequent decryption process. It further enhances the anti-interference and error-correction capabilities of the data, providing reliable data support for the subsequent anti-channel-conflict verification process.
The pattern encoding device provided in the embodiment of the present disclosure will be described in detail with reference to fig. 16. It should be noted that, the pattern encoding device 1 in fig. 16 is used to execute the method of the embodiment shown in fig. 2 to 15 of the present specification, and for convenience of explanation, only the portion relevant to the embodiment of the present specification is shown, and specific technical details are not disclosed, please refer to the embodiment shown in fig. 2 to 15 of the present specification. The pattern encoding device 1 specifically includes:
an acquiring unit 11, configured to acquire an original pattern and identification information corresponding to a target object;
A generating unit 12 for generating a first encoding matrix by encryption according to the identification information, the first encoding matrix including a first encoding element and a second encoding element having different binary logic states;
a determining unit 13, configured to determine a first target area corresponding to the first coding matrix in the original pattern;
The adjusting unit 14 is configured to perform color increment adjustment on the first target area to obtain a target pattern, where the target pattern includes a second target area corresponding to the first target area, and colors of a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the second target area are different.
Optionally, the adjusting unit 14 is further configured to obtain a first color increment corresponding to the first coding element and a second color increment corresponding to the second coding element, where the first color increment and the second color increment are different, perform color increment adjustment on a sub-region corresponding to the first coding element in the first target region according to the first color increment, and perform color increment adjustment on a sub-region corresponding to the second coding element in the first target region according to the second color increment, so as to obtain the target pattern.
Optionally, the adjusting unit 14 is further configured to obtain a third color increment corresponding to a target coding element, where the target coding element is one of the first coding element and the second coding element, and perform color increment adjustment on a sub-region corresponding to the target coding element in the first target region according to the third color increment, so as to obtain the target pattern.
Optionally, the generating unit 12 is further configured to generate initial data according to the identification information, encrypt the initial data according to a preset key to obtain encrypted data, and generate a first encoding matrix according to the encrypted data.
Optionally, the generating unit 12 is further configured to obtain random identification information corresponding to the target object, and generate initial data according to the identification information, the random identification information, and a preset key.
Optionally, the generating unit 12 is further configured to generate a second encoding matrix according to the encrypted data, perform redundant encoding processing on the second encoding matrix to obtain a third encoding matrix, and extract the third encoding matrix based on a preset first size to obtain a first encoding matrix.
Optionally, the generating unit 12 is further configured to extract the third encoding matrix based on a preset first size to obtain a fourth encoding matrix, obtain a first positioning encoding matrix of a preset second size and a second positioning encoding matrix of a preset third size, where the preset second size and the preset third size are different and smaller than the preset first size, embed the first positioning encoding matrix and the second positioning encoding matrix in the fourth encoding matrix to obtain the first encoding matrix, and the first positioning encoding matrix and the second positioning encoding matrix are diagonally distributed in the first encoding matrix.
Optionally, the generating unit 12 is further configured to obtain a third positioning encoding matrix with a preset second size and a fourth positioning encoding matrix with a preset third size, obtain first description information and second description information of the target object, add the preset first positioning information in a fixed encoding area of the third positioning encoding matrix, add the first description information in an unfixed encoding area of the third positioning encoding matrix to obtain a first positioning encoding matrix with a preset second size, add the preset second positioning information in a fixed encoding area of the fourth positioning encoding matrix, and add the second description information in an unfixed encoding area of the fourth positioning encoding matrix to obtain a second positioning encoding matrix with a preset third size.
The effects achieved by this embodiment are referred to the related embodiments of the above pattern encoding method, and will not be described herein.
Next, a pattern decoding apparatus provided in the embodiment of the present specification will be described in detail with reference to fig. 17. It should be noted that, the pattern decoding device 2 in fig. 17 is used to execute the method of the embodiment shown in fig. 2 to 15 of the present specification, and for convenience of explanation, only the portion relevant to the embodiment of the present specification is shown, and specific technical details are not disclosed, please refer to the embodiment shown in fig. 2 to 15 of the present specification. The pattern decoding device 2 specifically includes:
A first obtaining unit 21, configured to obtain a captured image of a target object, where the captured image includes a target pattern, and the target pattern includes a second target area corresponding to a first coding matrix, and the first coding matrix includes a first coding element and a second coding element that have different binary logic states, and a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the second target area have different colors;
A processing unit 22, configured to perform binarization processing on the captured image to obtain a binarized image, and determine a third target area corresponding to the second target area from the binarized image, where a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the third target area have different gray values;
A second acquisition unit 23 for acquiring a first encoding matrix according to a third target area;
And the decryption unit 24 is configured to decrypt the first encoding matrix to obtain identification information corresponding to the target article.
Optionally, the processing unit 22 is further configured to pre-process the captured image to obtain a pre-processed image, and call a pre-trained semantic segmentation model to perform binarization processing on the pre-processed image to obtain a binarized image.
Optionally, the processing unit 22 is further configured to denoise the captured image to obtain a denoised image, convert the denoised image into a gray image, and resize the gray image according to a preset fourth size to obtain a preprocessed image.
Optionally, the processing unit 22 is further configured to obtain a first positioning coding matrix with a preset second size and a second positioning coding matrix with a preset third size, determine a first mask area from the binarized image according to the structure information corresponding to the first coding matrix, the first positioning coding matrix and the second positioning coding matrix, and determine a third target area corresponding to the second target area based on the first mask area.
Optionally, the processing unit 22 is further configured to perform perspective transformation correction on the first mask area to obtain a second mask area, and determine the second mask area as a third target area corresponding to the second target area.
Optionally, the second obtaining unit 23 is further configured to divide the third target area into a plurality of grid areas according to the structure information corresponding to the first coding matrix, perform pixel gray value statistics on each grid area in the plurality of grid areas to obtain a pixel gray value statistics result of each grid area, determine, according to the pixel gray value statistics result of each grid area, a coding element corresponding to each grid area, where the coding element corresponding to each grid area is a first coding element or a second coding element, and determine, according to the coding element corresponding to each grid area, the first coding matrix.
Optionally, the decryption unit 24 is further configured to obtain encrypted data from the first encoding matrix, decrypt the encrypted data according to a preset key to obtain initial data, and obtain identification information corresponding to the target object from the initial data.
Optionally, the decryption unit 24 is further configured to perform redundancy decoding processing on the first encoding matrix to obtain encrypted data.
The effects achieved by this embodiment are referred to the related embodiments of the pattern decoding method, and are not described herein.
Referring to fig. 18, a schematic structural diagram of an electronic device is provided in an embodiment of the present disclosure. As shown in fig. 18, the electronic device 1000 may include at least one processor 1001, e.g., a CPU, at least one network interface 1004, an input output interface 1003, a memory 1005, and at least one communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (non-volatile memory), such as at least one disk memory. The memory 1005 may also optionally be at least one storage device located remotely from the processor 1001. As shown in fig. 18, an operating system, a network communication module, an input-output interface module, and an application program may be included in a memory 1005, which is one type of computer storage medium.
In the electronic device 1000 shown in fig. 18, the input-output interface 1003 is mainly used as an interface for providing input for a user, and acquires data input by the user.
In one embodiment, the processor 1001 may be configured to invoke an application program stored in the memory 1005 and specifically perform the following operations:
Acquiring an original pattern corresponding to a target object and identification information;
Encrypting according to the identification information to generate a first coding matrix, wherein the first coding matrix comprises a first coding element and a second coding element with different binary logic states;
Determining a first target area corresponding to the first coding matrix in the original pattern;
And performing color increment adjustment on the first target area to obtain a target pattern, wherein the target pattern comprises a second target area corresponding to the first target area, and the colors of a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the second target area are different.
Optionally, when performing color increment adjustment on the first target area to obtain the target pattern, the processor 1001 specifically performs the operations of obtaining a first color increment corresponding to the first coding element and a second color increment corresponding to the second coding element, where the first color increment is different from the second color increment, performing color increment adjustment on a sub-area corresponding to the first coding element in the first target area according to the first color increment, and performing color increment adjustment on the sub-area corresponding to the second coding element in the first target area according to the second color increment to obtain the target pattern.
Optionally, when performing color increment adjustment on the first target area to obtain the target pattern, the processor 1001 specifically performs the operations of obtaining a third color increment corresponding to a target coding element, where the target coding element is one of the first coding element and the second coding element, and performing color increment adjustment on a sub-area corresponding to the target coding element in the first target area according to the third color increment to obtain the target pattern.
Alternatively, the processor 1001 specifically performs the following operations of generating initial data according to the identification information, performing encryption processing on the initial data according to a preset key to obtain encrypted data, and generating the first encoding matrix according to the encrypted data when performing encryption generation of the first encoding matrix according to the identification information.
Optionally, the processor 1001 specifically performs the operations of acquiring random identification information corresponding to the target item when performing the generation of the initial data according to the identification information, and generating the initial data according to the identification information, the random identification information, and the preset key.
Optionally, the processor 1001 specifically performs the following operations when executing the generation of the first encoding matrix according to the encrypted data, the generation of the second encoding matrix according to the encrypted data, the redundant encoding processing of the second encoding matrix to obtain the third encoding matrix, and the extraction of the third encoding matrix based on the preset first size to obtain the first encoding matrix.
Optionally, when the processor 1001 performs extraction of the third coding matrix based on the preset size to obtain the first coding matrix, the processor specifically performs the following operations of extracting the third coding matrix based on the preset first size to obtain the fourth coding matrix, obtaining a first positioning coding matrix of a preset second size and a second positioning coding matrix of a preset third size, where the preset second size and the preset third size are different and are smaller than the preset first size, and embedding the first positioning coding matrix and the second positioning coding matrix in the fourth coding matrix to obtain the first coding matrix, where the first positioning coding matrix and the second positioning coding matrix are diagonally distributed in the first coding matrix.
Optionally, the processor 1001 specifically performs the operations of acquiring a third positioning encoding matrix of a preset second size and a fourth positioning encoding matrix of a preset third size when executing the acquisition of the first positioning encoding matrix of the preset second size and the second positioning encoding matrix of the preset third size, acquiring the first description information and the second description information of the target object, adding the preset first positioning information in the fixed encoding region of the third positioning encoding matrix, adding the first description information in the non-fixed encoding region of the third positioning encoding matrix, obtaining the first positioning encoding matrix of the preset second size, adding the preset second positioning information in the fixed encoding region of the fourth positioning encoding matrix, and adding the second description information in the non-fixed encoding region of the fourth positioning encoding matrix, obtaining the second positioning encoding matrix of the preset third size.
In one embodiment, the processor 1001 may be configured to invoke an application program stored in the memory 1005 and specifically perform the following operations:
Acquiring a shooting image of a target object, wherein the shooting image comprises a target pattern, the target pattern comprises a second target area corresponding to a first coding matrix, the first coding matrix comprises a first coding element and a second coding element with different binary logic states, and the colors of a subarea corresponding to the first coding element and a subarea corresponding to the second coding element in the second target area are different;
Binarizing the shot image to obtain a binarized image, determining a third target area corresponding to the second target area from the binarized image, wherein the gray values of the subareas corresponding to the first coding elements and the subareas corresponding to the second coding elements in the third target area are different;
acquiring a first coding matrix according to a third target area;
Decrypting the first coding matrix to obtain the identification information corresponding to the target object.
Optionally, when performing binarization processing on the captured image to obtain a binarized image, the processor 1001 specifically performs the following operations of preprocessing the captured image to obtain a preprocessed image, and invoking a pre-trained semantic segmentation model to perform binarization processing on the preprocessed image to obtain a binarized image.
Alternatively, when preprocessing a captured image to obtain a preprocessed image, the processor 1001 specifically performs denoising processing on the captured image to obtain a denoised image, converting the denoised image into a grayscale image, and resizing the grayscale image according to a preset fourth size to obtain the preprocessed image.
Optionally, the processor 1001 specifically performs the operations of acquiring a first positioning encoding matrix of a preset second size and a second positioning encoding matrix of a preset third size when determining a third target area corresponding to the second target area from the binarized image, determining a first mask area from the binarized image according to the structure information corresponding to the first encoding matrix, the first positioning encoding matrix and the second positioning encoding matrix, and determining the third target area corresponding to the second target area based on the first mask area.
Optionally, the processor 1001 specifically performs, when determining a third target area corresponding to the second target area based on the first mask area, performing perspective transformation rectification on the first mask area to obtain the second mask area, and determining the second mask area as the third target area corresponding to the second target area.
Optionally, when the processor 1001 obtains the first coding matrix according to the third target area, the processor specifically performs the following operations of dividing the third target area into a plurality of grid areas according to the structure information corresponding to the first coding matrix, performing pixel gray value statistics on each grid area in the plurality of grid areas to obtain a pixel gray value statistics result of each grid area, determining the coding element corresponding to each grid area according to the pixel gray value statistics result of each grid area, wherein the coding element corresponding to each grid area is the first coding element or the second coding element, and determining the first coding matrix according to the coding element corresponding to each grid area.
Optionally, when decrypting the first encoding matrix to obtain the identification information corresponding to the target object, the processor 1001 specifically performs the following operations of obtaining encrypted data from the first encoding matrix, decrypting the encrypted data according to a preset key to obtain initial data, and obtaining the identification information corresponding to the target object from the initial data.
Alternatively, the processor 1001 specifically performs, when performing the acquisition of the encrypted data from the first encoding matrix, the operation of performing redundancy decoding processing on the first encoding matrix to obtain the encrypted data.
The effects achieved by this embodiment are referred to the related embodiments of the above-mentioned pattern encoding method and pattern decoding method, and will not be described herein.
The embodiment of the present disclosure further provides a computer storage medium, in which computer program codes are stored, and when the computer program codes are executed, the pattern encoding method and the pattern decoding method of the embodiment shown in fig. 2 to 15 are implemented, and the specific implementation process may refer to the specific description of the embodiment shown in fig. 2 to 15, which is not repeated herein.
The embodiment of the present disclosure further provides a computer program product, where at least one instruction is stored in the computer program product, and when the at least one instruction is executed by a processor, the pattern encoding method and the pattern decoding method in the embodiment shown in fig. 2 to 15 are implemented, and the specific implementation process may refer to the specific description of the embodiment shown in fig. 2 to 15, which is not repeated herein.
Those skilled in the art will appreciate that the processes implementing all or part of the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, and the program may be stored in a computer readable storage medium, and the program may include the processes of the embodiments of the methods as above when executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random-access Memory (Random Access Memory, RAM), or the like.
The foregoing disclosure is only illustrative of the preferred embodiments of the present invention and is not to be construed as limiting the scope of the claims, which follow the meaning of the claims of the present invention.

Claims (21)

1.一种图案编码方法,包括:1. A pattern coding method, comprising: 获取目标物品对应的原始图案以及标识信息;Obtain the original pattern and identification information corresponding to the target object; 根据所述标识信息加密生成第一编码矩阵,所述第一编码矩阵包括二进制逻辑状态互异的第一编码元素和第二编码元素;Encrypting and generating a first coding matrix according to the identification information, the first coding matrix including first coding elements and second coding elements with different binary logic states; 在所述原始图案中确定所述第一编码矩阵对应的第一目标区域;Determining a first target area corresponding to the first encoding matrix in the original pattern; 对所述第一目标区域进行颜色增量调整得到目标图案,所述目标图案包括与所述第一目标区域对应的第二目标区域,所述第二目标区域中所述第一编码元素对应的子区域和所述第二编码元素对应的子区域颜色互异。A target pattern is obtained by performing color incremental adjustment on the first target area, wherein the target pattern includes a second target area corresponding to the first target area, and a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the second target area have different colors. 2.根据权利要求1所述的方法,所述对所述第一目标区域进行颜色增量调整得到目标图案,包括:2. The method according to claim 1, wherein the step of performing color incremental adjustment on the first target area to obtain the target pattern comprises: 获取所述第一编码元素对应的第一颜色增量,以及所述第二编码元素对应的第二颜色增量,所述第一颜色增量与所述第二颜色增量互异;Obtaining a first color increment corresponding to the first coding element and a second color increment corresponding to the second coding element, wherein the first color increment and the second color increment are different from each other; 根据所述第一颜色增量对所述第一目标区域中所述第一编码元素对应的子区域进行颜色增量调整,以及根据所述第二颜色增量对所述第一目标区域中所述第二编码元素对应的子区域进行颜色增量调整,得到目标图案。A target pattern is obtained by performing a color increment adjustment on a subregion corresponding to the first coding element in the first target region according to the first color increment, and performing a color increment adjustment on a subregion corresponding to the second coding element in the first target region according to the second color increment. 
3.根据权利要求1所述的方法,所述对所述第一目标区域进行颜色增量调整得到目标图案,包括:3. The method according to claim 1, wherein the step of performing color incremental adjustment on the first target area to obtain the target pattern comprises: 获取目标编码元素对应的第三颜色增量,所述目标编码元素为所述第一编码元素和所述第二编码元素中的一种;Obtaining a third color increment corresponding to a target coding element, where the target coding element is one of the first coding element and the second coding element; 根据所述第三颜色增量对所述第一目标区域中所述目标编码元素对应的子区域进行颜色增量调整,得到目标图案。According to the third color increment, a color increment adjustment is performed on the sub-region corresponding to the target coding element in the first target region to obtain a target pattern. 4.根据权利要求1所述的方法,所述根据所述标识信息加密生成第一编码矩阵,包括:4. The method according to claim 1, wherein encrypting and generating a first coding matrix based on the identification information comprises: 根据所述标识信息生成初始数据;generating initial data according to the identification information; 根据预设密钥对所述初始数据进行加密处理,得到加密数据;Encrypting the initial data according to a preset key to obtain encrypted data; 根据所述加密数据生成第一编码矩阵。A first encoding matrix is generated according to the encrypted data. 5.根据权利要求4所述的方法,所述根据所述标识信息生成初始数据,包括:5. The method according to claim 4, wherein generating initial data according to the identification information comprises: 获取所述目标物品对应的随机标识信息;Obtaining random identification information corresponding to the target object; 根据所述标识信息、所述随机标识信息以及预设密钥,生成初始数据。Initial data is generated according to the identification information, the random identification information and a preset key. 6.根据权利要求4所述的方法,所述根据所述加密数据生成第一编码矩阵,包括:6. 
The method according to claim 4, wherein generating a first encoding matrix according to the encrypted data comprises: 根据所述加密数据生成第二编码矩阵;generating a second encoding matrix according to the encrypted data; 对第二编码矩阵进行冗余编码处理,得到第三编码矩阵;performing redundant coding processing on the second coding matrix to obtain a third coding matrix; 基于预设第一尺寸对所述第三编码矩阵进行提取,得到第一编码矩阵。The third encoding matrix is extracted based on a preset first size to obtain a first encoding matrix. 7.根据权利要求6所述的方法,所述基于预设尺寸对所述第三编码矩阵进行提取,得到第一编码矩阵,包括:7. The method according to claim 6, wherein extracting the third encoding matrix based on a preset size to obtain the first encoding matrix comprises: 基于预设第一尺寸对所述第三编码矩阵进行提取,得到第四编码矩阵;Extracting the third encoding matrix based on a preset first size to obtain a fourth encoding matrix; 获取预设第二尺寸的第一定位编码矩阵和预设第三尺寸的第二定位编码矩阵,所述预设第二尺寸和所述预设第三尺寸互异且均小于所述预设第一尺寸;Obtaining a first positioning coding matrix of a preset second size and a second positioning coding matrix of a preset third size, wherein the preset second size and the preset third size are different from each other and are both smaller than the preset first size; 在所述第四编码矩阵中嵌入所述第一定位编码矩阵和所述第二定位编码矩阵,得到第一编码矩阵,所述第一定位编码矩阵和所述第二定位编码矩阵在所述第一编码矩阵中呈对角分布。The first positioning coding matrix and the second positioning coding matrix are embedded in the fourth coding matrix to obtain a first coding matrix, where the first positioning coding matrix and the second positioning coding matrix are diagonally distributed in the first coding matrix. 8.根据权利要求7所述的方法,所述获取预设第二尺寸的第一定位编码矩阵和预设第三尺寸的第二定位编码矩阵,包括:8. 
The method according to claim 7, wherein obtaining the first positioning coding matrix of the preset second size and the second positioning coding matrix of the preset third size comprises: 获取预设第二尺寸的第三定位编码矩阵,以及预设第三尺寸的第四定位编码矩阵;Obtaining a third positioning coding matrix of a preset second size and a fourth positioning coding matrix of a preset third size; 获取所述目标物品的第一描述信息和第二描述信息;Obtaining first description information and second description information of the target item; 在所述第三定位编码矩阵的固定编码区中添加预设第一定位信息,以及在所述第三定位编码矩阵的非固定编码区中添加所述第一描述信息,得到预设第二尺寸的第一定位编码矩阵;Adding the preset first positioning information to the fixed coding area of the third positioning coding matrix, and adding the first description information to the non-fixed coding area of the third positioning coding matrix, to obtain a first positioning coding matrix of a preset second size; 在所述第四定位编码矩阵的固定编码区中添加预设第二定位信息,以及在所述第四定位编码矩阵的非固定编码区中添加所述第二描述信息,得到预设第三尺寸的第二定位编码矩阵。The preset second positioning information is added to the fixed coding area of the fourth positioning coding matrix, and the second description information is added to the non-fixed coding area of the fourth positioning coding matrix to obtain a second positioning coding matrix of a preset third size. 9.一种图案解码方法,包括:9. 
A pattern decoding method, comprising: 获取目标物品的拍摄图像,所述拍摄图像包括目标图案,所述目标图案包括第一编码矩阵对应的第二目标区域,所述第一编码矩阵包括二进制逻辑状态互异的第一编码元素和第二编码元素,所述第二目标区域中所述第一编码元素对应的子区域和所述第二编码元素对应的子区域颜色互异;Acquiring a captured image of a target object, the captured image including a target pattern, the target pattern including a second target area corresponding to a first coding matrix, the first coding matrix including first coding elements and second coding elements having different binary logic states, and a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the second target area having different colors; 对所述拍摄图像进行二值化处理得到二值化图像,从所述二值化图像中确定与所述第二目标区域对应的第三目标区域,所述第三目标区域中所述第一编码元素对应的子区域和所述第二编码元素对应的子区域灰度值互异;performing binarization processing on the captured image to obtain a binarized image, and determining a third target region corresponding to the second target region from the binarized image, wherein a subregion corresponding to the first coding element and a subregion corresponding to the second coding element in the third target region have different grayscale values; 根据所述第三目标区域获取所述第一编码矩阵;Acquire the first coding matrix according to the third target area; 对所述第一编码矩阵进行解密得到所述目标物品对应的标识信息。The first coding matrix is decrypted to obtain identification information corresponding to the target object. 10.根据权利要求9所述的方法,所述对所述拍摄图像进行二值化处理得到二值化图像,包括:10. The method according to claim 9, wherein the binarization processing of the captured image to obtain a binarized image comprises: 对所述拍摄图像进行预处理得到预处理图像;Preprocessing the captured image to obtain a preprocessed image; 调用预先训练好的语义分割模型对所述预处理图像进行二值化处理,得到二值化图像。A pre-trained semantic segmentation model is called to perform binarization processing on the pre-processed image to obtain a binarized image. 11.根据权利要求10所述的方法,所述对所述拍摄图像进行预处理得到预处理图像,包括:11. 
The method according to claim 10, wherein preprocessing the captured image to obtain a preprocessed image comprises: 对所述拍摄图像进行去噪处理,得到去噪图像;performing denoising processing on the captured image to obtain a denoised image; 将所述去噪图像转换为灰度图像;Converting the denoised image into a grayscale image; 根据预设第四尺寸对所述灰度图像进行尺寸调整,得到预处理图像。The grayscale image is resized according to a preset fourth size to obtain a preprocessed image. 12.根据权利要求9所述的方法,所述从所述二值化图像中确定与所述第二目标区域对应的第三目标区域,包括:12. The method according to claim 9, wherein determining a third target area corresponding to the second target area from the binarized image comprises: 获取预设第二尺寸的第一定位编码矩阵和预设第三尺寸的第二定位编码矩阵;Obtaining a first positioning coding matrix of a preset second size and a second positioning coding matrix of a preset third size; 根据所述第一编码矩阵对应的结构信息、所述第一定位编码矩阵以及所述第二定位编码矩阵,从所述二值化图像中确定第一掩膜区域;determining a first mask area from the binarized image according to the structural information corresponding to the first coding matrix, the first positioning coding matrix, and the second positioning coding matrix; 基于所述第一掩膜区域,确定与所述第二目标区域对应的第三目标区域。Based on the first mask area, a third target area corresponding to the second target area is determined. 13.根据权利要求12所述的方法,所述基于所述第一掩膜区域,确定与所述第二目标区域对应的第三目标区域,包括:13. The method according to claim 12, wherein determining a third target area corresponding to the second target area based on the first mask area comprises: 对所述第一掩膜区域进行透视变换矫正,得到第二掩膜区域;Performing perspective transformation correction on the first mask area to obtain a second mask area; 将所述第二掩膜区域确定为与所述第二目标区域对应的第三目标区域。The second mask area is determined as a third target area corresponding to the second target area. 14.根据权利要求9所述的方法,所述根据所述第三目标区域获取所述第一编码矩阵,包括:14. 
The method according to claim 9, wherein obtaining the first encoding matrix according to the third target area comprises: 根据所述第一编码矩阵对应的结构信息,将所述第三目标区域划分为多个网格区域;Dividing the third target area into a plurality of grid areas according to the structural information corresponding to the first encoding matrix; 对多个所述网格区域中的各所述网格区域进行像素灰度值统计,得到各所述网格区域的像素灰度值统计结果;Performing pixel grayscale value statistics on each of the plurality of grid areas to obtain pixel grayscale value statistics results for each of the grid areas; 根据各所述网格区域的像素灰度值统计结果,确定各所述网格区域对应的编码元素,所述网格区域对应的编码元素为所述第一编码元素或者所述第二编码元素;Determining, based on pixel grayscale value statistics of each grid area, a coding element corresponding to each grid area, where the coding element corresponding to the grid area is the first coding element or the second coding element; 根据各所述网格区域对应的编码元素,确定所述第一编码矩阵。The first coding matrix is determined according to the coding elements corresponding to each of the grid areas. 15.根据权利要求9所述的方法,所述对所述第一编码矩阵进行解密得到所述目标物品对应的标识信息,包括:15. The method according to claim 9, wherein decrypting the first coding matrix to obtain identification information corresponding to the target object comprises: 从所述第一编码矩阵中获取加密数据;Obtaining encrypted data from the first encoding matrix; 根据预设密钥对所述加密数据进行解密处理,得到初始数据;Decrypting the encrypted data according to a preset key to obtain initial data; 从所述初始数据中获取所述目标物品对应的标识信息。Acquire identification information corresponding to the target item from the initial data. 16.根据权利要求15所述的方法,所述从所述第一编码矩阵中获取加密数据,包括:16. The method according to claim 15, wherein obtaining encrypted data from the first encoding matrix comprises: 对所述第一编码矩阵进行冗余解码处理,得到加密数据。Redundant decoding is performed on the first encoding matrix to obtain encrypted data. 17.一种图案编码装置,包括:17. 
17. A pattern coding device, comprising: an acquisition unit, configured to acquire an original pattern and identification information corresponding to a target object; a generating unit, configured to generate a first coding matrix by encryption according to the identification information, the first coding matrix comprising a first coding element and a second coding element having mutually different binary logic states; a determining unit, configured to determine, in the original pattern, a first target area corresponding to the first coding matrix; and an adjustment unit, configured to perform incremental color adjustment on the first target area to obtain a target pattern, the target pattern comprising a second target area corresponding to the first target area, wherein a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the second target area have mutually different colors.
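The adjustment unit's incremental color adjustment can be sketched as follows. The claim fixes neither the channel nor the magnitude of the increment, so this hypothetical `embed_matrix` assumes a small ±delta on the blue channel: cells for the first coding element are nudged up, cells for the second coding element nudged down, keeping the change subtle enough that the identification information stays visually concealed in the pattern.

```python
def embed_matrix(region, bits, delta=6):
    """Embed an n x n binary coding matrix into an RGB region (2D list of tuples).

    Assumed scheme: +delta on the blue channel for bit 1 (first coding
    element), -delta for bit 0 (second coding element), clamped to [0, 255].
    """
    n = len(bits)
    h, w = len(region), len(region[0])
    ch, cw = h // n, w // n  # sub-area height/width per coding element
    out = [row[:] for row in region]
    for y in range(h):
        for x in range(w):
            bit = bits[min(y // ch, n - 1)][min(x // cw, n - 1)]
            r, g, b = out[y][x]
            b = min(255, b + delta) if bit else max(0, b - delta)
            out[y][x] = (r, g, b)
    return out
```

The decoder does not need to know delta exactly: after grayscale conversion and binarization, only the relative difference between the two kinds of sub-area survives, which is what the grid statistics exploit.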
18. A pattern decoding device, comprising: a first acquisition unit, configured to acquire a captured image of a target object, the captured image including a target pattern, the target pattern including a second target area corresponding to a first coding matrix, the first coding matrix including a first coding element and a second coding element having mutually different binary logic states, wherein a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the second target area have mutually different colors; a processing unit, configured to perform binarization processing on the captured image to obtain a binarized image, and to determine, from the binarized image, a third target area corresponding to the second target area, wherein a sub-area corresponding to the first coding element and a sub-area corresponding to the second coding element in the third target area have mutually different grayscale values; a second acquisition unit, configured to acquire the first coding matrix according to the third target area; and a decryption unit, configured to decrypt the first coding matrix to obtain identification information corresponding to the target object.
19. A computer-readable storage medium storing computer program code which, when executed, implements the method according to any one of claims 1 to 16.
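The processing unit's binarization step can be sketched with Otsu's method, a common choice for separating the two coding-element gray levels when the capture conditions vary; the claims do not name a thresholding algorithm, so this is an illustrative stand-in.

```python
def otsu_threshold(gray):
    """Return the Otsu threshold for a grayscale image (2D list of 0-255 ints)."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    total_sum = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0       # running sum of intensities below/at the candidate threshold
    weight_bg = 0      # running pixel count below/at the candidate threshold
    best_t, best_var = 0, -1.0
    for t in range(256):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (total_sum - sum_bg) / weight_fg
        # maximize between-class variance
        var = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(gray):
    """Binarize: pixels above the Otsu threshold become 255, all others 0."""
    t = otsu_threshold(gray)
    return [[255 if v > t else 0 for v in row] for row in gray]
```

A global threshold like this presumes reasonably even lighting across the second target area; an adaptive (per-block) threshold would be the natural variant for uneven captures.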
20. An electronic device, comprising a processor and a memory, wherein the memory stores a computer program adapted to be loaded by the processor to execute the steps of the method according to any one of claims 1 to 16.
21. A computer program product storing at least one instruction which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 16.
CN202510535769.1A 2025-04-25 2025-04-25 Pattern encoding method, pattern decoding method, device, medium and equipment Pending CN120449914A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510535769.1A CN120449914A (en) 2025-04-25 2025-04-25 Pattern encoding method, pattern decoding method, device, medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510535769.1A CN120449914A (en) 2025-04-25 2025-04-25 Pattern encoding method, pattern decoding method, device, medium and equipment

Publications (1)

Publication Number Publication Date
CN120449914A true CN120449914A (en) 2025-08-08

Family

ID=96616366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510535769.1A Pending CN120449914A (en) 2025-04-25 2025-04-25 Pattern encoding method, pattern decoding method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN120449914A (en)

Similar Documents

Publication Publication Date Title
JP4557866B2 (en) Mixed code, mixed code generation method and apparatus, and recording medium
CN110766594B (en) Information hiding method and device, detection method and device and anti-counterfeiting tracing method
CA2586274C (en) Mixed code, and method and apparatus for generating the same, and method and apparatus for decoding the same
JP4515999B2 (en) Mixed code decoding method and apparatus, and recording medium
US9594993B2 (en) Two dimensional barcode and method of authentication of such barcode
JP2023036579A (en) Generating and reading optical codes with variable density to match visual quality and reliability
US11216631B2 (en) Contrast edge barcodes
US10986245B2 (en) Encoded signal systems and methods to ensure minimal robustness
US11057539B2 (en) Method of embedding watermark data in an image by adjusting a pixel when color channel values are above or below a threshold and based on a pixel of a transformed noise-based image being white or black
US7720288B2 (en) Detecting compositing in a previously compressed image
CN105701757B (en) Product anti-counterfeiting method and device based on digital watermark and graphic code
JP4595014B2 (en) Digital watermark embedding device and detection device
CN111368960A (en) Quantum anti-counterfeiting two-dimensional code generation method and scanning method
CN109840574B (en) Two-dimensional code information hiding method and device, electronic equipment and storage medium
CN109671011B (en) Copyright information embedding method, copyright information extracting method and electronic equipment
CN106934756B (en) Method and system for embedding information in single-color or special-color image
Mayer et al. Fundamentals and Applications of Hardcopy Communication
CN120449914A (en) Pattern encoding method, pattern decoding method, device, medium and equipment
CN110955889A (en) Electronic document tracing method based on digital fingerprints
CN114330621A (en) Two-dimensional code anti-counterfeiting method and device based on identification information and storage medium
CN109784454A (en) A kind of information concealing method based on two dimensional code, device and electronic equipment
JP4469301B2 (en) Information embedding device, printing medium, and information reading device
CN118967171B (en) Method for checking authenticity of tobacco package printed product based on two-dimensional code
CN109829844B (en) Information hiding method and device based on two-dimension code and electronic equipment
Mishra Region Identification and Decoding Of Security Markers Using Image Processing Tools

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination