
CN115510887A - Graphic code processing method, apparatus, storage medium and program product - Google Patents

Graphic code processing method, apparatus, storage medium and program product

Info

Publication number
CN115510887A
Authority
CN
China
Prior art keywords
image
code
decoded image
graphic
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211160670.0A
Other languages
Chinese (zh)
Inventor
曾超然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202211160670.0A
Publication of CN115510887A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1443Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • G06F9/453Help systems
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices
    • G06K7/10861Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices sensing of data fields affixed to objects or articles, e.g. coded labels
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1456Methods for optical code recognition including a method step for retrieval of the optical code determining the orientation of the optical code with respect to the reader and correcting therefore
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Toxicology (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a graphic code processing method, apparatus, storage medium and program product, wherein the method comprises the following steps: acquiring, in response to a first operation instruction of a user, a decoded image of a graphic code in an image to be processed; when the decoded image comprises a plurality of graphic codes, acquiring size information of a displayable area in a user interface; generating mark information of the plurality of graphic codes according to the decoded image and the size information; generating a code identifying interface according to the image to be processed, the decoded image and the mark information; and displaying the code identifying interface in the displayable area, wherein the plurality of graphic codes and the mark information corresponding to each graphic code are displayed in the code identifying interface. In a multi-code image scene, the method and the apparatus improve the accuracy of the code frame marking position, improve the multi-code-frame display effect, improve the interaction performance of the terminal, and improve the user experience.

Description

Graphic code processing method, apparatus, storage medium and program product
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method, an apparatus, a storage medium, and a program product for processing a graphic code.
Background
Bar codes and two-dimensional codes are common graphic codes in life of people, and are widely applied to various scenes due to simple forms and capability of carrying rich data information. For the identification of bar codes or two-dimensional codes, image recognition technology is generally adopted to process, analyze and understand images thereof so as to identify various information carried by the bar codes or the two-dimensional codes.
For example, in an e-commerce scene, an image recognition technology may be used to recognize a two-dimensional code, a barcode, etc. in an image, so as to obtain information related to a commodity carried in the code.
In the existing code recognizing method, for a one-image-multiple-codes scene, the user needs to select a target graphic code, and then the content information of the target graphic code is output. However, when multiple codes are displayed, the existing method often displays the graphic codes inaccurately, so that the user cannot accurately select the target graphic code; the terminal interaction performance is therefore poor, which affects the user experience.
Disclosure of Invention
The embodiments of the present application mainly aim to provide a graphic code processing method, a graphic code processing apparatus, a storage medium, and a program product, which improve a multi-frame display effect, improve accuracy of a frame marking position, improve interaction performance of a terminal, and improve user experience in a multi-code image scene.
In a first aspect, an embodiment of the present application provides a method for processing a graphic code, including: acquiring, in response to a first operation instruction of a user, a decoded image of a graphic code in an image to be processed; when the decoded image comprises a plurality of graphic codes, acquiring size information of a displayable area in a user interface; generating mark information of the plurality of graphic codes according to the decoded image and the size information; generating a code identifying interface according to the image to be processed, the decoded image and the mark information; and displaying the code identifying interface in the displayable area, wherein the plurality of graphic codes and the mark information corresponding to each graphic code are displayed in the code identifying interface.
In an embodiment, the obtaining a decoded image of a graphic code in an image to be processed in response to a first operation instruction of a user includes: responding to the first operation instruction of a user, and acquiring the image to be processed; and decoding the graphic code in the image to be processed to obtain the decoded image of the graphic code in the image to be processed.
In one embodiment, the decoded image comprises: a code frame of each graphic code in the plurality of graphic codes; the generating the mark information of the plurality of graphic codes according to the decoded image and the size information comprises: converting the decoded image according to the size information to obtain a processed decoded image, wherein the code frame of each graphic code in the processed decoded image is adapted to the displayable area; and generating the mark information of each graphic code according to the processed decoded image.
In an embodiment, the converting the decoded image according to the size information to obtain a processed decoded image includes: and rotating the decoded image by a preset angle according to the size information, wherein each frame in the rotated decoded image is adapted to the display direction of the displayable area.
In an embodiment, the converting the decoded image according to the size information to obtain a processed decoded image includes: determining a frame scaling according to the size information and the decoded image; and carrying out scaling processing on the decoded image according to the frame scaling ratio, wherein each frame in the scaled decoded image is in the range of the displayable area.
In an embodiment, the converting the decoded image according to the size information to obtain a processed decoded image includes: and performing coordinate transformation processing on the code frame of each graphic code according to the size information and the decoded image, wherein the code frame of each graphic code in the decoded image after the coordinate transformation processing and the displayable area are in the same coordinate system.
In an embodiment, before performing a conversion process on the decoded image according to the size information to obtain a processed decoded image, the method further includes: judging whether all the code frames in the decoded image are suitable for the displayable area or not; and when a frame which is not adapted to the displayable area exists in the frames of the plurality of graphic codes, performing conversion processing on the decoded image according to the size information to obtain the processed decoded image.
In one embodiment, the mark information includes one or more of: content summary information of the graphic code, a type mark of the graphic code, and a prompt mark of the graphic code.
In one embodiment, the decoded image comprises: a code frame of each graphic code in the plurality of graphic codes; generating a code recognition interface according to the image to be processed, the decoded image and the mark information, wherein the mark information of the plurality of graphic codes is displayed in the code recognition interface, comprises the following steps: generating a mask layer image according to the decoded image and the mark information, wherein the mask layer image comprises: a code frame of each graphic code in the plurality of graphic codes and mark information of each code frame; and overlapping the mask layer image on the image to be processed to generate the code identifying interface.
In an embodiment, the image to be processed is a current frame image acquired by an image acquirer of the terminal; the step of overlapping the masking layer image on the image to be processed to generate the code recognition interface comprises the following steps: when the decoded image comprises a plurality of graphic codes, generating a cache map of the current frame image; and overlapping the covering image on the cache image to generate the code identifying interface.
In an embodiment, the image to be processed is a current frame image acquired by an image acquisition device of a terminal; the step of overlaying the masking layer image on the image to be processed to generate the code recognition interface comprises the following steps: when the decoded image comprises a plurality of graphic codes, controlling the image collector to stay at the current frame image; and overlapping the masking layer image on the current frame image to generate the code identifying interface.
In an embodiment, in the code recognition interface, the mark information is displayed at a position corresponding to a center point of a code frame of the graphic code.
In one embodiment, the marking information is dynamically displayed in the code recognition interface.
In an embodiment, after displaying the code identifying interface in the displayable region, the method further includes: and responding to a second operation instruction of the user on the mark information, and outputting content information carried by the selected target graphic code according to the decoded image.
In an embodiment, after displaying the code recognition interface in the displayable region, the method further includes: and responding to a third operation instruction of the user, and removing the code recognition interface.
In a second aspect, an embodiment of the present application provides a graphic code processing apparatus, including:
the first acquisition module is used for responding to a first operation instruction of a user and acquiring a decoding image of a graphic code in an image to be processed;
the second acquisition module is used for acquiring the size information of a displayable area in a user interface when the decoded image comprises a plurality of graphic codes;
a first generating module, configured to generate tag information of the plurality of graphic codes according to the decoded image and the size information;
the second generation module is used for generating a code identification interface according to the image to be processed, the decoded image and the mark information;
and the display module is used for displaying the code identifying interface in the displayable area, and the plurality of graphic codes and the mark information corresponding to each graphic code are displayed in the code identifying interface.
In an embodiment, the first obtaining module is configured to obtain the image to be processed in response to the first operation instruction of a user; and decoding the graphic code in the image to be processed to obtain the decoded image of the graphic code in the image to be processed.
In one embodiment, the decoded image comprises: a code frame of each graphic code in the plurality of graphic codes; the first generation module is used for performing conversion processing on the decoded image according to the size information to obtain a processed decoded image, and a code frame of each graphic code in the processed decoded image is adapted to the displayable area; and generating the mark information of each graphic code according to the processed decoded image.
In an embodiment, the first generating module is configured to rotate the decoded image by a preset angle according to the size information, and each frame in the rotated decoded image is adapted to a display direction of the displayable region.
In an embodiment, the first generating module is configured to determine a frame scaling ratio according to the size information and the decoded image; and carrying out scaling processing on the decoded image according to the frame scaling ratio, wherein each frame in the scaled decoded image is in the range of the displayable area.
In an embodiment, the first generating module is configured to perform coordinate transformation processing on the frame of each graphics code according to the size information and the decoded image, and the frame of each graphics code in the decoded image after the coordinate transformation processing and the displayable area are in the same coordinate system.
In one embodiment, the method further comprises: a judging module, configured to judge whether all frames in the decoded image are adapted to the displayable region before performing conversion processing on the decoded image according to the size information to obtain a processed decoded image; and the first generating module is further configured to, when a frame that is not adapted to the displayable region exists in the frames of the plurality of graphic codes, perform conversion processing on the decoded image according to the size information to obtain the processed decoded image.
In one embodiment, the mark information comprises one or more of: content summary information of the graphic code, a type mark of the graphic code, and a prompt mark of the graphic code.
In one embodiment, the decoded image comprises: a code frame of each graphic code in the plurality of graphic codes; the second generating module is configured to generate a mask layer image according to the decoded image and the mark information, where the mask layer image includes: a code frame of each graphic code in the plurality of graphic codes and mark information of each code frame; and overlapping the mask layer image on the image to be processed to generate the code identifying interface.
In an embodiment, the image to be processed is a current frame image acquired by an image acquisition device of a terminal; the second generating module is configured to generate a cache map of the current frame image when the decoded image includes a plurality of graphic codes; and overlapping the covering image on the cache image to generate the code identifying interface.
In an embodiment, the image to be processed is a current frame image acquired by an image acquirer of the terminal; the second generation module is configured to control the image collector to stay at the current frame image when the decoded image includes a plurality of graphic codes; and overlapping the masking layer image on the current frame image to generate the code identifying interface.
In an embodiment, in the code recognition interface, the marking information is displayed at a position corresponding to a center point of a code frame of the graphic code.
In an embodiment, the marking information is dynamically displayed in the code recognition interface.
In one embodiment, the method further comprises: and the output module is used for responding to a second operation instruction of the user on the mark information after the code identifying interface is displayed in the displayable area, and outputting the content information carried by the selected target graphic code according to the decoded image.
In one embodiment, the method further comprises: and the removing module is used for responding to a third operation instruction of the user after the code identifying interface is displayed in the displayable area, and removing the code identifying interface.
In a third aspect, an embodiment of the present application provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to cause the electronic device to perform the method of any of the above aspects.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the method according to any one of the above aspects is implemented.
In a fifth aspect, the present application provides a computer program product, including a computer program, which when executed by a processor, implements the method of any one of the above aspects.
According to the graphic code processing method, the graphic code processing apparatus, the storage medium and the program product, the decoded image of the graphic code in the image to be processed is obtained in real time in response to a first operation instruction of the user, and the decoded image is then analyzed. If the decoded image indicates that the image to be processed comprises a plurality of graphic codes, mark information of the graphic codes is generated based on the decoded image and the size information of the displayable area, and the marks of the plurality of graphic codes are then displayed respectively for the user to select. In this way, the mark information can be displayed at a size adapted to the displayable area, which improves the display effect of multiple code frames, improves the accuracy of the code frame marking positions, improves the interaction performance of the terminal, and improves the user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive exercise.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2A is a schematic view of an application scenario of a graphic code processing system according to an embodiment of the present application;
fig. 2B is a schematic diagram of a user interface provided in an embodiment of the present application;
fig. 2C is a schematic architecture diagram of a graphics code processing system according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a graphic code processing method according to an embodiment of the present disclosure;
fig. 4A is a schematic diagram of a displayable region according to an embodiment of the present application;
fig. 4B is a schematic diagram of a displayable region according to an embodiment of the disclosure;
FIG. 5A is a diagram illustrating a comparison between an original decoded image and a screen according to an embodiment of the present application;
FIG. 5B is a schematic diagram illustrating a comparison between a rotated decoded image and a screen according to an embodiment of the present application;
FIG. 5C is a diagram illustrating a comparison between a scaled decoded image and a screen according to an embodiment of the present application;
FIG. 6A is a diagram of a decoded image 1 according to an embodiment of the present disclosure;
FIG. 6B is a diagram illustrating a decoded picture 2 according to an embodiment of the present application;
fig. 6C is a schematic diagram of a code recognition interface according to an embodiment of the present disclosure;
fig. 6D is a schematic diagram of another code recognition interface provided in the embodiment of the present application;
fig. 7 is a schematic flowchart of a graphic code processing method according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a graphic code processing apparatus according to an embodiment of the present application.
Specific embodiments of the present application have been shown by way of example in the drawings and will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application.
The term "and/or" is used herein to describe an association relationship of associated objects, and specifically indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist simultaneously, and B exists alone.
To clearly describe the technical solutions of the embodiments of the present application, first, terms referred to in the present application are defined as follows:
two-dimensional code: the graphic code is a graphic code which is distributed on a plane (in two-dimensional direction) according to a certain rule by using a certain specific geometric figure, is black and white and is alternated and records data symbol information. A common two-dimensional Code is a QR Code (Quick Response Code, a matrix two-dimensional Code symbol), which is a popular encoding method for mobile devices in recent years.
Bar code: bar Code, a common graphic Code, is a graphic identifier that arranges a plurality of black bars and spaces with different widths according to a certain coding rule to express a group of information. A common bar code is a pattern of parallel lines of dark bars (bars for short) and white bars (spaces for short) of widely differing reflectivity.
Code frame: in image processing, the identification frame along the peripheral edge line of a graphic code, used to enclose the graphic code in the frame so as to locate the graphic code.
Anchor point: in web page authoring, an anchor is a hyperlink within the page that acts as a quick locator.
App: Application.
SDK: Software Development Kit.
CGFloat: a basic floating-point value type.
User interface: User Interface (UI), the medium for interaction and information exchange between a system and its users; it converts between the internal form of information and a form acceptable to humans.
As shown in fig. 1, the present embodiment provides an electronic device 1 including: at least one processor 11 and a memory 12, one processor being exemplified in fig. 1. The processor 11 and the memory 12 are connected by a bus 10. The memory 12 stores instructions executable by the processor 11, and the instructions are executed by the processor 11, so that the electronic device 1 may execute all or part of the processes of the methods in the embodiments described below, so as to implement user-participated selection of a target code in a multi-code image scene, improve the interaction performance of the terminal, and improve the user experience.
In one embodiment, the memory 12 may be separate or integrated with the processor 11.
In an embodiment, the electronic device 1 may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, or a large computing system composed of multiple computers.
The method and the device can be applied to any field needing graphic code identification.
Bar codes and two-dimensional codes are common graphic codes in life of people, and are widely applied to various scenes due to simple forms and capability of carrying rich data information. For the identification of bar codes or two-dimensional codes, image recognition technology is generally adopted to process, analyze and understand images of the bar codes or the two-dimensional codes so as to identify various information carried by the bar codes or the two-dimensional codes. For example, in an e-commerce scenario, a user may obtain information related to a commodity by scanning a two-dimensional code on a commodity tag. In the process, generally, the image containing the two-dimensional code on the tag is collected through a mobile phone of the user, and then the two-dimensional code in the image can be identified by adopting an image identification technology so as to obtain the commodity information carried in the two-dimensional code.
With the widespread introduction of graphic codes, code scanning identification functions are also loaded into various large user terminals of users in various forms. For example, the scanning function of App can acquire relevant information by recognizing the graphic code in the image. With the increasing demand of users for scanning codes, the scanning function of App becomes one of the core functions of the terminal, and it can help users to identify various graphic codes, such as two-dimensional codes, bar codes, etc., in complicated pictures.
With the development of electronic information technology, e-commerce activities are increasingly rich, graphic codes are increasingly common, and a picture often contains a plurality of graphic codes. In the existing code recognizing method, for a one-image-multiple-codes scene, the user is required to select a target graphic code, and then the content information of the target graphic code is output. However, in the existing method, the positioning of the graphic codes is often inaccurate, so that the user cannot determine which graphic code is selected, resulting in poor terminal interaction performance and affecting the user experience.
In order to solve the above problem, an embodiment of the present application provides an image recognition scheme, which, in a multi-code image scene, enables the mark information to be displayed at a size adapted to the displayable area, improves the display effect, improves the accuracy of the graphic code mark positions, improves the interaction performance of the terminal, and improves the user experience.
Fig. 2A is a schematic view of a scenario of a graphic code processing system according to an embodiment of the present disclosure. As shown in fig. 2A, the system includes: a server 210 and a terminal 220, wherein the server 210 may be a data platform of an e-commerce, such as an online shopping platform. In practical scenarios, there may be multiple servers 210 in an online shopping platform, and 1 server 210 is taken as an example in fig. 2A. The terminal 220 may be a computer, a mobile phone, a tablet, or other devices used when logging in to the online shopping platform, and there may be a plurality of terminals 220, which is illustrated by 2 terminals 220 in fig. 2A as an example.
Information may be transmitted between the terminal 220 and the server 210 via the internet so that the terminal 220 may access data on the server 210. The terminal 220 and/or the server 210 may be implemented by the electronic device 1.
As shown in fig. 2B, which is a schematic view of a user interface 221 of the terminal 220 according to the embodiment of the present application, a control 1 for triggering graphic code processing, for example, a control 1 corresponding to a "scan" function, may be configured in the user interface 221. When the user wants to recognize a graphic code in an image, the user may trigger the scan function to recognize a plurality of graphic codes in a specific image, such as a two-dimensional code and a barcode printed on a physical picture, where the two-dimensional code carries a store link and the barcode carries logistics information. When the user wants to view the content information of the two-dimensional code and/or the barcode in the picture, the user may touch the control 1 on the user interface 221, start the scanning function to scan the picture, and then perform the one-image-multiple-codes processing procedure of the method of the embodiment of the present application.
An algorithm engine for processing the graphic code may be deployed in the server 210, for example, a decoding algorithm for the graphic code may be deployed in the server, and an invocation interface is opened to the terminal in an SDK manner, so as to provide algorithm support for a graphic code processing operation triggered by a user.
As shown in fig. 2C, a schematic view of a scene architecture for processing a graphic code according to an embodiment of the present application mainly includes: user operation part, service implementation layer and scanning decoding plug-in layer. Taking the mobile phone camera to scan the commodity tag as an example, the user can start a scanning function through the mobile phone to scan the code: and scanning the graphic code on the commodity tag to obtain an image to be processed. The method comprises the steps that a to-be-processed image is transmitted to a scanning decoding plug-in layer by responding to a first operation instruction of a user, the decoding plug-in layer acquires a decoding image of a graphic code in the to-be-processed image in real time (namely, a new decoding link is entered), then the decoding image is analyzed, if the to-be-processed image in the decoding image comprises a plurality of graphic codes (namely, the result is a plurality of codes), firstly, mark information for displaying is generated according to the decoding image of the plurality of graphic codes, and then the decoding image and the mark information are transmitted to a service implementation layer together.
The service implementation layer mainly needs to mark the central point of the anchor code frame through an arrow in the current frame after recognizing the multi-code, and then further response is carried out after the operation of the user. Specifically, when entering a multi-code scene at a service implementation layer, a current frame cache map may be added on a scanned shot page, a multi-code scene page (i.e., a code recognition interface) is added to a shot page container, and then code frame center point mark information of a plurality of graphic codes is respectively displayed in the code recognition interface for a user to select. When a user selects one of the mark information, for example, clicking a code box on a multi-code scene page, the selected graphic code can enter a single-code identification link, where the single-code identification link refers to outputting content information carried by the selected graphic code, for example, jumping to a page to which the selected graphic code points. Therefore, in a multi-code image scene, the user participates in selecting the target code, the interaction performance of the terminal 220 is improved, and the user experience is improved.
If the user clicks to return in the multi-code scene page, the service implementation layer can quit the multi-code scene, specifically, the current frame cache map can be removed, the multi-code scene page can be removed, and the image collector can recover the scanning state.
On the other hand, if the decoding result of the scanning decoding plug-in layer to the image to be processed is a single code, the scanning decoding plug-in layer directly enters a single code identification link, namely, the decoded image and the marking information are transmitted to the service implementation layer together, and content information carried by a single graphic code is output, for example, the scanning decoding plug-in layer jumps to a page pointed by the single graphic code.
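For illustration only, the following Swift sketch shows how a service implementation layer of the kind described above might route a decoding result; all type and method names here are assumptions rather than the patent's actual code.

import UIKit

// Hypothetical model of one decoded graphic code (names are illustrative).
struct ScannedCode {
    let codeFrame: CGRect   // code frame in decoded-image coordinates
    let content: String     // content information carried by the graphic code
}

final class ScanServiceLayer {
    // Called by the scan decoding plug-in layer with the decoding result of the current frame.
    func handle(result: [ScannedCode], currentFrame: UIImage) {
        if result.count > 1 {
            // Multi-code scene: freeze the preview on a cached current frame and show a
            // multi-code page with one tappable mark per code frame.
            showMultiCodePage(snapshot: currentFrame, codes: result)
        } else if let single = result.first {
            // Single-code scene: go straight to the single-code identification flow.
            openSingleCode(single)
        }
    }

    func showMultiCodePage(snapshot: UIImage, codes: [ScannedCode]) { /* add cache map + code recognition interface */ }
    func openSingleCode(_ code: ScannedCode) { /* output the content carried by the code */ }
}

Tapping one of the marks would then call openSingleCode for the selected code, and a back action would remove the multi-code page and resume the camera, mirroring the flow described above.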
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The features of the embodiments and examples described below may be combined with each other without conflict between the embodiments. In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
Please refer to fig. 3, which is a graphic code processing method according to an embodiment of the present application, and the method may be executed by the electronic device 1 shown in fig. 1 and may be applied to the application scenarios of graphic code processing shown in fig. 2A to 2C, so as to achieve the purposes of improving the accuracy of the graphic code marking position, improving the multi-code display effect, and improving the interaction performance of the terminal 220 in the multi-code image scenario. The method comprises the following steps:
step 301: and responding to a first operation instruction of a user, and acquiring a decoding image of the graphic code in the image to be processed.
In this step, the first operation instruction is used to trigger the scan code recognition function, such as the scan function described above, the first operation instruction may be an instruction to trigger the control 1, and when the first operation instruction is captured, the terminal 220 starts the scan function. The first operation instruction may be a contact type gesture instruction, such as a touch instruction. The first operation instruction may also be a non-contact instruction, such as a voice instruction, an air gesture instruction, and the like. The image to be processed may be an image that the user wants to perform graphic code recognition, such as an image of a merchandise tag. The decoded image is decoding result information obtained by decoding a graphic code in the image to be processed, and the decoded image includes but is not limited to: the type of the graphic code in the image to be processed, the frame of the graphic code, the position of the frame, the content information carried by the graphic code and the like.
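As a minimal sketch (the structure and names below are assumptions, not the decoding SDK's actual interface), the decoding result for one image to be processed could be modeled as follows:

import CoreGraphics

// Hypothetical representation of a "decoded image" (decoding result information).
enum GraphicCodeType { case twoDimensionalCode, barcode }

struct GraphicCodeResult {
    let type: GraphicCodeType   // type of the graphic code
    let codeFrame: CGRect       // code frame position and size in image coordinates
    let content: String         // content information carried by the graphic code
}

struct DecodedImage {
    let imageSize: CGSize              // size of the image to be processed
    let codes: [GraphicCodeResult]     // one entry per graphic code that was decoded
    var isMultiCode: Bool { codes.count > 1 }
}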
In an actual scene, when a user wants to recognize a graphic code in one image, a scanning function can be triggered to recognize the graphic code in a specific image. For example, the user can trigger control 1 to start a scanning function of the shopping App to acquire the decoded image through the user interface 221 displayed on the screen of the mobile phone.
In an embodiment, step 301 may specifically include: acquiring the image to be processed in response to a first operation instruction of the user; and decoding the graphic code in the image to be processed to obtain a decoded image of the graphic code in the image to be processed.
In this embodiment, after the first operation instruction of the user triggers the control 1 and starts the scan function, the image to be processed is first acquired in response to the first operation instruction, and then the image to be processed is decoded to acquire a decoded image of the graphic code in the image to be processed. Here, the image to be processed may be decoded by the decoding SDK deployed in the server 210, the user may transmit the image to be processed to the server 210 through the terminal 220, call the multi-code interface, and the server 210 returns the decoded image to the terminal 220 after performing decoding processing according to the decoding SDK configured in advance. In this way, the decoding process can be handed over to the server 210, which simplifies the calculation amount of the terminal 220 and improves the resource utilization efficiency of the terminal 220.
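A hedged sketch of this terminal-to-server hand-off is shown below; the endpoint URL, request format and completion type are assumptions made purely for illustration.

import UIKit

// Send the image to be processed to the server-side decoding SDK and receive the decoded result.
func requestDecodedImage(for image: UIImage,
                         completion: @escaping (Result<Data, Error>) -> Void) {
    guard let body = image.jpegData(compressionQuality: 0.8),
          let url = URL(string: "https://example.com/scan/decode-multicode") else { return }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")
    request.httpBody = body
    URLSession.shared.dataTask(with: request) { data, _, error in
        if let error = error {
            completion(.failure(error))          // e.g. network disconnected
        } else {
            completion(.success(data ?? Data())) // serialized decoded image returned by the server
        }
    }.resume()
}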
In an embodiment, a corresponding decoding algorithm may also be preconfigured in the terminal 220, and when the image to be processed is obtained, the preconfigured decoding algorithm is directly used to perform decoding processing on the image to be processed, so as to obtain a decoded image of the graphic code in the image to be processed. Therefore, the decoding process is carried out locally, the condition of failure of the decoding process caused by disconnection of network connection can be avoided, and the decoding process is guaranteed to be completed smoothly.
In an embodiment, after obtaining the decoded image of the image to be processed, the decoded image may be filtered to filter out some decoding results of the graphic code that do not need to be identified. The type of the graphic code to be filtered can be configured in advance, for example, in an e-commerce shopping scene, a user generally does not care about the graphic code of the lottery advertisement, and the decoding result of the graphic code can be filtered from the decoded image, so that the quantity is simplified, and the calculation efficiency is improved.
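A short sketch of this filtering step might look as follows; which code types are ignored (here a hypothetical lottery-advertisement type) is a configuration assumption.

// Filter out decoding results for graphic-code types that do not need to be identified.
enum CodeType: Hashable { case twoDimensionalCode, barcode, lotteryAd }

let ignoredTypes: Set<CodeType> = [.lotteryAd]   // preconfigured types to filter out

func filterDecodedResults<T>(_ results: [(type: CodeType, payload: T)]) -> [(type: CodeType, payload: T)] {
    results.filter { !ignoredTypes.contains($0.type) }
}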
In an embodiment, the acquiring, in response to a first operation instruction of a user, an image to be processed may specifically include: in response to the first operation instruction, the image collector of the terminal 220 is started, and the collected current frame image is used as the image to be processed.
In this embodiment, the image to be processed may be acquired in real time by the image acquirer. The image collector may be a built-in camera of the terminal 220 or an external camera. When the user triggers the control 1, the scanning function is started, and at this time, the terminal 220 may give a prompt to inquire whether the user starts the camera, and when the user determines to start the camera, the current frame image acquired by the camera may be acquired in real time as the image to be processed. For example, a two-dimensional code and a bar code exist on one commodity tag at the same time, the two-dimensional code carries store links, the bar code carries logistics information, and when a user wants to check content information of the two-dimensional code and/or the bar code in the commodity tag, the user can trigger a scanning function on a mobile phone App to start so as to scan the commodity tag and obtain an image to be processed. The method is suitable for scene of scene image acquisition, and is convenient for users to flexibly select.
If the image to be processed is obtained through the image collector and the current frame image yields a corresponding decoded image, the code scan is considered a hit, the images of other frames do not need to be processed, and a prompt tone can be played to notify the user that the code scan succeeded.
In one embodiment, the image to be processed may also be an image selected from a database.
In this embodiment, the database may be a local database, such as a local album of a mobile phone, or a remote database, such as an album in the cloud server 210. In some scenarios, the image to be processed may be an electronic picture sent by a friend, for example, a picture shared by stores sent by the friend, where the picture includes a graphic code linked with the stores, and at this time, the picture may be cached in a local album, and when a user wants to identify the graphic code in the picture, a picture may be selected in the album as the image to be processed. According to the mode, a camera does not need to be started, the resources of the terminal 220 are saved, the operation is simple, and the flexible selection of a user is facilitated.
Step 302: when a plurality of graphic codes are included in the decoded image, size information of a displayable area in the user interface 221 is acquired.
In this step, the decoded image will include the decoding results of the graphic codes in the image to be processed, so the number of graphic codes in the image to be processed can be obtained from the decoded image. When the decoded image includes a plurality of graphic codes, in order to enable the user to participate in the selection process of the graphic codes, the plurality of graphic codes need to be displayed in the user interface 221, and a triggerable selection control is provided, so that the user can flexibly select the target code. To better present the plurality of graphic codes, the size information of the displayable area in the user interface 221 is first acquired.
The displayable region refers to a region of a multi-code presentation configured in the user interface 221, i.e., the displayable region is used to present the code recognition interface. Taking a mobile phone as an example, as shown in fig. 4A, assuming that the user interface 221 is displayed in a full screen, the displayable region may be configured as a partial region in the user interface 221, for example, a half region of the user interface 221 is used as the displayable region for displaying the code recognition interface. In this way, the remaining area in the user interface 221 may also display other content, such as navigation bar information, to improve the utilization efficiency of the screen.
A corresponding control may be configured in the user interface 221, and when the user slides the control, the size of the displayable region may be adjusted along with the sliding direction of the control, so as to improve the interaction performance of the terminal 220.
As shown in fig. 4B, the displayable region may also be the entire region of the user interface 221, so that the user can flexibly select the displayable region based on actual requirements, thereby improving the interactive performance of the terminal 220.
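A simple sketch of obtaining the displayable-area size is given below; whether half of the user interface or the whole interface is used, and which half, are configuration assumptions.

import UIKit

// Compute the displayable area used to present the code recognition interface.
func displayableArea(of userInterfaceBounds: CGRect, fullScreen: Bool) -> CGRect {
    if fullScreen {
        return userInterfaceBounds                            // Fig. 4B: the whole user interface
    }
    // Fig. 4A: use half of the user interface, leaving the rest for e.g. a navigation bar.
    return CGRect(x: userInterfaceBounds.minX,
                  y: userInterfaceBounds.midY,
                  width: userInterfaceBounds.width,
                  height: userInterfaceBounds.height / 2)
}

For example, displayableArea(of: UIScreen.main.bounds, fullScreen: false) would return the lower half of a full-screen user interface.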
Step 303: and generating a plurality of graphic code mark information according to the decoded image and the size information.
In this step, the marking information is used for prompting the position of the graphic code of the user on one hand, and is used for providing a triggerable selection control on the other hand, so that the user can enter a selection instruction. In order to make the position marking of the plurality of graphic codes accurate when displaying, the size information of the decoded image and the displayable area needs to be considered comprehensively, namely the marking information of the plurality of graphic codes is generated according to the decoded image and the size information. Therefore, the marking information can adapt to the size of the displayable area during display, the display effect is improved, and the user experience is improved.
In one embodiment, the decoded image may include: a code frame of each of the plurality of graphic codes. Generating the mark information of the plurality of graphic codes according to the decoded image and the size information includes: converting the decoded image according to the size information to obtain a processed decoded image, wherein the code frame of each graphic code in the processed decoded image is adapted to the displayable area; and generating the mark information of each graphic code according to the processed decoded image.
In this embodiment, the original data of the decoded image returned by the decoding SDK in the actual scene often does not match the size or direction of the displayable area, and if the original data is directly displayed, the position of the frame and the corresponding graphic code may not be in the same position, or even the display of the frame is incomplete, which may cause trouble to the user, and the user cannot determine which frame is clicked to select the target graphic code. Therefore, the decoded image can be converted based on the size information of the displayable area, so that the frame of each graphic code after conversion is adapted to the displayable area, and the corresponding graphic code can be accurately represented based on the generated marking information. The mapping of the original data of the decoded image returned by the decoding SDK to the displayable region is achieved.
The adaptive display here means that each frame of the graphic code can be completely displayed in the displayable area, and the position of each graphic code matches with the position of the corresponding frame, that is, the graphic code falls within the range of the corresponding frame. Otherwise, it indicates incompatibility. The position of the code frame in the displayable area can be completely and accurately represented by the decoded image after conversion processing, so that the mark information of each graphic code is generated based on the decoded image after conversion processing, and the mark information of one graphic code can be associated with the position of the code frame of the graphic code, so that the position of each graphic code can be more accurately identified, and the accuracy of the mark information is improved.
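The adaptation check described above can be sketched as follows (an assumption-level helper, not the patent's code): a code frame counts as adapted only if it lies entirely within the displayable area, and conversion is needed as soon as any frame is not adapted.

import CoreGraphics

// Returns true when at least one code frame is not adapted to the displayable area,
// i.e. when the decoded image still needs conversion processing.
func needsConversion(codeFrames: [CGRect], displayableArea: CGRect) -> Bool {
    codeFrames.contains { !displayableArea.contains($0) }
}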
In one embodiment, the conversion process may include: one or more of a rotation process, a scaling process, and a coordinate system transformation process.
In this embodiment, in order to implement multi-code visual display and accurately display the mark information on each graphic code, a corresponding conversion processing manner may be selected for a specific situation of the decoded image.
In an embodiment, the converting the decoded image according to the size information to obtain a processed decoded image specifically includes: and rotating the decoded image by a preset angle according to the size information, wherein each frame in the rotated decoded image is adapted to the display direction of the displayable area.
In this embodiment, the preset angle may be set based on actual conditions, for example, based on a deviation angle between the decoded image and the displayable region in the displaying direction. For the condition that the display direction of the decoded image has deviation from the display direction of the displayable area, the original decoded image returned by the decoding SDK can be rotated by a preset angle based on the size information of the displayable area, so that the rotated frame and the display direction of the displayable area are displayed in a suitable manner, and the unfriendly conditions such as reverse image and the like are avoided.
In an embodiment, the converting the decoded image according to the size information to obtain a processed decoded image may further include: and determining the frame scaling according to the size information and the decoded image. And carrying out scaling processing on the decoded image according to the frame scaling, wherein each frame in the scaled decoded image is in the range of the displayable area.
In this embodiment, the decoded image may not match the size of the displayable region, for example, the decoded image may be much larger than the displayable region, and in order to make the frame in the decoded image displayed reasonably, the frame scaling may be determined based on the size information and the decoded image, for example, the frame scaling may be determined based on the ratio between the width of the decoded image and the width of the displayable region, and/or the frame scaling may be determined based on the ratio between the height of the decoded image and the height of the displayable region, and then the decoded image is scaled according to the frame scaling, so that the scaled frame may be displayed completely in the displayable region, thereby avoiding the incomplete display.
In an embodiment, the converting the decoded image according to the size information to obtain a processed decoded image specifically includes: and performing coordinate transformation processing on the code frame of each graphic code according to the size information and the decoded image, wherein the code frame of each graphic code in the decoded image after the coordinate transformation processing and the displayable area are in the same coordinate system.
In this embodiment, the frame coordinates in the decoded image may be based on the coordinate system of the image to be processed, or may be converted into the coordinate system by the decoding SDK, and therefore may be inconsistent with the coordinate system of the displayable region, and if the frame coordinates are directly displayed, the frame coordinates may be misaligned.
Taking the two-dimensional code as an example, assuming that the user interface 221 is displayed in a full screen mode and a mobile phone of a user is used in a vertical screen mode, the displayable area is the whole area of the user interface 221, that is, the displayable area is the whole mobile phone screen, the original decoded image returned by the decoding SDK is an image to be processed based on the horizontal screen, and the preset angle is 90 degrees at this time. As shown in fig. 5A, which is a schematic diagram illustrating a comparison between an original decoded image and a screen in this embodiment, the original decoded image is in a horizontal screen direction, the screen is in a vertical screen direction, and the size of the original decoded image is larger than that of a displayable region, and the original decoded image includes two-dimensional code frames, for example, the following method may be adopted to perform conversion processing on the original decoded image returned by the decoding SDK:
The first step: rotate the original decoded image clockwise from 0° to 90°, that is, each code frame rect in the original decoded image needs to be rotated 90° clockwise. Fig. 5B is a schematic comparison between the rotated decoded image of Fig. 5A and the screen. The rotation can be implemented with a CGRect, and the pseudo code is as follows:

frame rotation rect = CGRect(
    x: original image width − (original frame.y + original frame.height),
    y: original frame.x,
    width: original frame.height,
    height: original frame.width);

where x is the distance from the left edge, y is the distance from the top edge, width is the width of the frame itself, and height is the height of the frame itself.
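As an illustration, this rotation step can be written as the following minimal Swift sketch; the function name rotateFrameClockwise and its parameters are assumptions for this example and are not part of the decoding SDK:

import CoreGraphics

// Rotate a code frame 90° clockwise, following the pseudo code above.
// `originalImageWidth` is the width of the original decoded image.
func rotateFrameClockwise(_ frame: CGRect, originalImageWidth: CGFloat) -> CGRect {
    return CGRect(
        x: originalImageWidth - (frame.origin.y + frame.size.height),
        y: frame.origin.x,
        width: frame.size.height,
        height: frame.size.width
    )
}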
The second step: scale the rotated decoded image to the same size as the phone screen. Whether the image is a "long screen" image can be judged with a BOOL, and the pseudo code is as follows:

BOOL is long screen = (picture width of decoded image / picture height of decoded image > screen width / screen height) ? YES : NO;

zoom scale = is long screen ? (screen height / picture height of decoded image) : (screen width / picture width of decoded image);

frame scaling rect = CGRect(
    x: frame rotation rect.x × scale,
    y: frame rotation rect.y × scale,
    width: frame rotation rect.width × scale,
    height: frame rotation rect.height × scale);

Fig. 5C is a schematic comparison between the scaled decoded image of Fig. 5B and the screen.
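The scaling step can be sketched in Swift as follows; the names scaleFrame, imageSize and screenSize are assumptions used only for illustration:

import CoreGraphics

// Decide whether the decoded image is a "long screen" image, compute the aspect-fill
// scale accordingly, and scale the rotated code frame by that factor.
func scaleFrame(_ rotatedFrame: CGRect, imageSize: CGSize, screenSize: CGSize)
    -> (frame: CGRect, scale: CGFloat, isLongScreen: Bool) {
    let isLongScreen = imageSize.width / imageSize.height > screenSize.width / screenSize.height
    let scale = isLongScreen ? screenSize.height / imageSize.height
                             : screenSize.width / imageSize.width
    let scaled = CGRect(x: rotatedFrame.origin.x * scale,
                        y: rotatedFrame.origin.y * scale,
                        width: rotatedFrame.size.width * scale,
                        height: rotatedFrame.size.height * scale)
    return (scaled, scale, isLongScreen)
}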
The third step: frame coordinate transformation, converting coordinate points of the original decoded image into screen coordinate points. The pseudo code may be as follows:

frame coordinate transformation rect = null;
if (is long screen) {
    CGFloat xOffset = (picture width of decoded image × scale − screen width) / 2.0;
    frame coordinate transformation rect = CGRectMake(
        x: frame scaling rect.x − xOffset,
        y: frame scaling rect.y,
        width: frame scaling rect.width,
        height: frame scaling rect.height);
} else {
    CGFloat yOffset = (picture height of decoded image × scale − screen height) / 2.0;
    frame coordinate transformation rect = CGRectMake(
        x: frame scaling rect.x,
        y: frame scaling rect.y − yOffset,
        width: frame scaling rect.width,
        height: frame scaling rect.height);
}
return frame coordinate transformation rect;

After the frame coordinate transformation, the code frames can be displayed as close to the middle of the screen as possible without overflowing it, so that the user can comfortably tap a code frame.
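A minimal Swift sketch of this coordinate transformation is given below; it assumes the scale and the long-screen flag computed in the second step are passed in, and all names are illustrative:

import CoreGraphics

// Map a scaled code frame into screen coordinates. The scaled image is centered on the
// screen, so the overflow on one axis is halved and subtracted from the frame origin.
func transformToScreenCoordinates(_ scaledFrame: CGRect,
                                  imageSize: CGSize,
                                  screenSize: CGSize,
                                  scale: CGFloat,
                                  isLongScreen: Bool) -> CGRect {
    if isLongScreen {
        // Scaled width exceeds the screen width: shift left by half the horizontal overflow.
        let xOffset = (imageSize.width * scale - screenSize.width) / 2.0
        return scaledFrame.offsetBy(dx: -xOffset, dy: 0)
    } else {
        // Scaled height exceeds the screen height: shift up by half the vertical overflow.
        let yOffset = (imageSize.height * scale - screenSize.height) / 2.0
        return scaledFrame.offsetBy(dx: 0, dy: -yOffset)
    }
}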
In one embodiment, in some scenarios, the image to be processed may include a plurality of graphic codes of different types, such as a two-dimensional code and a barcode. For an image containing a barcode, the frame data of the barcode returned by the decoding SDK is sometimes erroneous: for example, the code frame rect has no height data (i.e., the height returns 0), or the y coordinate of the code frame rect is unreasonable.

For a height of 0, the height of the original frame rect may be preset to a value, for example 50. The clockwise 90° rotation, scaling and coordinate transformation described above for the two-dimensional code are then attempted. Verification of the barcode frame may lead to the following conclusions:
a. The frame rect does not need to be rotated by 90°.
b. The frame rect needs to be scaled.
c. The x coordinate of the frame rect does not need to be transformed.
With the above conversion processing, the width of the frame and its x coordinate therefore meet expectations.
For the case where the y coordinate of the frame rect does not meet expectations, the cause may be that the image to be processed is scaled to some extent by the decoding SDK, so that there is an error in the mapping between the scaled decoded image and the screen resolution. The scaling of the image to be processed may be determined in the following manner:

Assume the image to be processed is obtained by scanning a code, and the decoded image 1 returned after processing by the decoding SDK is as shown in Fig. 6A, while the same image to be processed without scaling yields the decoded image 2 shown in Fig. 6B. Comparing Fig. 6A and Fig. 6B shows that, for the same image to be processed, the positions of the barcode frame in the two decoded images differ, mainly in the distance from the top of the image; that is, the y coordinate of the frame rect does not meet expectations. To solve this, let the frame y value of the barcode in Fig. 6A be y1 and that in Fig. 6B be y2; y2 − y1 may then be taken as the scaling used when the decoding SDK parses the image to be processed. Assuming this scaling is 60, the frame of the barcode is processed with the following conversion:
CGFloat scale1 = screen width / picture width of decoded image;
CGFloat scale2 = screen height / 60.0;   // 60 is the scaling of the decoding SDK in this example
barcode scaled rect = CGRectMake(
    x: original frame.x × scale1,
    y: original frame.y × scale2,
    width: original frame.width × scale1,
    height: 50);
When the processed barcode frame is displayed on the screen, the barcode frame and the corresponding barcode in the image to be processed are located at the same position, that is, the barcode falls within the range of its frame, so that accurate display is achieved and the barcode is easy for the user to check and select. Accurate anchoring can therefore be achieved whether the image to be processed contains a two-dimensional code or a barcode.
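The barcode correction above can be sketched in Swift as follows; the preset height of 50 and the divisor 60 come from the example values in this embodiment, and all names are assumptions:

import CoreGraphics

// Map a barcode frame returned by the decoding SDK onto the screen, replacing the missing
// height with a preset value and using the measured SDK scaling for the y coordinate.
func correctBarcodeFrame(_ original: CGRect,
                         imageWidth: CGFloat,
                         screenSize: CGSize,
                         sdkScale: CGFloat = 60.0,
                         presetHeight: CGFloat = 50.0) -> CGRect {
    let scale1 = screenSize.width / imageWidth      // horizontal mapping, image → screen
    let scale2 = screenSize.height / sdkScale       // vertical mapping using the SDK scaling
    return CGRect(x: original.origin.x * scale1,
                  y: original.origin.y * scale2,
                  width: original.size.width * scale1,
                  height: presetHeight)
}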
In an embodiment, before step 303, the method may further include: judging whether all the code frames in the decoded image fit the displayable area, and converting the decoded image according to the size information to obtain the processed decoded image when, among the code frames of the plurality of graphic codes, there is a code frame that does not fit the displayable area.
In this embodiment, it may first be determined whether the original decoded image returned by the decoding SDK fits the size of the displayable area; if not, the decoded image is converted to obtain accurate mark information, so that each code frame can be displayed completely and accurately in the displayable area. By screening the original decoded images in this way, conversion is performed only on decoded images that do not fit, avoiding unnecessary computation and improving computational efficiency.
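A minimal sketch of this screening step, under the assumption that the code frames and the displayable area are already expressed as CGRect values in the same coordinate system:

import CoreGraphics

// Conversion is only needed when at least one code frame does not fit the displayable area.
func allFramesFit(_ frames: [CGRect], displayableArea: CGRect) -> Bool {
    return frames.allSatisfy { displayableArea.contains($0) }
}

For example, the conversion of step 303 would only be invoked when allFramesFit returns false, so decoded images that already fit are passed through unchanged.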
Step 304: generating a code recognition interface according to the image to be processed, the decoded image and the mark information.

In this step, the code recognition interface is used for displaying the plurality of graphic codes in the image to be processed, together with the code frame and the mark information of each graphic code. The decoded image contains the code frame of each graphic code; when the code recognition interface is generated, each graphic code in the image to be processed is associated with its corresponding mark information and code frame, so that the code recognition interface can present the graphic codes to the user accurately and the interaction performance of the terminal 220 is improved.
Step 305: displaying the code recognition interface in the displayable area, wherein the plurality of graphic codes and the mark information corresponding to each graphic code are displayed in the code recognition interface.

In this step, the code recognition interface is used for displaying the plurality of graphic codes in the image to be processed, together with the code frame and the mark information of each graphic code. The display data of the code recognition interface can be transmitted to the display system, which displays the interface in the displayable area, so that through the phone screen the user can see each graphic code in the image to be processed, the code frame corresponding to each graphic code, and the mark information corresponding to each graphic code. The mark information prominently indicates the position of each graphic code, so the user can see it at a glance and check it conveniently.

In one embodiment, the mark information includes one or more of: content summary information of the graphic code, a type mark of the graphic code, and a prompt mark of the graphic code.
In this embodiment, the content summary information of a graphic code may be obtained from the decoded image and represents a summary of the content of the graphic code, reminding the user roughly what the content corresponding to the graphic code is and helping the user anticipate it. In a real scenario, suppose a product label contains two two-dimensional codes: two-dimensional code 1 is the shop link of the product, and two-dimensional code 2 is the logistics information of the product. After the user scans the codes, both two-dimensional codes are displayed, but the user does not know which one is the logistics information code. Based on the decoded image, the content summary information can then be displayed as a piece of mark information on or near the code frame of the corresponding two-dimensional code in the code recognition interface, as shown in Fig. 6C, assisting the user's selection and improving the friendliness of multi-code recognition.
Similarly, the decoded image may also contain the type of each graphic code, and the type marks of the graphic codes can be displayed together in the code recognition interface, giving the user enough prompt information, assisting the user's decision, and improving the interaction performance of the terminal 220.
The prompt mark of a graphic code can be an eye-catching symbol, such as an arrow icon, and is mainly used to guide where the user should tap, reducing the deviation of the user's touch position during selection and improving the accuracy of instruction entry.
In one embodiment, the mark information is dynamically displayed in the code recognition interface. To improve its saliency, the mark information may be animated: for example, the prompt arrow is blue and gradually grows (for example, it may be enlarged from 0.5 times to 1 times its size), while background content other than the graphic codes and the mark information is gradually dimmed (for example, with a black background whose transparency is set in the range of 0 to 0.5), increasing the distinction between the mark information and the background. The arrow can also be displayed with a breathing effect, further increasing the prominence of the mark information and making it easier for the user to check. The dynamic display can run for a preset period, such as 0.4 seconds, to avoid the excess energy consumption of continuous animation.

In one embodiment, in the code recognition interface, the mark information is displayed at the center point of the code frame corresponding to its graphic code. The mark information can be anchored at the center point of the corresponding graphic code, as shown in Fig. 6D. Taking the arrow prompt as an example, the center point of the arrow can coincide with the center point of the code frame, so that the arrow is displayed at the center of the code frame of the corresponding two-dimensional code and accurately guides the user to the position of the graphic code.
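As an illustration of the dynamic display and the center-point anchoring described above, a minimal UIKit sketch follows; the view names and the exact animation values are assumptions based on the example values in this embodiment:

import UIKit

// Place the arrow marker at the center of its code frame, then grow it from 0.5× to full
// size over about 0.4 s while a dark, semi-transparent background fades in.
func showMarker(arrowView: UIView, backgroundView: UIView, codeFrame: CGRect) {
    arrowView.center = CGPoint(x: codeFrame.midX, y: codeFrame.midY)
    arrowView.transform = CGAffineTransform(scaleX: 0.5, y: 0.5)
    backgroundView.backgroundColor = .black
    backgroundView.alpha = 0.0
    UIView.animate(withDuration: 0.4) {
        arrowView.transform = .identity   // grow back to full size
        backgroundView.alpha = 0.5        // dim everything except codes and markers
    }
}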
In other embodiments, the mark information may be placed at different positions of the code recognition interface; for example, a floating window may pop up beside or above the code frame to display the mark information. Such diversified display modes improve the interaction friendliness of the terminal 220 and the user experience.
Step 306: in response to a second operation instruction of the user on the mark information, outputting the content information carried by the selected target graphic code according to the decoded image.

In this step, the second operation instruction is used to select the code frame of the target graphic code. A selection control can be set for each graphic code in the code recognition interface; after the interface is displayed, the user can select the target graphic code to enter based on the prompt of the mark information and trigger the corresponding selection control to issue the second operation instruction. The terminal 220 responds to the second operation instruction by retrieving the content information carried by the target graphic code from the decoded image and outputting it to the user interface 221, completing the code recognition process.
In an embodiment, the content information carried by the target graphic code may be a web page link, a piece of text, an audio/video clip, and so on, and the way it is output may differ accordingly: for example, if the content information is a shop link, the terminal may jump directly to the page the link points to; if it is a piece of text, a floating window can pop up to display the text; if it is an audio/video clip, it can be played directly. Such diversified output modes adapt better to different scenarios.
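A minimal Swift sketch of this content dispatch is shown below; the CodeContent enum and the handler are assumptions used only to illustrate the different output modes:

import UIKit

enum CodeContent {
    case link(URL)      // e.g. a shop link
    case text(String)   // a piece of text
    case media(URL)     // an audio/video clip
}

func output(_ content: CodeContent, from viewController: UIViewController) {
    switch content {
    case .link(let url):
        UIApplication.shared.open(url)                       // jump to the linked page
    case .text(let text):
        let alert = UIAlertController(title: nil, message: text, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default))
        viewController.present(alert, animated: true)        // floating window showing the text
    case .media(let url):
        print("play audio/video at \(url)")                  // placeholder for a player
    }
}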
In an embodiment, after step 305, the method further includes: removing the code recognition interface in response to a third operation instruction of the user.

In this step, the third operation instruction is used to trigger a return, and a return control may be configured on the code recognition interface. In some cases, after seeing the code recognition interface, the user may not want to recognize any code further, and can issue the third operation instruction by touching the return control. When the terminal 220 captures the third operation instruction, it removes the code recognition interface so that the interface no longer occupies resources of the display system, improving resource utilization.
In an embodiment, after step 301, the method further includes: directly outputting the content information carried by a single graphic code when the decoded image includes only that single graphic code.

In this step, when the decoded image includes a single graphic code, single-code recognition can be entered directly: the content information carried by the single graphic code is output without displaying the code recognition interface, and the output mode may differ depending on the content information, as described in detail in the above embodiments and not repeated here.
According to the graphic code processing method, a decoded image of the graphic codes in the image to be processed is obtained in real time in response to the user's first operation instruction, and the decoded image is then analyzed. If the image to be processed includes a plurality of graphic codes, mark information for the graphic codes is generated based on the decoded image and the size information of the displayable area, and the marks of the plurality of graphic codes are displayed separately for the user to select from. The mark information thus adapts to the size of the displayable area, which improves the display of multiple code frames, the accuracy of the frame mark positions, the interaction performance of the terminal 220, and the user experience.
Please refer to Fig. 7, which shows a graphic code processing method according to an embodiment of the present application. The method may be executed by the electronic device 1 shown in Fig. 1 and applied to the graphic code processing scenarios shown in Figs. 2A to 2C, so as to improve the display of multiple code frames, the accuracy of the frame mark positions, and the interaction performance of the terminal 220 in multi-code image scenarios. The method includes the following steps:
Step 701: in response to a first operation instruction of a user, acquiring an image to be processed. For details, refer to the description of step 301 in the above embodiment.

Step 702: decoding the graphic codes in the image to be processed to obtain a decoded image of the graphic codes in the image to be processed. For details, refer to the description of step 301 in the above embodiment.

Step 703: when a plurality of graphic codes are included in the decoded image, acquiring size information of a displayable area in the user interface 221. For details, refer to the description of step 302 in the above embodiment.

Step 704: converting the decoded image according to the size information to obtain a processed decoded image, wherein the code frame of each graphic code in the processed decoded image is adapted to the displayable area. For details, refer to the description of step 303 in the above embodiment.

Step 705: generating the mark information of each graphic code according to the processed decoded image. For details, refer to the description of step 303 in the above embodiment.
Step 706: generating a mask layer image according to the decoded image and the mark information.

In this step, the mask layer image includes the code frame of each graphic code in the decoded image and the mark information of each code frame. To display the graphic codes, the code frames and the mark information at the same time in the code recognition interface, a mask layer image can be overlaid on the original image to be processed. The mask layer image can be generated in the coordinate system of the displayable area, and the parts of it other than the code frames and the mark information can be transparent so as not to cover the image to be processed.
Step 707: overlaying the mask layer image on the image to be processed to generate the code recognition interface.

In this step, the mask layer image is overlaid on the image to be processed, and the combined image serves as the display content of the code recognition interface, so that each code frame in the mask layer image is displayed over the graphic code at the corresponding position in the image to be processed, mapping the decoding result onto the image. The operation is simple.
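A minimal UIKit sketch of overlaying a transparent mask layer on the image to be processed is given below; all view and function names are assumptions:

import UIKit

// Build the display content of the code recognition interface: the image to be processed
// at the bottom, and a transparent mask view carrying one border view per code frame on top.
func makeCodeRecognitionView(baseImage: UIImage, codeFrames: [CGRect], screenBounds: CGRect) -> UIView {
    let container = UIImageView(frame: screenBounds)
    container.image = baseImage
    container.isUserInteractionEnabled = true

    let mask = UIView(frame: screenBounds)
    mask.backgroundColor = .clear                 // transparent outside frames and markers
    for frame in codeFrames {
        let border = UIView(frame: frame)
        border.backgroundColor = .clear
        border.layer.borderColor = UIColor.white.cgColor
        border.layer.borderWidth = 2
        mask.addSubview(border)
    }
    container.addSubview(mask)
    return container
}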
In an embodiment, the image to be processed is the current frame image collected by an image collector of the terminal 220, and step 707 may specifically include: generating a cache map of the current frame image when the decoded image includes a plurality of graphic codes, and overlaying the mask layer image on the cache map to generate the code recognition interface.

In this embodiment, if the user starts the scanning function and the phone's camera is turned on to acquire the image to be processed, the currently scanned frame is the image to be processed. When generating the code recognition interface, the current frame image may be cached to produce a cache map, for example by taking a screenshot of the current frame, and the mask layer image is then overlaid on the cache map to display the mark information and generate the code recognition interface.
Specifically, the code recognition interface may be implemented with a modified container. To enumerate the circulation of the different states (normal, single code, and multi-code), a state machine mode may be used; a minimal sketch of such a state machine follows the container hierarchy below. Taking the scan function as an example, when the code recognition interface shown in Fig. 6C is generated, the hierarchy of the container may be as follows:
Lens page container
    Lens scanning page
    Current frame cache map
    Mask layer image
    Navigation bar
    Code frame
    Prompt text box
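The state circulation mentioned above can be sketched as a small Swift state machine; the enum, its cases and the container class are assumptions for illustration:

enum ScanState {
    case normal      // live scanning, no code decoded yet
    case singleCode  // one graphic code found: output its content directly
    case multiCode   // several graphic codes found: show the code recognition interface
}

final class LensPageContainer {
    private(set) var state: ScanState = .normal

    func transition(to newState: ScanState) {
        switch (state, newState) {
        case (.normal, .singleCode), (.normal, .multiCode), (_, .normal):
            state = newState                       // transitions used by the scan flow
        default:
            break                                  // ignore transitions the flow does not use
        }
    }
}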
In an embodiment, the image to be processed is the current frame image collected by an image collector of the terminal 220, and step 707 may specifically include: controlling the image collector to stay on the current frame image when the decoded image includes a plurality of graphic codes, and overlaying the mask layer image on the current frame image to generate the code recognition interface.

In this embodiment, when the image to be processed is captured by a camera, the camera can also be stopped so that it no longer captures images and the lens stays on the current frame; specifically, the camera session can be stopped to stay on the current frame, and the mask layer image is overlaid on the current frame image to display the mark information and generate the code recognition interface. This approach requires no extra cache map, so it consumes no additional memory and saves energy.
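A minimal sketch of staying on the current frame by stopping the capture session; the class and property names are assumptions:

import AVFoundation

final class ScanCameraController {
    let session = AVCaptureSession()

    // Stop capturing so the preview stays on the last delivered frame; the mask layer image
    // is then overlaid on that frame to form the code recognition interface.
    func stayOnCurrentFrame() {
        if session.isRunning {
            session.stopRunning()
        }
    }
}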
Step 708: displaying the code recognition interface in the displayable area, wherein the plurality of graphic codes and the mark information corresponding to each graphic code are displayed in the code recognition interface. For details, refer to the description of step 305 in the above embodiment.

Step 709: in response to a second operation instruction of the user on the mark information, outputting the content information carried by the selected target graphic code according to the decoded image. For details, refer to the description of step 306 in the above embodiment.

For other details of this graphic code processing method, refer to the description of the corresponding method embodiments above. The implementation principles and technical effects are similar and are not repeated here.
Please refer to Fig. 8, which shows a graphic code processing apparatus 800 according to an embodiment of the present application. The apparatus can be applied to the electronic device 1 shown in Fig. 1 and to the graphic code processing scenarios shown in Figs. 2A to 2C, so as to improve the accuracy of the graphic code mark positions, the multi-code display effect, and the interaction performance of the terminal 220 in multi-code image scenarios. The apparatus includes: a first obtaining module 801, a second obtaining module 802, a first generating module 803, a second generating module 804 and a display module 805, whose relationships are as follows:
a first obtaining module 801, configured to obtain a decoded image of a graphic code in an image to be processed in response to a first operation instruction of a user.
A second obtaining module 802, configured to obtain size information of a displayable area in the user interface 221 when the decoded image includes a plurality of graphic codes.
A first generating module 803, configured to generate mark information of the plurality of graphic codes according to the decoded image and the size information.

A second generating module 804, configured to generate a code recognition interface according to the image to be processed, the decoded image and the mark information.
The display module 805 is configured to display a code recognition interface in a displayable area, where the code recognition interface displays a plurality of graphic codes and mark information corresponding to each graphic code.
In an embodiment, the first obtaining module 801 is configured to acquire the image to be processed in response to a first operation instruction of a user, and to decode the graphic codes in the image to be processed to obtain the decoded image of the graphic codes in the image to be processed.

In one embodiment, the decoded image includes a code frame of each of the plurality of graphic codes, and the first generating module 803 is configured to convert the decoded image according to the size information to obtain a processed decoded image, in which the code frame of each graphic code is adapted to the displayable area, and to generate the mark information of each graphic code according to the processed decoded image.
In an embodiment, the first generating module 803 is configured to rotate the decoded image by a preset angle according to the size information, so that each code frame in the rotated decoded image is adapted to the display direction of the displayable area.

In an embodiment, the first generating module 803 is configured to determine a frame scaling ratio according to the size information and the decoded image, and to scale the decoded image according to the frame scaling ratio, so that each code frame in the scaled decoded image falls within the range of the displayable area.
In an embodiment, the first generating module 803 is configured to perform coordinate transformation processing on a frame of each graphic code according to the size information and the decoded image, where the frame of each graphic code in the decoded image after the coordinate transformation processing is in the same coordinate system as the displayable area.
In one embodiment, the apparatus further includes a judging module, configured to judge whether all the code frames in the decoded image fit the displayable area before the decoded image is converted according to the size information to obtain the processed decoded image. The first generating module 803 is further configured to convert the decoded image according to the size information to obtain the processed decoded image when, among the code frames of the plurality of graphic codes, there is a code frame that does not fit the displayable area.
In one embodiment, the mark information includes one or more of: content summary information of the graphic code, a type mark of the graphic code, and a prompt mark of the graphic code.
In one embodiment, the decoded image includes a code frame of each of the plurality of graphic codes. The second generating module 804 is configured to generate a mask layer image according to the decoded image and the mark information, where the mask layer image includes the code frame of each of the plurality of graphic codes and the mark information of each code frame, and to overlay the mask layer image on the image to be processed to generate the code recognition interface.
In an embodiment, the image to be processed is the current frame image collected by an image collector of the terminal 220, and the second generating module 804 is configured to generate a cache map of the current frame image when the decoded image includes a plurality of graphic codes, and to overlay the mask layer image on the cache map to generate the code recognition interface.

In an embodiment, the image to be processed is the current frame image collected by an image collector of the terminal 220, and the second generating module 804 is configured to control the image collector to stay on the current frame image when the decoded image includes a plurality of graphic codes, and to overlay the mask layer image on the current frame image to generate the code recognition interface.
In one embodiment, in the code recognition interface, the marking information is displayed at the position of the center point of the code frame corresponding to the graphic code.
In one embodiment, the marking information is dynamically displayed in the code recognition interface.
In one embodiment, the apparatus further includes an output module 806, configured to, after the code recognition interface is displayed in the displayable area, respond to a second operation instruction of the user on the mark information and output the content information carried by the selected target graphic code according to the decoded image.

In one embodiment, the apparatus further includes a removing module 807, configured to remove the code recognition interface in response to a third operation instruction of the user after the code recognition interface is displayed in the displayable area.
For detailed description of the graphic code processing apparatus 800, please refer to the description of the related method steps in the above embodiments, which have similar implementation principles and technical effects, and this embodiment is not described herein again.
The embodiment of the present application further provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the method of any one of the foregoing embodiments is implemented.
The embodiments of the present application also provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the method of any one of the foregoing embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some steps of the methods according to the embodiments of the present application.
It should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in connection with the present application may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor. The memory may include a high-speed RAM memory and may further include a non-volatile memory (NVM), such as at least one disk memory, and may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk or an optical disk, etc.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (ASIC). Of course, the processor and the storage medium may also reside as discrete components in an electronic device or host device.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another like element in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, and an optical disk), and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method of the embodiments of the present application.
In the technical solutions of the present application, the collection, storage, use, processing, transmission, provision, disclosure and other handling of the relevant user data and other information all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (18)

1. A graphic code processing method is characterized by comprising the following steps:
responding to a first operation instruction of a user, and acquiring a decoded image of a graphic code in an image to be processed;
when the decoded image comprises a plurality of graphic codes, acquiring size information of a displayable area in a user interface;
generating mark information of the plurality of graphic codes according to the decoded image and the size information;
generating a code identifying interface according to the image to be processed, the decoded image and the mark information;
and displaying the code identifying interface in the displayable area, wherein the plurality of graphic codes and the mark information corresponding to each graphic code are displayed in the code identifying interface.
2. The method of claim 1, wherein the obtaining a decoded image of a graphic code in the image to be processed in response to the first operation instruction of the user comprises:
responding to the first operation instruction of a user, and acquiring the image to be processed;
and decoding the graphic code in the image to be processed to obtain the decoded image of the graphic code in the image to be processed.
3. The method of claim 1, wherein the decoded image further comprises: a code frame of each graphic code in the plurality of graphic codes; and the generating the mark information of the plurality of graphic codes according to the decoded image and the size information comprises:
converting the decoded image according to the size information to obtain a processed decoded image, wherein the code frame of each graphic code in the processed decoded image is adapted to the displayable area;
and generating the mark information of each graphic code according to the processed decoded image.
4. The method according to claim 3, wherein said converting the decoded image according to the size information to obtain a processed decoded image comprises:
and rotating the decoded image by a preset angle according to the size information, wherein each frame in the rotated decoded image is adapted to the display direction of the displayable area.
5. The method according to claim 3, wherein said converting the decoded image according to the size information to obtain a processed decoded image comprises:
determining the frame scaling according to the size information and the decoded image;
and carrying out scaling processing on the decoded image according to the frame scaling ratio, wherein each frame in the scaled decoded image is in the range of the displayable area.
6. The method according to claim 3, wherein said converting the decoded image according to the size information to obtain a processed decoded image comprises:
and performing coordinate transformation processing on the code frame of each graphic code according to the size information and the decoded image, wherein the code frame of each graphic code in the decoded image after the coordinate transformation processing and the displayable area are in the same coordinate system.
7. The method according to claim 3, wherein before performing the conversion process on the decoded image according to the size information to obtain a processed decoded image, the method further comprises:
judging whether all the code frames in the decoded image are adapted to the displayable area or not;
and when a frame which is not suitable for the displayable area exists in the frames of the plurality of graphic codes, converting the decoded image according to the size information to obtain the processed decoded image.
8. The method of claim 1, wherein the marking information comprises: one or more of content summary information of the graphic code, a type tag of the graphic code, and a prompt tag of the graphic code.
9. The method of claim 1, wherein the decoded image further comprises: a code frame of each graphic code in the plurality of graphic codes; and the generating a code recognition interface according to the image to be processed, the decoded image and the mark information, wherein the mark information of the plurality of graphic codes is displayed in the code recognition interface, comprises:
generating a mask layer image according to the decoded image and the mark information, wherein the mask layer image comprises: a code frame of each graphic code in the plurality of graphic codes and mark information of each code frame;
and overlapping the masking layer image on the image to be processed to generate the code identifying interface.
10. The method according to claim 9, wherein the image to be processed is a current frame image acquired by an image acquirer of a terminal; the step of overlapping the masking layer image on the image to be processed to generate the code recognition interface comprises the following steps:
when the decoded image comprises a plurality of graphic codes, generating a cache map of the current frame image;
and overlapping the mask image on the cache map to generate the code identifying interface.
11. The method according to claim 9, wherein the image to be processed is a current frame image acquired by an image acquirer of a terminal; the step of overlapping the masking layer image on the image to be processed to generate the code recognition interface comprises the following steps:
when the decoded image comprises a plurality of graphic codes, controlling the image collector to stay at the current frame image;
and overlapping the masking layer image on the current frame image to generate the code identifying interface.
12. The method according to claim 1, wherein the mark information is displayed at a position corresponding to a center point of a frame of the graphic code in the code recognition interface.
13. The method of claim 1, wherein the tagging information is dynamically displayed in the code recognition interface.
14. The method of claim 1, further comprising, after presenting the code recognition interface within the displayable region:
and responding to a second operation instruction of the user on the mark information, and outputting content information carried by the selected target graphic code according to the decoded image.
15. The method of claim 1, further comprising, after presenting the code recognition interface within the displayable region:
and responding to a third operation instruction of the user, and removing the code recognition interface.
16. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to cause the electronic device to perform the method of any of claims 1-15.
17. A computer-readable storage medium having computer-executable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-15.
18. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, carries out the method according to any one of claims 1-15.