CN109934041B - Information processing method, information processing system, medium, and computing device - Google Patents
- Publication number
- CN109934041B (application CN201910235576.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- data information
- target sub-area
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the invention provides an information processing method. The information processing method includes: acquiring multiple frames of images, wherein at least one frame of image in the multiple frames of images comprises an identifier to be identified; dividing the multiple frames of images into at least a first image group and a second image group; and executing the following processing in parallel for the current image in each of the first image group and the second image group: identifying image content in a preset area in the current image to obtain data information of the identifier to be identified; in the case that obtaining the data information via the image content in the preset area fails, identifying the image content of a target sub-area in the preset area to obtain the data information of the identifier to be identified; and in the case that obtaining the data information via the image content of the target sub-area fails, acquiring another frame image in the image group where the current image is located, so as to obtain the data information of the identifier to be identified through the other frame image. Furthermore, embodiments of the present invention provide an information processing system, a medium, and a computing device.
Description
Technical Field
Embodiments of the present invention relate to the field of computers, and more particularly, embodiments of the present invention relate to an information processing method, an information processing system, a computer-readable medium, and a computing device.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
At present, the development of electronic technology and computer technology has brought great convenience to people's lives. For example, the wide application of identification technologies such as two-dimensional codes and bar codes makes operations such as transferring money, adding contacts, and searching more convenient and faster.
Disclosure of Invention
However, identification techniques for identifiers such as two-dimensional codes and bar codes are not yet mature. In the prior art, the operation of recognizing an identifier to be recognized, such as a two-dimensional code or a bar code, is complex: for example, the user needs to place the image of the identifier to be recognized inside a scanning frame, which results in an overly long recognition time and low recognition accuracy.
Therefore, in the prior art, recognizing an identifier to be recognized, such as a two-dimensional code or a bar code, is a cumbersome process.
For this reason, an improved information processing method is highly desirable, so as to reduce the complexity of the operation of recognizing the identifier to be recognized and to improve the speed and accuracy of recognition.
In this context, embodiments of the present invention are intended to provide an information processing method and system.
In a first aspect of embodiments of the present invention, there is provided a method comprising: acquiring multiple frames of images, wherein at least one frame image in the multiple frames of images comprises an identifier to be identified; dividing the multiple frames of images into at least a first image group and a second image group; and executing the following processing in parallel for the current image in each of the first image group and the second image group: identifying image content in a preset area in the current image to obtain data information of the identifier to be identified; in the case that obtaining the data information via the image content in the preset area fails, identifying the image content in a target sub-area in the preset area to obtain the data information of the identifier to be identified; and in the case that obtaining the data information via the image content in the target sub-area fails, obtaining another frame image in the image group where the current image is located, so as to obtain the data information of the identifier to be identified through the other frame image.
In one embodiment of the invention, the method further comprises: acquiring a posture change parameter of the electronic equipment in the process of acquiring the multi-frame image, and adjusting the range of the target sub-region based on the posture change parameter.
In another embodiment of the present invention, the method further comprises: generating a recognition result in the case that obtaining the data information via the image content in the target sub-region fails, wherein the recognition result comprises the positional relationship between the identifier to be recognized and the target sub-region, and adjusting the range of the target sub-region based on the recognition result.
In yet another embodiment of the present invention, the adjusting of the range of the target sub-region comprises: increasing the area of the target sub-region, or keeping the area of the target sub-region unchanged and adjusting the position of the target sub-region within the preset region.
In a further embodiment of the present invention, in the case that obtaining the data information via the image content in the preset region fails, identifying the image content in the target sub-region in the preset region to obtain the data information of the identifier to be identified comprises: determining the range of the target sub-region, and cropping the preset region based on the range of the target sub-region to obtain the image content in the target sub-region.
In a further embodiment of the present invention, the following processing is also performed in parallel for the current image in the first image group and the second image group: generating response content based on the data information in a case where the data information is successfully obtained via the image content in the preset region, or generating response content based on the data information in a case where the data information is successfully obtained via the image content in the target sub-region.
In yet another embodiment of the present invention, the generating of the response content based on the data information includes: and jumping to a webpage corresponding to the identification to be identified based on the data information.
In a second aspect of embodiments of the present invention, there is provided an information processing system including an acquisition module and a processing module. The acquisition module is used for acquiring multiple frames of images, wherein at least one frame of image in the multiple frames of images comprises an identifier to be identified. The processing module is used for dividing the multiple frames of images into at least a first image group and a second image group, and executing the following processing in parallel for the current image in each of the first image group and the second image group: identifying image content in a preset area in the current image to obtain data information of the identifier to be identified; in the case that obtaining the data information via the image content in the preset area fails, identifying the image content in a target sub-area in the preset area to obtain the data information of the identifier to be identified; and in the case that obtaining the data information via the image content in the target sub-area fails, obtaining another frame image in the image group where the current image is located, so as to obtain the data information of the identifier to be identified through the other frame image.
In one embodiment of the invention, the system further comprises a first determination module and a first adjustment module. The first determining module is used for obtaining a posture change parameter of the electronic device in the process of acquiring the multiple frames of images. The first adjusting module is used for adjusting the range of the target sub-region based on the posture change parameter.
In another embodiment of the present invention, the system further comprises a generation module and a second adjustment module. The generation module is used for generating a recognition result in the case that obtaining the data information via the image content in the target sub-region fails, wherein the recognition result comprises the positional relationship between the identifier to be recognized and the target sub-region. The second adjusting module is used for adjusting the range of the target sub-region based on the recognition result.
In yet another embodiment of the present invention, the processing module includes a second determination submodule and a cropping submodule. The second determining submodule is used for determining the range of the target sub-region in the case that obtaining the data information via the image content in the preset region fails. The cropping submodule is used for cropping the preset region based on the range of the target sub-region to obtain the image content in the target sub-region.
In a further embodiment of the invention, the processing module further performs the following processing in parallel for a current image in the first image group and the second image group: generating response content based on the data information in a case where the data information is successfully obtained via the image content in the preset region, or generating response content based on the data information in a case where the data information is successfully obtained via the image content in the target sub-region.
In a third aspect of embodiments of the present invention, there is provided a medium storing computer-executable instructions that, when executed by a processing unit, are configured to implement the information processing method according to any one of the above embodiments.
In a fourth aspect of embodiments of the present invention, there is provided a computing device comprising: a processing unit, and a storage unit storing computer-executable instructions, which when executed by the processing unit, are configured to implement the information processing method of any one of the above embodiments.
According to the information processing method and the information processing system, the complexity of the user operation can be reduced, and the identification speed and accuracy can be improved, so that better experience is brought to the user.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
fig. 1 schematically shows an application scenario of an information processing method according to an exemplary embodiment of the present invention;
fig. 2A and 2B schematically show a flowchart of an information processing method according to an embodiment of the present invention;
fig. 2C schematically shows a system architecture of an information processing method according to an embodiment of the present invention;
FIG. 3 schematically shows a flow chart of an information processing method according to another embodiment of the present invention;
FIGS. 4A-4C schematically illustrate adjusting the range of a target sub-region according to an embodiment of the invention;
FIG. 5 schematically shows a flow chart of an information processing method according to another embodiment of the present invention;
FIG. 6 schematically shows a flow chart of an information processing method according to another embodiment of the present invention;
FIG. 7A schematically illustrates a block diagram of an information handling system in accordance with an embodiment of the present invention;
FIG. 7B schematically shows a block diagram of an information handling system according to another embodiment of the invention;
FIG. 7C schematically illustrates a block diagram of an information handling system according to another embodiment of the present invention;
FIG. 7D schematically illustrates a block diagram of a processing module according to an embodiment of the invention;
FIG. 8 schematically shows a schematic view of a computer-readable storage medium product according to an embodiment of the invention; and
FIG. 9 schematically shows a block diagram of a computing device according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to the embodiments of the invention, an information processing method, an information processing system, a medium, and a computing device are provided.
In this document, it is to be understood that any number of elements in the figures are provided by way of illustration and not limitation, and any nomenclature is used for differentiation only and not in any limiting sense.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of The Invention
The inventor has found that, in the prior art, recognizing an identifier to be recognized, such as a two-dimensional code or a bar code, requires the user to bring the identifier completely into a scanning frame, so that the operation is complex for the user and the recognition time is long. The method provided by the embodiment of the invention performs image recognition in parallel on images in at least two image groups: it recognizes the image content in a preset area and, in the case that the data information of the identifier to be identified is not obtained by recognizing the image content in the preset area, recognizes the image content of a target sub-area within the preset area to obtain the data information of the identifier to be identified. This reduces the operation complexity for the user, increases the recognition speed, and improves the recognition accuracy.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
Referring first to fig. 1, fig. 1 schematically shows an application scenario of an information processing method according to an exemplary embodiment of the present invention. It should be noted that fig. 1 is only an example of a scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the application scenario includes a mobile client 100; the display screen of the mobile client 100 displays an image 112, and a scan frame 110 is shown on the display screen. The scan frame 110 is used to guide the user to place the image of an identifier to be recognized, such as the two-dimensional code 111 in fig. 1, into an appropriate position (i.e., within the scan frame 110).
According to the embodiment of the present disclosure, the mobile client 100 scans the area where the two-dimensional code 111 is located to obtain multiple frames of images, divides the multiple frames of images into at least a first image group and a second image group, and performs the corresponding processing for the current image of each group in parallel. An embodiment of the processing performed on the current image is described below, taking the image 112 shown in fig. 1 as the current image of the first image group or of the second image group.
As shown in fig. 1, the current image 112 includes a preset region, which may be set as a region corresponding to the entire image, for example. The preset region includes a target sub-region, which may be, for example, a region formed by the scan frame 110. It should be understood that the target sub-region is not limited to the region formed by the scan frame 110, and may be other regions, and the range of the target sub-region may be fixed or may be dynamically adjustable. For convenience of description, the following describes an embodiment of the present disclosure by taking the target sub-region as a region formed by the scan frame 110 as an example.
According to an embodiment of the present disclosure, the processing performed on the current image 112 includes: identifying the image content in the preset area of the current image 112; in the case that the data information of the two-dimensional code 111 is not obtained by identifying the image content in the preset area of the current image 112, identifying the image content in the target sub-region, which may be, for example, the image content within the scan frame 110; and in the case that the data information of the two-dimensional code 111 is not obtained by identifying the image content within the scan frame 110, obtaining another frame image in the image group where the current image 112 is located, so as to obtain the data information of the two-dimensional code through the other frame image.
Exemplary method
A method of information processing according to an exemplary embodiment of the present invention is described below with reference to fig. 2A, 2B, and 2C in conjunction with the application scenario of fig. 1. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
Fig. 2A and 2B schematically show a flowchart of an information processing method according to an embodiment of the present invention.
Fig. 2C schematically shows a system architecture of an information processing method according to an embodiment of the present invention.
As shown in fig. 2A, the information processing method includes operations S210 to S220.
In operation S210, a plurality of frame images are acquired, wherein at least one frame image of the plurality of frame images includes an identifier to be identified.
In operation S220, the multi-frame image is divided into at least a first image group and a second image group, and a process is performed in parallel with respect to current images in the first and second image groups.
Wherein the processing procedure shown in fig. 2B is executed in parallel for the current images in the first image group and the second image group in operation S220.
As shown in fig. 2B, the processing procedure performed in parallel for the current images in the first image group and the second image group includes operations S221 to S223.
In operation S221, image content in a preset area in a current image is identified to obtain data information of the identifier to be identified.
In operation S222, in a case that obtaining the data information via the image content in the preset area fails, the image content in the target sub-area in the preset area is identified to obtain the data information of the identifier to be identified.
In operation S223, in a case that obtaining the data information via the image content in the target sub-area fails, another frame image in the image group where the current image is located is obtained, so as to obtain the data information of the identifier to be identified through the another frame image.
According to an exemplary embodiment of the present invention, the plurality of frames of images may be continuous or discontinuous in operation S210. The identification to be recognized may be, for example, a two-dimensional code, a bar code, a product identification, etc. As shown in fig. 2C, a camera on the electronic device may capture a plurality of frames of images.
According to an exemplary embodiment of the present invention, in operation S220, the multiple frames of images may, for example, be divided frame by frame into two or more image groups, wherein two adjacent frames may be assigned to different image groups. For example, images with odd frame numbers are divided into a first image group, and images with even frame numbers are divided into a second image group. Alternatively, a preset number of adjacently numbered frames may be divided into one group; for example, images numbered 1-3 are divided into a first image group, and images numbered 4-6 are divided into a second image group.
As shown in fig. 2C, the electronic device may include, for example, a decoder management module, and the decoder management module divides the multiple frames of images into the first image group and the second image group. Operations S221 to S223 shown in fig. 2B are performed in parallel by the decoders corresponding to the first image group and the second image group, respectively. For example, as shown in fig. 2C, decoder 1 processes the first image group (image frames with odd numbers), and decoder 2 processes the second image group (image frames with even numbers).
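By way of illustration only, the frame grouping and parallel decoders described above can be sketched roughly in Python as follows. The split, the two worker threads, and the `decode_frame` callable are assumptions made for the sketch: `decode_frame` merely stands in for whatever decoder the electronic device actually uses and is expected to return the decoded data, or None on failure; this is not the patented implementation itself.

```python
import queue
import threading

def split_into_groups(frames, num_groups=2):
    """Distribute frames round-robin so that adjacent frames fall into
    different groups (odd-numbered frames -> group 0, even -> group 1)."""
    groups = [[] for _ in range(num_groups)]
    for index, frame in enumerate(frames):
        groups[index % num_groups].append(frame)
    return groups

def decoder_worker(name, frame_queue, results, decode_frame):
    """Each decoder consumes only the frames of its own image group and stops
    as soon as one frame yields data information."""
    while True:
        frame = frame_queue.get()
        if frame is None:            # sentinel: no more frames in this group
            break
        data = decode_frame(frame)   # placeholder for operations S221-S223
        if data is not None:
            results.append((name, data))
            break

def run_parallel_decoders(frames, decode_frame):
    results, threads = [], []
    for i, group in enumerate(split_into_groups(frames)):
        q = queue.Queue()
        for frame in group:
            q.put(frame)
        q.put(None)
        t = threading.Thread(target=decoder_worker,
                             args=(f"decoder-{i + 1}", q, results, decode_frame))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return results
```

One consequence of assigning adjacent frames to different groups is that, if one decoder stalls on a difficult frame, the other group still advances through more recent frames.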
According to an exemplary embodiment of the present invention, in operation S221, a preset region in the current image may be, for example, an entire region of the current image, or the like. For example, in the scenario shown in fig. 1, decoder 1 or decoder 2 identifies the image content in the current image 112.
According to an exemplary embodiment of the present invention, in operation S222, for example, in the case that identifying the image content in the current image 112 does not yield the data information of the two-dimensional code 111, the image content within the scan frame 110 is identified instead. According to an exemplary embodiment of the present invention, the target sub-area may be a predetermined area, for example the scan frame shown in fig. 1. As another example, the boundary of the target sub-region may lie between the boundary of the whole image and the boundary of the scanning frame: for instance, the target sub-region may be a square whose side length is 7/10 of the screen width, located at the very center of the screen and containing the entire region of the scan frame. The target sub-region may also be a region whose area and/or position can be adjusted at any time; for example, in the case that the content in the scan frame is not successfully recognized, the range of the target sub-region is automatically enlarged.
According to an exemplary embodiment of the present invention, in operation S222, in the case that obtaining the data information via the image content in the preset region fails, identifying the image content in the target sub-region in the preset region to obtain the data information of the identifier to be identified includes: determining the range of the target sub-region, and cropping the preset region based on the range of the target sub-region to obtain the image content in the target sub-region. For example, in the scenario shown in fig. 1, if recognition of the image 112 fails, the image 112 is cropped to obtain the image content of the target sub-region, which may be, for example, the scan frame.
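A minimal sketch of this cropping step is given below, assuming the image supports 2-D slicing (e.g. a numpy array in height-by-width layout) and the target sub-region is expressed as a pixel rectangle inside the preset region; the 7/10-of-screen-width square is only the example mentioned above, not a required range.

```python
def crop_target_sub_region(image, left, top, width, height):
    """Return the image content inside the target sub-region.
    `image` is assumed to support 2-D slicing (e.g. a numpy array)."""
    return image[top:top + height, left:left + width]

def centered_square_region(screen_width, screen_height, fraction=0.7):
    """Example range: a centered square whose side is `fraction` of the
    screen width, large enough to contain the scan frame."""
    side = int(screen_width * fraction)
    left = (screen_width - side) // 2
    top = (screen_height - side) // 2
    return left, top, side, side
```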
According to an exemplary embodiment of the present invention, in operation S223, for example in the scenario shown in fig. 2C, decoder 1 identifies the images with odd frame numbers; if decoder 1 fails to identify the current image numbered 3, the image numbered 5 is acquired and identified. Similarly, decoder 2 performs a process similar to that of decoder 1 and identifies the images with even frame numbers.
According to an exemplary embodiment of the present invention, the frames of images to be processed may be stored in a queue at a specific location, from which the electronic device can read the next frame image so as to identify it in the case that the data information of the identifier to be identified is not recognized in the current image.
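For instance, the frame queue of one image group might be kept in a simple FIFO structure as sketched below; `try_decode` is a hypothetical placeholder for the recognition attempt on a single frame, not an API of any particular library.

```python
from collections import deque

def decode_from_queue(pending_frames: deque, try_decode):
    """Keep reading the next frame of the image group until decoding
    succeeds or the queue is exhausted."""
    while pending_frames:
        frame = pending_frames.popleft()
        data = try_decode(frame)
        if data is not None:
            return data
    return None
```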
According to the exemplary embodiment of the invention, the information processing method first identifies the image in the preset area, and then identifies the image content of the target sub-area in the case that identification of the image content of the preset area fails. Because the invention divides the multiple frames of images into at least two image groups and identifies the images in the image groups in parallel, the complexity of user operation is at least partially reduced and the identification time is shortened. Moreover, if identification of the image content of the preset area fails, identifying the image content of the target sub-area reduces the interference of the image content of other sub-areas with the identifier to be identified, thereby further improving the identification accuracy.
Fig. 3 schematically shows a flow chart of an information processing method according to another embodiment of the present invention.
As shown in fig. 3, the information processing method further includes operations S310 and S320 on the basis of the foregoing embodiment.
In operation S310, a posture change parameter of the electronic device during the process of acquiring the multi-frame image is obtained.
In operation S320, a range of the target sub-region is adjusted based on the posture variation parameter.
According to the exemplary embodiment of the invention, the method can adjust the range of the target sub-area, thereby further improving the recognition speed and reducing the complexity of user operation.
According to the exemplary embodiment of the invention, when a user uses the electronic device to recognize an identifier to be recognized, such as a two-dimensional code, if the electronic device fails to recognize the identifier at the current position and posture, the user usually rotates or moves the mobile phone to place the two-dimensional code image within the scanning frame, so that the posture and position of the electronic device change.
According to an exemplary embodiment of the present invention, in operation S310, the posture change parameter of the electronic device may be, for example, the direction in which the electronic device moves or rotates, the distance of the movement, the angle of the rotation, and the like. The posture change parameter of the electronic device may be obtained, for example, by an acceleration sensor in the electronic device.
According to an exemplary embodiment of the present invention, adjusting the extent of the target sub-region includes increasing the area of the target sub-region in operation S320. For example, the range of the initial target sub-region is a region formed by a dotted line shown in fig. 4A, and if it is detected that the electronic device deflects to the left, the target sub-region is adjusted to the region formed by the dotted line shown in fig. 4B, that is, the left boundary of the target sub-region is expanded to the left boundary of the preset region.
According to an exemplary embodiment of the present invention, adjusting the extent of the target sub-region includes keeping the area of the target sub-region constant, and adjusting the position of the target sub-region in the preset region. For example, the range of the initial target sub-region is a region formed by a dotted line shown in fig. 4A, and if it is detected that the electronic device deflects to the left, as shown in fig. 4C, the target sub-region is moved to the left while keeping the area of the target sub-region unchanged.
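The two adjustment styles of fig. 4B and fig. 4C can be sketched as follows; the rectangle representation, the simplified "left"/"right" deflection direction, and the step size are assumptions made for illustration rather than part of the claimed method.

```python
def expand_toward(region, direction, preset_width):
    """Fig. 4B style: grow the target sub-region toward the deflection
    direction, up to the boundary of the preset region."""
    left, top, width, height = region
    if direction == "left":
        width += left
        left = 0                      # left edge reaches the preset boundary
    elif direction == "right":
        width = preset_width - left   # right edge reaches the preset boundary
    return left, top, width, height

def shift_toward(region, direction, step, preset_width):
    """Fig. 4C style: keep the area unchanged and move the region."""
    left, top, width, height = region
    if direction == "left":
        left = max(0, left - step)
    elif direction == "right":
        left = min(preset_width - width, left + step)
    return left, top, width, height
```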
Fig. 5 schematically shows a flowchart of an information processing method according to another embodiment of the present invention.
As shown in fig. 5, the information processing method further includes operations S510 and S520 on the basis of the foregoing embodiment.
In operation S510, in a case that obtaining the data information via the image content in the target sub-region fails, a recognition result is generated, where the recognition result includes a positional relationship between the to-be-recognized identifier and the target sub-region.
In operation S520, the range of the target sub-region is adjusted based on the recognition result.
According to the method, the range of the target sub-area is adjusted according to the recognition result, the recognition speed is further increased, and the complexity of user operation is reduced.
According to an exemplary embodiment of the present invention, in operation S510, for example, the initial target sub-region is the region formed by the dotted line shown in fig. 4A; in the case that recognizing the image content within the dotted-line region does not yield the data information of the identifier to be recognized, a recognition result is generated. The recognition result comprises the positional relationship between the identifier to be recognized and the target sub-region. For example, recognizing the initial target sub-region may yield partial two-dimensional code information together with other interference information, so that the positional relationship between the identifier to be recognized and the initial target sub-region can be determined from the recognized partial two-dimensional code information, the interference information, and the positional relationship between the two-dimensional code and the interference information. For instance, if the image content recognized in the initial target sub-region includes the two-dimensional code and interference information located on the left side of the two-dimensional code, it can be determined that the unrecognized remainder of the two-dimensional code is located on the right side of the initial target sub-region.
According to the exemplary embodiment of the present invention, in operation S520, for example, if the recognition result shows that the left 2/3 of the identifier to be recognized is not within the target sub-region, the range of the target sub-region is adjusted: for example, the left boundary of the target sub-region is expanded to the left, or the area of the target sub-region is kept unchanged and the position of the target sub-region is moved. The adjustment of the range of the target sub-region is described in operation S320 and is not repeated here.
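A rough sketch of turning the recognition result into an adjustment follows; the `missing_side` field is an assumed simplification of the positional relationship described above, not a data structure defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    # Side of the target sub-region on which the unrecognized part of the
    # identifier is estimated to lie: "left", "right", "top" or "bottom".
    missing_side: str

def adjust_by_result(region, result, preset_width, preset_height):
    """Expand the target sub-region toward the side where the identifier
    extends beyond it (compare fig. 4A-4C)."""
    left, top, width, height = region
    if result.missing_side == "left":
        width += left
        left = 0
    elif result.missing_side == "right":
        width = preset_width - left
    elif result.missing_side == "top":
        height += top
        top = 0
    elif result.missing_side == "bottom":
        height = preset_height - top
    return left, top, width, height
```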
According to an exemplary embodiment of the present invention, the following processing is also performed in parallel for a current image in the first image group and the second image group: generating response content based on the data information in the case that the data information acquisition via the image content in the preset area is successful; or in the case where the acquisition of the data information via the image content in the target sub-area is successful, generating the response content based on the data information.
According to an exemplary embodiment of the present invention, generating the response content based on the data information includes jumping to a web page corresponding to the identifier to be recognized based on the data information.
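As a rough sketch, if the decoded data information is a URL, generating the response content amounts to opening that URL; `webbrowser` here is only a desktop stand-in for the in-app page jump a mobile client would perform, and the URL check is an assumption for the example.

```python
import webbrowser

def generate_response(data_information: str) -> bool:
    """Jump to the web page corresponding to the identifier if the data
    information is a URL; otherwise report that no response was generated."""
    if data_information.startswith(("http://", "https://")):
        webbrowser.open(data_information)
        return True
    return False
```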
Fig. 6 schematically shows a flowchart of an information processing method according to another embodiment of the present invention. The information processing method can be applied to any processing module corresponding to the image group.
As shown in fig. 6, the method includes operations S610 to S670.
In operation S610, reading a frame image;
in operation S620, the frame image is subjected to recognition processing, for example, operation S221 described above with reference to fig. 2B is performed. Wherein the preset region in operation S221 is the entire region of the frame image.
In operation S630, it is determined whether data information of the identifier to be recognized is obtained by performing a recognition process on the frame image. If the data information of the identifier to be recognized is successfully obtained, operation S670 is executed, and a recognition result is returned, for example, a jump may be made to a webpage corresponding to the identifier to be recognized. If the data information of the identifier to be recognized is not obtained, operation S640 is performed.
In operation S640, the frame image is cropped to obtain image content corresponding to the target sub-region.
In operation S650, the image content in the target sub-region is subjected to a recognition process, for example, operation S222 described above with reference to fig. 2B is performed.
In operation S660, it is determined whether data information of the identifier to be recognized is obtained by performing a recognition process on the image content of the target sub-area. If the data information of the identifier to be recognized is successfully obtained, operation S670 is executed, and a recognition result is returned, for example, a jump may be made to a webpage corresponding to the identifier to be recognized.
If the data information of the identifier to be recognized is not obtained, for example, the operation S223 described above with reference to fig. 2B is performed, and another frame image in the image group where the image is located is continuously read.
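Putting the steps of fig. 6 together, the loop run by each decoder can be sketched as below; `decode` and `crop` are hypothetical placeholders for the recognition and cropping routines (for example, the helpers sketched earlier), not functions defined by the patent.

```python
def process_image_group(frames, decode, crop, target_region):
    """Try each frame of one image group until data information is obtained."""
    for frame in frames:                        # S610: read a frame image
        data = decode(frame)                    # S620/S630: recognize the whole preset region
        if data is not None:
            return data                         # S670: return the recognition result
        sub_image = crop(frame, target_region)  # S640: crop to the target sub-region
        data = decode(sub_image)                # S650/S660: recognize the sub-region content
        if data is not None:
            return data                         # S670
        # otherwise continue reading another frame of the same group (S223)
    return None
```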
Exemplary System
Having described the method of the exemplary embodiment of the present invention, next, an information processing system of the exemplary embodiment of the present invention will be explained with reference to fig. 7A to 7D.
The embodiment of the invention provides an information processing system.
FIG. 7A schematically shows a block diagram of an information handling system 700 according to an embodiment of the present invention.
As shown in fig. 7A, the system may include: an acquisition module 710 and a processing module 720.
The obtaining module 710, for example, performs the operation S210 described above with reference to fig. 2A, to obtain multiple frames of images, where at least one of the multiple frames of images includes the identifier to be recognized.
The processing module 720, for example, performs operation S220 described above with reference to fig. 2A, and is used for dividing the multiple frames of images into at least a first image group and a second image group and executing the following processing in parallel for the current image in each of the first image group and the second image group: identifying image content in a preset area in the current image to obtain data information of the identifier to be identified; in the case that obtaining the data information via the image content in the preset area fails, identifying the image content in a target sub-area in the preset area to obtain the data information of the identifier to be identified; and in the case that obtaining the data information via the image content in the target sub-area fails, obtaining another frame image in the image group where the current image is located, so as to obtain the data information of the identifier to be identified through the other frame image.
Fig. 7B schematically shows a block diagram of an information processing system 800 according to another embodiment of the present invention.
As shown in fig. 7B, the system 800 may further include, on the basis of the foregoing embodiments: a first determination module 810 and a first adjustment module 820.
The first determining module 810, for example, performs operation S310 described above with reference to fig. 3, for obtaining a posture change parameter of the electronic device during the process of acquiring the multiple frames of images.
The first adjusting module 820, for example, performs the operation S320 described above with reference to fig. 3, for adjusting the range of the target sub-region based on the posture change parameter.
Fig. 7C schematically shows a block diagram of an information processing system 900 according to another embodiment of the present invention.
As shown in fig. 7C, the system 900 may further include, on the basis of the foregoing embodiments: a generating module 910 and a second adjusting module 920.
The generating module 910, for example, performs operation S510 described above with reference to fig. 5, and is configured to generate a recognition result in the case that obtaining the data information via the image content in the target sub-region fails, where the recognition result includes the positional relationship between the identifier to be recognized and the target sub-region.
The second adjusting module 920, for example, performs operation S520 described above with reference to fig. 5, for adjusting the range of the target sub-region based on the identification result.
Fig. 7D schematically illustrates a block diagram of a processing module 720 according to an embodiment of the invention.
As shown in fig. 7D, the processing module 720 may include: a second determination submodule 721 and a cropping submodule 722.
The second determining sub-module 721 is configured to determine the range of the target sub-area in case that the data information is not obtained via the image content in the preset area.
The cropping sub-module 722 is configured to crop the preset region based on the range of the target sub-region to obtain the image content in the target sub-region.
According to an exemplary embodiment of the present invention, for the current image in each of the first image group and the second image group, the processing module further performs the following processing in parallel: generating response content based on the data information in the case that the data information is successfully obtained via the image content in the preset region, or generating response content based on the data information in the case that the data information is successfully obtained via the image content in the target sub-region.
According to an exemplary embodiment of the present invention, generating the response content based on the data information includes jumping to a web page corresponding to the identifier to be recognized based on the data information.
Exemplary Medium
An embodiment of the present invention provides a medium storing computer-executable instructions, which when executed by the processing unit, are configured to implement the information processing method of any one of the method embodiments.
In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product including program code which, when the program product is run on a computing device, causes the computing device to perform the steps of the information processing method according to the various exemplary embodiments of the present invention described in the "Exemplary method" section of this specification. For example, the electronic device may perform step S210 as shown in fig. 2A: acquiring multiple frames of images; and step S220: dividing the multiple frames of images into at least a first image group and a second image group, and executing processing in parallel for the current image in each of the first image group and the second image group. The processing includes step S221 shown in fig. 2B: identifying image content in a preset area in the current image to obtain data information of the identifier to be identified; step S222: in the case that obtaining the data information via the image content in the preset area fails, identifying the image content in a target sub-area in the preset area; and step S223: in the case that obtaining the data information via the image content in the target sub-area fails, obtaining another frame image in the image group where the current image is located.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 8, a program product 80 for information processing according to an embodiment of the present invention is depicted, which may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a computing device, such as a personal computer. However, the program product of the present invention is not limited in this regard; in the present document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Exemplary computing device
Having described the methods, media, and systems of the exemplary embodiments of the present invention, a computing device of the exemplary embodiments of the present invention is next described with reference to FIG. 9.
The embodiment of the invention also provides a computing device. The computing device includes: a processing unit; and a storage unit storing computer-executable instructions that, when executed by the processing unit, are configured to implement the information processing method of any one of the method embodiments.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, a method, or a program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," a "module," or a "system."
In some possible embodiments, a computing device according to the present invention may include at least one processing unit, and at least one memory unit. Wherein the storage unit stores program code which, when executed by the processing unit, causes the processing unit to perform the steps in the information presentation methods according to various exemplary embodiments of the present invention described in the above section "exemplary methods" of this specification. For example, the processing unit may perform step S210 as shown in fig. 2A: acquiring a multi-frame image; step S220: dividing the multi-frame image into at least a first image group and a second image group, and executing processing in parallel aiming at current images in the first image group and the second image group. Wherein the processing procedure includes step S221 shown in fig. 2B: identifying image content in a preset area in a current image to obtain data information of the identifier to be identified; step S222: in the case that obtaining the data information via the image content in the preset area fails, identifying the image content in a target sub-area in the preset area; and step S223, under the condition that the data information is failed to be obtained through the image content in the target subregion, obtaining another frame image in the image group where the current image is located.
A computing device 90 for information processing according to this embodiment of the present invention is described below with reference to fig. 9. The computing device 90 shown in FIG. 9 is only one example and should not be taken to limit the scope of use and functionality of embodiments of the present invention.
As shown in fig. 9, computing device 90 is embodied in the form of a general purpose computing device. Components of computing device 90 may include, but are not limited to: the at least one processing unit 901, the at least one memory unit 902, and the bus 903 connecting the various system components (including the memory unit 902 and the processing unit 901).
The storage unit 902 may include readable media in the form of volatile memory, such as a random access memory (RAM) 9021 and/or a cache memory 9022, and may further include a read-only memory (ROM) 9023.
It should be noted that although in the above detailed description several units/modules or sub-units/modules of the system are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, and that the division into aspects is made for convenience of description only and does not mean that features in those aspects cannot be combined to advantage. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (14)
1. An information processing method comprising:
acquiring multiple frames of images, wherein at least one frame of image in the multiple frames of images comprises an identifier to be identified;
dividing the multi-frame image into at least a first image group and a second image group, and executing the following processes in parallel aiming at the current image in the first image group and the second image group:
identifying image content in a preset area in a current image to obtain data information of the identifier to be identified;
in the case that obtaining the data information via the image content in the preset area fails, identifying the image content in a target sub-area in the preset area to obtain the data information of the identifier to be identified; and
in the case that obtaining the data information via the image content in the target sub-area fails, obtaining another frame image in the image group where the current image is located, so as to obtain the data information of the identifier to be identified through the other frame image.
2. The method of claim 1, further comprising:
acquiring a posture change parameter of the electronic device in the process of acquiring the multiple frames of images; and
adjusting the range of the target sub-region based on the posture change parameter.
3. The method of claim 1, further comprising:
generating a recognition result in the case that obtaining the data information via the image content in the target sub-region fails, wherein the recognition result comprises the positional relationship between the identifier to be identified and the target sub-region; and
adjusting the range of the target sub-region based on the identification result.
4. The method of claim 2 or 3, wherein the adjusting the range of the target sub-region comprises:
increasing the area of the target sub-region; or
Keeping the area of the target sub-region unchanged, and adjusting the position of the target sub-region in the preset region.
5. The method according to claim 1, wherein, in the case that obtaining the data information via the image content in the preset area fails, identifying the image content in the target sub-area in the preset area to obtain the data information of the identifier to be identified comprises:
determining the range of the target sub-area in the case that obtaining the data information via the image content in the preset area fails; and
cropping the preset area based on the range of the target sub-area to obtain the image content in the target sub-area.
6. The method according to claim 1, wherein the following processing is also performed in parallel for the current image in the first image group and the second image group:
generating response content based on the data information in the case that the data information is successfully obtained via the image content in the preset area; or
generating response content based on the data information in the case that the data information is successfully obtained via the image content in the target sub-area.
7. The method of claim 6, wherein the generating response content based on the data information comprises:
and jumping to a webpage corresponding to the identification to be identified based on the data information.
8. An information processing system comprising:
the device comprises an acquisition module, a recognition module and a recognition module, wherein the acquisition module is used for acquiring multi-frame images, and at least one frame image in the multi-frame images comprises an identifier to be recognized;
a processing module, configured to divide the multi-frame image into at least a first image group and a second image group, and execute the following processing for a current image in the first image group and the second image group in parallel:
identifying image content in a preset area in the current image to obtain data information of the identifier to be recognized;
in the case that obtaining the data information via the image content in the preset area fails, identifying the image content in a target sub-area in the preset area to obtain the data information of the identifier to be recognized; and
in the case that obtaining the data information via the image content in the target sub-area fails, obtaining another frame image in the image group where the current image is located, so as to obtain the data information of the identifier to be recognized through the other frame image.
9. The system of claim 8, further comprising:
the first determining module is used for obtaining a posture change parameter of the electronic device in the process of acquiring the multiple frames of images; and
the first adjusting module is used for adjusting the range of the target sub-region based on the posture change parameter.
10. The system of claim 8, further comprising:
the generation module is used for generating a recognition result in the case that obtaining the data information via the image content in the target sub-area fails, wherein the recognition result comprises the positional relationship between the identifier to be recognized and the target sub-area; and
the second adjusting module is used for adjusting the range of the target sub-area based on the recognition result.
11. The system according to claim 8, wherein the processing module comprises:
a second determining sub-module, configured to determine the range of the target sub-region in the case that obtaining the data information via the image content in the preset area fails; and
a cropping sub-module, configured to crop the preset area based on the range of the target sub-region so as to obtain the image content in the target sub-region.
12. The system according to claim 8, wherein the processing module further executes the following processing in parallel for the current image in the first image group and the second image group:
in the case that obtaining the data information via the image content in the preset area succeeds, generating response content based on the data information; or
in the case that obtaining the data information via the image content in the target sub-region succeeds, generating response content based on the data information.
13. A computer-readable medium storing computer-executable instructions which, when executed by a processing unit, implement the information processing method according to any one of claims 1 to 7.
14. A computing device, comprising:
a processing unit; and
a storage unit storing computer-executable instructions which, when executed by the processing unit, implement the information processing method according to any one of claims 1 to 7.
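A minimal sketch of the sub-region adjustment recited in claims 3, 4 and 10, assuming an axis-aligned rectangle representation (the `Rect` helper) and a simple string encoding of the positional relation between the identifier and the target sub-region; both are illustrative assumptions rather than anything the claims prescribe.

```python
from dataclasses import dataclass


@dataclass
class Rect:
    # Axis-aligned rectangle in full-frame pixel coordinates;
    # (x, y) is the top-left corner.
    x: int
    y: int
    w: int
    h: int


def adjust_sub_region(sub: Rect, preset: Rect, relation: str) -> Rect:
    # `relation` is a hypothetical encoding of the positional relation between
    # the identifier to be recognized and the target sub-region:
    # "left", "right", "up", "down", or "unknown".
    if relation == "unknown":
        # First branch of claim 4: enlarge the sub-region (here by roughly
        # 25% per side), clamped to the preset area.
        x0 = max(preset.x, sub.x - sub.w // 4)
        y0 = max(preset.y, sub.y - sub.h // 4)
        x1 = min(preset.x + preset.w, sub.x + sub.w + sub.w // 4)
        y1 = min(preset.y + preset.h, sub.y + sub.h + sub.h // 4)
        return Rect(x0, y0, x1 - x0, y1 - y0)
    # Second branch of claim 4: keep the area unchanged and slide the
    # sub-region toward the identifier, staying inside the preset area.
    dx = {"left": -sub.w // 2, "right": sub.w // 2}.get(relation, 0)
    dy = {"up": -sub.h // 2, "down": sub.h // 2}.get(relation, 0)
    new_x = min(max(preset.x, sub.x + dx), preset.x + preset.w - sub.w)
    new_y = min(max(preset.y, sub.y + dy), preset.y + preset.h - sub.h)
    return Rect(new_x, new_y, sub.w, sub.h)
```

Enlarging the window improves the chance of covering the identifier at the cost of decoding a larger region, while sliding it keeps the per-frame decoding cost constant; which branch to take can depend on how much of the positional relation is actually known.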
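The cropping recited in claims 5 and 11 amounts to slicing the preset area's pixels down to the target sub-region. A sketch using NumPy array slicing and the `Rect` type from the sketch above; the centered default window is only one assumed way of determining the initial range.

```python
import numpy as np


def crop_target_sub_region(frame: np.ndarray, region: Rect) -> np.ndarray:
    # Slice the pixels of the given region out of an H x W x C frame;
    # all rectangles are expressed in full-frame pixel coordinates.
    return frame[region.y:region.y + region.h, region.x:region.x + region.w]


def default_sub_region(preset: Rect, fraction: float = 0.5) -> Rect:
    # One assumed way to determine the initial range: a window centred in
    # the preset area that covers `fraction` of each of its dimensions.
    sub_w, sub_h = int(preset.w * fraction), int(preset.h * fraction)
    return Rect(preset.x + (preset.w - sub_w) // 2,
                preset.y + (preset.h - sub_h) // 2,
                sub_w, sub_h)
```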
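When the decoded data information carries a URL, as it typically does for a scanned QR code, the response of claim 7 can be as simple as opening that URL. A sketch under that assumption; an actual client would more likely hand the URL to its in-app web view.

```python
import webbrowser


def respond(data_information: str) -> None:
    # Assumes the decoded data information is a plain URL string.
    if data_information.startswith(("http://", "https://")):
        webbrowser.open(data_information)
```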
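A sketch of the overall flow of claim 8: the frames are split into two groups, the groups are processed in parallel, and within a group each frame is tried first over the whole preset area, then over the target sub-region, before moving on to the group's next frame. The even/odd split, the `decode_mark` placeholder, and the reuse of `Rect` and `crop_target_sub_region` from the sketches above are illustrative assumptions, not the claimed implementation.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List, Optional

import numpy as np


def decode_mark(image: np.ndarray) -> Optional[str]:
    # Placeholder for the actual decoder (for example a QR-code library);
    # it should return the decoded string, or None on failure.
    return None


def process_group(frames: List[np.ndarray], preset: Rect, sub: Rect) -> Optional[str]:
    for frame in frames:                                         # next frame on failure
        data = decode_mark(crop_target_sub_region(frame, preset))  # whole preset area
        if data is None:                                         # fall back to the sub-region
            data = decode_mark(crop_target_sub_region(frame, sub))
        if data is not None:
            return data
    return None


def recognize(frames: List[np.ndarray], preset: Rect, sub: Rect) -> Optional[str]:
    # Split the frames into (at least) two groups (here simply even and odd
    # indices) and process the groups in parallel threads.
    groups = [frames[0::2], frames[1::2]]
    with ThreadPoolExecutor(max_workers=len(groups)) as pool:
        results = list(pool.map(lambda g: process_group(g, preset, sub), groups))
    return next((r for r in results if r is not None), None)
```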
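One way the attitude change parameter of claim 9 could drive the adjustment: convert the device's rotation between frames into an approximate pixel offset and slide the sub-region by that amount. The small-angle projection, the sign convention, and the focal length in pixels are all assumptions for illustration.

```python
import math


def shift_sub_region_by_attitude(sub: Rect, preset: Rect,
                                 delta_yaw_rad: float, delta_pitch_rad: float,
                                 focal_px: float) -> Rect:
    # Rotating the camera by delta radians shifts the scene by roughly
    # focal_px * tan(delta) pixels in the image plane; the mapping of
    # rotation direction to image direction depends on the sensor frame.
    dx = int(round(focal_px * math.tan(delta_yaw_rad)))
    dy = int(round(focal_px * math.tan(delta_pitch_rad)))
    new_x = min(max(preset.x, sub.x + dx), preset.x + preset.w - sub.w)
    new_y = min(max(preset.y, sub.y + dy), preset.y + preset.h - sub.h)
    return Rect(new_x, new_y, sub.w, sub.h)
```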
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910235576.9A CN109934041B (en) | 2019-03-26 | 2019-03-26 | Information processing method, information processing system, medium, and computing device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910235576.9A CN109934041B (en) | 2019-03-26 | 2019-03-26 | Information processing method, information processing system, medium, and computing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109934041A CN109934041A (en) | 2019-06-25 |
CN109934041B (en) | 2021-12-17 |
Family
ID=66988429
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910235576.9A CN109934041B (en) | 2019-03-26 | 2019-03-26 | Information processing method, information processing system, medium, and computing device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109934041B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112308056B (en) * | 2019-07-26 | 2024-12-03 | 深圳怡化电脑股份有限公司 | Method, device, equipment and storage medium for acquiring bill feature area |
CN110659567B (en) * | 2019-08-15 | 2023-01-10 | 创新先进技术有限公司 | Method and device for identifying damaged part of vehicle |
CN110598562B (en) * | 2019-08-15 | 2023-03-07 | 创新先进技术有限公司 | Vehicle image acquisition guiding method and device |
CN111860190B (en) * | 2020-06-24 | 2024-04-12 | 国汽(北京)智能网联汽车研究院有限公司 | Method, device, equipment and storage medium for target tracking |
CN111767895B (en) * | 2020-07-09 | 2023-08-25 | 中国工商银行股份有限公司 | Device identification method, device, robot, and computer-readable storage medium |
CN112749631A (en) * | 2020-12-21 | 2021-05-04 | 北京百度网讯科技有限公司 | Data processing method and device based on image recognition, electronic equipment and medium |
CN113033236A (en) * | 2021-03-26 | 2021-06-25 | 北京有竹居网络技术有限公司 | Method, device, terminal and non-transitory storage medium for acquiring recognition target |
CN113312936B (en) * | 2021-05-13 | 2024-08-13 | 阳光电源股份有限公司 | Image positioning identification method and server |
CN113807410B (en) * | 2021-08-27 | 2023-09-05 | 北京百度网讯科技有限公司 | Image recognition method, device and electronic equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7016532B2 (en) * | 2000-11-06 | 2006-03-21 | Evryx Technologies | Image capture and identification system and process |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101840491A (en) * | 2010-05-25 | 2010-09-22 | 福建新大陆电脑股份有限公司 | Barcode decoder supporting multi-image concurrent processing |
CN101882210A (en) * | 2010-06-01 | 2010-11-10 | 福建新大陆电脑股份有限公司 | Matrix two-dimensional barcode decoding chip and its decoding method |
US8542930B1 (en) * | 2010-12-30 | 2013-09-24 | Cognex Corporation | Mark reader configured to prioritize images |
CN106599758A (en) * | 2016-11-29 | 2017-04-26 | 努比亚技术有限公司 | Image quality processing method and terminal |
CN107862314A (en) * | 2017-10-25 | 2018-03-30 | 武汉楚锐视觉检测科技有限公司 | A kind of coding recognition methods and identification device |
CN109450736A (en) * | 2018-12-11 | 2019-03-08 | 杭州网易再顾科技有限公司 | Network interface test method and device, medium and calculating equipment |
Non-Patent Citations (1)
Title |
---|
Development of Color QR Code for Increasing Capacity; Nutchanad Taveerad et al.; 2015 11th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS); 2016-02-08; pp. 645-648 *
Also Published As
Publication number | Publication date |
---|---|
CN109934041A (en) | 2019-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109934041B (en) | Information processing method, information processing system, medium, and computing device | |
US9578248B2 (en) | Method for generating thumbnail image and electronic device thereof | |
CN109120984B (en) | Barrage display method and device, terminal and server | |
US9354701B2 (en) | Information processing apparatus and information processing method | |
US8600106B1 (en) | Method and apparatus for tracking objects within a video frame sequence | |
CN111612696B (en) | Image stitching method, device, medium and electronic equipment | |
US11070729B2 (en) | Image processing apparatus capable of detecting moving objects, control method thereof, and image capture apparatus | |
US11048913B2 (en) | Focusing method, device and computer apparatus for realizing clear human face | |
RU2613038C2 (en) | Method for controlling terminal device with use of gesture, and device | |
US10291843B2 (en) | Information processing apparatus having camera function and producing guide display to capture character recognizable image, control method thereof, and storage medium | |
CN109934229B (en) | Image processing method, device, medium and computing equipment | |
CN107992366B (en) | Method, system and electronic equipment for detecting and tracking multiple target objects | |
CN111314626A (en) | Method and apparatus for processing video | |
CN110443772B (en) | Picture processing method and device, computer equipment and storage medium | |
CN110111241B (en) | Method and apparatus for generating dynamic image | |
US11216064B2 (en) | Non-transitory computer-readable storage medium, display control method, and display control apparatus | |
CN115147623B (en) | Target image acquisition method and related equipment | |
CN115514897B (en) | Method and device for processing image | |
CN111782121A (en) | Page rolling control method and device, readable storage medium and electronic equipment | |
CN114067145B (en) | Passive optical splitter detection method, device, equipment and medium | |
CN110969161B (en) | Image processing method, circuit, vision-impaired assisting device, electronic device, and medium | |
JP2014203382A (en) | Display device, display method, and display program | |
KR20190049350A (en) | Method for image processing of projector for object and apparatus for performing the method | |
US10455145B2 (en) | Control apparatus and control method | |
US20150234517A1 (en) | Display apparatus and method and computer program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||