HK1186273B - Method and apparatus for ordering code candidates in image for decoding attempts - Google Patents
Method and apparatus for ordering code candidates in image for decoding attempts
- Publication number: HK1186273B
- Application number: HK13113612.6A
- Authority: HK (Hong Kong)
- Prior art keywords: image, FOV, code, candidates, code candidates
Description
Cross reference to related applications
Not applicable.
Statement regarding federally sponsored research or development
Not applicable.
Background
The present invention relates to code readers, and more particularly to a code reader that optimizes the process of decoding code candidates in images obtained in rapid succession.
Automatic identification of products using optical codes has been widely practiced throughout industrial operations and in many other applications for many years. An optical code is a pattern of cells with different light reflectance or emission, which cells are combined according to a predetermined rule. The cells in an optical code may be the bars and spaces of a linear bar code, or the on/off pattern of a two-dimensional matrix code. A bar code or symbol can be printed on a label applied to the package of an item, or marked directly on the item itself by direct part marking. The information encoded in a bar code or symbol can be decoded using a fixed-mount device or an optical reader in a portable handheld device. In the case of fixed-mount equipment, a transmission line or the like is typically provided that moves objects marked with codes or symbols past the equipment's detector so that the detector can produce images of the codes.
At least some reader devices include a camera capable of producing a two-dimensional image of a field of view (FOV). For example, many current systems are equipped with two-dimensional CCD image sensors that acquire images and generate image data that is provided to a processor. The processor is programmed to examine the image data to identify code candidates (e.g., bar code or symbol candidates) and attempt to decode each code candidate. At least some reader devices are programmed to acquire images of the FOV in rapid succession and attempt to decode any code candidates in the acquired images as quickly as possible. For decoding, the processor runs one or more decoding algorithms.
As images are acquired, objects are moved through the FOV of the apparatus by a transmission line or the like, and in many cases a large number of images of the same object and applied code are acquired, with the object and code at different locations along the direction of travel through the FOV. Here, some code candidates in an image will be new to the FOV (i.e., the code candidate was outside the FOV during the previous image), and some codes will exit the FOV before the subsequent image is produced.
When acquiring an image of a code, the quality of the image depends on several factors, including the sensor angle with respect to the surface to which the code is applied, the material and texture of that surface, the quality of the code marking or damage to the marking after application, ambient and equipment lighting characteristics, distortion of the applied symbol, the speed of the transmission line, the distance to the surface to which the code is applied, optical speckle, camera resolution, and so on. Image quality affects whether a particular decoding algorithm run by the processor can successfully decode the code. For example, in many cases a simple decoding algorithm will not be able to successfully decode the code in an image unless the conditions under which the image was acquired are close to ideal.
To compensate for defective image acquisition, relatively complex decoding algorithms have been developed. For example, several decoding algorithms have been developed that can at least partially compensate for defective illumination, curved surfaces to which the code is applied, defective sensor angles relative to the surface to which the code is applied, and so forth.
Although complex decoding algorithms work well in compensating for defective images, one drawback is that complex algorithms typically require a greater amount of processing power and considerable time to execute. While more expensive algorithms are not a problem in some applications, in applications where images are acquired in rapid succession, expensive algorithms that require long cycles to complete can result in computational requirements that far exceed the capability of the reader processor. More particularly, in some cases, the image sensor can acquire and provide images quickly, such that the reader device processor cannot perform complex algorithms on all code candidates in an image before receiving the next image.
In the case where the processor cannot meet the computational requirements of performing complex algorithms for all code candidates in a fast succession of images, one solution is to postpone analyzing candidates in subsequent images until the reader device has attempted to decode all code candidates in the current image. Thus, for example, if the second through fourth images are obtained during the period needed to attempt to decode all code candidates in the first image, the second through fourth images will be discarded when the fifth image is obtained, and the code candidates in the fifth image will be examined next. Although this solution guarantees that a complex algorithm is applied to all candidates in the current or first image, it simply ignores code candidates in the second through fourth images while candidates in the first image are processed, regardless of whether candidates in those subsequent images may be more suitable for decoding. In this case, the end result may be that some codes that pass through the field of view are never successfully decoded.
Summary of the Invention
It has been recognized that, for decoding attempts, code candidates in an acquired image can be sorted into a candidate order as a function of various factors, including the location of a candidate in the image currently being examined, the location of the candidate in a previously acquired image, the results of decoding attempts in previous images, the direction of travel of the object/code through the field of view, the speed of travel through the field of view, whether the candidate will likely appear in a subsequent image, where the candidate is located within the field of view, and many other factors. Here, after the candidate order is established, the processor attempts to decode the candidates in the established order until the next image event occurs, at which point the processor discards, or at least stops attempting to decode, candidates in the current image and begins processing code candidates in the next image. The next image event may be the receipt of the next image, the end of a timeout period, the completion of a number of decoding attempts selected so as to complete before the next image is expected to be received, the completion of decoding attempts for all code candidates that were likely not decoded in the previous image, and so on.
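The order-then-decode loop described above can be sketched in Python. This is an illustrative sketch only, not part of the disclosure: the function name, the callback signatures (`try_decode`, `next_image_ready`, `order_key`), and the candidate representation are all assumptions.

```python
def decode_until_next_image_event(candidates, try_decode, next_image_ready,
                                  order_key):
    """Attempt to decode candidates in a priority order, abandoning the
    remainder as soon as the next image event occurs.

    try_decode(candidate)  -> decoded payload, or None on failure
    next_image_ready()     -> True once the next image event has occurred
    order_key(candidate)   -> sort key; lower values are attempted first
    """
    decoded = []
    for candidate in sorted(candidates, key=order_key):
        if next_image_ready():
            # Next image event: stop here; remaining candidates are
            # discarded and processing moves to the next image.
            break
        result = try_decode(candidate)
        if result is not None:
            decoded.append(result)
    return decoded
```

The various embodiments below differ mainly in how `order_key` (or an equivalent ordering step) is computed.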
For example, in at least some embodiments in which the transmission line moves the object/code through the device field of view in the direction of travel, such that an object enters the field of view along the entrance edge and exits along the exit edge, the processor may be programmed to sort the code candidates in an acquired image so that the processor attempts to decode the candidate closest to the entrance edge first, and then candidates progressively farther from the entrance edge, until either the processor has attempted to decode all candidates or the next image event (e.g., the next image is acquired) occurs. When the next image event occurs before all candidates have been attempted, the processor discards the candidates it has not yet begun attempting to decode and, in at least some cases, may stop decoding the candidate currently being decoded and begin processing the next image.
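A minimal sketch of this entrance-edge-first ordering follows; the `x` centroid field and the coordinate convention are assumptions for illustration, not details from the disclosure.

```python
def order_by_entrance_distance(candidates, entrance_x):
    """Sort candidates so the one closest to the entrance edge is first.

    Each candidate is assumed to carry an 'x' centroid coordinate; the
    entrance edge lies at x = entrance_x.
    """
    return sorted(candidates, key=lambda c: abs(c["x"] - entrance_x))
```

Exit-edge-first ordering, as in the next embodiment, is the same sketch with the exit-edge coordinate supplied instead.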
As another example, the processor may be programmed to order code candidates in the acquired image so that the processor attempts to decode the candidates closest to the exit edge first and then to decode candidates further away from the exit edge until the processor has attempted to decode all candidates, or the next image event (e.g., the next image is acquired) occurs. In other cases, the processor may divide the FOV into different regions of interest (ROIs) and then attempt to decode the candidates in the ROI in sequential order starting with a candidate in the first ROI, followed by the second ROI, and so on, until all candidates have been attempted or until the next image event occurs.
In other embodiments, the processor may be programmed to identify candidates that are likely not to appear in the next image based on the direction of travel, speed, and position of the candidates in the current image, or a subset of these factors, and may order the candidates such that candidates that are likely not to appear in the next image are processed first in the order.
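One plausible way to realize this prediction is dead reckoning from the candidate's position, the conveyor speed (e.g., from a tachometer), and the inter-image period. The coordinate conventions and field names below are illustrative assumptions.

```python
def will_exit_before_next_image(x, speed, frame_period, exit_x,
                                travel_direction=-1):
    """Predict whether a candidate at centroid x will have left the FOV
    by the time the next image is acquired.

    speed:            conveyor speed in pixels/second
    frame_period:     time between successive images, in seconds
    travel_direction: -1 for right-to-left travel, +1 for left-to-right
    """
    predicted_x = x + travel_direction * speed * frame_period
    if travel_direction < 0:
        return predicted_x < exit_x   # exit edge on the left
    return predicted_x > exit_x       # exit edge on the right

def order_exiting_first(candidates, speed, frame_period, exit_x):
    """Place candidates predicted to leave the FOV at the front of the order."""
    return sorted(candidates,
                  key=lambda c: not will_exit_before_next_image(
                      c["x"], speed, frame_period, exit_x))
```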
In other embodiments, the processor may be programmed to record the locations of successfully decoded candidates in a previous image, and to sort candidates for decoding attempts according to which code candidates in the current image were likely successfully decoded in the previous image and which were not. For example, where two of ten candidates in the current image likely correspond to two successfully decoded candidates from a previous image, the processor may order the other eight candidates first, and attempt to decode the two likely already-decoded candidates last, only if there is time before the next image is obtained.
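A sketch of this deprioritization, matching current candidates against the predicted positions of previously decoded codes; the matching tolerance and the right-to-left travel convention are assumptions for illustration.

```python
def order_by_previous_success(candidates, decoded_positions, speed,
                              frame_period, tolerance=10.0):
    """Order candidates so those likely matching a code already decoded in
    the previous image are attempted last.

    decoded_positions: x centroids of codes decoded in the previous image.
    Each is advanced by the expected per-frame displacement (right-to-left
    travel assumed) and matched against current candidates within
    `tolerance` pixels.
    """
    displacement = speed * frame_period
    predicted = [p - displacement for p in decoded_positions]

    def likely_already_decoded(c):
        return any(abs(c["x"] - p) <= tolerance for p in predicted)

    # False sorts before True, so unmatched (not-yet-decoded) candidates
    # come first in the order.
    return sorted(candidates, key=likely_already_decoded)
```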
In still other embodiments, the processor may rank the candidates based on their positions in the image or other factors, depending on whether the candidates will likely appear in a subsequent image and whether the candidates are likely to be decoded in a previous image.
Some embodiments include a method for decoding a code applied to an object for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view (FOV), and the conveyor system moves the object in a first direction of travel through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, the method including the step of providing a processor programmed to perform the steps of: obtaining images of the FOV; for each image, identifying code candidates in at least a portion of the image; ordering at least a subset of the code candidates for decoding in a candidate order, wherein the candidate order is determined at least in part by the first direction of travel through the FOV; attempting to decode the code candidates in the candidate order; and, when a new image event occurs, forgoing attempts to decode at least a portion of the identified code candidates.
In some embodiments, the step of ranking at least a subset of the code candidates comprises identifying at least first and second regions of interest (ROIs) in the FOV, the first and second regions of interest being adjacent to an entrance edge and an exit edge of the FOV, respectively; identifying code candidates in each of at least the first and second ROIs; and ordering code candidates in one of the first and second ROIs prior to code candidates in the other of the first and second ROIs. In some embodiments, code candidates in the first ROI are ordered in candidate order before code candidates in the second ROI. In some embodiments, code candidates in the second ROI are ordered in candidate order before code candidates of the first ROI.
In some embodiments, the step of ordering at least a subset of the code candidates further comprises the steps of: identifying code candidates that will likely be outside the FOV when the next image is obtained, and ordering those code candidates near the beginning of the candidate order. In some cases, the step of identifying code candidates that will likely be outside the FOV further comprises the steps of: among the code candidates that will likely be outside the FOV when the next image is obtained, identifying code candidates that were likely decoded in a previous image and code candidates that were likely not decoded in a previous image, and ordering the likely-undecoded candidates before the likely-decoded candidates.
In some cases, the step of ordering at least a subset of the code candidates further comprises the steps of: identifying code candidates that are likely new to the FOV; ordering code candidates that likely appeared in a previous image and that will likely still be in the FOV when a subsequent image is obtained near the end of the candidate order; and ordering code candidates that likely appeared in a previous image near the approximate middle of the candidate order.
In some embodiments, the step of ordering at least a subset of the code candidates comprises: identifying code candidates that are new to the FOV and ordering them near the beginning of the candidate order. In some embodiments, the step of ordering at least a subset of the code candidates comprises: identifying code candidates that will likely be outside the FOV when the next image is obtained and that were likely not decoded in a previous image as a first candidate subset; identifying code candidates that will likely be outside the FOV when the next image is obtained and that were likely decoded in a previous image as a second candidate subset; identifying code candidates that will likely still be in the FOV when the next image is obtained and that were likely not decoded in a previous image as a third candidate subset; identifying code candidates that will likely still be in the FOV when the next image is obtained and that were likely decoded in a previous image as a fourth candidate subset; and ordering the subsets such that the first candidate subset occurs before the second, the second before the third, and the third before the fourth.
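The four-subset ordering above maps naturally onto a priority function. The predicates `will_exit` and `likely_decoded` stand in for whatever prediction logic an implementation actually uses; this is a sketch, not the disclosed implementation.

```python
def order_four_subsets(candidates, will_exit, likely_decoded):
    """Order candidates into the four subsets described above:

    priority 0: likely exiting the FOV and likely not yet decoded
    priority 1: likely exiting the FOV but likely already decoded
    priority 2: likely staying in the FOV and likely not yet decoded
    priority 3: likely staying in the FOV and likely already decoded
    """
    def priority(c):
        if will_exit(c) and not likely_decoded(c):
            return 0
        if will_exit(c):
            return 1
        if not likely_decoded(c):
            return 2
        return 3
    return sorted(candidates, key=priority)
```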
In some cases, when a new image is obtained, the method further includes discarding the code candidates remaining in the candidate order without attempting to decode them. In some cases, the conveying system conveys the object in the first direction of travel at a conveying speed, and the candidate order is determined at least in part according to the conveying speed. In some embodiments, at least first and second different decoding algorithms may be used to attempt to decode any code candidate, the method further comprising the step of: assigning one of the first and second decoding algorithms to each candidate in at least a subset of the candidate order, wherein the algorithm assigned to each candidate is based at least in part on the candidate order.
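A minimal sketch of assigning one of two algorithms by position in the candidate order; the `complex_budget` parameter is an assumption (an implementation might instead budget by estimated decode time), and the algorithm labels are placeholders.

```python
def assign_algorithms(ordered_candidates, complex_budget):
    """Assign a decoding algorithm to each candidate by its position in the
    candidate order: the first `complex_budget` candidates get the slower,
    more robust algorithm; the rest get the fast one.
    """
    return [(c, "complex" if i < complex_budget else "simple")
            for i, c in enumerate(ordered_candidates)]
```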
Some embodiments further comprise the step of: identifying code candidates that were likely successfully decoded in a previous image and code candidates that were likely not successfully decoded in a previous image, the step of assigning the first and second decoding algorithms comprising assigning the decoding algorithms based at least in part on whether the code candidates were likely decoded in a previous image.
In some embodiments, the first decoding algorithm requires more time to complete than the second decoding algorithm. In some cases, a new image event occurs when a new image is obtained. In some embodiments, a new image event occurs when the image acquisition period has elapsed.
Some embodiments include a method for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view (FOV) and the conveyor system moves the object in a first direction of travel through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, the method comprising the step of providing a processor programmed to perform the steps of: identifying the first direction of travel through the FOV; obtaining images of the FOV, wherein the most recently obtained image is a current image; for each image, identifying code candidates in at least a portion of the image; attempting to decode code candidates near the FOV exit edge; after attempting to decode code candidates near the exit edge of the FOV, attempting to decode code candidates near the entrance edge of the FOV; and, when a new image event occurs, forgoing attempts to decode at least a portion of the identified code candidates.
Some embodiments include a method for decoding a code applied to an object and for use with a camera including an image sensor having a two-dimensional field of view (FOV), and a transport system that moves the object in a first direction through the FOV such that the object enters the FOV along an entrance edge of the FOV and exits the FOV along an exit edge of the FOV, such that the sensor produces an image having at least first and second different regions of interest adjacent to the entrance and exit edges, respectively. The method includes the step of providing a processor programmed to perform the steps of: obtaining images of the FOV; for each image, identifying code candidates in at least a portion of the image; identifying at least first and second different regions of interest (ROIs) in the obtained image; attempting to decode code candidates in the first ROI a first time; attempting to decode code candidates in the second ROI after attempting to decode candidates in the first ROI; attempting to decode code candidates in the first ROI a second time after attempting to decode candidates in the second ROI; and, when a new image event occurs, forgoing attempts to decode at least a portion of the identified code candidates.
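The pass structure described above — first ROI, then second ROI, then the first ROI again, abandoning work on a new image event — might be sketched as follows. Pairing the second pass over the first ROI with a more complex algorithm is one plausible reading; the callback names are assumptions.

```python
def two_pass_roi_decode(rois, decode_fast, decode_complex, next_image_ready):
    """Attempt decoding in the pass order: first ROI (fast algorithm),
    second ROI (fast algorithm), first ROI again (complex algorithm).
    Abandon remaining attempts as soon as the next image event occurs.
    """
    first_roi, second_roi = rois
    passes = [(first_roi, decode_fast),
              (second_roi, decode_fast),
              (first_roi, decode_complex)]
    decoded = []
    for candidates, decoder in passes:
        for candidate in candidates:
            if next_image_ready():
                return decoded   # new image event: forgo the rest
            result = decoder(candidate)
            if result is not None:
                decoded.append(result)
    return decoded
```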
In some embodiments, the step of attempting to decode the code candidates in the first ROI a first time comprises using a first decoding algorithm, and the step of attempting to decode the code candidates in the first ROI a second time comprises attempting to decode the code candidates using a second decoding algorithm different from the first decoding algorithm. In some embodiments, the first ROI is proximate one of the entrance and exit edges of the FOV and the second ROI is proximate the other of the entrance and exit edges of the FOV.
Some embodiments further comprise the step of ordering the code candidates in the first ROI in a candidate order, the step of attempting to decode the code candidates in the first ROI the first time comprising attempting to decode the code candidates in an order specified by the candidate order. Some cases further include the step of identifying code candidates in the first ROI that were likely decoded in at least one previous image, the ordering step comprising ordering the code candidates at least partly according to which candidates were likely decoded in at least one previous image.
Other embodiments include an apparatus for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view (FOV), and the conveyor system moves the object in a first direction of travel through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, the apparatus including a processor programmed to perform the steps of: obtaining images of the FOV; for each image, identifying code candidates in at least a portion of the image; ordering at least a subset of the code candidates for decoding in a candidate order, wherein the candidate order is determined at least in part by the first direction of travel through the FOV; attempting to decode the code candidates in the candidate order; and, when a new image event occurs, forgoing attempts to decode at least a portion of the identified code candidates.
In some embodiments, the processor is programmed to perform the steps of: sorting at least a subset of the code candidates by identifying at least first and second regions of interest (ROIs) in the FOV, the first and second regions of interest being adjacent to an entrance edge and an exit edge of the FOV, respectively; code candidates in each of at least the first and second ROIs are identified, and code candidates in one of the first and second ROIs are ordered for processing before code candidates in the other of the first and second ROIs.
In some embodiments, the processor is programmed to order at least a subset of the code candidates by identifying code candidates that will likely be outside the FOV when the next image is obtained and ordering those code candidates near the beginning of the candidate order. In some embodiments, the processor is programmed to order at least a subset of the code candidates by identifying code candidates that are new to the FOV and ordering them near the beginning of the candidate order.
Other embodiments include an apparatus for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view (FOV), and the conveyor system moves the object in a first direction of travel through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, the apparatus including a processor programmed to perform the steps of: identifying the first direction of travel through the FOV; obtaining images of the FOV, wherein the most recently obtained image is a current image; for each image, identifying code candidates in at least a portion of the image; attempting to decode code candidates near the FOV exit edge; after attempting to decode code candidates near the FOV exit edge, attempting to decode code candidates near the FOV entrance edge; and, when a new image event occurs, forgoing attempts to decode at least a portion of the identified code candidates.
To the accomplishment of the foregoing and related ends, the invention, then, comprises the features hereinafter fully described. The following description and the annexed drawings set forth in detail certain illustrative aspects of the invention. These aspects are indicative, however, of but a few of the various ways in which the invention may be practiced. Other aspects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
Brief description of the drawings
FIG. 1 is a schematic diagram illustrating a machine vision system and transmission line, wherein the machine vision system includes features consistent with at least some aspects of the present disclosure;
FIG. 2 illustrates an exemplary field of view, an exemplary region of interest in the field of view, and exemplary code candidates in the field of view;
FIG. 3 is similar to FIG. 2, albeit showing code candidates at different relative positions with respect to the field of view and the regions of interest;
FIG. 4 is a flow chart illustrating a method by which code candidates in an obtained image are sorted for decoding attempts according to the travel speed and travel direction of the candidates through the field of view;
FIG. 5 is a flow chart illustrating a method by which code candidates in an obtained image are grouped into subsets according to the region of interest of the field of view in which they appear, and are processed by a processor in an order related to the code candidates' regions of interest;
FIG. 6 is a flow chart illustrating a method by which a processor sorts code candidates according to whether those code candidates were likely decoded in a previous image; and
FIGS. 7 and 8 show a single flow chart in which the processor sorts the code candidates for decoding attempts according to whether the code candidates were likely decoded in a previously obtained image and whether they will likely appear in a subsequent image.
Detailed description of the invention
Various aspects of the subject invention are now described with reference to the drawings, wherein like reference numerals correspond to like parts throughout the several views. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the claimed subject matter to the particular form disclosed. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the claimed subject matter.
As used herein, the terms "component," "system," and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computer and the computer can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers or processors.
The word "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs.
Still further, the disclosed subject matter may be implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or processor-based device to implement aspects described herein. The term "article of manufacture" (or alternatively "computer program product") as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media can include, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips...), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)...), smart cards, and flash memory devices (e.g., card, stick). Further, it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Referring now to the drawings, wherein like reference numbers correspond to like components throughout the several views, and more particularly to FIG. 1, the present invention will be described in the context of an exemplary imaging system/transmission line 10, in which a transmission line 30 moves objects 26a, 26b, 26c, etc. along a direction of travel 25. In the present example, each object has similar physical characteristics, and therefore only object 26a will be described in detail. In particular, object 26a includes a surface 27 that faces generally upward as object 26a is moved by transmission line 30. A matrix code 24a is applied to the upper surface 27 for identification purposes. Similar matrix codes are applied to the upper surfaces of objects 26b and 26c.
Still referring to FIG. 1, the image processing system includes a camera 22 including optics 24 defining a field of view 28 beneath the camera 22, through which the transmission line 30 moves objects 26a, 26b, 26c, etc. Thus, as each object moves along the direction of travel 25, its upper surface 27 enters the field of view 28. In the present example, the field of view 28 is large enough that the entire upper surface 27 is located within the field of view 28 at one point or another, so any code applied to the upper surface 27 of an object passes through the field of view 28 and can be captured in an image by the camera 22. The imaging system also includes a computer or processor 14 (or multiple processors) that receives images from the camera 22, examines each image to identify sub-portions of the image that may include instances of the matrix code as code candidates, and then attempts to decode each code candidate to identify objects within the current field of view 28. To this end, the camera 22 is connected to the processor 14. An interface device 16/18 is also connected to the processor 14 to provide video and audio output to a system user, as well as to receive input from the user to control the imaging system, set imaging system operating parameters, address imaging system problems, and the like. In at least some embodiments, the imaging system further includes a tachometer 33 disposed adjacent to the transmission line 30 that can be used to identify the direction of travel 25 and/or the speed at which the transmission line 30 transports objects through the field of view.
Referring now to FIG. 2, an exemplary image of the field of view 28 is shown. Within the field of view 28, as shown, there are three code candidates 24a, 24b, 24c, of which only code candidate 24a corresponds to an actual matrix code. Code candidates 24b and 24c are simply image artifacts that, upon initial analysis of the image, have some characteristics consistent with an actual matrix code to be decoded. The direction of travel 25 of the transmission line 30 shown in FIG. 1 runs from right to left in FIG. 2. Thus, an object entering the field of view 28, and a code marked thereon, enters the field of view from the right and passes to the left. For this reason, the right side 36 of the field of view 28 is referred to as the entrance edge, while the left side 38 is referred to as the exit edge.
Referring again to FIGS. 1 and 2, the camera 22 is a high speed camera that obtains successive images of the field of view 28 very quickly. In many cases, as a code on an object moves through the field of view 28, the camera 22 will obtain a large number of successive images of the code at different positions between the entrance and exit edges 36 and 38, respectively; for example, the camera 22 may obtain 20 individual images as the code 24a moves across the field of view 28, with the code 24a appearing in each of the 20 images. Although processor 14 is a high speed processor, in many cases processor 14 is not capable of processing all of the data in all of the successive images produced by camera 22. This is particularly true where a large number of false code candidates are present in the acquired images; the number of false candidates in an image typically depends on the environment in which the image was acquired. For example, where illumination of the field of view is insufficient, the number of false candidates may be large. As another example, the speed at which the transmission line 30 moves objects through the field of view 28 may affect the number of false candidates identified. In the present description, it will be assumed that, at least some of the time, the camera 22 produces images so rapidly that the processor 14 cannot process all code candidates in all acquired images.
It has been recognized that many code candidates in an obtained image will already have been the subject of at least one, and in many cases multiple, decoding attempts by the system processor 14 in one or more previous images. For example, referring again to fig. 2 and now also to fig. 3, fig. 3 shows the field of view 28 at a point in time slightly after the image shown in fig. 2 was obtained, at which time the code candidate 24c has moved to a different position along the direction of travel 25 within the field of view 28. Similarly, candidate 24a has moved along the direction of travel within the field of view 28. Candidate 24b has moved out of the field of view 28 and therefore no longer appears in fig. 3. By the time the image of fig. 3 is obtained, it is possible that processor 14 has attempted to decode each of candidates 24a and 24c at least once, and in many cases several times, because candidates 24a and 24c appear in the image shown in fig. 2 and may appear in many other images obtained between the times at which the images of figs. 2 and 3 were obtained.
Still referring to fig. 3, three additional code candidates 24d, 24e, and 24f appear in the image that do not appear in the image shown in fig. 2. Here, the artifacts causing candidates 24d-24f have moved into the field of view 28 between the times at which the images of figs. 2 and 3 were acquired. In this case, the processor 14 may not have attempted to decode any of the candidates 24d-24f in a previous image, or may have attempted each of these relatively new candidates only a few times, a number that is less than the number of times the processor has attempted to decode the relatively older candidates 24a and 24c. Whether, and how many times, the processor 14 has attempted to decode candidates within an acquired image of the field of view 28 depends on the direction of travel 25, the speed at which the transmission line 30 (see again fig. 1) moves objects and the codes marked thereon through the field of view 28, the speed at which the imaging system acquires images, and the speed at which the processor decodes the identified candidates.
Thus, in at least some embodiments, the processor 14 may be programmed to efficiently sort code candidates within an obtained image into a candidate order for decoding, where candidates new to the field of view 28 are tried first, followed by candidates that have been within the field of view 28 for a relatively long period of time.
Referring now to fig. 4, a process 40 is shown that may be performed by processor 14 to rank code candidates within an image, where code candidates new to the field of view are processed first. At block 42, the processor 14 obtains an image from the camera 22. At block 44, the processor 14 identifies the direction of travel through the field of view 28 and the speed of the transmission line 30. Here, the linear speed and direction of travel of the conveyor may be determined by the tachometer 33. In other embodiments, the transmission line speed and direction of travel may be preprogrammed so that a tachometer is not necessary. In still other embodiments, the processor 14 may be further programmed to control the line speed and direction of travel, and thus may already have that information for processing purposes. Continuing, at block 46, code candidates within the obtained image are identified. At block 48, based at least in part on the direction/speed of travel, processor 14 creates a candidate order in which, consistent with that described above, relatively newer code candidates within the field of view 28 are placed at the front of the candidate order, while relatively older candidates are placed near the end of the order.
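The ordering step at block 48 can be sketched in code. This is a minimal illustration, not the patented implementation: it assumes each candidate is an (id, x-position) pair and that, with right-to-left travel, a candidate's distance from the entrance (right) edge grows the longer it has been in view, so sorting by that distance puts new arrivals first. The names `fov_width` and the positions are illustrative assumptions.

```python
def order_newest_first(candidates, fov_width, direction="right_to_left"):
    """Return candidates (id, x) sorted so the newest arrivals come first."""
    if direction == "right_to_left":
        # Entrance edge at x == fov_width; small distance => recent arrival.
        key = lambda c: fov_width - c[1]
    else:
        # Entrance edge at x == 0 for left-to-right travel.
        key = lambda c: c[1]
    return sorted(candidates, key=key)

# With the fig. 2 layout, 24c sits nearest the entrance edge and sorts first.
candidates = [("24a", 400), ("24b", 50), ("24c", 700)]
order = order_newest_first(candidates, fov_width=800)
```

A real implementation would derive the entrance edge from the tachometer-reported direction of travel rather than a string flag.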
At block 50, processor 14 attempts to decode the next code candidate in the candidate order. On the first pass through block 50, the processor attempts to decode the first code candidate in the candidate order for an image; on the second pass through block 50, the processor 14 attempts to decode the second code candidate in that order; and so on. At block 52, processor 14 determines whether the code candidate was successfully decoded. If the code candidate has been successfully decoded, control passes to block 56, where the processor 14 indicates that the decoding attempt has been successful, after which control passes to block 58.
Referring again to block 52, if the code candidate was not successfully decoded, control passes to block 58. At block 58, the processor 14 determines whether the next image has been obtained. If a next image has been acquired, control passes back to block 46, where the processor 14 again identifies all code candidates within the acquired image, and the process described above continues. Here, it should be appreciated that when the next image is obtained at block 58 before processor 14 has attempted to decode all code candidates within the current image, in at least some embodiments, the candidates that the processor has not yet attempted to decode are simply discarded. Thus, for example, referring back to FIG. 3, it may be that processor 14 only has time to attempt to decode the relatively newer candidates 24d, 24e, and 24f before processor 14 obtains the next image, in which case candidates 24a and 24c in FIG. 3 would be discarded. Here, it should be appreciated that processor 14 may have attempted to decode candidates 24a and 24c several times each in previous images, when those candidates appeared earlier in the candidate order.
In at least some embodiments, the processor 14 can be programmed to stop decoding the code candidate it is currently attempting to decode when the next image is obtained. In other embodiments, if the processor has substantially finished executing the decoding algorithm, the processor may continue the current decoding attempt until completion.
Referring again to fig. 4, if the next image is not obtained at block 58, control passes down to block 60, where the processor 14 determines whether all code candidates in the order have been tried. If the processor has not attempted to decode all code candidates in the order, control passes back to block 50, where the processor attempts to decode the next code candidate in the candidate order. In some cases, for a particular image, the processor will be able to try all code candidates in the order. In that case, at block 60, control passes back to block 42, where processor 14 waits for the next image of the field of view 28 to be acquired.
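The loop through blocks 50-60, including the discard-on-new-image behavior, can be sketched as follows. This is a simplified single-threaded sketch: `try_decode` stands in for whatever decoding algorithm is used, and `next_image_ready` stands in for the camera's new-image event, neither of which the description specifies in detail.

```python
def decode_until_next_image(ordered_candidates, try_decode, next_image_ready):
    """Attempt candidates in order; stop when a new image preempts the loop.

    Returns (decoded, discarded) lists of candidates.
    """
    decoded, discarded = [], []
    for i, cand in enumerate(ordered_candidates):
        if next_image_ready():
            # Candidates not yet attempted are simply discarded (block 58).
            discarded = list(ordered_candidates[i:])
            break
        if try_decode(cand):
            decoded.append(cand)
    return decoded, discarded

# Simulate a next-image event arriving after two decode attempts.
events = iter([False, False, True])
dec, disc = decode_until_next_image(
    ["24d", "24e", "24f", "24a", "24c"],
    try_decode=lambda c: c == "24d",
    next_image_ready=lambda: next(events),
)
```

In practice the new-image check would be driven by a camera interrupt or frame-buffer flag rather than polling between candidates.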
Consistent with another aspect of at least some embodiments of the present invention, it has been recognized that, in any image of the field of view 28, at least some code candidates within the image will likely not appear in subsequent images, and thus it may be beneficial to place code candidates that may not appear in subsequent images at or near the top of the candidate order. For example, referring back to FIG. 2, candidate 24b is proximate to the exit edge 38 of the field of view 28 and, therefore, is relatively less likely to appear in subsequent images of the field of view 28 than candidates 24a and 24c. Consistent with this, referring to fig. 3, candidates 24a and 24c appear in the subsequent image, while candidate 24b no longer appears. Here, the image shown in fig. 2 may represent the last possible image in which a decoding attempt can be made for candidate 24b.
Thus, referring back to FIG. 4, in at least some embodiments, the candidate order created at block 48 will place code candidates that may not appear in subsequent images of the field of view at the top of the candidate order, and candidates new to the field of view 28 at the back of the order. In fig. 2, for example, this means that candidates 24a through 24c are sorted into the order 24b, 24a, 24c. Except for the difference in the order of candidates, the process 40 described with respect to fig. 4 will be performed in the manner described above.
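This exit-first ordering is simply the mirror image of the newest-first sort key. As a minimal sketch, assuming right-to-left travel so the exit edge sits at x == 0, sorting by x ascending reproduces the 24b, 24a, 24c ordering from fig. 2; the candidate positions are illustrative assumptions.

```python
def order_exit_first(candidates):
    """candidates: list of (id, x) pairs; exit edge assumed at x == 0.

    Candidates nearest the exit edge (about to leave the field of view)
    are placed at the top of the order.
    """
    return sorted(candidates, key=lambda c: c[1])

# Fig. 2 layout: 24b is near the exit edge, 24c near the entrance edge.
order = order_exit_first([("24a", 400), ("24b", 50), ("24c", 700)])
```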
In some embodiments, instead of generating a candidate order, the field of view may be divided into different regions of interest, and the code candidates within the different regions may be processed in an order that depends on the region in which each code candidate appears. For example, referring again to fig. 2, the field of view 28 is shown divided into first and second regions of interest 32 and 34, respectively, wherein the first region of interest 32 includes a first half of the field of view 28 adjacent or proximate to the entrance edge 36 and the second region of interest 34 includes a second half of the field of view 28 adjacent or proximate to the exit edge 38. Here, in embodiments where code candidates relatively new to the field of view 28 should be processed first, the processor 14 may attempt to decode candidates in the first region of interest 32 before attempting to decode candidates in the second region of interest 34. Similarly, in embodiments where code candidates that are more likely not to appear in subsequent images should be processed first, the processor 14 will process the candidates in the second region of interest 34 before processing any of the candidates in the first region of interest 32.
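The two-region partition can be sketched as a simple split on position. This is an illustrative sketch only: it assumes right-to-left travel, a half-width boundary, and (id, x) candidate pairs, none of which are mandated by the description.

```python
def split_by_region(candidates, fov_width):
    """Return (entrance_half, exit_half) candidate lists.

    With right-to-left travel, the first region of interest is the half of
    the field of view adjacent the entrance (right) edge, i.e. x >= width/2.
    """
    entrance, exit_ = [], []
    for cand_id, x in candidates:
        (entrance if x >= fov_width / 2 else exit_).append((cand_id, x))
    return entrance, exit_

ent, exi = split_by_region([("24a", 400), ("24b", 50), ("24c", 700)], 800)
# Newest-first embodiments process `ent` before `exi`; exit-first
# embodiments process `exi` before `ent`.
```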
Referring now to fig. 5, an exemplary process 170 is shown in which processor 14 processes code candidates according to the region of interest of the field of view in which each candidate appears. At block 172, processor 14 obtains an image of the field of view 28. At block 174, code candidates within the obtained image are identified. At block 176, the processor 14 identifies regions of interest within the image. Referring again to FIG. 2, in the example of FIG. 2, in at least some embodiments, processor 14 identifies first and second regions of interest 32 and 34, respectively. At block 178, the processor 14 attempts to decode the next code candidate in the first region of interest 32. Here, on the first pass through block 178, for an image, processor 14 attempts to decode a first code candidate within the first region of interest 32; on the second pass through block 178, processor 14 attempts to decode a second code candidate within the region of interest 32; and so on. At block 180, processor 14 determines whether the code candidate has been successfully decoded. If the code candidate has been successfully decoded, control passes to block 182, where the processor 14 indicates successful decoding, after which control passes to block 186. At decision block 180, if the code candidate was not successfully decoded, control passes to block 186.
Still referring to FIG. 5, at block 186, processor 14 determines whether processor 14 has received the next image from camera 22. If a next image has been received, control passes back to block 174, where processor 14 identifies code candidates in the new image, and the process continues. Here, it should be appreciated that if the next image is obtained before the processor 14 has attempted to decode all of the code candidates within the first region of interest 32, the code candidates within the first region that the processor has not attempted to decode are discarded. Furthermore, all code candidates within the second region of interest 34 are also discarded in favor of attempting to decode candidates within the new image. In other embodiments, after a new image is obtained, processor 14 may be programmed to complete decoding attempts for any candidates within the first region before processing the new image.
Still referring to block 186, if a next image is not yet obtained, control passes to block 184. At block 184, processor 14 determines whether the processor has attempted to decode all code candidates within the first region of interest. If there are additional code candidates within the first region of interest that have not been attempted, control passes back to block 178, where the process continues as described above. Once all code candidates in the first region of interest have been attempted, control passes to block 188, at which the processor 14 attempts to decode the next code candidate within the second region of interest. Here, on the first pass through block 188, for an image, processor 14 attempts to decode a first code candidate within the second region of interest 34; on the second pass through block 188, processor 14 attempts to decode a second code candidate within the region of interest; and so on.
Referring again to fig. 5, at block 190, processor 14 determines whether the code candidate was successfully decoded. If the candidate was successfully decoded, control passes to block 192, where the processor 14 indicates successful decoding. After block 192, control passes to block 193. If the code candidate was not successfully decoded at block 190, control passes down to block 193. At block 193, the processor 14 determines whether the next image has been obtained. If a next image has been acquired, control passes back to block 174, where the process continues. Here, it should again be appreciated that if the next image is obtained before the processor attempts to decode all code candidates within the second region of interest 34, the code candidates that were not attempted are discarded.
Referring again to block 193, if the next image has not yet been obtained, control passes down to block 194, where the processor 14 determines whether the processor has attempted to decode all code candidates within the second region of interest. If there is at least one untried code candidate within the second region of interest, control passes back to block 188 and the process continues. Once all code candidates within the second region of interest have been tried, control passes back to block 172, where the processor 14 waits to receive the next image from the camera 22.
There are many different algorithms that can be used to attempt to decode code candidates within an acquired image. Some of these algorithms are relatively simple, while others are relatively complex, with complex decoding algorithms generally being more successful in decoding candidates than simple algorithms. However, in many cases, complex algorithms are extremely computationally intensive and therefore require relatively more time to complete than simple algorithms. In at least some embodiments, it is contemplated that different decoding algorithms may be used to attempt to decode code candidates in an image, where the decoding algorithm used may depend on where the code appears in the acquired image, the direction of travel of the code candidate through the field of view, the speed of travel, and so forth. For example, referring back to fig. 2, different decoding algorithms may be used to attempt to decode code candidates in the different regions of interest 32 and 34. For instance, in some cases, a first, simple decoding algorithm may be used to decode any code candidate within the first region of interest 32, while a second, more complex decoding algorithm may be used to attempt to decode all code candidates within the second region of interest 34 until the next image is obtained, at which point any code candidates that the processor has not attempted to decode will be discarded. Consistent with this embodiment, referring to fig. 5, the parentheticals in blocks 178 and 188 indicate that first and second different decoding algorithms are used to attempt to decode code candidates within the first and second regions of interest, respectively.
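The region-dependent algorithm selection can be sketched as a simple dispatch. The two decoders below are placeholders with invented threshold logic, since the description deliberately leaves the algorithms unspecified; only the dispatch by region reflects the scheme above.

```python
def simple_decode(candidate):
    # Fast but less robust placeholder: succeeds only on clean candidates.
    return candidate["quality"] > 0.9

def complex_decode(candidate):
    # Slower, more robust placeholder: tolerates lower-quality candidates.
    return candidate["quality"] > 0.5

def decode_by_region(candidate, in_first_region):
    # New arrivals (first region) get the cheap attempt; candidates about
    # to exit (second region) get the thorough one, since the current image
    # may be the last chance to decode them.
    algo = simple_decode if in_first_region else complex_decode
    return algo(candidate)
```

The rationale: a candidate that fails the cheap attempt in the first region will likely reappear in later images, whereas a second-region failure is unrecoverable.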
In at least some embodiments, the processor 14 may be programmed to associate code candidates in an acquired image with code candidates in previously processed images, and in particular to associate code candidates in the acquired image with candidates that were successfully decoded in previous images, and may order the candidates for decoding attempts according to which candidates have likely been previously decoded and which candidates have not. To this end, it has been recognized that when a code candidate in an acquired image has likely already been decoded, another successful decoding of that candidate will simply confirm the results of the previous successful decoding. Accordingly, in at least some embodiments, the processor is programmed to attempt to decode codes that have not been successfully decoded in a previous image before attempting to decode codes that have likely been previously decoded.
Referring now to fig. 6, a process 240 is shown that may be performed by the processor 14 to attempt to decode code candidates based on whether those code candidates were successfully decoded in a previous image. At block 242, the processor 14 obtains an image of the field of view from the camera 22. At block 244, code candidates are identified within the image. At block 246, code candidates that have likely been decoded in a previous image are identified based on the speed of the transmission line 30, the direction of travel of the transmission line, and the locations of the code candidates in the previous and current images. At block 248, the processor 14 identifies code candidates that have likely not been decoded in a previous image based on the speed, direction of travel, and locations of the candidates in the previous and current images. At block 252, processor 14 creates a candidate order in which candidates that likely were not tried in a previous image are placed near the top of the order. Here, as above, the order may be further determined by the positions of the candidates in the current image. For example, new code candidates that have never been identified in a previous image may be placed higher in the order than candidates that were identified in a previous image but not successfully decoded. At block 250, processor 14 attempts to decode the next code candidate in the order that was likely not decoded in a previous image. If the code candidate is successfully decoded at block 254, control passes down to block 255, where the processor 14 stores the location of the decoded code candidate in memory for subsequent use at block 246 described above. After block 255, control passes to block 256, in which the processor 14 indicates successful decoding, and then control passes down to decision block 258. At block 254, if the code candidate was not successfully decoded, control passes down to block 258.
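The classification at blocks 246/248 can be sketched as a back-projection test: using the conveyor speed and direction, project a candidate's current position back one frame and check whether a stored success sits near the projected position. The tolerance, pixel units, and data layout below are assumptions for illustration, not details from the description.

```python
def was_likely_decoded(x_now, decoded_positions, speed_px_per_frame,
                       tolerance=10.0):
    """True if a stored prior success projects onto this candidate.

    With right-to-left travel, one frame ago the candidate sat
    speed_px_per_frame pixels closer to the entrance edge (larger x).
    decoded_positions holds x locations stored at block 255.
    """
    x_prev = x_now + speed_px_per_frame
    return any(abs(x_prev - xd) <= tolerance for xd in decoded_positions)

# A success stored at x == 410 in the prior image matches a candidate now
# at x == 380 when travel is 30 px/frame (projects back to 410).
```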
At decision block 258, the processor 14 determines whether the next image has been obtained from the camera 22. If a next image has been obtained from the camera, control passes back to block 244, code candidates in the next image are identified, and the process continues. If the next image has not yet been obtained at block 258, control passes to block 260, where processor 14 determines whether the processor has attempted to decode all code candidates that were likely not decoded in a previous image. If the processor has not attempted to decode all code candidates that were likely not decoded in a previous image, control passes back to block 250, where the processor 14 attempts to decode the next such code candidate, and the process continues.
If, at block 260, the processor has attempted to decode all code candidates that were likely not decoded in a previous image, control passes down to block 262, where the processor 14 attempts to decode the next code candidate that has likely been decoded in a previous image. At decision block 264, the processor 14 determines whether the code candidate was successfully decoded. If the code candidate was successfully decoded, control passes to block 266, at which processor 14 stores the location of the decoded candidate, and then to block 268, at which processor 14 indicates successful decoding. At block 264, if the code candidate was not successfully decoded, control passes down to block 270.
At block 270, processor 14 determines whether the next image has been obtained from camera 22. If a next image has been obtained, control passes back to block 244, where the process continues as described above. If the next image has not been obtained, control passes down to block 272, where processor 14 determines whether the processor has attempted to decode all code candidates that were likely decoded in a previous image. If the processor has tried all of the candidates that were likely previously decoded, control passes back to block 242, where the process continues as described above. If there is at least one more code candidate that was likely decoded in a previous image and that the processor has not attempted to decode in the current image, control passes back up to block 262, where the process continues.
Still referring to fig. 6, in at least some embodiments, the decoding algorithm used to attempt to decode candidates that have likely not been previously decoded and the decoding algorithm used to decode code candidates that have likely been previously decoded may be different, and may be selected to optimize the decoding process. For example, for code candidates that have likely not been previously decoded, a first, relatively complex decoding algorithm may be used (see the parenthetical in block 250), while for codes that have likely been decoded in earlier images, a second, relatively simple decoding algorithm may be used to attempt to decode those candidates (see the parenthetical in block 262).
Referring again to fig. 6, the embodiment described above with respect to fig. 6 orders the code candidates for attempted decoding according to whether the code candidates were likely successfully decoded in a previous image, whereas in other embodiments, the code candidates may be ordered according to whether the processor has likely attempted to decode the candidates. For example, if the processor attempted to decode a first code candidate in a previous image and did not attempt to decode a second code candidate in that image, the processor 14 may attempt to decode the second code candidate before the first code candidate without regard to whether the attempt to decode the first code candidate in the previous image was successful. In other embodiments, the processor may be programmed to attempt to decode the first code candidate before the second code candidate under the same circumstances. To this end, blocks 246, 248, 250, 260, 262, and 272 each include one of the qualifiers "(possibly decoded)" or "(possibly not decoded)", and in these blocks those qualifiers may be replaced with the phrases "(possibly tried)" and "(possibly not tried)", respectively. When fig. 6 is modified in this manner, process 240 corresponds to a system in which processor 14 is programmed to sort decoding attempts based on whether the processor has likely attempted to decode code candidates in a previous image.
In still other embodiments, the processor 14 may be programmed to rank the code candidates according to both whether a code was likely successfully decoded in a previous image and whether the code candidate will likely appear in a subsequent image. To this end, referring to FIG. 7, another process 350 is shown that is performed by the processor 14 consistent with at least some aspects of the present invention. At block 352, the processor 14 obtains an image from the camera 22. At block 354, code candidates in the image are identified. At block 356, processor 14 identifies, as group 1 candidates, code candidates that were likely not decoded in at least one previous image and that will likely not appear in subsequent images. At block 358, processor 14 identifies, as group 2 candidates, code candidates that were likely not decoded in a previous image and that will likely appear in a subsequent image. At block 360, processor 14 identifies, as group 3 candidates, code candidates that were likely decoded in a previous image and that will likely not appear in a subsequent image. At block 362, processor 14 identifies, as group 4 candidates, code candidates that were likely decoded in a previous image and that will likely appear in a subsequent image.
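The four-way classification of blocks 356-362 keys on two booleans: likely decoded before, and likely to appear again. A minimal sketch, assuming right-to-left travel toward an exit edge at x == 0, prior successes tracked by candidate id, and a per-frame travel distance in pixels (all illustrative assumptions):

```python
def classify(cand, speed_px_per_frame, decoded_before):
    """Return the group number (1-4) for a candidate dict with 'id' and 'x'.

    decoded_before: set of candidate ids stored from prior successful decodes.
    """
    decoded = cand["id"] in decoded_before
    # The candidate reappears if its projected next position is still in view.
    reappears = cand["x"] - speed_px_per_frame > 0
    if not decoded and not reappears:
        return 1  # never decoded and about to exit: highest priority
    if not decoded and reappears:
        return 2
    if decoded and not reappears:
        return 3
    return 4      # already decoded and will be seen again: lowest priority

# Sorting by group number reproduces the group 1 -> 4 processing order.
ordered = sorted(
    [{"id": "a", "x": 20}, {"id": "b", "x": 500}, {"id": "c", "x": 25}],
    key=lambda c: classify(c, 30, decoded_before={"c"}),
)
```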
Continuing, at block 364, processor 14 attempts to decode the next group 1 code candidate (i.e., the next code candidate that was likely not decoded in a previous image and will likely not be present in a subsequent image). Here, processor 14 attempts to decode a first group 1 candidate for an image on the first pass through block 364, a second group 1 candidate on the second pass through block 364, and so on.
At block 366, processor 14 determines whether the code candidate has been successfully decoded. If the code candidate has been successfully decoded, control passes to blocks 368 and 370, where the processor 14 stores the location of the decoded candidate and indicates successful decoding. After block 370, control passes to block 372. At block 366, if the code candidate was not successfully decoded, control passes down to block 372.
At block 372, processor 14 determines whether the next image has been obtained. If a next image has been obtained, control passes back to block 354, where the process is repeated. Here, if the next image is obtained at block 372, all code candidates in the current image that the processor has not attempted to decode are discarded. If, at block 372, a next image has not been obtained, control passes to block 374, where processor 14 determines whether all group 1 candidates have been tried. If not all group 1 candidates have been tried, control passes back to block 364, where the process continues. At block 374, once all group 1 candidates have been tried, control passes to block 380 of FIG. 8.
Referring now to fig. 8, at block 380, processor 14 attempts to decode the next group 2 candidate. Here, processor 14 attempts to decode the first group 2 candidate on the first pass through block 380, the second group 2 candidate on the second pass through block 380, and so on. At block 382, processor 14 determines whether the group 2 candidate has been successfully decoded. If the candidate has been successfully decoded, control passes to blocks 384 and 386, where the processor 14 stores the location of the decoded candidate and indicates a successful decode. After block 386, control passes to block 388. At block 382, if the code candidate was not successfully decoded, control passes to block 388.
At block 388, the processor 14 determines whether the next image has been obtained from the camera 22. If the next image has been obtained from the camera, control passes back to block 354 in FIG. 7, where the process continues. Here, if the next image is obtained at block 388, code candidates in the current image that the processor has not attempted to decode are discarded. If, at block 388, the next image has not been obtained, control passes to block 390, where the processor 14 determines whether all group 2 candidates have been tried. If at least one group 2 candidate has not been attempted, control passes back to block 380, where the process continues. If all group 2 candidates have been tried at block 390, control passes to block 392.
At block 392, processor 14 attempts to decode the next group 3 candidate. Here, processor 14 attempts to decode the first group 3 candidate on the first pass through block 392, the second group 3 candidate on the second pass through block 392, and so on. At block 394, processor 14 determines whether the code candidate has been successfully decoded. If the code candidate has been successfully decoded, control passes to blocks 396 and 398, where the processor 14 stores the location of the decoded candidate and indicates a successful decode. After block 398, control passes to block 400. At block 394, if the code candidate was not successfully decoded, control passes to block 400.
At block 400, the processor 14 determines whether the next image has been obtained. If the next image has been obtained, control passes back to block 354 in FIG. 7, and code candidates in the current image that the processor has not attempted to decode are discarded. If a next image is not obtained at block 400, control passes to block 402. At block 402, processor 14 determines whether the processor has attempted to decode all group 3 candidates. If at least one group 3 candidate has not been attempted, control passes back to block 392 and the process continues. If all group 3 candidates have been tried, control passes to block 404.
At block 404, processor 14 attempts to decode the next group 4 code candidate. Here, processor 14 attempts to decode the first group 4 code candidate on the first pass through block 404, the second group 4 code candidate on the second pass through block 404, and so on. At block 406, processor 14 determines whether the code candidate was successfully decoded. If the code candidate has been successfully decoded, control passes to blocks 408 and 410, where the processor 14 stores the location of the decoded candidate and indicates a successful decode. After block 410, control passes to block 412. At block 406, if the code candidate is not successfully decoded, control passes to block 412. At block 412, the processor 14 determines whether the next image has been obtained from the camera 22. If the next image has been obtained, control passes to block 354 in FIG. 7, the process continues, and the group 4 code candidates that the processor has not attempted to decode are discarded. If, at block 412, the next image has not been obtained, control passes down to block 414, where the processor 14 determines whether the processor has attempted to decode all group 4 candidates. If at least one group 4 candidate has not been attempted, control passes back to block 404, where the process continues as described above. If all group 4 candidates have been tried, control passes to block 352 in FIG. 7, where the processor 14 waits to receive the next image from the camera 22.
In some embodiments, it is contemplated that processor 14 may be programmed to attempt to decode at least a subset of the code candidates in an image, when possible, using at least first and second different decoding algorithms before the next image is obtained. For example, referring again to fig. 2, the processor 14 may be programmed to initially attempt to decode all code candidates within the second region of interest 34 (i.e., all code candidates in the second half of the field of view adjacent the exit edge 38) using a relatively simple and less time consuming decoding algorithm, then, if time permits before the next new image event occurs, to attempt to decode all code candidates within the first region of interest 32 using the relatively simple algorithm, and then, if time still permits before the next image event occurs, to attempt to decode code candidates in the first region of interest 32 using a more complex decoding algorithm. Other orders of using two or more different decoding algorithms for different subsets of code candidates in interleaved form are contemplated.
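Such interleaving can be expressed as a pass schedule of (region, algorithm) pairs, attempted until a new image preempts the loop. The schedule, decoder predicates, and preemption callback below are illustrative assumptions; the pass order shown follows the example above, and other orders are equally valid.

```python
PASSES = [
    ("region_2", "simple"),   # exit half first, cheap attempt
    ("region_1", "simple"),   # then the entrance half, cheap attempt
    ("region_1", "complex"),  # then revisit with the costlier algorithm
]

def run_schedule(candidates_by_region, decoders, next_image_ready):
    """Run each pass in PASSES until a new image preempts the loop."""
    decoded = []
    for region, algo in PASSES:
        for cand in candidates_by_region.get(region, []):
            if next_image_ready():
                return decoded  # new image: remaining passes are abandoned
            if cand not in decoded and decoders[algo](cand):
                decoded.append(cand)
    return decoded

# Toy decoders: the complex pass recovers a candidate the simple one missed.
decoders = {"simple": lambda v: v > 5, "complex": lambda v: v > 2}
got = run_schedule({"region_2": [7, 1], "region_1": [3]}, decoders,
                   lambda: False)
```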
In still other embodiments, the processor 14 may be programmed to rank the code candidates within the different candidate groups according to position in the camera field of view. For example, in the process described above with respect to figs. 7 and 8, where the code candidates are grouped into four different candidate groups based on whether the code candidates have likely been previously decoded and whether the candidates will appear in subsequent images, the candidates in group 1 (i.e., candidates that have not been previously decoded and may not appear in subsequent images) may be ordered such that candidates near the entrance edge of the field of view are ranked higher than candidates further from the entrance edge. Candidates in groups 2 through 4 may be ranked in a similar manner.
In other embodiments, it is contemplated that, when the next image event occurs before the processor has attempted to decode all code candidates in the current image, the unattempted code candidates may be stored for subsequent consideration. For example, assume that there are ten code candidates in a first image and that the processor can only attempt to decode four of the ten before the second image is obtained (i.e., before the next image event), so that, at least initially, the processor does not attempt to decode six of the ten code candidates in the first image. Assume further that there are only two code candidates in the second image and that the processor completes decoding attempts for both before the third image is obtained, so that the processor has time to attempt additional code candidates before the third image arrives. In this case, the processor may be programmed to retrieve at least a subset of the code candidates from the first image and attempt to decode those candidates before the third image is obtained.
The subset of candidates from a previous image chosen for decoding attempts may be based on the candidates that appear in subsequent images. For example, in the scenario above, if only two code candidates appear in the second image, processor 14 may be programmed to correlate the two candidates from the second image with two of the six unattempted candidates in the first image, and may attempt to decode those two candidates from the first image before receiving the third image.
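The correlation step could be sketched as below, assuming a simple model in which a candidate's position advances by a known conveyor displacement between images; the pixel tolerance and field names are assumptions for illustration.

```python
# Hypothetical correlation of current-image candidates with unattempted
# candidates from the previous image.

def correlate(prev_unattempted, current, displacement, tol=10):
    """Return previous-image candidates whose predicted position
    (old x plus conveyor displacement) matches some candidate actually
    observed in the current image, within a pixel tolerance."""
    matched = []
    for old in prev_unattempted:
        predicted = old["x"] + displacement
        if any(abs(cur["x"] - predicted) <= tol for cur in current):
            matched.append(old)
    return matched

# Toy usage: three unattempted candidates; two reappear in the next image.
prev = [{"id": "p%d" % i, "x": x} for i, x in enumerate([10, 50, 90])]
curr = [{"id": "q0", "x": 41}, {"id": "q1", "x": 118}]
carry_over = correlate(prev, curr, displacement=30)
```

The candidates in `carry_over` are the ones worth revisiting while waiting for the third image.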
It should be appreciated that different embodiments of the present invention may consider different factors when ordering code candidates in an acquired image for decoding attempts. For example, the processor may be programmed to consider any one or a subset of the following factors: the location of a code candidate in the acquired image, the direction of travel of the code candidate through the camera's field of view, the speed of travel of the code candidate through the field of view, previous attempts to decode the candidate in a previous image, previous successful decoding of the candidate, whether the code candidate is new to the field of view or about to exit the field of view, and so on.
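One hypothetical way to fold several of these factors into a single priority score is sketched below; the weights, feature names, and exit-prediction model are purely illustrative and not taken from the disclosure.

```python
# Hypothetical composite priority score over the factors listed above.

def priority(cand, fov_width, speed, period):
    """Higher score means attempted earlier. A candidate predicted to exit
    the FOV before the next image gets a large boost; one already decoded
    in a prior image is demoted; a newly arrived candidate gets a nudge."""
    exits_next = cand["x"] + speed * period > fov_width
    score = 0.0
    score += 2.0 if exits_next else 0.0
    score -= 1.0 if cand["decoded_before"] else 0.0
    score += 0.5 if cand["new_in_fov"] else 0.0
    return score

# Toy usage: FOV 100 px wide, belt moving 20 px per acquisition period.
cands = [
    {"id": "a", "x": 90, "decoded_before": False, "new_in_fov": False},
    {"id": "b", "x": 10, "decoded_before": False, "new_in_fov": True},
    {"id": "c", "x": 95, "decoded_before": True, "new_in_fov": False},
]
ordered = sorted(cands, key=lambda c: -priority(c, fov_width=100, speed=20, period=1.0))
```

Note the resulting order matches the group scheme described earlier: exiting-and-undecoded first, exiting-but-decoded next, then new arrivals.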
While each of the above embodiments requires that code candidates the processor has not attempted to decode be discarded when a new image is obtained, in at least some embodiments it is contemplated that, in at least some instances, the new image may not be processed until the processor has attempted to decode at least a subset of the code candidates appearing in the current image. For example, referring back to FIG. 7, if the group 1 candidates include code candidates that may not have been decoded in a previous image and will likely not appear in the next image, the processor 14 may be programmed to complete decoding attempts for all group 1 candidates before receiving the next image for processing, even if the camera produces the next image before those attempts are complete. Alternatively, if the next image is obtained before all group 1 attempts are complete, the next image may be stored or only partially processed until all group 1 candidates have been attempted. In other embodiments, the processor may be programmed to require decoding attempts for other code candidate subsets before receiving or processing a next image.
Although many of the embodiments described above stop attempting to decode code candidates when the next image is obtained, other next image events may cause the processor to stop attempting to decode candidates. For example, the processor 14 may be programmed with a timeout period after which decoding attempts for an image are abandoned rather than waiting for the next image to be obtained. Here, the timeout period may be calculated to be similar to, or slightly less than, the period typically required to obtain the next image. As another example, the processor 14 may be programmed to attempt to decode at most a maximum number of code candidates, where the maximum number is calculated so that the attempts complete before the next image is obtained. Expiration of the timeout period, reaching the maximum number of decoding attempts, and acquisition of the next image are collectively referred to as next image events, each of which operates as a trigger for the processor to discard code candidates it has not attempted (or has attempted for only a subset) before processing the next acquired image. Other next image events are contemplated.
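A sketch of treating a timeout, a maximum attempt count, or camera readiness interchangeably as the next image event might look like the following; the trigger values and factory shape are arbitrary example assumptions.

```python
# Hypothetical unified "next image event" predicate.

import time

def make_next_image_event(timeout_s=None, max_attempts=None, camera_ready=None):
    """Return a predicate that becomes True once any configured trigger
    fires: elapsed wall time, decode-attempt count, or a new camera image."""
    start = time.monotonic()
    def event(attempts_so_far):
        if timeout_s is not None and time.monotonic() - start >= timeout_s:
            return True
        if max_attempts is not None and attempts_so_far >= max_attempts:
            return True
        if camera_ready is not None and camera_ready():
            return True
        return False
    return event

# Toy usage: stop after at most four decode attempts.
event = make_next_image_event(max_attempts=4)
```

The decode loop would call `event(attempts)` before each attempt and discard the remaining candidates once it returns True.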
While all of the embodiments described above include the processor 14 sequentially attempting to decode code candidates one at a time, it should be understood that in at least some embodiments the processor 14 is capable of processing two or more code candidates simultaneously. Here, for example, when code candidates are partitioned into different groups according to their locations in different regions of interest or according to other characteristics, processor 14 may attempt to decode all candidates in one group at the same time and then attempt to decode all candidates in a subsequent group in a similarly simultaneous manner.
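A minimal sketch of this group-at-a-time simultaneous decoding, assuming decode attempts are independent of one another; `try_decode` and the thread-pool approach are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical parallel decoding of one candidate group at a time.

from concurrent.futures import ThreadPoolExecutor

def decode_groups_in_parallel(groups, try_decode, workers=4):
    """For each group in priority order, attempt every candidate in that
    group concurrently before moving on to the next group."""
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves candidate order within each group.
        for group in groups:
            results.append(list(pool.map(try_decode, group)))
    return results

# Toy usage: two priority groups of integer "candidates"; the stand-in
# decoder "succeeds" on odd values.
groups = [[3, 4], [5, 7, 9]]
out = decode_groups_in_parallel(groups, lambda c: c % 2 == 1)
```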
Referring again to FIG. 2, although the field of view 28 is divided into two distinct regions of interest 32 and 34 in the illustrated example, it should be understood that the field of view 28 may be divided into three or more regions of interest, which may be of any shape, including irregular shapes.
While at least some of the above processes skip decoding attempts for code candidates that may have been decoded once in a previous image, in other embodiments the processor may only skip attempts for a code after it has been decoded more than once. For example, the processor 14 may be programmed to require that a code be successfully decoded two or more times before skipping attempts on the same code in subsequent images.
In other embodiments, processor 14 may be programmed to skip decoding attempts for at least some instances of code candidates in some images where the processor has attempted but failed to decode the candidates in one or more previous images. For example, assume that a first code candidate appears in first and second consecutive images and is not successfully decoded in either. In this case, where the first candidate is expected to appear in third, fourth, fifth, and sixth subsequent images, processor 14 may be programmed to skip decoding attempts for the first code candidate in the third, fourth, and fifth images, so that processor 14 next attempts to decode the first code candidate in the sixth image. Skipping decoding attempts in this way may be beneficial because a candidate often appears with similar features in consecutive images, and those features change substantially only in later images, when the object and code are at appreciably different locations in the camera's field of view. Thus, after the processor fails to decode the instances in the first and second images, the instance of the first code candidate in the sixth image will often be more suitable for decoding than the instance in the third image.
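The skip policy in this example (fail twice, skip three images, then attempt again) could be sketched as follows; the failure limit, skip count, and per-candidate history representation are assumptions chosen for illustration.

```python
# Hypothetical per-candidate skip policy across consecutive images.

def should_attempt(history, fail_limit=2, skip_images=3):
    """history: per-image outcomes for one tracked candidate, each entry
    'fail', 'skip', or 'ok'. Decide whether to attempt in the next image."""
    # Count trailing skips since the most recent actual attempt.
    trailing_skips = 0
    for outcome in reversed(history):
        if outcome == "skip":
            trailing_skips += 1
        else:
            break
    fails = sum(1 for o in history if o == "fail")
    if fails >= fail_limit and trailing_skips < skip_images:
        return False          # keep skipping (images 3 through 5 above)
    return True               # attempt again (image 6 above)

# Toy usage: candidate failed in images 1 and 2; simulate images 3 to 6.
hist = ["fail", "fail"]
decisions = []
for _ in range(4):
    if should_attempt(hist):
        decisions.append("attempt")
        hist.append("fail")
    else:
        decisions.append("skip")
        hist.append("skip")
```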
The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
Accordingly, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims.
To apprise the public of the scope of the present disclosure, the following claims are made.
Claims (29)
1. A method for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view FOV and the conveyor system moves the object in a first direction of travel through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, the method comprising the steps of:
providing a processor programmed to perform the steps of:
obtaining an image of the FOV;
for each image:
(i) identifying code candidates in at least part of the image;
(ii) ordering at least a subset of the code candidates for decoding in a candidate order, wherein the candidate order is determined based at least in part on the first direction of travel through the FOV;
(iii) attempting to decode code candidates in an order specified by the candidate order; and
(iv) when a new image event occurs, skipping decoding attempts for at least a portion of the identified code candidates.
2. The method of claim 1, wherein the step of ordering at least a subset of the code candidates comprises: identifying at least first and second regions of interest, ROIs, in the FOV, adjacent an entrance edge and an exit edge of the FOV, respectively; identifying code candidates in each of the at least first and second ROIs; and ordering the code candidates in one of the first and second ROIs for processing before the code candidates in the other of the first and second ROIs.
3. The method of claim 2, wherein code candidates in a first ROI are ordered in candidate order before code candidates in a second ROI.
4. The method of claim 2, wherein code candidates in the second ROI are ordered in candidate order before code candidates in the first ROI.
5. The method of claim 1, wherein the step of ordering at least a subset of the code candidates further comprises the steps of: identifying code candidates that will likely be outside the FOV when the next image is obtained, and ordering those code candidates near the beginning of the candidate order.
6. The method of claim 5, wherein the step of identifying code candidates that will likely be outside the FOV further comprises the steps of: for the code candidates that will likely be outside the FOV when the next image is obtained, identifying code candidates that were likely decoded in a previous image and code candidates that were likely not decoded in a previous image, and ordering the code candidates that were likely not decoded before the code candidates that were likely decoded.
7. The method of claim 5, wherein the step of ordering at least a subset of the code candidates further comprises the steps of: identifying code candidates that are likely new to the FOV, identifying code candidates that were likely in a previous image and that will likely be in the FOV when a subsequent image is obtained, ordering the code candidates that were in a previous image and that will likely be in the FOV when a subsequent image is obtained near the end of the candidate order, and ordering the code candidates that are likely new to the FOV near the approximate middle of the candidate order.
8. The method of claim 1, wherein the step of ordering at least a subset of the code candidates comprises: identifying code candidates that are likely new to the FOV and ordering them near the beginning of the candidate order.
9. The method of claim 1, wherein the step of ordering at least a subset of the code candidates comprises:
identifying code candidates that may be outside the FOV when the next image is obtained and that may not be decoded in the previous image as a first candidate subset;
identifying code candidates that may be outside the FOV when the next image is obtained and that may have been decoded in a previous image as a second subset of candidates;
identifying code candidates that may be in the FOV and that may not be decoded in the previous image when the next image is obtained as a third subset of candidates;
identifying code candidates that may be in the FOV and that may have been decoded in a previous image when a next image was obtained as a fourth subset of candidates; and
the subsets are ordered such that the first candidate subset occurs before the second candidate subset, the second candidate subset occurs before the third candidate subset, and the third candidate subset occurs before the fourth candidate subset.
10. The method of claim 1, wherein, when a new image is obtained, the method further comprises discarding code candidates in the candidate order for which no decoding attempt has been made.
11. The method of claim 1, wherein the conveying system conveys the objects in a first direction of travel at a conveying speed, and wherein the candidate order is determined in part based on the conveying speed.
12. The method of claim 1, wherein at least first and second different decoding algorithms can be used to attempt to decode any one of the code candidates, the method further comprising the steps of: one of the first and second decoding algorithms is assigned to each of at least a subset of the code candidates in a candidate order, wherein the algorithm assigned to each candidate is based at least in part on the candidate order.
13. The method of claim 12, further comprising the step of identifying code candidates that were likely successfully decoded in a previous image and code candidates that were likely not successfully decoded in a previous image, the step of assigning the first and second decoding algorithms comprising assigning the decoding algorithms based at least in part on whether the code candidates were decoded in a previous image.
14. The method of claim 12, wherein the first decoding algorithm requires more time to complete than the second decoding algorithm.
15. The method of claim 1, wherein a new image event occurs when a new image is obtained.
16. The method of claim 1, wherein a new image event occurs when an image acquisition period elapses.
17. The method of claim 1, wherein the first ROI is proximate one of an entrance edge and an exit edge of the FOV and the second ROI is proximate the other of the entrance edge and the exit edge of the FOV.
18. A method for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view FOV and the conveyor system moves the object in a first direction of travel through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, the method comprising the steps of:
providing a processor programmed to perform the steps of:
identifying a first direction of travel through the FOV;
obtaining an image of the FOV;
for each image, where the most recently obtained image is the current image:
(i) identifying code candidates in at least part of the image;
(ii) attempting to decode code candidates near the exit edge of the FOV;
(iii) attempting to decode code candidates near the entrance edge of the FOV after attempting to decode code candidates near the exit edge of the FOV; and
(iv) when a new image event occurs, skipping decoding attempts for at least a portion of the identified code candidates.
19. A method for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view FOV and the conveyor system moves the object in a first direction through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, such that the sensor produces an image having at least first and second different regions of interest, the first and second regions of interest being adjacent to the entrance and exit edges, respectively, the method comprising the steps of:
providing a processor programmed to perform the steps of:
obtaining an image of the FOV;
for each image:
(i) identifying code candidates in at least a portion of the image;
(ii) identifying at least first and second different regions of interest, ROIs, in the obtained image;
(iii) attempting to decode code candidates in the first ROI for a first time;
(iv) after attempting to decode a candidate in the first ROI, attempting to decode a code candidate in the second ROI;
(v) attempting to decode the code candidates in the first ROI a second time after attempting to decode the code candidates in the second ROI; and
(vi) when a new image event occurs, skipping decoding attempts for at least a portion of the identified code candidates.
20. The method of claim 19, wherein the step of attempting to decode code candidates in the first ROI a first time comprises using a first decoding algorithm, and the step of attempting to decode code candidates in the first ROI a second time comprises attempting to decode code candidates using a second decoding algorithm different from the first decoding algorithm.
21. The method of claim 19, further comprising ordering code candidates in the first ROI in a candidate order, the step of attempting to decode code candidates in the first ROI for the first time comprising attempting to decode the code candidates in an order specified by the candidate order.
22. The method of claim 21, further comprising identifying code candidates in the first ROI that were likely previously decoded in at least one previous image, the step of ordering code candidates comprising ordering the code candidates based at least in part on which candidates were previously decoded in the at least one previous image.
23. A method for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view FOV and the conveyor system moves the object in a first direction of travel through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, the method comprising the steps of:
providing a processor programmed to perform the steps of:
obtaining an image of the FOV;
for each image, where the most recently obtained image is the current image:
(i) identifying code candidates in at least a portion of the image;
(ii) ordering the code candidates in a candidate order, wherein the candidate order is determined based at least in part on at least one of: a position of the code candidate in the FOV, the first direction of travel, a likelihood that the code candidate was decoded in a previous image, and a speed of travel of the object through the FOV; and
(iii) when a new image event occurs, skipping decoding attempts for at least a portion of the identified code candidates.
24. An apparatus for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view FOV and the conveyor system moves the object in a first direction of travel through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, the apparatus comprising:
a processor programmed to perform the steps of:
obtaining an image of the FOV;
for each image:
(i) identifying code candidates in at least a portion of the image;
(ii) ordering at least a subset of the code candidates for decoding in a candidate order, wherein the candidate order is determined based at least in part on the first direction of travel through the FOV;
(iii) attempting to decode code candidates in an order specified by the candidate order;
(iv) when a new image event occurs, skipping decoding attempts for at least a portion of the identified code candidates.
25. The apparatus of claim 24, wherein the processor is programmed to perform the step of ranking at least a subset of code candidates by: identifying at least first and second regions of interest, ROIs, in the FOV, adjacent an entrance edge and an exit edge of the FOV, respectively; identifying code candidates in each of the at least first and second ROIs; and ordering code candidates in one of the first and second ROIs before code candidates in the other of the first and second ROIs.
26. The apparatus of claim 24, wherein the processor is programmed to perform the step of ordering at least a subset of code candidates by: identifying code candidates that will likely be outside the FOV when the next image is obtained, and ordering those code candidates near the beginning of the candidate order.
27. The apparatus of claim 24, wherein the processor is programmed to perform the step of ordering at least a subset of code candidates by: identifying code candidates that are likely new to the FOV and ordering them near the beginning of the candidate order.
28. An apparatus for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view FOV and the conveyor system moves the object in a first direction of travel through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, said apparatus comprising:
a processor programmed to perform the steps of:
identifying a first direction of travel through the FOV;
obtaining an image of the FOV;
for each image, where the most recently obtained image is the current image:
(i) identifying code candidates in at least a portion of an image;
(ii) attempting to decode code candidates near an exit edge of the FOV;
(iii) attempting to decode code candidates near an entrance edge of the FOV after attempting to decode code candidates near an exit edge of the FOV; and
(iv) when a new image event occurs, skipping decoding attempts for at least a portion of the identified code candidates.
29. An apparatus for decoding a code applied to an object and for use with a camera and a conveyor system, wherein the camera includes an image sensor having a two-dimensional field of view FOV and the conveyor system moves the object in a first direction through the FOV such that the object enters the FOV along an entrance edge and exits the FOV along an exit edge, such that the sensor produces an image having at least first and second different regions of interest, the first and second regions of interest being adjacent the entrance and exit edges, respectively, said apparatus comprising:
a processor programmed to perform the steps of:
obtaining an image of the FOV;
for each image:
(i) identifying code candidates in at least part of the image;
(ii) identifying at least first and second different regions of interest, ROIs, in the obtained image;
(iii) attempting to decode code candidates in the first ROI for a first time;
(iv) after attempting to decode a candidate in the first ROI, attempting to decode a code candidate in the second ROI;
(v) attempting to decode the code candidates in the first ROI a second time after attempting to decode the code candidates in the second ROI; and
(vi) when a new image event occurs, skipping decoding attempts for at least a portion of the identified code candidates.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/288,104 | 2011-11-03 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1186273A HK1186273A (en) | 2014-03-07 |
| HK1186273B true HK1186273B (en) | 2018-08-17 |