
HK1239611A1 - Barcode tag detection in side view sample tube images for laboratory automation - Google Patents


Info

Publication number
HK1239611A1
Authority
HK
Hong Kong
Prior art keywords
level
low
sample tube
barcode
tube
Prior art date
Application number
HK17113416.0A
Other languages
Chinese (zh)
Other versions
HK1239611B (en)
Inventor
Stefan Kluckner
Yao-Jen Chang
Wen Wu
Benjamin Pollack
Terrence Chen
Original Assignee
Siemens Healthcare Diagnostics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Healthcare Diagnostics Inc. filed Critical Siemens Healthcare Diagnostics Inc.
Publication of HK1239611A1 publication Critical patent/HK1239611A1/en
Publication of HK1239611B publication Critical patent/HK1239611B/en


Description

Barcode label detection in side view sample tube images for laboratory automation
Cross Reference to Related Applications
This application claims priority to U.S. provisional application serial No. 62/117,270, entitled "BARCODE TAG DETECTION IN SIDE VIEW SAMPLE TUBE IMAGES FOR LABORATORY AUTOMATION," filed on February 17, 2015, the disclosure of which is hereby incorporated by reference in its entirety.
This application is related to several concepts described in U.S. patent application publication No. US 2016/0025757 and international publication No. WO 2015/191702, which are incorporated herein by reference in their entirety.
Technical Field
The present invention relates generally to detecting the condition of barcode labels, and more particularly, to classifying the condition of barcode labels on sample tubes using side-looking sample tube images.
Background
Barcode labels are frequently used on sample tubes in clinical laboratory automation systems to uniquely identify and track sample tubes, and are generally the only means to associate a patient with a sample inside a particular sample tube. With normal daily use, the condition of the bar code label may be degraded, including tearing, peeling, discoloration, and other distortions. Such degradation prevents laboratory automation systems from streamlining sample tube processing.
Therefore, there is a need to detect barcode label conditions on sample tubes to streamline sample tube processing in advanced clinical laboratory automation systems. There is also a need to make such classification automated, efficient and unobtrusive.
Disclosure of Invention
Embodiments are directed to detecting barcode label conditions on sample tubes from side view images to streamline sample tube processing in advanced clinical laboratory automation systems.
Drawings
The foregoing and other aspects of the invention are best understood from the following detailed description, when read with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments which are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawing are the following figures:
FIG. 1 is a representation of an exemplary tube characterization station to which sample tubes are transferred for detecting barcode label conditions on the sample tubes from side view images, according to an embodiment;
FIG. 2 is a depiction of an exemplary sample tube with a barcode label, according to an embodiment;
FIG. 3 illustrates a workflow for barcode label detection in which several parallel low-level representations are combined into a mid-level representation, according to an embodiment;
FIG. 4 is a representation of a condition of a barcode label according to an embodiment.
FIG. 5 illustrates the use of low-level cues for detecting a barcode label condition, according to an embodiment; and
FIG. 6 illustrates the use of an additional low-level cue for detecting barcode label conditions, according to an embodiment.
Detailed Description
Embodiments are directed to classifying barcode label conditions on sample tubes according to side view images to streamline sample tube processing in advanced clinical laboratory automation systems. According to embodiments provided herein, classification of barcode label conditions advantageously results in automated detection of problematic barcode labels, allowing a system or user to take the necessary steps to repair the problematic barcode labels. For example, the identified sample tubes with problematic barcode labels may be dispatched to a separate workflow separate from the normal tube processing to correct the problematic barcode labels.
Methods according to embodiments provided herein enable barcode label detection in a tube image to arrive at an automated determination regarding label condition. The status of the barcode label may be classified into one of three categories: good (OK), warning (WARNING), and error (ERROR). According to an embodiment, each of the three categories is divided into a list of additional subcategories to enable a refined decision on the quality of the barcode label. These subcategories cover individual characteristics of label quality such as, for example, intact, torn, damaged, folded, skewed, deformed, or stained. Additional or alternative categories and subcategories may be used.
According to an embodiment, a Tube Characterization Station (TCS) is utilized to acquire side view images for classifying barcode label conditions on sample tubes. The TCS enables simultaneous collection of three images for each tube, resulting in a 360 degree side view of each tube. The proposed method is based on a supervised scene understanding concept, resulting in an interpretation of each pixel into its semantic meaning. According to an embodiment, two parallel low-level cues for condition recognition are utilized in conjunction with a tube model extraction cue. The semantic scene information is then integrated into a mid-level representation for final decision making into one of the three condition categories.
Semantic segmentation focuses on interpreting each pixel in the image domain with respect to a defined semantic object label. Due to the pixel-level segmentation, object boundaries can be accurately captured. Evaluation of the reference data set shows that the supervised concept performs best in terms of reliability and classification accuracy. In general, these methods are based on training and testing phases, taking into account complex and combined feature descriptors derived at various levels and hierarchies.
Image triplets are acquired by using the TCS. The condition of the barcode label may differ in location, orientation, quality of attachment, and barcode readability. Detection of barcode label conditions requires accurate capture of a mid-level representation of spatial and appearance features with respect to the tube model. The mid-level representation captures multi-view information (e.g., 360 degree views from the image triplet) from various parallel low-level representations that are individually trained and evaluated on the relevant image structures.
FIG. 1 is a representation of an exemplary tube characterization station to which sample tubes are transferred for classifying barcode label conditions on the sample tubes according to side view images, according to an embodiment. Sample tubes may be transferred to the TCS from, for example, a drawer system in which the tube trays and the sample tubes contained thereon are stored. In an embodiment, one or more drawers are provided within a work envelope of a sample handler in an in vitro diagnostics (IVD) environment. The sample tubes may be transferred to the TCS via an arm movable between the drawer system and the TCS.
The TCS includes three cameras, each configured to capture an image of a particular sample tube to collectively capture a full 360 degree view.
Fig. 2 is a depiction of an exemplary sample tube with a bar code label. As shown, tube model types and barcode label conditions can vary significantly between sample tubes.
Input data: the input data for the system includes three images showing a full 360 degree view of the tube.
Overview: the proposed method takes the triplet of tube images as input and outputs a label for the condition (good, warning, error) of the barcode label. FIG. 3 shows the proposed workflow: several parallel low-level representations are utilized and combined into a mid-level representation. This representation captures the multi-view information and is used for final decision making regarding the barcode label condition. The low-level representations include separate semantic segmentations into barcode/label versus background regions and good versus bad regions, respectively. An additional low-level representation supports the extraction of the tube model and provides important spatial information for generating the mid-level representation. The proposed method is not limited to these particular cues and may be extended with additional or alternative cues. The proposed concept is based on a training phase and an evaluation phase, requiring labeled input data for the training phase.
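The workflow described above can be summarized as a small pipeline skeleton. This is an illustrative sketch only, not the patented implementation: the function names, placeholder cue responses, and decision rule are invented for the example; only the structure (per-view low-level cues, aggregation into one mid-level descriptor, final three-way decision) follows the description.

```python
# Hypothetical sketch of the overall workflow: three side-view images in,
# one of three condition labels out. All internals are placeholders.
import numpy as np

CATEGORIES = ["OK", "WARNING", "ERROR"]

def low_level_cues(image):
    """Stand-in for the per-pixel cue responses (segmentation confidence map)."""
    h, w = image.shape[:2]
    # Placeholder: pretend every pixel is classified with confidence 0.8.
    return np.full((h, w), 0.8)

def mid_level_descriptor(cue_maps):
    """Aggregate per-view cue responses into one fixed-length descriptor."""
    return np.concatenate([[m.mean(), m.std()] for m in cue_maps])

def classify(descriptor):
    """Stand-in final decision; a trained classifier would go here."""
    return CATEGORIES[0] if descriptor[0] > 0.5 else CATEGORIES[2]

def detect_condition(image_triplet):
    cues = [low_level_cues(img) for img in image_triplet]
    return classify(mid_level_descriptor(cues))

triplet = [np.zeros((64, 32, 3)) for _ in range(3)]
print(detect_condition(triplet))  # "OK" with these placeholder cues
```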
FIG. 4 is a representation of barcode label conditions by category, with subcategories covering individual characteristics of label quality, according to an embodiment.
And (4) low-level prompting: semantic segmentation: the proposed concept is based on multiple prompts running in a supervision mode. These hints make extensive use of supplemental feature descriptors (such as color and orientation histograms), statistical descriptors, and approximate local binary patterns that are trained and evaluated at the pixel level. Due to the desire for short response times, the proposed concept utilizes efficient image structures (such as integral images) and uses fast classifiers (such as random decision trees or decision trees). The training phase requires labeled input data: due to the pixel-level classification, annotation can be done quickly by using image region annotation of strokes. Training must be performedOnce and including data from different acquisitions having various characteristics. Separately for each low-level cue, separate random forest classifiers are trained for binary (good/bad area) and multi-class tasks (barcode/label/background area). During runtime, the trained classifier provides possibilities on a pixel level with respect to the trained semantic classes. These classifier responses are integrated directly into the mid-level representation as discriminative attributes for final decision making. FIG. 5 illustrates an exemplary response of a semantically segmented low-level prompt applied to a sample tube.
And (4) low-level prompting: tube model extraction: in order to provide spatial information for data aggregation, a segmented region of the tube and some supporting sections in the image are used. The tube model may be derived by using calibrated three-dimensional mechanisms and external tube detection information (i.e., rendering the tube geometry into the image), or may be extracted from the image separately (i.e., robust extraction of tube boundaries by using robust line detection methods and logical processing or reasoning). The tube model is segmented in the image, enabling the segmentation of the tube and neighboring regions into smaller patch regions (patch). These blocks are used to aggregate the classifier response from the low-level cues and information directly from the image. Fig. 6 illustrates an exemplary response to a tube model applied to a sample tube extracting a low-level prompt.
Mid-level representation: in order to arrive at a final decision on the barcode label condition, the proposed method aggregates the low-level representation responses into a mid-level representation, which can be seen as a descriptor for the triplet of input images. The descriptor includes, but is not limited to, the classifier responses and the image features (e.g., orientation information and color statistics) extracted with the support of the tube model segmentation. Since the representation comprises information from multiple views, an ordering of the data sequence according to the size of the covered barcode region is applied.
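The view-ordering idea can be illustrated in a few lines. This is a sketch under assumptions: the per-view feature vectors and barcode-area values are placeholders, and the exact ordering criterion ("size of the covered barcode region") is taken from the description while everything else is invented.

```python
# Illustrative mid-level descriptor: per-view features are concatenated
# with the largest visible barcode region first, making the multi-view
# descriptor independent of the physical camera order.
import numpy as np

def mid_level(per_view_features, barcode_areas):
    """Concatenate per-view feature vectors, largest barcode coverage first."""
    order = np.argsort(barcode_areas)[::-1]
    return np.concatenate([per_view_features[i] for i in order])

views = [np.array([0.1, 0.2]), np.array([0.7, 0.8]), np.array([0.4, 0.5])]
areas = [120, 900, 450]  # visible barcode pixels per view (made up)
desc = mid_level(views, areas)
print(desc)  # [0.7 0.8 0.4 0.5 0.1 0.2]
```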
Barcode label condition classification: to derive the final class label most appropriate for the condition, a classifier such as a random decision tree or a support vector machine (SVM) is used. Refinement of the classification results into subcategories can be done through additional classification stages or directly on the mid-level representation.
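A final-stage classifier of the kind named above (here an SVM) might be trained as in the following sketch. The 6-dimensional descriptors and their class clusters are entirely fabricated for illustration; real descriptors would come from the mid-level representation described above.

```python
# Illustrative final condition classifier over mid-level descriptors,
# using an SVM on synthetic training data (0 = OK, 1 = WARNING, 2 = ERROR).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.2, (50, 6)) for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 50)

svm = SVC(kernel="rbf").fit(X, y)
labels = ["OK", "WARNING", "ERROR"]
print(labels[svm.predict([[1.1] * 6])[0]])  # descriptor near the WARNING cluster
```

A random forest could be swapped in for the SVM without changing the surrounding pipeline, which is consistent with the description leaving the classifier choice open.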
A controller is provided for managing the image analysis of the images taken by the cameras for classifying barcode label conditions on the sample tubes according to the side view images. According to an embodiment, the controller may be part of a sample handler that is used in an in vitro diagnostics (IVD) environment to process and move tube trays and tubes between storage locations and an analyzer. One or more memory devices may be associated with the controller. The one or more memory devices may be internal or external to the controller.
Although the present invention has been described with reference to exemplary embodiments, the present invention is not limited thereto. Those skilled in the art will recognize that many changes and modifications may be made to the preferred embodiments of the present invention, and that such changes and modifications may be made without departing from the true spirit of the present invention. It is therefore intended that the following appended claims be interpreted as covering all such equivalent variations as fall within the true spirit and scope of the invention.

Claims (16)

1. A method of detecting a barcode label condition on a sample tube, the method comprising:
acquiring, by an image capture system comprising a plurality of cameras, a side view image of a sample tube comprising a barcode label; and
analyzing, by one or more processors in communication with the image capture system, the side view image, the analyzing comprising:
applying a plurality of low-level cues to the side view image of the sample tube to obtain semantic segmentation and spatial information for the sample tube;
aggregating results of the application of the plurality of low-level hints to form a mid-level representation; and
identifying a category of the barcode label based on the mid-level representation.
2. The method of claim 1, wherein the plurality of low-level cues comprise two parallel low-level cues for condition recognition and a tube model extraction cue for spatial information.
3. The method of claim 2, wherein the two parallel low-level cues comprise (i) a semantic segmentation for barcode tag and background regions and (ii) a semantic segmentation for barcode tag quality regions.
4. The method of claim 1, wherein the mid-level representation comprises descriptors for side-view images, the descriptors comprising classifier responses and image feature extractions.
5. The method of claim 4, wherein the classifier response and image feature extraction are used to identify a class from a plurality of predefined classes.
6. The method of claim 5, wherein the predefined categories each include a plurality of subcategories.
7. The method of claim 1, wherein the image capture system comprises three cameras for acquiring 360 degree views of the sample tube.
8. The method of claim 1, wherein the images are acquired simultaneously.
9. A vision system for use in an in vitro diagnostics environment for detecting barcode label conditions on sample tubes, the vision system comprising:
a plurality of cameras configured to capture side view images of sample tubes containing barcode labels;
a processor in communication with the plurality of cameras, the processor configured to perform the steps of:
applying a plurality of low-level cues to the side view image of the sample tube to obtain semantic segmentation and spatial information for the sample tube;
aggregating results of the application of the plurality of low-level hints to form a mid-level representation; and
identifying a category of the barcode label based on the mid-level representation.
10. The system of claim 9, wherein the plurality of low-level cues comprise two parallel low-level cues for condition recognition and a tube model extraction cue for spatial information.
11. The system of claim 10, wherein the two parallel low-level cues comprise (i) a semantic segmentation for barcode tag and background regions and (ii) a semantic segmentation for barcode tag quality regions.
12. The system of claim 9, wherein the mid-level representation comprises descriptors for side-view images, the descriptors comprising classifier responses and image feature extractions.
13. The system of claim 12, wherein the classifier response and image feature extraction are used to identify a class from a plurality of predefined classes.
14. The system of claim 13, wherein the predefined categories each include a plurality of subcategories.
15. The system of claim 9, wherein the image capture system comprises three cameras for acquiring 360 degree views of the sample tube.
16. The system of claim 9, wherein the images are acquired simultaneously.
HK17113416.0A 2015-02-17 2016-02-16 Barcode tag detection in side view sample tube images for laboratory automation HK1239611B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US62/117270 2015-02-17

Publications (2)

Publication Number Publication Date
HK1239611A1 (en) 2018-05-11
HK1239611B HK1239611B (en) 2021-05-14


Similar Documents

Publication Publication Date Title
CA2976771C (en) Barcode tag detection in side view sample tube images for laboratory automation
US11244450B2 (en) Systems and methods utilizing artificial intelligence for placental assessment and examination
JP2018512567A5 (en)
JP6560757B2 (en) Classification of barcode tag states from top view sample tube images for laboratory automation
US9760789B2 (en) Robust cropping of license plate images
US11354549B2 (en) Method and system for region proposal based object recognition for estimating planogram compliance
CN110533654A (en) The method for detecting abnormality and device of components
CN106203237A (en) The recognition methods of container-trailer numbering and device
CN114332058A (en) Serum quality identification method, device, equipment and medium based on neural network
CN111161295B (en) Dish image background stripping method
KR20190114241A (en) Apparatus for algae classification and cell countion based on deep learning and method for thereof
Suksawatchon et al. Shape recognition using unconstrained pill images based on deep convolution network
CN107403179A (en) A kind of register method and device of article packaged information
CN112579808A (en) Data annotation processing method, device and system
US20170309040A1 (en) Method and device for positioning human eyes
HK1239611A1 (en) Barcode tag detection in side view sample tube images for laboratory automation
WO2015083170A1 (en) Fine grained recognition method and system
HK1239611B (en) Barcode tag detection in side view sample tube images for laboratory automation
CN114973277A (en) Method and system for detecting label of traditional Chinese medicine bottle in medicine production
Carnimeo et al. A voting procedure supported by a neural validity classifier for optic disk detection
Patel et al. Design and Development of Vegetable Detection and Recognition Model Using Deep Learning in Market Environment
Theiler et al. Approach to target detection based on relevant metric for scoring performance
Wang et al. Automatic classification of images of an angiography sequence using modified shape context-based spatial pyramid kernels
HK1241793A1 (en) Classification of barcode tag conditions from top view sample tube images for laboratory automation
George Classification red blood cells using support vector machine