Disclosure of Invention
The embodiments of the invention aim to provide a test paper identification method, a test paper identification device, a storage medium and a device, which solve the problems in the prior art that the accuracy of a test paper detection result is low due to the influence of a complex background on test paper identification, and that the identification technology has poor universality across various brands of test paper.
In order to achieve the above object, an embodiment of the present invention provides a test paper identification method, including: acquiring an image to be identified, wherein the image to be identified comprises a test paper image to be identified; extracting the test paper image to be recognized from the image to be recognized by using an image recognition method based on first deep learning; identifying the category to which the mark in the test paper image to be identified belongs by using a preset category model, and extracting detection line information corresponding to the category; determining the position of a recognition line from the test paper image to be recognized by using an image recognition method based on second deep learning; and determining a test paper detection result according to the pixel brightness value corresponding to the identification line position and the detection line information by using a regular image processing method.
Further, the extracting, by using the image recognition method based on the first deep learning, the test paper image to be recognized from the image to be recognized includes: acquiring an edge gray-scale image corresponding to the test paper image to be identified from the image to be identified by using a preset HED model; and mapping the edge gray-scale image in the image to be identified by utilizing an OpenCV technology to obtain the test paper image to be identified.
Further, the preset HED model is established in the following manner: preprocessing the image sample to obtain an image training sample which accords with the specified pixel size; marking the edge of the test paper in the image training sample to obtain an image marked sample; and training a pre-trained HED model by using the image training sample and the corresponding image marking sample, and determining the pre-trained HED model as the preset HED model until the loss function value of the pre-trained HED model is stable.
Further, the preset category model is obtained by: acquiring a test paper image training set, classifying test paper images in the test paper image training set according to the types of marks, and storing the types and detection line information corresponding to the types; and training a pre-training CNN model by using the test paper image training set and the category corresponding to each test paper image until the loss function value of the pre-training CNN model is stable, and determining the pre-training CNN model as the preset category model.
Further, the determining the position of the identification line from the test paper image to be identified by using the image identification method based on the second deep learning comprises the following steps: obtaining a region classification in the test paper image to be recognized by using a preset CRNN model, wherein the region classification comprises a blank region, a left region of the identification line position and a right region of the identification line position; and determining the position of the identification line in the test paper image to be identified according to the region classification.
Further, the obtaining of the region classification in the test paper image to be recognized by using the preset CRNN model includes: converting the test paper image to be identified into characteristic sequence information by using the preset CRNN model; and determining the region classification according to the color jump in the characteristic sequence information.
Further, the CRNN model is built in the following manner: preprocessing a test paper image sample to obtain a test paper image training sample which accords with a set pixel size; classifying and marking the regions in the test paper image training sample to obtain a test paper image marking sample; and training a pre-trained CRNN model by using the test paper image training sample and a corresponding test paper image marking sample, and determining the pre-trained CRNN model as the preset CRNN model until the loss function value of the pre-trained CRNN model tends to be stable.
Further, the determining, by using the regular image processing method, a test paper detection result according to the pixel brightness value corresponding to the identification line position and the detection line information includes: converting the color space of the identification line position into an LAB color space; extracting the corresponding pixel brightness value of the identification line position in the LAB color space; and determining the test paper detection result according to the comparison result of the pixel brightness value and the detection line information.
Further, after determining the test paper detection result, the method further comprises: determining the current physiological information of the user corresponding to the image to be identified according to the test paper detection result; and providing the test paper detection result and the current physiological information of the user to the user.
Further, the image to be recognized further includes a user identifier, and after the determination of the test paper detection result, the method further includes: extracting a user identifier included in the image to be recognized; searching historical detection information corresponding to the user identification in a database, wherein the historical detection information comprises a historical test paper detection result and historical physiological information; determining a physiological cycle corresponding to the user according to the historical physiological information and the current physiological information; and determining the suggested time for the user to perform the next test paper detection according to the physiological cycle, and providing the suggested time for the user.
Correspondingly, the embodiment of the invention also provides a test paper identification device, which comprises: an acquisition unit, configured to acquire an image to be identified, wherein the image to be identified comprises a test paper image to be identified; a test paper image extraction unit, configured to extract the test paper image to be identified from the image to be identified by using an image identification method based on first deep learning; a category determining unit, configured to identify the category to which the mark in the test paper image to be identified belongs by using a preset category model, and to extract detection line information corresponding to the category; a position determining unit, configured to determine the position of an identification line from the test paper image to be identified by using an image identification method based on second deep learning; and a result determining unit, configured to determine a test paper detection result according to the image brightness value corresponding to the identification line position and the detection line information by using a regular image processing method.
Further, the test paper image extraction unit is further configured to: acquiring an edge gray-scale image corresponding to the test paper image to be identified from the image to be identified by using a preset HED model; and mapping the edge gray-scale image in the image to be identified by utilizing an OpenCV technology to obtain the test paper image to be identified.
Further, the preset HED model is established in the following manner: preprocessing the image sample to obtain an image training sample which accords with the specified pixel size; marking the edge of the test paper in the image training sample to obtain an image marked sample; and training a pre-trained HED model by using the image training sample and the corresponding image marking sample, and determining the pre-trained HED model as the preset HED model until the loss function value of the pre-trained HED model is stable.
Further, the preset category model is obtained by: acquiring a test paper image training set, classifying test paper images in the test paper image training set according to the types of marks, and storing the types and detection line information corresponding to the types; and training a pre-training CNN model by using the test paper image training set and the category corresponding to each test paper image until the loss function value of the pre-training CNN model is stable, and determining the pre-training CNN model as the preset category model.
Further, the position determination unit is further configured to: obtaining a region classification in the test paper image to be recognized by using a preset CRNN model, wherein the region classification comprises a blank region, a left region of a recognition line position and a right region of the recognition line position; and determining the position of the identification line in the test paper image to be identified according to the region classification.
Further, the position determination unit is further configured to: converting the test paper image to be identified into characteristic sequence information by using the preset CRNN model; and determining the region classification according to the color jump in the characteristic sequence information.
Further, the CRNN model is built in the following manner: preprocessing a test paper image sample to obtain a test paper image training sample which accords with a set pixel size; classifying and marking the regions in the test paper image training sample to obtain a test paper image marking sample; and training a pre-trained CRNN model by using the test paper image training sample and a corresponding test paper image marking sample, and determining the pre-trained CRNN model as the preset CRNN model until the loss function value of the pre-trained CRNN model tends to be stable.
Further, the result determination unit is further configured to: converting the color space of the identification line position into an LAB color space; extracting corresponding image brightness values of the identification line positions in an LAB color space; and determining the test result of the test paper according to the comparison result of the image brightness value and the detection line information.
Further, the apparatus further comprises: the physiological information determining unit is used for determining the current physiological information of the user corresponding to the image to be identified according to the test paper detection result; and the processing unit is used for providing the test paper detection result and the current physiological information of the user to the user.
Further, the image to be recognized further includes a user identifier, and the apparatus further includes: the identification extraction unit is used for extracting a user identification included in the image to be identified; the searching unit is used for searching historical detection information corresponding to the user identification in a database, wherein the historical detection information comprises a historical test paper detection result and historical physiological information; the period determining unit is used for determining a physiological period corresponding to the user according to the historical physiological information and the current physiological information; the processing unit is further used for determining the suggested time for the user to perform the next test paper detection according to the physiological cycle and providing the suggested time for the user.
Accordingly, the embodiment of the present invention also provides a machine-readable storage medium, which stores instructions for causing a machine to execute the test paper identification method as described above.
Correspondingly, the embodiment of the invention also provides a device, which comprises at least one processor, at least one memory and a bus, wherein the memory and the bus are connected with the processor; the processor and the memory communicate with each other through the bus; and the processor is configured to invoke program instructions in the memory to perform the test paper identification method described above.
According to the above technical scheme, the test paper identification method combining deep learning and regular image processing extracts the test paper image from the image to be identified, so that the influence of a complex background on the test paper image is avoided; the position of the identification line in the test paper image is then directly determined through color jumps, and the test paper detection result is determined according to the image brightness value corresponding to the identification line position and the detection line information corresponding to the category of the test paper. The embodiment of the invention solves the problem of low accuracy of the test paper detection result caused by the influence of a complex background on test paper identification in the prior art, improves the universality of test paper image identification, is convenient and quick, and saves time and cost.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a schematic flow chart of a test paper identification method according to an embodiment of the present invention. As shown in fig. 1, the method is applied to a server, and the method includes the following steps:
step 101, obtaining an image to be identified, wherein the image to be identified comprises a test paper image to be identified;
step 102, extracting a test paper image to be recognized from the image to be recognized by using an image recognition method based on first deep learning;
step 103, identifying the category to which the mark in the test paper image to be identified belongs by using a preset category model, and extracting detection line information corresponding to the category;
step 104, determining the position of an identification line from the test paper image to be recognized by using an image recognition method based on second deep learning;
step 105, determining a test paper detection result according to the image brightness value corresponding to the identification line position and the detection line information by using a regular image processing method.
The test paper identified in the embodiment of the present invention includes POCT (point-of-care testing) test paper, such as pregnancy test paper, ovulation test paper, multiple T-line test paper, and the like.
After a user finishes a test with the test paper, the test paper can be photographed by using a device with a photographing function, and the image to be recognized obtained by photographing is uploaded to a server for test paper identification. The user therefore only needs to save the image to be recognized after the test is completed and does not need to keep the test paper itself, which provides great convenience for the user.
The image to be recognized obtained by the user through photographing includes, besides the test paper image to be recognized, other unnecessary and complex background information. Therefore, in the embodiment of the present invention, after the server receives the to-be-identified image including the to-be-identified test paper image uploaded by the user, the to-be-identified test paper image needs to be extracted from the complex background.
In addition, the test paper image to be recognized differs from other photographed objects: it is a long, thin strip-shaped image, and because its aspect ratio is too large, a conventional image segmentation algorithm cannot be used. Therefore, in step 102 of the embodiment of the present invention, a preset HED (holistically-nested edge detection) model is used to obtain the edge grayscale image corresponding to the test paper image to be recognized from the image to be recognized, and an OpenCV technique is used to map the edge grayscale image in the image to be recognized to obtain the test paper image to be recognized.
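The OpenCV mapping step is not spelled out in the text. As a minimal sketch of the underlying geometry, the following pure-NumPy code solves the 4-point perspective (homography) transform that would map the detected quadrilateral test paper edge onto an upright strip; the corner coordinates, target size and function names here are illustrative assumptions, not taken from the patent. In practice OpenCV's `cv2.getPerspectiveTransform` and `cv2.warpPerspective` perform this directly.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve the 8-parameter perspective transform mapping 4 source
    corners to 4 destination corners (DLT with h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply the perspective transform to one point."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Illustrative corners of a tilted test paper edge in the photo,
# mapped onto an upright strip of 1080 x 96 pixels.
src = [(220, 310), (900, 350), (890, 440), (210, 400)]
dst = [(0, 0), (1080, 0), (1080, 96), (0, 96)]
H = homography_from_corners(src, dst)
```

With `H` computed, every pixel inside the detected quadrilateral can be resampled into the upright strip, which is the "mapping" that yields the test paper image to be recognized.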
The preset HED model is established in the following mode:
First, an image sample is preprocessed to obtain an image training sample with a specified pixel size. For example, the specified pixel size is 128 × 128 pixels; if the resolution of the image sample exceeds this size, the sample is cropped, and if the resolution is insufficient, the sample is stretched and tiled.
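The crop-or-stretch preprocessing just described could be sketched as follows; the nearest-neighbour stretching stands in for whatever resampling is actually used, and the function name and target size are illustrative only.

```python
import numpy as np

def to_fixed_size(img, size=128):
    """Center-crop dimensions larger than `size` and stretch smaller
    ones by nearest-neighbour repetition, yielding a size x size image."""
    h, w = img.shape[:2]
    # Center-crop any dimension that exceeds the target size.
    if h > size:
        top = (h - size) // 2
        img = img[top:top + size]
    if w > size:
        left = (w - size) // 2
        img = img[:, left:left + size]
    h, w = img.shape[:2]
    # Stretch undersized dimensions via nearest-neighbour indexing.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]
```

A tall sample such as a 300 × 50 grayscale image is cropped vertically and stretched horizontally into a 128 × 128 training sample.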
Then, the edge of the test paper in the image training sample is marked to obtain an image marking sample. The marking may be done with marking software: for example, the positions of the 4 vertices of the rectangular test paper edge are obtained, the 4 coordinates are mapped into a quadrangle, a binary all-black image is then newly created, and the corresponding edges of the quadrangle are set to white.
And then, training a pre-trained HED model by using the image training sample and the corresponding image marking sample, and determining the pre-trained HED model as the preset HED model until the loss function value of the pre-trained HED model tends to be stable.
The pre-trained HED model comprises 5 convolutional layers, and 5 feature maps are obtained after an image training sample passes through the convolutional layers; the dimensions of each feature map are shown in Table 1 below. Then, the 5 feature maps are deconvoluted into feature maps of 256 × 256 × 1, the 5 deconvoluted feature maps are merged to obtain one feature map corresponding to the image training sample, and the merged feature map is compared with the image marking sample corresponding to the image training sample to obtain a loss function value. After continuous training optimization, for example, 100,000 iterations, the loss function value tends to be stable, and the pre-trained HED model at this time is determined as the preset HED model. The model optimizer is stochastic gradient descent, the learning rate is set to 0.001, and the loss function is a cross entropy function.
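As an illustration of the loss computation, the following sketch evaluates a pixel-wise binary cross entropy between a merged feature map (raw scores) and the black/white edge label; this is a generic formulation assumed for illustration, not quoted from the patent, and the toy data is invented.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def edge_cross_entropy(logits, label, eps=1e-7):
    """Pixel-wise binary cross entropy between the merged feature map
    (raw scores) and the 0/1 edge label image."""
    p = np.clip(sigmoid(logits), eps, 1 - eps)
    return float(-np.mean(label * np.log(p) + (1 - label) * np.log(1 - p)))

label = np.zeros((8, 8))
label[3, :] = 1.0                        # toy label: one white edge row
good = np.where(label == 1, 4.0, -4.0)   # confident, correct scores
bad = -good                              # confident, wrong scores
```

Training drives the loss from the `bad` regime toward the `good` regime; once the value stops improving across iterations, the model is frozen as the preset HED model.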
TABLE 1

| Number of layers | Feature map generated after convolution filter |
| 1 | 256*256*5 |
| 2 | 128*128*14 |
| 3 | 74*74*40 |
| 4 | 37*37*92 |
| 5 | 18*18*196 |
Through the preset HED model and the OpenCV technique, the test paper image to be identified is extracted from the image to be identified, which avoids the influence of the complex background.
Thereafter, in step 103, since there are multiple brands of multi-line test paper and multi-T-line test paper, in order to distinguish the test data represented by each identification line on the test paper, the mark (logo) on each test paper is used for distinguishing. The marks on different brands of test paper are different, and the marks on test paper of the same brand covering different indicators are also different. Therefore, the category to which the mark in the test paper image to be identified belongs can be identified by using a preset category model, and the detection line information corresponding to the category can be extracted. First, the preset category model needs to be established in advance: a test paper image training set is acquired, the test paper images in the training set are classified according to the categories of the marks, and the categories and the detection line information corresponding to each category are stored in a category database, wherein the detection line information comprises the indicator corresponding to each identification line on that category of test paper and the preset range of the indicator. For example, when the category is C, T line test paper, the detection line information further comprises a preset ratio range of the brightness values corresponding to the T line and the C line. Then, a pre-trained CNN model is trained by using the test paper image training set and the category corresponding to each test paper image, wherein the model training process comprises 5 convolutional layers, then a fully connected layer, and then classification, until the loss function value of the pre-trained CNN model tends to be stable, and the pre-trained CNN model at this time is determined as the preset category model.
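The category database described above might be organized as a simple mapping from mark category to stored detection line information; the category names, line labels and ranges below are invented placeholders, not values from the patent.

```python
# Hypothetical category database: mark category -> detection line info.
DETECTION_LINE_INFO = {
    "brand_a_ovulation": {              # C, T line test paper
        "lines": ["C", "T"],
        # Preset range for the T/C brightness-value ratio.
        "tc_ratio_range": (0.0, 0.8),
    },
    "brand_b_multi": {                  # multi-line test paper
        "lines": ["C", "T1", "T2"],
        # Preset range per indicator line.
        "ranges": {"T1": (0.2, 0.6), "T2": (0.3, 0.7)},
    },
}

def detection_line_info(category):
    """Look up the stored detection line information for the mark
    category predicted by the preset category model."""
    return DETECTION_LINE_INFO[category]
```

The CNN classifier only needs to output one of the stored category keys; everything needed to interpret the identification lines then comes from this lookup.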
Meanwhile, in step 104, a region classification in the test paper image to be recognized is obtained by using the preset CRNN model, and the identification line position in the test paper image to be recognized is determined according to the region classification. Specifically, the preset CRNN model is used to convert the test paper image to be identified into feature sequence information, and the region classification is determined according to the color jumps in the feature sequence information, wherein the region classification includes a blank region, a left region of the identification line position, and a right region of the identification line position.
The CRNN model is established in the following way:
First, a test paper image sample is preprocessed to obtain a test paper image training sample that conforms to a set pixel size; for example, after the test paper image sample is cropped or stretched, a test paper image training sample with a size of 1080 × 96 is obtained. Then, the region classifications in the test paper image training sample are marked to obtain a test paper image marking sample; for example, the region classifications in the test paper image training sample are marked as the blank region None, the left region of the identification line position I-left, and the right region of the identification line position I-right. Of course, for test paper including a plurality of identification lines, the region classification also includes the left and right regions of each of the plurality of identification line positions. Then, a pre-trained CRNN model is trained by using the test paper image training sample and the corresponding test paper image marking sample until the loss function value of the pre-trained CRNN model tends to be stable, and the pre-trained CRNN model is determined as the preset CRNN model.
The test paper can be regarded as a piece of sequence information: from right to left, it consists of a brand mark region and an identification line region, so a test paper image can be regarded as data carrying sequence information, and the sequence position of the identification line is predicted by the pre-trained CRNN model. The pre-trained CRNN model is divided into three parts. The first part is a convolution structure: feature extraction is carried out in the vertical direction through three convolutional neural network layers, the input and output scale of the convolution structure in the horizontal direction is guaranteed not to change, and the test paper image training samples are converted into feature sequence information. The second part is a three-layer bidirectional recurrent neural network structure, in which the selected recurrent neural network unit is a GRU structure, because the GRU improves on the traditional RNN and LSTM in calculation speed and training effect. The third part of the model is a fully connected layer: the features output at different sequence coordinates by the recurrent neural network structure are subjected to region classification, and the predicted region classification is compared with the region classification in the corresponding test paper image marking sample to obtain a loss function value. After continuous training optimization, for example, 200 iterations, the loss function value tends to be stable, and the pre-trained CRNN model at this time is determined as the preset CRNN model. The model optimizer is stochastic gradient descent, the learning rate is set to 0.001, and the loss function is a cross entropy function.
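Once the model emits a per-column region classification, locating the identification line reduces to finding the column where the labels jump from the left region to the right region. A minimal sketch, using the label strings from the marking example above (the column data itself is invented):

```python
def identification_line_position(column_labels):
    """Locate the identification line as the column index where the
    predicted region classification jumps from 'I-left' to 'I-right'.
    Returns None when no such jump exists (blank strip)."""
    for i in range(1, len(column_labels)):
        if column_labels[i - 1] == "I-left" and column_labels[i] == "I-right":
            return i
    return None

# Toy per-column predictions for a narrow strip: blank columns,
# then the left region, then the right region, then blank again.
labels = ["None"] * 3 + ["I-left"] * 4 + ["I-right"] * 4 + ["None"] * 3
```

For a multi-line test paper, the same scan would be repeated for each pair of left/right region labels (I1-left/I1-right, I2-left/I2-right, and so on).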
After the identification line position is determined by the above steps, in step 105, the color space of the identification line position is converted into an LAB color space. Then, the image brightness value corresponding to the identification line position in the LAB color space is extracted and compared with the brightness value range in the detection line information. For example, when the test paper is a multi-line test paper or a multi-T-line test paper, the indicator corresponding to each identification line on the test paper and the preset range of the indicator can be determined from the corresponding detection line information, so that the detection result of each indicator is determined by comparing the corresponding image brightness value against the preset range. When the test paper is C, T line test paper, the identification line positions are a T line and a C line; after the image brightness values corresponding to the T line and the C line are obtained, the test paper detection result is determined according to the ratio of the two brightness values and the preset ratio range in the detection line information corresponding to that category of test paper.
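For C, T line test paper, the final comparison could be sketched as below. The mapping from ratio to outcome is an assumption for illustration only; the actual correspondence between the T/C brightness ratio and the reported result depends on the preset ratio range stored in the detection line information for that test paper category.

```python
def tc_result(t_brightness, c_brightness, ratio_range):
    """Compare the T/C brightness ratio (LAB L channel values) against
    the preset range from the detection line information. The labels
    returned here are illustrative; a darker line has a lower L value,
    so the real interpretation is category-specific."""
    ratio = t_brightness / c_brightness
    low, high = ratio_range
    if ratio < low:
        return "negative"
    if ratio > high:
        return "positive"
    return "weak positive"
```

A usage example: with a preset range of (0.6, 0.9), a T-line brightness of 30.0 against a C-line brightness of 60.0 gives a ratio of 0.5 and falls below the range.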
According to the embodiment of the invention, the image recognition method based on the first deep learning is used to extract the test paper image to be recognized from the image to be recognized, which avoids the influence of a complex background on the test paper detection result. Because all brands of test paper carry marks, the category of the mark on the test paper is recognized by using the preset category model and the detection line information corresponding to that category is obtained; the identification line position is then directly determined by using the image recognition method based on the second deep learning, and the test paper detection result is determined by the regular image processing method according to the comparison between the image brightness value corresponding to the identification line position and the detection line information. The embodiment of the invention can quickly identify the test paper detection result through the server, is convenient and quick, and saves time and cost.
To facilitate understanding of the embodiment of the present invention, fig. 2 is a schematic flow chart of a test paper identification method according to the embodiment of the present invention. The embodiment of the invention is described by taking ovulation test paper as an example, and as shown in fig. 2, the method comprises the following steps:
step 201, shooting an image of ovulation test paper to be identified by an intelligent terminal to obtain an image to be identified, and uploading the image to be identified to a server, wherein the image to be identified comprises a user identifier and the image of the ovulation test paper to be identified.
After a user uses the ovulation test paper for testing, the intelligent terminal with the photographing function can be used for photographing the ovulation test paper, and an image to be recognized obtained through photographing is uploaded to a server to obtain a test paper detection result. Therefore, the user only needs to save the image to be identified after the test is finished, and the test paper is not needed to be saved, so that great convenience is provided for the user.
step 202, the server receives the image to be identified uploaded by the intelligent terminal;
step 203, acquiring an edge gray-scale image corresponding to the test paper image to be identified from the image to be identified by using the preset HED model;
step 204, mapping, by using an OpenCV technology, the edge gray-scale image in the image to be recognized to obtain the test paper image to be recognized;
step 205, identifying the category to which the mark in the test paper image to be identified belongs by using a preset category model, and extracting detection line information corresponding to the category, wherein the detection line information indicates that the category is ovulation test paper and comprises a preset ratio range corresponding to the ratio of the image brightness values of the T line and the C line;
step 206, converting the test paper image to be identified into characteristic sequence information by using the preset CRNN model;
step 207, determining the region classification according to the color jump in the feature sequence information, wherein the region classification comprises a blank region, a left region of an identification line position and a right region of the identification line position;
step 208, determining the position of the identification line in the ovulation test paper image to be identified according to the region classification;
step 209, converting the color space of the identification line position into an LAB color space;
step 210, extracting corresponding image brightness values of a C line and a T line in the identification line position in an LAB color space, and obtaining a ratio of the image brightness value corresponding to the T line position to the image brightness value corresponding to the C line position;
step 211, determining an ovulation test paper detection result corresponding to the ratio according to the ratio and a preset ratio range;
step 212, determining the current physiological information of the user corresponding to the image to be identified according to the test paper detection result;
step 213, extracting a user identifier included in the image to be recognized;
step 214, searching historical detection information corresponding to the user identification in a database, wherein the historical detection information comprises a historical ovulation test paper detection result and historical physiological information;
step 215, determining a physiological cycle corresponding to the user, for example, an ovulation cycle corresponding to the user, according to the historical physiological information and the current physiological information;
step 216, determining the suggested time for the user to perform the next ovulation test paper detection according to the physiological cycle;
step 217, providing the test paper detection result, the current physiological information of the user and the suggested time to the user.
For example, the historical physiological information determined by the user's past ovulation test paper detection results can be used to obtain the ovulation cycle corresponding to the user, and an analysis suggestion, namely the suggested time at which the next ovulation test paper detection should be carried out, is given by combining the current physiological information obtained from the current test paper detection result. In addition, the extracted ovulation test paper image to be identified can be displayed on the intelligent terminal, so that the user can conveniently check it.
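The cycle estimation could, under the simplifying assumption that the physiological cycle is the mean gap between past positive test dates, be sketched as follows; the function name and the sample dates are illustrative, and a real implementation would draw the dates from the historical detection information in the database.

```python
from datetime import date, timedelta

def suggest_next_test(positive_dates):
    """Estimate the physiological cycle as the mean gap between past
    positive ovulation-test dates and suggest the next test date."""
    gaps = [(b - a).days for a, b in zip(positive_dates, positive_dates[1:])]
    cycle = timedelta(days=round(sum(gaps) / len(gaps)))
    return positive_dates[-1] + cycle

# Invented history: positive results roughly every 28 days.
history = [date(2023, 1, 5), date(2023, 2, 2), date(2023, 3, 2)]
```

With the history above, the estimated cycle is 28 days and the suggested next detection falls 28 days after the most recent positive result.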
According to the embodiment of the invention, the ovulation test paper is rapidly detected through the intelligent terminal, so that the ovulation detection method is more convenient and simple and saves time and cost. A user can finish the detection without leaving home and learn about his or her physical condition without an additional expensive instrument, which is convenient and quick, saves time and money, and better protects the user's personal privacy. In addition, the embodiment of the invention can quickly and accurately read the test paper detection result, so that test paper detection is converted from traditional qualitative detection into quantitative analysis, and the suggested detection time is provided for the user, so that the user can predict the optimal time for conception or pregnancy.
Of course, the embodiment of the invention is also applicable to result detection of other types of test paper, such as multi-line test paper, multi-T-line test paper and 1C-line test paper.
Correspondingly, fig. 3 is a schematic structural diagram of a test paper identification device according to an embodiment of the present invention. As shown in fig. 3, the device is applied to a server and includes: an acquiring unit 31, configured to acquire an image to be identified, where the image to be identified includes a test paper image to be identified; a test paper image extraction unit 32, configured to extract the test paper image to be identified from the image to be identified by using an image recognition method based on first deep learning; a category determining unit 33, configured to identify, by using a preset category model, the category to which the mark in the test paper image to be identified belongs, and to extract detection line information corresponding to the category; a position determining unit 34, configured to determine the identification line position from the test paper image to be identified by using an image recognition method based on second deep learning; and a result determining unit 35, configured to determine a test paper detection result according to the pixel brightness value corresponding to the identification line position and the detection line information by using a regular image processing method.
Further, the test paper image extraction unit is further configured to: acquire an edge gray-scale image corresponding to the test paper image to be identified from the image to be identified by using a preset HED model; and map the edge gray-scale image onto the image to be identified by using OpenCV, so as to obtain the test paper image to be identified.
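A production implementation of the mapping step would typically use OpenCV routines such as `cv2.findContours` together with a perspective transform (`cv2.getPerspectiveTransform` / `cv2.warpPerspective`) to rectify a tilted strip. The NumPy-only sketch below is a simplified stand-in that illustrates the idea with a plain bounding-box crop of the edge pixels; the threshold value is an assumption:

```python
import numpy as np

def crop_from_edge_mask(image, edge_mask, thresh=128):
    """Map an HED edge gray-scale map back onto the source image:
    threshold the edge map, take the bounding box of all edge
    pixels, and crop that region out of the original image."""
    ys, xs = np.where(edge_mask >= thresh)
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    return image[top:bottom + 1, left:right + 1]

# Toy 10x10 "image" with an edge mask covering rows 2-4, cols 3-7
img = np.arange(100).reshape(10, 10)
mask = np.zeros((10, 10))
mask[2:5, 3:8] = 255
print(crop_from_edge_mask(img, mask).shape)  # (3, 5)
```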
Further, the preset HED model is established in the following manner: preprocessing an image sample to obtain an image training sample that conforms to a specified pixel size; marking the edge of the test paper in the image training sample to obtain an image marking sample; and training a pre-trained HED model with the image training sample and the corresponding image marking sample until the loss function value of the pre-trained HED model stabilizes, and then determining the pre-trained HED model as the preset HED model.
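The "train until the loss function value stabilizes" criterion used here (and again below for the CRNN model) can be sketched as a framework-agnostic plateau check. The `patience` and `tol` values are illustrative assumptions, and `train_step` stands in for one epoch of whatever framework is actually used:

```python
def train_until_stable(train_step, patience=3, tol=1e-3, max_epochs=100):
    """Run training epochs until the loss has changed by less than
    `tol` for `patience` consecutive epochs, i.e. until the loss
    function value is stable. `train_step` is any callable that
    runs one epoch and returns its loss."""
    prev, stable = float("inf"), 0
    loss = prev
    for _ in range(max_epochs):
        loss = train_step()
        stable = stable + 1 if abs(prev - loss) < tol else 0
        if stable >= patience:
            break
        prev = loss
    return loss

losses = iter([1.0, 0.5, 0.30, 0.2999, 0.2999, 0.2999])
print(train_until_stable(lambda: next(losses)))  # stops at 0.2999 once the loss plateaus
```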
Further, the preset category model is obtained in the following manner: acquiring a test paper image training set, classifying the test paper images in the training set according to the category of the mark, and storing each category together with its corresponding detection line information; and training a pre-trained CNN model with the test paper image training set and the category corresponding to each test paper image until the loss function value of the pre-trained CNN model stabilizes, and then determining the pre-trained CNN model as the preset category model.
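The step of identifying the category and extracting its stored detection line information can be sketched as follows. The category names, the shape of the stored line information, and the classifier scores are all hypothetical placeholders; in the actual device the scores would come from the trained CNN:

```python
# Hypothetical stored mapping from mark category to detection line
# information (number of lines and their relative positions).
DETECTION_LINE_INFO = {
    "brand_a_lh": {"num_lines": 2, "c_line_pos": 0.25, "t_line_pos": 0.65},
    "brand_b_lh": {"num_lines": 2, "c_line_pos": 0.30, "t_line_pos": 0.70},
}

def classify_and_lookup(scores, categories, line_info):
    """Pick the category with the highest classifier score and
    return it together with the detection line information that
    was stored for that category."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    category = categories[best]
    return category, line_info[category]

cat, info = classify_and_lookup(
    [0.1, 0.9], ["brand_a_lh", "brand_b_lh"], DETECTION_LINE_INFO)
print(cat, info["t_line_pos"])  # brand_b_lh 0.7
```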
Further, the position determining unit is further configured to: obtain a region classification in the test paper image to be identified by using a preset CRNN model, where the region classification includes a blank region, a region to the left of the identification line position and a region to the right of the identification line position; and determine the identification line position in the test paper image to be identified according to the region classification.
Further, the position determining unit is further configured to: convert the test paper image to be identified into feature sequence information by using the preset CRNN model; and determine the region classification according to the color jumps in the feature sequence information.
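The "color jump" step can be illustrated on a one-dimensional sequence. The sketch below assumes the feature sequence has been reduced to one brightness-like value per column (a simplification of the CRNN output) and that the jump threshold is a tunable assumption: a large downward jump marks the left edge of a line, and the next large upward jump marks its right edge:

```python
def find_line_region(seq, jump_thresh=30):
    """Locate the identification line region from color jumps in a
    per-column feature sequence: the first downward jump larger
    than `jump_thresh` is the left boundary, the next comparable
    upward jump is the right boundary; the columns before and
    after are the blank / left / right regions."""
    left = right = None
    for i in range(1, len(seq)):
        delta = seq[i] - seq[i - 1]
        if delta <= -jump_thresh and left is None:
            left = i
        elif delta >= jump_thresh and left is not None:
            right = i
            break
    return left, right

# Bright background (~200) with a dark line in columns 3-5
seq = [200, 201, 199, 120, 118, 119, 200, 202]
print(find_line_region(seq))  # (3, 6)
```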
Further, the preset CRNN model is established in the following manner: preprocessing a test paper image sample to obtain a test paper image training sample that conforms to a set pixel size; classifying and marking the regions in the test paper image training sample to obtain a test paper image marking sample; and training a pre-trained CRNN model with the test paper image training sample and the corresponding test paper image marking sample until the loss function value of the pre-trained CRNN model stabilizes, and then determining the pre-trained CRNN model as the preset CRNN model.
Further, the result determining unit is further configured to: convert the color space of the identification line position into the LAB color space; extract the pixel brightness value corresponding to the identification line position in the LAB color space; and determine the test paper detection result according to the comparison between the pixel brightness value and the detection line information.
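In practice the LAB conversion would be done with OpenCV (`cv2.cvtColor(img, cv2.COLOR_BGR2LAB)`), whose L channel is the lightness value used here. The NumPy sketch below assumes the L channel has already been extracted and shows only the comparison step; the line spans and the darkness margin are illustrative assumptions:

```python
import numpy as np

def read_result(l_channel, c_span, t_span, blank_span, margin=5.0):
    """Compare mean L (lightness) values at the C-line and T-line
    column spans against a blank background span: a line counts as
    present when it is darker than the background by more than
    `margin`. C and T present -> positive; C only -> negative;
    no C line -> invalid."""
    blank = l_channel[:, blank_span[0]:blank_span[1]].mean()
    c = l_channel[:, c_span[0]:c_span[1]].mean()
    t = l_channel[:, t_span[0]:t_span[1]].mean()
    if blank - c <= margin:          # control line missing
        return "invalid"
    return "positive" if blank - t > margin else "negative"

l = np.full((4, 30), 90.0)  # bright background (high L = light)
l[:, 5:8] = 40.0            # dark C line
l[:, 15:18] = 55.0          # dark T line
print(read_result(l, (5, 8), (15, 18), (25, 30)))  # positive
```

Because the comparison is numeric rather than visual, the same code naturally extends from a qualitative yes/no reading to a quantitative line-intensity measurement.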
Further, as shown in fig. 4, the device further includes: a physiological information determining unit 41, configured to determine, according to the test paper detection result, the current physiological information of the user corresponding to the image to be identified; and a processing unit 42, configured to provide the test paper detection result and the current physiological information of the user to the user.
Further, the image to be identified further includes a user identifier. As shown in fig. 5, the device further includes: an identifier extracting unit 51, configured to extract the user identifier included in the image to be identified; a searching unit 52, configured to search a database for historical detection information corresponding to the user identifier, where the historical detection information includes historical test paper detection results and historical physiological information; and a cycle determining unit 53, configured to determine the physiological cycle corresponding to the user according to the historical physiological information and the current physiological information. The processing unit is further configured to determine, according to the physiological cycle, the suggested time for the user to perform the next test paper detection, and to provide the suggested time to the user.
According to the embodiment of the invention, an image recognition method based on deep learning is combined with a regular image processing method. The test paper image to be identified is first extracted from the image to be identified, which avoids the influence of a complex background on the test paper detection result. Because test paper of every brand carries a mark, the category of the mark on the test paper is identified by using the preset category model, and the detection line information corresponding to that category is obtained; after the identification line position is determined, the test paper detection result is determined by comparing the pixel brightness value corresponding to the identification line position with the detection line information. The embodiment of the invention can quickly identify the test paper detection result through the server, which is convenient and fast and saves time and cost.
For the operation of the device, reference may be made to the implementation of the test paper identification method described above.
Accordingly, an embodiment of the present invention further provides a machine-readable storage medium storing instructions for causing a machine to execute the test paper identification method described above.
Correspondingly, fig. 6 is a schematic structural diagram of a device provided in an embodiment of the present invention. As shown in fig. 6, the device 60 includes at least one processor 61, at least one memory 62 connected to the processor, and a bus 63; the processor and the memory communicate with each other through the bus; and the processor is configured to call program instructions in the memory to execute the test paper identification method of the above embodiment.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a device includes one or more processors (CPUs), memory, and a bus. The device may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip. The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.