
CN113537203A - Test paper identification method and device, storage medium and equipment - Google Patents

Test paper identification method and device, storage medium and equipment

Info

Publication number
CN113537203A
CN113537203A (application CN202010301577.1A)
Authority
CN
China
Prior art keywords
image
test paper
model
identified
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010301577.1A
Other languages
Chinese (zh)
Other versions
CN113537203B (en)
Inventor
王胤
王威
左志伟
罗培克
黄真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Xingjian Medical Union Technology Co ltd
Original Assignee
Beijing Aikangtai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aikangtai Technology Co ltd filed Critical Beijing Aikangtai Technology Co ltd
Priority to CN202010301577.1A priority Critical patent/CN113537203B/en
Publication of CN113537203A publication Critical patent/CN113537203A/en
Application granted granted Critical
Publication of CN113537203B publication Critical patent/CN113537203B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a test paper identification method, apparatus, storage medium and device, belonging to the technical field of image processing. The method includes: acquiring an image to be identified, the image to be identified including a test paper image to be identified; extracting the test paper image to be identified from the image to be identified by using a first deep-learning-based image recognition method; identifying, with a preset category model, the category to which the mark in the test paper image to be identified belongs, and extracting detection line information corresponding to that category; determining the identification line position in the test paper image to be identified by using a second deep-learning-based image recognition method; and determining the test paper detection result, with a regular image processing method, from the image brightness value corresponding to the identification line position and the detection line information. Embodiments of the present invention are applicable to the test paper identification process.

Description

Test paper identification method and device, storage medium and equipment
Technical Field
The invention relates to the technical field of image processing, in particular to a test paper identification method, a test paper identification device, a storage medium and equipment.
Background
Female pregnancy test paper is one of the main tools for scientific pregnancy preparation and assisted conception. It mainly comprises two types: ovulation test paper and early pregnancy test paper. Ovulation test paper predicts when ovulation occurs mainly by detecting the peak level of luteinizing hormone (LH). Early pregnancy test paper is mainly used to determine whether a woman is pregnant and to assess early fetal development by detecting human chorionic gonadotropin (HCG). For both types of test paper, the result is currently read by eye: the color depth of the detection line (T) is compared with that of the control line (C) to judge whether the index to be detected is negative or positive. When the detection line (T) is as dark as or darker than the control line (C), the test paper reads positive; otherwise it reads negative. Reading by eye is prone to deviation.
With the rise of the mobile internet, the internet of things and artificial intelligence, miniaturized data acquisition devices and smartphone-based test paper image identification methods have gradually become mainstream. In the prior art there are generally two recognition approaches. One is an image recognition method based on deep learning: a convolutional neural network is used to train a deep learning model on images, deep learning features are extracted and built into a database, and the features are finally classified with an SVM (Support Vector Machine). The other is an image identification method that extracts features from regular image information: Harris corner detection and image calibration are performed first, the image is then converted from the RGB color space to CIELAB, the color difference between the test paper color and the color blocks of a standard colorimetric card is calculated, and a nearest neighbor algorithm assigns the test paper color to the corresponding colorimetric card class to complete the analysis of the test paper measurement index. However, with these prior-art identification methods the test paper detection result is still affected by complex backgrounds and by the variety of test paper brands, so the prior-art identification technology cannot achieve universal identification.
Disclosure of Invention
The embodiments of the invention aim to provide a test paper identification method, a test paper identification device, a storage medium and equipment, which address the prior-art problem that the accuracy of the test paper detection result is low because test paper identification is affected by complex backgrounds, and the problem that the identification technology has poor universality across different brands of test paper.
In order to achieve the above object, an embodiment of the present invention provides a test paper identification method, including: acquiring an image to be identified, wherein the image to be identified comprises a test paper image to be identified; extracting the test paper image to be recognized from the image to be recognized by using an image recognition method based on first deep learning; identifying the category to which the mark in the test paper image to be identified belongs by using a preset category model, and extracting detection line information corresponding to the category; determining the position of a recognition line from the test paper image to be recognized by using an image recognition method based on second deep learning; and determining a test paper detection result according to the pixel brightness value corresponding to the identification line position and the detection line information by using a regular image processing method.
Further, the extracting, by using the image recognition method based on the first deep learning, the test strip image to be recognized from the image to be recognized includes: acquiring an edge gray-scale image corresponding to the test paper image to be identified from the image to be identified by using a preset HED model; and mapping the edge gray-scale image in the image to be identified by utilizing an OpenCV technology to obtain the test paper image to be identified.
Further, the preset HED model is established in the following manner: preprocessing the image sample to obtain an image training sample which accords with the specified pixel size; marking the edge of the test paper in the image training sample to obtain an image marked sample; and training a pre-trained HED model by using the image training sample and the corresponding image marking sample, and determining the pre-trained HED model as the preset HED model until the loss function value of the pre-trained HED model is stable.
Further, the preset category model is obtained by: acquiring a test paper image training set, classifying test paper images in the test paper image training set according to the types of marks, and storing the types and detection line information corresponding to the types; and training a pre-training CNN model by using the test paper image training set and the category corresponding to each test paper image until the loss function value of the pre-training CNN model is stable, and determining the pre-training CNN model as the preset category model.
Further, the determining the position of the identification line from the test strip image to be identified by using the image identification method based on the second deep learning method comprises the following steps: obtaining a region classification in the test paper image to be recognized by using a preset CRNN model, wherein the region classification comprises a blank region, a left region of a recognition line position and a right region of the recognition line position; and determining the position of the identification line in the test paper image to be identified according to the region classification.
Further, the obtaining of the region classification in the test paper image to be recognized by using the preset CRNN model includes: converting the test paper image to be identified into characteristic sequence information by using the preset CRNN model; and determining the region classification according to the color jump in the characteristic sequence information.
Further, the CRNN model is built in the following manner: preprocessing a test paper image sample to obtain a test paper image training sample which accords with a set pixel size; classifying and marking the regions in the test paper image training sample to obtain a test paper image marking sample; and training a pre-trained CRNN model by using the test paper image training sample and a corresponding test paper image marking sample, and determining the pre-trained CRNN model as the preset CRNN model until the loss function value of the pre-trained CRNN model tends to be stable.
Further, the determining, by using the regular image processing method, a test strip detection result according to the pixel brightness value corresponding to the identification line position and the detection line information includes: converting the color space of the identification line position into an LAB color space; extracting corresponding pixel brightness values of the identification line positions in an LAB color space; and determining the test paper detection result according to the comparison result of the pixel brightness value and the detection line information.
Further, after determining the test result of the test strip, the method further comprises: determining the current physiological information of the user corresponding to the image to be identified according to the test paper detection result; and providing the test paper detection result and the current physiological information of the user to the user.
Further, the image to be recognized further includes a user identifier, and after the determination of the test paper detection result, the method further includes: extracting a user identifier included in the image to be recognized; searching historical detection information corresponding to the user identification in a database, wherein the historical detection information comprises a historical test paper detection result and historical physiological information; determining a physiological cycle corresponding to the user according to the historical physiological information and the current physiological information; and determining the suggested time for the user to perform the next test paper detection according to the physiological cycle, and providing the suggested time for the user.
Correspondingly, the embodiment of the invention also provides a test paper identification device, which comprises: the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be identified, and the image to be identified comprises a test paper image to be identified; the test paper image extraction unit is used for extracting the test paper image to be identified from the image to be identified by using an image identification method based on first deep learning; the category determining unit is used for identifying the category to which the mark in the test paper image to be identified belongs by using a preset category model, and extracting detection line information corresponding to the category; the position determining unit is used for determining the position of a recognition line from the test paper image to be recognized by using an image recognition method based on second deep learning; and the result determining unit is used for determining a test paper detection result according to the image brightness value corresponding to the identification line position and the detection line information by using a regular image processing method.
Further, the test paper image extraction unit is further configured to: acquiring an edge gray-scale image corresponding to the test paper image to be identified from the image to be identified by using a preset HED model; and mapping the edge gray-scale image in the image to be identified by utilizing an OpenCV technology to obtain the test paper image to be identified.
Further, the preset HED model is established in the following manner: preprocessing the image sample to obtain an image training sample which accords with the specified pixel size; marking the edge of the test paper in the image training sample to obtain an image marked sample; and training a pre-trained HED model by using the image training sample and the corresponding image marking sample, and determining the pre-trained HED model as the preset HED model until the loss function value of the pre-trained HED model is stable.
Further, the preset category model is obtained by: acquiring a test paper image training set, classifying test paper images in the test paper image training set according to the types of marks, and storing the types and detection line information corresponding to the types; and training a pre-training CNN model by using the test paper image training set and the category corresponding to each test paper image until the loss function value of the pre-training CNN model is stable, and determining the pre-training CNN model as the preset category model.
Further, the position determination unit is further configured to: obtaining a region classification in the test paper image to be recognized by using a preset CRNN model, wherein the region classification comprises a blank region, a left region of a recognition line position and a right region of the recognition line position; and determining the position of the identification line in the test paper image to be identified according to the region classification.
Further, the position determination unit is further configured to: converting the test paper image to be identified into characteristic sequence information by using the preset CRNN model; and determining the region classification according to the color jump in the characteristic sequence information.
Further, the CRNN model is built in the following manner: preprocessing a test paper image sample to obtain a test paper image training sample which accords with a set pixel size; classifying and marking the regions in the test paper image training sample to obtain a test paper image marking sample; and training a pre-trained CRNN model by using the test paper image training sample and a corresponding test paper image marking sample, and determining the pre-trained CRNN model as the preset CRNN model until the loss function value of the pre-trained CRNN model tends to be stable.
Further, the result determination unit is further configured to: converting the color space of the identification line position into an LAB color space; extracting corresponding image brightness values of the identification line positions in an LAB color space; and determining the test result of the test paper according to the comparison result of the image brightness value and the detection line information.
Further, the apparatus further comprises: the physiological information determining unit is used for determining the current physiological information of the user corresponding to the image to be identified according to the test paper detection result; and the processing unit is used for providing the test paper detection result and the current physiological information of the user to the user.
Further, the image to be recognized further includes a user identifier, and the apparatus further includes: the identification extraction unit is used for extracting a user identification included in the image to be identified; the searching unit is used for searching historical detection information corresponding to the user identification in a database, wherein the historical detection information comprises a historical test paper detection result and historical physiological information; the period determining unit is used for determining a physiological period corresponding to the user according to the historical physiological information and the current physiological information; the processing unit is further used for determining the suggested time for the user to perform the next test paper detection according to the physiological cycle and providing the suggested time for the user.
Accordingly, the embodiment of the present invention also provides a machine-readable storage medium, which stores instructions for causing a machine to execute the test strip identification method as described above.
Correspondingly, the embodiment of the invention also provides equipment, which comprises at least one processor, at least one memory and a bus, wherein the memory and the bus are connected with the processor; the processor and the memory complete mutual communication through the bus; the processor is configured to invoke program instructions in the memory to perform the test strip identification method described above.
According to the above technical scheme, a test paper identification method combining deep learning with regular image processing is used to extract the test paper image from the image to be identified, so the test paper image is not affected by a complex background; the identification line position in the test paper image is then determined directly through color jumps, and the test paper detection result is determined from the image brightness value corresponding to the identification line position together with the detection line information corresponding to the test paper category. The embodiments of the invention solve the prior-art problem that complex backgrounds reduce the accuracy of test paper detection results, improve the universality of test paper image identification, and are convenient and quick, saving time and cost.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
FIG. 1 is a schematic flow chart of a test paper identification method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another method for identifying test strips according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a test paper identification apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another test paper identification apparatus provided in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of another test paper identification device provided in the embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 is a schematic flow chart of a test paper identification method according to an embodiment of the present invention. As shown in fig. 1, the method is applied to a server, and the method includes the following steps:
step 101, obtaining an image to be identified, wherein the image to be identified comprises a test paper image to be identified;
102, extracting a test paper image to be recognized from the image to be recognized by using an image recognition method based on first deep learning;
step 103, identifying the category to which the mark in the test paper image to be identified belongs by using a preset category model, and extracting detection line information corresponding to the category;
104, determining the position of a recognition line from the test paper image to be recognized by using an image recognition method based on second deep learning;
and 105, determining a test paper detection result according to the image brightness value corresponding to the identification line position and the detection line information by using a regular image processing method.
The test paper identified in the embodiment of the present invention includes POCT (point-of-care testing) test paper, such as pregnancy test paper, ovulation test paper, multiple T-line test paper, and the like.
After a user performs a test with the test paper, the test paper can be photographed with any device that has a camera, and the resulting image to be recognized is uploaded to a server for test paper identification. The user therefore only needs to keep the image to be recognized after the test is completed and does not need to keep the test paper itself, which is a great convenience.
Besides the test paper image to be recognized, the image captured by the user also contains unnecessary, complex background information. Therefore, in the embodiment of the present invention, after the server receives the image to be identified (which includes the test paper image to be identified) uploaded by the user, the test paper image to be identified needs to be extracted from the complex background.
In addition, the test paper image to be recognized differs from other photographed objects: it is a long, thin, strip-shaped image whose aspect ratio is too large for a conventional image segmentation algorithm. Therefore, in step 102, the embodiment of the present invention uses a preset HED model to obtain, from the image to be recognized, an edge grayscale image corresponding to the test paper image to be recognized, and then uses the OpenCV technique to map the edge grayscale image back onto the image to be recognized to obtain the test paper image to be recognized.
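The patent text does not include code for this mapping step; the following Python/OpenCV sketch illustrates one way the edge grayscale map produced by an HED-style model could be mapped back onto the photo to cut out the strip. The thresholding value, the fallback to a minimum-area rectangle, and the fixed output size are illustrative assumptions rather than details taken from the disclosure.

```python
import cv2
import numpy as np

def crop_strip(image_bgr, edge_map, out_w=1080, out_h=96):
    """Map an HED-style edge probability map back onto the photo and rectify the strip.
    Assumes OpenCV 4.x; edge_map is single-channel and the same size as image_bgr."""
    if edge_map.dtype != np.uint8:
        edge_map = (edge_map * 255).astype(np.uint8)
    _, edges = cv2.threshold(edge_map, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Assume the largest contour outlines the strip; approximate it by a quadrilateral.
    contour = max(contours, key=cv2.contourArea)
    peri = cv2.arcLength(contour, True)
    quad = cv2.approxPolyDP(contour, 0.02 * peri, True)
    if len(quad) != 4:
        # Fall back to the minimum-area rotated rectangle.
        quad = cv2.boxPoints(cv2.minAreaRect(contour))
    src = np.asarray(quad, dtype=np.float32).reshape(4, 2)
    # Order corners (top-left, top-right, bottom-right, bottom-left) for the warp.
    s, d = src.sum(axis=1), np.diff(src, axis=1).ravel()
    ordered = np.array([src[np.argmin(s)], src[np.argmin(d)],
                        src[np.argmax(s)], src[np.argmax(d)]], dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0], [out_w - 1, out_h - 1], [0, out_h - 1]],
                   dtype=np.float32)
    M = cv2.getPerspectiveTransform(ordered, dst)
    return cv2.warpPerspective(image_bgr, M, (out_w, out_h))
```

The rectified strip returned here would then be passed to the category model and the sequence model described in the following steps.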
The preset HED model is established in the following mode:
First, an image sample is preprocessed to obtain an image training sample of a specified pixel size, for example 128 × 128 pixels: if the resolution of the image sample exceeds this size, the image is cropped, and if the resolution is insufficient, the image is stretched and tiled.
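As an illustration only (the exact crop and stretch policy is not specified in the text), a minimal Python/OpenCV preprocessing sketch could look like this:

```python
import cv2

def preprocess(sample_bgr, size=128):
    """Crop a center window if the sample exceeds the target size,
    otherwise stretch it to the target size (illustrative policy)."""
    h, w = sample_bgr.shape[:2]
    if h >= size and w >= size:
        top, left = (h - size) // 2, (w - size) // 2
        return sample_bgr[top:top + size, left:left + size]
    return cv2.resize(sample_bgr, (size, size), interpolation=cv2.INTER_LINEAR)
```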
Then, the edge of the test paper in the image training sample is marked to obtain an image marking sample. The marking can be done with annotation software: for example, the positions of the 4 vertices of the rectangular test paper edge are obtained, the 4 coordinates are mapped into a quadrilateral, a new all-black binary image is created, and the corresponding edge of the quadrilateral is drawn in white on it.
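A minimal sketch of this labeling step, assuming the four vertices are available as (x, y) pixel coordinates in order around the quadrilateral; the 128 × 128 mask size mirrors the preprocessing example above and is otherwise an assumption:

```python
import cv2
import numpy as np

def make_edge_label(vertices, size=(128, 128)):
    """vertices: four (x, y) corner points of the strip in the training image,
    given in order around the quadrilateral. Returns a binary edge mask:
    an all-black image with the quadrilateral's outline drawn in white."""
    mask = np.zeros(size, dtype=np.uint8)            # new all-black binary image
    pts = np.asarray(vertices, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(mask, [pts], isClosed=True, color=255, thickness=1)
    return mask
```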
And then, training a pre-trained HED model by using the image training sample and the corresponding image marking sample, and determining the pre-trained HED model as the preset HED model until the loss function value of the pre-trained HED model tends to be stable.
The pre-trained HED model comprises 5 convolutional layers; after an image training sample passes through these layers, 5 feature maps are obtained, with the dimensions shown in Table 1 below. Each of the 5 feature maps is then deconvolved into a single-channel feature map at a common 256 × 256 size, the 5 deconvolved maps are merged into one feature map corresponding to the image training sample, and the merged map is compared with the image marking sample corresponding to that training sample to obtain a loss function value. After continuous training optimization, for example 100,000 iterations, the loss function value tends to be stable, and the pre-trained HED model at that point is determined as the preset HED model. The model optimizer is stochastic gradient descent, the learning rate is set to 0.001, and the loss function is a cross-entropy function. (A code sketch of this architecture is given after Table 1.)
TABLE 1
Layer    Feature map generated after the convolution filter
1        256*256*5
2        128*128*14
3        74*74*40
4        37*37*92
5        18*18*196
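The disclosure does not give kernel sizes, strides or training code, so the following PyTorch sketch only mirrors the described pattern: five convolutional stages whose channel counts follow Table 1, each stage reduced to a single-channel side output and upsampled to a common resolution, the five side outputs fused into one edge map, and training with stochastic gradient descent (learning rate 0.001) and a cross-entropy-style loss. Details such as the kernel size of 3 and the pooling between stages are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HEDSketch(nn.Module):
    """Five conv stages; each stage's feature map is reduced to one channel and
    upsampled to the input resolution, then the five side outputs are fused."""
    def __init__(self, channels=(5, 14, 40, 92, 196)):
        super().__init__()
        ins = (3,) + channels[:-1]
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(i, o, 3, padding=1), nn.ReLU(inplace=True))
            for i, o in zip(ins, channels))
        self.side = nn.ModuleList(nn.Conv2d(o, 1, 1) for o in channels)
        self.fuse = nn.Conv2d(len(channels), 1, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        sides = []
        for stage, side in zip(self.stages, self.side):
            x = stage(x)
            sides.append(F.interpolate(side(x), size=(h, w), mode="bilinear",
                                       align_corners=False))
            x = F.max_pool2d(x, 2)                   # downsample between stages
        return torch.sigmoid(self.fuse(torch.cat(sides, dim=1)))

# Training sketch: SGD at learning rate 0.001 and a binary cross-entropy loss,
# comparing the fused edge prediction against the labeled edge mask scaled to [0, 1].
model = HEDSketch()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
criterion = nn.BCELoss()
```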
Through the preset HED model and the OpenCV technique, the test paper image to be recognized is extracted from the image to be recognized, avoiding the influence of the complex background.
Thereafter, in step 103, because multi-line test paper and multi-T-line test paper are sold under multiple brands, the mark (logo) on each test paper is used to distinguish the test data represented by each identification line on that test paper. Marks differ between brands, and test paper of the same brand that covers different indexes also carries different marks. Therefore, the category to which the mark in the test paper image to be identified belongs can be identified with a preset category model, and the detection line information corresponding to that category can be extracted. The preset category model is established in advance as follows: a test paper image training set is obtained, the test paper images in the training set are classified according to the categories of the marks, and the categories and the detection line information corresponding to each category are stored in a category database, where the detection line information comprises the indexes corresponding to the identification lines on that category of test paper and the preset range of each index. For example, when the category is C/T-line test paper, the detection line information further includes a preset ratio range for the brightness values corresponding to the T line and the C line. A pre-training CNN model is then trained with the test paper image training set and the category corresponding to each test paper image; the model passes the image through 5 convolutional layers, then a fully connected layer, and then classifies. When the loss function value of the pre-training CNN model tends to be stable, the pre-training CNN model is determined as the preset category model.
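A minimal PyTorch sketch of such a category (logo) classifier follows; the channel widths, the adaptive pooling before the fully connected layer, and the number of categories are assumptions, since the disclosure only states "5 convolutional layers, then full connection, then classification".

```python
import torch.nn as nn

class LogoClassifierSketch(nn.Module):
    """Five conv layers followed by a fully connected classifier: the network maps
    a strip image to the category of the brand mark (logo) printed on it."""
    def __init__(self, num_categories=10, in_channels=3):
        super().__init__()
        widths = (16, 32, 64, 128, 256)              # assumed channel widths
        layers, prev = [], in_channels
        for w in widths:
            layers += [nn.Conv2d(prev, w, 3, padding=1), nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            prev = w
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)          # make the FC input size-independent
        self.fc = nn.Linear(widths[-1], num_categories)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.fc(x)                            # category logits
```

The predicted category would then be used to look up the stored detection line information for that brand and index set.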
Meanwhile, in step 104, the region classification in the test paper image to be recognized is obtained with the preset CRNN model, and the identification line position in the test paper image to be recognized is determined from the region classification. Specifically, the preset CRNN model converts the test paper image to be identified into feature sequence information, and the region classification is determined from the color jumps in the feature sequence information, where the region classification includes a blank region, a region to the left of the identification line position, and a region to the right of the identification line position.
The CRNN model is established in the following way:
firstly, a test paper image sample is preprocessed to obtain a test paper image training sample which accords with a set pixel size. For example, after the test paper image sample is cut or stretched and laid flat, a test paper image training sample with a size of 1080 × 96 is obtained. Then, the area classifications in the test paper image training sample are marked to obtain a test paper image marked sample, for example, the area classifications in the test paper image training sample are marked as a blank area None, an identification line position left side area I-left, and an identification line position right side area I-right, respectively. Of course, for test strips including a plurality of identification lines, the region classification also includes a left region and a right region of the plurality of identification line positions. And then, training a pre-trained CRNN model by using the test paper image training sample and a corresponding test paper image marking sample, and determining the pre-trained CRNN model as the preset CRNN model until the loss function value of the pre-trained CRNN model tends to be stable.
The test paper can be regarded as a piece of sequence information: from right to left it consists of a brand mark area and an identification line area, so a test paper image can be treated as data carrying sequence information, and the sequence position of the identification line is predicted by the pre-training CRNN model. The pre-training CRNN model has three parts. The first part is a convolutional structure: three layers of convolutional neural networks extract features in the vertical direction while the horizontal scale of the input and output is kept unchanged, converting the test paper image training sample into feature sequence information. The second part is a three-layer bidirectional recurrent neural network, for which the recurrent unit chosen is the GRU, because the GRU improves on the traditional RNN and the LSTM in both computation speed and training effect. The third part is a fully connected layer that performs region classification on the features output at each sequence coordinate of the recurrent structure; the predicted region classification is compared with the region classification in the corresponding test paper image marking sample to obtain a loss function value. After continuous training optimization, for example 200 iterations, the loss function value tends to be stable, and the pre-trained CRNN model at that point is determined as the preset CRNN model. The model optimizer is stochastic gradient descent, the learning rate is set to 0.001, and the loss function is a cross-entropy function.
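The following PyTorch sketch illustrates this three-part structure under stated assumptions: the channel widths and pooling pattern of the convolutional part are not disclosed, and the class set {None, I-left, I-right} corresponds to the three region labels mentioned above. Training would use a per-column cross-entropy loss with stochastic gradient descent at learning rate 0.001, as described.

```python
import torch
import torch.nn as nn

class CRNNSketch(nn.Module):
    """Convolutions pool only vertically so the horizontal length (the sequence axis)
    is preserved; a 3-layer bidirectional GRU models the sequence; a fully connected
    layer classifies every horizontal position into None / I-left / I-right."""
    def __init__(self, in_channels=3, num_classes=3, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d((2, 1)),                    # halve height, keep width
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d((2, 1)),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((1, None)))         # collapse remaining height to 1
        self.rnn = nn.GRU(128, hidden, num_layers=3, bidirectional=True,
                          batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                            # x: (batch, C, 96, 1080)
        f = self.conv(x).squeeze(2).transpose(1, 2)  # -> (batch, width, 128)
        out, _ = self.rnn(f)
        return self.fc(out)                          # per-column region logits
```

Scanning the per-column predictions for the transition from I-left to I-right gives the column of the identification line.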
After the identification line position has been determined by the above steps, in step 105 the image at the identification line position is converted into the LAB color space. The image brightness value corresponding to the identification line position in the LAB color space is then extracted and compared with the brightness value range in the detection line information. For example, when the test paper is multi-line or multi-T-line test paper, the index corresponding to each identification line and the preset range of that index can be read from the corresponding detection line information, and the detection result of the index is determined by where the image brightness value falls relative to the preset range. When the test paper is C/T-line test paper, the identification line positions are a T line and a C line; after the image brightness values corresponding to the T line and the C line are obtained, the test paper detection result is determined from the ratio of the two brightness values and the preset ratio range in the detection line information corresponding to this type of test paper.
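A minimal Python/OpenCV sketch of this last step; the column spans of the T and C lines are assumed to come from the sequence model, and expressing the comparison as the ratio of mean L-channel (lightness) values is one plausible reading of the "image brightness value" used in the text.

```python
import cv2
import numpy as np

def tc_ratio(strip_bgr, t_span, c_span):
    """t_span / c_span: (x_start, x_end) column ranges of the T and C lines as
    located by the sequence model. Returns the ratio of their mean L (lightness)
    values in the LAB color space; darker lines give lower L."""
    lab = cv2.cvtColor(strip_bgr, cv2.COLOR_BGR2LAB)
    L = lab[:, :, 0].astype(np.float32)
    t_l = float(L[:, t_span[0]:t_span[1]].mean())
    c_l = float(L[:, c_span[0]:c_span[1]].mean())
    return t_l / c_l if c_l > 0 else float("inf")
```

The returned ratio would then be compared with the preset ratio range stored in the detection line information for the recognized category to produce the final detection result.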
According to the embodiment of the invention, the first deep-learning-based image recognition method extracts the test paper image to be recognized from the image to be recognized, so the test paper detection result is not affected by a complex background. Because every brand of test paper carries a mark, the category of the mark on the test paper is recognized with the preset category model and the detection line information corresponding to that category is obtained; the identification line position is then determined directly with the second deep-learning-based image recognition method, and the test paper detection result is determined with the regular image processing method by comparing the image brightness value corresponding to the identification line position with the detection line information.
To facilitate understanding of the embodiment of the present invention, fig. 2 is a schematic flow chart of a test paper identification method according to the embodiment of the present invention. The embodiment of the invention is described by taking ovulation test paper as an example, and as shown in fig. 2, the method comprises the following steps:
step 201, shooting an image of ovulation test paper to be identified by an intelligent terminal to obtain an image to be identified, and uploading the image to be identified to a server, wherein the image to be identified comprises a user identifier and the image of the ovulation test paper to be identified.
After a user performs a test with the ovulation test paper, an intelligent terminal with a camera can be used to photograph the ovulation test paper, and the resulting image to be recognized is uploaded to a server to obtain the test paper detection result. The user therefore only needs to keep the image to be identified after the test is finished and does not need to keep the test paper, which is a great convenience.
202, a server receives an image to be identified uploaded by an intelligent terminal;
step 203, acquiring an edge gray-scale image corresponding to the test paper image to be identified from the image to be identified by using the preset HED model;
step 204, mapping the edge gray level image in the image to be recognized according to the edge gray level image by using an OpenCV technology to obtain the test paper image to be recognized;
step 205, identifying the category to which the mark in the test paper image to be identified belongs by using a preset category model, and extracting detection line information corresponding to the category, wherein the detection line information indicates that the category is ovulation test paper and comprises a preset ratio range corresponding to the ratio of the image brightness values of the T line and the C line;
step 206, converting the test paper image to be identified into characteristic sequence information by using the preset CRNN model;
step 207, determining the region classification according to the color jump in the feature sequence information, wherein the region classification comprises a blank region, a left region of an identification line position and a right region of the identification line position;
step 208, determining the position of the identification line in the ovulation test paper image to be identified according to the region classification;
step 209, converting the color space of the identification line position into an LAB color space;
step 210, extracting corresponding image brightness values of a C line and a T line in the identification line position in an LAB color space, and obtaining a ratio of the image brightness value corresponding to the T line position to the image brightness value corresponding to the C line position;
step 211, determining an ovulation test paper detection result corresponding to the ratio according to the ratio and a preset ratio range;
step 212, determining the current physiological information of the user corresponding to the image to be identified according to the test paper detection result;
step 213, extracting a user identifier included in the image to be recognized;
step 214, searching historical detection information corresponding to the user identification in a database, wherein the historical detection information comprises a historical ovulation test paper detection result and historical physiological information;
step 215, determining a physiological cycle corresponding to the user, for example, an ovulation cycle corresponding to the user, according to the historical physiological information and the current physiological information;
step 216, determining the suggested time for the user to perform the next ovulation test paper detection according to the physiological cycle;
step 217, providing the test strip detection result, the current physiological information of the user and the suggested time to the user.
For example, historical physiological information determined by the ovulation test strip detection result corresponding to the user can be used for obtaining the ovulation cycle corresponding to the user, and an analysis suggestion is comprehensively given by combining the current physiological information obtained by the current test strip detection result, namely the suggestion time when the ovulation test strip detection should be carried out next time. In addition, the extracted ovulation test paper image to be identified can be displayed on the intelligent terminal, so that the user can conveniently check the ovulation test paper image.
According to the embodiment of the invention, the intelligent terminal enables rapid detection of the ovulation test paper, making ovulation detection simpler and more convenient and saving time and cost: the user can complete the detection without leaving home and learn about her physical condition without any additional expensive instrument, which is convenient and fast, saves time and money, and better protects the user's privacy. In addition, the embodiment of the invention can read the test paper result quickly and accurately, turning the traditional qualitative test into a quantitative analysis, and provides the user with a suggested time for the next test so that the user can predict the optimal time for conception.
Of course, the embodiment of the invention is also suitable for result detection of other multi-line test paper, multi-T-line test paper and 1C-line test paper.
Correspondingly, fig. 3 is a schematic structural diagram of a test paper identification device according to an embodiment of the present invention. As shown in fig. 3, the apparatus is applied to a server, and the apparatus includes: the acquiring unit 31 is configured to acquire an image to be recognized, where the image to be recognized includes a test paper image to be recognized; a test paper image extraction unit 32, configured to extract the test paper image to be recognized from the image to be recognized by using an image recognition method based on a first deep learning; the category determining unit 33 is configured to identify, by using a preset category model, a category to which a mark in the test paper image to be identified belongs, and extract detection line information corresponding to the category; a position determining unit 34, configured to determine a recognition line position from the test strip image to be recognized by using an image recognition method based on a second deep learning; and a result determining unit 35, configured to determine a test strip detection result according to the image brightness value corresponding to the identification line position and the detection line information by using a regular image processing method.
Further, the test paper image extraction unit is further configured to: acquiring an edge gray-scale image corresponding to the test paper image to be identified from the image to be identified by using a preset HED model; and mapping the edge gray-scale image in the image to be identified by utilizing an OpenCV technology to obtain the test paper image to be identified.
Further, the preset HED model is established in the following manner: preprocessing the image sample to obtain an image training sample which accords with the specified pixel size; marking the edge of the test paper in the image training sample to obtain an image marked sample; and training a pre-trained HED model by using the image training sample and the corresponding image marking sample, and determining the pre-trained HED model as the preset HED model until the loss function value of the pre-trained HED model is stable.
Further, the preset category model is obtained by: acquiring a test paper image training set, classifying test paper images in the test paper image training set according to the types of marks, and storing the types and detection line information corresponding to the types; and training a pre-training CNN model by using the test paper image training set and the category corresponding to each test paper image until the loss function value of the pre-training CNN model is stable, and determining the pre-training CNN model as the preset category model.
Further, the position determination unit is further configured to: obtaining a region classification in the test paper image to be recognized by using a preset CRNN model, wherein the region classification comprises a blank region, a left region of a recognition line position and a right region of the recognition line position; and determining the position of the identification line in the test paper image to be identified according to the region classification.
Further, the position determination unit is further configured to: converting the test paper image to be identified into characteristic sequence information by using the preset CRNN model; and determining the region classification according to the color jump in the characteristic sequence information.
Further, the CRNN model is built in the following manner: preprocessing a test paper image sample to obtain a test paper image training sample which accords with a set pixel size; classifying and marking the regions in the test paper image training sample to obtain a test paper image marking sample; and training a pre-trained CRNN model by using the test paper image training sample and a corresponding test paper image marking sample, and determining the pre-trained CRNN model as the preset CRNN model until the loss function value of the pre-trained CRNN model tends to be stable.
Further, the result determination unit is further configured to: converting the color space of the identification line position into an LAB color space; extracting corresponding image brightness values of the identification line positions in an LAB color space; and determining the test result of the test paper according to the comparison result of the image brightness value and the detection line information.
Further, as shown in fig. 4, the apparatus further includes: a physiological information determining unit 41, configured to determine, according to the test paper detection result, current physiological information of the user corresponding to the image to be identified; a processing unit 42, configured to provide the test strip detection result and the current physiological information of the user to the user.
Further, the image to be recognized further includes a user identifier, as shown in fig. 5, the apparatus further includes: an identifier extracting unit 51, configured to extract a user identifier included in the image to be recognized; the searching unit 52 is configured to search historical detection information corresponding to the user identifier in a database, where the historical detection information includes a historical test paper detection result and historical physiological information; a cycle determining unit 53, configured to determine a physiological cycle corresponding to the user according to the historical physiological information and the current physiological information; the processing unit is further used for determining the suggested time for the user to perform the next test paper detection according to the physiological cycle and providing the suggested time for the user.
According to the embodiment of the invention, a deep-learning-based image recognition method is combined with a regular image processing method: the test paper image to be recognized is extracted from the image to be recognized, so the test paper detection result is not affected by a complex background; because every brand of test paper carries a mark, the category of the mark on the test paper is recognized with the preset category model and the detection line information corresponding to that category is obtained; and after the identification line position is determined, the test paper detection result is determined by comparing the image brightness value corresponding to the identification line position with the detection line information.
The operation process of the device refers to the implementation process of the test paper identification method.
Accordingly, the embodiment of the present invention also provides a machine-readable storage medium, which stores instructions for causing a machine to execute the test strip identification method as described above.
Correspondingly, fig. 6 is a schematic structural diagram of an apparatus provided in an embodiment of the present invention, and as shown in fig. 6, the apparatus 60 includes at least one processor 61, and at least one memory 62 and a bus 63 connected to the processor; the processor and the memory complete mutual communication through the bus; the processor is configured to call the program instructions in the memory to execute the test strip identification method according to the above embodiment.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a device includes one or more processors (CPUs), memory, and a bus. The device may also include input/output interfaces, network interfaces, and the like.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip. The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A test paper identification method is characterized by comprising the following steps:
acquiring an image to be identified, wherein the image to be identified comprises a test paper image to be identified;
extracting the test paper image to be recognized from the image to be recognized by using an image recognition method based on first deep learning;
identifying the category to which the mark in the test paper image to be identified belongs by using a preset category model, and extracting detection line information corresponding to the category;
determining the position of a recognition line from the test paper image to be recognized by using an image recognition method based on second deep learning;
and determining a test paper detection result according to the image brightness value corresponding to the identification line position and the detection line information by using a regular image processing method.
2. A test strip recognition method according to claim 1, wherein the extracting of the test strip image to be recognized from the image to be recognized by using an image recognition method based on a first deep learning method includes:
acquiring an edge gray-scale image corresponding to the test paper image to be identified from the image to be identified by using the preset HED model;
and mapping the edge gray-scale image in the image to be identified by utilizing an OpenCV technology to obtain the test paper image to be identified.
3. The test paper identification method according to claim 2, wherein the preset HED model is established by:
preprocessing the image sample to obtain an image training sample which accords with the specified pixel size;
marking the edge of the test paper in the image training sample to obtain an image marked sample;
and training a pre-trained HED model by using the image training sample and the corresponding image marking sample, and determining the pre-trained HED model as the preset HED model until the loss function value of the pre-trained HED model is stable.
4. The test paper identification method according to claim 1, wherein the preset classification model is obtained by:
acquiring a test paper image training set, classifying test paper images in the test paper image training set according to the types of marks, and storing the types and detection line information corresponding to the types;
and training a pre-training CNN model by using the test paper image training set and the category corresponding to each test paper image until the loss function value of the pre-training CNN model is stable, and determining the pre-training CNN model as the preset category model.
5. The test paper identification method according to claim 1, wherein determining the identification line position in the test paper image to be identified by using the second deep-learning-based image recognition method comprises:
obtaining a region classification of the test paper image to be identified by using a preset CRNN model, wherein the region classification comprises a blank region, a region to the left of the identification line position and a region to the right of the identification line position;
and determining the identification line position in the test paper image to be identified according to the region classification.
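One way to read the position step of claim 5: once every image column carries a region label, the identification line position is the boundary where the labels switch from the left region to the right region. A small sketch, with the label names assumed:

    def line_position_from_regions(column_labels):
        # column_labels: one label per image column, e.g.
        # ["blank", "blank", "left", "left", "right", "right"]
        for x in range(1, len(column_labels)):
            if column_labels[x - 1] == "left" and column_labels[x] == "right":
                return x
        return None   # no left/right boundary found

    # line_position_from_regions(["blank"] * 3 + ["left"] * 5 + ["right"] * 4)  -> 8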
6. The test paper identification method according to claim 5, wherein obtaining the region classification of the test paper image to be identified by using the preset CRNN model comprises:
converting the test paper image to be identified into feature sequence information by using the preset CRNN model;
and determining the region classification according to the color transitions in the feature sequence information.
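Claim 6 derives the region classification from color transitions in a feature sequence. The hand-crafted sketch below uses per-column mean brightness as a stand-in for the CRNN's learned features and flags the columns where the brightness jumps sharply; the jump threshold is an assumption.

    import numpy as np

    def color_transitions(strip_gray, jump=20.0):
        # Per-column mean brightness acts as a simple feature sequence.
        sequence = strip_gray.mean(axis=0).astype(np.float32)
        # Columns whose brightness differs sharply from their left neighbour
        # mark the transitions separating the blank, left and right regions.
        diffs = np.abs(np.diff(sequence))
        return np.flatnonzero(diffs > jump) + 1   # column indices just after each jump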
7. The test paper identification method according to claim 5, wherein the preset CRNN model is established by:
preprocessing a test paper image sample to obtain a test paper image training sample conforming to a set pixel size;
classifying and annotating the regions in the test paper image training sample to obtain a test paper image annotation sample;
and training a pre-trained CRNN model with the test paper image training sample and the corresponding test paper image annotation sample until the loss function value of the pre-trained CRNN model stabilizes, and determining the trained CRNN model as the preset CRNN model.
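For claim 7, the region annotation of a training strip can be encoded as one target label per image column before the CRNN is trained; the class ids and the helper below are hypothetical, added only to make the annotation format concrete.

    import numpy as np

    REGION_IDS = {"blank": 0, "left": 1, "right": 2}   # illustrative class ids

    def encode_region_annotation(width, blank_until, line_x):
        # Columns up to blank_until are blank, columns before line_x lie to the
        # left of the identification line, the remaining columns lie to its right.
        labels = np.full(width, REGION_IDS["right"], dtype=np.int64)
        labels[:line_x] = REGION_IDS["left"]
        labels[:blank_until] = REGION_IDS["blank"]
        return labels

    # encode_region_annotation(width=120, blank_until=10, line_x=60)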
8. A test paper identification apparatus, comprising:
an acquisition unit, configured to acquire an image to be identified, wherein the image to be identified comprises a test paper image to be identified;
a test paper image extraction unit, configured to extract the test paper image to be identified from the image to be identified by using a first deep-learning-based image recognition method;
a category determination unit, configured to identify, by using a preset classification model, the category to which the mark in the test paper image to be identified belongs, and to extract detection line information corresponding to the category;
a position determination unit, configured to determine an identification line position in the test paper image to be identified by using a second deep-learning-based image recognition method;
and a result determination unit, configured to determine a test paper detection result according to the image brightness value at the identification line position and the detection line information, by using a conventional image processing method.
9. A machine-readable storage medium having stored thereon instructions for causing a machine to perform the test paper identification method according to any one of claims 1 to 7.
10. An apparatus, comprising at least one processor and at least one memory connected to the processor through a bus; the processor and the memory communicate with each other through the bus; and the processor is configured to call program instructions in the memory to perform the test paper identification method according to any one of claims 1 to 7.
CN202010301577.1A 2020-04-16 2020-04-16 Test paper identification method, device, storage medium and equipment Active CN113537203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010301577.1A CN113537203B (en) 2020-04-16 2020-04-16 Test paper identification method, device, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010301577.1A CN113537203B (en) 2020-04-16 2020-04-16 Test paper identification method, device, storage medium and equipment

Publications (2)

Publication Number Publication Date
CN113537203A (en) 2021-10-22
CN113537203B (en) 2025-03-11

Family

ID=78120244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010301577.1A Active CN113537203B (en) 2020-04-16 2020-04-16 Test paper identification method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN113537203B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106405118A (en) * 2016-09-27 2017-02-15 北京爱康泰科技有限责任公司 Ovulation test paper detection method and system
CN106680478A (en) * 2017-02-24 2017-05-17 成都微瑞生物科技有限公司 Gold labeled detection system and method based on cloud platform
US20190102605A1 (en) * 2017-09-29 2019-04-04 Baidu Online Network Technology (Beijing) Co.,Ltd. Method and apparatus for generating information
CA3106991A1 (en) * 2018-08-02 2020-02-06 Balanced Media Technology, LLC Task completion using a blockchain network
CN109345609A (en) * 2018-08-31 2019-02-15 天津大学 A method for mural image denoising and line drawing generation based on convolutional neural network
CN109886274A (en) * 2019-03-25 2019-06-14 山东浪潮云信息技术有限公司 Social security card identification method and system based on opencv and deep learning
CN110111369A (en) * 2019-05-08 2019-08-09 上海大学 A kind of dimension self-adaption sea-surface target tracking based on edge detection
CN110738204A (en) * 2019-09-18 2020-01-31 平安科技(深圳)有限公司 Method and device for positioning certificate areas

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
胡勇 (Hu Yong): "Research on Test Paper Recognition Technology Based on Image Processing" (基于图像处理的试纸识别技术研究), 《中国优秀硕士学位论文全文数据库》 (China Master's Theses Full-text Database), 15 January 2019 (2019-01-15), pages 6-98 *
邱晓欢; 吴啟超 (Qiu Xiaohuan; Wu Qichao): "A Train Ticket Station Name Recognition System Based on an Improved EAST Network and an Improved CRNN Network" (一种基于改进EAST网络和改进CRNN网络的火车票站名识别系统), 南方职业教育学刊, no. 06, 20 November 2019 (2019-11-20) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114911987A (en) * 2022-05-24 2022-08-16 无锡市第五人民医院 Autonomous analysis method and system for detection reagent strip
CN116593452A (en) * 2023-05-08 2023-08-15 深圳市计量质量检测研究院(国家高新技术计量站、国家数字电子产品质量监督检验中心) Food safety detection method, system and medium
CN116433671A (en) * 2023-06-14 2023-07-14 广州万孚健康科技有限公司 A colloidal gold detection method, system and storage medium based on image recognition
CN116433671B (en) * 2023-06-14 2023-08-25 广州万孚健康科技有限公司 Colloidal gold detection method, system and storage medium based on image recognition
CN116679053A (en) * 2023-06-14 2023-09-01 浙江工商大学 Colloidal gold immunochromatography test strip for detecting enramycin and preparation method and application thereof
CN116824236A (en) * 2023-06-15 2023-09-29 广州万孚健康科技有限公司 Methods, devices and media for identifying test results of polymorphic immunolabeling products
CN121071460A (en) * 2025-11-03 2025-12-05 广州万孚健康科技有限公司 Methods, systems, and storage media for HCG and LH test strips based on optical characteristics

Also Published As

Publication number Publication date
CN113537203B (en) 2025-03-11

Similar Documents

Publication Publication Date Title
CN113537203B (en) Test paper identification method, device, storage medium and equipment
CN112348787B (en) Training method of object defect detection model, object defect detection method and device
CN109165645B (en) Image processing method and device and related equipment
CN106203327B (en) Lung tumor recognition system and method based on convolutional neural network
CN119338827B (en) Surface detection method and system for precision fasteners
CN111814902A (en) Target detection model training method, target recognition method, device and medium
US12159405B2 (en) Method for detecting medical images, electronic device, and storage medium
CN111738114B (en) Vehicle target detection method based on accurate sampling of remote sensing images without anchor points
CN111738036B (en) Image processing methods, devices, equipment and storage media
CN111931751A (en) Deep learning training method, target object identification method, system and storage medium
CN111539456B (en) Target identification method and device
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN113435444B (en) Immunochromatography detection method, immunochromatography detection device, storage medium and computer equipment
CN106372624A (en) Human face recognition method and human face recognition system
CN114742145A (en) Performance test method, device and equipment of target detection model and storage medium
CN112232368B (en) Target recognition model training method, target recognition method and related devices thereof
CN110796145A (en) Multi-certificate segmentation association method based on intelligent decision and related equipment
CN110765963A (en) Vehicle brake detection method, device, equipment and computer readable storage medium
CN108664970A (en) A kind of fast target detection method, electronic equipment, storage medium and system
KR20230083421A (en) Method and apparatus for quarantine of imported ornamental fish through data preprocessing and deep neural network-based image detection and classification
CN116453091A (en) Lightweight traffic sign detection method, storage medium and system
CN117197097B (en) Power equipment component detection method based on infrared image
CN119229141A (en) A color recognition method for immunochromatographic test paper
CN108154199B (en) High-precision rapid single-class target detection method based on deep learning
CN105069475B (en) The image processing method of view-based access control model attention mechanism model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240522

Address after: Building 1, 4th Floor, Anji Technology Entrepreneurship Park, Dipu Street, Anji County, Huzhou City, Zhejiang Province, 313399

Applicant after: Zhejiang Xingjian Medical Union Technology Co.,Ltd.

Country or region after: China

Address before: 100084 room 1206, block B, learning and research mansion, Shuang Ching Road, Haidian District, Beijing.

Applicant before: Beijing Aikangtai Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant