CN118351531A - Image recognition method based on image sensor and hyperspectral imaging technology - Google Patents
Image recognition method based on image sensor and hyperspectral imaging technology
- Publication number
- CN118351531A (application number CN202311215888.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- hyperspectral
- biological sample
- result
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06V20/698—Matching; Classification
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/143—Sensing or illuminating at different wavelengths
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/147—Details of sensors, e.g. sensor lenses
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/58—Extraction of image or video features relating to hyperspectral data
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Vascular Medicine (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The application provides an image recognition method based on an image sensor and hyperspectral imaging technology, and relates to the field of computers. The method comprises the following steps: acquiring a hyperspectral image of a biological sample by a hyperspectral imaging scanner, wherein the biological sample comprises a plurality of objects; acquiring a color image of the biological sample by an image sensor; and identifying the plurality of objects in the biological sample based on the hyperspectral image and the color image, and generating an identification result, wherein the identification result is used for indicating the categories to which the plurality of objects belong, and the plurality of objects comprise at least one of the following: cells, bacteria, tissues, fungi, parasites, crystals, particles, and casts. By implementing the technical scheme provided by the application, the accuracy of biological sample identification can be improved.
Description
Technical Field
The application relates to the technical field of image processing, in particular to an image recognition method based on an image sensor and a hyperspectral imaging technology.
Background
With the development of artificial intelligence recognition technology and automated image feature extraction algorithms, image processing and recognition technology is widely applied in various fields. Introducing image processing technology into the traditional biomedical field to detect and identify biological smears can improve identification efficiency.
In the existing biological smear recognition technology, a color image of a biological smear is generally obtained and the objects in the smear are identified by manual microscopic examination, or the color image is input into a pre-trained classifier, which recognizes it and generates an identification result for the objects in the smear. However, some cells, such as intermediate erythroblasts in their late stage and late erythroblasts in their early stage, are often difficult to distinguish by manual microscopy alone or by machine learning on color pictures alone, resulting in misclassification. In addition, because of differences in smear preparation, staining conditions, image acquisition equipment, and the like, cell images obtained from different sources are generally complex: the color and texture features of targets and backgrounds may change, features may be missing, and features may be confounded, so recognition based on color images alone easily produces biased results. How to improve the accuracy of classifying objects in biological smears is therefore a problem to be solved.
Disclosure of Invention
The image recognition method based on the image sensor and the hyperspectral imaging technology provided by the embodiment of the application can improve the accuracy of classifying objects in biological smears.
In a first aspect, an embodiment of the present application provides an image recognition method based on an image sensor and hyperspectral imaging, including: acquiring a hyperspectral image of a biological sample by a hyperspectral imaging scanner, wherein the biological sample comprises a plurality of objects; acquiring a color image of the biological sample by an image sensor; and identifying the plurality of objects in the biological sample based on the hyperspectral image and the color image, and generating an identification result, wherein the identification result is used for indicating the categories to which the plurality of objects belong, and the plurality of objects comprise at least one of the following: cells, bacteria, tissues, fungi, parasites, crystals, particles, and casts.
In general, different objects in a biological sample have different structures, and therefore different light transmission and light absorption at different wavelengths; as a result, the peak positions, peak intensities, and peak shapes of their spectral curves differ. That is, different objects have different spectral curves, and the same object has different spectral curves at different stages. By identifying the biological sample using both the hyperspectral image and the color image, the embodiment of the application can rapidly distinguish cells that are otherwise difficult to tell apart. In addition, the hyperspectral image retains the spectral characteristics of each object in the biological sample, so when color, texture, or other features in the color image change, go missing, or become confounded, identification can still be performed based on the hyperspectral image, improving both the accuracy and the efficiency of identifying objects in the biological sample.
In one possible implementation, identifying the plurality of objects in the biological sample based on the hyperspectral image and the color image and generating the identification result includes: extracting first feature data of each object in the plurality of objects presented by the hyperspectral image, and generating a first feature data set, wherein the first feature data is used for indicating the spectral features of each object; extracting second feature data of each of the plurality of objects presented by the color image, and generating a second feature data set, the second feature data comprising at least one of: cell morphology, cell size, nuclear morphology, cytoplasmic color, nuclear-cytoplasmic ratio, cell arrangement, nucleolus, cell chromatin, bacterial morphology, bacterial color, and crystal shape and color; and identifying the plurality of objects in the biological sample based on the first feature data set and the second feature data set, and generating the identification result.
In one possible implementation, identifying the plurality of objects in the biological sample based on the first feature data set and the second feature data set and generating the identification result includes: comparing the second feature data set with feature data in a preset color image feature database, and determining whether an unidentified object exists based on the comparison result; and when an unidentified object exists, determining the class of the unidentified object based on the position of the unidentified object in the biological sample, the first feature data of the unidentified object, and a preset correspondence between first feature data and classes.
In one possible implementation, identifying a plurality of objects in a biological sample based on a hyperspectral image and a color image, generating an identification result includes: the hyperspectral image and the color image are input into a pre-trained neural network, and a recognition result is generated, wherein the recognition result comprises the position label of each object in the color image and the category to which each object belongs.
In one possible implementation, identifying a plurality of objects in a biological sample based on a hyperspectral image and a color image, generating an identification result includes: inputting the hyperspectral image into a first neural network to generate a first recognition result; inputting the color image into a second neural network to generate a second recognition result; and generating a recognition result based on a first preset weight of the first recognition result and a second preset weight of the second recognition result, wherein the recognition result comprises a position label of each object in the color image and the category to which each object belongs.
In one possible implementation, the image recognition method further includes: training a first neural network to be trained by adopting a first training sample image set, and generating a first neural network based on a training result, wherein the first training sample image set comprises hyperspectral images and labeling information of each biological sample in a plurality of biological samples, and the labeling information is used for indicating the category of an object in each biological sample; training a second neural network to be trained by adopting a second training sample image set, and generating a second neural network based on a training result, wherein the second training sample image set comprises a color image and labeling information of each biological sample in a plurality of biological samples, and the labeling information is used for indicating the category of an object in each biological sample; inputting each hyperspectral test sample image in the test sample image set into a first neural network to obtain a first output result for indicating the category to which the object presented in the hyperspectral test sample image belongs; inputting each color test sample image in the test sample image set to a second neural network to obtain a second output result for indicating the category to which the object presented in the image belongs, wherein the test sample image set comprises a hyperspectral image and a color image of each biological sample in the plurality of biological samples; detecting a first difference between the first output result and a preset result, detecting a second difference between the second output result and the preset result, wherein the preset result indicates the category to which the object in each biological sample belongs; based on the first difference and the second difference, a first preset weight is set for the output result of the first neural network, and a second preset weight is set for the output result of the second neural network.
In a second aspect, an embodiment of the present application provides an image recognition apparatus including: a hyperspectral image acquisition module, used for acquiring hyperspectral images of a biological sample through a hyperspectral imaging scanner, wherein the biological sample comprises a plurality of objects; a color image acquisition module, used for acquiring a color image of the biological sample through an image sensor; and an identification module, used for identifying the plurality of objects in the biological sample based on the hyperspectral image and the color image and generating an identification result, wherein the identification result is used for indicating the categories to which the plurality of objects belong, and the plurality of objects comprise at least one of the following: cells, bacteria, tissues, fungi, parasites, crystals, particles, and casts.
In one possible implementation, the identification module includes a first generation sub-module, a second generation sub-module, and an identification sub-module. The first generation sub-module is used for extracting first feature data of each object in the plurality of objects presented by the hyperspectral image and generating a first feature data set, wherein the first feature data is used for indicating the spectral features of each object. The second generation sub-module is used for extracting second feature data of each of the plurality of objects presented by the color image and generating a second feature data set, the second feature data comprising at least one of: cell morphology, cell size, nuclear morphology, cytoplasmic color, nuclear-cytoplasmic ratio, cell arrangement, nucleolus, cell chromatin, bacterial morphology, bacterial color, and crystal shape and color. The identification sub-module is used for identifying the plurality of objects in the biological sample based on the first feature data set and the second feature data set and generating an identification result.
In one possible implementation, the identification sub-module is specifically configured to: compare the second feature data set with feature data in a preset color image feature database, and determine whether an unidentified object exists based on the comparison result; and when an unidentified object exists, determine the class of the unidentified object based on the position of the unidentified object in the biological sample, the first feature data of the unidentified object, and a preset correspondence between first feature data and classes.
In one possible implementation, the identification module includes: a generation sub-module, used for inputting the hyperspectral image and the color image into the pre-trained neural network and generating a recognition result, wherein the recognition result comprises the position label of each object in the color image and the category to which each object belongs.
In one possible implementation, the identification module includes: a first generation sub-module, used for inputting the hyperspectral image into the first neural network and generating a first recognition result; a second generation sub-module, used for inputting the color image into the second neural network and generating a second recognition result; and a recognition sub-module, used for generating a recognition result based on the first preset weight of the first recognition result and the second preset weight of the second recognition result, wherein the recognition result comprises a position label of each object in the color image and the category to which each object belongs.
In one possible implementation, the image recognition apparatus further includes: the first neural network generation module is used for training a first neural network to be trained by adopting a first training sample image set, and generating a first neural network based on a training result, wherein the first training sample image set comprises hyperspectral images and labeling information of each biological sample in a plurality of biological samples, and the labeling information is used for indicating the category of an object in each biological sample; the second neural network generation module is used for training a second neural network to be trained by adopting a second training sample image set, and generating a second neural network based on a training result, wherein the second training sample image set comprises a color image and labeling information of each biological sample in a plurality of biological samples, and the labeling information is used for indicating the category of an object in each biological sample; the first test module is used for inputting each hyperspectral test sample image in the test sample image set into the first neural network to obtain a first output result for indicating the category to which the object presented in the hyperspectral test sample image belongs; the second testing module is used for inputting each color testing sample image in the testing sample image set to the second neural network to obtain a second output result for indicating the category of the object presented in the image, wherein the testing sample image set comprises a hyperspectral image and a color image of each biological sample in the plurality of biological samples; the detection module is used for detecting a first difference between a first output result and a preset result, detecting a second difference between a second output result and the preset result, and the preset result indicates the category of the object in each biological sample; the weight setting module is used for setting a first weight for the output result of the first neural network and setting a second weight for the output result of the second neural network based on the first difference and the second difference.
In a third aspect, embodiments of the present application provide a dual-channel imaging scanner comprising an optical microscope assembly, a dual-channel conversion interface, a hyperspectral imaging scanner, and an image sensor. The hyperspectral imaging scanner is connected to the optical microscope assembly through a first conversion interface of the dual-channel conversion interface; the image sensor is connected to the optical microscope assembly through a second conversion interface of the dual-channel conversion interface. The hyperspectral imaging scanner acquires hyperspectral images of a biological sample through the first conversion interface and the optical microscope assembly; the image sensor acquires color images of the biological sample through the second conversion interface and the optical microscope assembly.
In one possible implementation, the dual-channel imaging scanner further includes an eyepiece coupled to the optical microscope assembly.
In the dual-channel imaging scanner provided by the embodiment of the application, the hyperspectral imaging scanner and the image sensor are both arranged inside the scanner, so that they share the same set of microscope components and acquire images of the biological sample at the same time. The hyperspectral image acquired by the hyperspectral imaging scanner is therefore consistent in content with the color image acquired by the image sensor, which improves the accuracy of image identification by the subsequent electronic equipment.
In a fourth aspect, embodiments of the present application provide an image recognition system comprising an electronic device and a dual-channel imaging scanner as in the third aspect; wherein the electronic device comprises a processor, a memory, and an interface; the memory is used for storing instructions; the interface is used for connecting with the dual-channel imaging scanner; and the processor is used for executing the instructions stored in the memory to cause the electronic device to perform the method as described in the first aspect.
In a fifth aspect, embodiments of the present application provide a readable storage medium comprising computer instructions which, when run on a computer, cause the computer to perform the method according to the first aspect.
It can be appreciated that the technical solutions of the second to fifth aspects of the present application are consistent with the technical solutions of the first aspect of the present application, and the beneficial effects obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
FIG. 1 is a schematic diagram of an image recognition system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a dual channel imaging scanner according to an embodiment of the present application;
FIG. 3 is a flowchart of an image recognition method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments.
In describing embodiments of the present application, words such as "for example" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "such as" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete fashion.
In the description of embodiments of the application, the term "plurality" means two or more. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Referring to fig. 1, fig. 1 is a schematic diagram of the architecture of an image recognition system 100 according to an embodiment of the application. As shown in fig. 1, the image recognition system 100 includes a dual-channel imaging scanner 11 and an electronic device 12. The dual-channel imaging scanner 11 is connected to the electronic device 12 and is used for acquiring hyperspectral images and color images of the objects presented in a biological sample and transmitting the acquired hyperspectral images and color images to the electronic device 12; the electronic device 12 recognizes the objects presented in the biological sample based on the hyperspectral image and the color image, thereby generating a recognition result.
Referring to fig. 2, fig. 2 is a schematic diagram of the dual-channel imaging scanner 11 of the image recognition system 100 shown in fig. 1, according to an embodiment of the present application. As shown in fig. 2, the dual-channel imaging scanner 11 includes an optical microscope assembly 110, a dual-channel conversion interface 111, a hyperspectral imaging scanner 112, and an image sensor 113. The dual-channel conversion interface 111 includes a first conversion interface A1 and a second conversion interface A2. The hyperspectral imaging scanner 112 is connected to the optical microscope assembly 110 via the first conversion interface A1, and the image sensor 113 is connected to the optical microscope assembly 110 via the second conversion interface A2. In the embodiment of the present application, the optical microscope assembly 110 includes a beam splitter B1 and an objective lens B2, where the beam splitter B1 is disposed at the intersection of the first conversion interface A1 and the second conversion interface A2, as shown in fig. 2. As can be seen from fig. 2, the objective lens B2, the beam splitter B1, and the first conversion interface A1 form one optical transmission path S1, along which the hyperspectral imaging scanner 112 acquires a hyperspectral image of the biological sample; the objective lens B2, the beam splitter B1, and the second conversion interface A2 form another optical transmission path S2, along which the image sensor 113 acquires a color image of the biological sample. That is, the hyperspectral imaging scanner 112 and the image sensor 113 share the same optical microscope assembly 110, and images of different light intensities are formed on the different optical transmission paths by adjusting the angle of the beam splitter B1, coating antireflection films, and the like.
In one possible implementation of an embodiment of the present application, the dual-channel imaging scanner 11 may also include an eyepiece coupled to the optical microscope assembly 110, so that a worker can observe the biological sample through the eyepiece. Furthermore, the dual-channel imaging scanner 11 may include more or fewer components, such as a stage for holding the biological sample, an automatic objective-lens switcher, an automatic oil-dripping device, and the like.
In the image recognition system 100 provided by the embodiment of the application, the hyperspectral imaging scanner 112 and the image sensor 113 are arranged in the dual-channel imaging scanner 11 so that they share the same set of microscope components and acquire images of the biological sample at the same time. The hyperspectral image acquired by the hyperspectral imaging scanner 112 is therefore consistent in content with the color image acquired by the image sensor 113, which improves the accuracy of subsequent image recognition by the electronic device. In addition, because the electronic device performs image recognition based on both the hyperspectral image and the color image, recognition accuracy and speed can be improved, detection efficiency increased, and labor time cost reduced.
Referring to fig. 3, fig. 3 is a flowchart of an image recognition method 300 based on an image sensor and hyperspectral imaging according to an embodiment of the present application. Based on the image recognition system 100 shown in fig. 1 and the dual-channel imaging scanner 11 shown in fig. 2, the image recognition method 300 is applied to the electronic device 12 shown in fig. 1. The flow of the image recognition method 300 includes the following steps:
In step 301, a hyperspectral image of a biological sample is acquired by a hyperspectral imaging scanner.
In this step, a hyperspectral image of the biological sample may be acquired by the hyperspectral imaging scanner 112 shown in fig. 2. For the manner in which the hyperspectral imaging scanner 112 acquires the hyperspectral image of the biological sample, refer to the description of the hyperspectral imaging scanner 112 in fig. 2, which is not repeated here.
In embodiments of the present application, the biological sample may include, but is not limited to, one of the following: blood smears, urine smears, smears of cells shed from biological tissue, human secretion smears, and the like. A biological sample may thus include a plurality of objects, the plurality of objects including at least one of: cells, bacteria, tissues, fungi, parasites, crystals, particles, and casts. The cells may include, for example, but are not limited to, one of the following: blood cells (e.g., erythrocytes and leukocytes), tumor cells, and urine cells (e.g., erythrocytes, leukocytes, and epithelial cells). The bacteria may include, for example, but are not limited to, at least one of the following: gonococcus, Escherichia coli, Enterococcus faecium, Staphylococcus aureus, Mycobacterium tuberculosis, Proteus, pneumococcus, Klebsiella pneumoniae, and the like. The tissues may include, for example, but are not limited to, one of the following: glioma, thymoma, pituitary adenoma, poorly differentiated gastric adenocarcinoma, cervical cancer, breast cancer, skin melanoma, thyroid nodule, and the like. The fungi may include, for example, but are not limited to, at least one of the following: Fusarium, yeast-like fungi, filamentous fungi, Pneumocystis carinii, dimorphic fungi, and the like. The parasites may include, for example, but are not limited to, at least one of: hookworms, roundworms, schistosomes, tapeworms, scabies mites, and the like. The crystals may include, for example, but are not limited to, at least one of: calcium oxalate crystals, uric acid crystals, ammonium urate crystals, magnesium ammonium phosphate crystals, calcium carbonate crystals, cholesterol crystals, bilirubin crystals, cystine crystals, leucine crystals, tyrosine crystals, sulfonamide crystals, and the like. The particles may include, for example, but are not limited to, at least one of: amorphous urate particles, amorphous phosphate particles, and the like. The casts may include, for example, but are not limited to, at least one of the following: red blood cell casts, white blood cell casts, renal tubular epithelial cell casts, granular casts, fatty casts, protein casts, mixed cell casts, nested casts, and the like.
In an embodiment of the present application, a hyperspectral image includes spectral feature data for each object in the biological sample; the spectral feature data may be, for example, a spectral curve. In general, different objects in a biological sample have different structures, and therefore different light transmission and light absorption at different wavelengths; thus, the peak positions, peak intensities, and peak shapes of the spectral curves of different objects differ. That is, different objects have different spectral curves, and the same object has different spectral curves at different developmental stages (for example, erythrocytes at each stage differ, the stages including primary erythroblasts, early erythroblasts, intermediate erythroblasts, late erythroblasts, reticulocytes, and mature erythrocytes).
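For illustration only, the following sketch shows one way such per-object spectral curves could be extracted and compared; it assumes the hyperspectral image is available as an H×W×B NumPy array and that a per-object pixel mask has already been obtained by segmentation (the array layout, the mask, and the spectral-angle distance are illustrative assumptions, not details fixed by this application).

```python
import numpy as np

def object_spectral_curve(cube: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mean spectral curve of one object.

    cube: hyperspectral image of shape (H, W, B), one value per pixel
          per wavelength band.
    mask: boolean array of shape (H, W), True inside the object.
    """
    # Average over all pixels of the object; the resulting length-B curve
    # is the per-object spectral feature whose peak position, intensity,
    # and shape differ between classes and between stages of one class.
    return cube[mask].mean(axis=0)

def curve_distance(curve_a: np.ndarray, curve_b: np.ndarray) -> float:
    # Spectral angle between two curves (one common hyperspectral
    # similarity measure); a small angle means similar spectra.
    cos = np.dot(curve_a, curve_b) / (
        np.linalg.norm(curve_a) * np.linalg.norm(curve_b) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```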
In step 302, a color image of the biological sample is acquired by an image sensor.
In this step, a color image of the biological sample may be acquired by the image sensor 113 shown in fig. 2. For the manner in which the image sensor 113 acquires the color image of the biological sample, refer to the description of the image sensor 113 in fig. 2, which is not repeated here.
It should be noted that step 301 and step 302 may be performed simultaneously or sequentially; the embodiment of the present application does not specifically limit the execution order of step 301 and step 302.
In step 303, the plurality of objects in the biological sample are identified based on the hyperspectral image and the color image, and an identification result is generated.
In the embodiment of the application, the identification result is used for indicating the categories to which the plurality of objects in the biological sample belong. The electronic device 12 shown in fig. 1 may recognize the hyperspectral image and the color image in a variety of ways.
In a first possible implementation, first feature data of each object in the plurality of objects presented by the hyperspectral image may be extracted first, and a first feature data set generated, wherein the first feature data is used for indicating the spectral features of each object. Second feature data of each of the plurality of objects presented by the color image is then extracted, generating a second feature data set, the second feature data comprising at least one of: cell morphology, cell size, nuclear morphology, cytoplasmic color, nuclear-cytoplasmic ratio, cell arrangement, nucleolus, cell chromatin, bacterial morphology, bacterial color, and crystal shape and color. Finally, the plurality of objects in the biological sample are identified based on the first feature data set and the second feature data set, and an identification result is generated.
In an embodiment of the present application, the electronic device 12 may store in advance a first mapping relationship between categories and features (color image features and spectral features). The electronic device 12 may first extract the spectral feature of each of the plurality of objects from the hyperspectral image, and then extract the features of each object (at least one of cell morphology, cell size, nuclear morphology, cytoplasmic color, nuclear-cytoplasmic ratio, cell arrangement, nucleolus, cell chromatin, bacterial morphology, bacterial color, and crystal shape and color) from the color image. Because the hyperspectral image and the color image are acquired along the same optical path through the same set of microscope components, each object in the biological sample appears at the same position in both images. The electronic device 12 may therefore group together the feature data of objects presented at the same position in the two images, so that each of the plurality of objects has both color image feature data and spectral curve data. Finally, the electronic device 12 may identify each object from its color image feature data and spectral curve data using the first mapping relationship, and generate the identification result corresponding to each object.
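As a minimal sketch of this position-based grouping (the dictionary layouts, centroid keys, and tolerance below are illustrative assumptions, not details fixed by this application):

```python
from dataclasses import dataclass

@dataclass
class ObjectFeatures:
    position: tuple   # (row, col) centroid, shared by both images
    spectral: object  # first feature data: the spectral curve
    color: dict       # second feature data: morphology, size, colors, ...

def merge_by_position(spectral_feats: dict, color_feats: dict, tol: int = 5):
    """Pair first (hyperspectral) and second (color) feature data that
    describe the same object, relying on the shared optical path placing
    each object at the same coordinates in both images.

    spectral_feats: {(row, col): spectral curve}
    color_feats:    {(row, col): color-image feature dict}
    """
    merged = []
    for pos, curve in spectral_feats.items():
        if not color_feats:
            break
        # Nearest color-image centroid; within tolerance = same object.
        best = min(color_feats,
                   key=lambda p: (p[0] - pos[0]) ** 2 + (p[1] - pos[1]) ** 2)
        if (best[0] - pos[0]) ** 2 + (best[1] - pos[1]) ** 2 <= tol ** 2:
            merged.append(ObjectFeatures(pos, curve, color_feats[best]))
    return merged
```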
Further, in one possible implementation, identifying the plurality of objects in the biological sample based on the first feature data set and the second feature data set and generating the identification result includes: comparing the second feature data set with feature data in a preset color image feature database to determine whether an unidentified object exists; and when an unidentified object exists, determining the class of the unidentified object based on the position of the unidentified object in the biological sample, the first feature data of the unidentified object, and a preset correspondence between first feature data and classes. In this implementation, each object may first be identified based on the color image feature data, with the spectral feature of each object serving as a supplement: when an object cannot be identified from its color image feature data, its spectral feature is further used for identification. More specifically, the color image feature database is organized as a multi-level directory. For example, cells form a first-level entry; erythrocytes and leukocytes form second-level entries; primary, early, intermediate, and late erythroblasts, reticulocytes, and mature erythrocytes form third-level entries under erythrocytes, while neutrophils, eosinophils, basophils, monocytes, and lymphocytes form third-level entries under leukocytes. The electronic device 12 may detect whether the class to which each object has been assigned is a class of the last-level directory in the color image feature database; if not, the object is determined to be an unidentified object and is further identified using its spectral feature.
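A minimal sketch of this color-first, spectral-fallback flow is given below; `color_db` and `spectral_db` are hypothetical helper objects standing in for the preset color image feature database and the preset first-feature-data-to-class correspondence, and `curve_distance` is the spectral-angle helper sketched earlier.

```python
def classify(obj, color_db, spectral_db):
    """Color-first classification with a spectral fallback.

    obj has .color (second feature data), .spectral (first feature data),
    and .position; color_db and spectral_db are hypothetical stand-ins
    for the preset databases described above.
    """
    # Walk the multi-level directory using the color-image features;
    # the match may stop above a leaf (e.g. at 'leukocyte').
    category = color_db.match(obj.color)
    if not color_db.is_leaf(category):
        # Unidentified object: fall back to the preset correspondence
        # between first feature data and classes, narrowed by the
        # object's position in the biological sample.
        candidates = spectral_db.classes_at(obj.position, parent=category)
        category = min(
            candidates,
            key=lambda c: curve_distance(obj.spectral, spectral_db.curve(c)))
    return category
```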
In conventional technology, identification and classification of objects in biological samples are usually realized by manual microscopic examination or by machine learning on color pictures. However, some cells, such as intermediate erythroblasts in their late stage and late erythroblasts in their early stage, are often difficult to distinguish by manual microscopy alone or by machine learning on color pictures alone, resulting in misclassification. Between these two stages, the nuclear-cytoplasmic ratio, cytoplasmic color, and chromatin differ, and the spectral curves of their hyperspectral images differ accordingly, so the two can be rapidly distinguished by using the hyperspectral image together with the color image, which improves the accuracy and efficiency of identifying objects in the biological sample.
In a second possible implementation, a machine learning method may be used to perform identification based on the hyperspectral image and the color image and generate an identification result. Identifying the plurality of objects using machine learning may in turn take either of the following two forms.
Mode one: the hyperspectral image and the color image are input together into a pre-trained neural network to generate a recognition result. The recognition result includes the position label of each object in the color image and the category to which each object belongs; in one possible implementation, it may also include the probability that each object belongs to that category. The neural network of mode one is pre-trained on training samples. The training sample set comprises the hyperspectral image and color image corresponding to each biological sample in the plurality of biological samples, and labeling information for each image, wherein the labeling information is used for indicating the categories of the objects presented by the hyperspectral image and the color image. The embodiment of the application also comprises a training step for the neural network, which specifically includes: first, inputting the hyperspectral image and color image corresponding to each biological sample in the training sample set together into an initial neural network to obtain an output result, the output result indicating the categories of the plurality of objects in the biological sample; then, constructing a loss function based on the deviation between the output result and the labeling information, which may include, for example, but is not limited to, a mean absolute error (MAE) loss function or a mean square error (MSE) loss function. The loss function involves the weight coefficients of each layer of the initial neural network. Based on the constructed loss function, the weight coefficient values of each layer of the network are iteratively adjusted using a back-propagation algorithm and a gradient descent algorithm until the error between the output result and the labeling information is smaller than or equal to a preset threshold, or the number of iterations reaches a preset threshold, at which point the weight coefficient values of each layer are stored and neural network training is complete. Gradient descent algorithms may include, but are not limited to, SGD, Adam, and the like. When back propagation is performed based on the preset loss function, the gradient of the loss function with respect to the weight coefficients of each layer of the network can be calculated using the chain rule.
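The following is a minimal training-loop sketch of mode one. PyTorch is an assumption of this sketch (the application names only MAE/MSE losses, back propagation, and SGD/Adam), as are the loader format and the joint network's two-input signature; labels are assumed to be one-hot float targets so that MSE applies directly.

```python
import torch
import torch.nn as nn

def train_joint(net: nn.Module, loader, epochs: int = 10,
                lr: float = 1e-3, tol: float = 1e-3) -> nn.Module:
    """Train one network on (hyperspectral, color) image pairs.

    loader yields (hyper, color, labels) batches; labels are one-hot
    float targets so the MSE loss named above applies directly.
    """
    opt = torch.optim.Adam(net.parameters(), lr=lr)  # gradient-descent variant
    loss_fn = nn.MSELoss()                           # or nn.L1Loss() for MAE
    for _ in range(epochs):
        for hyper, color, labels in loader:
            opt.zero_grad()
            out = net(hyper, color)      # both images enter one network
            loss = loss_fn(out, labels)  # deviation from labeling info
            loss.backward()              # chain-rule gradients (back prop)
            opt.step()                   # adjust per-layer weight coefficients
            if loss.item() <= tol:       # stop once below preset threshold
                return net
    return net
```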
Mode two: the hyperspectral image is input into a first neural network to generate a first recognition result; the color image is input into a second neural network to generate a second recognition result; and a recognition result is generated based on a first preset weight of the first recognition result and a second preset weight of the second recognition result, wherein the recognition result comprises a position label of each object in the color image and the category to which each object belongs. In one possible implementation, the recognition result may also include the probability that each object belongs to that category.
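A sketch of this weighted fusion at inference time follows; combining softmax class probabilities is one plausible reading of generating a recognition result from the two preset weights, and is an assumption of the sketch rather than a detail fixed by this application.

```python
import torch

def fused_prediction(first_net, second_net, hyper, color,
                     w1: float, w2: float):
    """Combine the first and second recognition results with the first
    and second preset weights (softmax probabilities are an assumption)."""
    p1 = torch.softmax(first_net(hyper), dim=-1)   # first recognition result
    p2 = torch.softmax(second_net(color), dim=-1)  # second recognition result
    fused = w1 * p1 + w2 * p2                      # weighted combination
    prob, category = fused.max(dim=-1)             # category + its probability
    return category, prob
```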
The first neural network in mode two is obtained by inputting each sample image in a first training sample image set into a first neural network to be trained and iteratively adjusting the weight coefficient values of each layer of the network with a back-propagation algorithm and a gradient descent algorithm based on the constructed loss function. The first training sample image set comprises the hyperspectral image corresponding to each biological sample in the plurality of biological samples, and the labeling information of each image indicates the categories of the objects presented by the hyperspectral image.
The second neural network in mode two is obtained by inputting each sample image in a second training sample image set into a second neural network to be trained and iteratively adjusting the weight coefficient values of each layer of the network with a back-propagation algorithm and a gradient descent algorithm based on the constructed loss function. The second training sample image set comprises the color image corresponding to each biological sample in the plurality of biological samples, and the labeling information of each image indicates the categories of the objects presented by the color image. Each of the plurality of biological samples includes at least one of cells, bacteria, crystals, and casts.
After the first neural network and the second neural network are trained, they can be tested with a test sample image set, and the weight coefficients of the two networks set based on the test results. Specifically, the test sample image set comprises a hyperspectral image and a color image of each biological sample. The hyperspectral image of each biological sample is input into the first neural network to obtain a first output result for the categories of the objects presented in the hyperspectral image; the color image of each biological sample is input into the second neural network to obtain a second output result for the categories of the objects presented in the color image. A first difference between the first output result and a preset result is detected, and a second difference between the second output result and the preset result is detected, wherein the preset result indicates the categories of the objects in each biological sample. Finally, based on the first difference and the second difference, a first preset weight is set for the output result of the first neural network and a second preset weight is set for the output result of the second neural network: when the first difference is smaller than the second difference, a higher weight can be set for the first neural network and a lower weight for the second neural network; when the first difference is greater than the second difference, a lower weight can be set for the first neural network and a higher weight for the second neural network.
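One simple rule consistent with "smaller difference, higher weight" is inverse-error normalization, sketched below; the specific formula is an assumption, as the application fixes only the ordering of the weights.

```python
def set_preset_weights(first_difference: float, second_difference: float):
    """Map the measured first/second differences to the first/second
    preset weights; the smaller difference yields the higher weight."""
    inv1 = 1.0 / (first_difference + 1e-12)   # guard against zero error
    inv2 = 1.0 / (second_difference + 1e-12)
    total = inv1 + inv2
    return inv1 / total, inv2 / total         # weights sum to 1
```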
It will be appreciated that the electronic device 12 shown in fig. 1, in order to implement the functionality described in fig. 3, includes corresponding hardware and/or software modules that perform the various functions. The steps of the examples described in connection with the embodiments disclosed herein may be embodied in hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
This embodiment may divide the electronic device 12 shown in fig. 1 into functional modules according to the above method example; for example, each function may be assigned its own functional module, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in the form of hardware or software functional modules. It should be noted that the division of modules in this embodiment is schematic and is merely a division of logical functions; other divisions are possible in actual implementation.
In the case of dividing the functional modules by function, fig. 4 shows a possible schematic diagram of the image recognition apparatus 400 related to the above embodiment. The image recognition apparatus 400 corresponding to fig. 4 may be a software apparatus running on the electronic device 12 shown in fig. 1, or may be a combination of software and hardware embedded in the electronic device 12. As shown in fig. 4, the image recognition apparatus 400 may include: a hyperspectral image acquisition module 401, configured to acquire a hyperspectral image of a biological sample through a hyperspectral imaging scanner, where the biological sample includes a plurality of objects; a color image acquisition module 402, configured to acquire a color image of the biological sample through the image sensor; and an identification module 403, configured to identify the plurality of objects in the biological sample based on the hyperspectral image and the color image and generate an identification result, where the identification result is used to indicate the categories to which the plurality of objects belong, and the plurality of objects include at least one of the following: cells, bacteria, tissues, fungi, parasites, crystals, particles, and casts.
In one possible implementation, the identification module 403 includes: a first generation sub-module (not shown in the figure) for extracting first feature data of each of the plurality of objects presented by the hyperspectral image and generating a first feature data set, the first feature data being used for indicating the spectral feature of each object; a second generation sub-module (not shown) for extracting second feature data of each of the plurality of objects presented by the color image and generating a second feature data set, the second feature data comprising at least one of: cell morphology, cell size, nuclear morphology, cytoplasmic color, nuclear-cytoplasmic ratio, cell arrangement, nucleolus, cell chromatin, bacterial morphology, bacterial color, and crystal shape and color; and an identification sub-module (not shown in the figure) for identifying the plurality of objects in the biological sample based on the first feature data set and the second feature data set and generating an identification result.
In one possible implementation, the identification sub-module (not shown in the figure) is specifically configured to: compare the second feature data set with feature data in a preset color image feature database, and determine whether an unidentified object exists based on the comparison result; and when an unidentified object exists, determine the class of the unidentified object based on the position of the unidentified object in the biological sample, the first feature data of the unidentified object, and a preset correspondence between first feature data and classes.
In one possible implementation, the identification module 403 is specifically configured to: the hyperspectral image and the color image are input into a pre-trained neural network, and a recognition result is generated, wherein the recognition result comprises the position label of each object in the color image and the category to which each object belongs.
In one possible implementation, the identification module 403 includes: a first generation sub-module (not shown in the figure) for inputting the hyperspectral image into a first neural network to generate a first recognition result; a second generation sub-module (not shown in the figure) for inputting the color image into a second neural network to generate a second recognition result; and a recognition sub-module (not shown in the figure) for generating a recognition result based on the first preset weight of the first recognition result and the second preset weight of the second recognition result, wherein the recognition result comprises a position label of each object in the color image and the category to which each object belongs.
In one possible implementation, the image recognition apparatus further includes: a first neural network generating module (not shown in the figure) for training a first neural network to be trained by using a first training sample image set, and generating a first neural network based on a training result, wherein the first training sample image set comprises a hyperspectral image and labeling information of each biological sample in a plurality of biological samples, and the labeling information is used for indicating a category to which an object in each biological sample belongs; a second neural network generating module (not shown in the figure) for training a second neural network to be trained by using a second training sample image set, and generating a second neural network based on the training result, wherein the second training sample image set comprises a color image and labeling information of each biological sample in the plurality of biological samples, and the labeling information is used for indicating the category to which the object in each biological sample belongs; a first test module (not shown in the figure) for inputting each hyperspectral test sample image in the test sample image set to the first neural network to obtain a first output result for indicating the category to which the object presented in the hyperspectral test sample image belongs; a second test module (not shown in the figure) for inputting each color test sample image in a test sample image set to the second neural network to obtain a second output result for indicating a category to which the object presented in the image belongs, wherein the test sample image set includes a hyperspectral image and a color image of each biological sample in the plurality of biological samples; a detection module (not shown in the figure) for detecting a first difference between the first output result and a preset result, and detecting a second difference between the second output result and the preset result, wherein the preset result indicates a category to which the object in each biological sample belongs; a weight setting module (not shown in the figure) is configured to set a first weight on the output result of the first neural network and a second weight on the output result of the second neural network based on the first difference and the second difference.
It should be noted that when the image recognition apparatus 400 provided in the above embodiment implements its functions, the division into the above functional modules is used only as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiment provided above and the corresponding method embodiments belong to the same concept; the specific implementation processes are detailed in the method embodiments and are not repeated here.
The application also discloses an electronic device. Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be, for example, the electronic device 12 shown in fig. 1. The electronic device may include: at least one processor 501, at least one network interface 504, a user interface 503, a memory 505, and at least one communication bus 502.
Wherein a communication bus 502 is used to enable connected communications between these components.
The user interface 503 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 503 may further include a standard wired interface and a standard wireless interface.
The network interface 504 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 501 may include one or more processing cores. The processor 501 connects various parts throughout the server using various interfaces and lines, and performs the various functions of the server and processes data by executing or running instructions, programs, code sets, or instruction sets stored in the memory 505 and invoking data stored in the memory 505. Optionally, the processor 501 may be implemented in at least one hardware form among Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 501 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU renders and draws the content to be displayed on the display screen; the modem handles wireless communications. It will be appreciated that the modem may also not be integrated into the processor 501 and may instead be implemented by a separate chip.
The memory 505 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 505 comprises a non-transitory computer-readable storage medium. The memory 505 may be used to store instructions, programs, code sets, or instruction sets, and may include a stored-program area and a stored-data area, where the stored-program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, and the like; the stored-data area may store the data involved in the above method embodiments. The memory 505 may also optionally be at least one storage device located remotely from the processor 501. As shown in fig. 5, the memory 505, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program of a data processing method.
In the electronic device shown in fig. 5, the user interface 503 is mainly used to provide an input interface for a user and acquire data input by the user, and the processor 501 may be configured to invoke the application program of the data processing method stored in the memory 505; when executed by the one or more processors 501, the application program causes the electronic device to perform the method described in one or more of the embodiments above. It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts; however, those skilled in the art will understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules involved are not necessarily required for the present application.
Each of the foregoing embodiments is described with its own emphasis; for the parts of one embodiment that are not described in detail, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into unit modules is only a division by logical function, and other divisions are possible in actual implementation; for instance, multiple unit modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through service interfaces, devices, or units, and may be electrical or take other forms.
The unit modules described as separate components may or may not be physically separate, and components shown as unit modules may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the unit modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional unit modules in the embodiments of the present application may be integrated into one processing unit, each unit module may exist alone physically, or two or more unit modules may be integrated into one unit. The integrated unit modules may be implemented in the form of hardware or in the form of software functional units.
If the integrated unit modules are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc.
The foregoing is merely exemplary embodiments of the present disclosure and is not intended to limit its scope; equivalent changes and modifications made in accordance with the teachings of this disclosure fall within that scope. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure.
This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the claims.
Claims (10)
1. An image recognition method based on an image sensor and a hyperspectral imaging technology, comprising the steps of:
acquiring a hyperspectral image of a biological sample by a hyperspectral imaging scanner, wherein the biological sample comprises a plurality of objects;
acquiring a color image of the biological sample by an image sensor; and
identifying the plurality of objects in the biological sample based on the hyperspectral image and the color image, and generating an identification result, wherein the identification result is used to indicate the categories to which the plurality of objects belong, and the plurality of objects comprise at least one of the following: cells, bacteria, tissues, fungi, parasites, crystals, particles, and tubes.
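By way of illustration only (not part of the claims), the two-channel acquisition and identification flow of claim 1 can be sketched as below; the device handles (`hyperspectral_scanner`, `image_sensor`) and the `classifier` object are hypothetical placeholders, since the claim does not prescribe any concrete API:

```python
def acquire_and_identify(hyperspectral_scanner, image_sensor, classifier):
    """Illustrative pipeline for claim 1 (all arguments are hypothetical stand-ins)."""
    # Step 1: hyperspectral cube of the biological sample, shape (H, W, n_bands)
    hsi_cube = hyperspectral_scanner.scan()
    # Step 2: conventional color image of the same sample, shape (H, W, 3)
    rgb_image = image_sensor.capture()
    # Step 3: joint identification; yields one category per detected object,
    # e.g. "cell", "bacterium", "fungus", "parasite", "crystal"
    return classifier.identify(hsi_cube, rgb_image)
```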
2. The method of claim 1, wherein identifying the plurality of objects in the biological sample based on the hyperspectral image and the color image and generating the identification result comprises:
extracting first feature data of each object of the plurality of objects presented in the hyperspectral image, and generating a first feature data set, wherein the first feature data is used to indicate the spectral characteristics of each object;
extracting second feature data of each object of the plurality of objects presented in the color image, and generating a second feature data set, wherein the second feature data comprises at least one of the following: cell morphology, cell size, nuclear morphology, cytoplasmic color, cell arrangement, nucleoli, cell chromatin, bacterial morphology, bacterial color, and crystal shape and color; and
identifying the plurality of objects in the biological sample based on the first feature data set and the second feature data set, and generating the identification result.
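As an illustration of the two feature sets in claim 2, the sketch below derives a simple per-object spectral signature from the hyperspectral cube and basic size/color descriptors from the color image; the mean-spectrum and mean-color choices are assumptions, since the claim does not fix particular descriptors:

```python
import numpy as np

def first_feature_data(hsi_cube, object_mask):
    """Spectral feature per claim 2: mean reflectance per band over the object's pixels."""
    pixels = hsi_cube[object_mask]      # (n_object_pixels, n_bands)
    return pixels.mean(axis=0)          # (n_bands,) spectral signature

def second_feature_data(rgb_image, object_mask):
    """Color-image features per claim 2: crude proxies for cell size and cytoplasmic color."""
    return {
        "size": int(object_mask.sum()),                     # pixel area as a size proxy
        "mean_color": rgb_image[object_mask].mean(axis=0),  # average RGB over the object
    }
```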
3. The method of claim 2, wherein identifying the plurality of objects in the biological sample based on the first feature data set and the second feature data set and generating the identification result comprises:
comparing the second feature data set with the feature data in a preset color-image feature database, and determining, based on the comparison result, whether an unidentified object exists; and
when an unidentified object exists, determining the category of the unidentified object based on the position of the unidentified object in the biological sample, the first feature data of the unidentified object, and a preset correspondence between first feature data and categories.
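One possible reading of claim 3, sketched below with hypothetical databases: objects are first matched against preset color-image features by nearest-neighbour distance, and anything left unidentified is assigned a category from its spectral (first) feature data; the Euclidean metric and the threshold value are assumptions, and the claim additionally uses the object's position in the sample, which is omitted here for brevity:

```python
import numpy as np

def classify_object(color_vec, spectral_vec, color_db, spectral_db, threshold=0.5):
    """Two-stage matching per claim 3 (databases, metric, and threshold are illustrative)."""
    # Stage 1: compare against the preset color-image feature database
    cat, dist = min(
        ((c, np.linalg.norm(color_vec - v)) for c, v in color_db.items()),
        key=lambda cv: cv[1],
    )
    if dist <= threshold:
        return cat  # identified from the color image alone
    # Stage 2: unidentified object -> preset first-feature-data/category correspondence
    return min(
        ((c, np.linalg.norm(spectral_vec - v)) for c, v in spectral_db.items()),
        key=lambda cv: cv[1],
    )[0]
```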
4. The method of claim 1, wherein identifying the plurality of objects in the biological sample based on the hyperspectral image and the color image and generating the identification result comprises:
inputting the hyperspectral image and the color image into a pre-trained neural network to generate the identification result, wherein the identification result comprises a position mark of each object in the color image and the category to which each object belongs.
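A minimal sketch, in PyTorch, of the kind of jointly trained network claim 4 could use; the two-branch architecture and all layer sizes are assumptions, as the claim only requires that both images enter one pre-trained network:

```python
import torch
import torch.nn as nn

class DualInputNet(nn.Module):
    """Hypothetical joint network for claim 4: separate encoders for the
    hyperspectral cube and the color image, fused by concatenation."""

    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        self.hsi_branch = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)  # 32 + 32 fused features

    def forward(self, hsi, rgb):
        fused = torch.cat(
            [self.hsi_branch(hsi).flatten(1), self.rgb_branch(rgb).flatten(1)], dim=1
        )
        return self.head(fused)  # per-class scores used as the identification result
```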
5. The method of claim 1, wherein identifying the plurality of objects in the biological sample based on the hyperspectral image and the color image and generating the identification result comprises:
inputting the hyperspectral image into a first neural network to generate a first recognition result;
inputting the color image into a second neural network to generate a second recognition result; and
generating the identification result based on a first preset weight for the first recognition result and a second preset weight for the second recognition result, wherein the identification result comprises a position mark of each object in the color image and the category to which each object belongs.
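The weighted combination in claim 5 can be sketched in a few lines; the per-class-score representation and the example weight values are assumptions:

```python
import numpy as np

def fuse_results(first_scores, second_scores, w1=0.6, w2=0.4):
    """Claim 5 sketch: blend the two networks' per-class scores with preset weights."""
    assert abs(w1 + w2 - 1.0) < 1e-9, "weights are assumed to be normalized"
    return w1 * np.asarray(first_scores) + w2 * np.asarray(second_scores)

# e.g. taking the argmax over fuse_results(hsi_net_scores, rgb_net_scores)
# picks the category reported in the identification result
```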
6. The method of claim 5, wherein the method further comprises:
training a first neural network to be trained by using a first training sample image set, and generating the first neural network based on the training result, wherein the first training sample image set comprises a hyperspectral image and labeling information of each biological sample of a plurality of biological samples, and the labeling information is used to indicate the categories of the objects in each biological sample;
training a second neural network to be trained by using a second training sample image set, and generating the second neural network based on the training result, wherein the second training sample image set comprises a color image and labeling information of each biological sample of the plurality of biological samples;
inputting each hyperspectral test sample image of a test sample image set into the first neural network to obtain a first output result indicating the category to which the object presented in that hyperspectral test sample image belongs, wherein the test sample image set comprises a hyperspectral image and a color image of each biological sample of a plurality of biological samples;
inputting each color test sample image of the test sample image set into the second neural network to obtain a second output result indicating the category to which the object presented in that color test sample image belongs;
detecting a first difference between the first output result and a preset result, and a second difference between the second output result and the preset result, wherein the preset result indicates the category to which the object in each biological sample belongs; and
setting, based on the first difference and the second difference, the first preset weight for the output result of the first neural network and the second preset weight for the output result of the second neural network.
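Claim 6 only states that the two preset weights are set "based on" the measured differences; one natural rule, sketched below purely as an assumption, weights each network in inverse proportion to its test error:

```python
def weights_from_differences(first_diff, second_diff, eps=1e-9):
    """Inverse-error weighting (an assumed instantiation of claim 6): the network
    that deviates less from the preset result receives the larger weight."""
    inv1, inv2 = 1.0 / (first_diff + eps), 1.0 / (second_diff + eps)
    total = inv1 + inv2
    return inv1 / total, inv2 / total

# e.g. error rates of 5% and 15% on the test sample image set:
# weights_from_differences(0.05, 0.15) -> (approximately 0.75, 0.25)
```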
7. A dual-channel imaging scanner, characterized by comprising an optical microscope assembly, a dual-channel conversion interface, a hyperspectral imaging scanner, and an image sensor; wherein
the hyperspectral imaging scanner is connected to the optical microscope assembly through a first conversion interface of the dual-channel conversion interface;
the image sensor is connected to the optical microscope assembly through a second conversion interface of the dual-channel conversion interface;
the hyperspectral imaging scanner acquires a hyperspectral image of a biological sample through the first conversion interface and the optical microscope assembly; and
the image sensor acquires a color image of the biological sample through the second conversion interface and the optical microscope assembly.
8. The dual-channel imaging scanner of claim 7, further comprising an eyepiece coupled to the optical microscope assembly.
9. An image recognition system, characterized by comprising an electronic device and the dual-channel imaging scanner of claim 7 or 8; wherein
the electronic device comprises a processor, a memory, and an interface;
the memory is configured to store instructions;
the interface is configured to connect to the dual-channel imaging scanner; and
the processor is configured to execute the instructions stored in the memory to cause the electronic device to perform the method of any one of claims 1-6.
10. A readable storage medium comprising computer instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311215888.6A CN118351531A (en) | 2023-09-20 | 2023-09-20 | Image recognition method based on image sensor and hyperspectral imaging technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118351531A true CN118351531A (en) | 2024-07-16 |
Family
ID=91823702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311215888.6A Pending CN118351531A (en) | 2023-09-20 | 2023-09-20 | Image recognition method based on image sensor and hyperspectral imaging technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118351531A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6246785B1 (en) | Automated, microscope-assisted examination process of tissue or bodily fluid samples | |
DK2973397T3 (en) | Tissue-object-based machine learning system for automated assessment of digital whole-slide glass | |
AU2014237346B2 (en) | System and method for reviewing and analyzing cytological specimens | |
US20170061608A1 (en) | Cloud-based pathological analysis system and method | |
KR20200140301A (en) | Method and system for digital staining of label-free fluorescent images using deep learning | |
JP5469070B2 (en) | Method and system using multiple wavelengths for processing biological specimens | |
CN110473167B (en) | Urine sediment image recognition system and method based on deep learning | |
JP2021506003A (en) | How to store and retrieve digital pathology analysis results | |
RU2732895C1 (en) | Method for isolating and classifying blood cell types using deep convolution neural networks | |
WO2014192184A1 (en) | Image processing device, image processing method, program, and storage medium | |
JP5154844B2 (en) | Image processing apparatus and image processing program | |
US20230393380A1 (en) | Method And System For Identifying Objects In A Blood Sample | |
WO2016189469A1 (en) | A method for medical screening and a system therefor | |
JP4864709B2 (en) | A system for determining the staining quality of slides using a scatter plot distribution | |
Bonton et al. | Colour image in 2D and 3D microscopy for the automation of pollen rate measurement | |
CN118351531A (en) | Image recognition method based on image sensor and hyperspectral imaging technology | |
JP2008304205A (en) | Spectral characteristics estimation apparatus and spectral characteristics estimation program | |
JP2010054426A (en) | Observation method of disease | |
WO2024185434A1 (en) | Information processing device, biological specimen analyzing system, and biological specimen analyzing method | |
KR20250098245A (en) | Artificial intelligence-based portable in vitro diagnostic kit analysis device | |
AU2012244307A1 (en) | Methods and systems for processing biological specimens utilizing multiple wavelengths |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||