Disclosure of Invention
The present invention has been made in view of the above-described problems occurring in the prior art.
Therefore, the invention provides a method for counting microorganisms using combined images, which solves the problem of feature matching deviation in existing microorganism counting techniques.
To solve the above technical problems, the invention provides the following technical solutions:
In a first aspect, the present invention provides a method for counting microorganisms using a combined image, comprising:
acquiring a microorganism microscopic image and performing preprocessing;
extracting core feature points of the preprocessed microscopic image by using a convolutional neural network model, and calculating the similarity between the core feature points to obtain microorganism microscopic images containing stitching positions;
stitching the microorganism microscopic images by using an adaptive stitching algorithm, and identifying and correcting errors in the stitching process to obtain a stitched panoramic image;
inputting the stitched panoramic image into a graph cut algorithm, constructing an energy function of the panoramic image, finding the segmentation path with the lowest energy by using a minimum cut algorithm, and generating a panoramic image of the microorganism regions;
performing morphological operations on the panoramic image of the microorganism regions, optimizing the boundaries of the microorganism regions, and outputting an optimized panoramic image;
and counting microorganisms according to the optimized panoramic image, and outputting a microorganism counting result report.
As a preferable mode of the method for counting microorganisms using a combined image according to the present invention, the preprocessing includes grayscale conversion, histogram equalization, noise removal, and edge detection.
As a preferred scheme of the method for counting microorganisms using a combined image according to the present invention, extracting the core feature points of the preprocessed microscopic image by using a convolutional neural network model and calculating the similarity between the core feature points to obtain microorganism microscopic images containing stitching positions comprises:
selecting a CNN model, inputting the preprocessed microscopic image into the CNN model, and outputting a feature map set;
performing global pooling on the feature map set to obtain a feature vector set of the microorganism microscopic image, generating feature point descriptors, matching the feature point descriptors, and outputting a feature point matching result;
and outputting a geometric transformation matrix between adjacent images according to the feature point matching result and applying the geometric transformation matrix to each pair of adjacent images to obtain microorganism microscopic images containing stitching positions.
As a preferred scheme of the method for counting microorganisms using a combined image according to the present invention, stitching the microorganism microscopic images by using the adaptive stitching algorithm comprises performing feathering, color correction, stitching error detection, and stitching error correction on the microorganism microscopic images, and outputting a stitched panoramic image.
As a preferred scheme of the method for counting microorganisms using a combined image according to the present invention, inputting the stitched panoramic image into a graph cut algorithm, constructing an energy function of the panoramic image, finding the segmentation path with the lowest energy by using a minimum cut algorithm, and generating a panoramic image of the microorganism regions comprises:
constructing the energy function of the panoramic image, taking each pixel of the stitched panoramic image as a graph node, and taking the adjacency between neighboring pixels as edges to construct a graph structure;
and calculating the weights of the edges in the graph structure, finding the segmentation path with the lowest energy based on the energy function of the panoramic image, assigning unique identifiers, and outputting the panoramic image of the microorganism regions.
As a preferred scheme of the method for counting microorganisms using a combined image according to the present invention, performing morphological operations on the panoramic image of the microorganism regions, optimizing the boundaries of the microorganism regions, and outputting an optimized panoramic image comprises:
performing morphological operations by selecting structuring elements, repeatedly scanning each pixel of the panoramic image of the microorganism regions, and outputting the panoramic image after the morphological operations;
and selecting a U-Net model as the deep learning model, and inputting the panoramic image after the morphological operations into the U-Net model to obtain the optimized panoramic image.
As a preferred scheme of the method for counting microorganisms using a combined image according to the present invention, counting microorganisms according to the optimized panoramic image and outputting a microorganism counting result report comprises:
selecting a microorganism classification standard to classify the microorganism regions, traversing each microorganism region, counting the number of microorganisms, and summarizing to obtain a microorganism statistical result;
and integrating the microorganism statistical result into a microorganism counting result report.
In a second aspect, the present invention provides a device for counting microorganisms using a combined image, comprising:
an image acquisition module for acquiring a microorganism microscopic image and performing preprocessing;
a feature extraction module for extracting core feature points of the preprocessed microscopic image by using a convolutional neural network model, and calculating the similarity between the core feature points to obtain microorganism microscopic images containing stitching positions;
an image stitching module for stitching the microorganism microscopic images by using an adaptive stitching algorithm, and identifying and correcting errors in the stitching process to obtain a stitched panoramic image;
an image generation module for inputting the stitched panoramic image into a graph cut algorithm, constructing an energy function of the panoramic image, finding the segmentation path with the lowest energy by using a minimum cut algorithm, and generating a panoramic image of the microorganism regions;
an image optimization module for performing morphological operations on the panoramic image of the microorganism regions, optimizing the boundaries of the microorganism regions, and outputting an optimized panoramic image;
and a report generation module for counting microorganisms according to the optimized panoramic image and outputting a microorganism counting result report.
In a third aspect, the invention provides a computer device comprising a memory and a processor, the memory storing a computer program, wherein the computer program when executed by the processor performs any of the steps of the method of microorganism counting using a combined image according to the first aspect of the invention.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor performs any of the steps of the method for microorganism counting using a combined image according to the first aspect of the present invention.
The beneficial effects of the invention are as follows. The convolutional neural network model is used to extract the core feature points of the preprocessed microscopic image, and the similarity between the core feature points is calculated to obtain microorganism microscopic images containing stitching positions. The microorganism microscopic images are stitched by using an adaptive stitching algorithm, and errors in the stitching process are identified and corrected to obtain a stitched panoramic image. The application of the FLANN and RANSAC algorithms greatly improves the matching precision and the stability of the geometric transformation between adjacent images, and the feathering technique and color correction markedly improve the transition and smoothness of the overlapping regions, making the stitched panoramic image more natural and consistent. The use of the breadth-first search and minimum cut algorithms improves the precision of image segmentation and avoids the image misalignment and inconsistent overlapping that occur in conventional methods.
Detailed Description
In order that the above objects, features and advantages of the present invention may be more readily understood, the invention is described in greater detail below with reference to specific embodiments illustrated in the appended drawings.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention; however, the invention may be practiced in ways other than those described herein, and persons skilled in the art will appreciate that the invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Embodiment 1. Referring to figs. 1 to 3, a first embodiment of the present invention provides a method for counting microorganisms using a combined image, comprising the following steps:
S1, acquiring a microorganism microscopic image and preprocessing.
The step comprises:
converting the color value of each pixel in the microorganism microscopic image into a gray value by using the grayscale conversion function in the ImageJ image processing software, and outputting a grayscale microorganism microscopic image;
performing histogram equalization on the grayscale microorganism microscopic image by using the histogram equalization function in ImageJ, and outputting an equalized microorganism microscopic image;
removing noise from the equalized microorganism microscopic image by using a Gaussian filter kernel, and outputting a denoised microorganism microscopic image;
and performing edge detection on the denoised microorganism microscopic image by using the Canny edge detection algorithm, and outputting the edge-detected microorganism microscopic image.
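The first two preprocessing sub-steps above can be sketched in Python; as an assumption, NumPy stands in for ImageJ's built-in functions (standard luminance weights and CDF-based equalization are used; Gaussian denoising and Canny edge detection would follow analogously with an image-processing library):

```python
import numpy as np

def to_gray(rgb):
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def equalize(gray):
    # Histogram equalization: remap gray levels through the normalized CDF.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[gray] * 255).astype(np.uint8)

# A tiny synthetic 2x2 RGB image standing in for a micrograph tile.
img = np.array([[[10, 20, 30], [200, 210, 220]],
                [[100, 110, 120], [50, 60, 70]]], dtype=np.uint8)
gray = to_gray(img)
eq = equalize(gray)
```

After equalization the gray levels of the tile are spread over the full 0-255 range, which is the stated purpose of the equalization sub-step.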
S2, extracting core feature points of the preprocessed microscopic image by using a convolutional neural network model, and calculating the similarity between the core feature points to obtain microorganism microscopic images containing stitching positions.
The step comprises:
S2.1, a CNN model (i.e., a convolutional neural network model) is selected as the feature extractor. The preprocessed microorganism microscopic image is divided into a plurality of image blocks of equal size (for example, 256×256 pixels each) by using a sliding window technique, and each image block is then input into the CNN model one by one, the CNN model automatically generating a feature map for each image block. A feature map is a multi-channel matrix output by a convolutional layer, with each channel representing one feature pattern. Specifically, the CNN model extracts the core features of each image block (including edge contours, textures, colors, intensity distributions, and local pattern structures; the shallow convolutional layers extract low-level features such as edges and textures, while the deep convolutional layers extract high-level features such as local pattern structures) through convolutional layer operations and ReLU activation functions, and combines the core features of each image block into a feature map set;
Next, a global pooling operation is performed on the feature map set. Specifically, for each feature map, the average value of all pixels in each channel is calculated, and the per-channel averages are combined into one feature vector of uniform length. The feature vectors are then concatenated in the order of the positions of their image blocks in the original image to form the feature vector set of the whole microorganism microscopic image.
S2.2, L2 normalization is applied to each feature vector in the feature vector set of the microorganism microscopic image to generate feature point descriptors. Specifically, each feature vector is divided by its own Euclidean norm, adjusting its modulus to 1 and yielding a normalized feature vector (i.e., a feature point descriptor);
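A minimal sketch of the L2 normalization in S2.2, assuming each row of a NumPy array holds one feature vector:

```python
import numpy as np

def l2_normalize(vectors, eps=1e-12):
    # Divide each feature vector by its Euclidean norm so that ||v|| == 1;
    # eps guards against division by zero for an all-zero vector.
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / np.maximum(norms, eps)

# Two toy feature vectors standing in for pooled CNN features.
feats = np.array([[3.0, 4.0], [0.0, 2.0]])
desc = l2_normalize(feats)
```

Each row of `desc` now has unit modulus, which is exactly the property the descriptor matching in S2.4 relies on.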
S2.3, determining an overlapping area between adjacent images. Specifically, it is determined according to the microscope setting and the photographing order at the time of photographing the image. For example, if continuous photographing using an automatic stage is used, the overlapping ratio between adjacent images can be estimated from the distance of each movement (e.g., the step size of a stepping motor). Typically, the overlapping area between adjacent images will be 20% -30% of the total area of the images.
S2.4, the feature point descriptors in adjacent images are matched according to the overlapping areas between adjacent images. Specifically, the FLANN nearest neighbor matching algorithm is used to compare the feature point descriptors in each pair of adjacent images by fast approximate search (controlled by the search parameter checks), finding the most similar matching point pairs among all feature points. The similarity (i.e., the Euclidean distance) between each pair of matching points is then calculated, and the matches with the smallest Euclidean distance are selected as the matching result (i.e., the feature point matching result).
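The matching in S2.4 can be illustrated with a brute-force nearest-neighbour search; this is an assumed stand-in for FLANN, which computes the same Euclidean-distance matches approximately but much faster on large descriptor sets:

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    # Pairwise Euclidean distance matrix between every descriptor in A and B.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nearest = d.argmin(axis=1)  # best match in B for each descriptor in A
    return [(i, int(j), float(d[i, j])) for i, j in enumerate(nearest)]

# Toy descriptors from two overlapping tiles.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 0.9], [0.9, 0.1]])
matches = match_descriptors(a, b)
```

Each tuple records (index in A, index in B, Euclidean distance); the smallest-distance pairs form the feature point matching result passed to S2.5.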
S2.5, a geometric transformation matrix between adjacent images is calculated according to the matching result. Specifically, the random sample consensus (RANSAC) algorithm is selected, and its number of iterations (1000), inlier threshold (a fixed pixel distance, e.g., 5 pixels), and minimum sample number (4) are set. A set of points equal to the minimum sample number is then randomly drawn from the matching point pairs as the initial sample point set (each sample point contains two coordinates, one on the first image and one on the second image; assume the coordinates of sample point A are (x1, y1) and those of sample point B are (x2, y2)), and a homography matrix is calculated for the initial sample point set. The calculation records the correspondence between the coordinates of each sample point pair (for example, the correspondence between sample points A and B is (x1, y1) -> (x2, y2)), maps the point at (x1, y1) to the point at (x2, y2) through translation, rotation, and scaling, constructs a system of equations that aligns all mapped sample points (the system can be understood as describing the correspondences between all mapped sample points), solves the system for the transformation parameters (i.e., the values of the homography matrix), and transforms the coordinates of each sample point on the first image through the homography matrix to obtain new coordinates on the second image. In this way, all sample points can be mapped according to their correspondences.
S2.6, according to the homography matrix of the initial sample point set, the distances between all mapped sample points and the actual sample points are computed; if a distance is smaller than the inlier threshold preset for the RANSAC algorithm, the matching point pair is regarded as an inlier, i.e., a point pair consistent with the current transformation matrix (this can be understood as an error evaluation of the mapping). The number of inliers is then counted and the homography matrix is updated; the three operations of selecting a sample point set, calculating a homography matrix, and counting inliers are repeated until the set number of iterations is reached, and the homography matrix with the most inliers is selected as the final geometric transformation matrix.
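The RANSAC loop of S2.5-S2.6 can be sketched as follows; for brevity this assumed example uses a translation-only motion model with a one-point minimal sample, whereas the method proper estimates a homography from 4-point samples:

```python
import numpy as np

def ransac_translation(src, dst, iters=1000, inlier_thresh=5.0, seed=0):
    # Simplified RANSAC: sample a candidate model, count inliers whose
    # mapping error is below the threshold, keep the model with most inliers.
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))        # minimal sample: one matched pair
        t = dst[i] - src[i]               # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = int((err < inlier_thresh).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# Matched points: a clean shift of (10, 5) plus one gross outlier.
src = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0], [5.0, 5.0]])
dst = src + np.array([10.0, 5.0])
dst[3] = [100.0, 100.0]                   # a deliberately wrong match
t, n = ransac_translation(src, dst)
```

The outlier is rejected and the recovered translation agrees with the three consistent matches, mirroring how the full algorithm keeps the homography with the most inliers.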
S2.7, applying the geometric transformation matrix to each pair of adjacent images, counting the size and the translation of the overlapped area of each pair of adjacent images, and outputting the microorganism microscopic image containing the splicing position.
S3, stitching the microorganism microscopic images by using an adaptive stitching algorithm, and identifying and correcting errors in the stitching process to obtain a stitched panoramic image.
The step comprises:
S3.1, one image is first selected as the reference image, and the image fusion of the microorganism microscopic images is then performed. Specifically, a transition zone (20-30 pixels wide) is set around the overlapping area of each pair of adjacent images by using the feathering technique, and all pixels of each pair of adjacent images are weighted and averaged by Gaussian weighting, so that the gradual transition across the overlapping area becomes smoother and more natural. In particular, each pixel of each pair of adjacent images is assigned a weight based on a Gaussian distribution: the closer a point is to the center of the overlapping area, the higher its weight; the farther, the lower.
S3.2, the color mean and standard deviation of each pair of adjacent images are calculated to determine their overall brightness and contrast. The hues of each pair of adjacent images are then unified according to the overall brightness and contrast; specifically, the color histogram of each pair of adjacent images is adjusted to match the reference image using a histogram matching technique. Histogram matching unifies the color distributions of two images by remapping their pixel values. For example, if one image of a pair of adjacent images is darker and the other lighter, adjusting their pixel values makes their color distributions consistent.
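The histogram matching of S3.2 can be sketched with the standard CDF-mapping construction (a minimal illustration for grayscale tiles, not the exact implementation):

```python
import numpy as np

def match_histogram(source, reference):
    # Remap source gray levels so the source CDF follows the reference CDF.
    s_vals, s_counts = np.unique(source, return_counts=True)
    r_vals, r_counts = np.unique(reference, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source level, find the reference level at the matching CDF.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return np.interp(source, s_vals, mapped)

dark = np.array([[10, 20], [20, 30]], dtype=np.uint8)       # darker tile
light = np.array([[100, 200], [200, 220]], dtype=np.uint8)  # reference tile
out = match_histogram(dark, light)
```

The darker tile's gray levels are pulled up to the reference tile's distribution, which is exactly the brightening/darkening behaviour the step describes.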
S3.3, in the stitching process, problems such as image misalignment or inconsistent overlapping may occur. The accuracy of the stitching result is verified using the feature point matching result: whether the positions of all matched feature points in the stitched image remain consistent is checked, an error threshold is set (in the range of 1-5 pixels, adjustable according to the actual stitching conditions), and when the distance between the actual position and the expected position of a feature point exceeds the threshold, that feature point is considered to have a stitching error.
S3.4, after a stitching error is detected, the geometric transformation matrix is fine-tuned using a local optimization algorithm. Specifically, the feature points with larger errors are found and the local transformation matrix is recalculated. For example, several feature points with larger errors can be selected as a new sample point set, the geometric transformation matrix recalculated and updated, and the iteration repeated according to the set number of iterations (i.e., the geometric transformation matrix is continuously adjusted and the stitching errors gradually reduced until the stitching errors of all feature points fall within the error threshold).
S3.5, the images after feathering, color correction, and stitching error correction are combined into a complete panoramic image, which is saved in a standard format (TIFF) to generate the stitched panoramic image.
S4, inputting the stitched panoramic image into a graph cut algorithm, constructing an energy function of the panoramic image, finding the segmentation path with the lowest energy by using the minimum cut algorithm, and generating a panoramic image of the microorganism regions.
The step comprises:
S4.1, the energy function of the panoramic image is constructed from a data term (determined by the color or brightness of the microorganisms under the illumination conditions) and a smoothness term. The data term measures the probability that each pixel in the stitched panoramic image belongs to a particular region; that is, whether a pixel belongs to a microorganism region is judged from its gray value or color distribution. The smoothness term measures the similarity between adjacent pixels of the stitched panoramic image: the gray difference between adjacent pixels is calculated and used as part of the energy function. Since each pixel has 4 directly adjacent pixels (up, down, left, and right), the calculation divides into two cases. For a grayscale image, the gray difference between adjacent pixels is determined by comparing their gray values; for example, if one pixel has a gray value of 50 and the pixel above it has a gray value of 55, the difference between them is 5. For a color image, the difference of each color channel (red, green, blue) is calculated separately; for example, for a pixel with RGB values (100, 150, 200) and its right neighbor with RGB values (105, 153, 198), the three color channels differ by 5, 3, and 2, respectively.
S4.2, the gray differences between all adjacent pixels of the stitched panoramic image are accumulated to obtain a total difference value, which is given a weight according to the actual segmentation requirements to calculate the smoothness term contribution (for example, with a smoothness weight of 0.2 and a total difference value of 3.5, the contribution is 3.5 × 0.2 = 0.7). The smoothness term contribution is then integrated into the energy function (assuming 100 pixels each contributing 0.7, the total smoothness energy is 100 × 0.7 = 70).
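The smoothness-term arithmetic of S4.1-S4.2 reduces to accumulating neighbour differences and scaling by the weight; a minimal grayscale sketch:

```python
import numpy as np

def smoothness_energy(gray, weight=0.2):
    # Sum absolute gray differences between horizontally and vertically
    # adjacent pixels, then scale by the smoothness weight (as in S4.2).
    dh = np.abs(np.diff(gray.astype(int), axis=1)).sum()  # left-right pairs
    dv = np.abs(np.diff(gray.astype(int), axis=0)).sum()  # up-down pairs
    return weight * (dh + dv)

# A tiny tile: differences are 5 and 3 horizontally, 2 and 0 vertically.
tile = np.array([[50, 55], [52, 55]])
energy = smoothness_energy(tile)
```

With a total difference of 10 and weight 0.2 the smoothness contribution is 2.0, consistent with the worked 3.5 × 0.2 = 0.7 example in the text.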
S4.3, each pixel of the stitched panoramic image is regarded as a graph node, the adjacency between neighboring pixels as edges, and a graph structure is constructed with two special nodes: a source and a sink. The source represents the foreground region, the sink represents the background region, and each pixel node is connected to both the source and the sink by an edge.
According to the smoothness term, the edge weight between each pixel and its adjacent pixels is determined (the smaller the gray difference, the larger the edge weight, and the more likely the two pixels belong to the same region); then, according to the data term, the edge weights between each pixel and the source and sink are determined. Specifically, the probability that each pixel belongs to the foreground or background region is judged from features such as its color and texture. For example, if a pixel's color is closer to that of the microorganisms, its edge weight to the source is larger; conversely, if it is closer to the background color, its edge weight to the sink is larger.
S4.4, the segmentation path with the lowest energy is found on the basis of the constructed energy function by using the minimum cut algorithm. Specifically, initial weights are assigned to all edges in the graph structure and the state of the flow network is initialized (the flow network is a directed graph that enables effective segmentation of the image). A breadth-first search is then used to find an augmenting path between the source and the sink (the flow of all edges is set to 0 when the search starts), recording the residual capacity of each edge (i.e., the maximum additional flow the edge can carry) during the search. If the current flow of a forward edge is smaller than its capacity, the forward edge has positive residual capacity; for a reverse edge, positive residual capacity indicates flow that can be pushed back.
Based on the residual capacities of all edges, each edge along the search path is checked for sufficient residual capacity. Specifically, if the current node has positive residual capacity to the next node, the search continues; otherwise it backtracks or stops. In this way an augmenting path from source to sink is gradually constructed, in which every edge has positive residual capacity.
S4.5, the minimum residual capacity along the augmenting path between the source and the sink is calculated, and the actual flow of each edge on the path is updated accordingly (for a forward edge, the flow is increased and the residual capacity reduced; for a reverse edge, the same amount of flow is recorded so that it can later be undone, which can be understood as allowing flow to be pushed back along the reverse edge). After the update, the search continues from the source; each time a new augmenting path is found, the state of the flow network is updated and the residual capacities are recalculated, until no augmenting path from source to sink can be found. When no augmenting path remains, the maximum flow (i.e., the minimum cut) has been found, which is the segmentation path with the lowest energy.
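Steps S4.4-S4.5 describe the Edmonds-Karp maximum-flow algorithm (BFS augmenting paths over residual capacities); a compact sketch on an assumed toy 4-node graph rather than a full pixel graph:

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    # BFS-based max-flow: repeatedly find an augmenting path, push its
    # bottleneck capacity, and update residual capacities (including
    # reverse edges) until no augmenting path remains.
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    max_flow = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:            # no augmenting path left: done
            break
        # Bottleneck = minimum residual capacity along the path (S4.5).
        path, v = [], sink
        while v != source:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck      # forward edge: push flow
            flow[v][u] -= bottleneck      # reverse edge: allow undo
        max_flow += bottleneck
    return max_flow

# Toy directed graph: node 0 = source, node 3 = sink.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 3],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
max_flow = edmonds_karp(cap, 0, 3)  # max flow (= min cut) is 5 here
```

By the max-flow/min-cut theorem, the value returned equals the weight of the minimum cut, i.e., the lowest-energy segmentation of the corresponding graph.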
S4.6, the segmentation result with the lowest energy is saved as an image file in PNG format, a unique identifier (e.g., a color label) is assigned to each microorganism region in the segmentation, and the microorganism regions are marked in the image file to form the panoramic image of the microorganism regions.
The panoramic image of the microorganism regions obtained in this step is based on a more comprehensive energy function that accounts for both the data term and the smoothness term. Compared with traditional image segmentation methods that rely on a single feature (such as color or texture), the segmentation accuracy is improved and the smoothness and continuity of the segmentation result are ensured; finding augmenting paths by breadth-first search and updating the state of the flow network to approach the maximum flow step by step greatly improves the efficiency of the algorithm while preserving segmentation quality.
S5, performing morphological operations on the panoramic image of the microorganism regions, optimizing the boundaries of the microorganism regions, and outputting the optimized panoramic image.
The step comprises:
S5.1, the morphological operations include dilation (filling small holes in the microorganism regions) and erosion (removing isolated small noise points). Taking dilation as an example, an appropriate structuring element (e.g., a 3×3 square) is first selected, and each pixel is scanned row by row and column by column from the upper left corner of the panoramic image of the microorganism regions: the current pixel is taken as the center point, the structuring element is overlaid on the center point and its neighborhood, and all pixels covered by the structuring element are checked one by one; if any of them belongs to a microorganism region, the center point is marked as part of the microorganism region even if it did not originally belong to one.
Each pixel is scanned in this way until the whole panoramic image of the microorganism regions has been traversed. Small holes in the microorganism regions are thereby effectively filled, making the regions more complete and continuous.
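The dilation scan of S5.1 can be sketched directly; the 3×3 square structuring element and the hole-filling behaviour follow the description above:

```python
import numpy as np

def dilate(mask):
    # Binary dilation with a 3x3 square structuring element: a pixel
    # becomes foreground if any pixel in its 3x3 neighbourhood is.
    padded = np.pad(mask, 1)              # zero border so edges are safe
    out = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].any()
    return out

# A region with a one-pixel hole in the middle; dilation fills it.
region = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]], dtype=np.uint8)
filled = dilate(region)
```

Erosion is the dual operation (a pixel stays foreground only if its whole neighbourhood is), which is what removes isolated noise points.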
S5.2, a U-Net model (U-shaped network model) is selected as the deep learning model, and the panoramic image after the morphological operations is input into the U-Net model. The U-Net model gradually restores the resolution of the image through its decoder and generates a pixel-level classification result (i.e., the probability that each pixel belongs to a microorganism region); the model then further optimizes the boundary of each identified microorganism region. Specifically, the U-Net model adjusts the boundaries of the microorganism regions according to the edge information (such as gradient changes) and texture features (such as local consistency) in the image, making them clearer and more accurate. For example, the U-Net model can recover more detail near the boundaries of the microorganism regions (such as fine protrusions and depressions) or smooth discontinuities (i.e., breaks or uneven stretches on the boundary, such as jagged edges and broken lines) to improve segmentation accuracy.
The U-Net model consists of an encoder (for feature extraction) consisting of multiple convolutional layers and a pooling layer, and a decoder (for resolution recovery) containing an upsampling layer and a convolutional layer. The training process of the U-Net model comprises the steps of selecting panoramic images of a microorganism area, extracting a small batch of panoramic images and labels to serve as a training set, inputting the training set into an encoder part of the U-Net, gradually reducing the spatial resolution of the panoramic images through a series of convolution layers and pooling layers by the encoder, and increasing the number of channels to capture more complex features. Each layer generates a panoramic image feature map representing abstract information at different levels. All panoramic image feature maps output by the encoder then enter the decoder section. The decoder gradually restores the spatial resolution of the original panoramic image through the deconvolution and upsampling operations. In this process, the decoder retains detailed information of the panoramic image and improves segmentation accuracy in conjunction with the skip connection from the corresponding layer of the encoder. Finally, the decoder outputs a probability map of the same size as the input microorganism region panoramic image, wherein each pixel value represents the probability that the pixel belongs to the microorganism region.
The output of the decoder is converted into a binary prediction and compared with the ground-truth label (namely, the annotation of whether each pixel belongs to the microorganism region) to calculate the average loss value over the whole batch of panoramic images. Specifically, a binary cross-entropy loss function is used: the logarithmic loss between the predicted probability and the ground-truth label is computed for each pixel, and the mean over all pixels is taken as the final average loss value.
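The per-pixel logarithmic loss and its batch average can be sketched as follows (plain Python over a flattened list of pixels; the epsilon clamp is an implementation detail assumed here, not stated in the text):

```python
import math

def binary_cross_entropy(probs, labels):
    """Per-pixel log loss, averaged over all pixels (the batch-mean loss)."""
    eps = 1e-7  # clamp probabilities to avoid log(0)
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(probs)

# Four pixels: predicted probability of "microorganism" vs. ground-truth mask.
loss = binary_cross_entropy([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

Confident predictions on the correct side of 0.5 yield a small loss; confident wrong predictions are penalized heavily, which is what drives the boundary refinement during training.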
Using the autograd function of the PyTorch automatic differentiation tool, the gradients of all learnable parameters (i.e., weights and biases) of the U-Net model are calculated from the average loss value of the whole batch of panoramic images (i.e., backpropagation). The network parameters of the U-Net model are then updated based on the calculated gradients using the Adam optimization algorithm. Specifically, the Adam optimization algorithm adaptively adjusts the step size (learning rate) and momentum of each parameter, reducing the loss while avoiding oscillation or becoming trapped in local minima.
Setting an appropriate number of iterations (e.g., 100), the three operations of forward propagation, loss calculation, and backpropagation are repeated until the set number of iterations is reached or the U-Net model converges, at which point training of the U-Net model is complete.
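The forward propagation / loss calculation / backpropagation cycle described above might look like the following sketch, with a single convolution standing in for the full U-Net (toy data; `torch` assumed available, and the batch, sizes, and learning rate are illustrative only):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy stand-in for the U-Net: one conv layer mapping a grey image to logits.
model = nn.Conv2d(1, 1, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)  # Adam updates
loss_fn = nn.BCEWithLogitsLoss()  # pixel-wise binary cross-entropy

images = torch.rand(4, 1, 16, 16)   # small batch of panoramic tiles
labels = (images > 0.5).float()     # hypothetical binary masks

losses = []
for _ in range(100):                # fixed iteration count, as in the text
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # forward propagation + loss
    loss.backward()                        # autograd computes all gradients
    optimizer.step()                       # Adam updates weights and biases
    losses.append(loss.item())
```

In practice one would also monitor a validation loss to decide convergence rather than relying on a fixed iteration count alone.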
In this step, the boundary of the microorganism region is optimized by the U-Net model using edge information and texture characteristics. Compared with existing segmentation methods, whose results are often not smooth or natural enough at object boundaries, this method significantly improves the precision and visual quality of boundary segmentation.
S6, counting microorganisms according to the optimized panoramic image, and outputting a microorganism counting result report.
This comprises the following steps:
S6.1, selecting appropriate microorganism classification criteria to distinguish different types of microorganisms, according to the optimized panoramic image and actual requirements. For example, microorganisms may be classified as spherical, rod-shaped, or other complex shapes according to their shape, or into different categories such as red and green according to their color.
S6.2, classifying each microorganism region according to the microorganism classification criteria. Specifically, the shape, color, and texture of each microorganism are compared against the classification criteria. For example, if shape is taken as the criterion, the shape type of a microorganism (e.g., rectangular, elliptical, irregular, etc.) can be judged by calculating the perimeter-to-area ratio of each microorganism region.
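One common way to turn the perimeter-to-area comparison into a shape decision is the circularity measure 4πA/P², which is 1.0 for a perfect circle and much lower for elongated regions. The thresholds below are illustrative assumptions, not values specified by this method:

```python
import math

def classify_shape(area, perimeter):
    """Classify a microorganism region by circularity 4*pi*A/P^2.

    Thresholds are illustrative only: a perfect circle scores 1.0,
    elongated rod-like regions score much lower.
    """
    circularity = 4 * math.pi * area / (perimeter ** 2)
    if circularity > 0.85:
        return "spherical"
    if circularity > 0.4:
        return "other"
    return "rod-shaped"

# A circle of radius 5: area = pi*25, perimeter = 2*pi*5 -> circularity 1.0
shape_a = classify_shape(math.pi * 25, 2 * math.pi * 5)
# A thin 20x1 rectangle: area 20, perimeter 42 -> circularity ~0.14
shape_b = classify_shape(20, 42)
```

Area and perimeter would in practice come from the segmented regions of the optimized panoramic image (e.g., via contour analysis).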
S6.3, initializing a counter for each microorganism type. For example, if microorganisms are classified into three types (spherical, rod-shaped, other shapes), three counters are initialized. Each microorganism region is then traversed one by one, and the corresponding counter is updated according to the classification result of that region. Specifically, the characteristics of each microorganism region are examined and the microorganism is assigned to the corresponding category according to the classification criteria; for example, if a microorganism region is identified as spherical, the spherical-category counter is incremented.
S6.4, after counting, the number of microorganisms of each class is summarized. Specifically, the counter value of each category is accumulated to obtain the final microorganism statistics; for example, the statistics may show 50 spherical microorganisms, 30 rod-shaped microorganisms, and 15 of other shapes. The final statistics are combined into text to form a microorganism count report (including the classification criteria, the classification results, and the total number of microorganisms).
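Steps S6.3 and S6.4 amount to per-category counting followed by summarization, which can be sketched as follows (the classification results are hypothetical, chosen to match the example figures above):

```python
from collections import Counter

# Hypothetical per-region classification results (output of step S6.2).
region_classes = ["spherical"] * 50 + ["rod-shaped"] * 30 + ["other"] * 15

counts = Counter(region_classes)   # one counter per microorganism category
total = sum(counts.values())       # total number of microorganisms

# Assemble the count report as plain text.
report = "Microorganism count report\n"
report += "\n".join(f"  {name}: {n}" for name, n in counts.items())
report += f"\n  total: {total}"
```

`Counter` handles the initialize-and-increment logic of S6.3 implicitly; an explicit dictionary of counters updated inside the traversal loop would be equivalent.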
The embodiment also provides a device for counting microorganisms using combined images, which comprises an image acquisition module for acquiring the microorganism microscopic image and performing preprocessing;
The feature extraction module is used for extracting core feature points of the preprocessed microscopic image by using the convolutional neural network model, and calculating the similarity between the core feature points to obtain a microorganism microscopic image containing a splicing position;
the image stitching module is used for stitching the microorganism microscopic images using an adaptive stitching algorithm, and for identifying and correcting errors during the stitching process to obtain a stitched panoramic image;
The image generation module inputs the spliced panoramic image into a graph cutting algorithm, an energy function of the panoramic image is constructed, a segmentation path with the lowest energy is found out by using a minimum cutting algorithm, and a panoramic image of a microorganism area is generated;
The image optimization module is used for executing morphological operation on the panoramic image of the microorganism area, optimizing the boundary of the microorganism area and outputting the optimized panoramic image;
And the report generation module is used for counting microorganisms according to the optimized panoramic image and outputting a microorganism counting result report.
The embodiment also provides a computer device applicable to the above method for counting microorganisms using combined images, comprising a memory and a processor, wherein the memory stores computer-executable instructions and the processor executes the computer-executable instructions to implement the method for counting microorganisms using combined images proposed in this embodiment.
The computer device may be a terminal comprising a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be realized through Wi-Fi, an operator network, NFC (near-field communication), or other technologies. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
The present embodiment also provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the method for counting microorganisms using combined images proposed in the above embodiments. The storage medium may be implemented by any type of volatile or non-volatile storage device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
In summary, the method extracts core feature points of the preprocessed microscopic image using a convolutional neural network model and calculates the similarity between the core feature points to obtain the microorganism microscopic images containing the splicing positions. The microorganism microscopic images are spliced using an adaptive splicing algorithm, and errors are identified and corrected during the splicing process to obtain the spliced panoramic image. The application of the FLANN and RANSAC algorithms greatly improves the matching precision and the stability of geometric transformations between adjacent images, while the feathering technique and color correction significantly improve the transition and smoothness of overlapped areas, making the spliced panoramic images more natural and consistent. The use of the breadth-first search algorithm and the minimum cut algorithm improves segmentation precision and avoids the image misalignment and inconsistent overlap that occur in conventional methods.
It should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present invention may be modified or substituted without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered in the scope of the claims of the present invention.