CN119601178A - A method and system for monitoring bleeding after thyroid surgery - Google Patents
A method and system for monitoring bleeding after thyroid surgery Download PDFInfo
- Publication number
- CN119601178A CN119601178A CN202510128452.6A CN202510128452A CN119601178A CN 119601178 A CN119601178 A CN 119601178A CN 202510128452 A CN202510128452 A CN 202510128452A CN 119601178 A CN119601178 A CN 119601178A
- Authority
- CN
- China
- Prior art keywords
- module
- monitoring
- result
- image
- patient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
- A61B5/02042—Determining blood loss or bleeding, e.g. during a surgical procedure
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
- A61B5/0205—Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/145—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
- A61B5/14532—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue for measuring glucose, e.g. by tissue impedance measurement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/145—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
- A61B5/14542—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue for measuring blood gases
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Public Health (AREA)
- General Physics & Mathematics (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Pathology (AREA)
- Surgery (AREA)
- Veterinary Medicine (AREA)
- Heart & Thoracic Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Animal Behavior & Ethology (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Physiology (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Mathematical Physics (AREA)
- Cardiology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Optics & Photonics (AREA)
- Primary Health Care (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Epidemiology (AREA)
- Signal Processing (AREA)
- Emergency Medicine (AREA)
- Fuzzy Systems (AREA)
- Biodiversity & Conservation Biology (AREA)
- Pulmonology (AREA)
- Psychiatry (AREA)
Abstract
The invention discloses a thyroid postoperative bleeding monitoring method and system, and relates to the technical field of medical monitoring. The method adopts the YOLO-Seg network model and combines SRCNN layers and Blind Dconv layers, can accurately identify the drainage tube region and the dressing region, improves the accuracy of region identification, improves the definition and deblurring effect of images, accurately detects the color, the category and the duty ratio information of drainage liquid, provides powerful support for judging whether a patient bleeds, comprehensively evaluates the whole health condition of the patient by comprehensively analyzing the physiological index data, accurately reflects the health state of the patient by the physiological index score, timely finds potential bleeding risk, provides a comprehensive, accurate and timely solution for monitoring the patient after thyroid operation, and effectively ensures the safety and rehabilitation of the patient.
Description
Technical Field
The invention relates to the technical field of medical monitoring, in particular to a thyroid postoperative bleeding monitoring method and system.
Background
Bleeding after thyroid surgery is one of the most serious complications of thyroid surgery, and can have serious effects on the health and subsequent recovery of the patient. Post-operative bleeding can lead to neck swelling and pain, affecting patient comfort and quality of life. A small amount of bleeding may form hematomas, causing neck swelling and pain. If the bleeding amount is large, the trachea can be pressed, so that dyspnea is caused, and even life is endangered. This acute airway obstruction requires urgent treatment, otherwise it may lead to suffocation death of the patient. In addition, bleeding may also cause recurrent laryngeal nerve damage, affecting movement of the vocal cords, leading to hoarseness or aphonia. Postoperative bleeding can increase the difficulty and time of recovery, and patients need longer hospitalization to receive more medical interventions such as hematoma clearance, tracheal intubation, etc.
Once the postoperative bleeding cannot be handled in time, the patient may suffocate, or even die, due to airway obstruction. Even if the patient survives, long-term sequelae may remain. Thus, there is a need for a monitoring method that closely monitors patients after thyroid surgery in order to timely discover and treat any bleeding signs.
Disclosure of Invention
The invention aims to provide a thyroid postoperative bleeding monitoring method and system so as to improve the technical problems.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
a method of monitoring bleeding after thyroid surgery, comprising:
s1, acquiring monitoring image data and physiological index data of a patient in real time based on a monitoring interval;
s2, constructing a region identification model and an image detection model;
S3, inputting the monitoring image data into a region identification model, and outputting to obtain a region identification result, wherein the region identification result comprises a drainage tube region image and a dressing region image;
S4, inputting the area identification result into an image detection model, and outputting to obtain a drainage tube detection result and a dressing detection result, wherein the dressing detection result is a blood trace area, and the drainage tube detection result comprises a drainage category result, a drainage tube color classification result and a duty ratio information difference;
s5, calculating and detecting the physiological index data to obtain a physiological index score;
S6, obtaining a monitoring result based on the drainage tube detection result, the dressing detection result and the physiological index score.
Further, the area identification model comprises an edge segmentation extraction module, SRCNN layers and Blind Dconv layers which are connected in series, wherein the edge segmentation extraction module comprises a YOLO-Seg network model, and the YOLO-Seg network model comprises a backhaul module, a Neck module, a Detection Head module and a Segmentation Head module which are connected in series;
the image detection model comprises a drainage tube region image detection module and a dressing region image detection module which are in parallel, wherein the drainage tube region image detection module comprises a color conversion sub-module, a first color segmentation sub-module, a color duty ratio calculation sub-module, a texture feature extraction sub-module and a liquid category classification sub-module, and the dressing region image detection module comprises a second color segmentation sub-module, a blood trace detection sub-module, a blood trace judgment sub-module and a blood trace area calculation sub-module which are connected in series.
Further, the training process of the area identification model is as follows:
s3-1, acquiring training monitoring image data and preprocessing the training monitoring image data to obtain preprocessed training monitoring image data;
s3-2, inputting the preprocessed training monitoring image data to the Backbone module, and outputting to obtain multi-scale monitoring image features;
s3-3, inputting the multi-scale monitoring image features to the Neck module, and outputting the fused multi-scale monitoring image features;
S3-4, inputting the fused multi-scale monitoring image characteristics to the Detection Head module, and outputting to obtain a corresponding target Detection frame and a class probability value thereof;
s3-5, inputting the target detection frame and the class probability value thereof to the Segmentation Head module, and outputting to obtain an initial training area recognition result;
S3-6, inputting the initial training area recognition result into the SRCNN layers, and outputting the processed initial training area recognition result;
S3-7, inputting the processed initial training area recognition result to the Blind Dconv layers, and outputting the processed initial training area recognition result to obtain a training area recognition result, wherein the training area recognition result comprises a drainage tube area training image and a dressing area training image;
S3-8, calculating a first loss function based on the training area recognition result;
s3-9, adjusting the weight parameters of the region identification model based on the first loss function.
Further, the color conversion sub-module is used for performing color space conversion on the drainage tube region image to obtain a converted drainage tube region image;
The first color segmentation submodule is used for setting a first color threshold value and extracting a color region, and classifying the converted drainage tube region image by utilizing a color classification algorithm to obtain a corresponding drainage tube color classification result, wherein the drainage tube color classification result is dark red, bright red, light red or other colors;
the color duty ratio calculation sub-module is used for calculating the duty ratio information of the color region in the converted drainage tube region image, acquiring the duty ratio information corresponding to the previous monitoring, and calculating the difference value of the two duty ratio information to obtain the corresponding duty ratio information difference;
the texture feature extraction submodule is used for extracting texture feature data of the converted drainage tube region image by using a texture analysis algorithm;
The liquid category classification submodule is used for carrying out feature extraction on the texture feature data by utilizing a neural network, judging drainage liquid categories and obtaining drainage category results, wherein the drainage category results are slurry liquid or non-slurry liquid;
the drainage tube detection result comprises the drainage category result, the drainage tube color classification result and the duty ratio information difference.
Further, the second color segmentation submodule is used for carrying out color conversion on the dressing region image, setting a second color threshold value, and segmenting the converted dressing region image by utilizing color segmentation to generate a corresponding image mask;
The blood trace detection submodule is used for carrying out feature extraction on the image mask by using a deep learning model to generate a blood trace detection result, wherein the blood trace detection result is a dressing bleeding area;
the blood trace judging submodule is used for judging whether a patient bleeds based on the blood trace detection result to obtain a corresponding judgment result, wherein the judgment result is that the patient bleeds or the patient does not bleed;
The blood trace area calculation sub-module is used for calculating the blood trace area based on the blood trace detection result and the judgment result.
Further, the physiological index data includes temperature, blood glucose, blood pressure, blood oxygen saturation, heart rate, respiratory rate, and sound data of the patient;
the step S5 comprises the following steps:
S5-1, acquiring the temperature and the sound data through a temperature sensor and a sound acquisition device;
S5-2, acquiring the blood sugar, the blood pressure, the blood oxygen saturation, the heart rate and the respiratory frequency by using a monitoring device;
S5-3, analyzing the sound data by using a signal processing algorithm to obtain a corresponding sound analysis result;
S5-4, calculating the physiological index score based on the temperature, the sound analysis result, the blood sugar, the blood pressure, the blood oxygen saturation, the heart rate and the respiratory frequency.
Further, the formula corresponding to the physiological index score is:
;
wherein, 、Respectively represents the temperature coefficient and the temperature fraction,、Respectively represent the sound coefficient and the sound score,、Respectively represents the blood sugar coefficient and the blood sugar fraction,、Respectively represents the blood pressure coefficient and the blood pressure fraction,、Respectively represents the blood oxygen saturation coefficient and the blood oxygen saturation fraction,、Respectively represent the heart rate coefficient and the heart rate score,、Respectively represents the respiratory rate coefficient and the respiratory rate fraction,The physiological index score is represented by a score,Represents a natural constant of the natural product,Representing the bias term.
Further, the step S6 includes the steps of:
S6-1, judging whether the physiological index score is lower than a physiological threshold value, the drainage type result is serous liquid, the drainage color classification result is any one of dark red and bright red, if yes, entering S6-2, otherwise, monitoring that the patient has a great deal of postoperative bleeding, and urgently treating the patient;
S6-2, judging whether the conditions that the ratio information difference is larger than 0 and the blood trace area is larger than 0 are met, if so, monitoring the postoperative small amount of bleeding of the patient, and treating the postoperative small amount of bleeding of the patient, otherwise, monitoring the postoperative stable operation of the patient, and observing the postoperative small amount of bleeding of the patient.
The thyroid postoperative bleeding monitoring system comprises an image acquisition module, a region identification module, an image detection module, a physiological index data acquisition module, a physiological index score calculation module and a monitoring module, wherein:
The image acquisition module is used for acquiring monitoring image data and physiological index data of a patient in real time based on the monitoring interval;
the area identification module is used for inputting the monitoring image data into the area identification model and outputting and obtaining an area identification result;
The image detection module is used for inputting the region identification result into the image detection model and outputting the drainage tube detection result and the dressing detection result;
the physiological index data acquisition module is used for acquiring physiological index data of a patient;
the physiological index score calculation module is used for analyzing and calculating the physiological index data to obtain a physiological index score;
And the monitoring module is used for obtaining a monitoring result based on the drainage tube detection result, the dressing detection result and the physiological index score.
Further, the physiological index data acquisition module comprises a sound acquisition device, a temperature sensor, a monitoring device and an index preprocessing sub-module, the physiological index score calculation module comprises a sound analysis sub-module and a physiological index score calculation sub-module, and the thyroid postoperative bleeding monitoring system further comprises a warning module, wherein:
the warning module is used for sending a corresponding warning signal to medical staff according to the monitoring result;
The sound acquisition device is used for acquiring sound data of a patient;
a temperature sensor for acquiring a temperature of a patient;
The monitoring device is used for acquiring blood sugar, blood pressure, blood oxygen saturation, heart rate and respiratory frequency;
The index preprocessing sub-module is used for denoising and data interpolation of temperature, sound data, blood sugar, blood pressure, blood oxygen saturation, heart rate and respiratory frequency;
The sound analysis sub-module is used for analyzing the preprocessed sound data by utilizing a signal processing algorithm to obtain a corresponding sound analysis result;
The physiological index score calculation sub-module is used for calculating the physiological index score based on the preprocessed temperature, the sound analysis result, the preprocessed blood sugar, the preprocessed blood pressure, the preprocessed blood oxygen saturation, the preprocessed heart rate and the preprocessed respiratory frequency.
The beneficial effects of the invention are as follows:
The method adopts a YOLO-Seg network model and combines SRCNN layers and Blind Dconv layers, can identify drainage tube areas and dressing areas with high precision, improves the accuracy of area identification through multi-scale feature extraction and fusion, improves the definition and deblurring effect of images, ensures accurate identification under different illumination conditions, extracts the color features and texture features of the drainage tube areas and the dressing areas, provides powerful support for accurately detecting the color, the category and the duty ratio information of drainage liquid, and provides a potential bleeding risk by comprehensively analyzing physiological index data, more comprehensively evaluating the overall health condition of patients, accurately reflecting the health state of the patients through the physiological index score and timely finding the potential bleeding risk;
The system utilizes a deep learning model and a signal processing algorithm to carry out intelligent analysis on image and sound data, improves the accuracy and reliability of monitoring, can send warning signals of different levels according to the monitoring result, ensures the efficient operation of the system by mutual cooperation among the modules, has a quick response mechanism, and can ensure that medical staff can take measures at the first time when a patient has bleeding, thereby avoiding serious consequences.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method according to an embodiment of the invention;
FIG. 2 is a block diagram of a region identification model in an embodiment of the present invention;
FIG. 3 is a block diagram of an image detection model in an embodiment of the present invention;
FIG. 4 is a system configuration diagram in an embodiment of the present invention;
Fig. 5 is a block diagram of a physiological index data module and a physiological index score calculating module according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
Referring to fig. 1, the method for monitoring bleeding after thyroid operation provided in this embodiment includes:
S1, acquiring monitoring image data and physiological index data of a patient in real time based on a monitoring interval, wherein the physiological index data comprise temperature, blood sugar, blood pressure, blood oxygen saturation, heart rate, respiratory frequency and sound data of the patient, monitoring the patient for 24 hours or 72 hours, and setting the monitoring interval to be half an hour.
S2, constructing a region identification model and an image detection model;
S3, inputting the monitoring image data into a region identification model, and outputting to obtain a region identification result, wherein the region identification result comprises a drainage tube region image and a dressing region image;
as shown in FIG. 2, the area identification model comprises an edge segmentation extraction module, SRCNN layers and Blind Dconv layers which are connected in series, wherein the edge segmentation extraction module comprises a YOLO-Seg network model, and the YOLO-Seg network model comprises a backhaul module, a Neck module, a Detection Head module and a Segmentation Head module which are connected in series;
The method selects the YOLO-Seg network model, SRCNN layers and Blind Dconv layers for region identification, has the advantages of high precision, image definition improvement, deblurring treatment and the like, can adapt to different image conditions, has stronger robustness and practicability, simplifies the steps of region identification and image enhancement through an end-to-end processing flow, and improves the efficiency.
The training process of the region identification model comprises the following steps:
s3-1, acquiring training monitoring image data and preprocessing to obtain preprocessed training monitoring image data, wherein preprocessing comprises denoising and contrast enhancement.
S3-2, inputting the preprocessed training monitoring image data to the Backbone module, and outputting to obtain multi-scale monitoring image features;
s3-3, inputting the multi-scale monitoring image features to the Neck module, and outputting the fused multi-scale monitoring image features;
S3-4, inputting the fused multi-scale monitoring image characteristics to the Detection Head module, and outputting to obtain a corresponding target Detection frame and a class probability value thereof;
s3-5, inputting the target detection frame and the class probability value thereof to the Segmentation Head module, and outputting to obtain an initial training area recognition result;
s3-6, inputting the initial training area recognition result into the SRCNN layers, further improving the definition of the image, and outputting the processed initial training area recognition result;
S3-7, inputting the processed initial training area recognition result into the Blind Dconv layers, performing deblurring treatment, improving image quality, and outputting to obtain a training area recognition result, wherein the training area recognition result comprises a drainage tube area training image and a dressing area training image;
S3-8, calculating a first loss function based on the training area recognition result;
First loss function The corresponding formula is:
;
;
;
;
wherein, Represent the firstThe number of loss coefficients is chosen such that,Representing the sum function,Represents the YOLO-Seg loss function,Which represents the factor of the illumination condition,、Respectively represent BCE loss weights and Dice loss weights,、Respectively represent a BCE loss function and a Dice loss weight function,The corresponding loss function (mean square error loss function) of the SRCNN layers is represented,、Respectively represent the first and second training area recognition results after processingThe pixel value of each label and the processed initial training area recognition resultThe value of the pixel is determined by the pixel value,A total number of pixels representing the processed initial training area recognition result,The corresponding loss function is represented Blind Dconv as,、The weight coefficient is represented by a number of weight coefficients,、The image reconstruction loss (mean square error loss function) and the blur kernel regularization loss function (L1 regularization) are represented respectively. Wherein the illumination condition factorThe value is 1 during daytime and 2 during night.
The first loss function is combined with the damage function of each module of the area identification model, and the illumination condition factors are introduced, so that the corresponding weight parameters are dynamically adjusted, the identification accuracy and the robustness of the area identification model can be further improved, and different illumination conditions are adapted.
S3-9, adjusting the weight parameters of the region identification model based on the first loss function.
S4, inputting the area identification result into an image detection model, and outputting to obtain a drainage tube detection result and a dressing detection result, wherein the dressing detection result is a blood trace area, and the drainage tube detection result comprises a drainage category result, a drainage tube color classification result and a duty ratio information difference;
As shown in FIG. 3, the image detection model comprises a drainage tube region image detection module and a dressing region image detection module which are in parallel, wherein the drainage tube region image detection module comprises a color conversion sub-module, a first color segmentation sub-module, a color duty ratio calculation sub-module, a texture feature extraction sub-module and a liquid category classification sub-module.
The color conversion sub-module is used for performing color space conversion on the drainage tube region image, converting the drainage tube region image into an HSV color space and obtaining a converted drainage tube region image;
The first color segmentation sub-module is used for setting corresponding first color thresholds according to dark red, bright red and light red to extract color areas, classifying the converted drainage tube area images by utilizing a color classification algorithm to obtain corresponding drainage tube color classification results, wherein the drainage tube color classification results are dark red, bright red, light red or other colors, and the color classification algorithm can adopt a K-Nearest Neighbors algorithm or a KNN network. For example, the first color threshold may be set to:
The dark red is H (0-10), S (100-255), V (100-255), the bright red is H (0-10), S (150-255), V (150-255), the light red is H (0-10), S (50-150) and V (150-255).
The color duty ratio calculation sub-module is used for calculating the duty ratio information of the color region in the converted drainage tube region image, acquiring duty ratio information corresponding to the last monitoring, and calculating the difference value of the two duty ratio information, namely the duty ratio information of the current monitoring-the duty ratio information of the last monitoring, so as to obtain the corresponding duty ratio information difference;
The texture feature extraction sub-module is used for extracting texture feature data of the converted drainage tube region image by using a texture analysis algorithm, and the texture analysis algorithm can select a gray level co-occurrence matrix (GLCM) to analyze texture features of the liquid because the texture of the slurry liquid is generally coarser than that of the common liquid.
The liquid category classification submodule is used for extracting characteristics of the liquid by utilizing a neural network, judging the drainage liquid category and obtaining a drainage category result, wherein the drainage category result is slurry liquid or non-slurry liquid;
The neural network can be a LeNet-5 convolutional neural network, and the corresponding process is as follows:
The texture feature data is processed through a convolution layer, filtering operation is carried out through a plurality of convolution kernels to generate a feature map, the feature map is subjected to downsampling through a pooling layer to reduce data dimension and retain important features, the features of higher layers are further extracted through alternating processing of a plurality of convolution layers and pooling layers, the extracted feature map is flattened into one-dimensional vectors and is input into a full-connection layer to be subjected to feature combination and classification, and the probability of each category is output through a Softmax function to complete classification tasks.
The LeNet-5 convolutional neural network has a simple structure, is suitable for small-scale image classification tasks, can automatically extract the characteristics of drainage tube liquid, has the advantages of strong robustness and high-efficiency classification, can efficiently complete classification tasks of drainage liquid types, and provides support for judging whether a patient bleeds.
The drainage tube detection result comprises the drainage category result, the drainage tube color classification result and the duty ratio information difference.
As shown in fig. 3, the dressing region image detection module includes a second color segmentation sub-module, a blood trace detection sub-module, a blood trace judgment sub-module, and a blood trace area calculation sub-module connected in series.
The second color segmentation submodule is used for carrying out color conversion on the dressing region image, setting a second color threshold value, and segmenting the converted dressing region image by utilizing color segmentation to generate a corresponding image mask;
The blood trace detection submodule is used for carrying out feature extraction on the image mask by using a deep learning model to generate a blood trace detection result, wherein the blood trace detection result is a dressing bleeding area;
the blood trace judging submodule is used for judging whether a patient bleeds based on the blood trace detection result to obtain a corresponding judgment result, wherein the judgment result is that the patient bleeds or the patient does not bleed;
the blood trace area calculation sub-module is used for calculating the blood trace area based on the blood trace detection result and the judgment result. When the patient is not bleeding as a result of the judgment, the blood trace area is 0.
S5, calculating and detecting the physiological index data to obtain a physiological index score;
the step S5 comprises the following steps:
S5-1, acquiring the temperature and the sound data through a temperature sensor and a sound acquisition device;
S5-2, acquiring the blood sugar, the blood pressure, the blood oxygen saturation, the heart rate and the respiratory frequency by using a monitoring device;
S5-3, analyzing the sound data by using a signal processing algorithm to obtain a corresponding sound analysis result, wherein the signal processing algorithm can adopt a multi-layer perceptron or a convolutional neural network, and the corresponding process is as follows:
The method comprises the steps of extracting characteristics of sound data to obtain corresponding power spectral density, inputting the power spectral density into a multi-layer perceptron or a convolutional neural network, classifying respiratory sound to obtain sound analysis results, wherein the sound analysis results are dyspnea, shortness of breath or stable breath.
S5-4, calculating the physiological index score based on the temperature, the sound analysis result, the blood sugar, the blood pressure, the blood oxygen saturation, the heart rate and the respiratory frequency.
Based on the sound analysis result, obtaining corresponding sound scores, namely:
;
Fractional temperature Fraction of blood sugarFractional blood pressureFractional oxygen saturationHeart rate scoreRespiratory rate fractionThe corresponding formula is:
;
;
;
;
;
;
wherein, 、Respectively represents the temperature and the average temperature of the patient,、Respectively represents blood sugar and average blood sugar of patients,、Respectively represents the systolic blood pressure and the average systolic blood pressure of the patient,、Respectively represents the diastolic blood pressure and the average diastolic blood pressure of the patient,、Respectively represents the blood oxygen saturation and the average blood oxygen saturation of the patient,、Respectively represent the heart rate and the average heart rate of the patient,、The respiratory rate and the average respiratory rate of the patient are respectively represented.
Thus, the formula corresponding to the physiological index score is:
;
wherein, The temperature coefficient is represented by a temperature coefficient,The sound coefficient is represented by a number of coefficients,Represents the blood glucose coefficient and,Represents the blood pressure coefficient and,Represents the blood oxygen saturation coefficient and the blood oxygen saturation coefficient,Representing the heart rate coefficient,Representing the coefficient of the breathing frequency,The physiological index score is represented by a score,Represents a natural constant of the natural product,Representing the bias term.
S6, obtaining a monitoring result based on the drainage tube detection result, the dressing detection result and the physiological index score.
The step S6 comprises the following steps:
S6-1, judging whether the physiological index score is lower than a physiological threshold, the drainage type result is serous liquid, the drainage color classification result is any one of dark red and bright red, if yes, entering S6-2, otherwise, monitoring that the patient is subjected to postoperative massive bleeding, and urgent need for treatment is met, wherein the physiological threshold can be set to 0.3.
S6-2, judging whether the conditions that the ratio information difference is larger than 0 and the blood trace area is larger than 0 are met, if so, monitoring the postoperative small amount of bleeding of the patient, and treating the postoperative small amount of bleeding of the patient, otherwise, monitoring the postoperative stable operation of the patient, and observing the postoperative small amount of bleeding of the patient.
In conclusion, the method adopts the YOLO-Seg network model to combine SRCNN layers and Blind Dconv layers, can identify the drainage tube region and the dressing region with high precision, improves the accuracy of region identification through multi-scale feature extraction and fusion, improves the definition and deblurring effect of images, ensures accurate identification under different illumination conditions, extracts the accurate detection of the color features and texture features of the drainage tube region and the dressing region on the color, category and duty ratio information of drainage liquid, provides powerful support for judging whether a patient bleeds, and more comprehensively evaluates the overall health condition of the patient through comprehensive analysis of physiological index data, accurately reflects the health state of the patient through the physiological index score and timely finds potential bleeding risks.
As shown in FIG. 4, the thyroid postoperative bleeding monitoring system comprises an image acquisition module, a region identification module, an image detection module, a physiological index data acquisition module, a physiological index score calculation module and a monitoring module, wherein:
The image acquisition module is used for acquiring monitoring image data and physiological index data of a patient in real time based on the monitoring interval;
the area identification module is used for inputting the monitoring image data into the area identification model and outputting and obtaining an area identification result;
The image detection module is used for inputting the region identification result into the image detection model and outputting the drainage tube detection result and the dressing detection result;
the physiological index data acquisition module is used for acquiring physiological index data of a patient;
the physiological index score calculation module is used for analyzing and calculating the physiological index data to obtain a physiological index score;
As shown in FIG. 5, the physiological index data acquisition module comprises a sound acquisition device, a temperature sensor and a monitoring device, the physiological index score calculation module comprises a sound analysis sub-module and a physiological index score calculation sub-module, and the thyroid postoperative bleeding monitoring system further comprises a warning module, wherein:
and the monitoring module is used for obtaining a monitoring result based on the drainage tube detection result, the dressing detection result and the physiological index score. The physiological index data acquisition module comprises a sound acquisition device, a temperature sensor, a monitoring device and an index preprocessing sub-module, the physiological index score calculation module comprises a sound analysis sub-module and a physiological index score calculation sub-module, and the thyroid postoperative bleeding monitoring system further comprises a warning module, wherein:
the warning module is used for sending a corresponding warning signal to medical staff according to the monitoring result;
The sound acquisition device is used for acquiring sound data of a patient;
a temperature sensor for acquiring a temperature of a patient;
The monitoring device is used for acquiring blood sugar, blood pressure, blood oxygen saturation, heart rate and respiratory frequency;
The index preprocessing sub-module is used for denoising and data interpolation of temperature, sound data, blood sugar, blood pressure, blood oxygen saturation, heart rate and respiratory frequency;
The sound analysis sub-module is used for analyzing the preprocessed sound data by utilizing a signal processing algorithm to obtain a corresponding sound analysis result;
The physiological index score calculation sub-module is used for calculating the physiological index score based on the preprocessed temperature, the sound analysis result, the preprocessed blood sugar, the preprocessed blood pressure, the preprocessed blood oxygen saturation, the preprocessed heart rate and the preprocessed respiratory frequency.
In the monitoring module, when the monitoring result is massive hemorrhage after operation of the patient, the treatment is needed or the patient is little hemorrhage after operation, the first-level alarm instruction is transmitted to the warning module, the patient information and the operation information are transmitted to a computer or a mobile phone of medical staff, and a warning is sent out. When the monitoring result is that the patient is stable after operation and needs to be observed, an observation instruction is transmitted to an alarm module, and the alarm device integrates the physiological index score, the drainage tube detection result, the dressing detection result and the patient information thereof into graphics and texts, sends the graphics and texts to a computer or a mobile phone of medical staff and reminds the medical staff to observe the patient. And an visual interface is provided, so that medical staff can easily check monitoring images, physiological index data and monitoring results of patients. The display mode of the graph and text enables medical staff to quickly understand and process the condition of patients, and improves working efficiency.
In conclusion, the system utilizes a deep learning model and a signal processing algorithm to carry out intelligent analysis on image and sound data, improves monitoring accuracy and reliability, can send warning signals of different levels according to monitoring results, mutually cooperates among modules, ensures efficient operation of the system, has a quick response mechanism, and can ensure that medical staff can take measures at the first time when bleeding occurs to a patient, and serious consequences are avoided.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for monitoring bleeding after thyroid surgery, comprising:
s1, acquiring monitoring image data and physiological index data of a patient in real time based on a monitoring interval;
s2, constructing a region identification model and an image detection model;
S3, inputting the monitoring image data into a region identification model, and outputting to obtain a region identification result, wherein the region identification result comprises a drainage tube region image and a dressing region image;
S4, inputting the area identification result into an image detection model, and outputting to obtain a drainage tube detection result and a dressing detection result, wherein the dressing detection result is a blood trace area, and the drainage tube detection result comprises a drainage category result, a drainage tube color classification result and a duty ratio information difference;
s5, calculating and detecting the physiological index data to obtain a physiological index score;
S6, obtaining a monitoring result based on the drainage tube detection result, the dressing detection result and the physiological index score.
2. The thyroid postoperative bleeding monitoring method according to claim 1, wherein the region identification model comprises an edge segmentation extraction module, SRCNN layers and Blind Dconv layers connected in series, wherein the edge segmentation extraction module comprises a YOLO-Seg network model, and wherein the YOLO-Seg network model comprises a back bone module, a Neck module, a Detection Head module and a Segmentation Head module connected in series;
the image detection model comprises a drainage tube region image detection module and a dressing region image detection module which are in parallel, wherein the drainage tube region image detection module comprises a color conversion sub-module, a first color segmentation sub-module, a color duty ratio calculation sub-module, a texture feature extraction sub-module and a liquid category classification sub-module, and the dressing region image detection module comprises a second color segmentation sub-module, a blood trace detection sub-module, a blood trace judgment sub-module and a blood trace area calculation sub-module which are connected in series.
3. The method for monitoring bleeding after thyroid surgery according to claim 2, wherein the training process of the area identification model is as follows:
s3-1, acquiring training monitoring image data and preprocessing the training monitoring image data to obtain preprocessed training monitoring image data;
s3-2, inputting the preprocessed training monitoring image data to the Backbone module, and outputting to obtain multi-scale monitoring image features;
s3-3, inputting the multi-scale monitoring image features to the Neck module, and outputting the fused multi-scale monitoring image features;
S3-4, inputting the fused multi-scale monitoring image characteristics to the Detection Head module, and outputting to obtain a corresponding target Detection frame and a class probability value thereof;
s3-5, inputting the target detection frame and the class probability value thereof to the Segmentation Head module, and outputting to obtain an initial training area recognition result;
S3-6, inputting the initial training area recognition result into the SRCNN layers, and outputting the processed initial training area recognition result;
S3-7, inputting the processed initial training area recognition result to the Blind Dconv layers, and outputting the processed initial training area recognition result to obtain a training area recognition result, wherein the training area recognition result comprises a drainage tube area training image and a dressing area training image;
S3-8, calculating a first loss function based on the training area recognition result;
s3-9, adjusting the weight parameters of the region identification model based on the first loss function.
4. The thyroid postoperative bleeding monitoring method according to claim 2, wherein the color conversion sub-module is configured to perform color space conversion on the drainage tube region image to obtain a converted drainage tube region image;
The first color segmentation submodule is used for setting a first color threshold value and extracting a color region, and classifying the converted drainage tube region image by utilizing a color classification algorithm to obtain a corresponding drainage tube color classification result, wherein the drainage tube color classification result is dark red, bright red, light red or other colors;
the color duty ratio calculation sub-module is used for calculating the duty ratio information of the color region in the converted drainage tube region image, acquiring the duty ratio information corresponding to the previous monitoring, and calculating the difference value of the two duty ratio information to obtain the corresponding duty ratio information difference;
the texture feature extraction submodule is used for extracting texture feature data of the converted drainage tube region image by using a texture analysis algorithm;
The liquid category classification submodule is used for carrying out feature extraction on the texture feature data by utilizing a neural network, judging drainage liquid categories and obtaining drainage category results, wherein the drainage category results are slurry liquid or non-slurry liquid;
the drainage tube detection result comprises the drainage category result, the drainage tube color classification result and the duty ratio information difference.
5. The thyroid postoperative bleeding monitoring method according to claim 2, wherein the second color segmentation submodule is used for performing color conversion on the dressing region image, setting a second color threshold value, and segmenting the converted dressing region image by utilizing color segmentation to generate a corresponding image mask;
The blood trace detection submodule is used for carrying out feature extraction on the image mask by using a deep learning model to generate a blood trace detection result, wherein the blood trace detection result is a dressing bleeding area;
the blood trace judging submodule is used for judging whether a patient bleeds based on the blood trace detection result to obtain a corresponding judgment result, wherein the judgment result is that the patient bleeds or the patient does not bleed;
The blood trace area calculation sub-module is used for calculating the blood trace area based on the blood trace detection result and the judgment result.
6. The method of claim 1, wherein the physiological index data comprises patient temperature, blood glucose, blood pressure, blood oxygen saturation, heart rate, respiratory rate, and sound data;
the step S5 comprises the following steps:
S5-1, acquiring the temperature and the sound data through a temperature sensor and a sound acquisition device;
S5-2, acquiring the blood sugar, the blood pressure, the blood oxygen saturation, the heart rate and the respiratory frequency by using a monitoring device;
S5-3, analyzing the sound data by using a signal processing algorithm to obtain a corresponding sound analysis result;
S5-4, calculating the physiological index score based on the temperature, the sound analysis result, the blood sugar, the blood pressure, the blood oxygen saturation, the heart rate and the respiratory frequency.
7. The method for monitoring bleeding after thyroid surgery according to claim 6, wherein the formula corresponding to the physiological index score is:
;
wherein, 、Respectively represents the temperature coefficient and the temperature fraction,、Respectively represent the sound coefficient and the sound score,、Respectively represents the blood sugar coefficient and the blood sugar fraction,、Respectively represents the blood pressure coefficient and the blood pressure fraction,、Respectively represents the blood oxygen saturation coefficient and the blood oxygen saturation fraction,、Respectively represent the heart rate coefficient and the heart rate score,、Respectively represents the respiratory rate coefficient and the respiratory rate fraction,The physiological index score is represented by a score,Represents a natural constant of the natural product,Representing the bias term.
8. A method of monitoring bleeding after thyroid surgery according to claim 4 or 5, wherein S6 comprises the steps of:
S6-1, judging whether the physiological index score is lower than a physiological threshold value, the drainage type result is serous liquid, the drainage color classification result is any one of dark red and bright red, if yes, entering S6-2, otherwise, monitoring that the patient has a great deal of postoperative bleeding, and urgently treating the patient;
S6-2, judging whether the conditions that the ratio information difference is larger than 0 and the blood trace area is larger than 0 are met, if so, monitoring the postoperative small amount of bleeding of the patient, and treating the postoperative small amount of bleeding of the patient, otherwise, monitoring the postoperative stable operation of the patient, and observing the postoperative small amount of bleeding of the patient.
9. A thyroid postoperative bleeding monitoring system for implementing the thyroid postoperative bleeding monitoring method according to any one of claims 1 to 8, comprising an image acquisition module, a region identification module, an image detection module, a physiological index data acquisition module, a physiological index score calculation module, and a monitoring module, wherein:
The image acquisition module is used for acquiring monitoring image data and physiological index data of a patient in real time based on the monitoring interval;
the area identification module is used for inputting the monitoring image data into the area identification model and outputting and obtaining an area identification result;
The image detection module is used for inputting the region identification result into the image detection model and outputting the drainage tube detection result and the dressing detection result;
the physiological index data acquisition module is used for acquiring physiological index data of a patient;
the physiological index score calculation module is used for analyzing and calculating the physiological index data to obtain a physiological index score;
And the monitoring module is used for obtaining a monitoring result based on the drainage tube detection result, the dressing detection result and the physiological index score.
10. The thyroid postoperative bleeding monitoring system according to claim 9, wherein the physiological index data acquisition module comprises a sound acquisition device, a temperature sensor, a monitoring device and an index preprocessing sub-module;
the physiological index score calculation module comprises a sound analysis sub-module and a physiological index score calculation sub-module;
the thyroid postoperative bleeding monitoring system further comprises a warning module, wherein:
the warning module is used for sending a corresponding warning signal to medical staff according to the monitoring result;
The sound acquisition device is used for acquiring sound data of a patient;
a temperature sensor for acquiring a temperature of a patient;
The monitoring device is used for acquiring blood sugar, blood pressure, blood oxygen saturation, heart rate and respiratory frequency;
The index preprocessing sub-module is used for denoising and data interpolation of temperature, sound data, blood sugar, blood pressure, blood oxygen saturation, heart rate and respiratory frequency;
The sound analysis sub-module is used for analyzing the preprocessed sound data by utilizing a signal processing algorithm to obtain a corresponding sound analysis result;
The physiological index score calculation sub-module is used for calculating the physiological index score based on the preprocessed temperature, the sound analysis result, the preprocessed blood sugar, the preprocessed blood pressure, the preprocessed blood oxygen saturation, the preprocessed heart rate and the preprocessed respiratory frequency.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510128452.6A CN119601178B (en) | 2025-02-05 | 2025-02-05 | Thyroid postoperative bleeding monitoring method and system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510128452.6A CN119601178B (en) | 2025-02-05 | 2025-02-05 | Thyroid postoperative bleeding monitoring method and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119601178A true CN119601178A (en) | 2025-03-11 |
| CN119601178B CN119601178B (en) | 2025-04-29 |
Family
ID=94844895
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510128452.6A Active CN119601178B (en) | 2025-02-05 | 2025-02-05 | Thyroid postoperative bleeding monitoring method and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119601178B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113178258A (en) * | 2021-04-28 | 2021-07-27 | 青岛百洋智能科技股份有限公司 | Preoperative risk assessment method and system for surgical operation |
| CN114549517A (en) * | 2022-03-03 | 2022-05-27 | 北京大学人民医院 | Intraoperative hemorrhage identification and metering method and system based on computer vision |
| CN114886405A (en) * | 2022-05-13 | 2022-08-12 | 浙江大学 | Intelligent postoperative bleeding monitoring system |
| US20230132247A1 (en) * | 2021-10-23 | 2023-04-27 | Benjamin Steven Hopkins | Apparatus and methods for machine learning to identify and diagnose intracranial hemorrhages |
-
2025
- 2025-02-05 CN CN202510128452.6A patent/CN119601178B/en active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113178258A (en) * | 2021-04-28 | 2021-07-27 | 青岛百洋智能科技股份有限公司 | Preoperative risk assessment method and system for surgical operation |
| US20230132247A1 (en) * | 2021-10-23 | 2023-04-27 | Benjamin Steven Hopkins | Apparatus and methods for machine learning to identify and diagnose intracranial hemorrhages |
| CN114549517A (en) * | 2022-03-03 | 2022-05-27 | 北京大学人民医院 | Intraoperative hemorrhage identification and metering method and system based on computer vision |
| CN114886405A (en) * | 2022-05-13 | 2022-08-12 | 浙江大学 | Intelligent postoperative bleeding monitoring system |
Non-Patent Citations (1)
| Title |
|---|
| 金纯;刘乐;陈恩东;蔡业丰;: "甲状腺手术术后出血因素分析", 中国农村卫生事业管理, no. 05, 20 May 2015 (2015-05-20), pages 664 - 666 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN119601178B (en) | 2025-04-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109009102B (en) | Electroencephalogram deep learning-based auxiliary diagnosis method and system | |
| CN119548095B (en) | A sleep stage detection method based on multimodal and visual transformation network | |
| CN114708258A (en) | Eye fundus image detection method and system based on dynamic weighted attention mechanism | |
| US12293576B2 (en) | Determining type of to-be-classified image based on signal waveform graph | |
| CN111161287A (en) | Retinal vessel segmentation method based on symmetric bidirectional cascade network deep learning | |
| KR102430946B1 (en) | System and method for diagnosing small bowel preparation scale | |
| CN115910338A (en) | A method and device for assessing human health status based on multimodal biometrics | |
| CN117542474A (en) | Remote nursing monitoring system and method based on big data | |
| CN116681923A (en) | Automatic ophthalmic disease classification method and system based on artificial intelligence | |
| CN115100468A (en) | Vision transform model-based plant leaf disease detection and classification method | |
| Zijian et al. | AFFD-Net: A dual-decoder network based on attention-enhancing and feature fusion for retinal vessel segmentation | |
| CN119601178B (en) | Thyroid postoperative bleeding monitoring method and system | |
| CN113647920A (en) | Method and device for reading vital sign data in monitoring equipment | |
| CN115909400B (en) | A method for identifying mobile phone usage behavior in low-resolution surveillance scenarios | |
| CN111798408A (en) | An endoscope interference image detection and classification system and method | |
| CN113822389B (en) | Digestive tract disease classification system based on endoscope picture | |
| CN113569655A (en) | Identification system for pink eye patients based on eye color monitoring | |
| CN120148115A (en) | Lightweight pedestrian fall detection method and system for inspection robots | |
| Wang et al. | PCRTAM-Net: A novel pre-activated convolution residual and triple attention mechanism network for retinal vessel segmentation | |
| CN110516611B (en) | Autism detection system and autism detection device | |
| KR20200005853A (en) | Method and System for People Count based on Deep Learning | |
| CN119478412A (en) | A coronary artery segmentation method combining MambaASPP and combined loss function | |
| CN112419288A (en) | Unmanned vegetable greenhouse planting method based on computer vision | |
| CN118675219A (en) | Method and system for detecting diabetic retinopathy lesion based on fundus image | |
| CN118657757B (en) | A non-invasive method for detecting skin pressure injuries |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |