CN119006701B - Data processing method for nerve block anesthesia ultrasonic guidance - Google Patents
- Publication number: CN119006701B (application CN202410978790.4A)
- Authority: CN (China)
- Prior art keywords: image, data, algorithm, dimensional, real
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61B8/5215 — Diagnosis using ultrasonic waves involving processing of medical diagnostic data
- A61B8/08 — Clinical applications
- A61B8/0891 — Clinical applications for diagnosis of blood vessels
- G06N20/00 — Machine learning
- G06N3/0442 — Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
- G06N3/045 — Combinations of networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06T17/00 — Three-dimensional [3D] modelling
- G06T19/003 — Navigation within 3D models or images
- G06T5/60 — Image enhancement or restoration using machine learning, e.g. neural networks
- G06T5/70 — Denoising; smoothing
- G06T5/94 — Dynamic range modification based on local image properties, e.g. for local contrast enhancement
- G06T7/0012 — Biomedical image inspection
- G06T7/10 — Segmentation; edge detection
- G06V10/40 — Extraction of image or video features
- G06V10/80 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level
- G06T2207/10132 — Ultrasound image
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30004 — Biomedical image processing
- G06T2207/30101 — Blood vessel; artery; vein; vascular
- G06V2201/03 — Recognition of patterns in medical or anatomical images
- Y02T10/40 — Engine management systems
Abstract
The invention relates to a data processing method for ultrasound-guided nerve block anesthesia. The method first preprocesses the ultrasound image data: a deep-learning-based noise-reduction algorithm removes image noise, and a local contrast enhancement algorithm improves the visibility of key structures, including nerves and blood vessels. Deep learning is then used for feature extraction, identifying and segmenting nerves, blood vessels and muscle tissue in the image, while data from different ultrasound modes are integrated to extract tissue features. The processing pipeline is optimized with a recursive learning algorithm and an adaptive algorithm: the recursive learning algorithm adjusts processing parameters according to the effect of previous injections and the ultrasound images, improving the accuracy of feature extraction and image analysis, and the adaptive algorithm adapts the parameters to the individual patient. Finally, a volume reconstruction algorithm builds a complete three-dimensional tissue model from the continuous two-dimensional ultrasound image sequence, and, combined with real-time position sensing, controls needle positioning and real-time navigation.
Description
Technical Field
The invention relates to ultrasound-guidance data processing, and in particular to a data processing method for ultrasound-guided nerve block anesthesia.
Background
Existing methods for processing nerve-block anesthesia ultrasound guidance data, such as the method described in Chinese patent publication No. CN2024105715803, have made some progress in enhancing image data, but several shortcomings remain. First, existing image enhancement techniques rely mainly on histogram equalization and basic image processing algorithms. While these can improve image contrast and sharpness to some extent, they tend to ignore the complex relationships between different tissue and organ structures, which can lead to over- or under-enhancement and so harm image realism and detail retention. For example, because bone and soft tissue differ in reflective properties and motion, simple enhancement can leave bone regions too bright while soft-tissue detail is lost. Second, existing methods handle the stability of tissue structures poorly. Different structures vary in stability: bone positions are relatively fixed, while soft tissue shifts slightly with breathing and other movements. The prior art judges tissue stability by quantifying per-pixel brightness change, but in practice this is not accurate enough; when processing anatomical structures, such brightness-based stability judgments are susceptible to noise and artifacts, leading to erroneous decisions.
In addition, the prior art falls short in multi-modal data fusion. Ultrasound imaging is real-time and convenient, but its resolution and contrast are low and deep structures are hard to display clearly; CT and MRI offer better resolution and contrast but lack real-time capability. Fusing ultrasound with CT or MRI data can markedly improve image clarity and contrast by exploiting the complementarity of multi-modal data, yet existing fusion approaches still face significant algorithmic and implementation challenges, particularly in efficiently and accurately aligning and fusing data from different modalities. Existing methods also lack effective real-time feedback and dynamic adjustment. During block anesthesia the physician must adjust the procedure based on real-time images and feedback, but without an effective feedback mechanism it is difficult to make timely, accurate adjustments; this increases both the complexity and the risk of the procedure, forces physicians to rely on experience and intuition, and makes the accuracy and consistency of each operation hard to guarantee. Finally, computational efficiency and algorithm optimization need improvement. Although deep-learning and artificial-intelligence algorithms have been applied to medical image processing, they typically demand substantial computational resources and time, making real-time intra-operative use difficult. In particular, when processing large-scale three-dimensional image data, the computational efficiency of existing algorithms often fails to meet clinical requirements, which limits practical use and raises system complexity and cost.
Disclosure of Invention
The invention aims to provide a data processing method for ultrasound-guided nerve block anesthesia that addresses the deficiencies and shortcomings identified in the background art.
The invention solves these technical problems with the following technical solution. First, the ultrasound image data is preprocessed: a deep-learning-based noise-reduction algorithm removes image noise, and a local contrast enhancement algorithm improves the visibility of key structures, including nerves and blood vessels;
secondly, performing feature extraction by deep learning, identifying and segmenting nerve, blood vessel and muscle tissues in the image, and simultaneously integrating data of different ultrasonic modes to extract tissue features;
Then the processing pipeline is optimized with a recursive learning algorithm and an adaptive algorithm: the recursive learning algorithm adjusts processing parameters according to the effect of previous injections and the ultrasound images, improving the accuracy of feature extraction and image analysis, while the adaptive algorithm adjusts the processing parameters according to each patient's body type and tissue characteristics;
And finally, constructing a complete three-dimensional tissue model from the continuous two-dimensional ultrasonic image sequence by utilizing a volume reconstruction algorithm, and controlling the positioning and the real-time navigation of the needle head by combining a real-time position sensing technology.
Further, the preprocessing process for the ultrasonic image data includes:
S1, removing image noise by adopting a high-order partial differential equation with an adaptive adjustment coefficient, wherein the specific formula is as follows:
where i and j are the differential orders and λ and σ are parameters that are dynamically adjusted based on the image content;
S2, for visual enhancement of key structures including nerves and blood vessels, using superposition integral transformation, combining information of different scales and directions:
Where Ω is the image domain, w k is the weight of scale k, G k is the gaussian blur function, ω (x, y) is the position dependent weight function, adjusting the enhanced locality;
S3, controlling nerve and blood-vessel segmentation with a network structure containing local and global information feedback:
Wherein F i (x, y) is a feature extracted by the deep neural network, K is a kernel function for enhancing spatial context information in the image;
S4, finally, designing a dynamic adjustment model depending on image gradient and local brightness characteristics aiming at image contrast differences of different patients:
where ∇² is the Laplacian operator, representing the second derivative of the image and used to detect boundaries and details, and γ(x, y) is a parameter adjusted according to the image characteristics.
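The S4 dynamic contrast adjustment can be sketched in a few lines of numpy. This is an illustrative simplification, not the patent's implementation: the function names are hypothetical, and γ(x, y) is modelled here as a scaled local-brightness map, an assumption the source does not specify.

```python
import numpy as np

def laplacian(img: np.ndarray) -> np.ndarray:
    """5-point discrete Laplacian with edge replication."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def dynamic_contrast(img: np.ndarray, gamma_scale: float = 0.5) -> np.ndarray:
    """Sharpen boundaries by subtracting a locally weighted Laplacian.

    gamma(x, y) is modelled here as gamma_scale times the normalised
    local brightness, so brighter regions receive stronger adjustment.
    """
    brightness = img / (img.max() + 1e-8)
    gamma = gamma_scale * brightness
    out = img - gamma * laplacian(img)
    return np.clip(out, 0.0, 1.0)
```

On a flat image the Laplacian vanishes and the image is returned unchanged; near edges the correction sharpens the boundary, which is the behaviour S4 describes.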
Further, the construction process of feature extraction by deep learning includes:
s1, firstly adopting multi-mode data fusion, and passing through a function:
Integrating the data of different ultrasonic modes M i, wherein K i (x, y) is a convolution kernel designed for the mode M i, sigma i regulates the spatial scale, and alpha i is a weight obtained through training;
S2, hierarchical feature extraction is executed through a deep learning model, and the formula is applied:
performing weighted higher-order differentiation on each layer of output of the network L j, wherein β j is a hierarchical weight;
S3, finally adopting an end-to-end deep learning model for segmentation, via the formula:
where V (u, V) is a training derived transformation kernel for extracting the significance of key tissue features such as nerves and blood vessels in image I.
Further, the recursive learning algorithm optimizes feature extraction, and based on the effect of the previous procedure and the ultrasound image, uses the formula:
where θ_t is the model parameter at time t, η is the adaptive learning rate, y_t is the surgical-effect data, ŷ_t is the effect predicted by the model, and p(y_t | x_t; θ_t) is a conditional probability density function;
Then, an adaptive algorithm is applied to adjust image processing parameters according to the body type and tissue characteristics of the patient, and the formula is utilized:
Where P original is the original image processing parameter, δ is the adjustment intensity, α k,βk is the adjustment coefficient, D k(s) represents the kth feature extracted from the ultrasound image of the patient, μ k is the expected value of the feature;
Finally, a real-time feedback and parameter optimization mechanism is realized, using the formula:
Where θ current and θ new are model parameters before and after adjustment, γ is a real-time adjustment factor, y observed and y predicted are the results of intra-operatively observed and model predictions, respectively, and φ (x data, θ) is a parameterized feature extraction function.
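The three update rules above — recursive learning, patient-adaptive adjustment, and intra-operative feedback — can be sketched in scalar form. This is a simplification under stated assumptions: the patent's formulas operate on parameter vectors and probability densities, and all names here are illustrative.

```python
import math

def recursive_update(theta: float, eta: float, loss: float) -> float:
    """Recursive learning step: nudge the model parameter by the scaled
    loss observed after the previous procedure (scalar simplification)."""
    return theta + eta * loss

def adaptive_params(p_original, delta, alphas, betas, features, mus) -> float:
    """Patient-adaptive adjustment:
    P_adjusted = P * (1 + delta * sum_k alpha_k * exp(-beta_k*(D_k - mu_k)^2))."""
    s = sum(a * math.exp(-b * (d - m) ** 2)
            for a, b, d, m in zip(alphas, betas, features, mus))
    return p_original * (1.0 + delta * s)

def feedback_correction(theta: float, gamma: float,
                        y_observed: float, y_predicted: float) -> float:
    """Intra-operative feedback: move the parameter toward the observed result."""
    return theta + gamma * (y_observed - y_predicted)
```

With the example values used later in the description (θ₀ = 1, η = 0.05, loss 0.5; P = 2, δ = 0.3, α₁ = 0.5, β₁ = 0.2, D₁ = 0.8, μ₁ = 1), these helpers reproduce the worked numbers directly.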
Further, the method for controlling the positioning and the real-time navigation of the needle comprises the following steps:
Firstly, implementing an enhanced volume reconstruction technology based on machine learning, and identifying and constructing three-dimensional models of nerves, blood vessels and surrounding tissues by analyzing a continuous two-dimensional ultrasonic image sequence;
Then, comprehensively analyzing the ultrasonic image and other imaging technology data including MRI or CT through a data fusion technology to enhance the detail richness of the obtained three-dimensional model;
and finally, combining a real-time position sensing technology, adopting an electromagnetic tracking system to monitor and transmit the position information of the needle in real time, synchronously updating the needle with the three-dimensional tissue model, and controlling a real-time navigation system to accurately guide the needle to safely arrive at a target area according to a preset path.
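The real-time navigation loop might be sketched as follows. All names and the millimetre tolerance are hypothetical; the patent does not specify a deviation threshold or correction policy.

```python
import math

def guide_needle(planned_path, emt_positions, tolerance_mm: float = 2.0):
    """Compare streamed electromagnetic-tracking (EMT) needle positions
    against the planned 3-D path and return, per step, the deviation in
    (assumed) millimetres when it exceeds the tolerance, else 0.0."""
    corrections = []
    for planned, measured in zip(planned_path, emt_positions):
        dev = math.dist(planned, measured)  # Euclidean distance in 3-D
        corrections.append(dev if dev > tolerance_mm else 0.0)
    return corrections
```

A navigation controller would feed non-zero entries back into path re-planning; here they are simply reported.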
Further, the enhanced volumetric reconstruction technique proceeds as follows:
First, a volume reconstruction technology based on deep learning is implemented, and a convolutional neural network CNN is used for the following formula:
Identifying and predicting three-dimensional structures of nerves and blood vessels from continuous two-dimensional ultrasonic images, wherein Θ represents network parameters, alpha represents learning rate, N represents image quantity, x i represents input image, y i represents target structure label, and f (x i) represents feature extraction function of image x i;
Then, the ultrasound image sequence is processed with a long short-term memory (LSTM) network, using the following formula:
H_t = σ_s(W·[H_{t-1}, X_t] + b + ∫_Ω κ(s, t)·H_s ds)
Analyzing the temporal relationship between the images, adjusting the algorithm to match patient-specific tissue changes, wherein H t is the hidden state at time t, X t is the input image, W and b are learning parameters, σ s is the activation function, κ (s, t) is the kernel function for modeling the dependency of H t on the previous state H s, Ω is the set of past states;
finally, combining ultrasound data with other imaging techniques including CT or MRI, through a multi-modal data fusion network, the formula:
Creating a comprehensive three-dimensional view, where X_US, X_CT and X_MRI represent the ultrasound, CT and MRI image data respectively, φ_j represents the fusion parameters, the associated per-dimension functions process the j-th dimensional data, and Ω is the entire data field.
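The recurrence H_t = σ_s(W·[H_{t−1}, X_t] + b + ∫_Ω κ(s, t)·H_s ds) above can be illustrated with a scalar sketch in which the integral over past states is approximated by an exponentially decaying sum, i.e. κ(s, t) = κ^(t−s) — an assumption chosen for illustration, not taken from the source.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def recurrent_pass(frames, W, b, kappa: float = 0.5):
    """Scalar analogue of H_t = sigma(W·[H_{t-1}, X_t] + b + sum_s k^(t-s)·H_s).

    frames : scalar per-frame features from the ultrasound sequence
    W      : pair (weight on previous hidden state, weight on input)
    """
    h_prev, history, out = 0.0, [], []
    for t, x in enumerate(frames):
        # exponentially decaying contribution of all earlier hidden states
        memory = sum((kappa ** (t - s)) * h for s, h in enumerate(history))
        h = sigmoid(W[0] * h_prev + W[1] * x + b + memory)
        history.append(h)
        out.append(h)
        h_prev = h
    return out
```

The point of the sketch is the third term: each new hidden state depends not only on its predecessor but on the whole weighted history, which is how the formula models temporal relationships between frames.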
Further, the construction method for enhancing the detail richness of the obtained three-dimensional model comprises the following steps:
firstly, implementing a multi-mode data fusion network MMF-NN based on deep learning, and carrying out an algorithm:
Identifying and predicting the three-dimensional structure of tissue from ultrasound (US), MRI and CT images, where Θ represents the network parameters, η is the learning rate, N is the number of images, x_{i,m} is an input image from modality m, y_i is the target structure label, and f_m represents the feature extraction function for modality m;
feature extraction and matching algorithms are then applied, by the formula:
Alignment and feature matching of the ultrasound (US), MRI and CT images is achieved, where X_m represents the set of images from modality m, ω(x) is a weight function adjusting the contribution of features from the different imaging techniques, δ is a Dirac delta function controlling the overlap of the modal images, and μ is a function that computes the cross-modal feature center.
Further, the real-time position sensing technique involves the steps of:
s1, firstly adopting an integrated electromagnetic tracking system EMT and real-time data processing algorithm, and passing through the algorithm:
Updating and adjusting the three-dimensional tissue model, where Θ represents the parameters of the three-dimensional model, γ is an adjustment factor, L is a loss function measuring the deviation between the model and the electromagnetic tracking data, and EMT_t represents the electromagnetic tracking data at time t;
S2, a dynamic three-dimensional path planning and feedback mechanism is applied, path correction is carried out according to real-time data, and an algorithm is used:
where π represents the path-planning parameters, η is the learning rate, ŷ_path is the predicted path output, y_path is the actual needle path, and the final term is the gradient of the path output with respect to the planning parameters.
The invention has the beneficial effects that:
1. By adopting a deep-learning-based image processing algorithm, the invention adaptively enhances image data during ultrasound guidance, effectively improving image clarity and contrast. This is particularly important for identifying and locating tissue and organ structures, so that the anesthesiologist can see the details of the surgical field more clearly.
2. The invention accurately segments the different tissue and organ structures in the ultrasound image and, by processing and enhancing each tissue region independently, reduces the loss of image detail. Such accurate segmentation is of great importance for improving surgical success rates and safety, especially in complex tissue environments.
3. By quantifying a brightness-change index for the pixel at the same position across frames, the invention can judge whether the tissue structure at that position is stable. For relatively stable tissue structures, more precise enhancement can be applied, improving overall image quality and the anesthesiologist's confidence in the procedure.
4. The invention responds dynamically to changes in tissue structures, such as position shifts caused by respiration, and through real-time analysis and processing maintains high precision and stability throughout the enhancement process. This dynamic response capability greatly reduces intra-operative risk and improves the anesthetic effect.
Drawings
FIG. 1 is a flow chart of the data processing method for ultrasound-guided nerve block anesthesia.
Fig. 2 is a flow chart of the preprocessing process of ultrasonic image data according to the present invention.
FIG. 3 is a flow chart of a construction process of feature extraction for deep learning according to the present invention.
FIG. 4 is a flow chart of a method of controlling needle positioning and real-time navigation in accordance with the present invention.
Detailed Description
The following describes the embodiments of the present invention in detail with reference to the drawings.
Referring to fig. 1, the data processing method for ultrasound-guided nerve block anesthesia first preprocesses the ultrasound image: random and structural noise is removed with a deep learning model, using a denoising algorithm based on a generative adversarial network (GAN) that is trained adversarially to produce a denoised, sharp image. In addition, local contrast enhancement, specifically contrast-limited adaptive histogram equalization (CLAHE), enhances the contrast of critical structures in the ultrasound image, including nerves and blood vessels, making them more prominent and easily identifiable.
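As a rough illustration of the CLAHE step, the following is a simplified tile-based clipped histogram equalization written from scratch. It omits the bilinear blending between neighbouring tiles that full CLAHE performs, so tile seams may be visible; it is a sketch of the idea rather than the method actually used.

```python
import numpy as np

def clahe_tile(tile: np.ndarray, clip_limit: int, n_bins: int = 256) -> np.ndarray:
    """Clipped histogram equalization for one uint8 tile."""
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, 256))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // n_bins  # redistribute excess
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    return cdf[tile].astype(np.uint8)

def simple_clahe(img: np.ndarray, tiles: int = 4, clip_limit: int = 40) -> np.ndarray:
    """Tile-based contrast-limited equalization (no inter-tile blending)."""
    out = np.empty_like(img)
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            ys = slice(i * th, (i + 1) * th if i < tiles - 1 else h)
            xs = slice(j * tw, (j + 1) * tw if j < tiles - 1 else w)
            out[ys, xs] = clahe_tile(img[ys, xs], clip_limit)
    return out
```

The clip limit bounds how far any single grey level can dominate a tile's histogram, which is what keeps CLAHE from amplifying speckle noise the way plain histogram equalization would.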
Second, feature extraction is performed with deep learning: convolutional neural networks (CNNs) precisely identify and segment nerves, blood vessels and muscle tissue in the image. This process does not rely on a single ultrasound modality; it also integrates data from multiple ultrasound modes, including blood-flow information provided by Doppler imaging, to fully capture and analyze tissue characteristics. The multi-modal integration is realized through specially designed network layers, each responsible for extracting and fusing information from a different mode, ensuring that the final feature maps are information-rich and relevant.
Next, the process flow is further optimized using a recursive learning algorithm and an adaptive algorithm. The recursive learning algorithm uses the data collected from the previous procedure to continuously optimize the parameters of the model so that after each operation, the feature extraction and image analysis of the system are more accurate. Meanwhile, the self-adaptive algorithm dynamically adjusts image processing parameters according to the specific body type and tissue characteristics of the patient, and can adjust the intensity of image enhancement according to the muscle density of the patient, so that individuation and optimization of the processing process are ensured.
Finally, the continuous two-dimensional ultrasound image sequence is converted into a complete three-dimensional tissue model by a volumetric reconstruction algorithm. This step uses algorithms to estimate the spatial relationship between the two-dimensional image layers, reconstructing a detailed three-dimensional structure. Meanwhile, the electromagnetic tracking system of the real-time position sensing technology is utilized to monitor and transmit the position information of the needle in real time, the system synchronizes the information with the three-dimensional model, and the needle is guided to accurately reach the target area along the optimal path through the control algorithm, so that the safety and the success rate of the operation are greatly improved. The method comprehensively applying the advanced image processing technology and the real-time navigation system provides unprecedented accuracy and operation convenience for blocking anesthesia ultrasonic guidance.
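A minimal stand-in for the volume reconstruction step — plain linear interpolation between consecutive frames rather than the learned reconstruction described above — might look like this (function name and interpolation factor are illustrative):

```python
import numpy as np

def reconstruct_volume(slices, interp_factor: int = 2) -> np.ndarray:
    """Stack sequential 2-D ultrasound frames into a 3-D volume,
    linearly interpolating extra slices between each pair of neighbours."""
    frames = [np.asarray(s, dtype=float) for s in slices]
    volume = [frames[0]]
    for prev, nxt in zip(frames, frames[1:]):
        for k in range(1, interp_factor):
            t = k / interp_factor
            volume.append((1 - t) * prev + t * nxt)  # intermediate slice
        volume.append(nxt)
    return np.stack(volume)  # shape: (n_slices, H, W)
```

The learned reconstruction in the patent would replace the linear blend with a network prediction; the stacking and the resulting (depth, height, width) layout are the same.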
Example 1:
This embodiment applies the deep-learning-based nerve block anesthesia ultrasound-guidance data processing method to lumbar nerve block anesthesia for a patient with chronic low back pain. The ultrasound image data is first preprocessed. Image noise is removed with a high-order partial differential equation with adaptively adjusted coefficients, whose formula is as follows:
where i and j are the orders of differentiation and λ and σ are parameters dynamically adjusted based on the image content. In practice, i and j take values from 1 to 2, λ from 0.1 to 1, and σ from 1 to 5.
By this method, noise in the ultrasound image is effectively removed and the boundaries of nerves and blood vessels become clearer. Taking one ultrasound image pixel as an example, set λ = 0.5 and σ = 2, and choose i = 1, j = 1. With an original image gradient value of 2, the denoised pixel value N(x, y) is computed from the formula as
0.5 · (2²) · exp(−((1² + 1²)/8)) ≈ 1.56,
and the noise influence is significantly reduced.
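Assuming the denoising term has the form λ·(∇I)²·exp(−(i² + j²)/(2σ²)), which is what the substituted numbers imply (the formula itself appears only as a figure in the source), the worked value can be checked numerically:

```python
import math

lam, sigma = 0.5, 2.0   # lambda and sigma from the example
i, j = 1, 1             # differential orders
grad = 2.0              # gradient value of the original image

# N(x, y) = lambda * grad^2 * exp(-(i^2 + j^2) / (2 * sigma^2))
n_xy = lam * grad**2 * math.exp(-(i**2 + j**2) / (2 * sigma**2))
print(round(n_xy, 2))  # ≈ 1.56
```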
Next, with reference to fig. 2, feature extraction is performed using a deep learning model. Nerves, blood vessels and muscle tissue in the ultrasound image are identified and segmented by convolutional neural network CNN while integrating data from different ultrasound modes. The preprocessed ultrasonic image is input, the model identifies nerves and blood vessels, and a segmented image is generated to display the specific location of each tissue. The image processing flow is further optimized by a recursive learning algorithm and an adaptive algorithm. The recursive learning algorithm continuously adjusts model parameters according to the previous injection effect and the ultrasonic image, and the formula is as follows:
where θ_t is the current model parameter, η is the learning rate (ranging from 0.01 to 0.1), L is the loss function, and f is the model function. With initial parameter θ₀ = 1, learning rate η = 0.05 and a loss value of 0.5, the new parameter is θ₁ = 1 + 0.05 × 0.5 = 1.025, showing that the parameters are gradually optimized to improve the accuracy of image processing.
The adaptive algorithm adjusts the processing parameters according to body type and tissue characteristics, ensuring individualization and precision, with the formula:
where δ is the adjustment intensity (range 0.1 to 0.5), α_k and β_k are adjustment coefficients (range 0.1 to 1), and μ_k is the expected value of the feature. With initial parameter P_original = 2, δ = 0.3, α₁ = 0.5, β₁ = 0.2, feature value D₁(s) = 0.8 and expected value μ₁ = 1, then P_adjusted = 2·(1 + 0.3·0.5·exp(−0.2·(0.8 − 1)²)) ≈ 2.30, demonstrating the dynamic adjustment of the parameters.
Finally, a complete three-dimensional tissue model is constructed from the continuous two-dimensional ultrasonic image sequence using a volume reconstruction algorithm. Combined with the electromagnetic tracking system EMT, the position information of the needle is monitored and transmitted in real time, ensuring accurate positioning of the needle. Path planning is dynamically adjusted by the formula:
Π_{t+1} = Π_t + η·ΔP_t
wherein η is the learning rate with a value range of 0.01 to 0.1, Π is the path planning parameter and ΔP_t is the predicted path error. With the initial path parameter Π_0 set to 3, η = 0.05 and a predicted path error of 0.4, the new path parameter Π_1 = 3 + 0.05×0.4 = 3.02, indicating that the path is gradually optimized.
Through the steps, a doctor can accurately guide the needle head to reach the target nerve region in real time in the operation process, so that the success of anesthesia and the safety of a patient are ensured. The whole process shows the remarkable advantages of the novel method in the aspects of improving the image processing precision, the real-time performance and the individuation.
The embodiment further adopts a visual enhancement technology based on superposition integral transformation to further improve the visibility of nerves and blood vessels in the ultrasonic image, and uses information of different scales and directions to enhance the image so as to make the key structure more prominent in the image. The specific formula is as follows:
C(I)(x,y) = ω(x,y)·Σ_k w_k·(I*G_k)(x,y)
wherein Ω is the image domain over which the enhancement is applied, w_k is the weight of scale k, G_k is the Gaussian blur function and ω(x,y) is the position-dependent weight function for adjusting the locality of the enhancement.
This further improves the definition of nerves and blood vessels in the image, allowing more accurate needle positioning. The specific operation is as follows:
An ultrasound image of the lumbar region of the patient is selected as the treatment object. The image field Ω encompasses the entire lumbar region, including critical nerve and vascular structures.
Different weights w_k are set according to the different scales and directions of the image. To improve processing precision, three scales are selected, with weights w_1 = 0.4, w_2 = 0.3 and w_3 = 0.3. For each scale, a different Gaussian blur function G_k(x,y) is used to smooth the image and extract features at different degrees of blur. The selected Gaussian blur parameters are standard deviation σ_1 = 1 for G_1, σ_2 = 2 for G_2 and σ_3 = 3 for G_3.
The position-dependent weight function ω(x,y) adjusts the enhancement effect by analyzing the position of each pixel in the image. A weight function based on the image gradient is selected; it identifies edges and important structures in the image, so that the enhancement of these areas is more pronounced. Specifically, ω(x,y) is defined as a function of the image gradient magnitude |∇I(x,y)|, where σ = 1.5 is the adjustment parameter.
In processing the ultrasound image, a region in the lumbar image is selected for detailed calculation. The initial image intensity I (x, y) =150 of a certain pixel point is set in this region. According to weights of different scales and Gaussian blur function parameters, the calculation process of the pixel point intensity C (I) after processing is as follows:
1. Gaussian blur processing:
Scale 1: (I*G_1)(x,y) = 120;
Scale 2: (I*G_2)(x,y) = 130;
Scale 3: (I*G_3)(x,y) = 140.
2. Weighted superposition: 0.4×120 + 0.3×130 + 0.3×140 = 48 + 39 + 42 = 129.
3. Position-dependent weight calculation: the image gradient |∇I(x,y)| at the pixel is computed and substituted into ω(x,y) with σ = 1.5.
4. Final enhancement result: the superposed intensity 129 is scaled by ω(x,y) to give the processed pixel intensity C(I).
By processing in this way, the nerve and vascular structures in the image are significantly enhanced, and doctors can more clearly identify and locate these critical structures, thereby accurately guiding the needle to the target area, ensuring the success of the surgery and the safety of the patient. This procedure demonstrates the feasibility and effectiveness of visual enhancement techniques based on superposition integral transformations in practical applications.
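The superposition step above can be sketched in code. This is a minimal sketch assuming the final pixel value is the weighted superposition of the Gaussian-blurred responses scaled by ω(x,y); the function name and the ω = 1 simplification are illustrative, not from the source:

```python
def superpose(blurred, weights, omega):
    """Multi-scale enhancement: weighted superposition of the per-scale
    Gaussian-blurred intensities, scaled by the position weight omega."""
    assert len(blurred) == len(weights)
    return omega * sum(w * b for w, b in zip(weights, blurred))

# Worked example: blurred intensities 120/130/140 at scales 1..3,
# weights 0.4/0.3/0.3; with omega = 1 the superposed intensity is 129.
c = superpose([120, 130, 140], [0.4, 0.3, 0.3], omega=1.0)
```

In a full implementation the blurred values would come from convolving the image with the three Gaussian kernels (σ = 1, 2, 3) rather than being given directly.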
Embodiments further utilize a network structure containing local and global information feedback to control the segmentation of nerves and blood vessels. The network structure used can be formulated as follows:
S(x,y) = σ_s(Σ_i β_i·F_i(x,y) + ∫_Ω K(x−u, y−v)·F(u,v) du dv)
wherein F_i(x,y) is a feature extracted by the deep neural network, β_i is the weight coefficient of the feature with a value range of 0.1 to 1, K is a kernel function for enhancing spatial context information in the image, and σ_s is the activation function, here the Sigmoid function.
The deep neural network DNN is used to extract local and global features from the enhanced ultrasound image. Each layer of the network extracts features of different scale and complexity, including edges, textures, and advanced semantic information. The feature number m=5 is set, and the weight β i of each feature F i (x, y) is randomly initialized between 0.1 and 1, and adjusted by training.
The kernel function K combines local and global information to increase the accuracy of segmentation. A Gaussian kernel function is selected:
K(u,v) = exp(−(u² + v²)/(2σ²))
where σ = 2.
In processing the lumbar ultrasound image of the patient, one image block is selected for detailed calculation. The initial image intensity of a certain pixel point in the image block is set to I(x,y) = 150. The network-extracted features F_1(x,y) to F_5(x,y) are 0.6, 0.8, 0.7, 0.9 and 0.5 respectively, with corresponding weights β_1 = 0.4, β_2 = 0.5, β_3 = 0.3, β_4 = 0.6 and β_5 = 0.2.
Calculating the local feature contribution: Σ_i β_i·F_i(x,y) = 0.4×0.6 + 0.5×0.8 + 0.3×0.7 + 0.6×0.9 + 0.2×0.5 = 1.49.
For the kernel contribution, the kernel range r = 3 is set and the Gaussian kernel parameter σ = 2 is selected; with the average feature value set to 0.7, the integration result is approximately 3.402.
Calculating the final segmentation result:
S(x,y) = σ_s(1.49 + 3.402) = σ_s(4.892) ≈ 0.99
A segmentation result is obtained for each pixel point in the ultrasonic image; a value of σ_s close to 1 indicates a high probability that the pixel belongs to a nerve or blood vessel.
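The segmentation score can be checked numerically. This is a minimal sketch assuming the score is the Sigmoid of the weighted local features plus the kernel contribution; note that evaluating Σ β_i·F_i from the listed values gives 1.49, and the kernel contribution 3.402 is taken as stated in the text:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def segment_score(features, weights, kernel_contribution):
    """S(x,y) = sigmoid(sum_i beta_i * F_i + kernel contribution)."""
    local = sum(b * f for b, f in zip(weights, features))
    return local, sigmoid(local + kernel_contribution)

# Features F_1..F_5 and weights beta_1..beta_5 from the worked example.
local, s = segment_score([0.6, 0.8, 0.7, 0.9, 0.5],
                         [0.4, 0.5, 0.3, 0.6, 0.2], 3.402)
```

The score saturates near 1, matching the text's conclusion that the pixel is very likely nerve or vessel.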
The final embodiment uses a dynamically adjusted model that relies on image gradients and local brightness characteristics to optimize image contrast. The dynamic contrast adjustment model employed may be represented by the following formula:
wherein ∇² is the Laplacian, used to detect boundaries and details in an image, γ(x,y) is a parameter adjusted according to the image characteristics, μ(I) is the global average intensity of the image, and θ is a parameter controlling the contrast adjustment intensity with a value range of 1 to 5.
Image gradients and local luminance characteristics are extracted from the enhanced and segmented image to determine the contrast adjustment parameter γ(x,y) for each pixel. The value of γ(x,y) is set in the range 0.5 to 2 depending on the local luminance and gradient information.
In processing the lumbar ultrasound image of the patient, the critical areas are selected for detailed calculations. The initial image intensity I (x, y) =180, the global average intensity μ (I) =150, and the contrast adjustment intensity parameter θ=2 are set in this region.
Calculating the Laplacian of the pixel point gives ∇²I(x,y) = 10. According to the image characteristics, the contrast adjustment parameter of the pixel point is determined as γ(x,y) = 1.5.
Substituting into the formula:
A(I,x,y) = 180 + 2×1.5×10×(1 − exp(−15))
Since exp(−15) is very close to 0, the simplified result is:
A(I,x,y) ≈ 180 + 30×1 = 210
The contrast of the pixel point is improved, so that the edges of nerves and blood vessels are more clearly visible.
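The contrast step can be verified numerically. This is a minimal sketch assuming the form A = I + θ·γ·∇²I·(1 − exp(−e)), with the Laplacian value 10 inferred from the stated result; both assumptions are illustrative:

```python
import math

def adjust_contrast(i_xy, theta, gamma, laplacian, e):
    """A(I,x,y) = I + theta * gamma * Laplacian(I) * (1 - exp(-e)).
    The exponential damping term is negligible when e is large."""
    return i_xy + theta * gamma * laplacian * (1 - math.exp(-e))

# Worked example: I=180, theta=2, gamma=1.5, inferred Laplacian=10,
# damping exponent 15 -> approximately 210.
a = adjust_contrast(180, 2, 1.5, 10, 15)
```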
Example 2:
The embodiment adopts a multi-mode data fusion technology, combines data from different ultrasonic modes, performs feature extraction through deep learning, and is helpful for comprehensively understanding and identifying the anatomical structure of a target area. The multi-modal data fusion formula used is as follows:
F(x,y) = Σ_i α_i·(M_i * K_i)(x,y)
wherein M_i(x,y) is the data from the different ultrasound modes, K_i(x,y) is a convolution kernel designed for mode M_i, σ_i modulates its spatial scale, and α_i is a weight obtained by training.
Three different ultrasound modes were selected for data fusion: B-mode, Doppler mode and Elastography. For each mode a different convolution kernel K_i was designed, with kernel parameter σ_1 = 1.5 for B-mode, σ_2 = 2.0 for Doppler mode and σ_3 = 1.0 for elastography.
The weight α_i of each mode is obtained through deep learning model training; the trained weights are α_1 = 0.5, α_2 = 0.3 and α_3 = 0.2.
In processing the lumbar ultrasound image of the patient, the critical areas are selected for detailed calculations. The initial image intensity of a certain pixel point in this region is set to be I (x, y) =180.
For the B-mode data M_1(x,y), processing with the convolution kernel K_1(x,y) and Gaussian weighting gives (M_1*K_1)(x,y) = 144.5, with weight α_1 = 0.5;
for the Doppler-mode data M_2(x,y), processing with the convolution kernel K_2(x,y) and Gaussian weighting gives (M_2*K_2)(x,y) = 152, with weight α_2 = 0.3;
for the Elastography data M_3(x,y), processing with the convolution kernel K_3(x,y) and Gaussian weighting gives (M_3*K_3)(x,y) = 112.5, with weight α_3 = 0.2.
Substituting the data into the multi-mode fusion formula:
contribution of B-mode: 0.5×144.5 = 72.25;
contribution of Doppler mode: 0.3×152 = 45.6;
contribution of Elastography: 0.2×112.5 = 22.5.
Final fusion result:
F(x,y) = 72.25 + 45.6 + 22.5 = 140.35
Through multi-mode data fusion, doctors can synthesize information of different ultrasonic modes to generate more accurate and comprehensive feature images.
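The fusion above reduces to a weighted sum of per-mode convolution responses. This is a minimal sketch; the responses (144.5, 152, 112.5) are back-computed from the stated contributions and weights:

```python
def fuse(contributions):
    """F(x,y) = sum_i alpha_i * (M_i * K_i)(x,y): weighted sum of the
    per-mode convolution responses at one pixel."""
    return sum(alpha * response for alpha, response in contributions)

# (weight, convolution response) for B-mode, Doppler and elastography.
f = fuse([(0.5, 144.5), (0.3, 152.0), (0.2, 112.5)])
```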
With reference to fig. 3, the embodiment then performs hierarchical feature extraction through a deep learning model to further improve the visibility and segmentation accuracy of nerves and blood vessels in the image. The hierarchical feature extraction formula used is as follows:
T(x,y) = Σ_{j=1}^m β_j·Δ_h L_j(x,y)
wherein Δ_h is a high-order Laplace operator of the image, used to detect multi-level boundaries and details, L_j(x,y) is the output of network layer j, and β_j is the level weight with a value range of 0.1 to 1.
Each layer of the deep learning model extracts features of different scale and complexity: shallower layers extract edge and texture information, while deeper layers extract high-level semantic information. The number of feature extraction layers is set to m = 4, and the weight β_j of each layer is randomly initialized between 0.1 and 1 and adjusted through training.
When processing the lumbar ultrasound image, the key region is selected for detailed calculation. The initial image intensity I (x, y) =180 at a certain pixel point is set in this region.
The features L 1 (x, y) extracted by the first layer are subjected to a higher-order laplace operator process:
The features L 2 (x, y) extracted by the second layer are subjected to a higher-order laplace operator process:
the extracted feature L 3 (x, y) of the third layer is subjected to a higher-order laplace operator process:
the feature L 4 (x, y) extracted by the fourth layer is subjected to a higher-order laplace operator process:
substituting the data into a hierarchical feature extraction formula for calculation:
calculating the contribution of the first layer:
calculating the contribution of the second layer:
calculating the contribution of the third layer:
Calculating the contribution of the fourth layer:
Final feature extraction results:
T(x,y)=9+8+5+1.5=23.5
Through hierarchical feature extraction, doctors can extract multi-level features in the ultrasonic images, so that boundaries and details of nerves and blood vessels are clearer.
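The hierarchical result is simply the sum of the per-layer contributions β_j·Δ_h L_j. A minimal sketch using the contributions listed above (the per-layer β_j and Laplacian responses are not separable from the text, so only their products are used):

```python
def hierarchical_features(contributions):
    """T(x,y) = sum over layers of beta_j * (high-order Laplacian response
    of the layer output L_j) at the pixel under consideration."""
    return sum(contributions)

# Per-layer contributions from the worked example.
t = hierarchical_features([9.0, 8.0, 5.0, 1.5])
```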
The embodiment further adopts an end-to-end deep learning model to accurately segment the nerves and blood vessels, with the formula:
S(x,y) = σ(Σ_{u,v} V(u,v)·I(x−u, y−v))
wherein V(u,v) is a transformation kernel obtained by training, used for extracting key tissue features in the image I, including the saliency of nerves and blood vessels.
Training is performed on ultrasonic image data to obtain the transformation kernel V(u,v). The transformation kernel size is set to 3×3, and each of its values is fixed after training.
When processing the lumbar ultrasound image, a key region is selected for detailed calculation, and the initial image intensity matrix I for a certain 3×3 pixel neighborhood is set in this region.
substituting the data into a segmentation formula for calculation:
Calculating the specific numerical values of the products V(u,v)·I(x−u, y−v):
36 + 95 + 40 + 68 + 144 + 76 + 32 + 85 + 36 = 612
Applying the activation function: σ(612) ≈ 1.
each pixel point is calculated through convolution of the transformation kernel V (u, V) and the initial image intensity I (x-u, y-V), and finally a probability value close to 1 is generated, which indicates that the region has high probability of being a nerve or a blood vessel.
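The convolution-plus-activation step can be reproduced from the nine listed products; a minimal sketch:

```python
import math

def conv_sigmoid(products):
    """End-to-end segmentation step: sum the elementwise products
    V(u,v) * I(x-u, y-v) over the 3x3 window, then apply the Sigmoid."""
    z = sum(products)
    return z, 1.0 / (1.0 + math.exp(-z))

# The nine products from the worked example; the raw sum saturates the
# Sigmoid, giving a probability essentially equal to 1.
z, s = conv_sigmoid([36, 95, 40, 68, 144, 76, 32, 85, 36])
```

In practice the pre-activation would typically be normalized before the Sigmoid; the raw sum is used here only to mirror the text.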
Example 3:
Embodiments use recursive learning algorithms to adjust and optimize the model parameters of feature extraction in order to dynamically adjust the model based on previous surgical effects and ultrasound image data to more accurately match actual surgical needs. The core formula of the algorithm is as follows:
θ_{t+1} = θ_t + η·∫(y_t − ŷ_t)·∇_θ log p(y_t|x_t; θ_t) dx_t
wherein θ_t is the model parameter at time t, η is an adaptive learning rate with a value range of 0.01 to 0.1 to ensure a smooth learning process, y_t is the surgical effect data, ŷ_t is the effect predicted by the model, and p(y_t|x_t; θ_t) is a conditional probability density function.
In a patient operation, a doctor records the effect data y_t of the previous operation and predicts the effect ŷ_t from the real-time ultrasonic image x_t. At a specific moment, the actual surgical effect is y_t = 0.8 (complete success being 1), while the model predicts the surgical effect as ŷ_t = 0.6.
Setting a learning rate η=0.05;
Setting an initial value θ t =1.0 of the model parameter;
Setting the gradient of the conditional probability density function, ∇_θ log p(y_t|x_t; θ_t), estimated to be 0.4.
Substituting the data into a recursive learning algorithm formula for calculation:
the calculation process is simplified (set the integration result to 1, i.e. to represent the whole image area):
θ_{t+1} = 1.0 + 0.05×(0.2×0.4) = 1.0 + 0.004 = 1.004
Through the recursion learning algorithm, a doctor can dynamically adjust the model parameter theta t based on the actual operation effect and the ultrasonic image data, so that the model is more close to the actual operation condition.
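The recursive update can be checked directly. A minimal sketch with the image-domain integral collapsed to a scale factor, as the text's simplification does:

```python
def recursive_update(theta, eta, y, y_hat, grad_log_p, integral_scale=1.0):
    """theta_{t+1} = theta_t + eta * (y - y_hat) * grad log p, with the
    integral over the image domain collapsed to a scale factor."""
    return theta + eta * (y - y_hat) * grad_log_p * integral_scale

# Worked example: theta=1.0, eta=0.05, y=0.8, y_hat=0.6, gradient=0.4.
theta_next = recursive_update(1.0, 0.05, 0.8, 0.6, 0.4)
```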
Embodiments further improve the accuracy and safety of surgery by adaptively adjusting image processing parameters to better visualize critical structures, including nerves and blood vessels. The adaptive algorithm formula used is as follows:
P_adjusted = P_original·(1 + δ·Σ_k α_k·exp(−β_k·(D_k(s) − μ_k)²))
where P_original is the original image processing parameter, δ is the adjustment intensity with a value range of 0.05 to 0.2 to ensure the adjustment is not excessive, α_k and β_k are adjustment coefficients controlling the weight and adjustment rate of each feature respectively, generally with value ranges of 0.1 to 1, D_k(s) represents the k-th feature extracted from the ultrasound image of the patient, and μ_k is the expected value of the feature.
In a pre-operative ultrasound examination of a patient, a physician determines several key features including muscle tissue density, fat layer thickness, and vascular location. The number of features n=3 is set and the adjustment factor for each feature is calculated from the patient's specific data.
Setting an original image processing parameter P original =1.0;
the adjustment intensity δ=0.1 is selected.
The feature weights α_k and adjustment rates β_k are set to:
α_1 = 0.8, β_1 = 0.5 (muscle density)
α_2 = 0.7, β_2 = 0.6 (fat layer thickness)
α_3 = 0.6, β_3 = 0.7 (blood vessel position)
The actual features D_k(s) and expected values μ_k are set to:
D_1(s) = 2.0, μ_1 = 1.5 (muscle density)
D_2(s) = 1.0, μ_2 = 1.0 (fat layer thickness)
D_3(s) = 0.5, μ_3 = 0.8 (blood vessel position)
Calculating the adjustment factor:
Σ_k α_k·exp(−β_k·(D_k(s) − μ_k)²) = 0.8×exp(−0.5×0.25) + 0.7×exp(0) + 0.6×exp(−0.7×0.09) ≈ 0.706 + 0.700 + 0.563 = 1.969
Calculating the adjusted parameter:
P_adjusted = 1.0×(1 + 0.1×1.969) = 1.197
through the adaptive algorithm, a doctor can accurately adjust ultrasonic image processing parameters according to the personal body type and tissue characteristics of a patient.
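The adaptive adjustment can be verified end to end from the three features. A minimal sketch of the formula P_adjusted = P_original·(1 + δ·Σ α_k·exp(−β_k·(D_k − μ_k)²)):

```python
import math

def adapt_param(p_original, delta, features):
    """Adaptive image-processing parameter adjustment.
    `features` is a list of (alpha_k, beta_k, D_k, mu_k) tuples."""
    factor = sum(a * math.exp(-b * (d - mu) ** 2) for a, b, d, mu in features)
    return factor, p_original * (1 + delta * factor)

# Muscle density, fat layer thickness and blood vessel position,
# with the values from the worked example.
factor, p = adapt_param(1.0, 0.1, [(0.8, 0.5, 2.0, 1.5),
                                   (0.7, 0.6, 1.0, 1.0),
                                   (0.6, 0.7, 0.5, 0.8)])
```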
The embodiment next implements a real-time feedback and parameter optimization mechanism, adjusts model parameters of the ultrasonic guidance system in real time, and performs dynamic optimization according to the difference between the actual situation observed in the operation process and the result predicted by the pre-model. The real-time feedback and parameter optimization formula used is as follows:
θ_new = θ_current + γ·∫(y_observed − y_predicted)·φ(x_data, θ) dx_data
Where θ_current and θ_new are the model parameters before and after adjustment, γ is a real-time adjustment factor, usually set in the range 0.01 to 0.05 to ensure stable adjustment, y_observed and y_predicted are the intra-operative observation and the model prediction respectively, and φ(x_data, θ) is a parameterized feature extraction function.
In a patient surgery, the actual nerve response y_observed monitored by the physician deviates from the nerve response y_predicted predicted by the model at the beginning of the operation. Setting:
y_observed = 0.9 (nerve response intensity), y_predicted = 0.7, the initial parameter value θ_current = 2.0, the real-time adjustment factor γ = 0.03, and the calculation result of the feature extraction function φ(x_data, θ) as 0.8.
Substituting the data into a real-time feedback and parameter optimization formula for calculation:
θ_new = 2.0 + 0.03·∫(0.9 − 0.7)×0.8 dx_data
The calculation process is simplified (the integration result is set to 1, that is, the whole influence area is reflected):
θ_new = 2.0 + 0.03×(0.2×0.8) = 2.0 + 0.0048 = 2.0048
Through real-time feedback and parameter optimization mechanisms, doctors can ensure that the ultrasonic guidance system is suitable for actual operation conditions in the operation process, and model parameters are timely adjusted to match observed operation effects.
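The feedback update is a one-line computation once the integral is collapsed to a scale factor, as in the simplification above; a minimal sketch:

```python
def feedback_update(theta, gamma, y_obs, y_pred, phi, integral_scale=1.0):
    """theta_new = theta_current + gamma * (y_observed - y_predicted) * phi,
    with the integral over x_data collapsed to a scale factor."""
    return theta + gamma * (y_obs - y_pred) * phi * integral_scale

# Worked example: theta=2.0, gamma=0.03, y_obs=0.9, y_pred=0.7, phi=0.8.
theta_new = feedback_update(2.0, 0.03, 0.9, 0.7, 0.8)
```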
Example 4:
The embodiment adopts an enhanced volume reconstruction technology based on machine learning, and identifies and constructs a three-dimensional model of nerves, blood vessels and surrounding tissues by analyzing a continuous two-dimensional ultrasonic image sequence, and the core is to use deep learning, particularly a convolutional neural network CNN, so as to ensure the accuracy and instantaneity of the reconstructed model. The formula for the enhanced volumetric reconstruction technique is as follows:
Θ_new = Θ_old − α·(1/N)·Σ_{i=1}^N ∇_Θ L(f(x_i), y_i)
wherein Θ represents the network parameters, α is the learning rate with a value range of 0.001 to 0.01 to ensure a stable learning process, N is the number of images, x_i is an input image, y_i is the target structure label, and f(x_i) represents the feature extraction function of image x_i.
In the preparatory phase of surgery, the physician acquires a plurality of sequential two-dimensional images of the patient's lumbar region using an ultrasound device, which images cover different depths from the skin surface to deep nerves and blood vessels. And extracting key features in each two-dimensional image, including the positions of nerve bundles and blood vessels, by using the trained CNN model. The number of images n=30, and the feature extraction function f (x i) of each image can identify the key structure.
The initial values of the model parameters Θ_0 are set, together with a learning rate α = 0.005.
In a certain training pass, the target structure label predicted by the model for an image x_i deviates from the actual label y_i = 0.8, and the loss function gradient is estimated to be 0.4.
Substituting the formula to calculate:
The calculation process is simplified (the contribution gradient of each image is set to be the same), giving an update of Θ_new = Θ_old − 0.005×0.4 = Θ_old − 0.002 per step.
Through the enhanced volume reconstruction technology based on deep learning, doctors can not only construct an accurate three-dimensional model, but also adjust and optimize the position of the needle head in real time during the operation, thereby ensuring the safety and success rate of each operation.
Embodiments further employ a long and short time memory network LSTM to process the ultrasound image sequence, analyze the temporal relationship between the images, and match patient-specific tissue changes. The formula is as follows:
wherein H_t is the hidden state at time t, X_t is the input image, W and b are learned parameters with value ranges of 0.01 to 0.1, σ_s is the activation function, usually the Sigmoid function, κ(s,t) is a kernel function used to model the dependency of H_t on a previous state H_s, and Ω is the set of past states.
A series of continuous ultrasound images of the lumbar region of the patient were acquired using an ultrasound device for a total of 30 frames, each frame having an image size of 512x512 pixels.
The initial value of the hidden state is set to H_0 = 0, with learned parameters W = 0.05 and b = 0.1; a Gaussian kernel function κ(s,t) with σ = 2 is selected.
For the first time step t=1:
H1=σs(W·[H0,X1]+b+∫Ωκ(s,1)·Hs ds)
The feature extraction result of X_1 is set to 200, and since the integral part depends mainly on the initial state H_0 = 0, the calculation simplifies to:
H_1 = σ_s(0.05·(0 + 200) + 0.1) = σ_s(10.1) ≈ 1
for the second time step t=2:
H2=σs(W·[H1,X2]+b+∫Ωκ(s,2)·Hsds)
Setting X_2 to 190 and H_1 ≈ 1:
since the kernel κ(s,2) has its main contribution at s = 1, approximated as 0.6065 times the value of H_1:
H_2 ≈ σ_s(0.05·191 + 0.1 + 1×0.6065) = σ_s(9.55 + 0.1 + 0.6065) = σ_s(10.2565) ≈ 1
this technique of LSTM processing the ultrasound image sequence significantly improves the speed of response and decision accuracy of the physician during the procedure, ensuring that the needle reaches the target area along the optimal path.
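The two time steps can be reproduced with a simplified recurrence. This sketch assumes the concatenation [H_{t−1}, X_t] is treated as a sum, as the worked numbers suggest; the kernel contribution 0.6065 is taken as stated:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def step(h_prev, x_t, w=0.05, b=0.1, kernel_term=0.0):
    """H_t = sigma(W * (H_{t-1} + X_t) + b + kernel memory term)."""
    return sigmoid(w * (h_prev + x_t) + b + kernel_term)

h1 = step(0.0, 200.0)                          # saturates near 1
h2 = step(h1, 190.0, kernel_term=0.6065 * h1)  # saturates near 1
```

A real LSTM would use separate input, forget and output gates with learned weight matrices; this scalar recurrence only mirrors the arithmetic of the worked example.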
The final embodiment combines ultrasound data with imaging techniques to create a comprehensive three-dimensional view through a multi-modal data fusion network. The formula is used as follows:
Wherein X_US, X_CT and X_MRI represent the image data of ultrasound, CT and MRI respectively, Φ_j represents the fusion parameters, φ_j and ψ_j are the functions for the j-th data processing, and D is the entire data field.
Ultrasound images, CT scan images, and MRI images of the patient's lumbar region are collected. The ultrasound image has 30 frames, each of CT and MRI images has 10 frames, each frame having an image size of 512x512 pixels.
To ensure different modality image fusion, image registration is first performed to ensure that the ultrasound, CT and MRI images are aligned in the same spatial coordinate system.
The initial fusion parameter Φ j is set to be in the range of 0.1 to 1, and specifically Φ j =0.5.
The settings of φ_j and ψ_j are as follows:
φ_j(x) = x^1.5, ψ_j(X_US, X_CT, X_MRI; Φ_j) = Φ_j·(X_US + X_CT + X_MRI)
At a certain pixel position, the ultrasound image data is X_US = 150, the CT image data is X_CT = 200, and the MRI image data is X_MRI = 180.
ψ_j(X_US, X_CT, X_MRI; Φ_j) = 0.5×(150 + 200 + 180) = 0.5×530 = 265
Setting j = 3 with each φ_j function identical, the final fused value is F = Σ_j φ_j(265) = 3×265^1.5 ≈ 12941.7.
This three-dimensional view provides more detailed anatomical information than a single modality image, enabling the physician to more clearly see the patient's nerves, blood vessels, and other vital tissue.
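The fusion at the example pixel can be computed directly, assuming φ_j(x) = x^1.5 and identical terms for j = 1…3 as stated; a minimal sketch:

```python
def fuse_modalities(x_us, x_ct, x_mri, phi_scale=0.5, n_terms=3):
    """psi_j = Phi_j * (X_US + X_CT + X_MRI) with Phi_j = phi_scale,
    then F = sum_j phi_j(psi_j) with phi_j(x) = x ** 1.5, all j identical."""
    psi = phi_scale * (x_us + x_ct + x_mri)
    return psi, n_terms * psi ** 1.5

# Example pixel: ultrasound 150, CT 200, MRI 180.
psi, f = fuse_modalities(150, 200, 180)
```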
Example 5:
The embodiment uses a deep learning multi-modal data fusion network MMF-NN to enhance the detail richness of the obtained three-dimensional model, and the used formula is as follows:
wherein Θ denotes the network parameters, η is the learning rate (typically 0.001 to 0.01), N is the number of images, x_i,m is an input image from modality m (ultrasound US, MRI or CT), y_i is the target structure label, and f_m denotes the feature extraction function for modality m.
Multimodal image data including ultrasound, MRI and CT are acquired from the patient's lumbar region. Each imaging technique was set to provide 30 images, respectively. The images of each imaging technique are processed by their corresponding feature extraction functions f m to identify and label nerves, blood vessels, and other critical structures.
The initial value of Θ is set to a random value, and the learning rate is set to η = 0.005.
The network parameters are updated by the formula during iteration, optimizing the predictive capability of the model and enhancing the detail richness of the three-dimensional model. In a certain iteration, for an ultrasonic image x_i,US and the corresponding MRI image x_i,MRI and CT image x_i,CT, the feature extraction results are f_US(x_i,US) = 0.8, f_MRI(x_i,MRI) = 0.85 and f_CT(x_i,CT) = 0.9.
Assuming the average contribution after the simplified gradient calculation is 0.3, the parameter update step is η×0.3 = 0.005×0.3 = 0.0015.
The multi-mode data fusion network for deep learning enables doctors to rely on finer and comprehensive three-dimensional models to position and navigate the needle heads in the operation, and the success rate and the safety of the operation are remarkably improved. In real-time operation, doctors can observe detail changes in real time, adjust operation strategies, ensure accurate operation of the surgical needle, and reduce risks and discomfort of patients.
The embodiments further employ feature extraction and matching algorithms to ensure precise alignment and feature matching between ultrasound US, magnetic resonance imaging MRI and computed tomography CT images using the following algorithm formulas:
Wherein Ω_m represents the set of images from modality m, ω(x) is a weight function for adjusting the contribution of features across the different imaging techniques, δ is the Dirac function controlling the overlap of the modal images, and μ is a function that computes the cross-modal feature center.
Ultrasound, MRI and CT images of the patient's lumbar region are collected. Each imaging technique provides a detailed image of the internal structure. Firstly, image registration is carried out by using an image processing technology, so that the alignment of images of different modes on space coordinates is ensured. A feature extraction algorithm is then applied to the images of each modality, identifying critical anatomical structures, including nerves and blood vessels. And then calculating the corresponding relation and the center position of each feature in different modes by using a feature matching algorithm so as to realize high-precision feature alignment.
The three mode images are set to show a key blood vessel in a specific area. The position in the ultrasound image x US is 100, the position in the MRI image x MRI is 102, and the position in the CT image x CT is 101.
The weighting function ω(x) is set to be uniformly distributed, i.e. all features contribute equally. Calculating the cross-modal feature center: μ = (100 + 102 + 101)/3 = 101.
Feature matching calculation:
Ω_aligned = ∫_{{100,102,101}} ω(x)·δ(x − 101) dx
the use of dirac function delta ensures that only fully overlapping features are represented, which in practice indicates that all modalities are highly consistent at location 101.
By the feature extraction and matching algorithm, accurate alignment of ultrasonic, MRI and CT images in operation can be ensured, and a comprehensive and unified view is provided, so that operation navigation is more accurate.
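The feature-center and overlap test can be sketched discretely; the tolerance-based stand-in for the Dirac overlap is an illustrative assumption:

```python
def feature_center(positions):
    """Cross-modal feature center mu: mean of the feature positions found
    in each modality (uniform weight function omega)."""
    return sum(positions) / len(positions)

def matched(positions, center, tol=1.5):
    """Discrete stand-in for the Dirac overlap test: keep modalities whose
    feature position lies within tol of the center."""
    return [p for p in positions if abs(p - center) <= tol]

# Vessel positions in the US, MRI and CT images from the worked example.
mu = feature_center([100, 102, 101])
hits = matched([100, 102, 101], mu)
```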
Example 6:
With reference to fig. 4, the embodiment incorporates a real-time position sensing technique, in combination with an electromagnetic tracking system EMT, to monitor and communicate needle position information in real time. Updating and adjusting the three-dimensional tissue model by using an integrated electromagnetic tracking system and a real-time data processing algorithm through the following algorithm:
Where Θ represents a parameter of the three-dimensional model, γ is an adjustment factor, generally set in the range 0.001 to 0.01 to ensure stability, L is a loss function for measuring the deviation between the model and the electromagnetic tracking data, and EMT_t represents the electromagnetic tracking data at time t.
The electromagnetic sensors are mounted on the needle and the surgical field to ensure that all motion and position information is captured and transmitted in real time. Each movement and change in position of the needle is recorded and synchronized in real time into a three-dimensional tissue model, with the model continually updated to reflect the actual surgical scene.
The initial parameter Θ old =1.0 is set.
The loss function L represents the deviation between the actual position of the needle and the position predicted by the model; this deviation is calculated each time the needle position is updated.
At a certain time point t, the electromagnetic tracking data EMT_t captures that the needle deviates from the preset path, and the calculated gradient of the loss function is 0.2; the model parameter is then updated as Θ_new = Θ_old − γ×0.2.
Through the real-time position sensing technology, doctors can monitor the needle position in real time in the operation process, accurately control the movement of the needle, and ensure the needle to be carried out according to a preset path.
The embodiment further applies a dynamic three-dimensional path planning and feedback mechanism to perform path correction according to real-time data so as to ensure that the needle head safely reaches a target area according to a preset path. The path correction algorithm used is as follows:
wherein Π represents the path planning parameter, η is the learning rate, set in a range of 0.001 to 0.01 to ensure a stable learning process; ŷ_path is the predicted path output, y_path is the actual needle path, and ∂ŷ_path/∂Π is the gradient of the path output with respect to the planning parameter.
The electromagnetic tracking system monitors the position information of the needle in real time, acquiring the actual path y_path of the needle at each moment during the operation, while an algorithm predicts the needle path ŷ_path based on the current three-dimensional model and the electromagnetic tracking data; the path planning parameter Π is then adjusted according to real-time feedback. Setting the initial parameter Π_old = 2.0, the predicted path output ŷ_path = 1.8, the actual path y_path = 2.0, the gradient of the path output with respect to the planning parameter 0.5, and the learning rate η = 0.005, then:
Πnew=2.0+0.005∫(1.8-2.0)2·0.5dxpath
since (1.8-2.0) 2 =0.04, the integral part is calculated:
Πnew=2.0+0.005·0.04·0.5=2.0+0.0001=2.0001
Through the dynamic three-dimensional path planning and feedback mechanism, a doctor can continuously correct the path of the needle head according to real-time data in the operation process, and the needle head can safely reach a target area according to a preset path.
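The path correction reduces to one line once the path integral is collapsed to a scale factor, as in the simplification above; a minimal sketch:

```python
def correct_path(pi_old, eta, y_pred, y_actual, grad, integral_scale=1.0):
    """Pi_new = Pi_old + eta * (y_pred - y_actual)**2 * grad, with the
    integral over the path collapsed to a scale factor."""
    return pi_old + eta * (y_pred - y_actual) ** 2 * grad * integral_scale

# Worked example: Pi=2.0, eta=0.005, predicted 1.8 vs actual 2.0, grad 0.5.
pi_new = correct_path(2.0, 0.005, 1.8, 2.0, 0.5)
```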
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that the above embodiments and descriptions are merely illustrative of the principles of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined in the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.
Claims (6)
1. Firstly, preprocessing ultrasonic image data, removing image noise by adopting a noise reduction algorithm based on deep learning, and enhancing the visibility of key structures including nerves and blood vessels by using a local contrast enhancement algorithm;
secondly, performing feature extraction by deep learning, identifying and segmenting nerve, blood vessel and muscle tissues in the image, and simultaneously integrating data of different ultrasonic modes to extract tissue features;
Then optimizing the processing flow through a recursion learning algorithm and a self-adaptive algorithm, wherein the recursion learning algorithm adjusts the processing parameters according to the previous injection effect and the accuracy of ultrasonic image optimization feature extraction and image analysis, and the self-adaptive algorithm adjusts the processing parameters according to the body types and tissue characteristics of different patients;
finally, constructing a complete three-dimensional tissue model from a continuous two-dimensional ultrasonic image sequence by utilizing a volume reconstruction algorithm, and controlling the positioning and real-time navigation of the needle head by combining a real-time position sensing technology;
the preprocessing process for the ultrasonic image data comprises the following steps:
the image noise is removed by adopting a high-order partial differential equation with adaptively adjusted coefficients, the specific formula being as follows:
wherein N(x, y) is the value of the denoised pixel point, i and j are the differential orders, and λ and σ are parameters dynamically adjusted based on the image content;
then, for the visual enhancement of key structures including nerves and blood vessels, a superposition integral transformation is used, combining information from different scales and directions:
wherein Ω is the image domain, w_k is the weight of scale k, G_k is the Gaussian blur function, and ω(x, y) is a position-dependent weight function adjusting the locality of the enhancement;
neural and vascular segmentation is then controlled using a network architecture containing local and global information feedback:
wherein F_i(x, y) is a feature extracted by the deep neural network, and K is a kernel function for enhancing spatial context information in the image;
finally, aiming at the image contrast difference of different patients, a dynamic adjustment model which depends on image gradient and local brightness characteristics is designed:
wherein ∇² is the Laplace operator, representing the second derivative of the image and used to detect boundaries and details; γ(x, y) is a parameter adjusted according to the image characteristics; μ(I) is the global average intensity of the image; and θ is a parameter controlling the contrast adjustment intensity;
The method for controlling the positioning and the real-time navigation of the needle head comprises the following steps:
Firstly, implementing an enhanced volume reconstruction technology based on machine learning, and identifying and constructing three-dimensional models of nerves, blood vessels and surrounding tissues by analyzing a continuous two-dimensional ultrasonic image sequence;
Then, comprehensively analyzing the ultrasonic image and other imaging technology data including MRI or CT through a data fusion technology to enhance the detail richness of the obtained three-dimensional model;
and finally, combining a real-time position sensing technology, adopting an electromagnetic tracking system to monitor and transmit the position information of the needle in real time, synchronizing it with the three-dimensional tissue model, and controlling a real-time navigation system to accurately guide the needle along a preset path safely to the target area.
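The adaptive high-order PDE denoising named in claim 1 can be sketched as follows. The patent's actual formula is an image not reproduced in this text, so the fourth-order (biharmonic) diffusion form, the variance-driven adaptive coefficient `lam`, and all function names below are illustrative assumptions, not the patented update rule:

```python
import numpy as np

def pde_denoise(img, iterations=30, dt=0.01):
    """Illustrative high-order PDE denoiser with an adaptively
    adjusted coefficient (assumed form: explicit fourth-order
    diffusion, damped where local detail is strong)."""
    u = img.astype(float).copy()

    def laplacian(a):
        # 5-point discrete Laplacian with periodic boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(iterations):
        # adaptive coefficient: smooth less where the local second
        # derivative is large relative to the global spread
        lam = 1.0 / (1.0 + np.abs(laplacian(u)) / (u.std() + 1e-8))
        u -= dt * lam * laplacian(laplacian(u))  # 4th-order term
    return u
```

The explicit time step `dt` is kept small so the biharmonic update stays numerically stable.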
2. The method for processing data guided by nerve block anesthesia ultrasound according to claim 1, wherein the construction process of feature extraction by deep learning comprises the following steps:
firstly, adopting multi-modal data fusion and applying the function:
integrating the data of the different ultrasonic modalities M_i, wherein K_i(x, y) is a convolution kernel designed for modality M_i, σ_i adjusts the spatial scale, and α_i is a weight obtained through training;
then, performing hierarchical feature extraction through a deep learning model, applying the formula:
the output of each network layer L_j is subjected to a weighted higher-order differentiation, wherein β_j is a hierarchical weight and the high-order Laplacian operator is used to detect multi-level boundaries and details in the image;
and finally, segmenting with an end-to-end deep learning model, applying the formula:
wherein V(u, v) is a transformation kernel obtained through training, used for extracting key tissue features in image I.
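The weighted multi-modal integration of claim 2 (a sum over modalities M_i, each filtered by its own kernel K_i at scale σ_i with trained weight α_i) can be sketched as below. The Gaussian form of K_i and every name here are assumptions for illustration; the patent's formula itself is not reproduced on this page:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian filter built from a truncated 1-D kernel
    (stands in for the modality-specific kernel K_i)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    conv = lambda m: np.convolve(m, k, mode='same')
    tmp = np.apply_along_axis(conv, 0, img.astype(float))
    return np.apply_along_axis(conv, 1, tmp)

def fuse_modalities(modalities, sigmas, alphas):
    """Weighted sum of per-modality filtered images: the trained
    weights alpha_i are normalized, then each modality image is
    convolved at its own scale sigma_i and accumulated."""
    alphas = np.asarray(alphas, float)
    alphas = alphas / alphas.sum()
    out = np.zeros_like(np.asarray(modalities[0], float))
    for m, s, a in zip(modalities, sigmas, alphas):
        out += a * gaussian_blur(np.asarray(m), s)
    return out
```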
3. The method for processing data guided by nerve block anesthesia ultrasound according to claim 1, wherein the recursive learning algorithm optimizes feature extraction by using the formula:
wherein θ_t is the model parameter at time t, η is the adaptive learning rate, y_t is the surgical effect data, ŷ_t is the effect predicted by the model, p(y_t | x_t; θ_t) is the conditional probability density function, and x_t is the real-time ultrasound image;
then, an adaptive algorithm is applied to adjust the image processing parameters according to the body type and tissue characteristics of the patient, using the formula:
wherein P_original is the original image processing parameter, δ is the adjustment intensity, α_k and β_k are adjustment coefficients, D_k(s) represents the k-th feature extracted from the ultrasound image of the patient, and μ_k is the expected value of that feature;
finally, a real-time feedback and parameter optimization mechanism is realized, and the formula is used:
wherein θ_current and θ_new are the model parameters before and after adjustment, γ is a real-time adjustment factor, y_observed and y_predicted are the intra-operatively observed and model-predicted results, respectively, and φ(x_data, θ) is a parameterized feature extraction function.
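The two updates of claim 3 reduce to a gradient-style correction and a feature-deviation scaling. Since the patent's formulas are unreproduced images, the exact update forms below (and both function names) are assumed generic instances of what the symbol lists describe:

```python
import numpy as np

def recursive_update(theta, eta, y_obs, y_pred, grad):
    """One recursive-learning step: move the parameter against the
    prediction error (assumed form theta - eta*(y_pred - y_obs)*grad)."""
    return theta - eta * (y_pred - y_obs) * grad

def adapt_parameters(p_original, delta, alphas, features, mus):
    """Adaptive parameter adjustment: scale the original processing
    parameter P_original by the weighted deviation of the patient's
    features D_k from their expected values mu_k."""
    deviation = sum(a * (d - m) for a, d, m in zip(alphas, features, mus))
    return p_original * (1.0 + delta * deviation)
```

Iterating `recursive_update` with the model's own prediction as feedback drives the parameter toward the observed surgical effect, which is the closed-loop behavior the claim describes.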
4. The method for processing data guided by nerve block anesthesia ultrasound according to claim 1, characterized in that the enhanced volume reconstruction technique employs the following method:
first, a volume reconstruction technology based on deep learning is implemented, using a convolutional neural network (CNN) with the following formula:
identifying and predicting the three-dimensional structures of nerves and blood vessels from continuous two-dimensional ultrasonic images, wherein Θ represents the network parameters, α the learning rate, N the number of images, x_i the input image, y_i the target structure label, and f(x_i) the feature extraction function of image x_i;
then, the ultrasonic image sequence is processed by a long short-term memory network (LSTM), with the following formula:
analyzing the temporal relationship between the images and adjusting the algorithm to match patient-specific tissue changes, wherein H_t is the hidden state at time t, X_t is the input image, W and b are learning parameters, σ_s is the activation function, κ(s, t) is a kernel function modeling the dependency of H_t on the previous state H_s, and Ω is the set of past states;
finally, combining the ultrasound data with other imaging techniques including CT or MRI through a multi-modal data fusion network, using the formula:
creating a comprehensive three-dimensional view, wherein X_US, X_CT and X_MRI represent the ultrasound, CT and MRI image data respectively, Φ_j represents the fusion parameters, φ_j and ψ_j are the functions for processing the j-th dimensional data, and Ω is the entire data field.
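The geometric core of claim 4's volume reconstruction, building a 3-D volume from a continuous 2-D slice sequence, can be sketched without the (unreproduced) CNN/LSTM machinery. Plain linear interpolation between consecutive slices is assumed here purely to illustrate the stacking step; the patent's learned components would replace the blending weights:

```python
import numpy as np

def reconstruct_volume(slices, upsample=2):
    """Stack consecutive 2-D slices into a 3-D volume, inserting
    `upsample - 1` linearly interpolated slices between each pair
    (a stand-in for the learned inter-slice prediction)."""
    slices = [np.asarray(s, float) for s in slices]
    vol = []
    for a, b in zip(slices[:-1], slices[1:]):
        for t in range(upsample):
            w = t / upsample
            vol.append((1 - w) * a + w * b)  # linear blend of neighbors
    vol.append(slices[-1])                   # keep the final slice
    return np.stack(vol)
```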
5. The method for processing data guided by nerve block anesthesia ultrasound according to claim 1, wherein the method for enhancing the detail richness of the three-dimensional model comprises the following steps:
firstly, implementing a multi-modal data fusion network MMF-NN based on deep learning and applying the algorithm:
identifying and predicting the three-dimensional structure of tissue from ultrasound (US), MRI and CT images, wherein Θ represents the network parameters, η is the learning rate, N is the number of images, x_{i,m} is an input image from modality m, y_i is the target structure label, and f_m represents the feature extraction function for modality m;
then, feature extraction and matching algorithms are applied, by the formula:
achieving alignment and feature matching of the ultrasound (US), MRI and CT images, wherein the image-set term denotes the images from modality m, ω(x) is a weight function adjusting the contribution of features across the different imaging techniques, δ is the Dirac function controlling the overlap of the modal images, and μ is a function computing the cross-modal feature center.
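Claim 5's alignment step can be illustrated with phase correlation, a standard rigid-registration primitive chosen here only as a stand-in for the patent's unreproduced matching formula (it recovers a pure translation between two images):

```python
import numpy as np

def phase_correlate(fixed, moving):
    """Estimate the integer translation that maps `moving` back onto
    `fixed` via the normalized cross-power spectrum (valid for a
    circular shift between the two images)."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross = F * np.conj(M)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = fixed.shape                       # wrap into signed range
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Applying `np.roll` with the returned shift to `moving` reproduces `fixed`, which is the alignment property the claim's matching step requires.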
6. The method for processing data guided by nerve block anesthesia ultrasound according to claim 1, characterized in that the real-time position sensing technique comprises the following steps:
firstly, integrating an electromagnetic tracking system (EMT) with a real-time data processing algorithm, applying the algorithm:
updating and adjusting the three-dimensional tissue model, wherein Θ represents the parameters of the three-dimensional model, γ is an adjustment factor, L is a loss function measuring the deviation between the model and the electromagnetic tracking data, and EMT_t represents the electromagnetic tracking data at time t;
then, a dynamic three-dimensional path planning and feedback mechanism is applied, performing path correction according to the real-time data using the algorithm:
wherein π represents the path planning parameter, η is the learning rate, ŷ_path is the predicted path output, y_path is the actual needle path, and the gradient term is the gradient of the path output with respect to the planning parameters.
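The symbol list of claim 6 describes a gradient-style correction π ← π − η(ŷ_path − y_path)∇ŷ_path. A toy closed-loop sketch of that feedback is below; the concrete update form, the scalar gradient, and both function names are assumptions, since the patent's algorithm itself is an unreproduced formula:

```python
import numpy as np

def correct_path(pi, eta, y_pred, y_actual, grad):
    """One path-correction step: shift the planning parameter against
    the deviation between predicted and actual needle paths."""
    return pi - eta * (y_pred - y_actual) * grad

def navigate(start, target, steps=50, eta=0.3):
    """Toy closed loop: repeatedly correct the planned position toward
    the (EMT-measured) target, treating the current plan as the
    prediction and using a unit gradient."""
    pos = np.asarray(start, float)
    tgt = np.asarray(target, float)
    for _ in range(steps):
        pos = correct_path(pos, eta, pos, tgt, 1.0)
    return pos
```

Each iteration contracts the error by a factor of (1 − η), so the planned position converges geometrically onto the sensed target, mirroring the real-time navigation behavior the claim describes.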
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410978790.4A CN119006701B (en) | 2024-07-22 | 2024-07-22 | Data processing method for nerve block anesthesia ultrasonic guidance |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119006701A CN119006701A (en) | 2024-11-22 |
| CN119006701B true CN119006701B (en) | 2025-04-01 |
Family
ID=93489047
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410978790.4A Active CN119006701B (en) | 2024-07-22 | 2024-07-22 | Data processing method for nerve block anesthesia ultrasonic guidance |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119006701B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120298529B (en) * | 2025-04-09 | 2025-11-18 | 首都医科大学宣武医院 | Image Reconstruction Methods and Apparatus Based on Deep Learning |
| CN120411248B (en) * | 2025-07-03 | 2025-09-16 | 南昌大学第一附属医院 | Method and system for positioning blocking target for anesthesia based on multi-source data |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108471997A (en) * | 2015-10-28 | 2018-08-31 | 美敦力导航股份有限公司 | Apparatus and method for maintaining image quality while minimizing x-ray dose to a patient |
| CN118319486A (en) * | 2024-04-25 | 2024-07-12 | 浙江大学医学院附属邵逸夫医院 | An artificial intelligence-based image guidance system for cardiovascular interventional surgery |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105574820A (en) * | 2015-12-04 | 2016-05-11 | 南京云石医疗科技有限公司 | Deep learning-based adaptive ultrasound image enhancement method |
| CN113450294A (en) * | 2021-06-07 | 2021-09-28 | 刘星宇 | Multi-modal medical image registration and fusion method and device and electronic equipment |
| CN117853583A (en) * | 2024-01-12 | 2024-04-09 | 首都医科大学附属北京潞河医院 | Positioning method for guiding radiotherapy area based on multi-source image data fusion |
| CN117934354B (en) * | 2024-03-21 | 2024-06-11 | 共幸科技(深圳)有限公司 | An image processing method based on AI algorithm |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20230113154A1 (en) | Three-Dimensional Segmentation from Two-Dimensional Intracardiac Echocardiography Imaging | |
| CN119006701B (en) | Data processing method for nerve block anesthesia ultrasonic guidance | |
| US10699410B2 (en) | Automatic change detection in medical images | |
| JP6947759B2 (en) | Systems and methods for automatically detecting, locating, and semantic segmenting anatomical objects | |
| US20210059758A1 (en) | System and Method for Identification, Labeling, and Tracking of a Medical Instrument | |
| CN118319486B (en) | An artificial intelligence-based image guidance system for cardiovascular interventional surgery | |
| US20110019889A1 (en) | System and method of applying anatomically-constrained deformation | |
| CN110648358A (en) | Adaptive non-linear optimization of shape parameters for object localization in 3D medical images | |
| US20250281065A1 (en) | Methods and systems for precise quantification of human sensory cortical areas | |
| CN116883462A (en) | Medical image registration method based on LOFTR network model and improved particle swarm algorithm | |
| WO2017148502A1 (en) | Automatic detection of an artifact in patient image data | |
| CN113614788A (en) | Deep reinforcement learning for computer-aided reading and analysis | |
| CN119206038B (en) | Cardiac interventional surgery scene reconstruction method based on 3D ultrasound imaging rendering | |
| CN116797612B (en) | Ultrasound image segmentation method and device based on weakly supervised deep active contour model | |
| KR20230156940A (en) | How to visualize at least one region of an object in at least one interface | |
| CN120661240A (en) | Interventional ultrasonic method and system for minimally invasive surgery | |
| US20150278471A1 (en) | Simulation of objects in an atlas and registration of patient data containing a specific structure to atlas data | |
| CN117934689B (en) | A multi-tissue segmentation and three-dimensional rendering method for fracture CT images | |
| CN118135108B (en) | Navigation Error Correction Method and Device Based on 3D Reconstruction of Bony Structure | |
| CN119027585A (en) | Three-dimensional imaging reconstruction system for minimally invasive interventional surgery based on adaptive neural network | |
| CN120241253A (en) | Brain anatomical structure recognition robotic surgery method and system based on machine vision | |
| Malinda et al. | Lumbar vertebrae synthetic segmentation in computed tomography images using hybrid deep generative adversarial networks | |
| CN116725640B (en) | A method for constructing a body piercing printing template | |
| CN114187299A (en) | Efficient and accurate dividing method for ultrasonic positioning tumor images | |
| CN120411248B (en) | Method and system for positioning blocking target for anesthesia based on multi-source data |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||