WO2026030337A1 - Systems and methods for 4-dimensional dynamic visualization of the brain
- Publication number: WO2026030337A1 (PCT/US2025/039686)
- Authority: WO (WIPO PCT)
- Prior art keywords: ultrasound, brain, helmet, scanning
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abstract
A brain scanning system comprising a helmet configured to at least partially enclose the head of a subject, one or more scanning modules fixedly attached to the helmet, one or more ultrasound probes attached to each scanning module of the one or more scanning modules, at least one position tracker attached to each ultrasound probe of the one or more ultrasound probes and configured to record the relative position of each of the one or more ultrasound probes, and a computing system communicatively connected to each scanning module and configured to track the position of each ultrasound probe and to collect ultrasound data from the one or more ultrasound probes. Methods of 4D brain reconstruction are also described.
Description
SYSTEMS AND METHODS FOR 4-DIMENSIONAL DYNAMIC VISUALIZATION OF THE BRAIN
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application No. 63/676,632 filed on July 29, 2024, and U.S. Provisional Patent Application No. 63/677,384 filed on July 30, 2024, the contents of which are incorporated by reference herein in their entirety.
BACKGROUND
Point-of-care imaging technologies have the potential to improve screening and timely detection of disease through expanded pre-hospital solutions. (S. A. Boppart et al., Science Translational Medicine, vol. 6, no. 253, p. 253rv2, Sep. 2014.; J. C. Martinez-Gutierrez et al., Journal of NeuroInterventional Surgery, vol. 11, no. 11, pp. 1085-1090, Nov. 2019.) This can be particularly useful in emergency medical conditions such as heart attacks, trauma, or stroke. Portable technology already exists to quickly diagnose multiple serious medical conditions, such as ECG for heart attacks and the FAST (Focused Assessment with Sonography in Trauma) exam for abdominal trauma. However, the diagnosis of stroke, a condition that annually affects 800,000 people in the US alone (C. W. Tsao et al., Circulation, vol. 147, no. 8, pp. e93-e621, 2023), is dependent on a CTA or MRI scan, which can only be performed at a hospital.
The high prevalence of stroke, coupled with its profound impact on individuals and healthcare systems, underscores the urgency for improved diagnostic methods. Rapid recanalization and restoration of blood perfusion to the brain in ischemic large vessel occlusion stroke, or rapid reversal of coagulopathies in hemorrhagic stroke, are the most important determinants of outcome following stroke. Recent advancements in endovascular therapy (EVT) have substantially improved outcomes following stroke; however, delayed identification and transfer of large vessel occlusion (LVO) patients to EVT-capable hospitals remain a major challenge in the field. Current diagnostic standards primarily rely on CT or MRI scans, which, while effective, face limitations in terms of availability, cost, and time required for diagnosis. In remote or out-of-hospital settings, or in resource-limited facilities, these limitations can lead to critical delays in treatment, adversely affecting patient outcomes.
Further, intracerebral hemorrhage (ICH) results in over 100,000 patient visits to emergency departments (ED) annually in the USA and causes high morbidity and mortality with disproportionately high healthcare utilization and poor functional outcomes, especially in minority populations. (F. Al-Mufti et al., Interv Neurol, vol. 7, no. 1-2, pp. 118-136, 2018.; B. Ovbiagele and A. I. Qureshi, in Prehospital and Emergency Department Management of Intracerebral Hemorrhage: Concepts and Customs. Cham, 2018, pp. 1-16.; R. Sahni and J. Weinberger, Vasc Health Risk Manag, vol. 3, pp. 701-709, 2007.; Y. Hu, J. Wang, and B. Luo, J Zhejiang Univ Sci B, vol. 14, pp. 496-504, 2013.) The survival rate for ICH is approximately 50%, with 60-80% of patients remaining disabled at 6 months. (B. Ovbiagele and A. I. Qureshi, in Prehospital and Emergency Department Management of Intracerebral Hemorrhage: Concepts and Customs. Cham, 2018, pp. 1-16.; Y. Hu, J. Wang, and B. Luo, J Zhejiang Univ Sci B, vol. 14, pp. 496-504, 2013.) Hematoma expansion is a strong predictor of mortality and functional outcomes in ICH and occurs in most patients within the first 3 to 6 hours. (R. Al-Shahi Salman et al., Lancet Neurol, vol. 17, no. 10, pp. 885-894, 2018.) Several methods, such as blood pressure stabilization, pharmacological reversal of anticoagulation, osmotic therapy, or external ventricular drainage, are used to improve outcomes of ICH and reduce hematoma expansion. (J. Caceres et al., Emerg Med Clin North Am, vol. 30, no. 3, pp. 771-794, Aug 2012.; N. Yassi et al., Stroke Vasc Neurol, vol. 7, pp. 158-165, 2022.; L. Song et al., Trials, vol. 22, 2021.; Naidech et al., Int J Stroke, vol. 17, pp. 806-809, 2022.)
Moreover, in the case of ischemic strokes, an accurate diagnosis for the absence of an intracranial or subarachnoid hemorrhage is necessary to administer tissue plasminogen activator (tPA) in ischemic small vessel occlusion stroke patients. (W. J. Powers et al., Stroke, vol. 46, no. 10, pp. 3020-3035, Oct. 2015.) Recent advancements in endovascular therapy (EVT) have substantially improved outcomes following acute ischemic attacks; however, delayed identification and transfer of patients with large vessel occlusions to EVT-capable centers remain a major challenge in the field. (E. Venema et al., Stroke, vol. 51, no. 11, pp. 3310-3319, Nov 2020, epub 2020 Oct 7.; E. Brandler et al., J Am Coll Emerg Physicians Open, vol. 4, no. 5, p. e13048, Oct 2023, epub 2023 Oct 11.; K. Suyama et al., Fujita Medical Journal, vol. 8, no. 3, pp. 73-78, Aug 2022, epub 2021 Nov 25.; L. Schlemm et al., Stroke, vol. 49, no. 2, pp. 439-446, 2018.; E. Venema et al., Stroke, vol. 50, no. 4, pp. 923-930, 2019.)
During the past several decades, CT and MRI have become the most common modalities for detection of intracranial pathology. With emerging applications of point-of-care ultrasound in emergency medicine and critical care, there has been an increasing interest in exploring the use of cranial ultrasound in the evaluation of patients with suspected brain injury in situations in which CT/MRI is either not available or not feasible due to clinical reasons, such as patient instability. There have been investigations into the use of cranial ultrasound in diagnosis of intracerebral hemorrhage, subdural hemorrhage, hydrocephalus, tumors, and movement disorders, but there is currently no standard reference to describe the normal or abnormal B-mode sonographic appearance of the structures of the brain and skull.
Recent studies indicate the potential of ultrasound technology in detecting cerebrovascular emergencies. However, the application of ultrasound in stroke diagnosis is not yet widespread, partly due to a lack of research specifically focused on its efficacy in detecting various types of strokes, as well as image quality and specificity of current transcranial ultrasound techniques. The gap in evidence lies in the detailed understanding of a) how reliable and high-quality ultrasound images can be obtained, b) how effectively image reconstruction techniques can be utilized to differentiate between ischemic and hemorrhagic strokes, and c) how it can be applied rapidly and accurately in emergency settings.
Transcranial ultrasound, which includes transcranial Doppler (TCD), transcranial color-coded sonography (TCCS), and brain echography or cranial ultrasound, allows two-dimensional imaging of the brain parenchyma and intracranial vessels using a 1-3 MHz phased array transducer. (A. Sarwal, Lessons from the ICU, in: Robba et al., Eds. Cham: Springer International Publishing, 2023, pp. 275-290.; C. Henry et al., Journal of Neuroimaging, vol. 33, pp. 566-574, 2023.; C. Robba et al., Intensive Care Medicine, vol. 45, pp. 913-927, 2019.; R. Hakimi et al., Neurologic Clinics, vol. 38, no. 1, pp. 215-229, Feb. 2020.) Acquisition of B-mode images to identify the midbrain is the first step before vessel insonation for all transcranial ultrasound studies. (A. Sarwal, Lessons from the ICU, in: Robba et al., Eds. Cham: Springer International Publishing, 2023, pp. 275-290.) TCD or TCCS inherently include vessel Doppler insonation, while brain parenchyma assessment only requires B-mode images on ultrasound. (A. Sarwal, Lessons from the ICU, in: Robba et al., Eds. Cham: Springer International Publishing, 2023, pp. 275-290.) About 80-90% of the adult population have sufficiently thin temporal acoustic windows to permit ultrasound imaging. (T. Postert et al., Ultrasound in Medicine & Biology, vol. 23, no. 6, pp. 857-862, 1997.; M. Marinoni et al., Ultrasound in Medicine & Biology, vol. 23, no. 8, pp. 1275-1277, 1997.; M. Y.-M. Chan et al., Ultrasound in Medicine & Biology, vol. 49, pp. 588-598, 2023.) Together, the prior studies suggest the potential of ultrasound technology in detecting cerebrovascular emergencies. Neuro-ultrasound can be employed for patients in rural environments where traditional neuroimaging is inaccessible, for high-acuity patients who are not stable for CT or MRI transport, and to monitor patients who are at risk of neurological injury. (E. J. Sigman, F. J. Laghari, and A. Sarwal, Seminars in Ultrasound, CT and MRI, 2023.; A. Sarwal et al., The Ultrasound Journal, vol. 14, no. 1, p. 40, Oct. 2022.; S. Kapoor et al., Journal of Neuroimaging, vol. 32, no. 6, pp. 1013-1026, 2022.; C. Allen et al., Journal of Neuroimaging, vol. 33, no. 3, pp. 333-358, 2023.)
Over 10 published studies in >590 patients have reported high sensitivity and specificity of ICH diagnosis using cranial B-mode ultrasound, with sufficient reliability to distinguish ICH from ischemic stroke. (M. Maurer et al., Stroke, vol. 29, pp. 2563-2567, 1998.; W.-D. Niesen et al., Journal of Neuroimaging, vol. 28, pp. 370-373, 2018.; G. Becker et al., Journal of Neuroimaging, vol. 3, pp. 41-47, 1993.; A. Sarwal et al., The Ultrasound Journal, vol. 14, no. 1, p. 40, Oct. 2022.; A. Sarwal et al., Clin Pract Cases Emerg Med, vol. 2, no. 4, pp. 375-377, 2018.; N. Matsumoto et al., J Neuroimaging, 2011.; S. Kapoor et al., Journal of Neuroimaging, vol. 32, no. 6, pp. 1013-1026, 2022.; G. Seidel et al., Stroke, vol. 40, pp. 119-123, 2009.) Rapid accessibility to high-resolution CT scans, together with the lack of sensitivity and specificity of ultrasound in detecting ischemic stroke, prevented widespread adoption of this diagnostic modality despite evidence of feasibility and reliability. Dr. Sarwal generated exploratory preliminary data from an ongoing IRB-approved study at Wake Forest Baptist Medical Center on NICU patients (IRB00048743: Mapping the Natural History of Parenchyma and Cerebral Perfusion Changes in Acute Ischemic and Hemorrhagic Stroke) evaluating cranial point-of-care ultrasound (cPOCUS) performed by providers blinded to the CT scan data. Of the 13 enrolled patients in this study, 1 had no temporal windows and 7 patients had CT-proven ICH. cPOCUS correctly diagnosed ICH in all 7 patients with ICH and correctly ruled out ICH in 3 of the 5 patients without it. However, cPOCUS resulted in the false-positive identification of ICH in the remaining 2 patients, who did not have CT-proven ICH. In further analyzing the sources of these errors, it became apparent that imaging artifacts associated with hand scanning led to the false positives.
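Read plainly (and only as a back-of-the-envelope characterization, not a reported study endpoint), these counts imply that among the 12 patients with usable temporal windows, hand-held cPOCUS achieved a sensitivity of 7/7 (100%) and a specificity of 3/5 (60%), consistent with hand-scanning artifacts driving false positives rather than missed hemorrhages.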
TCD is regularly used in-hospital to continuously monitor patients with neurological conditions and is a proven technique for detecting emboli, (E. Ringelstein et al., Stroke, vol. 29, pp. 725-729, 1998.) vasospasm, (M. Saqqur et al., Critical Care Medicine, vol. 35, pp. S216-S223, 2007.) the presence of right-to-left shunt, (H. Katsanos et al., Annals of Neurology, vol. 79, no. 4, pp. 625-635, 2016.) and for the evaluation of ischemic strokes. (S. Sarkar et al., Postgraduate Medical Journal, vol. 83, pp. 683-689, 2007.) This scan is highly operator-dependent, requiring skilled sonographers, the unavailability of which may limit its use at some centers. (S. Purkayastha et al., Seminars in Neurology, vol. 32, no. 4, pp. 411-420, Sep. 2012.) Recently, robotic devices such as the NovaSignal NovaGuide (M. N. Rubin et al., Stroke, vol. 54, no. 11, pp. 2842-2850, 2023.) and Viasonix Dolphin TCD (R. Hakimi et al., Neurologic Clinics, vol. 38, no. 1, pp. 215-229, Feb. 2020.) have made finding the optimal scanning window faster or even autonomous. However, both these devices focus only on the temporal acoustic window for insonation of parts of the middle cerebral artery (MCA), anterior cerebral artery (ACA), and posterior cerebral artery (PCA). Moreover, the resulting Doppler waveform requires contextual knowledge of intracranial topography and the construction of a mental map by the sonographer to identify vessels, bifurcations, and collaterals. (K. Niederkorn et al., Stroke, vol. 19, no. 11, pp. 1335-1344, 1988.; B. Lindsey et al., Ultrasound in Medicine & Biology, vol. 39, no. 4, pp. 721-734, 2013.)
Many studies have proven the feasibility of B-mode ultrasound in the detection of ICH, (S. Kapoor et al., Journal of Neuroimaging, vol. 32, no. 6, pp. 1013-1026, 2022.; M. Woydt et al., Zentralblatt für Neurochirurgie, vol. 57, no. 3, pp. 129-135, 1996.; G. Becker et al., Journal of Neuroimaging, vol. 3, pp. 41-47, 1993.; N. Matsumoto et al., J Neuroimaging, 2011.; M. Maurer et al., Stroke, vol. 29, pp. 2563-2567, 1998.; M. Masaeli et al., Arch Acad Emerg Med, vol. 7, p. e53, 2019.; W.-D. Niesen et al., Journal of Neuroimaging, vol. 28, pp. 370-373, 2018.) hydrocephalus, (G. Becker et al., Ultraschall in der Medizin, vol. 12, no. 5, pp. 211-217, 1991.; H.-S. Wang et al., Pediatric Neurology, vol. 26, pp. 43-46, 2002.) midline shift, (E. J. Sigman, F. J. Laghari, and A. Sarwal, Seminars in Ultrasound, CT and MRI, 2023.; W. Ziai et al., Neurovascular Sonography. Switzerland: Springer International Publishing, 2022.) and tumors (H.-S. Wang et al., Pediatric Neurology, vol. 26, pp. 43-46, 2002.; G. Becker et al., Neuroradiology, vol. 36, pp. 585-590, 1994.; G. Becker et al., Ultrasound in Medicine & Biology, vol. 21, pp. 1123-1135, 1995.; G. Becker et al., Neurosurgery, vol. 44, pp. 469-477, 1999.; K. Meyer et al., Journal of Neuroimaging, vol. 11, pp. 287-292, 2001.) in cases where computed tomography may not be accessible or feasible.
A variety of studies were also successful in reproducibly discerning parenchymal structures like the midbrain, (T. Prell et al., Amyotroph Lateral Scler Frontotemporal Degener, vol. 15, pp. 244-249, 2014.; P. Bartova et al., Ultrasound Med Biol, vol. 40, pp. 2365-2371, 2014.; S. Hellwig et al., Eur J Neurol, vol. 21, pp. 860-866, 2014.; D.-H. Li et al., Parkinsonism Relat Disord, vol. 21, pp. 923-928, 2015.; U. Walter et al., Neurology, vol. 63, pp. 504-509, 2004.; F. Doepp et al., Movement Disorders, vol. 23, pp. 405-410, 2008.; M. Budišić et al., Acta Clinica Croatica, vol. 47, pp. 205-210, 2008.; M. Budišić et al., European Journal of Neurology, vol. 15, pp. 229-233, 2008.) lateral ventricles, (S. Hellwig et al., Eur J Neurol, vol. 21, pp. 860-866, 2014.; U. Walter et al., Neurology, vol. 64, pp. 1726-1732, 2005.; M. Woydt et al., Zentralblatt für Neurochirurgie, vol. 57, no. 3, pp. 129-135, 1996.; G. Becker et al., Journal of Neuroimaging, vol. 3, pp. 41-47, 1993.) third ventricle, (M. Budišić et al., European Journal of Neurology, vol. 15, pp. 229-233, 2008.; U. Walter et al., Neurology, vol. 63, pp. 504-509, 2004.; U. Walter et al., Neurology, vol. 64, pp. 1726-1732, 2005.) pineal gland, (M. Budišić et al., European Journal of Neurology, vol. 15, pp. 229-233, 2008.; M. Budišić et al., Acta Clinica Croatica, vol. 47, pp. 205-210, 2008.) and basal ganglia (F. Doepp et al., Movement Disorders, vol. 23, pp. 405-410, 2008.; A. Gaenslen et al., Lancet Neurology, vol. 7, pp. 417-424, 2008.; J. Hagenah et al., Journal of Neurology, vol. 254, pp. 1407-1413, 2007.) on B-mode ultrasound. However, its clinical use is limited or non-existent due to CT/MR technology advancements and also because there is almost no standard reference to describe the normal or abnormal B-mode sonographic appearance of the structures of the brain and skull. (S. Kapoor et al., Journal of Neuroimaging, vol. 32, no. 6, pp. 1013-1026, 2022.) Rapid availability of round-the-clock computed tomography, the need for provider presence to perform B-mode TCU by the bedside, and a lack of standardized resources for learning cranial anatomy and pathology have not allowed more widespread use of this modality. (B. C. Allen et al., Journal of Neuroimaging, vol. 33, no. 3, pp. 333-358, 2023.)
As shown in Fig. 1, the normal brain parenchyma has a gray appearance (hypoechoic) due to its ability to transmit ultrasound. The midbrain (shaped like a butterfly) and lateral ventricles can be visualized as distinct structures from brain parenchyma because their texture causes a different amount of reflection of ultrasound waves. In the acute phase, ICH appears as a homogeneous, sharply demarcated, and hyperechoic, bright white signal compared to the surrounding brain, which appears hypoechoic or relatively gray.
Other groups have used B-mode ultrasound for different applications. In particular, studies have shown efficacy for vascular reconstruction (J. Seeger et al., Annals of Surgery, vol. 205, no. 6, pp. 733-739, 1987.; K. Niederkorn et al., Stroke, vol. 19, no. 11, pp. 1335-1344, 1988.; B. Lindsey et al., Ultrasound in Medicine & Biology, vol. 39, no. 4, pp. 721-734, 2013.) and assessment of blood flow. (F. Galarce et al., Computer Methods in Applied Mechanics and Engineering, vol. 375, p. 113559, 2021.) Similarly, recent work has implemented B-mode ultrasound towards bone surface reconstruction and segmentation. (T. Karlita et al., Second International Workshop on Pattern Recognition, vol. 10443, 2017.; X. Wen et al., in 2007 IEEE Ultrasonics Symposium Proceedings, 2007, pp. 2535-2538.) Other studies have utilized it for brain tumor identification (F. Prada et al., Neurosurgical Focus, vol. 40, no. 3, p. E7, 2016.) and real-time intraoperative brain imaging at a reduced resolution. (D. Gobbi et al., vol. 4319. International Society for Optics and Photonics, 2000.) More recent work has focused on automation and robotic implementation, for instance using deep learning and freehand 3D reconstruction methods, (D. Gobbi et al., vol. 4319. International Society for Optics and Photonics, 2000.; Y. Yoon et al., IEEE Transactions on Medical Imaging, vol. 36, no. 12, pp. 2474-2484, 2017.) automating 3D reconstruction for carotid artery imaging, (K. Rosenfield et al., The American Journal of Cardiology, vol. 70, no. 3, pp. 379-384, 1992.) and developing a 3D-ultrasound robotic system. (S. Merouche et al., IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 63, no. 1, pp. 35-46, 2016.)
The visualization of intracranial anatomy on B-mode ultrasound has been a challenge due to several artifacts, as seen in Fig. 2, being produced by hyperechoic signals inherent in brain and skull anatomy when images are created using temporal windows. (S. Kapoor et al., Journal of Neuroimaging, vol. 32, no. 6, pp. 1013-1026, 2022.) Intraventricular hemorrhage can be detected as hyperechoic material within the ventricles, but false positives may occur due to the hyperechoic choroid plexus in the lateral ventricles, which may be indistinguishable from blood. (E. J. Sigman, F. J. Laghari, and A. Sarwal, Seminars in Ultrasound, CT and MRI, 2023.) Other intraparenchymal pathologies, like vasogenic edema, cisternal effacement, vascular tumors, leukoaraiosis, and leukoencephalopathies, can also cause hyperechoic signals, making them the most common mimics for hemorrhage on ultrasound and contributing to false-positive results. (S. Kapoor et al., Journal of Neuroimaging, vol. 32, no. 6, pp. 1013-1026, 2022.)
So far, the contralateral temporal window to the affected side is considered to provide the best view in 2D B-mode imaging, as artifacts on the ipsilateral side can obscure the dural border, a highly echogenic structure on ultrasound. The space between this hyperechoic linear structure and the hyperechoic skull constitutes the subdural space. (S. Kapoor et al., Journal of Neuroimaging, vol. 32, no. 6, pp. 1013-1026, 2022.; E. J. Sigman, F. J. Laghari, and A. Sarwal, Seminars in Ultrasound, CT and MRI, 2023.; B. C. Allen et al., Journal of Neuroimaging, vol. 33, no. 3, pp. 333-358, 2023.) Ultrasound has been shown to identify acute supratentorial ICH larger than 1 cm with a specificity of 95%-97% and a sensitivity of 78%-95%. The sensitivity is lower for ICH smaller than 1 cm, for ICH located in the high frontal lobe or high parietal lobe, and for subacute bleeds. (E. J. Sigman, F. J. Laghari, and A. Sarwal, Seminars in Ultrasound, CT and MRI, 2023.) This shows that handheld 2D cranial ultrasound at its current stage does not have enough sensitivity to rule out extra-axial hemorrhages, but it can be utilized as a rule-in evaluation when extra-axial pathology is seen in patients unable to travel to CT, or as a way of monitoring for serial changes in patients with known acute subdural hematoma (SDH) and epidural hematoma (EDH) that are visible on ultrasound. (E. J. Sigman, F. J. Laghari, and A. Sarwal, Seminars in Ultrasound, CT and MRI, 2023.)
Robotic point-of-care systems have the potential to drastically change emergency medicine. Such systems can automate tasks that would otherwise be impossible to accomplish in the back of an ambulance, given restrictions on how many EMS workers can fit in an ambulance and the limits of the training those individuals receive. Robotic systems may also allow doctors to perform telemedicine and decrease the time before patients receive the care they need.
Thus, there is a need in the art for a novel ultrasound-based high-resolution imaging system for immediate detection and differentiation of cerebrovascular abnormalities, providing a rapid, non-invasive, and cost-effective diagnostic tool.
SUMMARY OF THE INVENTION
Aspects of the present disclosure relate to a brain scanning system comprising a helmet configured to at least partially enclose the head of a subject, one or more scanning modules fixedly attached to the helmet, one or more ultrasound probes attached to each scanning module of the one or more scanning modules, at least one position tracker attached to each ultrasound probe of the one or more ultrasound probes and configured to record the relative position of each of the one or more ultrasound probes, and a computing system communicatively connected to each scanning module and configured to track the position of each ultrasound probe and to collect ultrasound data from the one or more ultrasound probes.
In some embodiments, the one or more ultrasound probes are configured to move relative to the one or more scanning modules. In some embodiments, the one or
more ultrasound probes are fixedly attached to the helmet. In some embodiments, the one or more ultrasound probes are ultrasound patches, and the helmet is an elastic, wearable interface. In some embodiments, the one or more scanning modules are positioned on the lateral sides of the helmet and configured to capture ultrasound images of the temporal regions of the brain. In some embodiments, the one or more scanning modules are positioned on the base of the helmet and configured to capture ultrasound images of the occipital region of the brain. In some embodiments, the system comprises at least three scanning modules. In some embodiments, the helmet further comprises a position tracker configured to track the position of the ultrasound probes of each scanning module. In some embodiments, the vertical position, the horizontal position, orientation, and tilt of each ultrasound probe of each scanning module may be adjusted via the computing system. In some embodiments, the helmet further comprises one or more proximity sensors to sense real-time movement of a patient undergoing a brain scan. In some embodiments, the computing system is configured to adjust the position and orientation of the ultrasound probes of each of the one or more scanning modules to account for real-time patient movement.
In some embodiments, the helmet has a size of 10-30 cm by 10-30 cm by 10-30 cm. In some embodiments, the helmet comprises a material selected from the group consisting of plastics, metals, metal alloys, polymers, fabrics, and combinations thereof. In some embodiments, the helmet further comprises one or more fiducial markers for image registration. In some embodiments, the system further comprises an ultrasound gel dispensing mechanism. In some embodiments, the system further comprises one or more contact elements movably attached to the helmet and configured to contact the patient’s head for head stabilization and mechanical registration. In some embodiments, the one or more contact elements are configured to contact the patient’s head along one or more axes comprising: the medial-lateral axis, the anterior-posterior axis, and the superior-inferior axis. In some embodiments, the system further comprises an electroencephalogram (EEG) or electrocardiogram (ECG or EKG) module. In some embodiments, the system further comprises one or more lasers removably attached to the helmet and configured to project a laser line on the patient’s head, wherein the laser line
intersects an anatomical landmark of the head. In some embodiments, the system is portable.
In some embodiments, the computing system comprises a processor and a non-transitory computer readable medium, wherein the non-transitory computer readable medium comprises instructions which, when executed by the processor, perform the steps of positioning and orienting each ultrasound probe of each scanning module for image acquisition, acquiring ultrasound images in at least two ultrasound modes from each ultrasound probe, recording the position of each ultrasound probe of each scanning module during image acquisition, sensing real-time patient movement via the one or more proximity sensors, and adjusting the position or angle of the ultrasound probes of each scanning module based on the patient movement.
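By way of a non-limiting illustration, the following Python sketch shows one way such instructions could be organized as an acquisition loop. All module, sensor, and tracker interfaces (move_to, movement_detected, acquire_frame, and so on) are hypothetical names introduced here for illustration only and are not part of the disclosure.

```python
import time

def scan_loop(modules, proximity_sensors, planned_poses, dwell_s=0.5):
    """Step each scanning module through its planned probe poses, pausing to
    compensate whenever the proximity sensors report head movement."""
    acquisitions = []
    for module, poses in zip(modules, planned_poses):
        for pose in poses:
            module.move_to(pose)                      # position and orient the probe
            if any(s.movement_detected() for s in proximity_sensors):
                offset = estimate_head_offset(proximity_sensors)
                module.apply_offset(offset)           # adjust for patient motion
            time.sleep(dwell_s)                       # let the probe settle and couple
            image = module.acquire_frame(mode="B")    # one B-mode frame
            probe_pose = module.tracker.read()        # 6-DOF pose at acquisition time
            acquisitions.append((image, probe_pose))
    return acquisitions

def estimate_head_offset(sensors):
    """Placeholder: fuse proximity readings into a head-displacement estimate;
    a real system would first calibrate the sensor geometry."""
    return [s.displacement_mm() for s in sensors]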
Aspects of the present disclosure relate to a method for 4D brain reconstruction comprising providing the system of claim 1, acquiring ultrasound images in a first ultrasound mode from each scanning module while acquiring positional data of the ultrasound probes of each scanning module via the position trackers, registering each ultrasound image in the first ultrasound mode with the probe positional data to generate a 3D structural reconstruction of the brain, acquiring ultrasound images in a second ultrasound mode from each scanning module while acquiring positional data of the ultrasound probes of each scanning module via the position trackers, registering each ultrasound image in the second ultrasound mode with the probe positional data to generate a vascular flow reconstruction of the brain, and overlaying the vascular flow reconstruction on the 3D structural reconstruction to obtain a 4D volumetric reconstruction of the brain.
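One plausible realization of the registration steps is freehand 3D compounding: each tracked 2D frame is mapped through the probe's spatial calibration and tracker pose into a shared voxel grid, and overlapping samples are averaged. The sketch below assumes that approach, with transforms expressed as 4x4 homogeneous matrices and an isotropic pixel spacing pix_mm; it is illustrative rather than the only reconstruction method contemplated.

```python
import numpy as np

def compound_volume(frames, poses, T_tracker_image, grid_shape, voxel_mm, pix_mm):
    """Scatter tracked 2D ultrasound frames into a common voxel grid.
    frames: list of HxW arrays; poses: matching 4x4 tracker->world transforms;
    T_tracker_image: 4x4 image->tracker spatial calibration."""
    vol = np.zeros(grid_shape, dtype=np.float32)
    cnt = np.zeros(grid_shape, dtype=np.float32)
    for img, T_world_tracker in zip(frames, poses):
        h, w = img.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        # pixels -> homogeneous points on the image plane (mm)
        pts = np.stack([us.ravel() * pix_mm, vs.ravel() * pix_mm,
                        np.zeros(h * w), np.ones(h * w)])
        world = T_world_tracker @ T_tracker_image @ pts   # into the world frame
        idx = np.round(world[:3] / voxel_mm).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(grid_shape)[:, None]), axis=0)
        np.add.at(vol, tuple(idx[:, ok]), img.ravel()[ok])
        np.add.at(cnt, tuple(idx[:, ok]), 1.0)
    return vol / np.maximum(cnt, 1.0)   # average where slices overlap
```

Running this once on frames from the first mode and once on frames from the second mode yields structural and vascular volumes in the same world frame, so the overlay step reduces to a voxel-for-voxel composition.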
In some embodiments, the first ultrasound mode is B-mode ultrasound, and the second ultrasound mode is Doppler ultrasound. In some embodiments, the one or more scanning modules acquire ultrasound images from the left temporal region, the right temporal region, the occipital region, the orbital region, the mandibular region, or any combinations thereof. In some embodiments, the method further comprises a step of mapping the 3D geometry of the patient’s skull via the one or more proximity sensors of the helmet. In some embodiments, the method further comprises a step of calibrating the
ultrasound probe position data. In some embodiments, the method further comprises a step of removing artifacts from the 4D reconstruction of the brain. In some embodiments, the method further comprises a step of tissue characterization. In some embodiments, the method further comprises the steps of measuring a heart cycle of a patient via an electrocardiogram (ECG), and triggering an acquisition of an ultrasonic image from at least one of the scanning modules at a consistent point in the heart cycle of the patient.
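As a minimal sketch of the ECG-gated variant (again with hypothetical device interfaces), one frame can be triggered at a fixed delay after each detected R-peak so that every frame samples the same phase of the cardiac cycle:

```python
import time

def ecg_gated_acquire(ecg, module, phase_delay_s=0.2, n_frames=32):
    """Trigger one ultrasound frame per heartbeat at a fixed offset after the
    R-peak, yielding frames from a consistent point in the heart cycle."""
    gated = []
    while len(gated) < n_frames:
        ecg.wait_for_r_peak()           # block until the QRS detector fires
        time.sleep(phase_delay_s)       # fixed offset into the cycle
        gated.append((module.acquire_frame(mode="Doppler"),
                      module.tracker.read()))
    return gated
```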
BRIEF DESCRIPTION OF THE DRAWINGS
The following detailed description of embodiments of the invention will be better understood when read in conjunction with the appended drawings. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities of the embodiments shown in the drawings.
Fig. 1 depicts exemplary B-mode ultrasound and computed tomography (CT) images of a brain showing intracerebral hemorrhage imaged from the contralateral temporal window.
Fig. 2A depicts an exemplary ultrasonic image showing a false-positive Intracerebral Hemorrhage (ICH) detection due to artifacts from a tumor.
Fig. 2B depicts an exemplary ultrasonic image showing a false-positive ICH detection due to artifacts from bone.
Fig. 3A depicts an exemplary perspective view of a brain scanning system.
Fig. 3B depicts an exemplary side view of a brain scanning system.
Fig. 3C depicts an exemplary helmet of the brain scanning system.
Fig. 3D depicts an exemplary perspective view of a scanning module of the brain scanning system.
Fig. 3E depicts an exemplary view of a scanning module of the brain scanning system.
Fig. 4 depicts exemplary transcranial windows that are used to obtain ultrasonic images.
Fig. 5 depicts a flow chart showing an exemplary method of performing a brain scan and obtaining a 4D volumetric reconstruction of the brain.
Fig. 6 depicts an exemplary flow diagram showing a workflow to combine volumetric reconstruction of the parenchyma and vasculature for a 4D image of the brain.
Fig. 7A depicts an exemplary EM tracker on a patient.
Fig. 7B depicts an exemplary tracked probe.
Fig. 7C depicts an exemplary temporal setup and spatial calibration hardware setup.
Fig. 8 depicts a flow diagram showing an exemplary method of obtaining a 4D image of the brain.
Fig. 9A depicts an exemplary raw ultrasonic image showing an axial view of major intracranial vessels reconstructed from Doppler scans.
Fig. 9B depicts an exemplary thresholding based segmented ultrasonic image showing an axial view of major intracranial vessels reconstructed from Doppler scans.
Fig. 10A depicts an exemplary reconstructed volumetric image of cerebral anatomy from B-mode scans for midline measurement.
Fig. 10B depicts an exemplary image of cerebral anatomy from B-mode scans with a midline measurement of 65 mm from a healthy volunteer whose skull width is 140 mm.
Fig. 11A depicts an exemplary raw reconstruction of the mid-brain anatomy from B-mode scans for midline measurement.
Fig. 11B depicts an exemplary raw reconstruction of the mid-brain anatomy from B-mode scans for midline measurement, where the orange outline shows the falx cerebri, the green shading depicts the thalamus, the red outline shows the choroid, and the pink outline shows the contralateral skull.
Fig. 12A depicts an exemplary image of reconstructed anatomy from B-mode scans for midline measurement.
Fig. 12B depicts an exemplary image of reconstructed mid-brain anatomy from B-mode scans.
Fig. 13 depicts an exemplary image of a 2 cm segment of a reconstructed Middle Cerebral Artery (MCA).
Fig. 14 depicts exemplary raw and segmented images of the major intracranial vessels reconstructed from Doppler scans.
Fig. 15 depicts an exemplary computing environment in which aspects of the present invention may be practiced.
Fig. 16A depicts an exemplary conventional method of measuring the midline shift of the brain using a 2D image slice.
Fig. 16B depicts an exemplary improved method of measuring the midline shift of the brain using a volumetric reconstruction.
Fig. 17 depicts an exemplary registration embodiment.
Fig. 18 depicts an exemplary registration embodiment.
Fig. 19 depicts an exemplary mechanical registration embodiment.
Fig. 20 depicts an exemplary registration embodiment.
Fig. 21 depicts an exemplary registration embodiment.
DETAILED DESCRIPTION
It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for the purpose of clarity many other elements found in related systems and methods. Those of ordinary skill in the art may recognize that other elements and/or steps are desirable and/or required in implementing the present invention. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the present invention, a discussion of such elements and steps is not provided herein. The disclosure herein is directed to all such variations and modifications to such elements and methods known to those skilled in the art.
It is noted that various embodiments are described in detail with reference to the drawings, in which like reference numerals represent like parts and assemblies throughout the several views. Reference to various embodiments does not limit the scope of the claims attached hereto. Additionally, any examples set forth in this specification are intended to be non-limiting and merely set forth some of the many possible embodiments for the appended claims. Further, particular features described herein can
be used in combination with other described features in each of the various possible combinations and permutations.
Unless otherwise specifically defined herein, all terms are to be given their broadest reasonable interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc. It is noted that as used in the specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless otherwise specified, and that the terms “includes” and/or “including”, when used in this specification, specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
Relative terms such as “horizontal”, “vertical”, “up”, “down”, “top”, and “bottom” as well as derivatives thereof (e.g. “horizontally”, “downwardly”, “upwardly”, etc.) should be construed to refer to the orientation as then described or shown in the drawing figure under discussion. These relative terms are for convenience of description and normally are not intended to require a particular orientation. Terms including “inwardly” versus “outwardly”, “longitudinal” versus “lateral” and the like are to be interpreted relative to one another or relative to an axis of elongation, or an axis or center of rotation, as appropriate. Terms concerning attachments, coupling, and the like, such as “connected” and “interconnected”, refer to a relationship wherein structures are secured or attached to one another either directly or indirectly through intervening structures, as well as both movable or rigid attachments or relationships, unless expressly described otherwise. The term “operably connected” is such an attachment, coupling, or connection that allows the pertinent structure to operate as intended by virtue of that relationship.
Reference throughout the specification to “one embodiment”, “an embodiment”, or “some embodiments” means that a particular feature, structure, or characteristic described in connection with an embodiment is included in at least one embodiment of the subject matter disclosed. Thus, the appearance of the phrases “in one embodiment”, “in an embodiment”, or “in some embodiments” in various places throughout the specification is not necessarily referring to the same embodiment. Further, the particular features, structures, or characteristics of “one embodiment”, “an embodiment”, or “some embodiments” may be combined in any suitable manner with each other to form additional embodiments of such combinations. It is intended that embodiments of the disclosed subject matter cover modifications and variations thereof. Terms such as “first”, “second”, “third”, etc., merely identify one of a number of portions, components, steps, operations, functions, and/or points of reference as disclosed herein, and likewise do not necessarily limit embodiments of the present disclosure to any particular configuration or orientation.
Moreover, throughout this disclosure, various aspects of the invention can be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6, etc., as well as individual numbers within that range, for example, 1, 2, 2.7, 3, 4, 5, 5.3, 6, and any whole and partial increments therebetween. This applies regardless of the breadth of the range. As used herein, the term “about” in reference to a measurable value, such as an amount, a temporal duration, and the like, is meant to encompass variations of plus or minus 20%, plus or minus 10%, plus or minus 5%, plus or minus 1%, and plus or minus 0.1% of the specified value, as such variations are appropriate.
The terms “patient,” “subject,” “individual,” and the like are used interchangeably herein, and refer to any animal amenable to the systems, devices, and methods described herein. The patient, subject, or individual may be a mammal, and in some instances, a human.
Aspects of the present invention relate to a wearable brain-imaging system configured to automate image acquisition to acquire images from the brain in multiple regions, positions, and angles. In some embodiments, the brain-imaging system utilizes a non-invasive, non-ionizing imaging technique. In some embodiments, the brain-imaging system is configured for 4D reconstruction of the brain. As defined herein, “4D reconstruction” refers to a 3D volumetric reconstruction of the brain’s structures and the
brain’s vasculature overlaid with the blood flow dynamics (hemodynamics). In some embodiments, each of these elements (structures, vasculature, hemodynamics) may be visualized independently or together. In some embodiments, the brain-imaging system is configured for the detection and differentiation of cerebrovascular abnormalities. In some embodiments, the brain-imaging system is configured for diagnosis of brain injuries or diseases. In some embodiments, the brain-imaging system is configured for intracerebral hemorrhage (ICH) diagnosis. In some embodiments, the brain-imaging system is configured to be portable, and for use in resource-limited and austere settings for screening high-risk patients to initiate appropriate care for the treatment and/or prevention of complications. In some embodiments, the brain-imaging system is configured for use in ambulances, point-of-care or pre-hospital settings, or outside conventional clinical settings.
Referring now to Figs. 3A and 3B, shown is a top perspective view (Fig. 3A) and a side view (Fig. 3B) of an exemplary brain-imaging system 100 (hereinafter “system 100”). System 100 generally comprises a helmet 102 configured to fit at least the top portion of a patient’s head, and one or more scanning modules 104 fixedly attached to the helmet 102 at positions corresponding to temporal and sub-occipital areas of the patient’s head. In some embodiments, each scanning module 104 comprises a movable ultrasound probe or device configured to acquire ultrasound images of the brain from each of the temporal and sub-occipital areas. In some embodiments, each scanning module 104 is communicatively connected to a controller, the controller configured to automate the motion of the ultrasound probe for image acquisition.
Referring now to Fig. 3C, shown is a side perspective view of an exemplary helmet 102 for use with the system 100. In some embodiments, the helmet 102 comprises a base 106 and lateral and back sides extending from the base. The lateral sides form a generally U-shape, and the back side has a generally concave shape configured to conform to the contour of the top of a patient’s head. The lateral and back sides define an inner cavity configured to fit a patient’s head and to at least partially enclose the top portion of the head. In some embodiments, the helmet 102 has a front side with an opening providing access to the inner cavity. In some embodiments, the helmet 102 may have any shape configured to conform to the contour of a patient’s skull.
In some embodiments, the helmet 102 comprises two temporal windows 108, one on each lateral side, wherein the temporal windows 108 are openings in the lateral sides of the helmet 102. In some embodiments, the helmet 102 further comprises a sub-occipital window 110 on the bottom side, adjacent to the base 106. In some embodiments, and referring back to Figs. 3A and 3B, the one or more scanning modules 104 are configured to be attached proximate to or within the temporal and sub-occipital windows 108 and 110 such that the scanning modules 104 are proximate to or in contact with the skull. In some embodiments, the one or more scanning modules 104 may be attached to the helmet via any attachment or fixture known to one of skill in the art. For example, the scanning modules 104 may comprise tabs configured to mate with corresponding grooves or slots on the helmet 102. In other embodiments, the scanning modules 104 may be attached via screws, bolts, magnets, snap-fit mechanisms, glues, adhesives, and the like. In some embodiments, the one or more scanning modules 104 may be fixedly and removably attached to the helmet 102. In some embodiments, the base 106 comprises a groove or slot configured for the insertion and attachment of a headrest 112. In some embodiments, the headrest 112 is configured such that the patient’s head and neck may be comfortably supported while the system 100 performs a brain scan. In some embodiments, the base 106 further allows for the attachment of a scanning module 104 at the sub-occipital window 110.
Referring back to Fig. 3C, the helmet 102 further comprises one or more proximity sensors 114 positioned on the inner surface of the helmet 102. In some embodiments, the one or more proximity sensors may be positioned on the lateral inner surface of the helmet 102, adjacent the temporal windows 108. In some embodiments, the one or more proximity sensors may be positioned on the top inner surface of the helmet 102. In some embodiments, the one or more proximity sensors may be LiDAR sensors, time-of-flight (ToF) sensors (e.g. STMicroelectronics VL53L4CD), ultrasonic sensors, infrared (IR) sensors, capacitive sensors, inductive sensors, spring-loaded linear potentiometers, rack-and-pinion or lead screw encoders, optical displacement sensors with spring coupling, magnetic linear encoders (e.g. Hall-effect based sensors), Bowden cable-based displacement sensors, force-sensitive resistors (FSRs) with spring preload, or mechanical limit switches with deflection feedback. In some embodiments, the base 106 of helmet 102 further comprises at least one position tracker. In some embodiments, the at least one position tracker may be an electromagnetic (EM) tracker (e.g. Polhemus Viper, NDI Aurora), an optical tracker using infrared (IR) fiducials (e.g. NDI Polaris, VICON), a stereovision or RGB-D camera system (e.g. Intel RealSense, ZED, Microsoft Kinect, Orbbec Astra, FLIR Bumblebee, TracInnovations Tracoline), an inertial measurement unit (IMU) (e.g. XSens), an ArUco or QR marker-based vision tracking system, a passive mechanical arm with encoders (e.g. a miniaturized FARO arm), or marker-based surface registration using LiDAR or other mechanical proximity sensors. In some embodiments, the helmet 102 further comprises any registration mechanism as described in more detail in the Registration Methods section below. In some embodiments, the one or more proximity sensors and the at least one position tracker are configured to track patient head movements in real-time such that the positioning of the ultrasound probes may be adjusted to account for changes in the position of the head.
In some embodiments, the helmet 102 has inner dimensions sized to comfortably fit the head of a patient. In some embodiments, the helmet 102 may be sized to fit the head of an adult or a child. In some embodiments, the helmet 102 may have a length ranging between about 10 cm and 30 cm, between about 15 cm and 25 cm, between about 16 cm and 24 cm, between about 17 cm and 23 cm, between about 18 cm and 22 cm, between about 19 cm and 21 cm, about 10 cm, about 15 cm, about 16 cm, about 17 cm, about 18 cm, about 19 cm, about 20 cm, about 21 cm, about 22 cm, about 23 cm, about 24 cm, about 25 cm, or about 30 cm. In some embodiments, the helmet 102 may have a width ranging between about 10 cm and 30 cm, between about 15 cm and 25 cm, between about 16 cm and 24 cm, between about 17 cm and 23 cm, between about 18 cm and 22 cm, between about 19 cm and 21 cm, about 10 cm, about 15 cm, about 16 cm, about 17 cm, about 18 cm, about 19 cm, about 20 cm, about 21 cm, about 22 cm, about 23 cm, about 24 cm, about 25 cm, or about 30 cm. In some embodiments, the helmet 102 may have a depth ranging between about 10 cm and 30 cm, between about 15 cm and 25 cm, between about 16 cm and 24 cm, between about 17 cm and 23 cm, between about 18 cm and 22 cm, between about 19 cm and 21 cm, about 10 cm, about 15 cm, about 16 cm, about 17 cm, about 18 cm, about 19 cm, about 20 cm, about 21 cm, about 22 cm, about 23 cm, about 24 cm, about 25 cm, or about 30 cm.
In some embodiments, the helmet 102 may comprise any suitable material, for example, but not limited to plastics, metals, metal alloys, polymers, fabrics, carbon fiber, glass composites, graphite composites, acrylic, paper-phenolic, and the like, or any combinations thereof. In some embodiments, the helmet 102 may comprise a radiolucent material including, but not limited to carbon fiber, titanium, aluminum, glass composites, epoxy, or any combinations thereof. In some embodiments, the material further comprises embedded radiopaque fiducial markers.
Referring now to Fig. 3D, shown is a side perspective view of an exemplary scanning module 104. In some embodiments, the one or more scanning modules 104 each comprise a frame 202 and an ultrasound probe 204 movably attached to the frame 202. In some embodiments, each scanning module 104 may comprise one or more ultrasound probes. In some embodiments, the ultrasound probe 204 may be a linear probe, a curvilinear probe, or a phased array probe. In some embodiments, the ultrasound probe 204 may be an ultrasound patch. In some embodiments, the frame 202 may be of any suitable shape, including but not limited to, a square, a rectangle, a circle, an oval, or an irregular shape. In some embodiments, the frame 202 encloses a window or opening. In some embodiments, the ultrasound probe 204 is movably attached to the frame 202 via a gimbal 206. In some embodiments, the gimbal 206 is connected to the frame 202 via a vertical sliding mechanism 208 and a horizontal sliding mechanism 210. In some embodiments, the sliding mechanisms 208 and 210 may be rods positioned vertically and horizontally (respectively) across the frame 202 to which the gimbal 206 is slidably attached. In some embodiments, the sliding mechanisms 208 and 210 may be actuated via an applied force, a rotational screw mechanism, a motor, pneumatic actuators, a lead screw, and/or a rack and pinion drive. In some embodiments, the vertical and horizontal sliding mechanisms 208 and 210 allow vertical and horizontal movement of the gimbal 206, and thereby the probe 204, across the frame 202. In some embodiments, the gimbal 206 allows for the tilting or angling of the probe 204 in any desired angle or direction. In some embodiments, the ultrasound probe 204 of each scanning module has the five or six degrees of articulation necessary for ultrasonic image acquisition. In some embodiments, the frame 202 of each of the one or more scanning modules further comprises proximity sensors 212 positioned on the top end of the frame 202, configured to track the motion of a patient’s head.
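As a non-limiting sketch, a target probe pose over a window might be mapped to the module's four actuated axes (two slide positions plus two gimbal angles) as below; the travel limits and field names are hypothetical placeholders, not specified values of the disclosure:

```python
def pose_to_actuators(target, frame):
    """Map a desired probe pose (positions in mm, angles in degrees) to the
    module's actuator setpoints: two slide positions and two gimbal angles."""
    return {
        "horizontal_mm": clamp(target.x, 0.0, frame.width_mm),   # horizontal slide
        "vertical_mm":   clamp(target.y, 0.0, frame.height_mm),  # vertical slide
        "pan_deg":       clamp(target.pan, -30.0, 30.0),         # gimbal pan
        "tilt_deg":      clamp(target.tilt, -30.0, 30.0),        # gimbal tilt
    }

def clamp(value, lo, hi):
    """Keep an actuator command within its mechanical travel limits."""
    return max(lo, min(hi, value))
```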
Referring now to Fig. 3E, a view of the gimbal 206 is shown. In some embodiments, the gimbal 206 may hold a housing 214 configured to fit the ultrasound probe 204. In some embodiments, the housing 214 further comprises at least one position tracker 216, configured to detect the probe’s position and orientation. In some embodiments, the position tracker may be an electromagnetic (EM) tracker. In some embodiments, the at least one position tracker of the helmet 102 and the at least one position tracker 216 of the probe 204 are configured for spatial calibration to find the transformation (i.e. rotation and translation) between the position tracker 216 of the probe 204 and the ultrasound image acquired by the probe 204. This enables accurate anatomy localization in ultrasound-guided interventions.
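Such probe spatial calibration is commonly solved as a point-based rigid registration. The following sketch shows the standard least-squares (Kabsch) solution, one conventional choice rather than necessarily the procedure used here; it recovers the rotation and translation that map matched calibration points expressed in the image frame onto the same points expressed in the tracker frame:

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rigid transform (Kabsch algorithm) mapping 3xN point set A
    onto 3xN point set B, e.g. calibration-phantom points located in the image
    frame (A) and the same points reported in the tracker frame (B)."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T                   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # optimal rotation
    t = cb - R @ ca                             # optimal translation
    return R, t                                 # so that B is approximately R @ A + t
```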
In some embodiments, a loaded spring 218 is connected between the gimbal 206 and the frame 202. In some embodiments, the loaded spring 218 is configured to limit the amount of force applied by the probe 204 to a patient’s skull, thereby ensuring patient safety. In some embodiments, the loaded spring 218 limits the force applied in the medial-lateral direction for scanning modules positioned at the temporal windows, or in the anterior-posterior direction for scanning modules positioned at the sub-occipital window. In some embodiments, the frame 202 may be made of any suitable materials, including but not limited to, plastics, metals, metal alloys, polymers, and the like. In some embodiments, the frame 202 may have any suitable dimensions for attachment to the helmet 102.
In some embodiments, the system 100 may comprise any number of scanning modules 104. In some embodiments, and as depicted in Fig. 3A, the system 100 may comprise at least three scanning modules 104. In such cases, a scanning module 104 is positioned at each lateral window 108 of the helmet 102, such that each ultrasound probe 204 is positioned through a window 108 and may be in direct contact with the patient’s skull. A scanning module 104 is also connected at the sub-occipital window 110 of the helmet 102, with the ultrasound probe 204 facing upwards towards the back of the patient’s head. In some embodiments, the scanning module 104 positioned at the sub-occipital window is low-profile, such that it can fit under the headrest and slide out during a brain scan. Fig. 4 depicts the different ultrasound probe placements on the skull
to capture images from specific regions of the brain. Therefore, scanning modules 104 placed on the lateral sides of the helmet 102 will capture images from the temporal region of the brain, while scanning modules 104 positioned at the base of the helmet 102 will capture images from the occipital region of the brain. It should be understood that additional windows and scanning modules 104 may be placed at any location on the helmet 102 and hence may capture images from any desired region of the brain.
In some embodiments, the system 100 further comprises an ultrasound gel dispensing mechanism. The ultrasound gel dispensing mechanism may comprise one or more ultrasound gel reservoirs fluidly connected to the ultrasound probe of each of the scanning modules 104 and configured to dispense ultrasound gel to the surface of the ultrasound probe, or to the skin surface of the patient via the temporal windows 108 and the sub-occipital window 110. In some embodiments, the ultrasound gel reservoir may be fluidly connected via one or more conduits leading to the surface of the probe or the skin surface of the patient. In some embodiments, the ultrasound gel reservoir may be configured as a syringe, requiring mechanical action or force to dispense the gel. In some embodiments, the ultrasound gel reservoir may comprise pumps and/or valves to control the delivery of the ultrasound gel. The pumps and/or valves may be communicatively connected to a controller or computing system (e.g. computer 400 depicted in Fig. 15) which may be configured to automate the ultrasound gel delivery. In some embodiments, the system 100 may comprise a single ultrasound gel reservoir fluidly connected to each scanning module 104, or each scanning module 104 may comprise a separate ultrasound gel reservoir.
In some embodiments, the system 100 further comprises a laser alignment system configured to project a visible laser line on the patient’s forehead. In some embodiments, the laser alignment system may comprise a laser, which may be any laser known to one of skill in the art. In some embodiments, the laser alignment system may comprise any other visible light element known to one of skill in the art. In some embodiments, the laser alignment system may comprise one or more lasers. In some embodiments, the laser may be removably attached to the helmet 102. In some embodiments, the laser may be configured to project a laser line which may be a horizontal line, vertical line, a diagonal line, a cross line, or any combinations thereof. In
some embodiments, the laser line may be configured to intersect any anatomical landmark on the forehead that may be used as a reference for helmet placement. In some embodiments, anatomical landmarks may include, but are not limited to, the glabella, the nasion (bony depression at the bridge of the nose), the inion (external occipital protuberance), preauricular points (just anterior to the ears), the vertex or Cz point (top midpoint of the skull), the inner and outer canthi of the eyes, mastoid processes (posterior to the ears), the tragus (outer cartilaginous portion of the outer ear), or any combinations thereof. In some embodiments, the laser line is configured to intersect the glabella, the midline bony anatomical landmark located between the eyebrows. Aligning the laser line with the glabella provides a coarse repeatable reference for helmet placement across sessions. In some embodiments, the laser alignment system may be communicatively connected to a controller or a computing system (e.g. computer 400 depicted in Fig. 15).
In some embodiments, the system 100 further comprises a mechanical registration system configured to localize and stabilize the patient’s head during scans. In some embodiments, the mechanical registration system provides physical fixation and precise measurement of the skull’s position without requiring full head encasement or external tracking systems. In some embodiments, the mechanical registration system comprises a plurality of contact elements, each contact element corresponding to a different axis, which may be one or more of: the medial-lateral axis (side to side), the anterior-posterior axis (front to back), and the superior-inferior axis (top to bottom). In some embodiments, each contact element comprises at least one contact plate. In some embodiments, a contact element may comprise a pair of contact plates. In some embodiments, the at least one contact plate is movably attached to the inner surface of the helmet 102. In some embodiments, the contact plates may be configured to be movable and may comprise a position sensor (e.g. a linear encoder) to track the position of the contact plate. In some embodiments, the contact plates may comprise a generally rigid and flat surface configured to contact the patient and provide a stabilizing surface for accurate positioning and measurement of the skull position. In some embodiments, the contact plates may be attached to the helmet 102 via a mounting system or frame which may be placed within the inner cavity of the helmet 102. In some embodiments, the motion of the contact plates may be actuated via manual systems such as thumbscrews, lead screws, cam-lock levers, or sliding rails with locking mechanisms; passive systems such as springs or dampers; or actuated systems such as motor-driven linear actuators, stepper or servo motors with leadscrew translation, or pneumatic pistons, and the like.
In some embodiments, and for the medial-lateral axis, a pair of contact plates may be configured to move towards the patient's head from each lateral side to contact the patient's temporal bones. Each plate of the pair of plates may be configured to move independently and their positions may be tracked with position sensors integrated into each contact plate. In some embodiments, and for the anterior-posterior axis, a single contact plate is configured to move and make contact with the patient's forehead, the position of the contact plate being tracked with a position sensor, while the back of the head is fixed against the headrest or occipital cup. In some embodiments, and for the superior-inferior axis, a contact plate may be positioned on the upper side of the inner surface of the helmet and is configured to move to contact the top of the patient's skull (vertex). The displacement of the contact plate is measured relative to a fixed support at the base of the head (e.g. at the chin or occiput) and vertical height and curvature are calculated. Once the contact elements along all three axes are in contact with the head, the positions of the contact plates are locked, creating a rigid, stabilized reference frame around the skull. The recorded positions of each contact element define the 3D geometry and spatial pose of the head relative to the helmet 102. This allows the system 100 to plan scan paths, reconstruct images, or deliver therapy based on the system's internal kinematics.
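By way of illustration, the following is a minimal sketch of how the locked contact-plate positions could define the head geometry described above, assuming millimeter encoder readings expressed in a common helmet coordinate frame; the function and argument names are hypothetical and are not part of the disclosed system.

import numpy as np

def head_geometry_from_plates(left_x, right_x, forehead_y, occiput_y, vertex_z, base_z):
    """Derive head dimensions and center from locked plate positions (mm)."""
    width = abs(right_x - left_x)          # temporal plate to temporal plate
    length = abs(forehead_y - occiput_y)   # forehead plate to occipital support
    height = abs(vertex_z - base_z)        # vertex plate to chin/occiput support
    center = np.array([(left_x + right_x) / 2.0,
                       (forehead_y + occiput_y) / 2.0,
                       (vertex_z + base_z) / 2.0])  # head center, helmet frame
    return width, length, height, center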
In some embodiments, the system 100 further comprises an EEG module to enable functional brain monitoring. In some embodiments, the EEG module may comprise dry or wet electrodes, for use with or without conductive gel depending on the clinical application and hardware configuration. In some embodiments, the EEG module may be removably attached to the helmet 102. In some embodiments, at least three electrodes are positioned at standardized or anatomically relevant scalp locations (e.g., frontal, temporal, parietal) and may be mounted to the helmet 102 or to flexible extensions of the helmet 102 to ensure consistent skin contact. In some embodiments, the EEG module can operate independently or in combination with the imaging system.
When used together, EEG signals from the EEG module may be time-synchronized with ultrasound or other imaging modalities to provide multimodal data for detecting seizures, background slowing, or other functional neurological abnormalities. Alternatively, the EEG module may be used alone for screening, triage, or ongoing monitoring. This modular approach allows flexible deployment in emergency, ICU, or outpatient environments and supports integration with real-time biomarker detection or event-triggered imaging workflows.
In some embodiments, the helmet 102 may comprise one or more ultrasound patches either directly connected to the helmet 102, or to a mounting system connected to the helmet 102. In some embodiments, and in the case where the one or more ultrasound patches are directly connected to the helmet 102, the one or more scanning modules 104 may serve as contact elements with the head, and may be passive or semi-actuated, wherein each scanning module 104 comprises one or more ultrasound patches. In this case, the movement during scanning may be minimal or absent, as the one or more ultrasound patches cover a broad surface area for continuous monitoring or supplemental imaging. In other embodiments, the scanning modules may comprise actively actuated ultrasound probes that move over the region of interest to acquire data, as described above. In some embodiments, the one or more ultrasound patches may be movably connected to the helmet 102 or mounting system. In some embodiments, the one or more ultrasound patches may be fixedly connected to the helmet 102 or mounting system. In some embodiments, the helmet or frame may be configured such that the ultrasound patches are in contact with the skin surface of the patient when the helmet or frame is positioned over the patient's head. In some embodiments, the ultrasound transducers or patches are mounted in strategic locations instead of being mounted on an actuated robotic system, or may be mounted on a stretchable, wearable interface like a ski mask (Fig. 17). A method to obtain relative position transforms between each transducer is needed in order to register the various ultrasound images to one another. This can be achieved by connecting each transducer to the others, or to a central point on the mask, using Fiber Optic Shape Sensors, or mechanical trackers with one or more degrees of freedom (like a miniaturized Faro arm). Electromagnetic tracking sensors or infrared tracking fiducials (NDI Polaris or VICON) may also be attached to each transducer to
track their positions.
In some embodiments, the system 100 further comprises a computing system (e.g. computer 400 depicted in Fig. 15) communicatively connected to one or more components of the system 100. In some embodiments, the computing system is communicatively connected to each scanning module of the one or more scanning modules 104 and configured to control the actuation of the gimbal 206 and sliding mechanisms 208 and 210 for positioning of the ultrasound probe. In some embodiments, the computing system is configured to receive data from the sensors in the helmet 102 (e.g. the proximity sensors or the position trackers) and is configured to track the positioning of the probe and the head of the patient. In some embodiments, the computing system is further communicatively connected to the ultrasound gel dispensing mechanism and may be configured to control the actuation of the ultrasound gel dispensing mechanism to automate the dispensing of ultrasound gel prior to or during a brain scan. In some embodiments, the computing system may be further communicatively connected to the contact elements and configured to actuate the motion thereof, and to the position sensors of the contact element to receive data about the positioning of the contact elements. In some embodiments, the computing system is communicatively connected to the EEG module and may be configured to receive EEG data from the patient.
In some embodiments, the computing system comprises a processor, and a non-transitory computer-readable medium, wherein the non-transitory computer-readable medium contains instructions which, when executed by the processor, perform the steps of a) positioning and orienting each ultrasound probe of each scanning module for image acquisition, b) acquiring ultrasound images in at least two ultrasound modes from each ultrasound probe, c) recording the position of each ultrasound probe of each scanning module during image acquisition via the position trackers, d) sensing real-time patient movement via the one or more proximity sensors, and, e) adjusting the position or angle of each ultrasound probe based on the patient movement.
In some embodiments, the system 100 may fully automate the movement and positioning of each ultrasound probe 204 during a brain scan. In some embodiments, the system 100 may calibrate the position of the ultrasound probes 204 using the data acquired from the position tracker positioned at the base 106 of the helmet. In some
embodiments, the data acquired from the position tracker 216 of each ultrasound probe 204 may be used to provide positional information of each image slice captured. In some embodiments, the positional information of each captured image slice is integrated with the ultrasonic images to create a 3D volumetric reconstruction of the brain.
In some embodiments, the ultrasound probes 204 of each scanning module 104 are configured to capture ultrasound images in one or more ultrasound imaging modes. In some embodiments, the one or more ultrasound modes may be B-mode ultrasound, M-mode ultrasound, A-mode ultrasound, power Doppler ultrasound, color Doppler ultrasound, or any combinations thereof. In some embodiments, the ultrasound images are acquired from temporal and sub-occipital windows of the helmet 102. In some embodiments, the ultrasound images are acquired from multiple angles in each window. In some embodiments, the system 100 is configured to integrate the ultrasound images from the one or more ultrasound modes with the positional data of each ultrasound probe of each scanning module 104 at the time of each image acquisition to create a 3D volumetric reconstruction of the brain.
In cases where Doppler ultrasound is used as the one or more ultrasound modes, ultrasonic images obtained from the Doppler ultrasound provide an image of the vasculature of the brain. In some embodiments, the image of the brain vasculature may include major arteries in the brain such as the middle cerebral artery (MCA), anterior cerebral artery (ACA), basilar artery (BA), and posterior cerebral artery (PCA) and may be used to create a dynamic vascular map. In some embodiments, the vascular map may be integrated into the 3D volumetric reconstruction of the brain to create a 4D volumetric reconstruction which provides holistic imaging of brain anatomy, vasculature, and pathology.
In some embodiments, the system 100 may be configured to integrate a step function-based filtering method to insonate, filter, and confirm the presence of vessels, which may then be added as an image frame for a 4D volumetric reconstruction of the brain. In some embodiments, a similar step function filter may be utilized on other ultrasound modes, for example, but not limited to, power Doppler or general Doppler. In some embodiments, the filtering method may be combined with contrast-to-noise measurements from, for example, B-mode ultrasound, to find optimal scanning windows.
In some embodiments, and while conducting a Doppler ultrasound, cine frames (e.g. spectral Doppler cine frames) may be recorded and integrated into the 4D volumetric reconstruction. The cine frames may then be replayed by selecting a point of interest on the resulting 4D volumetric reconstruction to monitor the hemodynamics of the vessel at that location. This can be useful for detecting stenosis, vasospasms, aneurysms, and occlusions both upstream and downstream from the point of interest. In some embodiments, an ultrasound contrast agent (for example, Definity, Optison, or Lumason) may be administered to the patient for cerebral perfusion imaging to assess blood flow dynamics within the brain.
In some embodiments, the processor further comprises machine learning algorithms configured to analyze ultrasonic images captured from each ultrasound probe 204 to detect whether the dispensation of ultrasound gel is required, thereby instructing the computing system to activate the ultrasound gel dispensing mechanism. Alternatively, the processor may comprise instructions to perform one or more measurements to determine whether ultrasound gel is required. In some embodiments, the one or more measurements may be signal-to-noise ratio, contrast-to-noise ratio, Gray-Level Co-Occurrence Matrix, local binary patterns, edge detection, or frequency domain analysis. In some embodiments, the computing system may further comprise instructions to dispense ultrasound gel based on the information received from the one or more measurements. In some embodiments, ultrasound gel may be dispensed prior to and/or during capture of the ultrasonic images.
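As a hedged illustration of the contrast-to-noise option, the sketch below flags a frame for gel dispensing when the CNR between a signal region and a background region falls under a threshold; the regions, threshold value, and function name are assumptions for illustration, not the disclosed algorithm.

import numpy as np

def needs_gel(frame, signal_region, background_region, cnr_threshold=1.0):
    """frame: 2D B-mode image; regions: (row_slice, col_slice) tuples."""
    sig = frame[signal_region]
    bg = frame[background_region]
    # CNR = |mean(signal) - mean(background)| / std(background)
    cnr = abs(sig.mean() - bg.mean()) / (bg.std() + 1e-9)
    return cnr < cnr_threshold  # low contrast suggests poor acoustic coupling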
In some embodiments, the processor further executes instructions configured to actuate the one or more contact elements of the mechanical registration system and acquire data via the position sensors of each contact element corresponding to the precise positioning of the patient’s head.
In some embodiments, the processor further executes instructions to trigger data collection from the EEG modules. In some embodiments, the system 100 may further be communicatively connected to an electrocardiogram (ECG or EKG) configured to measure electrical activity of the heart of the subject. The ECG signal may in some embodiments be used to trigger the collection of an image by the ultrasound probe 204 or by any other imaging device connected to the system. In some
embodiments, the system 100 may be configured to use the ECG signal to trigger collection of an image at a consistent point in the heart cycle, for example at a local minimum or maximum. In some embodiments, the system 100 may use a running average of heart rate to modulate a frequency of image capture in order to better synchronize image capture with a subject’s pulse.
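A minimal sketch of such ECG-gated capture is given below, assuming R-peak timestamps are delivered by the connected ECG front end; the class and its interface are hypothetical stand-ins for whatever trigger logic the computing system implements.

from collections import deque

class EcgGatedTrigger:
    def __init__(self, window=8, phase=0.5):
        self.rr = deque(maxlen=window)  # running window of R-R intervals (s)
        self.last_r = None
        self.phase = phase              # fraction of the cardiac cycle to fire at

    def on_r_peak(self, t):
        """Call with the timestamp (s) of each detected R peak."""
        if self.last_r is not None:
            self.rr.append(t - self.last_r)
        self.last_r = t

    def next_capture_time(self):
        """Schedule the next image capture at a consistent point in the cycle."""
        if not self.rr or self.last_r is None:
            return None
        mean_rr = sum(self.rr) / len(self.rr)  # running-average heart period
        return self.last_r + self.phase * mean_rr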
In some embodiments, the processor may further execute instructions to project a laser line on the patient’s forehead for precise helmet placement. In some embodiments, the laser line is projected such that it intersects an anatomical landmark on the patient’s head.
In some embodiments, the system 100 may further comprise an interface device communicatively connected to the computing system and configured to receive acquired data from the probe 204. In some embodiments, the interface device may receive measurements as digital signals, analog signals, or both. As described herein, “interface device” refers to any device capable of receiving analog or digital signals and performing one or more of storing the data on a non-transitory computer readable medium or transmitting the data via a wired or wireless communication link to a remote computing device. In some embodiments, the system 100 acquires measurements when instructed by the user via an input to the interface device. In some embodiments, the computing system may further comprise a processor and stored instructions for performing analysis or display of the data collected. In some embodiments, the interface device can connect to one or more external displays in a wired or wireless connection. In some embodiments, the 4D image reconstruction may be performed by the interface device or by any other external devices or displays communicatively connected to the system 100. In some embodiments, data obtained by the system 100 may be wirelessly transmitted and interfaced with an Emergency Medical Services (EMS) Electronic Health Record or other software.
In some embodiments, the system 100 is configured for diagnostics using ultrasound. In some embodiments, the system 100 may include any other imaging modality known to one of skill in the art. In some embodiments, the system 100 is configured to deliver image-guided therapies.
In some embodiments, the system 100 further comprises a Global
Positioning System (GPS), configured to monitor patient location, for example when the system 100 is in use in an ambulance.
In some embodiments, the system 100 can be extended to pediatric applications using the same actuation module for the trans-fontanelle window and/or trans-orbital window.
In some embodiments, the system 100 is configured to be disassembled into component parts. In some embodiments, the system 100 is configured to be portable. In some embodiments, the system 100 may fit into the back of an ambulance to be used for emergency brain scans. In some embodiments, the system 100 may be configured for use outside of conventional clinical settings.
Brain Scanning and 4D Reconstruction Method
Referring now to Fig. 5, an exemplary method 300 of performing a brain scan to generate a 4D volumetric reconstruction of the brain is shown. In some embodiments, the method comprises providing a brain scanning system (e.g. system 100 described above) (step 302), acquiring ultrasound images in a first ultrasound mode from each scanning module, while acquiring positional data of the ultrasound probe of each scanning module via the position trackers (step 304), registering each ultrasound image in the first ultrasound mode with the probe positional data to generate a 3D structural reconstruction of the brain (step 306), acquiring ultrasound images in a second ultrasound mode from each scanning module, while acquiring positional data of the ultrasound probe of each scanning module via the position trackers (step 308), registering each ultrasound image in the second ultrasound mode with the probe positional data to generate a vascular flow reconstruction of the brain (step 310), and overlaying the vascular flow reconstruction on the 3D structural reconstruction to obtain a 4D volumetric reconstruction of the brain (step 312). Fig. 6 shows a flow diagram depicting the volumetric reconstruction process.
In some embodiments, the first ultrasound mode may be B-mode ultrasound, M-mode ultrasound, A-mode ultrasound, or any combinations thereof. In some embodiments, the second ultrasound mode may be power Doppler ultrasound, color Doppler ultrasound, or any combinations thereof. In some embodiments, ultrasound
images in the first and second ultrasound modes are acquired from one or more scanning modules positioned at the left temporal window, right temporal window, sub-occipital window, trans-orbital window, submandibular window, trans-fontanelle window, or any combinations thereof. In some embodiments, ultrasound images are acquired from at least three scanning modules positioned at the left temporal window, right temporal window, and sub-occipital window.
In some embodiments, and prior to step 304, the method further includes a step of mapping the 3D geometry of the patient's skull. In some embodiments, the mapped 3D geometry forms a baseline from which the real-time patient movements during the brain scanning are detected as offsets from the baseline. In some embodiments, the offsets are used to account for real-time patient movements during image acquisition steps 304 or 308. In some embodiments, the step of mapping the 3D geometry of the patient's skull is done via the proximity sensors of the helmet 102. A one-time calibration may be performed to calculate the transform between all proximity sensors using a fixed jig and a precision-machined spherical ball. The proximity sensors may be installed such that the light beams intersect at a common point. Non-linear optimizations may be used to minimize the error in the measured distance of the sphere to calculate the transform between each sensor.
In some embodiments, and at steps 304 and 308, the positional information of the ultrasound probes of each scanning module is calibrated using the positional tracker integrated into the helmet 102. This calibration enables the calculation of the transformation (e.g. rotation and translation) between the position tracker on each ultrasound probe and the acquired ultrasound image slice to increase the accuracy of localization of anatomical structures. As an example, this may increase the accuracy of ultrasound-guided interventions. In some embodiments, and since the ultrasound images and sensor positions may update at different rates leading to time discrepancies, a temporal calibration may be implemented. The temporal calibration may involve estimating the time offset between the images and positions, which may help to correlate the images with the probe positions.
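One plausible way to estimate this time offset, sketched below under the assumption that probe speed (from the trackers) and frame-to-frame image change have been resampled onto a common clock, is to cross-correlate the two signals; the PLUS toolkit referenced later in this document performs a comparable alignment, so this is illustrative only.

import numpy as np

def estimate_time_offset(probe_speed, image_change, dt):
    """probe_speed, image_change: 1D signals sampled every dt seconds."""
    a = (probe_speed - probe_speed.mean()) / (probe_speed.std() + 1e-9)
    b = (image_change - image_change.mean()) / (image_change.std() + 1e-9)
    xcorr = np.correlate(a, b, mode="full")
    lag = np.argmax(xcorr) - (len(b) - 1)  # samples by which a leads b
    return lag * dt                        # offset in seconds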
In some embodiments, a voxel-based volume reconstruction technique is used for 3D volume reconstruction in steps 306 and 310. In some embodiments, each pixel of the B-mode ultrasonic image slices is iterated through and inserted into the corresponding volume voxel through nearest-neighbor interpolation, wherein the intensity of each voxel is determined by a weighted average of all coinciding pixels from the B-mode image slices. In some embodiments, the method further comprises a step of removing artifacts from the 3D volume reconstruction via the estimation of minimum and maximum likelihoods of voxel intensities and multiple iterations over the same region. In other embodiments, reconstruction methods may include voxel-based volume fusion, point cloud accumulation, mesh-based surface modeling, scanline-based reconstruction, deep learning-based volume synthesis, multi-angle compounding techniques, and slice interpolation.
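The following is a simplified sketch of the voxel insertion just described: each tracked pixel is mapped to its nearest voxel and the voxel intensity is maintained as a running weighted average of all coinciding pixels. The grid geometry, units, and availability of per-pixel world coordinates (from the tracked probe pose and spatial calibration) are assumptions.

import numpy as np

def insert_slice(volume, weights, pixel_values, pixel_xyz, origin, spacing):
    """volume, weights: 3D arrays; pixel_values: (N,) intensities;
    pixel_xyz: (N, 3) world coordinates from the tracked probe pose."""
    idx = np.round((pixel_xyz - origin) / spacing).astype(int)  # nearest voxel
    ok = np.all((idx >= 0) & (idx < volume.shape), axis=1)
    for (i, j, k), v in zip(idx[ok], pixel_values[ok]):
        w = weights[i, j, k]
        # running weighted average of every pixel landing in this voxel
        volume[i, j, k] = (volume[i, j, k] * w + v) / (w + 1.0)
        weights[i, j, k] = w + 1.0
    return volume, weights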
In some embodiments, and at step 312, the Doppler-based vascular volume is overlaid on top of the structural volume obtained from the B-mode images to create a combined 4D volumetric reconstruction or image that shows both brain anatomy and blood flow. In some embodiments, alignment is performed using rigid registration based on the tracking data obtained from the position trackers of each ultrasound probe. In some embodiments, timing between the structural and vascular flow scans can be synchronized through internal system timestamps or via EKG-triggered acquisition to minimize motion artifacts caused by the cardiac cycle. The resulting 4D reconstruction may display anatomy, vasculature, and flow either together or as separate layers. In some embodiments, a user may toggle individual components, or segment specific structures such as vessels or pathology. Users may also select a vessel within the 3D view to replay the corresponding Doppler cine loop over time, or to actively target that region for live Doppler acquisition.
In some embodiments, and in cases where B-mode ultrasound is used, the method further includes a step of tissue characterization. The tissue characterization step may include Hermite polynomial transform (HPT) or H-scan. The tissue characterization step may be utilized for structure and pathology classification. In some embodiments, the tissue characterization step comprises applying a Hermite transform to each acquired B-mode frame to extract orthogonal features related to texture and scattering. These may be used to classify image regions as midbrain, ventricles, hemorrhage, tumor, artifact, and the like. The classified image regions may then be assigned distinct colors and overlaid on the B-mode images or integrated into the 3D or 4D volume reconstruction for visual differentiation. This approach enhances real-time or post-scan interpretation and supports quantitative analysis (e.g. midline shift, lesion volume). (Parker KJ (2016), OMICS J Radiol 5:236; Tai H et al., Ultrasound in Medicine & Biology, 46(10), 2810-2818; Khairalseed M et al. (2019), Ultrasonics, 94, 28-36; Tai H (2022), The University of Texas at Dallas.)
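In the spirit of the cited H-scan work, the sketch below convolves scan lines with low- and high-order Gaussian-weighted Hermite kernels and returns their energy ratio for pseudo-color classification; the kernel orders, kernel length, and the application to B-mode scan lines (H-scan is classically applied to RF data) are all assumptions rather than the disclosed method.

import numpy as np
from scipy.signal import fftconvolve
from scipy.special import eval_hermite

def gh_kernel(order, length=65):
    """Gaussian-weighted Hermite function GH_n(t) = H_n(t) exp(-t^2)."""
    t = np.linspace(-3.0, 3.0, length)
    k = eval_hermite(order, t) * np.exp(-t ** 2)
    return k / np.linalg.norm(k)

def hscan_ratio(lines, n_low=2, n_high=8):
    """lines: (num_lines, num_samples) scan-line data. Returns a per-pixel
    high/low band energy ratio that can be mapped to color."""
    low = fftconvolve(lines, gh_kernel(n_low)[None, :], mode="same")
    high = fftconvolve(lines, gh_kernel(n_high)[None, :], mode="same")
    return np.abs(high) / (np.abs(low) + 1e-9)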
In some embodiments, the brain scan results in a 4D volumetric reconstruction of the brain providing an accurate visualization of brain anatomy, vasculature, and pathology. In some embodiments, the 4D volumetric reconstruction of the brain allows for detection and differentiation of cerebral abnormalities, including but not limited to strokes, hemorrhages, aneurysms, tumors, and the like (see, for example, Blanco et al., J Ultrasound Med 2017; 36:1251-1266).
In some embodiments, the method of brain scanning may be used for stroke diagnosis, pre-hospital head trauma assessment, in-hospital or at-home post-stroke monitoring, follow-ups for cranial hemorrhages, long-term monitoring of stenosis and aneurysms, ultrasound-guided endovascular interventions in the brain, imaging of brain tumors, or guiding external ventricular drain placement.
In some embodiments, the method includes an ultrasound gel dispensing step to continuously and autonomously dispense ultrasound gel in the direction of motion of each probe. In some embodiments, the system may include a gel pump. In some embodiments, the pump may have an actuated syringe-like mechanism to push the gel out. In some embodiments, the gel may be continuously released or may be released as needed. In some embodiments, the requirement can be detected using machine learning methods on the captured ultrasound image and/or it may be detected through basic signal-to-noise or contrast-to-noise measurements, Gray-Level Co-Occurrence Matrix, Local Binary Patterns, edge detection, or frequency domain analysis. In some embodiments, the operator may pre-apply the gel in the desired areas before the robotic scanning begins.
In some embodiments, the step 312 may further comprise step function-based filtering to insonate, filter, and confirm the presence of vessels, which are then added as an image frame for reconstruction. For example, in a graph where the x-axis represents time and the y-axis represents frequency, the Doppler spectra would show a cloud of points indicating blood flow velocities at different times and frequencies. The envelope would be a curve that traces the upper edge of this cloud. The step function is a horizontal line (at fstep) that separates the graph into two regions: above the line (no flow) and below the line (flow). The goal is to position this line such that it closely follows the upper edge of the Doppler spectra, thereby approximating the envelope. By finding the optimal fstep that minimizes the error, one can effectively track the envelope of the blood flow velocities over time. This provides a clear and simplified representation of the maximum blood flow velocities detected by the Doppler ultrasound.
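A sketch of this fit is given below; for each time bin it scans candidate cutoffs and keeps the fstep with the smallest misclassification error. The error metric used here (flow power left above the line plus empty bins left below it) is one plausible choice, not necessarily the one used by the disclosed system.

import numpy as np

def fit_fstep(spectrogram, freqs, power_floor):
    """spectrogram: (F, T) Doppler power; freqs: (F,) ascending bin frequencies.
    Returns a (T,) array of fstep values approximating the envelope."""
    envelope = np.empty(spectrogram.shape[1])
    for t in range(spectrogram.shape[1]):
        col = spectrogram[:, t]
        best_f, best_err = freqs[0], np.inf
        for f in freqs:
            above = col[freqs > f]    # region that should contain no flow
            below = col[freqs <= f]   # region that should contain the flow cloud
            err = above[above > power_floor].sum() + \
                  (power_floor - below[below < power_floor]).sum()
            if err < best_err:
                best_f, best_err = f, err
        envelope[t] = best_f
    return envelope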
In some embodiments, a similar step function filter on the power Doppler (or general Doppler), along with general contrast-to-noise measurements on B-mode frames, can be used to find optimal scanning windows.
In some embodiments, basic imaging modes such as power Doppler, color Doppler, B-mode, M-mode, A-mode, or any combination of these ultrasound modalities may be used. These multi-modal images may then be combined with signal processing algorithms or machine learning for ultrasound probe path planning and image servoing.
In some embodiments, and while insonating a vessel, spectral Doppler cine frames may be recorded which can then be replayed by clicking on the region in the volumetric reconstruction or by revisiting that image frame of interest to monitor the hemodynamics of that vessel at that point. This can be especially useful for detecting stenosis, vasospasms, aneurysms, and even occlusions both upstream and downstream from the point of interest.
In some embodiments, the method includes a step of image playback to play back the individual frames collected while displaying the positions of the ultrasound probes with respect to the patient's cranium. As the relative position transforms are known between all the probes and the cranium for each image frame, these can be recorded and replayed using graphic renderings of each element.
In some embodiments, since the system is capable of utilizing multiple scanning windows and reconstructions, it is possible to combine multiple intracranial pressure (ICP) assessment methods (see the Internet Book of Critical Care (IBCC)).
In some embodiments, the method includes a step of transmitting data, images, or the 3D or 4D volumetric reconstruction to an external device or cloud
computing platform. In some embodiments, the volumetric reconstructions, individual frames, and/or tracking data may be uploaded to the cloud and interfaced with an EMS Electronic Health Record or other PACS software. Machine learning and AI (such as edge computing or swarm computing) can be utilized to analyze the data in real time and to perform pattern recognition and classification of the main underlying condition (e.g., LVO vs. hemorrhage vs. none) (see Blanco et al., J Ultrasound Med 2017; 36:1251-1266). This information can then be shared with EMS or hospitals within the network and trigger optimal action plans.
In some embodiments, ultrasound contrast agents like Definity, Optison, and/or Lumason can be utilized for cerebral perfusion imaging to assess blood flow dynamics within the brain in 2D and 3D. The tracking module can be used to track the arrival and passage of the contrast agent during the arterial phase, thereby reconstructing the cerebral vasculature. Regional quantification of cerebral perfusion (cerebral blood volume, flow, and velocity) may be performed using automated contrast-enhanced ultrasound perfusion imaging (see Power et al., Cerebrovasc Dis 2009;27(suppl 2):14-24).
In some embodiments, the ultrasound probes thus mentioned may be swapped for High Intensity Focused Ultrasound (HIFU) or Focused Ultrasound (FUS) based therapies. The main strength of the disclosed robotic system is its ability to register the cranium (and essentially everything inside) and then move the ultrasound in relation to the registered anatomy. This can be used to provide ultrasound-based therapies and for neuromodulation. For example, a scan is taken and volume reconstruction on individual ultrasound frames is performed. The clinician may then select a point from the 3D volume to provide therapy. The robot automatically moves the desired probe to point at the location, the ultrasound system changes its focus to that point, and delivers the desired power. This may also be performed in relation to CT- or MRI-based registration. A pre-therapy CT or MRI may be taken with relevant fiducials to register it with the helmet/robotic mechanism (more details in the registration methods section below). A location on the CT/MRI can then be selected for the robot to point the probe and provide ultrasound therapy.
Registration Methods
In some embodiments, the registration method may include a step of establishing spatial relationships between the patient, the helmet 102, and any imaging or therapy device. This allows the same platform to be used for robotic coordination and spatial registration across different modalities, including ultrasound, low-power radio frequency imaging, eddy current-based sensing, cranial surgical navigation, and targeted therapeutic interventions. A rigid tracking framework may be used to establish the spatial relationships, which supports rigid positioning and cranial surface-based targeting across diagnostic, therapeutic, and interventional workflows.
In some embodiments, the goal of registration and motion tracking is to compute a position transform between the ultrasound probe and the patient's anatomy. This is then used to compute the transform between the ultrasound image and the anatomy. In some embodiments, the reference used is a rigid point on the patient's cranium (rigid in the sense that there is no loose skin). In some embodiments, the mastoid process is used as the rigid location. The forehead may also be used.
In some embodiments, LiDAR proximity sensors are used on the robotic system to generate a point cloud of the cranium (Fig. 18). Assuming the cranium is rigid, any subsequent point clouds after T=0 can be used to track patient movement by obtaining a relative position between the point cloud at T=t and T=0.
Each LiDAR proximity sensor’s placement is calibrated with respect to a fixed/stationary point on the robotic mechanism (this can be called the base). The ultrasound probe is calibrated with respect to the base. Thus, the relative position of the cranium from the ultrasound probe can be computed.
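Under the rigid-skull assumption, the relative motion between the baseline and current point clouds can be recovered with a standard rigid registration such as ICP; the sketch below uses Open3D as an assumed dependency and is illustrative, not the disclosed implementation.

import numpy as np
import open3d as o3d

def head_motion_since_t0(cloud_t0, cloud_t, max_corr_dist=0.01):
    """cloud_t0, cloud_t: (N, 3) cranium points in the base frame.
    Returns the 4x4 rigid transform from the T=0 cloud to the T=t cloud."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cloud_t0))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cloud_t))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # patient-movement offset since T=0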
In some embodiments, a passive robotic arm with one or more degrees of freedom (like a miniaturized Faro arm) with one end attached to the base of the robotic mechanism and the other end attached to the patient’s cranium (mastoid process or forehead) may be used to track movement (Fig. 17).
In some embodiments, similar to the LiDAR proximity sensors, a mechanical proximity sensor can be used, which is a contact-based system such that the piston of the sensor is compressed on contact (Fig. 19). The amount of compression from the nominal position is used to create each point on the point cloud. The housing may internally use force, optical, electromagnetic induction, or ultrasound-based methods
for measuring the linear position of the piston.
In some embodiments, RGBD and/or stereo cameras such as the Intel RealSense, ZED, Microsoft Kinect, Orbbec Astra, FLIR Bumblebee, and/or TracInnovations Tracoline may be used to register and track the patient's cranium. The camera will need to be calibrated with the ultrasound probe (see Mathur et al., 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), Athens, Greece, 2019, pp. 649-656).
In some embodiments, EM tracker sensors (like Polhemus Viper, NDI Aurora) are mounted to each ultrasound probe and to the patient.
In some embodiments, IR fiducials may be mounted to each ultrasound probe and the patient. A ski mask or skull cap with IR fiducials may be used for tracking (Fig. 20). In some embodiments, the helmet 102 may comprise a radiolucent material. In some embodiments, the helmet 102 may comprise a plurality of embedded radiopaque fiducial markers positioned at predetermined and pre-calibrated positions relative to the helmet coordinate frame. In some embodiments, and during a perioperative CT scan, the patient may wear the helmet (with or without the scanning modules 104 attached), such that the plurality of fiducial markers is imaged. The CT scan with the imaged fiducial markers may then be used to compute a rigid transform from the helmet frame to CT space (T_helmet→CT). In some embodiments, a live 3D surface of the patient's skull may be acquired using LiDAR, stereo vision, or RGBD sensors. The live 3D surface may then be registered to the CT-derived skull surface using rigid registration methods such as Iterative Closest Point (ICP) or genetic optimization (T_skull→helmet). This enables real-time estimation of the skull-to-helmet transform. Combining both transforms allows for alignment of the patient to the CT volume via:

T_skull→CT = T_helmet→CT · T_skull→helmet
The skull-to-helmet transform may be used for CT-registered imaging or for repeat focused ultrasound therapy.
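The composition above reduces to a product of 4x4 homogeneous matrices; the short sketch below shows the chaining, with the transform names assumed from the notation in the preceding paragraph.

import numpy as np

def skull_to_ct(T_helmet_to_ct, T_skull_to_helmet):
    """Both inputs are 4x4 homogeneous transforms; the result maps points
    from the skull frame into CT space."""
    return T_helmet_to_ct @ T_skull_to_helmet

# Usage: p_ct = skull_to_ct(T_h2c, T_s2h) @ np.array([x, y, z, 1.0])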
In some embodiments, inertial motion tracking systems (like XSens) may be used to track patient movements. In some embodiments, ArUco or QR markers rigidly attached to the patient's cranium visualized using traditional RGB cameras may be used to estimate positions (Fig. 21).
Method of Use
In some embodiments, the 4D volumetric reconstruction of the brain obtained from the system 100 provides an image of the brain anatomy, vasculature, and pathology. In some embodiments, the 4D volumetric reconstruction of the brain allows for detection and differentiation of cerebral abnormalities. In some embodiments, the 4D volumetric reconstruction may be used to detect, measure, or calculate one or more of: brain ventricle diameter, midbrain diameter, midline shift, optic nerve sheath diameter, pulsatility index, intracranial pressure (ICP), cerebral perfusion pressure (CPP), resistive index, flow velocity within blood vessels in the brain (for example Middle Cerebral Artery (MCA), vertebral artery, basilar artery), flow ratios (for example Lindegaard ratio, Sloan ratio, Soustiel ratio), or flow patterns. In some embodiments, the 4D volumetric reconstruction may be used to perform any known measurement or calculation known to one of skill in the art.
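Several of these quantities follow directly from conventional clinical definitions; the helpers below encode the standard formulas (Gosling pulsatility index, Pourcelot resistive index, and the Lindegaard ratio) and are illustrative rather than part of the disclosed software.

def pulsatility_index(psv, edv, mean_velocity):
    """Gosling pulsatility index: (PSV - EDV) / mean velocity."""
    return (psv - edv) / mean_velocity

def resistive_index(psv, edv):
    """Pourcelot resistive index: (PSV - EDV) / PSV."""
    return (psv - edv) / psv

def lindegaard_ratio(mca_mean_velocity, extracranial_ica_mean_velocity):
    """Mean MCA velocity over mean extracranial ICA velocity (vasospasm grading)."""
    return mca_mean_velocity / extracranial_ica_mean_velocity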
In some embodiments, the 4D volumetric reconstruction obtained with the system 100 may be configured to play back individual frames collected during a brain scan while displaying the positions of the ultrasound probes with respect to the patient's cranium. Since the relative position transforms between each ultrasound probe and the patient's cranium are known for each image frame, each image frame may be recorded and replayed using graphic renderings of each element.
In some embodiments, the 4D volumetric reconstruction obtained from the system 100 may be used to improve a method of measurement of supratentorial midline shift. Referring now to Fig. 16A, a conventional method of calculating the midline shift on a 2D image slice is shown. Referring now to Fig. 16B, the midbrain structure depicting the lateral ventricles, third ventricle, and falx cerebri is shown. The 4D volumetric reconstruction obtained from the system 100 may be used to measure the distances m1, n1, s1, r1 and compare them to the corresponding distances m2, n2, s2, r2 for an improved measurement of midline shift. This method may help to identify any localized herniation in the anterior-posterior direction which might not be visible at the center.
In some embodiments, the brain scanning system 100 may be used for
stroke diagnosis, pre-hospital head trauma assessment, in-hospital or at-home post-stroke monitoring, follow-ups for cranial hemorrhages, long-term monitoring of stenosis and aneurysms, ultrasound-guided endovascular interventions in the brain, imaging of brain tumors, or guiding external ventricular drain placement.
In some embodiments, the system 100 may be used to administer a therapy. In such cases, the ultrasound probes 204 may be a High Intensity Focused Ultrasound probe, Focused Ultrasound probe, or any other ultrasound probe configured to provide ultrasound therapy. In some embodiments, the system 100 may be configured to automate the delivery of ultrasound therapy. In some embodiments, the system 100 may automatically move the ultrasound probe to the desired location and deliver ultrasound at the desired power. In some embodiments, the system 100 may both perform a brain scan and deliver ultrasound therapy. In some embodiments, the system 100 may be used to map the 3D geometry of the brain prior to ultrasound therapy. In some embodiments, a CT or MRI-based image registration may be performed prior to ultrasound therapy. In some embodiments, the system 100 may be used for emergency brain scanning and/or therapy delivery in an ambulance or outside conventional hospital settings.
Computing Device
In some aspects of the present invention, software executing the instructions provided herein may be stored on a non-transitory computer-readable medium, wherein the software performs some or all of the steps of the present invention when executed on a processor.
Aspects of the invention relate to algorithms executed in computer software. Though certain embodiments may be described as written in particular programming languages, or executed on particular operating systems or computing platforms, it is understood that the system and method of the present invention is not limited to any particular computing language, platform, or combination thereof. Software executing the algorithms described herein may be written in any programming language known in the art, compiled, or interpreted, including but not limited to C, C++, C#, Objective-C, Java, JavaScript, MATLAB, Python, PHP, Perl, Ruby, or Visual Basic. It is further understood that elements of the present invention may be executed on any
acceptable computing platform, including but not limited to a server, a cloud instance, a workstation, a thin client, a mobile device, an embedded microcontroller, a television, or any other suitable computing device known in the art.
Parts of this invention are described as software running on a computing device. Though software described herein may be disclosed as operating on one particular computing device (e.g. a dedicated server or a workstation), it is understood in the art that software is intrinsically portable and that most software running on a dedicated server may also be run, for the purposes of the present invention, on any of a wide range of devices including desktop or mobile devices, laptops, tablets, smartphones, watches, wearable electronics or other wireless digital/cellular phones, televisions, cloud instances, embedded microcontrollers, thin client devices, or any other suitable computing device known in the art.
Similarly, parts of this invention are described as communicating over a variety of wireless or wired computer networks. For the purposes of this invention, the words “network”, “networked”, and “networking” are understood to encompass wired Ethernet, fiber optic connections, wireless connections including any of the various 802.11 standards, cellular WAN infrastructures such as 3G, 4G/LTE, or 5G networks, Bluetooth®, Bluetooth® Low Energy (BLE) or Zigbee® communication links, or any other method by which one electronic device is capable of communicating with another. In some embodiments, elements of the networked portion of the invention may be implemented over a Virtual Private Network (VPN).
Fig. 15 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented. While the invention is described above in the general context of program modules that execute in conjunction with an application program that runs on an operating system on a computer, those skilled in the art will recognize that the invention may also be implemented in combination with other program modules.
Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-
held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Fig. 15 depicts an illustrative computer architecture for a computer 400 for practicing the various embodiments of the invention. The computer architecture shown in Fig. 15 illustrates a conventional personal computer, including a central processing unit 450 (“CPU”), a system memory 405, including a random access memory 410 (“RAM”) and a read-only memory (“ROM”) 415, and a system bus 435 that couples the system memory 405 to the CPU 450. A basic input/output system containing the basic routines that help to transfer information between elements within the computer, such as during startup, is stored in the ROM 415. The computer 400 further includes a storage device 420 for storing an operating system 425, application/program 430, and data.
The storage device 420 is connected to the CPU 450 through a storage controller (not shown) connected to the bus 435. The storage device 420 and its associated computer-readable media provide non-volatile storage for the computer 400. Although the description of computer-readable media contained herein refers to a storage device, such as a hard disk or CD-ROM drive, it should be appreciated by those skilled in the art that computer-readable media can be any available media that can be accessed by the computer 400.
By way of example, and not to be limiting, computer-readable media may comprise computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
According to various embodiments of the invention, the computer 400 may operate in a networked environment using logical connections to remote computers through a network 440, such as a TCP/IP network such as the Internet or an intranet. The computer 400 may connect to the network 440 through a network interface unit 445 connected to the bus 435. It should be appreciated that the network interface unit 445 may also be utilized to connect to other types of networks and remote computer systems.
The computer 400 may also include an input/output controller 455 for receiving and processing input from a number of input/output devices 460, including a keyboard, a mouse, a touchscreen, a camera, a microphone, a controller, a joystick, or other type of input device. Similarly, the input/output controller 455 may provide output to a display screen, a printer, a speaker, or other type of output device. The computer 400 can connect to the input/output device 460 via a wired connection including, but not limited to, fiber optic, Ethernet, or copper wire or wireless means including, but not limited to, Wi-Fi, Bluetooth, Near-Field Communication (NFC), infrared, or other suitable wired or wireless connections.
As mentioned briefly above, a number of program modules and data files may be stored in the storage device 420 and/or RAM 410 of the computer 400, including an operating system 425 suitable for controlling the operation of a networked computer. The storage device 420 and RAM 410 may also store one or more applications/programs 430. In particular, the storage device 420 and RAM 410 may store an application/program 430 for providing a variety of functionalities to a user. For instance, the application/program 430 may comprise many types of programs such as a word processing application, a spreadsheet application, a desktop publishing application, a database application, a gaming application, internet browsing application, electronic mail application, messaging application, and the like. According to an embodiment of the present invention, the application/program 430 comprises a multiple functionality software application for providing word processing functionality, slide presentation functionality, spreadsheet functionality, database functionality and the like.
The computer 400 in some embodiments can include a variety of sensors 465 for monitoring the environment surrounding and the environment internal to the computer 400. These sensors 465 can include a Global Positioning System (GPS) sensor,
a photosensitive sensor, a gyroscope, a magnetometer, thermometer, a proximity sensor, an accelerometer, a microphone, biometric sensor, barometer, humidity sensor, radiation sensor, or any other suitable sensor.
EXPERIMENTAL EXAMPLES
The invention is further described in detail by reference to the following experimental examples. These examples are provided for purposes of illustration only, and are not intended to be limiting unless otherwise specified. Thus, the invention should in no way be construed as being limited to the following examples, but rather, should be construed to encompass any and all variations which become evident as a result of the teaching provided herein.
Without further description, it is believed that one of ordinary skill in the art can, using the preceding description and the following illustrative examples, make and utilize the present invention and practice the claimed methods. The following working examples therefore are not to be construed as limiting in any way the remainder of the disclosure.
Example 1: Preliminary Studies
Several clinical studies were conducted focusing on utilizing ultrasound to image transcranial anatomy and pathology. (C. Robba et al., Neurocritical Care, vol. 32, no. 2, pp. 502-511, Apr. 2020.; S. Kapoor et al., Journal of Neuroimaging, vol. 32, no. 6, pp. 1013-1026, 2022.; B. C. Allen et al., Journal of Neuroimaging, vol. 33, no. 3, pp. 333-358, 2023.; E. J. Sigman, F. J. Laghari, and A. Sarwal, Seminars in Ultrasound, CT and MRI, 2023.; A. Sarwal, Lessons from the ICU, in: Robba et al., Eds. Cham: Springer International Publishing, 2023, pp. 275-290.) A robotic ultrasonography system was designed to detect hemorrhages within the peritoneal, pericardial, and pleural spaces and to ascertain lung collapse, such as pneumothorax. Additionally, it incorporated an RGBD-based point cloud reconstruction method that may be applicable for patient registration in further studies. (B. Mathur et al., in 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), 2019, pp. 649-656.) To test the algorithms, phantoms were specially designed to simulate various conditions,
including hyperechoic signals, hemorrhages, and fluid flow, enhancing the accuracy and reliability of the imaging techniques.
Example 2: Research Design and Methods
An electromagnetic (EM) position-tracking sensor was used to measure the probe's position in space. However, it is not adequate to determine the position of acquired two-dimensional ultrasound images by simply knowing the probe's position. Spatial calibration was implemented by using an additional EM tracker at the base of the system to find the transformation (i.e. rotation and translation) between the probe tracking sensor and the acquired ultrasound image. This was a notable step toward accurate anatomy localization in ultrasound-guided interventions.
Moreover, when using an ultrasound system with tracking sensors, the ultrasound images and sensor positions may update at different rates leading to time discrepancies. To ensure accurate reconstruction, the time offset was estimated between the images and positions through temporal calibration, which helped to correlate images with the probe positions. The open-source PLUS toolkit was used to perform these calibrations. (A. Lasso et al., IEEE transactions on bio-medical engineering, vol. 61, no. 10, pp. 2527-2537, Oct. 2014.)
Between Dell Seton Medical Center and Wake Forest Baptist Medical Center, >800 ICH and 2400 ischemic stroke patients are admitted every year. Based on the current clinical ICH volumes across the EMS agencies chosen, 30-60 ICH patients and 270-300 non-ICH patients were enrolled. Adult patients suspected of having cerebrovascular disease or stroke, with existing CT/CT-A images, were enrolled. 10 patients each having ICH, tumors, and ischemic strokes, and 5 patients with big intracranial bleeds were enrolled. Trained research personnel conducted transcranial ultrasound exams within 24 hours of CT/CT-A scans to acquire B-Mode and Doppler images of the intracranial anatomy from various angles, adhering to a standard protocol using transcranial acoustic windows (Fig. 5). An electromagnetic (EM) motion-tracking sensor was fixed to the ultrasound probe and another was fixed on the subject's mastoid process using surgical tape to monitor patient movement and establish a real-time, head-specific coordinate system. A 3D-printed fixture ensured the EM sensor was properly mounted on
the probe. Images and the probe’s relative position, captured via a frame grabber (Epiphan DVI2USB 3.0) connected to the ultrasound machine’s VGA or HDMI port, were recorded.
A voxel-based volume reconstruction technique (D. Gobbi et al., in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2002, Springer, 2002, pp. 156-163; J. Boisvert et al., in MICCAI 2008, International Workshop on Systems and Architectures for Computer Assisted Interventions, 2008, pp. 1-8) was used for both the parenchyma and the vasculature. The 2D image slices were synchronized with probe positions using temporal calibration, transformed through the probe's spatial calibration, and inserted into a 3D volume. This was implemented by iterating through each pixel of the rectangular or fan-shaped region of the slice and inserting the pixel value into the corresponding volume voxel through nearest-neighbor interpolation. The intensity of each voxel was then determined as a weighted average of all coinciding pixels from the 2D image. Minimum and maximum likelihood estimates were then used for removing acoustic shadows or other artifacts by multiple iterations over the same region.
After developing the reconstruction technology and proving its efficacy in diagnosing ICH, a robotic helmet was developed to bring this technology to the field. This robotic system was able to perform user-independent and autonomous transcranial ultrasound by accurately maneuvering multiple probes over the temporal and occipital windows. It was also able to adjust itself in real-time for the patient's movements. The temporal and occipital acoustic windows were utilized as these allow insonation of all large intracranial vessels and also allow for the maximal field of view for parenchymal structures. A novel, custom-designed array of 1D LiDAR proximity sensors was used to map the 3D geometry of the skull before beginning the scan. Assuming the skull to be rigid, this geometry was used to track the patient's movement, which was then accounted for as an offset for reconstructing the volumetric image. A one-time calibration was performed to calculate the transform between all the rigidly attached LiDAR proximity sensors using a fixed jig and a precision-machined spherical ball. The sensors were installed such that the light beams intersect at a common point. Non-linear optimization was used to minimize the error in the measured distance of the sphere to calculate the transform between each
sensor. The algorithm to calculate the transforms from the origin sensor i to each sensor j is presented below:

Input: initial guesses Tij; known radius r; measured radius of the sphere from sensor j, dj + r; number of sensors n
Result: optimal values Tij
(1) while not converged do
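Since the full listing is not reproduced here, the following is a hedged reconstruction of the optimization using non-linear least squares: solve for each sensor's pose so that all sensors agree on the sphere center across several ball placements. The parameterization and solver choice are assumptions, not the published algorithm.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def residuals(x, centers_in_sensor):
    """x: stacked per-sensor [rotvec(3), translation(3)], with sensor 0 fixed
    as the origin frame. centers_in_sensor: (n_poses, n_sensors, 3) sphere-
    center estimates, each taken at distance dj + r along sensor j's beam in
    that sensor's own frame, over several ball placements."""
    n = centers_in_sensor.shape[1]
    poses = np.vstack([np.zeros(6), x.reshape(n - 1, 6)])
    res = []
    for placement in centers_in_sensor:
        pts = np.array([R.from_rotvec(p[:3]).apply(c) + p[3:]
                        for p, c in zip(poses, placement)])
        # with the correct transforms, every sensor maps the ball to one point
        res.append(pts - pts.mean(axis=0))
    return np.concatenate(res).ravel()

# sol = least_squares(residuals, x0, args=(centers_in_sensor,))
# Each Tij can then be assembled from the solved rotation/translation pairs.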
Example 3: Blood Flow Velocities
A step function-based filtering method was used to insonate, filter, and confirm the presence of vessels which were then added as an image frame for reconstruction. Assuming a graph where the x-axis represents time and the y-axis represents frequency, the Doppler spectra shows a cloud of points indicating blood flow velocities at different times and frequencies. The envelope is a curve that traces the upper edge of this cloud.
The step function is a horizontal line (at fstep) that separates the graph into two regions: above the line (no flow) and below the line (flow). Positioning this line such that it closely follows the upper edge of the Doppler spectra therefore approximates the envelope. By finding the optimal fstep that minimizes the error, the envelope of the blood flow velocities over time can be tracked. This provides a clear and simplified representation of the maximum blood flow velocities detected by the Doppler ultrasound.
Example 4: Robotic System
The robotic system was designed to be compact and portable such that it can be disassembled into 4 small components for easy storage in an ambulance. It consists of 3 actuation modules, one on each temporal window and another on the sub-occipital window. Each module holds a linear phased array probe and has 5 degrees of articulation needed for an ultrasound exam. The sub-occipital module was designed to be low-profile such that it can neatly fit under the patient's headrest and slide out during the scan. The degree of freedom that pushes the probe in the medial-lateral direction for temporal scanning, and anterior-posterior for sub-occipital scanning, was designed to be
passive with a soft spring to ensure patient safety and limit the maximum force being applied to the cranium.
The robotic manipulator was equipped with position encoders on each joint for estimating the probe’s position using forward kinematics. To improve localization accuracy, electromagnetic (EM) position trackers were added at the helmet’s base and on the probe. Data from these two sensors was fused to increase localization accuracy, resulting in better reconstruction outcomes.
Example 5: Detection of Intracerebral Hemorrhage (ICH) via 4D Brain Topography Images
Three blinded subject matter experts were recruited to review the 4D reconstructed images for detection of ICH. Standard, open-source medical image visualizing tools such as MicroDicom, 3DSlicer, and VTK were used. This analysis was then compared with CT or CT-A scans to check for accuracy, sensitivity, and specificity of ICH detection. Markedly reduced artifacts were expected compared to 2D ultrasound scans, and thus a significantly improved diagnostic accuracy relative to that seen in previous studies.
The ultrasound reconstruction was compared with CT or CT-A scans taken within 24 hours for key imaging variables (size, location, and volume of bleed), and data was gathered on alternative etiology when found, patient demographics, clinical characteristics at onset, past medical history relevant to ICH risk factors, ICH severity score, time from the ultrasound scan confirming ICH to the first head CT confirming ICH, and time of arrival to the Comprehensive Stroke Center ED/ICU. This data was gathered from electronic medical records on all patients arriving at the two emergency departments to minimize the workload given limited resources.
Published sensitivity and specificity for hemorrhagic stroke are 0.65 and 0.88 for the Siriraj stroke score, and 0.54 and 0.89 for the Guy’s Hospital stroke score (G. C. Hawkins et al., Stroke, vol. 26, no. 8, pp. 1338-1342, 1995; C. J. Weir et al., The Lancet, vol. 344, pp. 999-1002, 1994; P. Badam et al., The National Medical Journal of India, vol. 16, no. 1, pp. 8-12, 2003; P. Raghuram et al., Journal of Clinical and Diagnostic Research, vol. 6, pp. 851-854, 2012). With the anticipated sample of 30-60 subjects, if a sensitivity of 90% or higher were observed, a 95% Clopper-Pearson exact confidence interval would have a lower bound of 73.5-79.5%. Given that specificity is of key importance to avoid misdiagnosing an ischemic stroke, and that lower bounds improve with larger sample sizes, the lower bounds for the thresholds were determined to be better than the published 89% specificity of the clinical scores.
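The lower bound of a Clopper-Pearson exact interval can be computed directly from the beta distribution; the following sketch reproduces the bounds quoted above at the two ends of the anticipated sample size (the function name is illustrative):

```python
from scipy.stats import beta

def clopper_pearson_lower(k, n, alpha=0.05):
    """Lower bound of the two-sided exact (Clopper-Pearson) confidence
    interval for a binomial proportion with k successes out of n trials."""
    return 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)

# Observed sensitivity of 90% at each end of the anticipated sample:
print(clopper_pearson_lower(27, 30))  # n = 30 -> lower bound ~ 0.735
print(clopper_pearson_lower(54, 60))  # n = 60 -> lower bound ~ 0.795
```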
To further evaluate the technology, one patient with a confirmed CT-based diagnosis of ICH was recruited to wear the robotic helmet prototype. Metrics such as the time to don the helmet, acquire all the images, and complete the final 4D reconstruction, as well as the presence of a detectable ICH in the images, were used, as evaluated by expert collaborators.
The robotic ultrasound system offered superior image quality compared to manual scans, owing to its ability to maintain optimal probe contact and perform precise adjustments without human errors such as tremor or fatigue. This led to clearer, more consistent, and more accurate 4D reconstructions. Leveraging multiple transcranial acoustic windows for comprehensive insonation and volumetric reconstruction aided in visualizing anatomy on 2D slices from different perspectives and also significantly reduced artifacts, leading to a substantial improvement in the sensitivity, specificity, and accuracy of intracranial hemorrhage (ICH) diagnosis using cranial ultrasound. The technology was therefore expected to enhance pre-hospital diagnosis of ICH by reducing the time needed to detect it.
Example 6: Image Reconstruction Studies
Referring now to Figs. 7A-7C, shown are exemplary EM trackers and ultrasound probes used for testing of the brain scanning technology and image reconstruction method. Referring now to Fig. 8, shown is a flow diagram showing the method of obtaining a 4D image of the brain.
Referring now to Figs. 9A and 9B, shown are raw and segmented ultrasonic images showing an axial view of major intracranial vessels reconstructed from Doppler scans. Referring now to Figs. 10A and 10B, shown are reconstructed volumetric images of cerebral anatomy from B-mode scans for midline measurement. Fig. 10B shows the cerebral anatomy from B-mode scans with a midline measurement of 65 mm from a healthy volunteer whose skull width is 140 mm. Referring now to Figs. 11A and 11B, shown is a raw reconstruction of the mid-brain anatomy from B-mode scans for midline measurement (Fig. 11A) and a raw reconstruction of the mid-brain anatomy from B-mode scans for midline measurement where the orange outline shows the Falx Cerebri, the green shading depicts the Thalamus, the red outline shows the Choroid, and the pink outline shows the contralateral skull (Fig. 11B). Referring now to Figs. 12A and 12B, shown are images of reconstructed anatomy from B-mode scans for midline measurement. Referring now to Fig. 13, shown is an image of a 2 cm segment of a reconstructed Middle Cerebral Artery (MCA). Referring now to Fig. 14, shown are raw and segmented images of the major intracranial vessels reconstructed from Doppler scans. These reconstructions were performed using the PLUS toolkit and 3DSlicer.
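The PLUS toolkit performs the tracked-frame volume reconstruction internally; purely as an illustration of the underlying pixel-nearest-neighbor compounding step (and not of the PLUS implementation itself), the sketch below inserts one tracked B-mode frame into a voxel accumulator, assuming world coordinates in millimeters, a 1 mm isotropic voxel grid with its origin at the world origin, and hypothetical variable names throughout:

```python
import numpy as np

def insert_frame(volume, counts, frame, pose, spacing_mm):
    """Insert one tracked 2D ultrasound frame into a voxel volume.

    volume, counts : 3D accumulators (summed intensity, hit count)
    frame  : 2D grayscale image of shape (rows, cols)
    pose   : 4x4 homogeneous image-to-world transform from the tracker
    spacing_mm : (sx, sy) physical size of one image pixel in mm
    """
    rows, cols = frame.shape
    c, r = np.meshgrid(np.arange(cols), np.arange(rows))
    # Homogeneous physical coordinates of every pixel in the image plane
    pts = np.stack([c * spacing_mm[0], r * spacing_mm[1],
                    np.zeros_like(c), np.ones_like(c)], axis=-1).reshape(-1, 4)
    world = pts @ pose.T                       # map pixels into the world frame
    idx = np.round(world[:, :3]).astype(int)   # nearest voxel on a 1 mm grid
    ok = np.all((idx >= 0) & (idx < volume.shape), axis=1)
    idx, vals = idx[ok], frame.reshape(-1)[ok]
    np.add.at(volume, tuple(idx.T), vals)      # accumulate intensities
    np.add.at(counts, tuple(idx.T), 1)         # and per-voxel sample counts
```

After all frames are inserted, volume / np.maximum(counts, 1) yields the averaged compounded volume, and hole-filling interpolation handles any voxels no frame visited.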
References
1. S. A. Boppart and R. Richards-Kortum, “Point-of-care and point-of-procedure optical imaging technologies for primary care and global health,” Science translational medicine, vol. 6, no. 253, p. 253rv2, Sep. 2014.
2. J. C. Martinez-Gutierrez et al., “Technological innovation for prehospital stroke triage: ripe for disruption,” Journal of Neurointerventional Surgery, vol. 11, no. 11, pp. 1085-1090, Nov. 2019.
3. C. W. Tsao et al., “Heart disease and stroke statistics — 2023 update: A report from the american heart association,” Circulation, vol. 147, no. 8, pp. e93-e621, 2023.
4. S. Kapoor et al., “Brain topography on adult ultrasound images: Techniques, interpretation, and image library,” Journal of Neuroimaging, vol. 32, no. 6, pp. 1013-1026, 2022.
5. O. V. Solberg et al., “Freehand 3d ultrasound reconstruction algorithms — a review,” Ultrasound in Medicine & Biology, vol. 33, no. 7, pp. 991-1009, 2007.
6. F. Al-Mufti et al., “Clinical and radiographic predictors of intracerebral hemorrhage outcome,” Interv Neurol, vol. 7, no. 1-2, pp. 118-136, 2018.
7. B. Ovbiagele and A. I. Qureshi, “Intracerebral hemorrhage therapeutics,” in Prehospital and Emergency Department Management of Intracerebral Hemorrhage: Concepts and Customs. Cham, 2018, pp. 1-16.
8. R. Sahni and J. Weinberger, “Management of intracerebral hemorrhage,” Vasc Health Risk Manag, vol. 3, pp. 701-709, 2007.
9. Y. Hu, J. Wang, and B. Luo, “Epidemiological and clinical characteristics of 266 cases of intracerebral hemorrhage in hangzhou, china,” J Zhejiang Univ Sci B, vol. 14, pp. 496-504, 2013.
10. R. Al-Shahi Salman et al., “Absolute risk and predictors of the growth of acute spontaneous intracerebral haemorrhage: a systematic review and meta-analysis of individual patient data,” Lancet Neurol, vol. [volume], no. [issue], pp. [pages], [year].
11. J. Caceres and J. Goldstein, “Intracranial hemorrhage,” Emerg Med Clin North Am, vol. 30, no. 3, pp. 771-794, Aug 2012.
12. N. Yassi et al., “Tranexamic acid for intracerebral haemorrhage within 2 hours of onset: protocol of a phase ii randomised placebo-controlled double-blind multicentre trial,” Stroke Vasc Neurol, vol. 7, pp. 158-165, 2022.
13. L. Song et al., “Intensive ambulance-delivered blood pressure reduction in hyper-acute stroke trial (interact4): study protocol for a randomized controlled trial,” Trials, vol. 22, 2021.
14. A. Naidech et al., “Recombinant factor viia for hemorrhagic stroke treatment at earliest possible time (fastest): protocol for a phase iii, double-blind, randomized, placebo-controlled trial,” Int J Stroke, vol. 17, pp. 806-809, 2022.
15. W. J. Powers et al., “2015 American Heart Association/ American Stroke Association Focused Update of the 2013 Guidelines for the Early Management of Patients With Acute Ischemic Stroke Regarding Endovascular Treatment,” Stroke, vol. 46, no. 10, pp. 3020-3035, Oct. 2015.
16. E. Venema et al., “Prehospital triage strategies for the transportation of suspected stroke patients in the united states,” Stroke, vol. 51, no. 11, pp. 3310-3319, Nov 2020, epub 2020 Oct 7. PMID: 33023425; PMCID: PMC7587242.
17. E. Brandler et al., “Delay in hospital presentation is the main reason large vessel occlusion stroke patients do not receive intravenous thrombolysis,” J Am Coll Emerg Physicians Open, vol. 4, no. 5, p. e13048, Oct 2023, epub 2023 Oct 11. PMID: 37840864; PMCID: PMC10568043.
18. K. Suyama et al., “Delays in initial workflow cause delayed initiation of mechanical thrombectomy in patients with in-hospital ischemic stroke,” Fujita Medical Journal, vol. 8, no. 3, pp. 73-78, Aug 2022, epub 2021 Nov 25. PMID: 35949519; PMCID: PMC9358672.
19. L. Schlemm et al., “Impact of prehospital triage scales to detect large vessel occlusion on resource utilization and time to treatment,” Stroke, vol. 49, no. 2, pp. 439-446, 2018.
20. E. Venema et al., “Effect of interhospital transfer on endovascular treatment for acute ischemic stroke,” Stroke, vol. 50, no. 4, pp. 923-930, 2019.
21. A. Sarwal, “Cranial Ultrasound for Intracerebral Pathology,” in Basic Ultrasound Skills “Head to Toe” for General Intensivists, ser. Lessons from the ICU, C. Robba et al., Eds. Cham: Springer International Publishing, 2023, pp. 275-290.
22. C. Henry et al., “Evaluation of the transverse venous sinus with transcranial color-coded duplex,” Journal of Neuroimaging, vol. 33, pp. 566-574, 2023.
23. C. Robba et al., “Brain ultrasonography: methodology, basic and advanced principles and clinical applications, a narrative review,” Intensive Care Medicine, vol. 45, pp. 913-927, 2019.
24. R. Hakimi, A. V. Alexandrov, and Z. Garami, “Neuroultrasonography,” Neurologic Clinics, vol. 38, no. 1, pp. 215-229, Feb. 2020.
25. T. Postert et al., “Insufficient and absent acoustic temporal bone window: potential and limitations of transcranial contrast-enhanced color-coded sonography and contrast-enhanced power-based sonography,” Ultrasound in Medicine & Biology, vol. 23, no. 6, pp. 857-862, 1997.
26. M. Marinoni et al., “Technical limits in transcranial doppler recording: inadequate acoustic windows,” Ultrasound in Medicine & Biology, vol. 23, no. 8, pp. 1275-1277, 1997.
27. M. Y.-M. Chan et al., “Success rate of transcranial doppler scanning of cerebral arteries at different transtemporal windows in healthy elderly individuals,” Ultrasound in Medicine & Biology, vol. 49, pp. 588-598, 2023.
28. E. J. Sigman, F. J. Laghari, and A. Sarwal, “Neuro Point-of-Care Ultrasound,” Seminars in Ultrasound, CT and MRI, 2023.
29. A. Sarwal et al., “Exploratory study to assess feasibility of intracerebral hemorrhage detection by point of care cranial ultrasound,” The Ultrasound Journal, vol. 14, no. 1, p. 40, Oct. 2022.
30. B. C. Allen et al., “Transcranial ultrasonography to detect intracranial pathology: A systematic review and meta-analysis,” Journal of Neuroimaging, vol. 33, no. 3, pp. 333-358, 2023.
31. M. Maurer et al., “Differentiation between intracerebral hemorrhage and ischemic stroke by transcranial color-coded duplex-sonography,” Stroke, vol. 29, pp. 2563-2567, 1998.
32. W.-D. Niesen et al., “Transcranial sonography to differentiate primary intracerebral hemorrhage from cerebral infarction with hemorrhagic transformation,” Journal of Neuroimaging, vol. 28, pp. 370-373, 2018.
33. G. Becker et al., “Differentiation between ischemic and hemorrhagic stroke by transcranial color-coded real-time sonography,” Journal of Neuroimaging, vol. 3, pp. 41-47, 1993.
34. A. Sarwal and N. Elder, “Point-of-care cranial ultrasound in a hemi craniectomy patient,” Clin Pract Cases Emerg Med, vol. 2, no. 4, pp. 375-377, 2018.
35. N. Matsumoto et al., “Evaluation of cerebral hemorrhage volume using transcranial color-coded duplex sonography,” J Neuroimaging, 2011.
36. G. Seidel et al., “Sonographic evaluation of hemorrhagic transformation and arterial recanalization in acute hemispheric ischemic stroke,” Stroke, vol. 40, pp. 119-123, 2009.
37. E. Ringelstein et al., “Consensus on microembolus detection by tcd,” Stroke, vol. 29, pp. 725-729, 1998.
38. M. Saqqur, D. Zygun, and A. Demchuk, “Role of transcranial doppler in neurocritical care,” Critical Care Medicine, vol. 35, pp. S216-S223, 2007.
39. A. H. Katsanos et al., “Transcranial doppler versus transthoracic echocardiography for the detection of patent foramen ovale in patients with cryptogenic cerebral ischemia: A systematic review and diagnostic test accuracy meta-analysis,” Annals of Neurology, vol. 79, no. 4, pp. 625-635, 2016.
40. S. Sarkar et al., “Role of transcranial doppler ultrasonography in stroke,” Postgraduate Medical Journal, vol. 83, pp. 683-689, 2007.
41. S. Purkayastha and F. Sorond, “Transcranial Doppler Ultrasound: Technique and Application,” Seminars in neurology, vol. 32, no. 4, pp. 411-420, Sep. 2012.
42. M. N. Rubin et al., “Robot-assisted transcranial doppler versus transthoracic echocardiography for right to left shunt detection,” [journal], vol. [volume], no. [issue], pp. [pages], [year].
43. K. Niederkorn et al., “Three-dimensional transcranial doppler blood flow mapping in patients with cerebrovascular disorders,” Stroke, vol. 19, no. 11, pp. 1335-1344, 1988.
44. B. Lindsey et al., “Simultaneous bilateral real-time 3-d transcranial ultrasound imaging at 1 mhz through poor acoustic windows,” Ultrasound in Medicine & Biology, vol. 39, no. 4, pp. 721-734, Apr 2013, epub 2013 Feb 13. PMID: 23415287; PMCID: PMC3764922.
45. M. Woydt et al., “Transcranial duplex-sonography in intracranial hemorrhage: evaluation of transcranial duplex-sonography in the diagnosis of spontaneous and traumatic intracranial hemorrhage,” Zentralblatt für Neurochirurgie, vol. 57, no. 3, pp. 129-135, 1996.
46. M. Masaeli et al., “Point of care ultrasound in detection of brain hemorrhage and skull fracture following pediatric head trauma; a diagnostic accuracy study,” Arch Acad Emerg Med, vol. 7, p. e53, 2019.
47. G. Becker, J. Winkler, and U. Bogdahn, “Transcranial color-coded real time sonography in adults, part 2: cerebral hemorrhage and tumors,” Ultraschall in der Medizin, vol. 12, no. 5, pp. 211-217, 1991.
48. H.-S. Wang et al., “Transcranial ultrasound diagnosis of intracranial lesions in children with headaches,” Pediatric Neurology, vol. 26, pp. 43-46, 2002.
49. W. Ziai and C. Cornwell, Eds., Neurovascular Sonography. Cham, Switzerland: Springer International Publishing, 2022.
50. G. Becker et al., “Reliability of transcranial colour-coded real-time sonography in assessment of brain tumours: correlation of ultrasound, computed tomography and biopsy findings,” Neuroradiology, vol. 36, pp. 585-590, 1994.
51. “Preoperative and postoperative follow-up in high-grade gliomas: comparison of transcranial color-coded real-time sonography and computed tomography findings,” Ultrasound in Medicine & Biology, vol. 21, pp. 1123-1135, 1995.
52. “Postoperative neuroimaging of high-grade gliomas: comparison of transcranial sonography, magnetic resonance imaging, and computed tomography,” Neurosurgery, vol. 44, pp. 469-477, 1999.
53. K. Meyer, G. Seidel, and U. Knopp, “Transcranial sonography of brain tumors in the adult: an in vitro and in vivo study,” Journal of Neuroimaging, vol. 11, pp. 287-292, 2001.
54. T. Prell et al., “Transcranial brainstem sonography as a diagnostic tool for amyotrophic lateral sclerosis,” Amyotroph Lateral Scler Frontotemporal Degener, vol. 15, pp. 244-249, 2014.
55. P. Bartova et al., “Transcranial sonography and (123)i-fp-cit single photon emission computed tomography in movement disorders,” Ultrasound Med Biol, vol. 40, pp. 2365-2371, 2014.
56. S. Hellwig et al., “Transcranial sonography and [18f]fluorodeoxy glucose positron emission tomography for the differential diagnosis of parkinsonism: a head-to-head comparison,” Eur J Neurol, vol. 21, pp. 860-866, 2014.
57. D.-H. Li et al., “Transcranial sonography of the substantia nigra and its correlation with dat-spect in the diagnosis of parkinson’s disease,” Parkinsonism Relat Disord, vol. 21, pp. 923-928, 2015.
58. U. Walter et al., “Sonographic discrimination of corticobasal degeneration vs progressive supranuclear palsy,” Neurology, vol. 63, pp. 504-509, 2004.
59. F. Doepp et al., “Brain parenchyma sonography and 123i-fp-cit spect in parkinson’s disease and essential tremor,” Movement Disorders, vol. 23, pp. 405-410, 2008.
60. M. Budišić et al., “Transcranial sonography in the evaluation of pineal lesions: two-year follow-up study,” Acta Clinica Croatica, vol. 47, pp. 205-210, 2008.
61. M. Budišić et al., “Pineal gland cyst evaluated by transcranial sonography,” European Journal of Neurology, vol. 15, pp. 229-233, 2008.
62. U. Walter et al., “Sonographic detection of basal ganglia lesions in asymptomatic and symptomatic Wilson disease,” Neurology, vol. 64, pp. 1726-1732, 2005.
63. A. Gaenslen et al., “The specificity and sensitivity of transcranial ultrasound in the differential diagnosis of parkinson’s disease: a prospective blinded study,” Lancet Neurology, vol. 7, pp. 417-424, 2008.
64. J. Hagenah et al., “Substantia nigra hyperechogenicity correlates with clinical status and number of parkin mutated alleles,” Journal of Neurology, vol. 254, pp. 1407-1413, 2007.
65. J. Seeger, J. Schmidt, and T. Flynn, “Preoperative saphenous and cephalic vein mapping as an adjunct to reconstructive arterial surgery,” Annals of surgery, vol. 205, no. 6, pp. 733-739, 1987.
66. F. Galarce et al., “Fast reconstruction of 3D blood flows from Doppler ultrasound images and reduced models,” Computer Methods in Applied Mechanics and Engineering, vol. 375, p. 113559, Mar. 2021.
67. T. Karlita et al., “Automatic bone outer contour extraction from b-mode ultrasound images based on local phase symmetry and quadratic polynomial fitting,” vol. 10443, 2017.
68. X. Wen and S. Salcudean, “P6d-5 enhancement of bone surface visualization using ultrasound radio-frequency signals,” in 2007 IEEE Ultrasonics Symposium Proceedings, 2007, pp. 2535-2538.
69. F. Prada et al., “Identification of residual tumor with intraoperative contrast-enhanced ultrasound during glioblastoma resection,” Neurosurgical focus, vol. 40, no. 3, p. E7, 2016.
70. D. Gobbi, B. K. Lee, and T. Peters, “Correlation of preoperative mri and intraoperative 3d ultrasound to measure brain tissue shift,” in Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures, vol. 4319. International Society for Optics and Photonics, 2000.
71. Y. Yoon et al., “Efficient b-mode ultrasound image reconstruction from sub-sampled rf data using deep learning,” IEEE transactions on medical imaging, vol. 36, no. 12, pp. 2474-2484, 2017.
72. K. Rosenfield et al., “Three-dimensional reconstruction of human carotid arteries from images obtained during noninvasive b-mode ultrasound examination,” The American journal of cardiology, vol. 70, no. 3, pp. 379-384, 1992.
73. S. Merouche et al., “A robotic ultrasound scanner for automatic vessel tracking and three-dimensional reconstruction of b-mode images,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 63, no. 1, pp. 35-46, 2016.
74. C. Robba et al., “Brain Ultrasonography Consensus on Skill Recommendations and Competence Levels Within the Critical Care Setting,” Neurocritical Care, vol. 32, no. 2, pp. 502-511, Apr. 2020.
75. B. Mathur et al., “A semi-autonomous robotic system for remote trauma assessment,” in 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), 2019, pp. 649-656.
76. A. Lasso et al., “PLUS: open-source toolkit for ultrasound-guided intervention systems,” IEEE transactions on bio-medical engineering, vol. 61, no. 10, pp. 2527-2537, Oct. 2014.
77. D. Gobbi and T. Peters, “Interactive intra-operative 3d ultrasound reconstruction and visualization,” in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2002. Springer, 2002, pp. 156-163.
78. J. Boisvert et al., “An open-source solution for interactive acquisition, processing and transfer of interventional ultrasound images,” in MICCAI 2008, International Workshop on Systems and Architectures for Computer Assisted Interventions, 2008, pp. 1-8.
79. G. C. Hawkins et al., “Inadequacy of clinical scoring systems to differentiate stroke subtypes in population-based studies,” Stroke, vol. 26, no. 8, pp. 1338-1342, 1995.
80. C. J. Weir et al., “Poor accuracy of stroke scoring systems for differential clinical diagnosis of intracranial haemorrhage and infarction,” The Lancet, vol. 344, pp. 999-1002, 1994.
81. P. Badam et al., “Poor accuracy of the siriraj and guy’s hospital stroke scores in distinguishing haemorrhagic from ischaemic stroke in a rural, tertiary care hospital,” The National Medical Journal of India, vol. 16, no. 1, pp. 8-12, 2003.
82. P. Raghuram, M. Biradar, and J. Jeganathan, “Comparison of the siriraj stroke score and the guy’s hospital score in south india,” Journal of Clinical and Diagnostic Research, vol. 6, pp. 851-854, 2012.
83. K. J. Parker, OMICS J Radiol, vol. 5, p. 236, 2016, doi: 10.4172/2167-7964.1000236.
84. H. Tai, M. Khairalseed, and K. Hoyt, “3-D H-scan ultrasound imaging and use of a convolutional neural network for scatterer size estimation,” Ultrasound in Medicine & Biology, vol. 46, no. 10, pp. 2810-2818, 2020.
85. M. Khairalseed, K. Brown, K. J. Parker, and K. Hoyt, “Real-time H-scan ultrasound imaging using a Verasonics research scanner,” Ultrasonics, vol. 94, pp. 28-36, 2019.
86. H. Tai, “Tissue Characterization Using H-scan Ultrasound Imaging,” PhD dissertation, The University of Texas at Dallas, 2022.
The disclosures of each and every patent, patent application, and publication cited herein are hereby each incorporated herein by reference in their entirety. While this invention has been disclosed with reference to specific embodiments, it is apparent that other embodiments and variations of this invention may be devised by others skilled in the art without departing from the true spirit and scope of the invention.
The appended claims are intended to be construed to include all such embodiments and equivalent variations.
Claims
1. A brain scanning system comprising: a helmet configured to at least partially enclose the head of a subject; one or more scanning modules fixedly attached to the helmet; one or more ultrasound probes attached to each scanning module of the one or more scanning modules; at least one position tracker attached to each ultrasound probe of the one or more ultrasound probes and configured to record the relative position of each of the one or more ultrasound probes; and a computing system communicatively connected to each scanning module and configured to track the position of each ultrasound probe and to collect ultrasound data from the one or more ultrasound probes.
2. The system of claim 1, wherein the one or more ultrasound probes are configured to move relative to the one or more scanning modules.
3. The system of claim 1, wherein the one or more ultrasound probes are fixedly attached to the helmet.
4. The system of claim 3, wherein the one or more ultrasound probes are ultrasound patches, and the helmet is an elastic, wearable interface.
5. The system of claim 1, wherein the one or more scanning modules are positioned on the lateral sides of the helmet and configured to capture ultrasound images of the temporal regions of the brain.
6. The system of claim 1, wherein the one or more scanning modules are positioned on the base of the helmet and configured to capture ultrasound images of the occipital region of the brain.
7. The system of claim 1, wherein the system comprises at least three scanning modules.
8. The system of claim 2, wherein the helmet further comprises a position tracker configured to track the position of the ultrasound probes of each scanning module.
9. The system of claim 2, wherein the vertical position, the horizontal position, orientation, and tilt of each ultrasound probe of each scanning module may be adjusted via the computing system.
10. The system of claim 1, wherein the helmet further comprises one or more proximity sensors to sense real-time movement of a patient undergoing a brain scan.
11. The system of claim 10, wherein the computing system is configured to adjust the position and orientation of the ultrasound probes of each of the one or more scanning modules to account for real-time patient movement.
12. The system of claim 1, wherein the helmet has a size of 10-30 cm by 10-30 cm by 10-30 cm.
13. The system of claim 1, wherein the helmet comprises a material selected from the group consisting of: plastics, metals, metal alloys, polymers, fabrics, and combinations thereof.
14. The system of claim 1, wherein the helmet further comprises one or more fiducial markers for image registration.
15. The system of claim 1, further comprising an ultrasound gel dispensing mechanism.
16. The system of claim 1, further comprising one or more contact elements movably attached to the helmet and configured to contact the patient’s head for head stabilization and mechanical registration.
17. The system of claim 16, wherein the one or more contact elements are configured to contact the patient’s head along one or more axes comprising: the medial-lateral axis, the anterior-posterior axis, and the superior-inferior axis.
18. The system of claim 1, further comprising an electroencephalogram (EEG) or electrocardiogram (ECG or EKG) module.
19. The system of claim 1, further comprising one or more lasers removably attached to the helmet and configured to project a laser line on the patient’s head, wherein the laser line intersects an anatomical landmark of the head.
20. The system of claim 1, wherein the system is portable.
21. The system of claim 10, wherein the computing system comprises a processor and a non-transitory computer readable medium, wherein the non-transitory computer readable medium comprises instructions which, when executed by the processor, perform the steps of: positioning and orienting each ultrasound probe of each scanning module for image acquisition; acquiring ultrasound images in at least two ultrasound modes from each ultrasound probe; recording the position of each ultrasound probe of each scanning module during image acquisition; sensing real-time patient movement via the one or more proximity sensors; and adjusting the position or angle of the ultrasound probes of each scanning module based on the patient movement.
22. A method for 4D brain reconstruction comprising: providing the system of claim 1; acquiring ultrasound images in a first ultrasound mode from each scanning module while acquiring positional data of the ultrasound probes of each scanning module via the position trackers;
registering each ultrasound image in the first ultrasound mode with the probe positional data to generate a 3D structural reconstruction of the brain; acquiring ultrasound images in a second ultrasound mode from each scanning module while acquiring positional data of the ultrasound probes of each scanning module via the position trackers; registering each ultrasound image in the second ultrasound mode with the probe positional data to generate a vascular flow reconstruction of the brain; and overlaying the vascular flow reconstruction on the 3D structural reconstruction to obtain a 4D volumetric reconstruction of the brain.
23. The method of claim 22, wherein the first ultrasound mode is B-mode ultrasound, and the second ultrasound mode is Doppler ultrasound.
24. The method of claim 22, wherein the one or more scanning modules acquire ultrasound images from the left temporal region, the right temporal region, the occipital region, the orbital region, the mandibular region, or any combinations thereof.
25. The method of claim 22, further comprising a step of mapping the 3D geometry of the patient’s skull via the one or more proximity sensors of the helmet.
26. The method of claim 22, further comprising a step of calibrating the ultrasound probe position data.
27. The method of claim 22, further comprising a step of removing artifacts from the 4D reconstruction of the brain.
28. The method of claim 22, further comprising a step of tissue characterization.
29. The method of claim 22, further comprising the steps of: measuring a heart cycle of a patient via an electrocardiogram (ECG); and
triggering an acquisition of an ultrasonic image from at least one of the scanning modules at a consistent point in the heart cycle of the patient.