CN104305957B - Wear-type molecular image navigation system - Google Patents
- Publication number
- CN104305957B (application CN201410433156.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- module
- light source
- registration
- infrared
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- A—HUMAN NECESSITIES › A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE › A61B—DIAGNOSIS; SURGERY; IDENTIFICATION › A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0035—Features or image-related aspects of imaging apparatus adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
- A61B5/0037—Performing a preliminary scan, e.g. a prescan for identifying a region of interest
- A61B5/0075—Measuring using light, by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
- A61B5/6803—Sensor mounted on worn items; head-worn items, e.g. helmets, masks, headphones or goggles
- A61B5/742—Details of notification to user or communication with user or patient; user input means using visual displays
- A61B5/7405—Details of notification to user or communication with user or patient; user input means using sound
Abstract
A head-mounted molecular image navigation system, comprising: a multispectral light source module for irradiating a detection area with visible and near-infrared light; a signal acquisition module for acquiring near-infrared fluorescence images and visible-light images of the imaging object; a head-mounted system support module for carrying the multispectral light source module and the signal acquisition module and for adjusting the illumination of the detection area by the multispectral light source module; and an image processing module for fusing the acquired near-infrared images with the visible-light images and outputting the fused image. According to embodiments of the present invention, the imaging equipment can be used flexibly in practice, broadening the range of applications of optical molecular image navigation.
Description
Technical Field
The present invention relates to an imaging system, and in particular to a head-mounted molecular image navigation system.
Background Art
As a new method and means of non-invasive visualization imaging, molecular imaging essentially reflects the changes at the physiological molecular level and in overall function that are triggered by changes in molecular regulation. Studying the life activities of genes, biological macromolecules and cells in vivo at the molecular level is therefore an important technology, and basic research on in vivo optical bioimaging, built on molecular techniques, tomographic imaging, optical imaging and simulation methodology, has become one of the hot and difficult topics in molecular imaging research.
Molecular imaging equipment combines traditional medical imaging with modern molecular biology and can observe physiological or pathological changes at the cellular and molecular level, offering non-invasive, real-time, in vivo imaging with high specificity, high sensitivity and high resolution. On the one hand, molecular imaging can greatly accelerate drug research and development and shorten preclinical study time; it can also provide more accurate diagnoses, match treatment plans to a patient's genetic profile, and help pharmaceutical companies develop personalized drugs. On the other hand, it can be applied in biomedicine to achieve goals such as in vivo quantitative analysis, image-guided navigation and molecular typing. However, systems that use this approach are relatively complex, and their ease of operation and wearing comfort still need improvement.
The present invention therefore proposes a head-mounted molecular image navigation system that detects in vivo targets in molecular images by multispectral excitation, broadening the range of applications.
Summary of the Invention
The present invention provides a head-mounted molecular image navigation system, comprising:
a multispectral light source module for irradiating a detection area with visible and near-infrared light;
a signal acquisition module for acquiring near-infrared fluorescence images and visible-light images of the imaging object;
a head-mounted system support module for carrying the multispectral light source module and the signal acquisition module and for adjusting the illumination of the detection area by the multispectral light source module; and
an image processing module for fusing the acquired near-infrared images with the visible-light images and outputting the fused image.
Embodiments of the present invention have the following technical effects:
1. Molecular image navigation and molecular imaging are performed with a head-mounted device, which improves convenience while providing full functionality.
2. Projection imaging guides the operator in pre-judging the imaging range, adding a human-computer interaction function.
3. Voice recognition frees the operator's hands while the system is in use, allowing more precise control of the head-mounted molecular image navigation system.
4. The threshold-decomposition feature extraction method markedly improves the signal-to-background ratio, helping the operator perform accurate, real-time, image-guided operations.
Description of the Drawings
Fig. 1 is a schematic structural diagram of the head-mounted system support module according to an embodiment of the present invention;
Fig. 2 is a block diagram of the head-mounted molecular image navigation system according to an embodiment of the present invention;
Fig. 3 is a flowchart of the image processing method of the head-mounted molecular image navigation system according to an embodiment of the present invention.
Detailed Description
To make the purpose, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Based on excitation fluorescence imaging in molecular imaging, embodiments of the present invention provide a head-mounted molecular image navigation system.
Fig. 1 is a schematic structural diagram of the head-mounted system support module according to an embodiment of the present invention. Fig. 2 is a block diagram of the head-mounted molecular image navigation system according to an embodiment of the present invention. As shown in Fig. 2, the head-mounted molecular image navigation system may include a multispectral light source module 110 for providing light in several different spectral bands to illuminate the subject; an optical signal acquisition module 120 for acquiring fluorescence excitation images and visible-light images of the subject in real time; a head-mounted system support module 130 for adjusting wearing comfort for the operator and ensuring that imaging proceeds safely and effectively; and an image processing module 140 for performing image segmentation, feature extraction, image registration and other processing, fusing the visible-light image with the fluorescence image and outputting the fused image.
The operation of the multispectral light source module 110, the optical signal acquisition module 120, the head-mounted system support module 130 and the image processing module 140 will now be described in detail in turn.
The multispectral light source module 110 may include a cold light source 111, a near-infrared laser 112 and a light source coupler 113. The cold light source 111 emits visible light toward the subject and may be fitted with a first band-pass filter that transmits visible light with wavelengths of 400-650 nm. The near-infrared laser 112 is configured to emit near-infrared light with a center wavelength of, for example, 785 nm. The excitation light can be delivered through an optical fiber. Those skilled in the art will appreciate that embodiments of the invention are not limited to this implementation; other means known in the art may also be used to emit visible and near-infrared light. When the detection area is excited, light from the cold light source 111 and the near-infrared laser 112 is emitted simultaneously through a single optical fiber on the basis of spectral separation. Specifically, the light emitted by the visible and near-infrared sources is coupled at the light exit, where the light source coupler 113 is placed. The light source coupler 113 may be a diverging lens that converts the collimated point source into a cone beam, enlarging the illuminated area so that the excitation light illuminates the detection area uniformly. For example, an optical lens may be mounted at the exit of the near-infrared laser 112 and reverse-coupled to the laser output so that the light exits with a large divergence angle. One end of the optical fiber may be mechanically fixed to the optical lens, and the other end connected to the head-mounted system support module 130.
The optical signal acquisition module 120 may include a camera 121, a lens 122 and a coordinate projector 123. The camera 121 is configured to acquire near-infrared fluorescence signals and visible-light signals, with the cold light source illuminating the background during acquisition. For example, the reference parameters for near-infrared signal acquisition may be set as follows: at 800 nm, quantum efficiency above 30%, frame rate above 30 fps, and pixel size (the camera's smallest photosensitive element) above 5 microns. Preferably, a second band-pass filter that transmits near-infrared light with wavelengths of 810-870 nm is placed between the camera 121 and the lens 122. While the camera 121 operates, the coordinate projector 123 can project a circular outline onto the detection area (not shown) marking the maximum extent of the field of view, so that the operator can see the system's detection area and, at the same time, the excitation range of the multispectral light source module 110.
As shown in Fig. 1, the head-mounted system support module 130 may include a head-mounted system bracket 131, which carries the light source module 110 and the signal acquisition module 120. Preferably, the head-mounted system support module 130 may also include a voice recognition and control module 132, which may comprise a microphone, a voice recognition unit and a control unit (not shown) so that the operator can control the multispectral light source module 110, the coordinate projector 123 and other modules by voice. The voice recognition and control module 132 can be implemented with voice recognition techniques known in the art.
The visible-light image and the near-infrared fluorescence image of the subject from the optical signal acquisition module 120 are input separately to the image processing module 140. The image processing module 140 is implemented on a back-end computer, which can also provide manual control of acquisition and of the light sources. The image processing module 140 first preprocesses the input near-infrared fluorescence image to obtain the characteristic distribution of the fluorescence image according to its fluorescence specificity. Preprocessing may include noise removal, feature extraction and dead-pixel compensation; preprocessing known in the art may of course also be applied to the visible-light image. Threshold segmentation can be used to extract features from the input near-infrared fluorescence image. For example, for a pixel in the near-infrared fluorescence image whose ratio of gray value G to background-noise gray value Gn exceeds 1.5, the gray value is multiplied by 2; for a pixel whose G/Gn is below 1.5, the gray value is divided by 2. This threshold segmentation enhances the feature points. Regions of interest whose gray value exceeds a preset threshold can be converted into pseudo-color images by a gray-to-pseudo-color mapping algorithm known in the art, further marking the positions of feature points and feature regions so that the operator can be guided by the image. The image produced by the image processing module 140 is the fused image; the general-purpose computer provides display and projection interfaces so that the operator can output and display it. The video signal can also be fed back to the head-mounted system, and the fused image visualized on a screen placed in front of the eyepiece.
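The G/Gn threshold rule above can be sketched in a few lines of NumPy. This is an illustrative version only, not the patent's implementation; the 1.5 threshold and factor of 2 are taken directly from the text, and the background-noise gray value is assumed to be supplied as a scalar estimate.

```python
import numpy as np

def enhance_features(img, noise_gray, ratio_thresh=1.5, factor=2.0):
    """Strengthen fluorescence feature pixels by thresholding G / Gn.

    img        -- near-infrared fluorescence image (2-D gray-value array)
    noise_gray -- background-noise gray value Gn (scalar estimate)
    Pixels with G/Gn above ratio_thresh are multiplied by `factor`;
    the remaining pixels are divided by `factor`.
    """
    img = np.asarray(img, dtype=np.float64)
    mask = img / noise_gray > ratio_thresh
    return np.where(mask, img * factor, img / factor)

# with Gn = 20, the threshold falls at G = 30: 40 and 80 are boosted,
# 10 and 5 are suppressed
frame = np.array([[10.0, 40.0], [5.0, 80.0]])
enhanced = enhance_features(frame, noise_gray=20.0)
```

The effect is a piecewise gain that widens the gap between signal and background, which is what raises the signal-to-background ratio mentioned in the technical effects.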
Then, using the obtained optical-property distribution of the fluorescence image, the fluorescence image is fused with the input visible-light image to produce the fused result image for output. Specifically, fusing the fluorescence image with the visible-light image includes registering the two images using the optical-property distribution of the fluorescence image. This registration operation is described in detail below.
The optical-property distribution of the fluorescence image is fluorescence-specific, whereas the visible-light image is a high-resolution structural image. Image registration according to embodiments of the present invention exploits these properties. During registration, morphological theory can be used to correct the minimized energy function of the fluorescence image's optical-property distribution so that its shape approaches that of the imaged tissue. Registration can be performed using the following Equation (1).
In Equation (1), d is the discrete Laplace operator and U is the position vector; n surface points are selected as principal landmarks, p_i and a_i are landmark points on the imaging surface, and W_i = (p_i - a_i) is the displacement vector. Minimizing E(U) yields the vector U_P, which is the position of the surface after deformation.
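Equation (1) is referenced but not rendered in this text. Given the definitions above (discrete Laplace operator d, position vector U, landmarks p_i with displacement vectors W_i), a standard Laplacian deformation energy is one plausible form of what the equation expresses; the weighting factor λ is an assumption, and this sketch should not be read as the patent's exact formula.

```latex
E(U) \;=\; \lVert d\,U \rVert^{2}
      \;+\; \lambda \sum_{i=1}^{n} \bigl\lVert U(p_i) - W_i \bigr\rVert^{2},
\qquad
U_P \;=\; \arg\min_{U} \, E(U)
```

The first term keeps the deformed surface smooth; the second pulls the n landmark points toward their measured displacements, which matches the stated goal of making the distribution's shape approach the imaged tissue.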
To obtain an accurate, high-resolution fused image, the image coincidence degree shown in Equation (2) below is used as the criterion for evaluating the registration result.
Here A is the normalized gray-value matrix of the visible-light image and B is the normalized gray-value matrix of the fluorescence image. The closer the result is to 1, the better the registration.
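Equation (2) is likewise not rendered in this text. A coincidence measure consistent with the description (normalized gray-value matrices A and B, values approaching 1 for good alignment) can be sketched as a normalized cross-correlation; this is an assumption about the exact form, chosen because it has precisely the stated behavior.

```python
import numpy as np

def coincidence_degree(a, b):
    """Overlap measure between two normalized gray-value matrices.

    Sketch only: the patent states that A and B are normalized gray-value
    matrices and that values near 1 indicate good registration. A normalized
    cross-correlation behaves this way, but the patent's precise formula is
    not reproduced here.
    """
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    num = np.sum(a * b)
    den = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return num / den

identical = coincidence_degree([[0.2, 0.8]], [[0.2, 0.8]])   # exactly 1 for identical inputs
disjoint = coincidence_degree([[1.0, 0.0]], [[0.0, 1.0]])    # 0 for non-overlapping inputs
```

By the Cauchy-Schwarz inequality the value never exceeds 1, so "closer to 1 is better" is well defined for this choice.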
Fig. 3 shows a flowchart of the image processing method according to an embodiment of the present invention. As shown in Fig. 3, in step 301 spatial motion detection is applied to the preprocessed visible-light and fluorescence image sequences to filter out mismatched frames with small displacements, yielding the visible-light image sequence M1 and the fluorescence image sequence M2.
Optionally, in step 303, an image pyramid P1 is built from the high-resolution visible-light image sequence M1 obtained in step 301 to reduce the data volume and improve the real-time performance of the image processing. Specifically, a Gaussian pyramid is used to downsample the images, generating layer i+1 from layer i: layer i is first convolved with a Gaussian kernel, and all even-numbered rows and columns are then deleted, so each new image is one quarter the size of the previous one. To go back up a level, the image is first doubled in each dimension, with the new (even-numbered) rows filled with zeros, and then convolved with the specified filter (in effect a filter doubled in each dimension) to estimate approximate values for the "missing" pixels. Repeating this process on the input image produces the whole pyramid.
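The blur-then-decimate step above can be sketched without an imaging library. This minimal NumPy version uses the common 1-4-6-4-1 pyramid kernel (an assumption; the patent does not name a kernel) and drops every second row and column, the same idea OpenCV's `cv2.pyrDown` implements.

```python
import numpy as np

# 5-tap Gaussian kernel commonly used for image pyramids (assumed here)
KERNEL_1D = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def blur(img):
    """Separable Gaussian smoothing with reflected borders."""
    padded = np.pad(img, 2, mode="reflect")
    # convolve each row, then each column
    tmp = np.apply_along_axis(
        lambda r: np.convolve(r, KERNEL_1D, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, KERNEL_1D, mode="valid"), 0, tmp)

def pyr_down(img):
    """One pyramid level: Gaussian blur, then keep every second row/column."""
    return blur(img)[::2, ::2]

def gaussian_pyramid(img, levels):
    """Return [img, down1, down2, ...] after `levels` reductions."""
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(pyr_down(pyramid[-1]))
    return pyramid

base = np.random.rand(64, 64)
pyr = gaussian_pyramid(base, 3)  # shapes 64x64, 32x32, 16x16, 8x8
```

Each level holds a quarter of the pixels of the level before it, which is exactly the data reduction that step 303 uses to keep the processing real-time.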
In step 305, edge detection is applied to the resulting image pyramid P1 and the fluorescence image sequence M2, for example with a gradient edge detector using the Roberts operator, yielding image edges E1 and E2 respectively. When ample image processing capability is available, step 303 may of course be skipped and edge detection applied directly to the visible-light image sequence M1 and the fluorescence image sequence M2.
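Step 305 names the Roberts operator; a minimal implementation of the diagonal cross-gradient it computes might look like the following (an illustrative sketch, not the patent's code).

```python
import numpy as np

def roberts_edges(img):
    """Gradient magnitude with the Roberts cross operator.

    The two 2x2 kernels respond to the diagonal differences
    g1 = I[i, j] - I[i+1, j+1] and g2 = I[i+1, j] - I[i, j+1];
    the edge strength is sqrt(g1**2 + g2**2).
    """
    img = np.asarray(img, dtype=np.float64)
    g1 = img[:-1, :-1] - img[1:, 1:]
    g2 = img[1:, :-1] - img[:-1, 1:]
    return np.hypot(g1, g2)

# a vertical step edge: the response concentrates along the transition
step = np.zeros((4, 4))
step[:, 2:] = 1.0
edges = roberts_edges(step)
```

The Roberts operator is among the cheapest gradient detectors (two 2x2 kernels), which fits the real-time constraint of this pipeline, at the cost of more noise sensitivity than larger operators such as Sobel.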
In step 307, saliency-based sparse sampling is applied separately to the image edges E1 and E2. The same method can be used for both; here, compressed-sensing sparse sampling is applied to E1 and E2 to sample their coefficients, yielding the sampled outputs S1 and S2 respectively.
In step 308, registration is performed on the sampled outputs S1 and S2 obtained in step 307. Besides registration with Equations (1) and (2) above, point cloud registration can be used to further refine the result. For details of point cloud registration, see Xue Yaohong et al., Research on Point Cloud Data Registration and Surface Subdivision Technology, National Defense Industry Press, 2011, which is not repeated here.
Preferably, the image processing method according to the present invention may further include step 309, in which the convergence of the point cloud registration algorithm is verified to ensure that the computation is stable and reliable.
Preferably, steps 301, 303, 305 and 309 can be executed on a compact image GPU or FPGA, while the more computation-intensive registration step 308 runs on a central processing unit (CPU), further optimizing system performance while reducing the required hardware size.
The specific embodiments described above further explain the purpose, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410433156.9A CN104305957B (en) | 2014-08-28 | 2014-08-28 | Wear-type molecular image navigation system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104305957A CN104305957A (en) | 2015-01-28 |
CN104305957B true CN104305957B (en) | 2016-09-28 |
Family
ID=52361385
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410433156.9A Active CN104305957B (en) | 2014-08-28 | 2014-08-28 | Wear-type molecular image navigation system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104305957B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105342561B (en) * | 2015-10-09 | 2017-12-29 | 中国科学院自动化研究所 | The wearable molecular image navigation system of Wireless sound control |
US10026202B2 (en) | 2015-10-09 | 2018-07-17 | Institute Of Automation, Chinese Academy Of Sciences | Wearable molecular imaging navigation system |
CN105640481B (en) * | 2015-12-31 | 2019-05-14 | 东莞广州中医药大学中医药数理工程研究院 | Orifice observation device with sound control light source and sound control method thereof |
CN106037674B (en) * | 2016-08-18 | 2018-10-30 | Wanjiang Emerging Industry Technology Development Center | Vein imaging system based on hyperspectral imaging |
CN107374730A (en) * | 2017-09-06 | 2017-11-24 | Northeastern University | Optical surgical navigation system |
CN109662695B (en) * | 2019-01-16 | 2024-12-24 | Beijing Digital Precision Medical Technology Co., Ltd. | Fluorescence molecular imaging system, device, method and storage medium |
CN109938700A (en) * | 2019-04-04 | 2019-06-28 | Jinan Xianwei Intelligent Technology Co., Ltd. | Head-mounted infrared fluorescence detection device |
CN110226974B (en) * | 2019-07-08 | 2024-12-06 | University of Science and Technology of China | Near-infrared fluorescence imaging system based on augmented reality |
CN115981001A (en) * | 2022-12-02 | 2023-04-18 | Zhejiang Maishi Medical Technology Co., Ltd. | Head-mounted vision assistance device |
CN118975781A (en) * | 2024-08-12 | 2024-11-19 | Dongguan Dikai Medical Technology Co., Ltd. | Multimodal imaging system and coronary angiography catheter |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1341003A (en) * | 1999-01-26 | 2002-03-20 | Newton Laboratories, Inc. | Autofluorescence imaging system for endoscopy |
JP4971816B2 (en) * | 2007-02-05 | 2012-07-11 | Sanyo Electric Co., Ltd. | Imaging device |
CN101339653B (en) * | 2008-01-30 | 2010-06-02 | Xidian University | Infrared and color visible light image fusion method based on color transfer and entropy information |
CN102722556B (en) * | 2012-05-29 | 2014-10-22 | Tsinghua University | Model comparison method based on similarity measurement |
CN103489005B (en) * | 2013-09-30 | 2017-04-05 | Hohai University | High-resolution satellite image classification method based on multiple-classifier combination |
CN103530038A (en) * | 2013-10-23 | 2014-01-22 | Ye Chenguang | Program control method and device for head-mounted intelligent terminal |
CN203709999U (en) * | 2014-02-07 | 2014-07-16 | Wang Xueqing | Head-worn dual-light-source venipuncture guidance system device |
CN204072055U (en) * | 2014-08-28 | 2015-01-07 | Institute of Automation, Chinese Academy of Sciences | Wear-type molecular image navigation system |
- 2014-08-28: CN application CN201410433156.9A granted as CN104305957B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN104305957A (en) | 2015-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104305957B (en) | Wear-type molecular image navigation system | |
CN101766476B (en) | Autofluorescence molecular imaging system | |
CN104116497B (en) | Endoscopic optical molecular image-guidance system and multispectral imaging method | |
EP4129061A2 (en) | Systems, apparatus and methods for analyzing blood cell dynamics | |
JP2012527629A (en) | System and method for detecting low quality in 3D reconstruction | |
CN106447703A (en) | Near infrared fluorescence and Cherenkov fluorescence fused imaging method and apparatus | |
CN105342561B (en) | Wireless voice-controlled wearable molecular image navigation system | |
Deán-Ben et al. | Fast unmixing of multispectral optoacoustic data with vertex component analysis | |
CN204072055U (en) | Wear-type molecular image navigation system | |
CN114120038B (en) | Parathyroid gland recognition method based on hyperspectral imaging technology and model training | |
CN104323858B (en) | Handheld molecular imaging navigation system | |
CN107485383A (en) | Speckle blood flow imaging method and apparatus based on component analysis | |
CN204120989U (en) | Endoscopic optical molecular image-guidance system | |
ES3015084T3 (en) | Portable anthropometric data acquisition device and method of collecting anthropometric data | |
Wisotzky et al. | Validation of two techniques for intraoperative hyperspectral human tissue determination | |
CN105662354B (en) | A wide-angle optical molecular tomography navigation system and method | |
CN115553686A (en) | Double-mode imaging device and detection system for optical coherence tomography and endoscopy of digestive tract | |
EP3824799A1 (en) | Device, apparatus and method for imaging an object | |
AU2014363329A1 (en) | Medical imaging | |
WO2016061754A1 (en) | Handheld molecular imaging navigation system | |
CN104181142A (en) | Molecular imaging verification system and method | |
Liu et al. | In vivo accurate detection of the liver tumor with pharmacokinetic parametric images from dynamic fluorescence molecular tomography | |
CN107184181A (en) | Processing method and system for dynamic fluorescence molecular tomography | |
CN106308835A (en) | Handheld optical and gamma detector integrated imaging system and method | |
CN216777062U (en) | Rapid imaging system for human skin laser speckle blood flow |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |