CN102814006B - Image collation device, patient positioning device and image collation method
- Publication number: CN102814006B (application CN201210022145.2A)
- Authority: CN (China)
- Prior art keywords: image, pattern matching, posture, region, reference image
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
- Radiation-Therapy Devices (AREA)
Description
Technical Field
The present invention relates to an image collation device and a patient positioning device for use in a radiation therapy apparatus that treats cancer by irradiating an affected part of a patient with radiation such as X-rays, gamma rays, or particle beams. The image collation device uses CT image data and the like, and the patient positioning device uses the image collation device to position the patient at the radiation irradiation position where the radiation is delivered.
Background Art
In recent years, among radiation therapy apparatuses intended for cancer treatment, cancer therapy apparatuses using particle beams such as protons and heavy ions (in particular called particle beam therapy apparatuses) have been developed and constructed. It is well known that, compared with conventional radiation therapy using X-rays or gamma rays, particle beam therapy can concentrate the irradiation on the cancerous affected part; that is, the particle beam can be delivered precisely in accordance with the shape of the affected part, so treatment can be performed without affecting normal cells.
In particle beam therapy, it is important to irradiate the affected part, such as a cancer, with the particle beam with high accuracy. Therefore, when particle beam therapy is performed, the patient is fixed with a fixture or the like so as not to shift relative to the treatment table in the treatment room (irradiation room). To position the affected part such as a cancer in the radiation irradiation range with high accuracy, the patient is first roughly set up using a laser pointer or the like, and then the affected part is precisely positioned using X-ray images or the like.
Patent Document 1 proposes a bed positioning device and a positioning method therefor. In this bed positioning device and positioning method, two-stage pattern matching is performed without specifying the same positions of the same plurality of markers in either the reference image, which is an X-ray fluoroscopic image, or the current image captured by an X-ray receiver, and positioning information for driving the treatment table is generated. In the primary pattern matching, a second set region is set for the two-dimensional current image, the second set region being approximately the same size as a first set region that contains the isocenter (beam irradiation center) set for the two-dimensional reference image. The second set region is moved sequentially within the region of the two-dimensional current image, and at each position of the second set region the two-dimensional reference image in the first set region is compared with the two-dimensional current image in the second set region, and the second set region whose two-dimensional current image is most similar to the two-dimensional reference image in the first set region is extracted. In the secondary pattern matching, the two-dimensional current image in the second set region extracted by the primary pattern matching is compared with the two-dimensional reference image in the first set region, and pattern matching is performed so that the two images coincide most closely.
Prior Art Documents
Patent Documents
Patent Document 1: Japanese Patent No. 3748433 (paragraphs 0007 to 0009, paragraph 0049, Fig. 8, Fig. 9)
Problems to Be Solved by the Invention
Since the shape of the affected part is three-dimensional, when the affected part is positioned to the affected-part position of the treatment plan, using three-dimensional images yields higher positioning accuracy than using two-dimensional images. In general, X-ray CT (Computed Tomography) images are used to determine the three-dimensional shape of the affected part when treatment plan data is created. In recent years, there has been a demand to install an X-ray CT apparatus in the treatment room and to perform positioning using the current X-ray CT image captured by that apparatus at the time of treatment together with the X-ray CT image from treatment planning. Because an X-ray fluoroscopic image does not show the affected part, which is soft tissue, well, position matching is basically performed using bones; X-ray CT images are used for positioning because position matching can be performed between the affected parts shown in the X-ray CT images.
Therefore, extending the reference image and the current image of the conventional two-stage pattern matching to three-dimensional images is considered. The three-dimensional reference image and the three-dimensional current image each consist of a plurality of tomographic images (slice images) captured by an X-ray CT apparatus. The three-dimensional current image is assumed to have a small number of images from the viewpoint of suppressing X-ray exposure, so the three-dimensional reference image, which has dense image information, must be compared with the three-dimensional current image, whose image information is sparser than that of the reference image. The conventional two-stage pattern matching has the following problem: although a two-dimensional reference image and a two-dimensional current image having image information of the same density can be compared, when a three-dimensional reference image and a three-dimensional current image with different image-information densities are compared, two-stage pattern matching cannot be realized merely by raising the image dimension of the prior art from two to three. That is, it is not possible, as in the prior art, to simply perform primary pattern matching from the three-dimensional reference image in the set first set region to the three-dimensional current image in the second set region, and then simply compare the extracted three-dimensional current image in the second set region with the three-dimensional reference image in the first set region, to achieve pattern matching that makes the two three-dimensional images coincide most closely.
Summary of the Invention
An object of the present invention is to realize high-precision two-stage pattern matching (two-stage collation) when positioning a patient for radiation therapy, even when the number of tomographic images in the three-dimensional current image is smaller than that in the three-dimensional reference image.
Means for Solving the Problems
The image collation device according to the present invention includes: a three-dimensional image input unit that reads a three-dimensional reference image captured at the time of treatment planning for radiation therapy and a three-dimensional current image captured at the time of treatment; and a collation processing unit that collates the three-dimensional reference image with the three-dimensional current image and calculates a body position correction amount so that the position and posture of the affected part in the three-dimensional current image coincide with the position and posture of the affected part in the three-dimensional reference image. The collation processing unit includes: a primary collation unit that performs primary pattern matching of the three-dimensional current image against the three-dimensional reference image; and a secondary collation unit that performs secondary pattern matching of a predetermined search target region against a predetermined template region, the predetermined template region being generated from one of the three-dimensional reference image and the three-dimensional current image based on the result of the primary pattern matching, and the predetermined search target region being generated, based on the result of the primary pattern matching, from the other of the three-dimensional reference image and the three-dimensional current image, different from the one from which the predetermined template region was generated.
Effects of the Invention
The image collation device according to the present invention performs primary pattern matching of the three-dimensional current image against the three-dimensional reference image, and then, based on the result of the primary pattern matching, generates the predetermined template region and the predetermined search target region and performs secondary pattern matching of the search target region against the template region. Therefore, high-precision two-stage pattern matching can be realized even when the number of tomographic images in the three-dimensional current image is smaller than that in the three-dimensional reference image.
Brief Description of the Drawings
Fig. 1 is a diagram showing the configuration of the image collation device and the patient positioning device according to Embodiment 1 of the present invention.
Fig. 2 is a diagram showing the overall equipment configuration related to the image collation device and the patient positioning device of the present invention.
Fig. 3 is a diagram showing the three-dimensional reference image and the reference image template region according to Embodiment 1 of the present invention.
Fig. 4 is a diagram showing the three-dimensional current image according to Embodiment 1 of the present invention.
Fig. 5 is a diagram explaining the primary pattern matching method according to Embodiment 1 of the present invention.
Fig. 6 is a diagram explaining the relationship between the reference image template region and the slice images in the primary pattern matching method of Fig. 5.
Fig. 7 is a diagram showing the primary extraction regions of the slice images extracted by the primary pattern matching method according to Embodiment 1 of the present invention.
Fig. 8 is a diagram explaining the secondary pattern matching method according to Embodiment 1 of the present invention.
Fig. 9 is a diagram explaining the relationship between the reference image template region and the slice images in the secondary pattern matching method of Fig. 8.
Fig. 10 is a diagram explaining the primary pattern matching method according to Embodiment 2 of the present invention.
Fig. 11 is a diagram explaining the relationship between the reference image template region and the slice images in the primary pattern matching method of Fig. 10.
Fig. 12 is a diagram showing the three-dimensional reference image after posture transformation according to Embodiment 2 of the present invention.
Fig. 13 is a diagram explaining the secondary pattern matching method according to Embodiment 2 of the present invention.
Fig. 14 is a diagram showing the configuration of the image collation device and the patient positioning device according to Embodiment 3 of the present invention.
Detailed Description of Embodiments
Embodiment 1
Fig. 1 is a diagram showing the configuration of the image collation device and the patient positioning device according to Embodiment 1 of the present invention, and Fig. 2 is a diagram showing the overall equipment configuration related to the image collation device and the patient positioning device of the present invention. In Fig. 2, reference numeral 1 denotes a CT simulator room used for the treatment planning performed before radiation therapy. In this CT simulator room there are a CT gantry 2 and a top plate 3 of a couch for CT imaging; a patient 4 lies on the top plate 3, and CT image data for treatment planning is captured so as to include the affected part 5. On the other hand, reference numeral 6 denotes a treatment room in which radiation therapy is performed. In this treatment room there are a CT gantry 7 and a rotary treatment table 8 with a top plate 9 on its upper part; a patient 10 lies on the top plate 9, and CT image data for positioning is captured so as to include the affected part 11 at the time of treatment.
Here, positioning means calculating the positions of the patient 10 and the affected part 11 at the time of treatment from the CT image data for treatment planning, calculating a body position correction amount so as to match the treatment plan, and performing position matching so that the affected part 11 at the time of treatment reaches the beam irradiation center 12 of the radiation therapy. Position matching is realized by driving and controlling the rotary treatment table 8 with the patient 10 lying on the top plate 9, thereby moving the position of the top plate 9. The rotary treatment table 8 can perform drive correction with six degrees of freedom (translation and rotation), and by rotating the top plate 9 of the rotary treatment table 8 by 180 degrees it can be moved from the CT imaging position (shown by the solid line in Fig. 2) to a treatment position of the irradiation couch 13 where radiation irradiation is performed (shown by the dotted line in Fig. 2). Although Fig. 2 shows the CT imaging position and the treatment position facing each other at 180 degrees, the arrangement is not limited to this, and the positional relationship between the two may be another angle such as 90 degrees.
The CT image data for treatment planning and the CT image data for positioning are transferred to a positioning computer 14. The CT image data for treatment planning becomes the three-dimensional reference image, and the CT image data for positioning becomes the three-dimensional current image. The image collation device 29 and the patient positioning device 30 of the present invention are both related to computer software residing in this positioning computer 14; the image collation device 29 calculates the body position correction amount (translation amount and rotation amount) described above, and the patient positioning device 30 includes the image collation device 29 and additionally has the function of calculating, based on the body position correction amount, parameters for controlling each drive axis of the rotary treatment table 8 (simply called the treatment table 8 as the case may be). The patient positioning device 30 controls the treatment table 8 according to the matching result (collation result) obtained by the image collation device 29, and thereby guides the affected part targeted by the particle beam therapy so that it is located at the beam irradiation center 12 of the treatment apparatus.
In conventional positioning in radiation therapy, the positional deviation is calculated by collating a DRR (Digitally Reconstructed Radiography) image generated from the CT image data for treatment planning, or an X-ray fluoroscopic image captured at the same time, with an X-ray fluoroscopic image captured in the treatment room at the time of treatment. Since the X-ray fluoroscopic image does not show the affected part, which is soft tissue, well, position matching using bones is basically performed. The positioning using CT image data described in this embodiment is characterized in that the CT gantry 7 is installed in the treatment room 6, and because position matching is performed using the CT image data captured immediately before treatment and the CT image data for treatment planning, the affected part can be depicted directly and position matching of the affected part itself can be performed.
Next, the procedure for calculating the above body position correction amount with the image collation device 29 and the patient positioning device 30 of this embodiment will be described. Fig. 1 shows the relationship between the data processing units constituting the image collation device and the patient positioning device. Here, the image collation device 29 includes: a three-dimensional image input unit 21 that reads CT image data; a collation processing unit 22; a collation result display unit 23; and a collation result output unit 24. A device obtained by adding a treatment table control parameter calculation unit 26 to the image collation device 29 is the patient positioning device 30.
As described above, the three-dimensional reference image is data captured for treatment planning, and is characterized in that affected-part information (the shape of the affected part, etc.) indicating the affected part to be treated with the particle beam is input manually. The three-dimensional current image is data captured for patient positioning at the time of treatment, and is characterized in that the number of tomographic images (also called slice images) is small from the viewpoint of suppressing X-ray exposure.
In the present invention, a configuration that performs two-stage pattern matching is adopted: primary pattern matching of the three-dimensional current image is performed against the three-dimensional reference image; then, based on the result of the primary pattern matching, a predetermined template region and a predetermined search target region are generated, and secondary pattern matching is performed with the predetermined template region in the same direction or in the reverse direction. In the two-stage pattern matching, high-speed, high-precision processing can be realized by making the matching parameters of the primary pattern matching differ from those of the secondary pattern matching. For example, there is a method of performing the primary pattern matching at low resolution over a wide range and then, using the template region or search target region that was found, performing the secondary pattern matching at high resolution over the narrowed-down range, as sketched below.
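The coarse-to-fine idea can be illustrated with a minimal sketch, assuming the images are grayscale numpy arrays and using a plain normalized cross-correlation as the similarity measure; the function names, step sizes, and margins below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation of two equally shaped arrays
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0.0 else float((a * b).sum() / denom)

def scan_template(image, template, step=1, window=None):
    # raster-scan 'template' over 'image' (optionally restricted to 'window' =
    # (y0, x0, y1, x1) of allowed top-left corners) and return the best offset
    th, tw = template.shape
    y0, x0, y1, x1 = window if window else (0, 0, image.shape[0] - th, image.shape[1] - tw)
    best_score, best_pos = -1.0, (y0, x0)
    for y in range(y0, y1 + 1, step):
        for x in range(x0, x1 + 1, step):
            score = ncc(image[y:y + th, x:x + tw], template)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

def two_stage_match(image, template, coarse_step=8, margin=8):
    # stage 1: coarse scan over the whole slice (wide range, low effective resolution)
    (cy, cx), _ = scan_template(image, template, step=coarse_step)
    # stage 2: fine scan restricted to a narrow window around the coarse result
    th, tw = template.shape
    window = (max(cy - margin, 0), max(cx - margin, 0),
              min(cy + margin, image.shape[0] - th),
              min(cx + margin, image.shape[1] - tw))
    return scan_template(image, template, step=1, window=window)
```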
The three-dimensional image input unit 21 will be described. The three-dimensional image input unit 21 reads an image group consisting of a plurality of tomographic images captured by an X-ray CT apparatus, that is, image data in DICOM (Digital Imaging and Communications in Medicine) format (a slice image group), as three-dimensional volume data. The CT image data for treatment planning is the three-dimensional volume data at the time of treatment planning, that is, the three-dimensional reference image. The CT image data for positioning is the three-dimensional volume data at the time of treatment, that is, the three-dimensional current image. The CT image data is not limited to the DICOM format and may be data in other formats.
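As a hedged sketch of how such a DICOM slice image group could be stacked into three-dimensional volume data, the following assumes the pydicom library and a directory of .dcm files; the sorting key, data type, and directory layout are simplifying assumptions.

```python
import glob
import numpy as np
import pydicom  # assumed available; any DICOM reader exposing equivalent fields would do

def load_ct_volume(directory):
    """Read a CT slice image group and stack it into a 3-D volume.
    Slices are ordered by their z position (ImagePositionPatient[2])."""
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{directory}/*.dcm")]
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    z_positions = np.array([float(ds.ImagePositionPatient[2]) for ds in slices])
    return volume, z_positions  # volume shape: (num_slices, rows, cols)
```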
The collation processing unit 22 collates (pattern-matches) the three-dimensional reference image with the three-dimensional current image and calculates the body position correction amount so that the position and posture of the affected part in the three-dimensional current image coincide with the position and posture of the affected part in the three-dimensional reference image. The collation result display unit 23 displays the result of the collation by the collation processing unit 22 (the body position correction amount described below, or an image in which the three-dimensional current image moved by that body position correction amount is superimposed on the three-dimensional reference image, and the like) on the monitor screen of the positioning computer 14. The collation result output unit 24 outputs the correction amount obtained when the collation processing unit 22 collates the three-dimensional reference image with the three-dimensional current image, that is, the body position correction amount (translation amount and rotation amount) calculated by the collation processing unit 22. The treatment table control parameter calculation unit 26 converts the output values of the collation result output unit 24 (three translation axes [ΔX, ΔY, ΔZ] and three rotation axes [ΔA, ΔB, ΔC], six degrees of freedom in total) into parameters for controlling each axis of the treatment table 8, that is, it calculates those parameters. The treatment table 8 drives the drive devices of its axes based on the treatment table control parameters calculated by the treatment table control parameter calculation unit 26. In this way, the body position correction amount can be calculated so as to match the treatment plan, and position matching can be performed so that the affected part 11 at the time of treatment reaches the beam irradiation center 12 of the radiation therapy.
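The conversion of the six-degree-of-freedom correction into a single rigid transform can be sketched as below; the rotation order and units are assumptions, since the actual mapping onto the drive axes depends on the treatment table's own kinematics, which the patent does not spell out.

```python
import numpy as np

def correction_to_matrix(dx, dy, dz, da, db, dc):
    """Build a 4x4 homogeneous transform from the 6-DOF correction:
    translation [dx, dy, dz] (mm) and rotation [da, db, dc] (degrees) about X, Y, Z.
    The rotation order Z*Y*X is an illustrative assumption."""
    ax, ay, az = np.deg2rad([da, db, dc])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    t = np.eye(4)
    t[:3, :3] = rz @ ry @ rx
    t[:3, 3] = [dx, dy, dz]
    return t
```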
The collation processing unit 22 includes a position/posture transformation unit 25, a primary collation unit 16, a secondary collation unit 17, and a reference template region generation unit 18. When the primary pattern matching or the secondary pattern matching is performed, the position/posture transformation unit 25 changes the position and posture of the target data. The primary collation unit 16 performs primary pattern matching of the three-dimensional current image against the three-dimensional reference image. The secondary collation unit 17 performs secondary pattern matching of a predetermined search target region against a predetermined template region, the predetermined template region being generated from one of the three-dimensional reference image and the three-dimensional current image based on the result of the primary pattern matching, and the predetermined search target region being generated, based on the result of the primary pattern matching, from the other of the three-dimensional reference image and the three-dimensional current image, different from the one from which the predetermined template region was generated.
The collation processing unit 22 will be described in detail with reference to Figs. 3 to 9. Fig. 3 is a diagram showing the three-dimensional reference image and the reference image template region according to Embodiment 1 of the present invention. Fig. 4 is a diagram showing the three-dimensional current image according to Embodiment 1 of the present invention. Fig. 5 is a diagram explaining the primary pattern matching method according to Embodiment 1 of the present invention. Fig. 6 is a diagram explaining the relationship between the reference image template region and the slice images in the primary pattern matching method of Fig. 5. Fig. 7 is a diagram showing the primary extraction regions of the slice images extracted by the primary pattern matching method according to Embodiment 1 of the present invention. Fig. 8 is a diagram explaining the secondary pattern matching method according to Embodiment 1 of the present invention. Fig. 9 is a diagram explaining the relationship between the reference image template region and the slice images in the secondary pattern matching method of Fig. 8.
The reference template region generation unit 18 of the collation processing unit 22 generates a reference image template region 33 from the three-dimensional reference image 31 using the affected-part shape (affected-part information) input at the time of treatment planning. The three-dimensional reference image 31 consists of a plurality of slice images 32. For convenience, Fig. 3 shows an example consisting of five slice images 32a, 32b, 32c, 32d, and 32e. The affected-part shape is input as an ROI (Region of Interest) 35, that is, as a closed contour surrounding the affected part in each slice image. The region containing this closed contour can be taken, for example, as a circumscribed rectangle 34, and the rectangular parallelepiped region containing all the circumscribed rectangles 34 can be taken as the template region. This template region is used as the reference image template region 33. The primary collation unit 16 of the collation processing unit 22 performs primary pattern matching to match the reference image template region 33 to the three-dimensional current image 36. The three-dimensional current image 36 shown in Fig. 4 is an example consisting of three slice images 37a, 37b, and 37c. The current image region 38 shown in Fig. 5 is represented as a rectangular parallelepiped containing the three slice images 37a, 37b, and 37c. As shown in Fig. 5, the reference image template region 33 (33a, 33b, 33c) is moved in a raster-scan manner within the current image region 38, and correlation values with the three-dimensional current image 36 are calculated. As the correlation value, any correlation value used in image matching (image collation), such as the normalized cross-correlation value, can be used.
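A minimal sketch of constructing the circumscribing cuboid from the per-slice ROI contours might look like the following; the contour data structure, integer rounding, and axis ordering are assumptions made for illustration.

```python
import numpy as np

def template_region_from_roi(roi_contours):
    """roi_contours: {slice_index: sequence of (x, y) points of the closed contour
    surrounding the affected part on that slice}.
    Returns the cuboid (slice range plus rectangle) circumscribing every contour,
    i.e. the reference image template region."""
    slice_indices = sorted(roi_contours)
    xs = np.concatenate([np.asarray(c)[:, 0] for c in roi_contours.values()])
    ys = np.concatenate([np.asarray(c)[:, 1] for c in roi_contours.values()])
    return {
        "slice_min": slice_indices[0], "slice_max": slice_indices[-1],
        "x_min": int(np.floor(xs.min())), "x_max": int(np.ceil(xs.max())),
        "y_min": int(np.floor(ys.min())), "y_max": int(np.ceil(ys.max())),
    }

def crop_template(volume, region):
    # cut the template sub-volume out of the 3-D reference image (z, y, x order)
    return volume[region["slice_min"]:region["slice_max"] + 1,
                  region["y_min"]:region["y_max"] + 1,
                  region["x_min"]:region["x_max"] + 1]
```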
The reference image template region 33a moves within the slice image 37a in a raster-scan manner along a scan path 39a. Similarly, the reference image template region 33b moves within the slice image 37b in a raster-scan manner along a scan path 39b, and the reference image template region 33c moves within the slice image 37c in a raster-scan manner along a scan path 39c. To keep the drawing simple, the scan paths 39b and 39c are shown only schematically.
When the primary pattern matching is performed, as shown in Fig. 6, each slice image 53 constituting the reference image template region 33 is collated with the slice images 37 constituting the current image region 38. A slice image 53 is the portion of a slice image 32 of the three-dimensional reference image 31 delimited by the reference image template region 33. The reference image template region 33 consists of five slice images corresponding to the five slice images 32a, 32b, 32c, 32d, and 32e of the three-dimensional reference image. Therefore, in the primary pattern matching, the slice image 37a of the three-dimensional current image 36 is collated with each of the five slice images of the reference image template region 33. The slice images 37b and 37c of the three-dimensional current image 36 are collated in the same way.
The primary collation unit 16 extracts a primary extraction region 43 from each slice image 37 of the three-dimensional current image 36 so that it contains the region where the correlation value between the current image region 38 and the reference image template region 33 is highest. As shown in Fig. 7, a primary extraction region 43a is extracted from the slice image 37a of the three-dimensional current image 36. Similarly, primary extraction regions 43b and 43c are extracted from the slice images 37b and 37c of the three-dimensional current image 36. A primary extraction current image region 42, which serves as the search target region for the secondary pattern matching, is generated so as to contain the primary extraction regions 43a, 43b, and 43c. In this way, the primary collation unit 16 generates the primary extraction current image region 42 as the search target region for the secondary pattern matching.
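A sketch of this per-slice primary matching and of how a primary extraction region could be cut out around the best hit is given below, assuming 2-D numpy slices; the margin used to enlarge the best-match window is an illustrative assumption.

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation of two equally shaped arrays
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if d == 0.0 else float((a * b).sum() / d)

def primary_match_slice(current_slice, template_slices, margin=10):
    """Raster-scan every slice of the reference image template region over one
    slice of the 3-D current image, keep the position with the highest correlation
    value, and return a slightly enlarged window around it: the primary extraction
    region used as part of the search target of the secondary pattern matching."""
    best_score, best_box = -1.0, None
    for tmpl in template_slices:
        th, tw = tmpl.shape
        for y in range(current_slice.shape[0] - th + 1):
            for x in range(current_slice.shape[1] - tw + 1):
                score = ncc(current_slice[y:y + th, x:x + tw], tmpl)
                if score > best_score:
                    best_score, best_box = score, (y, x, y + th, x + tw)
    y0, x0, y1, x1 = best_box
    return (max(y0 - margin, 0), max(x0 - margin, 0),
            min(y1 + margin, current_slice.shape[0]),
            min(x1 + margin, current_slice.shape[1]))
```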
Here, in the state before positioning, the postures (three rotation axes) of the three-dimensional reference image 31 and the three-dimensional current image 36 do not coincide, so with a simple raster scan as in Fig. 5, when the three-dimensional current image 36 has only a few slices, high-precision matching that also detects angular deviation cannot be performed; extracting the primary extraction regions 43 used for the secondary pattern matching, however, is not a problem. Therefore, in the primary pattern matching, the correlation values are calculated without detecting angular deviation, and in the subsequent secondary pattern matching, high-precision matching that also detects angular deviation is performed.
The secondary pattern matching will now be described. In the secondary pattern matching, the position/posture transformation unit 25 of the collation processing unit 22 generates a position/posture-transformed template region 40 obtained by transforming the position and posture of the reference image template region 33 generated from the three-dimensional reference image 31. In the secondary pattern matching, as shown in Figs. 8 and 9, the amount of posture change (three rotation axes) of the reference image template region 33 is added as a parameter when performing the matching. The secondary collation unit 17 performs high-precision matching that also includes angular deviation between the position/posture-transformed template region 40, transformed by the position/posture transformation unit 25, and the primary extraction current image region 42 of the three-dimensional current image 36, which has few slice images. In this way, high-precision two-stage pattern matching that also includes angular deviation can be realized. By taking as the search range of the secondary pattern matching the narrow range containing the region found by the primary pattern matching, the secondary pattern matching can be performed at high resolution using the primary extraction current image region 42, which contains the primary extraction regions 43 found by performing the primary pattern matching at low resolution over a wide range, and the time required for pattern matching can be shortened.
The primary extraction current image region 42 shown in Fig. 8 is represented as a rectangular parallelepiped containing the three primary extraction regions 43a, 43b, and 43c. A position/posture-transformed template region 40a, which is the reference image template region after its position and posture have been transformed, moves within the primary extraction region 43a of the slice image 37a in a raster-scan manner along the scan path 39a. Similarly, the position/posture-transformed template region 40b moves within the primary extraction region 43b of the slice image 37b in a raster-scan manner along the scan path 39b, and the position/posture-transformed template region 40c moves within the primary extraction region 43c of the slice image 37c in a raster-scan manner along the scan path 39c. To keep the drawing simple, the scan paths 39b and 39c are shown only schematically.
When the secondary pattern matching is performed, as shown in Fig. 9, the secondary collation unit 17 performs image collation between cross-sections 41 of the position/posture-transformed template region 40 and the primary extraction regions 43 of the slice images 37 constituting the primary extraction current image region 42. Alternatively, image collation may be performed between a cross-section 41 and a slice image 55, which is the portion of a slice image 37 of the three-dimensional current image 36 delimited by the primary extraction current image region 42. The cross-sections 41 of the position/posture-transformed template region 40 are generated from the plurality of slice images 32 of the three-dimensional reference image 31. For example, the data of a cross-section 41 is sampled from the plurality of slice images 32 constituting the three-dimensional reference image 31. In general, the data density of the cross-sections 41 of the position/posture-transformed template region 40 differs from the data density of the primary extraction regions 43 of the three-dimensional current image 36, but it suffices to calculate the correlation value per pixel of the cross-section 41. The cross-sections 41 of the position/posture-transformed template region 40 may also contain data interpolated so that their data density matches the data density of the primary extraction regions 43 of the three-dimensional current image 36.
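One way such a cross-section could be sampled from the dense reference volume is sketched below, assuming scipy's map_coordinates for linear interpolation, isotropic voxels, and a rotation about a given center; all of these are simplifying assumptions rather than the patent's own formulation.

```python
import numpy as np
from scipy.ndimage import map_coordinates  # assumed available for interpolation

def template_cross_section(reference_volume, rotation, center, plane_z, out_shape):
    """Sample one cross-section of the position/posture-transformed template region.
    The section lies in the plane z = plane_z of the current image; each of its
    pixels is mapped back through the inverse rotation about 'center' into the
    dense reference volume (axis order z, y, x) and linearly interpolated there.
    Anisotropic voxel spacing is ignored in this sketch."""
    rows, cols = out_shape
    ys, xs = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    pts = np.stack([np.full(ys.shape, float(plane_z)), ys, xs]).reshape(3, -1).astype(float)
    center = np.asarray(center, dtype=float).reshape(3, 1)
    src = np.linalg.inv(rotation) @ (pts - center) + center  # rotate the sampling grid back
    section = map_coordinates(reference_volume, src, order=1, mode="nearest")
    return section.reshape(rows, cols)
```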
Here, the two-stage pattern matching method of Embodiment 1 is summarized. First, the reference template region generation unit 18 of the collation processing unit 22 generates the reference image template region 33 from the three-dimensional reference image 31 (reference image template region generation step). The primary collation unit 16 performs primary pattern matching of the three-dimensional current image 36 against the reference image template region 33 (primary pattern matching step). In the primary pattern matching, each slice image 53 constituting the reference image template region 33 is collated with the slice images 37 constituting the current image region 38. Each time the reference image template region 33 is scanned, the primary collation unit 16 calculates the correlation value between the current image region 38 and the reference image template region 33 (correlation value calculation step), and, through the primary pattern matching, extracts the primary extraction regions 43 so that they contain the region where the correlation value between the current image region 38 and the reference image template region 33 is highest (primary extraction region extraction step). The primary collation unit 16 generates the primary extraction current image region 42, the search target region for the secondary pattern matching, so that it contains the primary extraction region 43 of each slice image 37 constituting the current image region 38 (search target generation step). The two-stage pattern matching method of Embodiment 1 includes the reference image template region generation step, the primary pattern matching step, and the secondary pattern matching step described below. The primary pattern matching step includes the correlation value calculation step, the primary extraction region extraction step, and the search target generation step.
Next, the secondary collation unit 17 of the collation processing unit 22 performs secondary pattern matching of the primary extraction current image region 42 of the three-dimensional current image 36 against the position/posture-transformed template region 40 obtained by having the position/posture transformation unit 25 transform the position and posture of the reference image template region 33 (secondary pattern matching step). The secondary pattern matching generates a plurality of cross-sections 41 of the position/posture-transformed template region 40 transformed into a predetermined position and posture (cross-section generation step), and, for each cross-section 41, image collation is performed between that cross-section 41 and the primary extraction region 43, or the slice image 55, of the slice images 37 constituting the primary extraction current image region 42. Each time the position/posture-transformed template region 40 is scanned, the secondary collation unit 17 calculates the correlation values between the primary extraction current image region 42 and the plurality of cross-sections 41 of the position/posture-transformed template region 40 (correlation value calculation step). Furthermore, the position/posture transformation unit 25 performs transformation into a position and posture different from the previous one (position/posture transformation step), and the secondary collation unit 17 generates the plurality of cross-sections 41 of the position/posture-transformed template region 40 in that position and posture (cross-section generation step) and, each time the position/posture-transformed template region 40 is scanned, calculates the correlation values between the primary extraction current image region 42 and the plurality of cross-sections 41 (correlation value calculation step). The secondary collation unit 17 of the collation processing unit 22 selects, as the optimal solution, the position/posture relationship (position and posture information) between the three-dimensional reference image and the three-dimensional current image that gives the highest of the calculated correlation values (optimal solution selection step). Pattern matching is thereby achieved so that the two three-dimensional images, the three-dimensional reference image and the three-dimensional current image, coincide most closely. The secondary pattern matching step includes the cross-section generation step, the correlation value calculation step, the position/posture transformation step, and the optimal solution selection step.
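Putting these steps together, a candidate-pose search could be sketched as follows. It reuses a plain normalized cross-correlation and assumes a make_sections callable (for example built from the cross-section sampler above) that returns one template cross-section per current slice, shaped like the corresponding primary extraction region; the angle/shift grids and the summed-correlation score are illustrative assumptions.

```python
import itertools
import numpy as np

def ncc(a, b):
    # normalized cross-correlation of two equally shaped arrays
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if d == 0.0 else float((a * b).sum() / d)

def rotation_matrix(ax_deg, ay_deg, az_deg):
    # rotation about X, then Y, then Z (the order is an assumption)
    ax, ay, az = np.deg2rad([ax_deg, ay_deg, az_deg])
    rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return rz @ ry @ rx

def secondary_match(extraction_regions, make_sections, angle_candidates, shift_candidates):
    """extraction_regions: list of 2-D primary extraction regions, one per current slice.
    make_sections(rotation, shift): returns the template cross-sections for that
    candidate pose, one per current slice, shaped like the corresponding region.
    Returns the (angles, shift) pair giving the highest summed correlation value."""
    best_score, best_pose = -np.inf, None
    for angles, shift in itertools.product(angle_candidates, shift_candidates):
        sections = make_sections(rotation_matrix(*angles), shift)
        score = sum(ncc(region, section)
                    for region, section in zip(extraction_regions, sections))
        if score > best_score:
            best_score, best_pose = score, (angles, shift)
    return best_pose, best_score
```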
After the pattern matching is completed, the collation processing unit 22 calculates, from the position and posture of the position/posture-transformed template region 40 whose correlation value is the highest among the calculated correlation values, the body position correction amount (translation amount and rotation amount) obtained when the three-dimensional reference image 31 is collated with the three-dimensional current image 36 (body position correction amount calculation step). The collation result display unit 23 displays on the monitor screen of the computer 14 the body position correction amount, an image in which the three-dimensional current image moved by that body position correction amount is superimposed on the three-dimensional reference image, and the like. The collation result output unit 24 outputs the body position correction amount (translation amount and rotation amount) obtained when the collation processing unit 22 collates the three-dimensional reference image 31 with the three-dimensional current image 36 (body position correction amount output step). The treatment table control parameter calculation unit 26 converts the output values of the collation result output unit 24 (three translation axes [ΔX, ΔY, ΔZ] and three rotation axes [ΔA, ΔB, ΔC], six degrees of freedom in total) into parameters for controlling each axis of the treatment table 8, that is, it calculates those parameters (treatment table control parameter calculation step). The treatment table 8 drives the drive devices of its axes based on the treatment table control parameters calculated by the treatment table control parameter calculation unit 26 (treatment table drive step).
The image collation device 29 according to Embodiment 1 performs primary pattern matching from the three-dimensional reference image 31 to the three-dimensional current image 36 and then, based on the result of the primary pattern matching, generates from the three-dimensional reference image 31 the position/posture-transformed template region 40 as the predetermined template region for the secondary pattern matching and generates from the three-dimensional current image 36 the primary extraction current image region 42, containing the primary extraction regions 43, as the predetermined search target region for the secondary pattern matching. Therefore, even when the number of tomographic images (slice images) of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, high-precision two-stage pattern matching can be realized.
Since the image collation device 29 according to Embodiment 1 can realize high-precision two-stage pattern matching even when the number of tomographic images (slice images) of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, the number of tomographic images of the three-dimensional current image 36 acquired by the X-ray CT apparatus for position matching can be reduced, and the patient's radiation exposure from the X-ray CT apparatus during position matching can be reduced.
The image collation device 29 according to Embodiment 1 generates the primary extraction current image region 42 based on the result of the primary pattern matching from the three-dimensional reference image 31 to the three-dimensional current image 36, and by taking as the search target the primary extraction current image region 42, whose area is narrower than the current image region 38, high-resolution secondary pattern matching can be performed using the primary extraction current image region 42 containing the primary extraction regions 43, which were found by performing the primary pattern matching at low resolution over a wide range, and the time required for pattern matching can be shortened.
The patient positioning device 30 according to Embodiment 1 can match the position and posture at the time of treatment planning based on the body position correction amount calculated by the image collation device 29. Since the position and posture at the time of treatment planning can be matched, position matching can be performed so that the affected part 11 at the time of treatment reaches the beam irradiation center 12 of the radiation therapy.
The patient positioning device 30 according to Embodiment 1 can use the position/posture transformation unit 25 to generate the position/posture-transformed template region 40, which is suitable for matching from the reference image template region 33 obtained from the three-dimensional reference image 31 to the three-dimensional current image 36, whose number of tomographic images (slice images) is smaller than that of the three-dimensional reference image 31, and high-precision two-stage pattern matching that also includes angular deviation can be realized.
The image collation device 29 according to Embodiment 1 includes: the three-dimensional image input unit 21, which reads the three-dimensional reference image 31 captured at the time of treatment planning for radiation therapy and the three-dimensional current image 36 captured at the time of treatment; and the collation processing unit 22, which collates the three-dimensional reference image 31 with the three-dimensional current image 36 and calculates the body position correction amount so that the position and posture of the affected part in the three-dimensional current image 36 coincide with the position and posture of the affected part in the three-dimensional reference image 31. The collation processing unit 22 includes: the primary collation unit 16, which performs primary pattern matching of the three-dimensional current image 36 against the three-dimensional reference image 31; and the secondary collation unit 17, which performs secondary pattern matching of the predetermined search target region 42 against the predetermined template region (the position/posture-transformed template region 40), the predetermined template region being generated from one of the three-dimensional reference image 31 and the three-dimensional current image 36 based on the result of the primary pattern matching, and the predetermined search target region 42 being generated, based on the result of the primary pattern matching, from the other of the three-dimensional reference image 31 and the three-dimensional current image 36, different from the one from which the predetermined template region (the position/posture-transformed template region 40) was generated. Therefore, even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, high-precision two-stage pattern matching can be realized.
The patient positioning device 30 according to Embodiment 1 includes: the image collation device 29; and the treatment table control parameter calculation unit 26, which controls each axis of the treatment table 8 based on the body position correction amount calculated by the image collation device 29. The image collation device 29 includes: the three-dimensional image input unit 21, which reads the three-dimensional reference image 31 captured at the time of treatment planning for radiation therapy and the three-dimensional current image 36 captured at the time of treatment; and the collation processing unit 22, which collates the three-dimensional reference image 31 with the three-dimensional current image 36 and calculates the body position correction amount so that the position and posture of the affected part in the three-dimensional current image 36 coincide with the position and posture of the affected part in the three-dimensional reference image 31. The collation processing unit 22 includes: the primary collation unit 16, which performs primary pattern matching of the three-dimensional current image 36 against the three-dimensional reference image 31; and the secondary collation unit 17, which performs secondary pattern matching of the predetermined search target region 42 against the predetermined template region (the position/posture-transformed template region 40), the predetermined template region being generated from one of the three-dimensional reference image 31 and the three-dimensional current image 36 based on the result of the primary pattern matching, and the predetermined search target region 42 being generated, based on the result of the primary pattern matching, from the other of the three-dimensional reference image 31 and the three-dimensional current image 36, different from the one from which the predetermined template region (the position/posture-transformed template region 40) was generated. Therefore, even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, high-precision positioning can be performed.
Embodiment 1 also relates to an image collation method that collates the three-dimensional reference image 31 captured at the time of treatment planning for radiation therapy with the three-dimensional current image 36 captured at the time of treatment. The image collation method includes: a primary pattern matching step of performing primary pattern matching of the three-dimensional current image 36 against the three-dimensional reference image 31; and a secondary pattern matching step of performing secondary pattern matching of the predetermined search target region 42 against the predetermined template region (the position/posture-transformed template region 40), the predetermined template region being generated from one of the three-dimensional reference image 31 and the three-dimensional current image 36 based on the result of the primary pattern matching, and the predetermined search target region 42 being generated, based on the result of the primary pattern matching, from the other of the three-dimensional reference image 31 and the three-dimensional current image 36, different from the one from which the predetermined template region (the position/posture-transformed template region 40) was generated. Therefore, even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, high-precision two-stage pattern matching can be realized.
Embodiment 2
In the two-stage pattern matching of Embodiment 2, primary pattern matching from the three-dimensional reference image 31 to the three-dimensional current image 36 is performed; then, based on the result of the primary pattern matching, a current image template region 44 is generated from the three-dimensional current image 36 as the predetermined template region for the secondary pattern matching, a posture-transformed reference image region 47 obtained by transforming the position and posture of the three-dimensional reference image 31 is taken as the search target, and secondary pattern matching of the posture-transformed reference image region 47 is performed against the current image template region 44. The secondary pattern matching is pattern matching in the direction opposite to the primary pattern matching.
FIG. 10 is a diagram illustrating the primary pattern matching method according to Embodiment 2 of the present invention, and FIG. 11 is a diagram illustrating the relationship between the reference image template region and the slice images in the primary pattern matching method of FIG. 10. In Embodiment 2, in the primary pattern matching, the primary collation unit 16 performs a search that also covers the three rotation axes and obtains the posture change amount.
The current image region 38 shown in FIG. 10 is represented as a rectangular parallelepiped containing three slice images 37a, 37b, and 37c. The position-posture conversion template regions 40a, 40b, and 40c, which serve as the reference image template regions of Embodiment 2, are regions whose position and posture have been converted by the position-posture conversion unit 25. The initial position and posture are in the default state; for example, the parameters of the three rotation axes are 0. The position-posture conversion template region 40a, i.e. the reference image template region after position-posture conversion, moves within the slice image 37a along the scan path 39a in a raster-scan manner. Similarly, the position-posture conversion template region 40b moves within the slice image 37b along the scan path 39b in a raster-scan manner, and the position-posture conversion template region 40c moves within the slice image 37c along the scan path 39c in a raster-scan manner. The scan paths 39b and 39c are drawn only schematically to keep the figure simple.
While converting the position and posture, correlation calculations are performed between the slice images 37a, 37b, and 37c of the 3D current image 36 and the position-posture conversion template region 40. For example, each of the three rotation axes is varied by a predetermined change amount or change rate and the correlation is calculated; the template then moves to the next scan position and the correlation is calculated again. As shown in FIG. 11, the primary collation unit 16 performs image collation between a cross section 41 of the position-posture conversion template region 40 and the slice images 37 constituting the current image region 38. The cross section 41 of the position-posture conversion template region 40 is a plane obtained by cutting the position-posture conversion template region 40 along a plane parallel to the slice images 32 of the 3D reference image 31 in its initial position and posture, and it is generated from the plurality of slice images 32 of the 3D reference image 31 (cross-section generation step). For example, the method described in Embodiment 1 can be used; that is, the data of the cross section 41 can be extracted from the plurality of slice images 32 constituting the 3D reference image 31. In addition, the cross section 41 of the position-posture conversion template region 40 may include data interpolated so that its data density is the same as that of the 3D current image 36.
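As an illustration of this cross-section generation step, the sketch below resamples a cut plane of the template region from a stack of reference slice images by trilinear interpolation, so that the sampled plane can be given the same pixel spacing (data density) as the current image. It is only one conceivable realization of the step, not the patented implementation; the array layout, the plane parameterization, and the use of SciPy's `map_coordinates` are assumptions made for this example.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_cross_section(reference_volume, origin, u_axis, v_axis,
                         size_uv, spacing):
    """Resample a planar cross section from a 3D reference image.

    reference_volume : (nz, ny, nx) stack of reference slice images
    origin           : 3D point (z, y, x) where the plane starts, in voxels
    u_axis, v_axis   : unit vectors (z, y, x) spanning the cut plane
    size_uv          : (rows, cols) of the output section
    spacing          : step in voxels between output samples, chosen to
                       match the data density of the current image
    """
    rows, cols = size_uv
    u = np.arange(rows)[:, None] * spacing
    v = np.arange(cols)[None, :] * spacing
    # 3D coordinates of every sample point on the cut plane
    coords = (np.asarray(origin)[:, None, None]
              + np.asarray(u_axis)[:, None, None] * u
              + np.asarray(v_axis)[:, None, None] * v)
    # Trilinear interpolation between the reference slices fills in the
    # data so that the section's density matches the current image.
    return map_coordinates(reference_volume, coords, order=1, mode='nearest')

# Example: a plane tilted about one rotation axis by 5 degrees
volume = np.random.rand(20, 64, 64)           # stand-in for slice images 32
theta = np.deg2rad(5.0)
u_axis = (np.sin(theta), np.cos(theta), 0.0)  # tilted row direction
v_axis = (0.0, 0.0, 1.0)                      # column direction
section = sample_cross_section(volume, origin=(10.0, 10.0, 10.0),
                               u_axis=u_axis, v_axis=v_axis,
                               size_uv=(32, 32), spacing=1.0)
print(section.shape)   # (32, 32)
```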
Next, the primary collation unit 16 generates the current image template region 44 used for the secondary matching. For example, based on the results of the search including the three rotation axes performed for each of the slice images 37a, 37b, and 37c, the primary collation unit 16 obtains the cross section 41 of the position-posture conversion template region 40 with the highest correlation value, the posture change amount of the position-posture conversion template region 40 at that time, and the extraction region of the slice image 37 corresponding to that cross section 41. From the extraction regions obtained for the individual slice images, the primary collation unit 16 generates the current image template region 44 so that it contains the extraction region of the 3D current image with the highest correlation value. The current image template region 44 is a two-dimensional image.
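The search for the best extraction region can be sketched as follows: each slice of the current image is scanned in a raster pattern with candidate cross sections of the template (one per trial posture), the correlation is evaluated at every position, and the highest-scoring window of the current image is cropped out as the 2D current image template region. This is a simplified sketch under several assumptions (zero-mean normalized cross-correlation as the correlation value, a brute-force scan, pre-computed candidate sections); it is not the patented implementation.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of equal-sized 2D patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_extraction_region(current_slices, candidate_sections):
    """Return the best-matching window of the current image.

    current_slices     : list of 2D slice images 37a, 37b, ...
    candidate_sections : dict mapping a trial posture (e.g. rotation angles)
                         to the corresponding template cross section 41
    """
    best = {'score': -np.inf}
    for slice_index, slice_img in enumerate(current_slices):
        for posture, section in candidate_sections.items():
            th, tw = section.shape
            # raster scan of the section over the slice (scan path 39)
            for y in range(slice_img.shape[0] - th + 1):
                for x in range(slice_img.shape[1] - tw + 1):
                    window = slice_img[y:y + th, x:x + tw]
                    score = ncc(window, section)
                    if score > best['score']:
                        best = {'score': score, 'slice': slice_index,
                                'posture': posture, 'offset': (y, x),
                                # this crop becomes the current image template 44
                                'template': window.copy()}
    return best

# Tiny synthetic example: a section cut straight out of slice 1
rng = np.random.default_rng(0)
slices = [rng.random((40, 40)) for _ in range(3)]
sections = {(0.0, 0.0, 0.0): slices[1][10:26, 12:28].copy()}
result = best_extraction_region(slices, sections)
print(result['slice'], result['offset'], round(result['score'], 3))
```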
Next, as shown in FIG. 12, the position-posture conversion unit 25 of the collation processing unit 22 changes the posture of the entire 3D reference image 31 by the posture change amount obtained when the current image template region 44 was generated, and generates the posture-converted 3D posture-conversion reference image 45, i.e. the posture-conversion reference image region 47. FIG. 12 is a diagram showing the posture-converted 3D reference image according to Embodiment 2 of the present invention. The slice images 46a, 46b, 46c, 46d, and 46e are the slice images obtained by changing the postures of the slice images 32a, 32b, 32c, 32d, and 32e, respectively, by the above posture change amount.
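Applying the obtained posture change amount to the whole 3D reference image could, for example, be done with SciPy's rotation utilities, as in the sketch below, which rotates the volume about the three axes in turn. The axis convention and the choice of `scipy.ndimage.rotate` are assumptions made only for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def apply_posture_change(reference_volume, angles_deg):
    """Rotate a 3D reference image by the posture change amount.

    reference_volume : (nz, ny, nx) array (slice images 32a..32e stacked)
    angles_deg       : rotation about the three axes [dA, dB, dC] in degrees
    Returns the posture-converted reference image 45 / region 47.
    """
    out = reference_volume
    # one rotation per axis pair; reshape=False keeps the original grid
    out = rotate(out, angles_deg[0], axes=(1, 2), reshape=False, order=1)
    out = rotate(out, angles_deg[1], axes=(0, 2), reshape=False, order=1)
    out = rotate(out, angles_deg[2], axes=(0, 1), reshape=False, order=1)
    return out

reference = np.random.rand(5, 64, 64)      # five slice images 32a-32e
posture_change = [1.5, -0.8, 0.3]          # example posture change amount
pose_converted = apply_posture_change(reference, posture_change)
print(pose_converted.shape)                # (5, 64, 64)
```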
Next, as shown in FIG. 13, the secondary collation unit 17 matches the current image template region 44 against the posture-conversion reference image region 47, i.e. the posture-converted 3D posture-conversion reference image 45, along the scan path 49 in a raster-scan manner, so that only the translational offset needs to be detected, which can be done at high speed. FIG. 13 is a diagram illustrating the secondary pattern matching method according to Embodiment 2 of the present invention. The posture-converted posture-conversion reference image region 47 is represented as a rectangular parallelepiped containing five slice images 46a, 46b, 46c, 46d, and 46e. The collation execution plane 48 is the image plane corresponding to the posture that, in the primary pattern matching, yielded the highest correlation value with the slice images 37 of the 3D current image 36; in other words, it is the plane within the posture-conversion reference image region 47 whose posture is equivalent to that of the slice images 37 of the 3D current image 36. The secondary collation unit 17 generates the predetermined collation execution plane 48 from the posture-conversion reference image region 47, that is, from the plurality of slice images 46 of the 3D posture-conversion reference image 45 (collation execution plane generation step). For example, the method described in Embodiment 1 can be used; that is, the data of the collation execution plane 48 can be extracted from the plurality of slice images constituting the 3D posture-conversion reference image 45. In addition, the collation execution plane 48 may include data interpolated so that its data density is the same as that of the current image template region 44.
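Because the rotational offset has already been absorbed into the posture-converted reference image, the secondary matching only has to slide the 2D current image template region over the collation execution plane and keep the translation with the highest correlation value. A minimal sketch of such a translation-only search is shown below, again assuming normalized cross-correlation and a brute-force raster scan rather than the patented implementation.

```python
import numpy as np

def translation_only_match(execution_plane, template):
    """Slide a 2D template over the collation execution plane 48.

    Returns (dy, dx, score): the translation offset with the highest
    normalized correlation. No rotation is applied, which keeps the
    secondary matching fast.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best = (0, 0, -np.inf)
    for y in range(execution_plane.shape[0] - th + 1):      # scan path 49
        for x in range(execution_plane.shape[1] - tw + 1):
            w = execution_plane[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = t_norm * np.sqrt((w * w).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best[2]:
                best = (y, x, float(score))
    return best

rng = np.random.default_rng(1)
plane = rng.random((80, 80))                 # collation execution plane 48
template = plane[30:46, 20:36].copy()        # current image template region 44
print(translation_only_match(plane, template))   # -> (30, 20, 1.0)
```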
To summarize the two-stage pattern matching method of Embodiment 2: first, the collation processing unit 22 uses the position-posture conversion unit 25 to generate, from the 3D reference image 31, the position-posture conversion template region 40 (position-posture conversion template region generation step). The primary collation unit 16 of the collation processing unit 22 performs primary pattern matching of the position-posture conversion template region 40 against the 3D current image 36 (primary pattern matching step). Each time the position and posture of the position-posture conversion template region 40 are changed (each time the position-posture conversion step is executed), the primary pattern matching generates a cross section 41 of the position-posture conversion template region 40 for each slice image 37 constituting the current image region 38 (cross-section generation step), and image collation is performed between that cross section 41 and the slice image 37.
Each time the position and posture of the position-posture conversion template region 40 are changed, the primary collation unit 16 calculates the correlation value between the current image region 38 and the position-posture conversion template region 40 (correlation value calculation step). Likewise, each time the position-posture conversion template region 40 is scanned, the primary collation unit 16 calculates the correlation value between the current image region 38 and the position-posture conversion template region 40. Through the primary pattern matching, the current image template region 44 is generated so that it contains the extraction region corresponding to the position-posture conversion template region 40 with the highest correlation value between the current image region 38 and the position-posture conversion template region 40 (current image template region generation step).
Next, the collation processing unit 22 uses the position-posture conversion unit 25 to change the posture of the entire 3D reference image 31 by the posture change amount obtained when the current image template region 44 was generated, and generates the posture-converted 3D posture-conversion reference image 45, i.e. the posture-conversion reference image region 47 (posture-conversion reference image region generation step). The secondary collation unit 17 performs secondary pattern matching of the current image template region 44 against the posture-conversion reference image region 47 (secondary pattern matching step). The secondary pattern matching generates the collation execution plane 48 through the collation execution plane generation step, and performs image collation between the generated collation execution plane 48 and the current image template region 44. During this image collation, the current image template region 44 is translated without being rotated, and the correlation value between the collation execution plane 48 and the current image template region 44 is calculated (correlation value calculation step).
In the secondary pattern matching, the secondary collation unit 17 of the collation processing unit 22 selects, as the optimum solution, the position-posture relationship (position-posture information) between the 3D posture-conversion reference image 45 and the current image template region 44 that gives the highest of the calculated correlation values (optimum solution selection step). In this way, two-stage matching realizes pattern matching in which the two 3D images, the 3D reference image 31 and the 3D current image 36, agree best. The two-stage pattern matching method of Embodiment 2 thus includes a position-posture conversion template region generation step, a primary pattern matching step, a posture-conversion reference image region generation step, and a secondary pattern matching step. The primary pattern matching step includes a cross-section generation step, a correlation value calculation step, a position-posture conversion step, and a current image template region generation step. The secondary pattern matching step includes a collation execution plane generation step, a correlation value calculation step, and an optimum solution selection step.
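The optimum solution selection step then amounts to keeping, over all evaluated candidates, the position-posture information whose correlation value is highest. A trivial sketch follows; the candidate records and field names are illustrative assumptions.

```python
# Each candidate pairs a position-posture hypothesis with its correlation value.
candidates = [
    {"translation": (2.0, -1.0, 0.5), "rotation": (1.5, -0.8, 0.3), "correlation": 0.82},
    {"translation": (2.5, -1.0, 0.5), "rotation": (1.5, -0.8, 0.3), "correlation": 0.91},
    {"translation": (3.0, -1.5, 0.5), "rotation": (1.5, -0.8, 0.3), "correlation": 0.88},
]

# Optimum solution selection step: the highest correlation value wins.
optimum = max(candidates, key=lambda c: c["correlation"])
print(optimum["translation"], optimum["rotation"])
```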
After the pattern matching is completed, the collation processing unit 22 calculates, from the position and posture of the high-correlation region in the 3D posture-conversion reference image 45 whose correlation value is the highest among the calculated values, the body-position correction amount (translation amount and rotation amount) used when collating the 3D reference image 31 with the 3D current image 36 (body-position correction amount calculation step). The collation result display unit 23 displays, on the monitor screen of the computer 14, the body-position correction amount, or an image in which the 3D current image moved by that correction amount is superimposed on the 3D reference image, and the like. The collation result output unit 24 outputs the body-position correction amount (translation amount and rotation amount) obtained when the collation processing unit 22 collates the 3D reference image 31 with the 3D current image 36 (body-position correction amount output step). The treatment table control parameter calculation unit 26 converts the output values of the collation result output unit 24 (three translation axes [ΔX, ΔY, ΔZ] and three rotation axes [ΔA, ΔB, ΔC], six degrees of freedom in total) into the parameters for controlling each axis of the treatment table 8, i.e. it calculates those parameters (treatment table control parameter calculation step). The treatment table 8 drives the drive devices of its axes based on the treatment table control parameters calculated by the treatment table control parameter calculation unit 26 (treatment table driving step).
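How the six-degree-of-freedom output might be turned into per-axis commands for the treatment table can be sketched as below. The sign convention, the axis names, and the mechanical limits are purely hypothetical assumptions for illustration; an actual treatment table has its own kinematics, units, and safety interlocks.

```python
import numpy as np

def table_control_parameters(correction, limit_mm=50.0, limit_deg=3.0):
    """Convert a body-position correction into treatment-table axis commands.

    correction : dict with translations dX, dY, dZ (mm) and rotations
                 dA, dB, dC (degrees) -- the 6-DOF output of the
                 collation result output unit.
    The table moves opposite to the measured patient offset, and each
    command is clipped to a (hypothetical) mechanical limit.
    """
    commands = {}
    for axis in ("dX", "dY", "dZ"):
        commands[axis] = float(np.clip(-correction[axis], -limit_mm, limit_mm))
    for axis in ("dA", "dB", "dC"):
        commands[axis] = float(np.clip(-correction[axis], -limit_deg, limit_deg))
    return commands

correction = {"dX": 4.2, "dY": -1.3, "dZ": 0.8, "dA": 0.4, "dB": -0.2, "dC": 0.1}
print(table_control_parameters(correction))
```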
The image collation device 29 according to Embodiment 2 performs, based on the position-posture conversion template region 40 of the 3D reference image 31, primary pattern matching on the 3D current image 36 as image collation that also covers the three rotation axes, and then generates, from the 3D current image 36 and based on the primary pattern matching result, the current image template region 44 serving as the template region for the secondary pattern matching. Therefore, even when the number of tomographic images (slice images) of the 3D current image 36 is smaller than that of the 3D reference image 31, high-precision two-stage pattern matching can be realized.
By generating, from the 3D reference image 31, the 3D posture-conversion reference image 45, i.e. the posture-conversion reference image region 47, as the posture-converted 3D reference image, the image collation device 29 according to Embodiment 2 can use the two-dimensional current image template region 44 and perform direct pattern matching against the posture-conversion reference image region 47 using translational movement alone, without rotational movement. In the secondary pattern matching, since the correlation value is calculated only for each translational movement, the secondary pattern matching is faster than if correlation values were calculated for every combination of rotational and translational movement.
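To give a sense of the speed-up, suppose as a rough illustration that the translational search covers a 100 × 100 grid and that each rotation axis would otherwise be sampled at 10 values: a joint rotation-and-translation search would require 100 × 100 × 10 × 10 × 10 = 10,000,000 correlation evaluations, whereas the translation-only secondary pattern matching requires only 100 × 100 = 10,000. These numbers are assumptions chosen only to illustrate the order of magnitude.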
Embodiment 3
Embodiment 3 differs from Embodiments 1 and 2 in that a human body database (atlas model) is used to generate the reference image template region 33 used for the primary pattern matching of Embodiment 1, or the reference image template region 33 that serves as the basis of the position-posture conversion template region 40 of Embodiment 2. FIG. 14 is a diagram showing the configuration of the image collation device and the patient positioning device according to Embodiment 3 of the present invention. The image collation device 29 according to Embodiment 3 differs from the image collation devices 29 according to Embodiments 1 and 2 in that it has a human body database input unit 50 and an average template region generation unit 51. The patient positioning device 30 according to Embodiment 3 has the image collation device 29 and the treatment table control parameter calculation unit 26.
The human body database input unit 50 acquires the human body database (atlas model) from a storage device such as a database device. The average template region generation unit 51 cuts out an average template region 54 from the organ portion of the human body database corresponding to the affected part 5, 11 of the patient 4, 10. The reference template region generation unit 18 of the collation processing unit 22 automatically generates the reference image template region 33 by pattern-matching this average template region 54 to the 3D reference image 31 (reference image template region generation step).
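The automatic generation of the reference image template region from the atlas can be sketched as a small 3D template search: the average template cut from the human body database is slid over the 3D reference image, and the best-matching sub-volume is taken as the reference image template region 33. The brute-force search and the zero-mean correlation measure below are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def locate_average_template(reference_volume, average_template):
    """Find where the atlas-derived average template fits best in the
    3D reference image; the matched sub-volume then serves as the
    reference image template region 33."""
    tz, ty, tx = average_template.shape
    t = average_template - average_template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_offset = -np.inf, (0, 0, 0)
    nz, ny, nx = reference_volume.shape
    for z in range(nz - tz + 1):
        for y in range(ny - ty + 1):
            for x in range(nx - tx + 1):
                w = reference_volume[z:z + tz, y:y + ty, x:x + tx]
                w = w - w.mean()
                denom = t_norm * np.sqrt((w * w).sum())
                score = (w * t).sum() / denom if denom > 0 else 0.0
                if score > best_score:
                    best_score, best_offset = float(score), (z, y, x)
    z, y, x = best_offset
    region = reference_volume[z:z + tz, y:y + ty, x:x + tx]
    return region, best_offset, best_score

rng = np.random.default_rng(2)
reference = rng.random((12, 24, 24))                 # 3D reference image 31
atlas_template = reference[4:9, 8:18, 6:16].copy()   # stand-in for average template 54
region, offset, score = locate_average_template(reference, atlas_template)
print(offset, round(score, 3))                       # -> (4, 8, 6) 1.0
```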
Using the reference image template region 33 obtained in this way, the two-stage pattern matching of Embodiment 1 or the two-stage pattern matching of Embodiment 2 is performed. In this way, two-stage pattern matching can be realized even without preparing in advance, on the 3D reference image, information indicating the affected part (such as the shape of the affected part).
It is also conceivable that the average template region generation unit 51 cuts out two-dimensional average template regions from the organ portion of the human body database corresponding to the affected part 5, 11 of the patient 4, 10. In the case of two-dimensional average template regions 54, a plurality of two-dimensional average template regions are cut out, collected together, and output to the collation processing unit 22. The reference template region generation unit 18 of the collation processing unit 22 then automatically generates the reference image template region 33 by pattern-matching the plurality of two-dimensional average template regions to the 3D reference image 31.
Description of Reference Numerals
16 … primary collation unit; 17 … secondary collation unit; 18 … reference template region generation unit; 21 … 3D image input unit; 22 … collation processing unit; 25 … position-posture conversion unit; 26 … treatment table control parameter calculation unit; 29 … image collation device; 30 … patient positioning device; 31 … 3D reference image; 33 … reference image template region; 36 … 3D current image; 40, 40a, 40b, 40c … position-posture conversion template region; 41 … cross section; 42 … primary-extraction current image region; 44 … current image template region; 45 … 3D posture-conversion reference image; 48 … collation execution plane; 50 … human body database input unit; 51 … average template region generation unit.
Claims (16)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2011-130074 | 2011-06-10 | ||
| JP2011130074A JP5693388B2 (en) | 2011-06-10 | 2011-06-10 | Image collation device, patient positioning device, and image collation method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN102814006A CN102814006A (en) | 2012-12-12 |
| CN102814006B true CN102814006B (en) | 2015-05-06 |
Family
ID=47298678
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201210022145.2A Expired - Fee Related CN102814006B (en) | Image contrast device, patient positioning device and image contrast method | | 2012-01-13 |
Country Status (3)
| Country | Link |
|---|---|
| JP (1) | JP5693388B2 (en) |
| CN (1) | CN102814006B (en) |
| TW (1) | TWI425963B (en) |
Families Citing this family (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI573565B (en) * | 2013-01-04 | 2017-03-11 | shu-long Wang | Cone - type beam tomography equipment and its positioning method |
| US10092251B2 (en) * | 2013-03-15 | 2018-10-09 | Varian Medical Systems, Inc. | Prospective evaluation of tumor visibility for IGRT using templates generated from planning CT and contours |
| JP6192107B2 (en) * | 2013-12-10 | 2017-09-06 | Kddi株式会社 | Video instruction method, system, terminal, and program capable of superimposing instruction image on photographing moving image |
| CN104135609B (en) * | 2014-06-27 | 2018-02-23 | 小米科技有限责任公司 | Auxiliary photo-taking method, apparatus and terminal |
| JP6338965B2 (en) * | 2014-08-08 | 2018-06-06 | キヤノンメディカルシステムズ株式会社 | Medical apparatus and ultrasonic diagnostic apparatus |
| JP6452987B2 (en) * | 2014-08-13 | 2019-01-16 | キヤノンメディカルシステムズ株式会社 | Radiation therapy system |
| US9878177B2 (en) * | 2015-01-28 | 2018-01-30 | Elekta Ab (Publ) | Three dimensional localization and tracking for adaptive radiation therapy |
| JP6164662B2 (en) * | 2015-11-18 | 2017-07-19 | みずほ情報総研株式会社 | Treatment support system, operation method of treatment support system, and treatment support program |
| JP2018042831A (en) | 2016-09-15 | 2018-03-22 | 株式会社東芝 | Medical image processing apparatus, treatment system, and medical image processing program |
| JP6869086B2 (en) * | 2017-04-20 | 2021-05-12 | 富士フイルム株式会社 | Alignment device, alignment method and alignment program |
| CN109859213B (en) * | 2019-01-28 | 2021-10-12 | 艾瑞迈迪科技石家庄有限公司 | Method and device for detecting bone key points in joint replacement surgery |
| JP7513980B2 (en) | 2020-08-04 | 2024-07-10 | 東芝エネルギーシステムズ株式会社 | Medical image processing device, treatment system, medical image processing method, and program |
| CN114073827B (en) * | 2020-08-15 | 2023-08-04 | 中硼(厦门)医疗器械有限公司 | Radiation irradiation system and control method thereof |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1235684A (en) * | 1996-10-29 | 1999-11-17 | 匹兹堡大学高等教育联邦体系 | Device for matching X-ray images with refrence images |
| CN101032650A (en) * | 2006-03-10 | 2007-09-12 | 三菱重工业株式会社 | Radiotherapy device control apparatus and radiation irradiation method |
| JP2009189461A (en) * | 2008-02-13 | 2009-08-27 | Mitsubishi Electric Corp | Patient positioning device and method |
| CN101708126A (en) * | 2008-09-19 | 2010-05-19 | 株式会社东芝 | Image processing apparatus and x-ray computer tomography apparatus |
| WO2010133982A2 (en) * | 2009-05-18 | 2010-11-25 | Koninklijke Philips Electronics, N.V. | Marker-free tracking registration and calibration for em-tracked endoscopic system |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP3748433B2 (en) * | 2003-03-05 | 2006-02-22 | 株式会社日立製作所 | Bed positioning device and positioning method thereof |
| JP2007014435A (en) * | 2005-07-06 | 2007-01-25 | Fujifilm Holdings Corp | Image processing device, method and program |
| JP4425879B2 (en) * | 2006-05-01 | 2010-03-03 | 株式会社日立製作所 | Bed positioning apparatus, positioning method therefor, and particle beam therapy apparatus |
| JP5233374B2 (en) * | 2008-04-04 | 2013-07-10 | 大日本印刷株式会社 | Medical image processing system |
| TWI381828B (en) * | 2009-09-01 | 2013-01-11 | 長庚大學 | Method of making artificial implants |
- 2011
  - 2011-06-10 JP JP2011130074A patent/JP5693388B2/en not_active Expired - Fee Related
  - 2011-11-11 TW TW100141223A patent/TWI425963B/en not_active IP Right Cessation
- 2012
  - 2012-01-13 CN CN201210022145.2A patent/CN102814006B/en not_active Expired - Fee Related
Also Published As
| Publication number | Publication date |
|---|---|
| TWI425963B (en) | 2014-02-11 |
| JP2012254243A (en) | 2012-12-27 |
| CN102814006A (en) | 2012-12-12 |
| TW201249496A (en) | 2012-12-16 |
| JP5693388B2 (en) | 2015-04-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN102814006B (en) | Image contrast device, patient positioning device and image contrast method | |
| CN109419524B (en) | Control of medical imaging system | |
| EP3362148B1 (en) | System and method for monitoring structural movements throughout radiation therapy | |
| EP2285279B1 (en) | Automatic patient positioning system | |
| EP1102611B1 (en) | Delivery modification system for radiation therapy | |
| KR102831960B1 (en) | Medical image processing device, treatment system, medical image processing method, and storage medium | |
| KR102619994B1 (en) | Biomedical image processing devices, storage media, biomedical devices, and treatment systems | |
| JP2008544831A (en) | High-precision overlay of X-ray images on cone-beam CT scan for image-guided radiation treatment | |
| JP2015083068A (en) | Radiotherapy apparatus and system and method | |
| JP7444387B2 (en) | Medical image processing devices, medical image processing programs, medical devices, and treatment systems | |
| CN119656490B (en) | Real-time image guiding radiotherapy device | |
| JP2009000369A (en) | Radiotherapy apparatus and treatment site positioning method | |
| JP6918443B2 (en) | Medical information processing device | |
| JP5401240B2 (en) | Radiation therapy system | |
| JP2022131757A (en) | Radiation therapy device, medical image processing device, radiation therapy method, and program | |
| US20250069730A1 (en) | Medical image processing device, treatment system, medical image processing method, and storage medium | |
| US20240362782A1 (en) | Medical image processing device, treatment system, medical image processing method, and storage medium | |
| CN121177671A (en) | Image guidance devices, guidance methods and related equipment for radiotherapy | |
| WO2024117129A1 (en) | Medical image processing device, treatment system, medical image processing method, and program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | C14 | Grant of patent or utility model | |
| | GR01 | Patent grant | |
| | TR01 | Transfer of patent right | Effective date of registration: 20190123; Address after: Tokyo, Japan; Patentee after: Hitachi, Ltd.; Address before: Tokyo, Japan; Patentee before: MITSUBISHI ELECTRIC Corp. |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20150506 |