
TW201249496A - Image collating apparatus, patient positioning apparatus, and image collating method - Google Patents


Info

Publication number
TW201249496A
TW201249496A (application TW100141223A)
Authority
TW
Taiwan
Prior art keywords
image
dimensional
posture
reference image
area
Prior art date
Application number
TW100141223A
Other languages
Chinese (zh)
Other versions
TWI425963B (en)
Inventor
Kosuke Hirasawa
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corp
Publication of TW201249496A
Application granted
Publication of TWI425963B

Landscapes

  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)
  • Radiation-Therapy Devices (AREA)

Abstract

An objective of this invention is to realize high-accuracy two-stage pattern matching (two-stage collating) when positioning a patient for radiation therapy, even in the case where the number of tomographic images of a three-dimensional present image is smaller than that of a three-dimensional reference image. This invention includes a collation processing part (22) which collates a three-dimensional reference image (31) with a three-dimensional present image (36) and calculates a body position correction amount so that the position/posture of an affected part in the present image (36) is corrected to match the position/posture of the affected part in the reference image (31). The collation processing part (22) has a primary collating part (16) for performing a primary collation which collates the reference image (31) with the present image (36), and a secondary collating part (17) for performing a secondary collation which collates a specified template area (40), generated from either the reference image (31) or the present image (36) based on the result of the primary collation, with a specified retrieval object area (42), generated, based on the result of the primary collation, from whichever of the reference image (31) and the present image (36) is not the generation origin of the specified template area (40).
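The "specified template area" and "specified retrieval object area" above are, in the embodiment, axis-aligned boxes built from per-slice regions and merged into one cuboid. As a rough illustration only (the function names, mask representation, and array shapes are assumptions of this sketch, not part of the patent), the per-slice circumscribed rectangles and their common enclosing box might be computed as follows:

```python
import numpy as np

def slice_bounding_box(roi_mask):
    """Circumscribed rectangle (min_row, min_col, max_row, max_col)
    of a closed ROI contour given as a boolean mask on one slice."""
    rows = np.any(roi_mask, axis=1)
    cols = np.any(roi_mask, axis=0)
    r = np.where(rows)[0]
    c = np.where(cols)[0]
    return r[0], c[0], r[-1], c[-1]

def template_cube(roi_masks):
    """Union of the per-slice rectangles: the single box footprint
    that contains the ROI on every slice of the stack."""
    boxes = np.array([slice_bounding_box(m) for m in roi_masks])
    return (boxes[:, 0].min(), boxes[:, 1].min(),
            boxes[:, 2].max(), boxes[:, 3].max())

# toy ROI: a small square on each of three slices, shifted per slice
masks = []
for shift in (0, 1, 2):
    m = np.zeros((10, 10), dtype=bool)
    m[3 + shift:6 + shift, 2:5] = True
    masks.append(m)

print(template_cube(masks))  # → (3, 2, 7, 4)
```

Stacking this footprint across the slices gives the cuboid region used as the template (or search) volume.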

Description

201249496

VI. DESCRIPTION OF THE INVENTION

[Technical Field of the Invention]

The present invention relates to an image collating apparatus that uses CT (computed tomography) image data in a radiotherapy apparatus which treats cancer by irradiating the affected part of a patient with radiation such as X-rays, gamma rays, or particle beams, and to a patient positioning apparatus that uses this image collating apparatus to position the patient with respect to the radiation irradiation of such a radiotherapy apparatus.

[Prior Art]

In recent years, radiotherapy apparatuses for cancer treatment using particle beams of protons, heavy ions, and the like (here specifically called particle beam therapy apparatuses) have been developed and constructed. It is well known that particle beam therapy, compared with conventional radiotherapy using X-rays, gamma rays, and the like, can concentrate the irradiation on the cancerous part; that is, the beam can be delivered with pinpoint accuracy matched to the shape of the affected part, so that treatment can be carried out without affecting normal cells.

In particle beam therapy it is important to irradiate an affected part such as a cancer with high precision. At treatment time the patient is therefore immobilized with fixtures or the like so as not to shift relative to the treatment table of the treatment room (irradiation room). To position an affected part such as a cancer accurately within the radiation irradiation range, a setting such as a rough fixation of the patient using laser pointers or the like is performed first, and the affected part is then positioned precisely using X-ray imaging or the like.

Patent Document 1 proposes a couch positioning apparatus, and a positioning method for it, that generate positioning information for driving the treatment table by two-stage pattern matching, rather than by designating the same positions of the same number of landmarks (monuments) on both a reference image (an X-ray fluoroscopic image) and a current image (an image captured with an X-ray receiver). In the primary pattern matching, a second set region of approximately the same size as a first set region is defined for the two-dimensional current image, the first set region being a region of the two-dimensional reference image that contains the isocenter (beam irradiation center); the second set region is then moved in sequence within the two-dimensional current image, and at each of its positions the two-dimensional reference image in the first set region is compared with the two-dimensional current image in the second set region, so as to extract the second set region whose two-dimensional current image is most similar to the two-dimensional reference image of the first set region. The secondary pattern matching is performed so that the two-dimensional current image in the second set region extracted by the primary matching and the two-dimensional reference image in the first set region agree most closely.

(Prior Art Documents)

(Patent Documents)

(Patent Document 1) Japanese Patent No. 3748433 (paragraphs 0007 to 0009, paragraph 0049, Fig. 8, Fig. 9)

[Summary of the Invention]

(Problem to Be Solved by the Invention)

Because an affected part has a three-dimensional shape, positioning it at the affected-part position assumed in the treatment plan can be done with higher accuracy using three-dimensional images than using two-dimensional images. In general, the three-dimensional shape of the affected part is determined using X-ray CT (computed tomography) images when the treatment plan data are prepared. In recent years there has been a demand to install an X-ray CT apparatus in the treatment room and to perform positioning using the X-ray CT current images taken with that apparatus at treatment time together with the X-ray CT images taken at treatment planning. The reason is that in an X-ray fluoroscopic image the affected part, which is essentially soft tissue, usually does not appear clearly, so alignment must basically rely on the position relative to bone, whereas positioning with X-ray CT images requires no such workaround: the affected parts captured in the X-ray CT images can be aligned with each other directly.

It is therefore conceivable to extend the reference image and the current image of the conventional two-stage pattern matching into three-dimensional images. The three-dimensional reference image and the three-dimensional current image each consist of a plurality of tomographic images (slice images) taken with an X-ray CT apparatus. Out of consideration for X-ray exposure and the like, the three-dimensional current image contains only a small number of slice images, so the comparison must be made between a three-dimensional reference image having detailed image information and a three-dimensional current image whose image information is sparser than that of the reference image. The conventional two-stage pattern matching can compare a two-dimensional reference image and a two-dimensional current image whose image information has the same density, but when a three-dimensional reference image and a three-dimensional current image differing in the density of their image information are compared, simply raising the image dimension of the conventional technique from two to three does not realize two-stage pattern matching. In other words, it is not possible, as before, simply to perform the primary pattern matching of the three-dimensional reference image in the set first region against the three-dimensional current image in the second set region, and then simply to compare the extracted three-dimensional current image portion of the second set region with the three-dimensional reference image in the first set region so that the two images agree most closely.

The object of the present invention is therefore to realize high-accuracy two-stage pattern matching (two-stage collation) when positioning a patient for radiotherapy, even when the number of tomographic images of the three-dimensional current image is smaller than that of the three-dimensional reference image.

(Means for Solving the Problem)

The image collating apparatus of the present invention comprises: a three-dimensional image input part that reads in a three-dimensional reference image taken at the time of treatment planning for radiotherapy and a three-dimensional current image taken at the time of treatment; and a collation processing part that collates the three-dimensional reference image with the three-dimensional current image and calculates a body position correction amount that brings the position and posture of the affected part in the three-dimensional current image into agreement with the position and posture of the affected part in the three-dimensional reference image. The collation processing part has: a primary collating part that performs primary pattern matching of the three-dimensional reference image against the three-dimensional current image; and a secondary collating part that performs secondary pattern matching between a predetermined template region generated, based on the result of the primary pattern matching, from one of the three-dimensional reference image and the three-dimensional current image, and a predetermined search target region generated, based on the result of the primary pattern matching, from the other of those two images, the one that is not the source of the predetermined template region.

(Effect of the Invention)

The image collating apparatus of the present invention first performs primary pattern matching from the three-dimensional reference image to the three-dimensional current image, then generates the predetermined template region and the predetermined search target region based on the result of the primary pattern matching, and then executes secondary pattern matching between the search target region and the template region. High-accuracy two-stage pattern matching can therefore be realized even when the number of tomographic images of the three-dimensional current image is smaller than that of the three-dimensional reference image.
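The means and effects described above can be illustrated in miniature. The sketch below is not the patent's algorithm: it uses a single two-dimensional image, sum-of-squared-differences in place of a correlation value, and average pooling for the coarse stage, purely to show how a coarse primary matching can narrow the search range of a fine secondary matching:

```python
import numpy as np

def downsample(a, k):
    """Average-pool by k x k blocks (a coarse-resolution view)."""
    H, W = a.shape
    return a[:H - H % k, :W - W % k].reshape(H // k, k, W // k, k).mean(axis=(1, 3))

def ssd_best(image, tmpl, positions):
    """(row, col) among `positions` minimizing sum-of-squared-differences."""
    h, w = tmpl.shape
    best, best_cost = None, np.inf
    for r, c in positions:
        cost = np.sum((image[r:r + h, c:c + w] - tmpl) ** 2)
        if cost < best_cost:
            best, best_cost = (r, c), cost
    return best

def two_stage_match(image, tmpl, k=2):
    # primary collation: coarse resolution, whole image as search range
    ci, ct = downsample(image, k), downsample(tmpl, k)
    ch, cw = ct.shape
    coarse = [(r, c) for r in range(ci.shape[0] - ch + 1)
                     for c in range(ci.shape[1] - cw + 1)]
    rc, cc = ssd_best(ci, ct, coarse)
    # secondary collation: full resolution, restricted to the
    # neighbourhood extracted by the primary collation
    h, w = tmpl.shape
    H, W = image.shape
    fine = [(r, c)
            for r in range(max(0, k * rc - k), min(H - h, k * rc + k) + 1)
            for c in range(max(0, k * cc - k), min(W - w, k * cc + k) + 1)]
    return ssd_best(image, tmpl, fine)

rng = np.random.default_rng(0)
image = rng.random((64, 64))
tmpl = image[20:28, 12:20].copy()    # template planted at (20, 12)
print(two_stage_match(image, tmpl))  # → (20, 12)
```

The secondary stage evaluates only a few dozen candidate positions instead of thousands, which is the time saving the embodiment attributes to searching a narrowed range at a finer resolution.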
[Embodiment]

Embodiment 1

Fig. 1 shows the configuration of the image collating apparatus and the patient positioning apparatus according to Embodiment 1 of the present invention. Fig. 2 shows the overall configuration of the equipment related to the image collating apparatus and the patient positioning apparatus of the present invention. In Fig. 2, reference numeral 1 denotes a CT simulation room for carrying out the treatment planning performed before radiotherapy; it contains a CT gantry 2 and the top plate 3 of a couch for CT imaging, and with a patient 4 lying on the top plate 3, CT image data for treatment planning containing the affected part 5 are taken. Reference numeral 6 denotes a treatment room for carrying out the radiotherapy; it contains a CT gantry 7 and a rotating treatment table 8, the upper part of the rotating treatment table 8 carries a top plate 9, and with a patient 10 lying on the top plate 9, CT image data for positioning containing the affected part 11 at treatment time are taken.

Here, positioning means computing from the treatment-planning CT image data the positions of the patient 10 and the affected part 11 at treatment time, calculating a body position correction amount that brings the positions of the patient 10 and the affected part 11 into agreement with the treatment plan, and then performing alignment so that the affected part 11 at treatment time comes to the beam irradiation center 12 of the radiotherapy. The alignment is realized by drive-controlling the rotating treatment table 8, with the patient 10 lying on the top plate 9, so as to move the position of the top plate 9. The rotating treatment table 8 can perform drive correction in a total of six degrees of freedom of translation and rotation, and by rotating its top plate 9 through 180 degrees it can move between the CT imaging position (shown by solid lines in Fig. 2) and a treatment position under the radiation irradiation head 13. Fig. 2 shows the case where the CT imaging position and the treatment position face each other 180 degrees apart, but the arrangement is not limited to this; the positional relationship between the two may, for example, differ by 90 degrees instead.

The CT image data for treatment planning and the CT image data for positioning are transmitted to a positioning computer 14. The treatment-planning CT image data become the three-dimensional reference image, and the positioning CT image data become the three-dimensional current image. The image collating apparatus 29 and the patient positioning apparatus 30 of the present invention exist as computer software in the positioning computer 14: the image collating apparatus 29 is what calculates the body position correction amount (translation amounts and rotation amounts) described later, and the patient positioning apparatus 30 not only contains the image collating apparatus 29 but also has the function of calculating, from the body position correction amount, the parameters of the drive axes for controlling the rotating treatment table 8 (hereinafter simply called the treatment table 8). The patient positioning apparatus 30 controls the treatment table 8 according to the result of the collation performed by the image collating apparatus 29 (the collation result), thereby guiding the affected part targeted by the particle beam therapy to the beam irradiation center 12 of the treatment apparatus.

Positioning in conventional radiotherapy is performed by collating a DRR (Digitally Reconstructed Radiography) image generated from the treatment-planning CT image data, or an X-ray fluoroscopic image taken at the same time as that DRR image, with an X-ray fluoroscopic image taken in the treatment room at treatment time, and computing the positional deviation. In an X-ray fluoroscopic image the affected part, which is essentially soft tissue, usually does not appear clearly, so the alignment basically has to rely on the position relative to bone. Positioning using CT image data as in this embodiment, by contrast, is performed before treatment on the CT image data against the treatment-planning CT image data, with the CT gantry 7 installed in the treatment room 6, so that the alignment can be carried out directly at the affected part itself.

Next, the procedure of the body position correction in the image collating apparatus 29 and the patient positioning apparatus 30 of this embodiment is described. Fig. 1 shows the relationships among the processing parts constituting the image collating apparatus and the patient positioning apparatus. The image collating apparatus 29 comprises a three-dimensional image input part 21 that reads in the images, a collation processing part 22, a collation result display part 23, and a collation result output part 24. At treatment planning, affected part information representing the affected part targeted by the particle beam therapy is input. The three-dimensional current image, taken for positioning at treatment time, is characterized in that, from the viewpoint of suppressing X-ray exposure, its number of tomographic images (also called slice images) is small.

The present invention is configured so that primary pattern matching from the three-dimensional reference image to the three-dimensional current image is performed first, a predetermined template region and a predetermined search target region are then generated from the result of the primary pattern matching, and secondary pattern matching is then performed using the predetermined template region, in the same direction or in the reverse direction; this constitutes a two-stage pattern matching.

The two-stage pattern matching can achieve faster or more accurate processing by making the comparison parameters used in the primary pattern matching different from those used in the secondary pattern matching. For example, one method performs the primary pattern matching over a wide range at a coarse resolution and then, using the template region or search target region thus found, performs the secondary pattern matching over the narrowed range at a finer resolution.

Next, the three-dimensional image input part 21 is described. The three-dimensional image input part 21 reads in groups of images, taken with an X-ray CT apparatus and composed of a plurality of tomographic images, as image data in DICOM (Digital Imaging and Communications in Medicine) form (groups of slices), treating them as three-dimensional volume data. The CT image data for treatment planning are the three-dimensional volume data at treatment planning, that is, the three-dimensional reference image. The CT image data for positioning are the three-dimensional volume data at treatment, that is, the three-dimensional current image. The CT image data are not limited to the DICOM form and may be in other forms.

The collation processing part 22 collates (pattern-matches) the three-dimensional reference image with the three-dimensional current image and calculates the body position correction amount that brings the position and posture of the affected part in the three-dimensional current image into agreement with the position and posture of the affected part in the three-dimensional reference image. The collation result display part 23 displays the result of the collation by the collation processing part 22 (the body position correction amount described later, an image in which the three-dimensional current image moved by that correction amount is superimposed on the three-dimensional reference image, and the like) on the monitor screen of the positioning computer 14. The collation result output part 24 outputs the correction amount obtained when the collation processing part 22 collates the three-dimensional reference image with the three-dimensional current image, that is, the body position correction amount (translation amounts and rotation amounts) calculated by the collation processing part 22. A treatment table control parameter calculating part 26 converts the output values of the collation result output part 24 (three translation axes [ΔX, ΔY, ΔZ] and three rotation axes [ΔA, ΔB, ΔC], six degrees of freedom in total) into parameters for controlling the axes of the treatment table 8; that is, it calculates the parameters for controlling each axis of the treatment table 8. The treatment table 8 drives the drive devices of its axes according to the treatment table control parameters calculated by the treatment table control parameter calculating part 26. In this way the body position correction amount that brings the position of the affected part into agreement with the treatment plan is calculated, and alignment is performed so that the affected part 11 at treatment time comes to the beam irradiation center 12 of the radiotherapy.

The collation processing part 22 has a position and posture transforming part 25, a primary collating part 16, a secondary collating part 17, and a reference template region generating part 18. The position and posture transforming part 25 changes the position and posture of the target data at the primary pattern matching or the secondary pattern matching. The primary collating part 16 performs the primary pattern matching of the three-dimensional reference image against the three-dimensional current image. The secondary collating part 17 performs the secondary pattern matching between a predetermined template region generated, based on the result of the primary pattern matching, from one of the three-dimensional reference image and the three-dimensional current image, and a predetermined search target region generated, based on the result of the primary pattern matching, from the other of those two images, the one that is not the source of the template region.

The collation processing part 22 is now described in detail with reference to Figs. 3 to 9. Fig. 3 shows the three-dimensional reference image and the reference image template region in Embodiment 1 of the present invention. Fig. 4 shows the three-dimensional current image in Embodiment 1. Fig. 5 illustrates the primary pattern matching method in Embodiment 1.
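The six-degree-of-freedom output [ΔX, ΔY, ΔZ, ΔA, ΔB, ΔC] can be pictured as a rigid transform applied to the affected-part coordinates before the couch axes are driven. The patent does not specify a rotation convention, so the Z-Y-X Euler order in the sketch below is an assumption made purely for illustration:

```python
import numpy as np

def correction_matrix(dx, dy, dz, da, db, dc):
    """Homogeneous 4x4 transform for a six-DOF body position
    correction [dX, dY, dZ, dA, dB, dC] (angles in radians).
    Z-Y-X rotation order is an assumption; the patent only names
    three translation axes and three rotation axes."""
    ca, sa = np.cos(da), np.sin(da)
    cb, sb = np.cos(db), np.sin(db)
    cc, sc = np.cos(dc), np.sin(dc)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])
    m = np.eye(4)
    m[:3, :3] = rz @ ry @ rx
    m[:3, 3] = (dx, dy, dz)
    return m

def apply(m, p):
    """Apply the homogeneous transform to a 3-D point."""
    return (m @ np.append(p, 1.0))[:3]

# a 90-degree yaw plus a shift moves a hypothetical affected-part
# point toward the beam irradiation center
m = correction_matrix(5.0, -2.0, 0.0, 0.0, 0.0, np.pi / 2)
print(apply(m, [1.0, 0.0, 0.0]))  # ≈ [5, -1, 0]
```

An actual couch controller would decompose such a matrix back into its axis commands; the order and signs of that decomposition depend on the table's kinematics.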
6 is a diagram showing the relationship between the reference image area in the pattern matching method of Fig. 5 and the cut image of the image of r· Λ 323646 12 201249496. Fig. 7 is a view showing a single extraction region of a sliced image extracted by the primary pattern matching method in the first embodiment of the present invention. Fig. 8 is a view for explaining a quadratic pattern matching method in the embodiment of the present invention. Fig. 9 is a diagram for explaining the relationship between the reference image template area and the slice artifact in the quadratic pattern matching method of Fig. 8. The reference template area generating unit 18 of the collation processing unit 22 generates the reference image template area 3 from the three-dimensional reference image 31 by using the shape of the affected part (affected part information) which is input at the time of the treatment. The three-dimensional beauty of the Zhuixiang 31 is composed of a plurality of sliced images 32. In the third drawing, an example in which the two-dimensional reference image 31 is composed of five sliced images 32a 32b 32c 32d and 32e is shown in order not to obscure the description. The shape of the affected part is entered as a 闭f Interest: attention area 35 in a form in which the sliced image is a closed contour surrounding the affected part. The area including the aforementioned closed contour may be set to, for example, an circumscribed square 34, and the cube area including each circumscribed square 34 may be set as a template area. This template area is used as a reference template area 33. The primary collating unit 16 of the collation processing unit 22 performs primary pattern matching in which the reference image template region 33 is matched to the three-dimensional current image 36. The two-dimensional present image 36 shown in Fig. 4 is an example of three sliced images 37a, 37b, and 37c. The current image area % shown in Fig. 
5 is expressed as a cube containing a slice of the image 37a, 37b, 37c. As shown in Fig. 5, in the current imaging area 38, the reference image template area 33 (33a, 33b, 33C) is moved in a raster scan manner, and is calculated and compared with the three-dimensional current image 1 3 6夕| Relevant value. Any correlation value in the image comparison (image check) can be used in terms of correlation values. S. 13 323646 201249496 The reference image template area 33a is moved in a progressive scan manner along the line sweeping path 39a along the sliced Chad, and on the 1st 37a. Similarly, the Ukrainian review has a commentary field 33b on the sliced image 37b along the sweeping path ^ outside the image of the sample area, the reference image area 33c is t μ ^, the image 37c The edge scan path 39c moves in a progressive scan. The dragon is rotated by the α ° r ' sweeping path 39b 39c in order to simplify the display without complicating the drawing. When the --type matching is performed, as shown in Fig. 6, each of the slice images 53 constituting the reference image template area 33 is imaged with the slice image 37 constituting the current imaging area 38, respectively. Check. The sliced image 53 is an image cut out by the reference frame 2 in the slice image 32 of the three-dimensional reference image 31. The reference image template area 33 is composed of five sliced images 53a, 53b, 53c, 53d, 53e corresponding to the five sliced images 32a, 32b, 32c, 32d, 32e in the three-dimensional reference image. Therefore, at the time of the pattern matching, the slice images 37a' of the three-dimensional current image 36 are respectively subjected to the five slice images 53a, 53b, 53c, 53d, 53e of the reference image template region 33. The portrait is checked. Further, the image collation is performed in the same manner as the slice images 37b and 37c of the three-dimensional current image 36. 
By the primary pattern matching, the primary collating unit 16 extracts a primary extraction region 43 from each of the sliced images 37 of the three-dimensional current image 36 (the primary extraction region 43 being the region of the current image area 38 whose correlation value with the reference image template area 33 is highest). As shown in Fig. 7, the primary extraction region 43a is extracted from the sliced image 37a of the three-dimensional current image 36, and the primary extraction regions 43b and 43c are extracted from the sliced images 37b and 37c of the three-dimensional current image 36. The primary collating unit 16 then generates the primary-extraction current image area 42, which contains the primary extraction regions 43a, 43b, and 43c, as the search target area to be used in the secondary pattern matching. In the state before positioning, however, the posture of the three-dimensional reference image 31 does not coincide with that of the three-dimensional current image 36 (about the three rotation axes), so when the number of slices of the three-dimensional current image 36 is small, a simple raster scan as in Fig. 5 cannot perform high-accuracy matching in which the angular offset is detected; it is, however, sufficient for extracting the primary extraction regions 43 required for the secondary pattern matching. Therefore, in the primary pattern matching the correlation values are calculated without detecting the angular offset, and the subsequent secondary pattern matching performs the high-accuracy matching in which the angular offset is detected. Next, the secondary pattern matching will be explained.
In the secondary pattern matching, the position/posture conversion unit 25 of the collation processing unit 22 generates a position/posture conversion template area 40 obtained by converting the position and posture of the reference image template area 33 generated from the three-dimensional reference image 31. As shown in Figs. 8 and 9, the secondary pattern matching adds the posture change amount (about the three rotation axes) of the reference image template area 33 as a parameter. The secondary collating unit 17 performs high-accuracy matching that also includes the angular-offset factor between the position/posture conversion template area 40 obtained by the position/posture conversion unit 25 and the primary-extraction current image area 42 of the three-dimensional current image 36, which has a small number of sliced images. This design makes possible high-accuracy two-stage pattern matching that also includes the angular-offset factor. By taking as the search range of the secondary pattern matching the narrower range containing the regions found in the primary pattern matching, the primary pattern matching can be performed over a wide range at a coarser resolution, and the secondary pattern matching can then be performed at a finer resolution using the primary-extraction current image area 42 obtained as the result of the primary pattern matching, so that the time required for pattern matching can be shortened. The primary-extraction current image area 42 shown in Fig. 8 is a cube containing the three primary extraction regions 43a, 43b, and 43c. In the secondary pattern matching, the position/posture conversion template area 40a, obtained by converting the position and posture of the reference image template area, is moved in a raster-scan manner along the scan path 39a within the primary extraction region 43a of the sliced image 37a.
Similarly, the position/posture conversion template area 40b is moved in a raster-scan manner along the scan path 39b within the primary extraction region 43b of the sliced image 37b, and the position/posture conversion template area 40c is moved in a raster-scan manner within the primary extraction region 43c of the sliced image 37c. The scan paths 39b and 39c are drawn in simplified form so as not to complicate the figure. When the secondary pattern matching is performed, as shown in Fig. 9, image collation is carried out between a section 41 of the position/posture conversion template area 40 and the primary extraction region 43 of a sliced image 37 constituting the primary-extraction current image area 42. Alternatively, the image collation may be performed between the section 41 and a cut-out image 55 (an image cut out from a sliced image 37 within the primary-extraction current image area 42 of the three-dimensional current image). The section 41 of the position/posture conversion template area 40 is generated from the sliced images of the three-dimensional reference image 31 that constitute the template area. For example, the data of the section 41 are cut out from the plurality of sliced images 32 constituting the three-dimensional reference image 31.

In general, the data density of the section 41 of the position/posture conversion template area 40 differs from the data density of the primary extraction region 43 of the three-dimensional current image 36, but it suffices to calculate the correlation value for each pixel of the section 41. Alternatively, the section 41 of the position/posture conversion template area 40 may contain data interpolated so that its data density equals the data density of the primary extraction region 43 of the three-dimensional current image 36. Here, the two-stage pattern matching method of Embodiment 1 is summarized. First, the reference template area generating unit 18 of the collation processing unit 22 generates the reference image template area 33 from the three-dimensional reference image 31 (reference image template area generation step). Then, the primary collating unit 16 performs primary pattern matching of the reference image template area 33 against the three-dimensional current image 36 (primary pattern matching step). In the primary pattern matching, each of the sliced images 53 constituting the reference image template area 33 is collated with the sliced images 37 constituting the current image area 38.
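The interpolation option mentioned above (supplementing the section data so that its density matches that of the extraction region) can be illustrated with a simple linear resampling. The patent does not fix an interpolation method, so the following numpy sketch is only an assumed example.

```python
import numpy as np

def resample_rows(section, new_rows):
    """Linearly interpolate a 2-D section along its row (slice-stacking) axis so
    that its sampling density matches a target, e.g. that of the current image."""
    old_rows = section.shape[0]
    # positions of the new rows expressed in old-row coordinates
    pos = np.linspace(0, old_rows - 1, new_rows)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, old_rows - 1)
    frac = (pos - lo)[:, None]
    return (1 - frac) * section[lo] + frac * section[hi]

coarse = np.array([[0.0, 0.0], [1.0, 1.0]])   # two sampled rows
fine = resample_rows(coarse, 5)               # five rows, values 0 → 1 in steps of 0.25
```

The same idea extends to both in-plane axes; higher-order interpolation could equally be used.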
The primary collating unit 16 scans the reference image template area 33 and, for each scan position, calculates the correlation value between the current image area 38 and the reference image template area 33 (correlation value calculation step); by this primary pattern matching it extracts the primary extraction regions 43 so that each contains the region where the correlation value between the current image area 38 and the reference image template area 33 is highest (primary extraction region extraction step). Then, the primary collating unit 16 generates, in such a way that every sliced image 37 constituting the current image area 38 contributes a primary extraction region 43, the primary-extraction current image area 42 as the search target area to be used in the secondary pattern matching (search target generation step). The two-stage pattern matching method of Embodiment 1 thus comprises the reference image template area generation step, the primary pattern matching step, and the secondary pattern matching step described below.

The primary pattern matching step comprises the correlation value calculation step, the primary extraction region extraction step, and the search target generation step. Next, the secondary collating unit 17 of the collation processing unit 22 performs secondary pattern matching of the position/posture conversion template area 40, obtained by the position/posture conversion unit 25 converting the position and posture of the reference image template area 33, against the primary-extraction current image area 42 of the three-dimensional current image 36 (secondary pattern matching step). In the secondary pattern matching, a plurality of sections 41 of the position/posture conversion template area 40 whose position and posture have been converted are generated (section generation step), and for each section 41 image collation is performed between that section 41 and the primary extraction region 43, or the cut-out image 55, of a sliced image 37 constituting the primary-extraction current image area 42. The secondary collating unit 17 scans the position/posture conversion template area 40 and, for each scan position, calculates the correlation values between the primary-extraction current image area 42 and the plurality of sections 41 of the position/posture conversion template area 40 (correlation value calculation step).
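Cutting a section 41 out of a slice stack can be sketched as below. This is a minimal nearest-neighbour sampler under assumed conventions (the volume indexed as (z, y, x), the plane tilted about the x-axis); the patent does not prescribe this particular geometry, and the names are hypothetical.

```python
import numpy as np

def extract_section(volume, angle_deg, z_center):
    """Sample one cross-section of `volume` (z, y, x) on a plane tilted about the
    x-axis by `angle_deg`, nearest-neighbour, zero outside the stack."""
    nz, ny, nx = volume.shape
    a = np.deg2rad(angle_deg)
    out = np.zeros((ny, nx))
    for j in range(ny):
        # tilting about x: the sampled z coordinate depends on y
        z = int(round(z_center + (j - ny // 2) * np.tan(a)))
        if 0 <= z < nz:
            out[j] = volume[z, j]
    return out

# slice k of the toy stack is filled with the constant k
vol = np.stack([np.full((4, 4), k, dtype=float) for k in range(5)])
flat = extract_section(vol, 0.0, 2)   # an untilted plane reproduces slice 2
```

A tilted call, e.g. `extract_section(vol, 30.0, 2)`, mixes rows from neighbouring slices, which is exactly why the section's data density can differ from that of the current image's extraction regions.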
Furthermore, in the patient positioning apparatus 30 of Embodiment 1, the position/posture conversion unit 25 can generate a position/posture conversion template area 40 well suited for matching the reference image template area 33 obtained from the three-dimensional reference image 31 to a three-dimensional current image 36 whose number of tomographic images (sliced images) is smaller than that of the three-dimensional reference image 31, so that high-accuracy two-stage pattern matching that also accounts for the angular offset can be realized. The image collating apparatus 29 of Embodiment 1 includes: a three-dimensional image input unit 21 that reads in the three-dimensional reference image 31 taken at the time of treatment planning of the radiation therapy and the three-dimensional current image 36 taken at the time of treatment; and a collation processing unit 22 that collates the three-dimensional reference image 31 with the three-dimensional current image 36 and calculates a body position correction amount that brings the position and posture of the affected part in the three-dimensional current image 36 into agreement with the position and posture of the affected part in the three-dimensional reference image 31. The collation processing unit 22 includes: a primary collating unit 16 that performs primary pattern matching of the three-dimensional reference image 31 against the three-dimensional current image 36; and a secondary collating unit 17 that performs secondary pattern matching of a predetermined template area (the position/posture conversion template area 40), generated from one of the three-dimensional reference image 31 and the three-dimensional current image 36 based on the result of the primary pattern matching, against a predetermined search target area 42 generated, based on the result of the primary pattern matching, from the other of the two images. Therefore, even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, high-accuracy two-stage pattern matching can be realized.
Then, the position/posture conversion unit 25 converts the position and posture to a position and posture different from the previous one (position/posture conversion step); the secondary collating unit 17 generates the plurality of sections 41 of the position/posture conversion template area 40 in that position and posture (section generation step), scans the position/posture conversion template area 40, and for each scan position calculates the correlation values between the primary-extraction current image area 42 and the plurality of sections 41 of the position/posture conversion template area 40 (correlation value calculation step). The secondary collating unit 17 of the collation processing unit 22 selects, as the optimal solution, the positional relationship (position/posture information) between the three-dimensional reference image and the three-dimensional current image that gives the highest of the calculated correlation values (optimal solution selection step). In this way, pattern matching that brings the two three-dimensional images, the three-dimensional reference image and the three-dimensional current image, into the best agreement can be realized. The secondary pattern matching step comprises the section generation step, the correlation value calculation step, the position/posture conversion step, and the optimal solution selection step. After the pattern matching is completed, the collation processing unit 22 calculates, from the position and posture of the position/posture conversion template area 40 that gives the highest of the calculated correlation values, the body position correction amount (translation amount and rotation amount) for bringing the three-dimensional current image 36 into registration with the three-dimensional reference image 31 (body position correction amount calculation step).
Then, the collation result display unit 23 displays on the monitor screen of the computer 14 the body position correction amount, an image in which the three-dimensional current image moved by that correction amount is superimposed on the three-dimensional reference image, and so on. The collation result output unit 24 outputs the body position correction amount (translation amount, rotation amount) obtained by the collation processing unit 22 in collating the three-dimensional reference image 31 with the three-dimensional current image 36 (body position correction amount output step). Then, the treatment table control parameter calculation unit 26 converts the output values of the collation result output unit 24 (three translation axes [ΔX, ΔY, ΔZ] and three rotation axes [ΔA, ΔB, ΔC], six degrees of freedom in total) into the parameters for controlling the respective axes of the treatment table 8; that is, it calculates the parameters for controlling the axes of the treatment table 8 (treatment table control parameter calculation step). The treatment table 8 then drives the drive devices of its axes in accordance with the treatment table control parameters calculated by the treatment table control parameter calculation unit 26 (treatment table driving step). Because the image collating apparatus 29 of Embodiment 1 first performs primary pattern matching of the three-dimensional reference image 31 against the three-dimensional current image 36 and then, based on the result of the primary pattern matching, generates from the three-dimensional reference image 31 the predetermined template area for the secondary pattern matching, namely the position/posture conversion template area 40, and generates from the three-dimensional current image 36 the predetermined search target area for the secondary pattern matching, namely the primary-extraction current image area 42 containing the primary extraction regions 43, high-accuracy two-stage pattern matching can be realized even when the number of tomographic images (sliced images) of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31.
Because the image collating apparatus 29 of Embodiment 1 can realize high-accuracy two-stage pattern matching even when the number of tomographic images (sliced images) of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, the number of tomographic images of the three-dimensional current image 36 taken by the X-ray CT apparatus at the time of positioning can be reduced, and the X-ray exposure that the X-ray CT apparatus imposes on the patient at the time of positioning can be reduced. Furthermore, the image collating apparatus 29 of Embodiment 1 generates the primary-extraction current image area 42 based on the result of performing the primary pattern matching of the three-dimensional reference image 31 against the three-dimensional current image 36, and takes as the search target this area 42, which is narrower than the current image area 38; the primary pattern matching can therefore be performed over a wide range at a coarser resolution, and the secondary pattern matching can then be performed at a finer resolution using the primary-extraction current image area 42 containing the primary extraction regions 43 found in the primary pattern matching, so that the time required for pattern matching can be shortened.
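The speed benefit of restricting the secondary search to the narrower area 42 can be made concrete by counting template placements. The image, template, and stride sizes below are illustrative assumptions only, not values from the patent.

```python
import numpy as np  # imported only for parity with the other sketches

def search_positions(area_shape, tpl_shape, step):
    """Number of template placements when scanning an area with a given stride."""
    h = max(area_shape[0] - tpl_shape[0], 0) // step + 1
    w = max(area_shape[1] - tpl_shape[1], 0) // step + 1
    return h * w

full_scan   = search_positions((512, 512), (64, 64), 4)  # coarse pass over the whole image
narrow_scan = search_positions((96, 96), (64, 64), 1)    # fine pass over the region from pass 1
```

With these assumed sizes the fine pass evaluates roughly an order of magnitude fewer positions than the coarse pass, even though it uses a unit stride; this is the coarse-to-fine trade-off the paragraph describes.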
The patient positioning apparatus 30 of Embodiment 1 can, in accordance with the body position correction amount calculated by the image collating apparatus 29, bring the position and posture of the patient into agreement with the position and posture at the time of the treatment plan, and can therefore perform the positioning so that the affected part at the time of treatment comes to the beam irradiation center 12 of the radiation therapy. Furthermore, in the patient positioning apparatus 30 of Embodiment 1, the position/posture conversion unit 25 can generate a position/posture conversion template area 40 well suited for matching the reference image template area 33 obtained from the three-dimensional reference image 31 to a three-dimensional current image 36 whose number of tomographic images (sliced images) is smaller than that of the three-dimensional reference image 31, so that high-accuracy two-stage pattern matching that also accounts for the angular offset can be realized. The image collating apparatus 29 of Embodiment 1 includes: a three-dimensional image input unit 21 that reads in the three-dimensional reference image 31 taken at the time of treatment planning of the radiation therapy and the three-dimensional current image 36 taken at the time of treatment; and a collation processing unit 22 that collates the three-dimensional reference image 31 with the three-dimensional current image 36 and calculates a body position correction amount that brings the position and posture of the affected part in the three-dimensional current image 36 into agreement with that in the three-dimensional reference image 31. The collation processing unit 22 includes: a primary collating unit 16 that performs primary pattern matching of the three-dimensional reference image 31 against the three-dimensional current image 36; and a secondary collating unit 17 that performs secondary pattern matching of a predetermined template area (the position/posture conversion template area 40), generated from one of the three-dimensional reference image 31 and the three-dimensional current image 36 based on the result of the primary pattern matching, against a predetermined search target area 42 generated, based on the result of the primary pattern matching, from the other of the two images.
Therefore, even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, high-accuracy two-stage pattern matching can be realized.

The patient positioning apparatus 30 of Embodiment 1 includes: the image collating apparatus 29; and a treatment table control parameter calculation unit 26 that calculates, from the body position correction amount calculated by the image collating apparatus 29, the parameters for controlling the respective axes of the treatment table 8. The image collating apparatus 29 includes: a three-dimensional image input unit 21 that reads in the three-dimensional reference image 31 taken at the time of treatment planning of the radiation therapy and the three-dimensional current image 36 taken at the time of treatment; and a collation processing unit 22 that collates the three-dimensional reference image 31 with the three-dimensional current image 36 and calculates a body position correction amount that brings the position and posture of the affected part in the three-dimensional current image 36 into agreement with that in the three-dimensional reference image 31. The collation processing unit 22 includes: a primary collating unit 16 that performs primary pattern matching of the three-dimensional reference image 31 against the three-dimensional current image 36; and a secondary collating unit 17 that performs secondary pattern matching of a predetermined template area (the position/posture conversion template area 40), generated from one of the three-dimensional reference image 31 and the three-dimensional current image 36 based on the result of the primary pattern matching, against a predetermined search target area 42 generated, based on the result of the primary pattern matching, from the other of the two images; therefore, even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, high-accuracy positioning can be performed.
者:此外,位置姿勢變換樣板區域40的斷面41,亦可為 包含以讓其資料密度與三維現在晝像36的資料密度一樣 之方式增補過的資料者。 然後,一次核對部16產生將用於二次型樣匹配中之 現在畫像樣板區域44。一次核對部16係從例如就各個切 片晝像37a,37b, 37c進行之也包含旋轉三軸在内之搜尋的 結果,來求出相關值最高之位置姿勢變換樣板區域40的斷 面41、該時之位置姿勢變換樣板區域40的姿勢變化量、 以及與該斷面41對應之切片晝像37的抽出區域。一次核 對部16係從求出的各個切片晝像的抽出區域之中,產生出 包含有相關值最高的三維現在晝像的抽出區域之現在晝像 樣板區域44。現在晝像樣板區域44係為二維的晝像。 然後,如第12圖所示,由核對處理部22的位置姿勢 變換部25,以產生現在晝像樣板區域44之際求出的前述 姿勢變化量使三維基準晝像31全體的姿勢變換,而產生出 姿勢變換後的三維姿勢變換基準晝像45,亦即產生出姿勢 變換基準晝像區域47。第12圖係顯示本發明實施形態2 中之姿勢變換後的三維基準晝像之圖。切片晝像46a, 46b, 46c,46d,46e分別為以前述姿勢變化量使切片畫像32a, 32b, 32c, 32d,32e的姿勢變化後之切片晝像。 然後,如第13圖所示,二次核對部17使現在晝像樣 板區域44在姿勢變換後的三維姿勢變換基準晝像45,亦S 323646 201249496 It is also possible to achieve a two-stage type matching of South precision like the 31/month. According to the patient positioning device 3 of the first embodiment, the image collation device 29 and the posture correction amount calculated by the imaging collation device 29 are used to calculate the parameters for controlling the parameters of the respective axes of the treatment table 8. The image control parameter calculation unit 26 includes the three-dimensional reference image 31 taken during the treatment of the radiation therapy and the two-dimensional current image 36 taken during the treatment. The three-dimensional image input unit 21 collates the two-dimensional reference image 31 and the three-dimensional current image 36, and calculates the posture correction amount that causes the position and posture of the affected part in the three-dimensional current image 36 to coincide with the position and posture of the affected part in the three-dimensional reference image 31. The processing unit 22 is checked. The collation processing unit 22 has a primary collation unit 16 that performs one-time matching of the three-dimensional reference image 31 with respect to the three-dimensional current image 36; and performs a three-dimensional reference image 31 or three-dimensionally based on the result of the one-time pattern matching. 
The image collating method of Embodiment 1 is a method of collating the three-dimensional reference image 31 taken at the time of treatment planning of the radiation therapy with the three-dimensional current image 36 taken at the time of treatment, and includes: a primary pattern matching step of performing primary pattern matching of the three-dimensional reference image 31 against the three-dimensional current image 36; and a secondary pattern matching step of performing secondary pattern matching of a predetermined template area (the position/posture conversion template area 40), generated from one of the three-dimensional reference image 31 and the three-dimensional current image 36 based on the result of the primary pattern matching, against a predetermined search target area 42 generated, based on the result of the primary pattern matching, from the other of the two images; therefore, even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, high-accuracy two-stage pattern matching can be realized.
Embodiment 2. The two-stage pattern matching of Embodiment 2 first performs primary pattern matching of the three-dimensional reference image 31 against the three-dimensional current image 36; then, based on the result of the primary pattern matching, a template area for the secondary pattern matching (the current image template area 44) is generated from the three-dimensional current image 36, and secondary pattern matching of the current image template area 44 is performed against a posture-conversion reference image area 47 obtained by converting the position and posture of the three-dimensional reference image 31, which serves as the search target. The secondary pattern matching is thus performed in the direction opposite to the primary pattern matching. Fig. 10 is a diagram for explaining the primary pattern matching method in Embodiment 2 of the present invention, and Fig. 11 is a diagram for explaining the relationship between the reference image template area and the sliced images in the primary pattern matching method of Fig. 10. In the primary pattern matching of Embodiment 2, the primary collating unit 16 performs a search that also includes the three rotation axes and obtains the posture change amount. The current image area 38 shown in Fig. 10 is expressed as a cube containing the three sliced images 37a, 37b, and 37c. The position/posture conversion template areas 40a, 40b, and 40c, which serve as the reference image template areas of Embodiment 2, are areas whose position and posture have been converted by the position/posture conversion unit 25. The initial position and posture is the default state, in which, for example, the parameters of the three rotation axes are all zero.
The position/posture conversion template area 40a, obtained by converting the position and posture of the reference image template area, is moved in a raster-scan manner on the sliced image 37a along the scan path 39a. Similarly, the position/posture conversion template area 40b is moved in a raster-scan manner on the sliced image 37b along the scan path 39b, and the position/posture conversion template area 40c is moved in a raster-scan manner on the sliced image 37c along the scan path 39c. The scan paths 39b and 39c are drawn in simplified form so as not to complicate the figure. While the position and posture are varied, the correlation between the sliced images 37a, 37b, and 37c of the three-dimensional current image 36 and the position/posture conversion template area 40 is calculated. For example, each of the three rotation axes is varied by a predetermined change amount or change rate and the correlation is calculated; the template is then moved to the next scan position and the correlation is calculated again. As shown in Fig. 11, the primary collating unit 16 performs image collation between a section 41 of the position/posture conversion template area 40 and the sliced images 37 constituting the current image area 38. The section 41 of the position/posture conversion template area 40 is the section obtained by cutting the position/posture conversion template area with a plane parallel to the sliced images 32 of the three-dimensional reference image in its initial position and posture, and is generated from the plurality of sliced images 32 of the three-dimensional reference image 31 (section generation step).
The section 41 can be generated by, for example, the method described in Embodiment 1. That is, the data of the section 41 can be set as data cut out from the plurality of sliced images 32 constituting the three-dimensional reference image 31. The section 41 of the position/posture conversion template area 40 may also contain data interpolated so that its data density equals the data density of the three-dimensional current image 36. Then, the primary collating unit 16 generates the current image template area 44 to be used in the secondary pattern matching. From the results of the search, performed for each of the sliced images 37a, 37b, and 37c and also including the three rotation axes, the primary collating unit 16 obtains the section 41 of the position/posture conversion template area 40 having the highest correlation value, the posture change amount of the position/posture conversion template area 40 at that time, and the extraction region of the sliced image 37 corresponding to that section 41. From the extraction regions obtained for the respective sliced images, the primary collating unit 16 generates the current image template area 44, which contains the extraction region of the three-dimensional current image having the highest correlation value. The current image template area 44 is a two-dimensional image. Then, as shown in Fig. 12, the position/posture conversion unit 25 of the collation processing unit 22 converts the posture of the entire three-dimensional reference image 31 by the aforementioned posture change amount obtained when the current image template area 44 was generated, thereby producing the posture-converted three-dimensional posture-conversion reference image 45, that is, the posture-conversion reference image area 47. Fig. 12 is a view showing the three-dimensional reference image after the posture conversion in Embodiment 2 of the present invention.
The sliced images 46a, 46b, 46c, 46d, and 46e are the sliced images obtained by changing the postures of the sliced images 32a, 32b, 32c, 32d, and 32e by the aforementioned posture change amount, respectively. Then, as shown in Fig. 13, the secondary collating unit 17 moves the current image template area 44 over the posture-converted three-dimensional posture-conversion reference image 45,

that is, it moves the current image template area 44 in a raster-scan manner along the scan path 49 within the posture-conversion reference image area 47 and performs the comparison, whereby an offset consisting only of translation can be detected at high speed. Fig. 13 is a diagram for explaining the secondary pattern matching method in Embodiment 2 of the present invention.
The posture-converted reference image area -47 which is transformed by the posture is expressed as a cube including five sliced images 46a, 46b, 46c, 46d, 46e. The check execution face 48 corresponds to a posture (this posture is a posture in which the pose corresponding to the slice image of the three-dimensional current image 36 has the southernmost correlation value). The image plane, that is, the posture in the posture transformation reference image region 47 and the posture corresponding to the slice image 37 of the three-dimensional current artifact % have the same posture. The secondary collating unit 17 generates a predetermined collation execution surface 48 (check execution surface generation step) in the posture conversion reference imaging region 47 from the plurality of slice images 46 of the three-dimensional posture conversion reference image 45. The collation execution surface 48 can be generated by, for example, the method described in the first embodiment. That is, the data of the verification execution surface is set to be a plurality of sliced image cutouts constituting the three-dimensional posture transformation reference image 45. In addition, the verification execution surface 48 may also be a data included in a manner that allows the data density to be the same as that of the current image area 44. The two-stage pattern matching method of Embodiment 2 will be summarized. First, the position and posture conversion unit 25 of the collation processing unit 22 generates a position and posture conversion template area 40 (position and posture conversion template area generation step) obtained by performing position and orientation conversion from the three-dimensional reference image 31. 
Then, the primary collation unit μ of the core processing unit 22 performs a pattern matching of the position and orientation conversion template area 4 相对 with respect to the three-dimensional current image 36 (primary pattern matching step 323646 26 201249496). The sub-pattern matching is performed for each of the ten images 37 constituting the current imaging area 38, each time the position and posture change template region milk is set to change (every time the position and posture transformation step is performed), Position: The cross-section 41 of the posture changing template region 40 (the step of generating the cross-section) is subjected to the collation of the cross-section between the cross-section 41 of the posture-converting template region 40 and the sliced image 37 constituting the current imaging region. Further, the primary collating unit 16 calculates the correlation value appearing in the imaging region publication and the positional k-changing template region 40 each time the position and posture of the position-and-posture conversion template region 40 is changed (correlation value calculation step). And, the primary collating unit 16 calculates the correlation value appearing in the imaging region 38 and the position and posture transformation template region 4〇 each time the position and posture transformation template region 4〇 is scanned, and the primary pattern matching is used to generate the correlation value. The position and posture of the current image area 38 and the position and posture transformation template area 4A are the highest, and the extraction area of the template area 40 is included in the present image template area 44 (now the template image area generation step). 
Then, the collation processing unit 22 converts the posture of the entire three-dimensional reference image 31 by the position and posture change amount obtained when the current image template area 44 is generated by the position and posture conversion unit 25, and generates a three-dimensional posture transformation after the posture conversion. The reference image 45, that is, the posture transformation reference image region 47 (posture transformation reference image region generation step) is generated, and then the secondary collation portion 17 executes the current image template region 44 twice with respect to the posture transformation reference image region 47. Pattern matching (secondary pattern matching step). The quadratic pattern matching generates a collation execution surface 48 in the collation execution surface generation step, and then performs the collation execution surfaces 48 and 27 generated in the collation execution surface generation step. 323646 201249496 Now the image of the template area 44 is checked. At the time of this image check, the collation execution surface 48 and the current template area 44 are calculated while the current template area 44 is being translated (not rotated). Correlation value (correlation value calculation step in the quadratic pattern matching 'reconciliation processing unit 22's secondary collation unit Π The correlation value is the calculated correlation value The positional relationship (position and posture information) of the highest three-dimensional posture transformation reference image 45 and the current jaw image region 44 is selected as the optimal solution (optimal deselection step), and thus can be realized by two-stage matching. The three-dimensional reference image 31 is matched with the two-dimensional image of the three-dimensional current image 36. 
The two-stage pattern matching method of the second embodiment includes the position and orientation transformation template region generation step, the sub-pattern. The matching step, the posture transformation reference image area generation step, and the second miscellaneous peaks.
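As an illustration only, and not part of the patent disclosure, the translation-only secondary collation described above can be sketched as an exhaustive normalized cross-correlation search: a two-dimensional template region is raster-scanned over a collation plane, and the translation with the highest correlation value is kept. All function names below are hypothetical.

```python
def ncc(a, b):
    # Normalized cross-correlation between two equal-size 2-D patches.
    n = len(a) * len(a[0])
    ma = sum(map(sum, a)) / n
    mb = sum(map(sum, b)) / n
    num = sa = sb = 0.0
    for i in range(len(a)):
        for j in range(len(a[0])):
            da, db = a[i][j] - ma, b[i][j] - mb
            num += da * db
            sa += da * da
            sb += db * db
    if sa == 0.0 or sb == 0.0:
        return 0.0  # flat patch: no structure to correlate
    return num / (sa ** 0.5 * sb ** 0.5)

def best_translation(plane, template):
    """Raster-scan the template over the collation plane and return the
    (row, col) shift with the highest correlation value."""
    th, tw = len(template), len(template[0])
    best_score, best_shift = -2.0, None
    for r in range(len(plane) - th + 1):
        for c in range(len(plane[0]) - tw + 1):
            patch = [row[c:c + tw] for row in plane[r:r + th]]
            score = ncc(patch, template)
            if score > best_score:
                best_score, best_shift = score, (r, c)
    return best_shift
```

Because rotation has already been resolved in the primary stage, only translations need to be scored here, which is what makes the secondary stage fast.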

The collation result display unit 23 displays the body position correction amount (translation amount, rotation amount) and superimposes the two-dimensional present image, moved by that body position correction amount, on the corresponding reference image plane. Further, the collation result output unit 24 outputs the body position correction amount (translation amount, rotation amount) obtained when the collation processing unit 22 collated the three-dimensional reference image 31 with the three-dimensional present image 36 (body position correction amount output step). Then, the treatment table control parameter calculation unit 26 converts the output values of the collation result output unit 24 (three translation axes [ΔX, ΔY, ΔZ] and three rotation axes [ΔA, ΔB, ΔC], six degrees of freedom in total) into parameters for controlling each axis of the treatment table 8; that is, it calculates the parameters for controlling each axis of the treatment table 8 (treatment table parameter calculation step). Then, the treatment table 8 drives the drive devices of its axes in accordance with the treatment table control parameters calculated by the treatment table control parameter calculation unit 26 (treatment table driving step).

Because the image collating apparatus 29 of Embodiment 2 performs primary pattern matching, including the three rotation axes, of the position/posture-transformed template region 40 of the three-dimensional reference image 31 against the three-dimensional present image 36, and then generates the template region for secondary pattern matching, namely the present image template region 44, from the three-dimensional present image 36 based on the result of the primary pattern matching, high-accuracy two-stage pattern matching can be realized even when the number of tomographic images (slice images) of the three-dimensional present image 36 is smaller than that of the three-dimensional reference image 31.

Furthermore, the image collating apparatus 29 of Embodiment 2 generates the three-dimensional posture-converted reference image 45 (the posture-converted three-dimensional reference image) from the three-dimensional reference image 31, that is, it generates the posture-converted reference image region 47; direct pattern matching can therefore be realized by translating the two-dimensional present image template region 44 relative to the posture-converted reference image region 47 without any rotational movement. Moreover, in the secondary pattern matching only the correlation value for each translation needs to be computed, so the secondary pattern matching is faster than computing a correlation value for every combination of rotational and translational movement.
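The patent does not spell out how the six-degree-of-freedom output [ΔX, ΔY, ΔZ, ΔA, ΔB, ΔC] maps onto the couch axes, since that depends on the treatment table geometry. As a hedged illustration, the sketch below assumes a simplified table with a single rotation axis (yaw about the isocenter at the origin) followed by a translation: the couch correction is then the inverse of the measured rigid offset. The function names are hypothetical.

```python
import math

def rot_z(theta):
    # Rotation matrix for a yaw of `theta` radians about the z axis.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    # 3x3 matrix times 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def couch_correction(point, translation, yaw):
    """Invert a measured rigid offset (yaw about the isocenter, then a
    translation): subtract the translation, then rotate back.  The result
    is the planned position that the couch move should restore."""
    shifted = [p - t for p, t in zip(point, translation)]
    return mat_vec(rot_z(-yaw), shifted)
```

Applying `couch_correction` to a displaced marker point recovers its planned coordinates, which is the condition any table-specific six-axis parameter conversion has to satisfy.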
Embodiment 3
Embodiment 3 differs from Embodiments 1 and 2 in that a human body database (atlas model) is used to generate the reference image template region 33 used for the primary pattern matching in Embodiment 1, or the reference image template region 33 serving as the basis of the position/posture-transformed template region 40 in Embodiment 2. Fig. 14 is a diagram showing the configuration of the image collating apparatus and the patient positioning apparatus according to Embodiment 3 of the present invention. The image collating apparatus 29 of Embodiment 3 differs from the image collating apparatuses 29 of Embodiments 1 and 2 in that it has a human body database input unit 50 and an average template region generating unit 51. The patient positioning apparatus 30 of Embodiment 3 has the image collating apparatus 29 and the treatment table control parameter calculation unit 26.

The human body database input unit 50 acquires the human body database (atlas model) from a memory device such as a database apparatus. The average template region generating unit 51 cuts out an average template region 54 from the organ portion of the human body database corresponding to the affected part 5, 11 of the patient 4, 10. The reference template region generating unit 18 of the collation processing unit 22 automatically generates the reference image template region 33 by pattern-matching this average template region 54 against the three-dimensional reference image 31 (reference image template region generation step).

The reference image template region 33 obtained in this way is used to execute the two-stage pattern matching of Embodiment 1 or the two-stage pattern matching of Embodiment 2. In this way, two-stage pattern matching can be realized even when no information indicating the affected part (such as the shape of the affected part) has been prepared in advance on the three-dimensional reference image.
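As a sketch of this idea only (the patent does not disclose the matching criterion used by the reference template region generating unit), automatically cutting out a reference template with an atlas-derived average template can be done with any exhaustive template search; here a sum-of-absolute-differences score on 2-D arrays is used, and all names are hypothetical.

```python
def locate_by_sad(image, template):
    """Exhaustively search for the template position minimizing the sum of
    absolute differences (SAD); returns the best (row, col)."""
    th, tw = len(template), len(template[0])
    best_score, best_pos = float("inf"), None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            s = sum(abs(image[r + i][c + j] - template[i][j])
                    for i in range(th) for j in range(tw))
            if s < best_score:
                best_score, best_pos = s, (r, c)
    return best_pos

def cut_reference_template(reference, atlas_template):
    """Locate the atlas-derived average template in the reference image and
    cut out the matching region as the reference image template region."""
    r, c = locate_by_sad(reference, atlas_template)
    th, tw = len(atlas_template), len(atlas_template[0])
    return [row[c:c + tw] for row in reference[r:r + th]]
```

In Embodiment 3 the cut-out region then plays the role of the reference image template region 33 in either two-stage matching scheme.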

In addition, a configuration is also conceivable in which the average template region generating unit 51 cuts out two-dimensional average template regions from the organ portion of the human body database corresponding to the affected part 5, 11 of the patient 4, 10. In the case of two-dimensional average template regions 54, a plurality of two-dimensional average template regions can be generated and then output together to the collation processing unit 22. The reference template region generating unit 18 of the collation processing unit 22 automatically generates the reference image template region 33 by pattern-matching the plurality of two-dimensional average template regions against the three-dimensional reference image 31.

[Brief Description of the Drawings]

Fig. 1 is a diagram showing the configuration of an image collating apparatus and a patient positioning apparatus according to Embodiment 1 of the present invention.
Fig. 2 is a diagram showing the overall configuration of the equipment related to the image collating apparatus and the patient positioning apparatus of the present invention.
Fig. 3 is a diagram showing the three-dimensional reference image and the reference image template region in Embodiment 1 of the present invention.
Fig. 4 is a diagram showing the three-dimensional present image in Embodiment 1 of the present invention.
Fig. 5 is a diagram for explaining the primary pattern matching method in Embodiment 1 of the present invention.
Fig. 6 is a diagram for explaining the relationship between the reference image template region and the slice images in the primary pattern matching method of Fig. 5.
Fig. 7 is a diagram showing the primary extracted region of a slice image extracted by the primary pattern matching method in Embodiment 1 of the present invention.
Fig. 8 is a diagram for explaining the secondary pattern matching method in Embodiment 1 of the present invention.
Fig. 9 is a diagram for explaining the relationship between the reference image template region and the slice images in the secondary pattern matching method of Fig. 8.
Fig. 10 is a diagram for explaining the primary pattern matching method in Embodiment 2 of the present invention.
Fig. 11 is a diagram for explaining the relationship between the reference image template region and the slice images in the primary pattern matching method of Fig. 10.
Fig. 12 is a diagram showing the posture-converted three-dimensional reference image in Embodiment 2 of the present invention.
Fig. 13 is a diagram for explaining the secondary pattern matching method in Embodiment 2 of the present invention.
Fig. 14 is a diagram showing the configuration of the image collating apparatus and the patient positioning apparatus according to Embodiment 3 of the present invention.

[Description of Reference Numerals]
1 CT simulation room
2, 7 CT gantry
3, 9 top plate
4, 10 patient
5, 11 affected part
6 treatment room
8 rotating treatment table
12 beam irradiation center
13 irradiation head
14 computer

16 primary collating unit
17 secondary collating unit
18 reference template region generating unit
21 three-dimensional image input unit
22 collation processing unit
23 collation result display unit
24 collation result output unit
25 position/posture transformation unit
26 treatment table control parameter calculation unit
29 image collating apparatus
30 patient positioning apparatus
31 three-dimensional reference image
32a-32e slice images
33, 33a, 33b, 33c reference image template region
34 circumscribed quadrangle
35 ROI (region of interest)
36 three-dimensional present image
37a-37c slice images
38 present image region
39a-39c scanning paths
40, 40a, 40b, 40c position/posture-transformed template region
41 cross section
42 primary extracted present image region
43 primary extracted region
44 present image template region
45 three-dimensional posture-converted reference image
46a-46e slice images
47 posture-converted reference image region
48 collation execution plane
49 scanning path
50 human body database input unit
51 average template region generating unit
53 slice image
55 slice image

Claims (1)

1. An image collating apparatus comprising:
a three-dimensional image input unit which reads in a three-dimensional reference image captured at the time of treatment planning for radiation therapy and a three-dimensional present image captured at the time of treatment, respectively; and
a collation processing unit which collates the three-dimensional reference image with the three-dimensional present image and calculates a body position correction amount that brings the position and posture of an affected part in the three-dimensional present image into agreement with the position and posture of the affected part in the three-dimensional reference image,
wherein the collation processing unit comprises:
a primary collating unit which performs primary pattern matching of the three-dimensional reference image against the three-dimensional present image; and
a secondary collating unit which performs secondary pattern matching of a predetermined template region, generated based on the result of the primary pattern matching from one of the three-dimensional reference image and the three-dimensional present image, against a predetermined search target region generated, based on the result of the primary pattern matching, from the other of the three-dimensional reference image and the three-dimensional present image, different from the one from which the predetermined template region was generated.
2. The image collating apparatus according to claim 1, wherein the collation processing unit comprises a reference template region generating unit which generates a reference image template region as a three-dimensional region from the three-dimensional reference image, based on affected part information prepared in the three-dimensional reference image.
3. The image collating apparatus according to claim 1, comprising:
a human body database input unit which acquires a human body database from a database apparatus; and
an average template region generating unit which generates an average template region from the organ portion of the human body database corresponding to the affected part of the patient,
wherein the collation processing unit comprises a reference template region generating unit which performs pattern matching of the average template region against the three-dimensional reference image and, based on the result of that pattern matching, generates a reference image template region as a three-dimensional region from the three-dimensional reference image.
4. The image collating apparatus according to claim 2 or 3, wherein, in the primary pattern matching, the primary collating unit performs pattern matching of the reference image template region against the three-dimensional present image.
5. The image collating apparatus according to claim 4, wherein the primary collating unit generates, from the three-dimensional present image, a primary extracted present image region as the search target region, containing the region whose correlation value with the reference image template region is highest.
6. The image collating apparatus according to claim 5, wherein the collation processing unit comprises a position/posture transformation unit which transforms the position and posture of a three-dimensional image,
the position/posture transformation unit generates a position/posture-transformed template region in which the position and posture of the reference image template region are transformed into a predetermined position and posture, and
in the secondary pattern matching, the secondary collating unit performs pattern matching of the position/posture-transformed template region, as the predetermined template region, against the primary extracted present image region.
7. The image collating apparatus according to claim 6, wherein the secondary collating unit generates a cross section of the position/posture-transformed template region and performs pattern matching between the primary extracted present image region and the cross section.
8. The image collating apparatus according to claim 2 or 3, wherein the collation processing unit comprises a position/posture transformation unit which transforms the position and posture of a three-dimensional image,
the position/posture transformation unit generates a position/posture-transformed template region in which the position and posture of the reference image template region are transformed into a predetermined position and posture, and
in the primary pattern matching, the primary collating unit performs pattern matching of the position/posture-transformed template region against the three-dimensional present image.
9. The image collating apparatus according to claim 8, wherein the primary collating unit generates cross sections of the position/posture-transformed template region, performs pattern matching between the three-dimensional present image and the cross sections, and executes determination of a highly correlated cross section having the highest correlation value among the plurality of cross sections, calculation of the posture change amount of the position/posture-transformed template region, and extraction of the extracted region in the three-dimensional present image corresponding to the highly correlated cross section.
10. The image collating apparatus according to claim 9, wherein the primary collating unit generates, as the predetermined template region, a present image template region containing the extracted region extracted in the primary pattern matching,
the position/posture transformation unit generates a posture-converted reference image region of a three-dimensional posture-converted reference image in which the posture of the three-dimensional reference image is transformed by exactly the posture change amount corresponding to the present image template region, and
in the secondary pattern matching, the secondary collating unit performs pattern matching of the present image template region against the posture-converted reference image region as the search target region.
11. The image collating apparatus according to claim 10, wherein the secondary collating unit generates a collation execution plane, which is a cross section belonging to the posture-converted reference image region, and performs pattern matching between the present image template region and the collation execution plane.
12. A patient positioning apparatus comprising:
the image collating apparatus according to claim 7 or 11; and
a treatment table control parameter calculation unit which calculates parameters for controlling each axis of a treatment table based on the body position correction amount calculated by the image collating apparatus.
13. An image collating method for collating a three-dimensional reference image captured at the time of treatment planning for radiation therapy with a three-dimensional present image captured at the time of treatment, comprising:
a primary pattern matching step of performing primary pattern matching of the three-dimensional reference image against the three-dimensional present image; and
a secondary pattern matching step of performing secondary pattern matching of a predetermined template region, generated based on the result of the primary pattern matching from one of the three-dimensional reference image and the three-dimensional present image, against a predetermined search target region generated, based on the result of the primary pattern matching, from the other of the three-dimensional reference image and the three-dimensional present image, different from the one from which the predetermined template region was generated.
14. The image collating method according to claim 13, comprising a reference image template region generation step of generating a reference image template region as a three-dimensional region from the three-dimensional reference image, wherein the primary pattern matching step performs pattern matching of the reference image template region against the three-dimensional present image.
15. A patient positioning apparatus comprising:
the image collating apparatus according to claim 4; and
a treatment table control parameter calculation unit which calculates parameters for controlling each axis of a treatment table based on the body position correction amount calculated by the image collating apparatus.
16. A patient positioning apparatus comprising:
the image collating apparatus according to claim 8; and
a treatment table control parameter calculation unit which calculates parameters for controlling each axis of a treatment table based on the body position correction amount calculated by the image collating apparatus.
TW100141223A 2011-06-10 2011-11-11 Image checking device, patient positioning device and image checking method TWI425963B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011130074A JP5693388B2 (en) 2011-06-10 2011-06-10 Image collation device, patient positioning device, and image collation method

Publications (2)

Publication Number Publication Date
TW201249496A true TW201249496A (en) 2012-12-16
TWI425963B TWI425963B (en) 2014-02-11

Family

ID=47298678

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100141223A TWI425963B (en) 2011-06-10 2011-11-11 Image checking device, patient positioning device and image checking method

Country Status (3)

Country Link
JP (1) JP5693388B2 (en)
CN (1) CN102814006B (en)
TW (1) TWI425963B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI573565B * 2013-01-04 2017-03-11 shu-long Wang Cone-type beam tomography equipment and its positioning method
TWI762402B (en) * 2020-08-15 2022-04-21 大陸商中硼(廈門)醫療器械有限公司 Radiation irradiation system and control method thereof
TWI861420B (en) * 2020-08-04 2024-11-11 日商東芝能源系統股份有限公司 Medical image processing device, treatment system, medical image processing method, and medical image processing program

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10092251B2 (en) * 2013-03-15 2018-10-09 Varian Medical Systems, Inc. Prospective evaluation of tumor visibility for IGRT using templates generated from planning CT and contours
JP6192107B2 (en) * 2013-12-10 2017-09-06 Kddi株式会社 Video instruction method, system, terminal, and program capable of superimposing instruction image on photographing moving image
CN104135609B (en) * 2014-06-27 2018-02-23 小米科技有限责任公司 Auxiliary photo-taking method, apparatus and terminal
JP6338965B2 (en) * 2014-08-08 2018-06-06 キヤノンメディカルシステムズ株式会社 Medical apparatus and ultrasonic diagnostic apparatus
JP6452987B2 (en) * 2014-08-13 2019-01-16 キヤノンメディカルシステムズ株式会社 Radiation therapy system
US9878177B2 (en) * 2015-01-28 2018-01-30 Elekta Ab (Publ) Three dimensional localization and tracking for adaptive radiation therapy
JP6164662B2 (en) * 2015-11-18 2017-07-19 みずほ情報総研株式会社 Treatment support system, operation method of treatment support system, and treatment support program
JP2018042831A (en) 2016-09-15 2018-03-22 株式会社東芝 Medical image processing apparatus, treatment system, and medical image processing program
JP6869086B2 (en) * 2017-04-20 2021-05-12 富士フイルム株式会社 Alignment device, alignment method and alignment program
CN109859213B (en) * 2019-01-28 2021-10-12 艾瑞迈迪科技石家庄有限公司 Method and device for detecting bone key points in joint replacement surgery

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5784431A (en) * 1996-10-29 1998-07-21 University Of Pittsburgh Of The Commonwealth System Of Higher Education Apparatus for matching X-ray images with reference images
JP3748433B2 (en) * 2003-03-05 2006-02-22 株式会社日立製作所 Bed positioning device and positioning method thereof
JP2007014435A (en) * 2005-07-06 2007-01-25 Fujifilm Holdings Corp Image processing device, method and program
JP4310319B2 (en) * 2006-03-10 2009-08-05 三菱重工業株式会社 Radiotherapy apparatus control apparatus and radiation irradiation method
JP4425879B2 (en) * 2006-05-01 2010-03-03 株式会社日立製作所 Bed positioning apparatus, positioning method therefor, and particle beam therapy apparatus
JP4956458B2 (en) * 2008-02-13 2012-06-20 三菱電機株式会社 Patient positioning device and method
JP5233374B2 (en) * 2008-04-04 2013-07-10 大日本印刷株式会社 Medical image processing system
JP2010069099A (en) * 2008-09-19 2010-04-02 Toshiba Corp Image processing apparatus and x-ray computed tomography apparatus
EP2433262B1 (en) * 2009-05-18 2016-07-27 Koninklijke Philips N.V. Marker-free tracking registration and calibration for em-tracked endoscopic system
TWI381828B (en) * 2009-09-01 2013-01-11 長庚大學 Method of making artificial implants

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI573565B (en) * 2013-01-04 2017-03-11 shu-long Wang Cone-type beam tomography equipment and its positioning method
TWI861420B (en) * 2020-08-04 2024-11-11 日商東芝能源系統股份有限公司 Medical image processing device, treatment system, medical image processing method, and medical image processing program
TWI762402B (en) * 2020-08-15 2022-04-21 大陸商中硼(廈門)醫療器械有限公司 Radiation irradiation system and control method thereof

Also Published As

Publication number Publication date
JP2012254243A (en) 2012-12-27
TWI425963B (en) 2014-02-11
JP5693388B2 (en) 2015-04-01
CN102814006A (en) 2012-12-12
CN102814006B (en) 2015-05-06

Similar Documents

Publication Publication Date Title
TW201249496A (en) Image collating apparatus, patient positioning apparatus, and image collating method
JP4271941B2 (en) Method for enhancing a tomographic projection image of a patient
EP2056255B1 (en) Method for reconstruction of a three-dimensional model of an osteo-articular structure
JP2966089B2 (en) Interactive device for local surgery inside heterogeneous tissue
CN111918697B (en) Medical image processing device, treatment system and storage medium
JP2019526124A (en) Method, apparatus and system for reconstructing an image of a three-dimensional surface
CN103315746A (en) Method and system for automatic patient identification
JP7444387B2 (en) Medical image processing devices, medical image processing programs, medical devices, and treatment systems
KR102619994B1 (en) Biomedical image processing devices, storage media, biomedical devices, and treatment systems
CN112967379B (en) Three-dimensional medical image reconstruction method for generating confrontation network based on perception consistency
Zhang et al. Long-length tomosynthesis and 3D-2D registration for intraoperative assessment of spine instrumentation
Sismono et al. 3D-localisation of cochlear implant electrode contacts in relation to anatomical structures from in vivo cone-beam computed tomography
CN112989081B (en) Method and device for constructing digital reconstruction image library
Bifulco et al. Estimation of out-of-plane vertebra rotations on radiographic projections using CT data: a simulation study
US8693763B2 (en) Method and device for determining preferred alignments of a treatment beam generator
CN120769730A (en) Systems for validation procedures
JP2021041090A (en) Medical image processing device, x-ray image processing system and generation method of learning model
Sadowsky Image registration and hybrid volume reconstruction of bone anatomy using a statistical shape atlas
EP4641490A2 (en) Methods and systems for dynamic integration of computed tomography to interventional x-ray images
Monteiro Deep Learning Approach for the Segmentation of Spinal Structures in Ultrasound Images
CN121120831A (en) Cone beam X-ray perspective deformation correction system and correction method thereof
CN118696336A (en) Systems and methods for identifying features in an image of a subject
Galvin The CT-simulator and the Simulator-CT: Advantages, Disadvantages, and Future Developments
CN118696335A (en) Systems and methods for identifying features in an image of a subject
CN118541723A (en) Systems and methods for identifying features in an image of a subject

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees