TW202004572A - Learning device, learning method, program, learned model, and bone metastasis detection device
- Publication number
- TW202004572A (application number TW108117252A)
- Authority
- TW
- Taiwan
- Prior art keywords
- learning
- scintillation
- bone metastasis
- patch image
- area
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01T—MEASUREMENT OF NUCLEAR OR X-RADIATION
- G01T1/00—Measuring X-radiation, gamma radiation, corpuscular radiation, or cosmic radiation
- G01T1/16—Measuring radiation intensity
- G01T1/161—Applications in the field of nuclear medicine, e.g. in vivo counting
- G01T1/164—Scintigraphy
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Optics & Photonics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- High Energy & Nuclear Physics (AREA)
- Medical Informatics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Nuclear Medicine (AREA)
Abstract
Description
The present invention relates to a technique for detecting bone metastasis regions from a subject's scintigram.
As related work on detecting bone metastasis in bone scintigrams, the study by Kawakami et al. (Non-Patent Document 1) can be cited. In Non-Patent Document 1, after segmentation (classification of the whole-body skeleton), hot spots are detected in each of eight skeletal regions using information such as the mean and standard deviation. In Non-Patent Document 2, the CAD (Computer-Aided Diagnosis) system "BONENAVI version 2.1.7" (FUJIFILM RI Pharma Co., Ltd., Tokyo, Japan) was used to analyze whole-body bone scintigrams of 225 prostate-cancer patients (124 cases with bone metastasis, 101 normal cases) by artificial neural networks (ANN: Artificial Neural Networks), and the analysis results were reported. The BONENAVI system outputs two imaging markers: an ANN value and a BSI (Bone Scan Index). The ANN value indicates the likelihood of bone metastasis as a continuous value from 0 to 1, where "0" means no possibility of bone metastasis and "1" means a strong suspicion of bone metastasis. The BSI indicates the metastatic tumor burden (the proportion of bone-metastasis regions relative to the whole skeleton). The reported detection performance was a sensitivity of 82% (102/124) and a specificity of 83% (84/101).
[Non-Patent Document 1] Kawakami et al., "Introduction of the bone scintigraphy diagnostic support software 'BONENAVI'," Journal of the Nuclear Medicine Section, 63(0): 41-51, 2011
[Non-Patent Document 2] M. Koizumi, K. Motegi, M. Koyama, T. Terauchi, T. Yuasa, and J. Yonese, "Diagnostic performance of a computer-assisted diagnosis system for bone scintigraphy of newly developed skeletal metastasis in prostate cancer patients: search for low-sensitivity subgroups," Annals of Nuclear Medicine, 31(7): 521-528, 2017
A support system for detecting bone metastasis on bone scintigrams mainly comprises recognition processing of anatomical structures and enhancement (feature extraction) or detection processing of abnormal sites. Finally, these processes are aggregated to identify sites suspected of bone metastasis, and the result is presented to the physician.
In the conventional technique described above, bone metastasis regions are displayed on the subject's scintigram, but it has been observed that non-bone-metastasis regions having density values similar to those of bone metastasis regions (non-malignant lesion regions such as fractures and inflammation) are erroneously detected as bone metastasis regions (so-called "over-picking"), which lowers the detection accuracy. The present invention therefore provides a technique that maintains the detection rate of bone metastasis regions while reducing over-picking.
One aspect of the present invention is a learning device that generates a neural network model for detecting bone metastasis from a subject's scintigram, comprising: an input unit that receives, as training data, scintigrams of a plurality of subjects together with ground-truth labels of the bone metastasis regions and non-bone-metastasis regions in each scintigram; and a learning unit that uses the training data to train the neural network model for detecting bone metastasis regions in bone scintigrams.
Another aspect of the present invention is a learning method for generating a neural network model for detecting bone metastasis regions from a subject's scintigram, comprising the steps of: inputting, as training data, scintigrams of a plurality of subjects together with ground-truth labels of the bone metastasis regions and non-bone-metastasis regions in each scintigram; and training, using the training data, the neural network model for detecting bone metastasis regions in bone scintigrams.
Yet another aspect of the present invention is a program product for generating a neural network model for detecting bone metastasis regions from a subject's scintigram, which executes the steps of: inputting, as training data, scintigrams of a plurality of subjects together with ground-truth labels of the bone metastasis regions and non-bone-metastasis regions in each scintigram; and training, using the training data, the neural network model for detecting bone metastasis regions in bone scintigrams.
Yet another aspect of the present invention is a memory medium storing a trained model for causing a computer to function so as to detect bone metastasis regions from a subject's scintigram. The trained model is constituted by a neural network having convolution layers and deconvolution layers, including a structure in which feature maps obtained by the convolution layers are input to the deconvolution layers. The trained model has been trained using, as training data, scintigrams of a plurality of subjects together with ground-truth labels of the bone metastasis regions and non-bone-metastasis regions in each scintigram, and causes a computer to function so as to detect bone metastasis regions from a subject's scintigram input to the neural network.
In this way, by training the neural network model using ground-truth labels of bone metastasis regions and non-bone-metastasis regions, the model can be used to appropriately detect bone metastasis regions from a subject's scintigram.
1, 2, 3, 4‧‧‧learning device
10, 21, 40, 51‧‧‧input unit
11, 22, 41, 52‧‧‧control unit
12, 23, 44, 55‧‧‧density normalization processing unit
13, 24, 45, 56‧‧‧patch image creation unit
14‧‧‧patch image inversion unit
15, 46‧‧‧learning unit
16, 26, 47, 58‧‧‧memory unit
17, 27, 48, 59‧‧‧output unit
18‧‧‧training data analysis unit
19‧‧‧patch image selection unit
20, 50‧‧‧bone metastasis detection device
25, 57‧‧‧inference unit
42, 53‧‧‧image inversion unit
43, 54‧‧‧front/rear image registration unit
A‧‧‧patch image
A'‧‧‧patch image
B‧‧‧patch image
B'‧‧‧patch image
C‧‧‧patch image
D‧‧‧patch image
R‧‧‧region
FIG. 1 is a diagram showing the configuration of the learning device according to the first embodiment.
FIG. 2A is a diagram showing a subject's scintigram and its ground-truth labels.
FIG. 2B is a diagram showing an example of patch images.
FIG. 2C is a diagram showing another example of patch images.
FIG. 3 is a diagram showing the configuration of the neural network model.
FIG. 4 is a diagram showing the configuration of the bone metastasis detection device according to the first embodiment.
FIG. 5 is a diagram showing an example of patch images cut out from a scintigram.
FIG. 6 is a diagram showing the operation of the learning device according to the first embodiment.
FIG. 7 is a diagram showing the operation of the bone metastasis detection device according to the first embodiment.
FIG. 8 is a diagram showing the configuration of the learning device according to the second embodiment.
FIG. 9 is a diagram showing the configuration of a learning device according to a modification.
FIG. 10 is an FROC (free-response receiver operating characteristic) curve showing the relationship between sensitivity and FP(P) obtained experimentally.
FIG. 11 is a diagram showing the configuration of the learning device according to the third embodiment.
FIG. 12 is a diagram showing scintigrams of a subject input to the learning device of the third embodiment.
FIG. 13 is a diagram showing the configuration of the neural network model used in the learning device of the third embodiment.
FIG. 14 is a diagram showing the configuration of the bone metastasis detection device according to the third embodiment.
FIG. 15 is a diagram showing the operation of the learning device according to the third embodiment.
Hereinafter, the learning device and the bone metastasis detection device according to embodiments of the present invention will be described with reference to the drawings. In the following description, numerical values given as conditions such as sizes are merely examples of preferred aspects and are not intended to limit the present invention.
The learning device of an embodiment generates a neural network model for detecting bone metastasis from a subject's scintigram, and comprises: an input unit that receives, as training data, scintigrams of a plurality of subjects together with ground-truth labels of the bone metastasis regions and non-bone-metastasis regions in each scintigram; and a learning unit that uses the training data to train the neural network model for detecting bone metastasis regions in bone scintigrams. Here, a non-bone-metastasis region is a region whose density values resemble those of a bone metastasis region but in which no bone metastasis has occurred. Non-bone-metastasis regions include non-malignant lesion regions (fractures, inflammation, and the like). A bone metastasis region is also called an "abnormal accumulation".
In this way, by training the neural network model using ground-truth labels of bone metastasis regions and non-bone-metastasis regions, the model can be used to appropriately detect bone metastasis regions from a subject's scintigram.
The learning device of an embodiment may further comprise a patch image creation unit that cuts out regions in which the subject's bones are imaged from the scintigrams of the plurality of subjects to produce patch images, and the learning unit may perform training using the patch images and their corresponding ground-truth labels as training data.
In a neural network model, the memory size required for training increases with the image size. With the configuration of the embodiment, the memory size required for training can be reduced by producing patch images cut out from the regions in which the subject's bones are imaged and using these patch images for training. Moreover, since the appearance of a bone metastasis region does not depend strongly on the shape of the organ, training is possible even when the whole organ is not imaged. Patch images are therefore suitable as training data for a neural network model that detects bone metastasis regions.
In the learning device of an embodiment, the patch image creation unit may scan a window of a predetermined size over the subject's scintigram and, when the subject's bones are imaged within the window, cut out the window region as a patch image. By scanning the window and cutting out patch images in this way, patch images are cut out from the subject's scintigram without omission.
The learning device of an embodiment may further comprise a training data analysis unit that determines, among the patch images created by the patch image creation unit, the composition ratio between patch images containing a bone metastasis region or a non-bone-metastasis region and patch images containing neither.
The inventors performed inference using models trained with various training data, and investigated the conditions under which a neural network model capable of appropriately detecting bone metastasis regions can be generated. As a result, they found that the composition of the patch images constituting the training data (the ratio of patch images containing a bone metastasis region or a non-bone-metastasis region to patch images containing neither) is related to the accuracy of the neural network model. According to the embodiment, by analyzing the training data used for learning and displaying its composition, the training data can be adjusted so that appropriate learning is possible.
The learning device of an embodiment may further comprise a patch image selection unit that extracts, from the patch images created by the patch image creation unit, patch images containing neither a bone metastasis region nor a non-bone-metastasis region, so that the composition ratio determined by the training data analysis unit falls within a predetermined range.
If the composition ratio of patch images containing a bone metastasis region is too small, the accuracy of the model obtained by training may deteriorate; the configuration of the embodiment therefore increases the composition ratio of patch images containing a bone metastasis region.
The learning device of an embodiment may further comprise a patch image inversion unit that flips at least some of the patch images created by the patch image creation unit horizontally or vertically.
Flipping patch images in this way increases the variety of the training data, so that a more accurate model can be produced by training. When flipping patch images, either the flipped patch images alone, or both the flipped and the original patch images, may be used as training data.
In the learning device of an embodiment, the neural network may have an encoder-decoder structure, including a structure in which feature maps obtained by the encoder are input to the decoder.
With this configuration, the encoder captures the global features of the image, and the feature maps obtained during encoding are input to the decoder, whereby local features are also learned. By capturing the spatial extent of a bone metastasis site, the position information of the site can be determined appropriately.
In the bone metastasis detection device of an embodiment, the neural network may have a structure in which a first network part having an encoder-decoder structure and a second network part having an encoder-decoder structure are combined; the input unit receives, for each subject, scintigrams captured from the front and from the rear together with their ground-truth labels, and the learning unit performs training by inputting the scintigram captured from the front of the subject to the input layer of the first network part and the scintigram captured from the rear of the subject to the input layer of the second network part.
Also, in the bone metastasis detection device of an embodiment, the neural network may have a structure in which a first network part having an encoder-decoder structure and a second network part having an encoder-decoder structure are combined; the input unit receives, for each subject, scintigrams captured from the front and from the rear together with their ground-truth labels, and the learning unit performs training by inputting, to the input layer of the first network part, a first patch image cut out from the scintigram captured from the front of the subject, and inputting, to the input layer of the second network part, a second patch image corresponding to the first patch image and cut out from the scintigram captured from the rear of the subject.
By processing the scintigram captured from the front and the scintigram captured from the rear simultaneously rather than independently, using a neural network in which two network parts each having an encoder-decoder structure are combined, a neural network model with improved accuracy in discriminating bone metastasis regions from non-bone-metastasis regions can be produced.
In the bone metastasis detection device of an embodiment, the non-bone-metastasis regions may include non-malignant lesion regions; the input unit accepts, as training data, scintigrams of a plurality of subjects bearing ground-truth labels for each of the bone metastasis regions and the non-malignant lesion regions, and the learning unit uses the training data to train a neural network model that detects each of the bone metastasis regions and the non-malignant lesion regions.
The bone metastasis detection device of an embodiment comprises: a memory unit storing a trained neural network model obtained by the above learning device; an input unit that receives a subject's scintigram; a patch image creation unit that produces patch images from the scintigram; an inference unit that inputs the patch images to the input layer of the trained model read out from the memory unit and determines the bone metastasis regions contained in the patch images; and an output unit that outputs data representing the bone metastasis regions. With this configuration, the detection rate of bone metastasis regions can be maintained while over-picking is reduced.
A program product of an embodiment may be one for detecting bone metastasis regions from a subject's scintigram, causing a computer to execute the steps of: inputting the subject's scintigram; producing patch images from the scintigram; reading a trained model from a memory unit storing the trained neural network model obtained by the above learning device, inputting the patch images to the input layer of the trained model, and determining the bone metastasis regions contained in the patch images; and outputting data representing the bone metastasis regions.
A program product of an embodiment is one for detecting bone metastasis regions from a subject's scintigram, causing a computer to execute the steps of: inputting two scintigrams of the subject captured from the front and from the rear; flipping one of the two scintigrams horizontally; reading a trained model from a memory unit storing a trained model generated in advance by learning using training data, inputting the two scintigrams to the input layer of the trained model, and determining the bone metastasis regions contained in the scintigrams; and outputting data representing the bone metastasis regions. By thus flipping one of the two scintigrams captured from the front and the rear so that both face the same direction, and then inputting the two scintigrams to the input layer of the trained model for inference, bone metastasis regions can be detected with high accuracy.
Hereinafter, the learning device and the bone metastasis detection device of the embodiments will be described with reference to the drawings.
(第1實施形態) (First embodiment)
FIG. 1 is a diagram showing the configuration of the learning device 1 according to the first embodiment. The learning device 1 of the first embodiment is a device that generates, by learning, a neural network model for detecting bone metastasis regions from a subject's scintigram. The neural network model generated by the learning device 1 of this embodiment classifies the regions of a subject's scintigram into three classes: bone metastasis region, non-bone-metastasis region, and background. In this embodiment, the non-bone-metastasis class includes not only non-malignant lesion regions but also regions of physiological accumulation such as the kidneys and bladder.
The learning device 1 comprises: an input unit 10 that receives training data; a control unit 11 that trains the neural network model based on the training data; a memory unit 16 that stores the model produced by training; and an output unit 17 that outputs the model stored in the memory unit 16 to the outside.
FIG. 2A is a diagram showing an example of training data input to the input unit 10. The training data comprise a subject's scintigram and the ground-truth labels assigned to it. In this example, the scintigram size is 512×1024 [pixels]. The ground-truth label specifies, for each pixel, whether the pixel of interest corresponds to an accumulation or to the background (anything other than an accumulation). For a pixel corresponding to an accumulation, it further specifies whether the pixel belongs to a bone metastasis region, an injection leak or urine leak, or a non-bone-metastasis region. Injection leaks and urine leaks are excluded from the detection targets of the bone metastasis detection device 20 of this embodiment.
Next, the control unit 11 will be described. The control unit 11 comprises a density normalization processing unit 12, a patch image creation unit 13, a patch image inversion unit 14, and a learning unit 15.
The density normalization processing unit 12 has a function of normalizing density values in order to suppress the variation in the density values of normal bone regions, which differ from subject to subject. The density normalization processing unit 12 normalizes the density values through density range adjustment, identification of the normal bone level, and gray-scale normalization. The density range adjustment is performed, for example, by a linear transformation such that, in the density histogram of the input image excluding density value 0, the pixel value at the cumulative top 0.2% becomes 1023 and the pixel value at the cumulative top 98% becomes 0.
The normal bone level is identified by applying a multiple-threshold method to the density histogram, excluding density value 0, of the range-adjusted image. Thresholds are set at every 1% of the cumulative histogram, from the top 1% to the top 25% of pixel values. After binarizing the range-adjusted image at each threshold, 4-connected labeling is performed. From the results, regions with an area of at least 10 [pixels] and less than 4900 [pixels] are selected. Next, the mean density values of the obtained regions are sorted in descending order and the transition point (the boundary between the normal and abnormal regions) is determined. The position where two consecutive mean density values fall below 3% of the peak pixel is taken as the transition point. The peak pixel is the maximum of the mean density values of the regions.
Then, in gray-scale normalization, the average value P of the mean density values of five consecutive points including the transition point is obtained. Finally, normalization is performed by multiplying the range-adjusted image by the normalization coefficient F = k/P. Here, the constant k is set to 358.4, a value determined experimentally (Tatsuya Ito, "Development of abnormal accumulation detection processing on bone scintigrams," bachelor's thesis, Tokyo University of Agriculture, 2015).
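The range adjustment and gray-scale normalization described above can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the patented implementation: the function names are invented, and the cumulative-histogram percentages are read as percentiles of the nonzero densities.

```python
import numpy as np

def range_adjust(img):
    """Linear transform so that, in the histogram of nonzero densities,
    the value at the cumulative top 0.2% maps to 1023 and the value at
    the cumulative top 98% maps to 0 (values outside are clipped)."""
    nz = img[img > 0].astype(np.float64)
    hi = np.percentile(nz, 100 - 0.2)   # cumulative top 0.2%
    lo = np.percentile(nz, 100 - 98.0)  # cumulative top 98%
    out = (img.astype(np.float64) - lo) / (hi - lo) * 1023.0
    return np.clip(out, 0.0, 1023.0)

def grayscale_normalize(img, transition_means, k=358.4):
    """Multiply by F = k / P, where P is the mean of the five consecutive
    region-mean densities around the transition point and k = 358.4 is
    the experimentally determined constant from the description."""
    P = float(np.mean(transition_means))
    return img * (k / P)
```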
The patch image creation unit 13 has a function of cutting out patch images from the subject's scintigram. In this example, the patch size is 64×64 [pixels]. A 64×64 [pixels] window is scanned over the subject's scintigram (512×1024 [pixels]) at 2 [pixels] intervals, and the window is cut out as an image patch when (1) the window contains an accumulation label (bone metastasis region or non-bone-metastasis region), or (2) it contains a bone region and no accumulation. The patch image creation unit 13 cuts out image patches from the bone scintigrams of the plurality of input subjects. FIG. 2B and FIG. 2C are diagrams showing examples of image patches cut out from subjects' scintigrams and their corresponding ground-truth labels.
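The window scan of the patch image creation unit can be sketched as follows. The label codes and function names are illustrative assumptions (the actual encoding is not specified in the text); the stride is the 2-pixel interval stated above.

```python
import numpy as np

# Hypothetical label codes; the actual encoding is not given in the description.
BACKGROUND, BONE, METASTASIS, NON_METASTASIS = 0, 1, 2, 3

def extract_patches(scinti, label, patch=64, stride=2):
    """Scan a patch-sized window over the scintigram at the given stride
    and keep windows that (1) contain an accumulation label, or
    (2) contain bone without an accumulation."""
    patches = []
    H, W = label.shape
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            win = label[y:y + patch, x:x + patch]
            has_accum = bool(np.any((win == METASTASIS) | (win == NON_METASTASIS)))
            has_bone = bool(np.any(win == BONE))
            if has_accum or has_bone:
                patches.append((scinti[y:y + patch, x:x + patch], win))
    return patches
```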
The patch image inversion unit 14 has a function of flipping some of the created patch images horizontally.
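The horizontal flipping amounts to mirroring each patch together with its label map so that the two stay aligned. A minimal sketch follows; the flipped fraction is an assumption, since the text only says that some of the patches are flipped.

```python
import numpy as np

def augment_with_flips(pairs, fraction=0.5, seed=0):
    """Horizontally flip a chosen fraction of (patch, label) pairs,
    mirroring image and label together so they stay aligned."""
    rng = np.random.default_rng(seed)
    out = []
    for img, lab in pairs:
        if rng.random() < fraction:
            out.append((np.fliplr(img), np.fliplr(lab)))
        else:
            out.append((img, lab))
    return out
```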
The learning unit 15 has a function of training, using the patch images, a neural network model for detecting bone metastasis regions from scintigrams. In this embodiment, U-Net, one type of FCN (Fully Convolutional Network), is used as the neural network model.
FIG. 3 is a diagram showing an example of the neural network model used in this embodiment. FIG. 3 shows an example of the structure when a patch of size 64×64 [pixels] is input. The neural network model used in this embodiment has an encoder-decoder structure. In the encoder, convolution and pooling are repeated to extract the global features of the image. The decoder restores the global structure to an image of the original size; in this process, local features are also learned by concatenating the feature maps obtained during encoding.
The neural network model used in this embodiment also has Bottleneck blocks, a type of residual block (K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv:1512.03385, 2015), to extract higher-level features.
The structure of the neural network model shown in FIG. 3 will be described in detail. In this example, the input image is gray-scale and the input dimensions are 64×64×1. First, a convolution layer brings the number of channels to 32, followed by a Bottleneck block. Then 2×2 max pooling is applied, and the number of channels is doubled through another Bottleneck block. These layers are repeated four times in total, and the final feature map of the encoder has size 4×4×512.
Next, a deconvolution layer doubles the size of the feature map. The output of the deconvolution layer is then concatenated (concat) with the corresponding encoder feature map and passed through a Bottleneck block. As in the encoder, these layers are repeated four times in total, and the final feature map of the decoder has size 64×64×32. Finally, a 1×1 convolution layer reduces this to the number of output classes, i.e., three channels (background, bone metastasis region, non-bone-metastasis region), giving 64×64×3. Zero padding is applied in all 3×3 convolution layers, each of which is followed by Batch Normalization (S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," arXiv:1502.03167, 2015) and a ReLU function.
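The tensor shapes described above can be traced with a short calculation. This sketch checks only the spatial sizes and channel counts of the stated architecture, not the actual layers:

```python
def unet_shapes(size=64, base=32, depth=4, n_classes=3):
    """Trace the feature-map shapes of the encoder-decoder: each encoder
    stage halves the spatial size with 2x2 max pooling and doubles the
    channels; the decoder mirrors this back to the input resolution."""
    shapes = [(size, size, base)]  # first convolution: 64x64x32
    s, c = size, base
    for _ in range(depth):         # encoder: pool + double channels
        s, c = s // 2, c * 2
        shapes.append((s, s, c))
    for _ in range(depth):         # decoder: upsample + halve channels
        s, c = s * 2, c // 2
        shapes.append((s, s, c))
    shapes.append((size, size, n_classes))  # final 1x1 convolution
    return shapes

shapes = unet_shapes()
assert shapes[4] == (4, 4, 512)    # encoder bottleneck, as stated
assert shapes[-1] == (64, 64, 3)   # background / metastasis / non-metastasis
```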
The learning unit 15 trains the neural network model using the patch images (including those flipped horizontally by the patch image inversion unit 14) and their ground-truth labels. Training proceeds by evaluating the error (loss function) between the probabilities p_i, obtained by applying the softmax function to the model's output when a patch image is input, and the ground-truth probabilities. The softmax function and the loss function are as follows.
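The formulas themselves are not reproduced here, so the following sketch uses the standard definitions of the softmax function and the per-pixel categorical cross-entropy loss, which is what the description implies; treat it as an assumption rather than the exact patented loss.

```python
import numpy as np

def softmax(logits):
    """Standard softmax over the class axis (the last axis)."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, onehot, eps=1e-12):
    """Mean per-pixel categorical cross-entropy against one-hot labels."""
    return float(-np.mean(np.sum(onehot * np.log(probs + eps), axis=-1)))
```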
The learning unit 15 also validates the trained network using a validation data set (validation set). Trained models obtained after arbitrary numbers of training iterations are saved, and a parameter search over all saved models is performed on the validation set. The number of training iterations is chosen using FP(P) + FN(P), the sum of the pixel-wise over-picking FP(P) and the pixel-wise misses FN(P), as the evaluation value. The learning unit 15 stores the model produced by training in the memory unit 16.
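The checkpoint selection rule above can be sketched as follows; the tuple layout is an illustrative assumption.

```python
def select_iterations(checkpoints):
    """Pick the number of training iterations whose checkpoint minimises
    FP(P) + FN(P) on the validation set; each entry is
    (iterations, fp_pixels, fn_pixels)."""
    best = min(checkpoints, key=lambda c: c[1] + c[2])
    return best[0]
```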
The configuration of the learning device 1 of this embodiment has been described above. An example of the hardware of the learning device 1 is a computer equipped with a CPU (Central Processing Unit), RAM (Random Access Memory), ROM (Read Only Memory), a hard disk, a display, a keyboard, a mouse, a communication interface, and the like. The learning device 1 is realized by storing a program product having modules that implement the functions described above in the RAM or ROM and executing the program product with the CPU. Such a program product is also within the scope of the present invention.
FIG. 4 is a diagram showing the configuration of the bone metastasis detection device 20. The bone metastasis detection device 20 comprises: an input unit 21 that receives a subject's scintigram; a control unit 22 that detects bone metastasis regions from the subject's scintigram; a memory unit 26 that stores the trained model obtained by the learning device 1; and an output unit 27 that outputs data of the detected bone metastasis regions.
The control unit 22 comprises a density normalization processing unit 23, a patch image creation unit 24, and an inference unit 25. The density normalization processing unit 23 is the same as the density normalization processing unit 12 of the learning device 1. The patch image creation unit 24 has a function of cutting out 64×64 [pixels] patch images from the input subject's scintigram. Its basic configuration is the same as that of the patch image creation unit 13 of the learning device 1, but the interval at which patch images are cut out differs: in the learning device 1 patches are cut out at 2 [pixels] intervals, whereas in the bone metastasis detection device 20 they are cut out at 32 [pixels] intervals.
The inference unit 25 reads the trained model from the trained model memory unit 26, inputs the patch images to the input layer of the trained model, and determines, for each pixel of a patch image, the probability that the pixel belongs to each of the classes background, bone metastasis region, and non-bone-metastasis region.
FIG. 5 is a diagram showing an example of patch images cut out from a scintigram. As shown in FIG. 5, patch images are cut out from the subject's scintigram so that adjacent patch images overlap each other by half. Thus, for example, region R is where patch images A to D overlap, and the feature maps of the pixels in region R are obtained from each of the four patch images A to D. The inference unit 25 takes the average of the feature maps obtained from the four patch images. The inference unit 25 then converts the reconstructed output into probabilities with the softmax function, determines for each pixel the class with the highest probability, and produces this as the final output.
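The averaging of overlapping patch outputs can be sketched as follows. The function name and the use of raw per-class maps before the final argmax are assumptions consistent with the description (stride 32, so neighbouring patches overlap by half).

```python
import numpy as np

def stitch_logits(patch_logits, positions, shape, patch=64, n_classes=3):
    """Accumulate the per-class maps of overlapping patches, average by
    the number of patches covering each pixel, and take the per-pixel
    argmax class, as the inference unit does."""
    H, W = shape
    acc = np.zeros((H, W, n_classes))
    cnt = np.zeros((H, W, 1))
    for logit, (y, x) in zip(patch_logits, positions):
        acc[y:y + patch, x:x + patch] += logit
        cnt[y:y + patch, x:x + patch] += 1
    avg = acc / np.maximum(cnt, 1)
    # 0: background, 1: bone metastasis, 2: non-bone-metastasis (illustrative order)
    return avg.argmax(axis=-1)
```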
圖6係表示學習裝置1之動作之圖。學習裝置1輸入複數個受驗者之閃爍圖及與其對應之正確標籤(背景、骨轉移區域、非骨轉移區域)作為指導資料(S10)。學習裝置1進行輸入之閃爍圖之濃度標準化(S11),自標準化後之閃爍圖製作補丁圖像(S12)。學習裝置1使製作之補丁圖像中之一部分補丁圖像左右反轉(S13)。繼而,學習裝置1使用補丁圖像及與其對應之正確標籤進行類神經網路模型之學習(S14),並將藉由學習而獲得之類神經網路模型記憶於記憶部16(S15)。再者,於在骨轉移檢測裝置20中使用學習完成模型之情形時,讀出記憶於記憶部16之學習模型,並將其輸出至其他裝置等。 FIG. 6 is a diagram showing the operation of the learning device 1. The learning device 1 receives, as guidance data, the scintillation images of a plurality of subjects and the corresponding correct labels (background, bone metastasis region, non-bone-metastasis region) (S10). The learning device 1 normalizes the density of the input scintillation images (S11) and creates patch images from the normalized scintillation images (S12). The learning device 1 flips some of the created patch images left-right (S13). Next, the learning device 1 trains the neural network model using the patch images and the corresponding correct labels (S14), and stores the neural network model obtained by learning in the memory unit 16 (S15). When the learned model is to be used in the bone metastasis detection device 20, the learning model stored in the memory unit 16 is read out and output to another device or the like.
圖7係表示骨轉移檢測裝置20之動作之圖。骨轉移檢測裝置20輸入檢查對象之受驗者之閃爍圖(S20)。骨轉移檢測裝置20進行輸入之閃爍圖之濃度標準化(S21),自標準化後之閃爍圖製作補丁圖像(S22)。骨轉移檢測裝置20自記憶部26讀出學習完成之類神經網路模型,並向讀出之類神經網路模型之輸入層輸入補丁圖像,對補丁圖像中所包含之各像素之骨轉移區域進行檢測(S23)。骨轉移檢測裝置20對複數個補丁圖像重疊之區域之像素整合檢測結果(S24)。骨轉移檢測裝置20輸出求出之骨轉移區域之最終結果(S25)。 FIG. 7 is a diagram showing the operation of the bone
第1實施形態之學習裝置1使用受驗者之閃爍圖及與其對應之正確標籤學習類神經網路模型。藉由使用該學習完成模型,能夠減少所謂「過度拾取」,恰當地檢測骨轉移區域。 The
又,第1實施形態之學習裝置1可藉由使用自受驗者之閃爍圖切出之補丁圖像進行學習而減小學習時所需之記憶體尺寸。又,由於骨轉移區域之產生部位無關於器官之形狀,故而即便分割為補丁圖像進行學習,亦能夠進行恰當之學習。 In addition, the learning device 1 of the first embodiment can reduce the memory size required during learning by performing learning with patch images cut out from the subjects' scintillation images. Moreover, since the sites where bone metastasis regions occur are unrelated to the shapes of organs, appropriate learning is possible even when the images are divided into patches for learning.
又,第1實施形態之學習裝置1藉由使多個補丁圖像中之一部分補丁圖像左右反轉而增加指導資料之變化,從而獲得可靠之學習結果。再者,於本實施形態中,列舉了使補丁圖像左右反轉之例,但亦可使補丁圖像上下反轉。使用上下反轉後之補丁圖像之方法適於背景之骨之解剖學構造為上下對稱之情形(例如,研究沿鉛直方向延伸之四肢之集聚之情形等)。 In addition, the
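The augmentation described here amounts to mirroring a patch together with its per-pixel label map so that the two stay aligned. A sketch for patches stored as lists of rows:

```python
def flip_lr(grid):
    """Left-right mirror of a patch or label map given as a list of rows."""
    return [list(reversed(row)) for row in grid]

def flip_ud(grid):
    """Up-down mirror, for the vertically symmetric case mentioned above."""
    return [row[:] for row in reversed(grid)]
```

Applying the same flip to the patch and to its correct-label map keeps each pixel's label attached to the right pixel; applying a flip twice recovers the original.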
(第2實施形態) (Second embodiment)
圖8係表示第2實施形態之學習裝置2之構成之圖。第2實施形態之學習裝置2產生之類神經網路模型與第1實施形態相同,係用以將受驗者之閃爍圖之區域分類為骨轉移區域、非骨轉移區域及背景之3種類別之模型。第2實施形態之學習裝置2之基本構成與第1實施形態之學習裝置1相同,但第2實施形態之學習裝置2具備對作為指導資料之多個圖像補丁之內容進行分析之指導資料分析部18。於多個圖像補丁中,有包含骨轉移區域或非骨轉移區域之補丁圖像、及不包含骨轉移區域與非骨轉移區域之任一者之補丁圖像。指導資料分析部18求出作為指導資料之多個補丁圖像中所包含之包含骨轉移區域或非骨轉移區域之補丁圖像與不包含骨轉移區域及非骨轉移區域之任一者之補丁圖像的構成比。輸出部17輸出產生記憶於記憶部16之學習完成模型之補丁圖像之構成比的資料。 FIG. 8 is a diagram showing the configuration of the
藉由如此輸出用於產生學習完成模型之補丁圖像之構成比,而可於使用學習完成模型進行之骨轉移區域之檢測精度不變高地產生新的學習完成模型時,獲得應如何變更指導資料進行學習之提示。於本實施形態中,列舉了由觀察到補丁圖像之構成比之使用者變更指導資料之例,但亦可進一步進展,學習裝置2基於補丁圖像之構成比而變更指導資料。 By outputting the composition ratio of the patch images used to generate the learned model in this way, when the detection accuracy of bone metastasis regions obtained with the learned model does not improve and a new learned model is to be generated, a hint can be obtained as to how the guidance data should be changed for learning. In this embodiment, an example is given in which a user who has seen the composition ratio of the patch images changes the guidance data, but this can be taken further: the learning device 2 may change the guidance data based on the composition ratio of the patch images.
圖9係表示第2實施形態之變形例之學習裝置3之圖。變形例之學習裝置3除了具備第2實施形態之學習裝置2的構成以外,還具備補丁圖像選擇部19。補丁圖像選擇部19具有基於指導資料分析部18之分析結果選擇用於學習之補丁圖像之功能。根據本發明者等人之研究,認為若不包含骨轉移區域與非骨轉移區域之任一者之補丁圖像過多,則無法產生恰當之模型。因此,變形例之學習裝置3於不包含骨轉移區域或非骨轉移區域之補丁圖像之構成比為既定之臨限值以上的情形時,選擇用於學習之補丁圖像,而並非使用所有不包含骨轉移區域或非骨轉移區域之補丁圖像。藉此,能夠產生骨轉移區域之檢測精度較佳之模型之可能性提高。 9 is a diagram showing a
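A sketch of the two steps described in this embodiment and its variant: computing the composition ratio of lesion-free patches, and subsampling them when that ratio exceeds a threshold. The threshold value and the random subsampling strategy are illustrative assumptions; the publication states only that a selection is made when the ratio is at or above a predetermined limit.

```python
import random

def lesion_free_ratio(has_lesion):
    """Fraction of patches containing neither metastatic nor non-metastatic
    accumulation (the composition ratio reported by the analysis unit)."""
    return sum(1 for h in has_lesion if not h) / len(has_lesion)

def select_patches(patches, has_lesion, max_ratio=0.5, seed=0):
    """If lesion-free patches would exceed max_ratio of the training set,
    keep only a random subset so the final ratio is at most max_ratio."""
    pos = [p for p, h in zip(patches, has_lesion) if h]
    neg = [p for p, h in zip(patches, has_lesion) if not h]
    limit = int(len(pos) * max_ratio / (1.0 - max_ratio))
    if len(neg) > limit:
        neg = random.Random(seed).sample(neg, limit)
    return pos + neg
```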
(第3實施形態) (Third Embodiment)
圖11係表示第3實施形態之學習裝置4之圖。第3實施形態之學習裝置4使用Butterfly-Net作為學習對象之類神經網路之模型。Butterfly-Net具備將具有編碼器-解碼器構造之2個網路部分結合而成之構造。關於Butterfly-Net,於「Btrfly Net:Vertebrae Labelling with Energybased Adversarial Learning of Local Spine Prior」Anjany Sekuboyina等人,MICCAI 2018中已詳細地記載。 FIG. 11 is a diagram showing a
學習裝置4具有:輸入部40,其輸入指導資料;控制部41,其基於指導資料進行類神經網路模型之學習;記憶部47, 其記憶藉由學習而產生之模型;及輸出部48,其向外部輸出記憶於記憶部47之模型。再者,第3實施形態之學習裝置4產生用以將受驗者之閃爍圖之區域分類為骨轉移區域、非惡性病變區域(骨折、炎症等)、其他區域(腎臟、膀胱等生理性集聚區域、注射洩漏、尿洩漏、背景)之3種類別之模型。於本實施形態中,將生理性集聚區域包含於其他區域之類別,分類為與非惡性病變區域不同之類別。 The
本實施形態之學習裝置4使用自前方拍攝之受驗者之閃爍圖(以下,稱為「前方圖像」)與自後方拍攝之受驗者之閃爍圖(以下,稱為「後方圖像」)、及賦予至各個閃爍圖之正確標籤作為指導資料。圖12係表示前方圖像與後方圖像之例之圖。再者,後方圖像係沿水平方向反轉。 The
控制部41具有圖像反轉部42、前後圖像對位部43、濃度標準化處理部44、補丁圖像製作部45、及學習部46。 The
圖像反轉部42具有使後方圖像反轉之功能。藉由圖像反轉部42進行反轉時,賦予至後方圖像之正確標籤亦進行反轉。前後圖像對位部43進行前方圖像與反轉後之後方圖像之對位。再者,此處列舉了使後方圖像反轉且與前方圖像對位之例,但當然亦可使前方圖像反轉且與後方圖像對位。 The
濃度標準化處理部44具有進行濃度值之標準化之功能,以抑制根據每個受驗者而不同之正常骨區域之濃度值之不均。濃度標準化處理部44藉由濃度範圍調整、正常骨水準之鑑定、灰度標準化之處理而進行濃度值之標準化。濃度標準化處理部44將輸入之閃爍圖之濃度Iin轉換為藉由下述式(3)而標準化之濃度Inormalized。 The density standardization processing unit 44 has a function of normalizing density values so as to suppress the variation in the density values of normal bone regions, which differ from subject to subject. The density standardization processing unit 44 normalizes density values through density-range adjustment, identification of the normal bone level, and gray-scale normalization. The density standardization processing unit 44 converts the density I_in of the input scintillation image into the density I_normalized standardized by the following formula (3).
其中,φ為黃金比例。 Here, φ is the golden ratio.
補丁圖像製作部45具有自受驗者之閃爍圖切出並製作補丁圖像之功能。於本實施形態中,補丁圖像製作部45自前方圖像及後方圖像切出對應之位置之補丁圖像,產生前後一對補丁圖像。於圖12中,自前方圖像獲得之補丁圖像A與自後方圖像所獲得之補丁圖像A'係成對之補丁圖像。又,補丁圖像B與補丁圖像B'亦為成對之補丁圖像。 The patch
於本例中,補丁圖像之尺寸為64×64[pixels]。於受驗者之閃爍圖(512×1024[pixels])之上以2[pixels]間隔對64×64[pixels]之視窗進行掃描,於(1)在視窗內包含集聚標記(骨轉移區域或非骨轉移區域),或者(2)包含骨區域且不包含集聚之情形時,切出作為圖像補丁。於前方圖像或後方圖像之任一者中符合上述(1)(2)之條件而切出補丁圖像之情形時,自前方圖像或後方圖像之另一者切出成對之補丁圖像。 In this example, the size of a patch image is 64×64 [pixels]. A 64×64 [pixels] window is scanned over the subject's scintillation image (512×1024 [pixels]) at 2-[pixel] intervals, and the window is cut out as an image patch when (1) the window contains an accumulation label (bone metastasis region or non-bone-metastasis region), or (2) the window contains a bone region and contains no accumulation. When a patch image is cut out from either the front image or the back image because condition (1) or (2) above is satisfied, the paired patch image is cut out from the other of the front and back images.
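The two cut-out conditions can be written as a small predicate over the set of labels present in a window. The label names ('hotspot' for an accumulation mark, 'bone', 'background') are an illustrative encoding; the publication defines only the conditions, not a representation.

```python
def window_qualifies(labels):
    """Condition (1): the window contains an accumulation mark.
    Condition (2): it contains bone but no accumulation."""
    has_hotspot = "hotspot" in labels                             # (1)
    bone_without_hotspot = "bone" in labels and not has_hotspot   # (2)
    return has_hotspot or bone_without_hotspot

def cut_pair(front_labels, back_labels):
    """A front/back patch pair is cut when either view qualifies; the patch
    at the corresponding position in the other view is cut with it."""
    return window_qualifies(front_labels) or window_qualifies(back_labels)
```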
學習部46具有使用補丁圖像進行用於自閃爍圖檢測骨轉移區域之類神經網路模型之學習的功能。於本實施形態中,使用具有將2個U-Net結合而成之構造之Butterfly-Net作為類神經網路模型。 The
圖13係表示本實施形態中所使用之Butterfly-Net之例之圖。Butterfly-Net之上側具有向下凸起之構成,且具有與圖3 所示之網路大致相同之構造。Butterfly-Net之下側具有向上凸起之構成,且具有與圖3所示之網路相同之構造(僅上下反轉而繪製)。Butterfly-Net係2個U-Net於8×8之各128個特徵映射之位置進行結合。 13 is a diagram showing an example of Butterfly-Net used in this embodiment. Butterfly-Net has a downward convex structure on the upper side, and has the same structure as the network shown in FIG. 3. Butterfly-Net has an upwardly convex structure underneath, and has the same structure as the network shown in FIG. 3 (only drawn upside down). Butterfly-Net is a combination of 2 U-Nets at the position of 128 feature maps of 8×8 each.
又,本實施形態中所使用之類神經網路模型使用作為殘差區塊之一的瓶頸(K.He,X.Zhang,S.Ren,and J.Sun「Deep residual learning for image recognition」arXiv:1512.03385,2015),以抽選出更高度之特徵。於本說明書中,將經如此改良之Butterfly-Net稱為「ResButterfly-Net」。 In addition, the neural network model used in this embodiment uses a bottleneck, one type of residual block (K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition", arXiv:1512.03385, 2015), in order to extract higher-level features. In this specification, the Butterfly-Net improved in this way is called "ResButterfly-Net".
於本例中,輸入圖像為灰度,輸入之維度為64×64×1。首先,於捲積層中使通道數為32,通過瓶頸。其後,進行2×2之最大池化,以通道數翻倍之方式通過瓶頸。藉由上下之U-Net之各者進行將該等層重複3次之處理。然後,於獲得8×8×128之尺寸之特徵映射之位置,將上下之2個U-Net之特徵映射結合,進而進行2次瓶頸與最大池化,最終,藉由編碼獲得2×2×512之尺寸之特徵映射。 In this example, the input image is grayscale, and the input dimension is 64×64×1. First, make the number of
繼而,於通過瓶頸後進行逆捲積,使特徵映射之尺寸翻倍。然後,將逆捲積之輸出與編碼器之特徵映射連結(concat),通過瓶頸。於與編碼器同樣地將該等層進行2次之後,複製其結果,將上下之編碼器之各者之特徵映射連結,重複3次通過瓶頸進行逆捲積之處理。最後,於1×1之捲積層中成為輸出類別數即3個通道(骨轉移區域、非惡性病變區域、其他區域),而成為64×64×3。再者,於圖13中,示出骨轉移區域(Bone metastatic legion)及非惡性病變區域(Non-malignant lesion),除骨轉移區域及非惡性病變區域 以外之部分為其他區域。 Then, after passing the bottleneck, deconvolution is performed to double the size of the feature map. Then, the output of the deconvolution is concatenated with the feature map of the encoder to pass the bottleneck. After performing the same layer twice as the encoder, the result is copied, the feature maps of the upper and lower encoders are connected, and the process of deconvolution through the bottleneck is repeated 3 times. Finally, in the 1×1 convolutional layer, the number of output categories is 3 channels (bone metastasis area, non-malignant lesion area, and other areas), which becomes 64×64×3. In addition, FIG. 13 shows a bone metastatic legion and a non-malignant lesion, and the part other than the bone metastatic legion and the non-malignant lesion is another area.
學習部46使用成對之前後之補丁圖像及其正確標籤進行類神經網路模型之學習。藉由對利用Softmax函數轉換將一對補丁圖像輸入至類神經網路模型時之輸出所得之概率pi與正確之概率的誤差(損失函數)進行評價而進行學習。以下示出損失函數。 The
此處,wc係用以減少像素數不同之影響之類別c之加權。 Here, w c is the weighting of category c to reduce the influence of different pixel counts.
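The loss function itself appears as an equation image in the original publication. A standard class-weighted cross-entropy consistent with the description (with w_c compensating for unequal pixel counts between classes) would look like the following; the inverse-frequency choice of w_c is an assumption for illustration only.

```python
import math

def weighted_cross_entropy(p, t, w):
    """Loss for one pixel: -sum_c w_c * t_c * log(p_c), with t one-hot."""
    return -sum(wc * tc * math.log(pc) for pc, tc, wc in zip(p, t, w) if tc > 0)

def inverse_frequency_weights(pixel_counts):
    """One common weighting: w_c proportional to 1 / (pixel count of class c),
    normalized so the weights sum to 1."""
    inv = [1.0 / c for c in pixel_counts]
    total = sum(inv)
    return [v / total for v in inv]
```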
圖14係表示第3實施形態之骨轉移檢測裝置50之構成之圖。骨轉移檢測裝置50具有:輸入部51,其輸入受驗者之閃爍圖;控制部52,其自受驗者之閃爍圖檢測骨轉移區域;記憶部58,其記憶有藉由上述學習裝置4學習而得之學習完成模型;及輸出部59,其輸出檢測出之骨轉移區域之資料。 FIG. 14 is a diagram showing the configuration of the bone metastasis detection device 50 of the third embodiment. The bone metastasis detection device 50 includes: an input unit 51 that inputs a subject's scintillation image; a control unit 52 that detects bone metastasis regions from the subject's scintillation image; a memory unit 58 that stores the learned model obtained by learning with the learning device 4 described above; and an output unit 59 that outputs data of the detected bone metastasis regions.
控制部52具有圖像反轉部53、前後圖像對位部54、濃度標準化處理部55、補丁圖像製作部56、及推論部57。圖像反轉部53、前後圖像對位部54及濃度標準化處理部55係與學習裝置4所具備之圖像反轉部42、前後圖像對位部43及濃度標準化處理部44相同。補丁圖像製作部56具有自輸入之受驗者之閃爍圖(前方圖像及後方圖像)切出補丁圖像之功能。補丁圖像製作部56自前後之閃爍圖切出對應之區域而產生成對之補丁圖像。再者,補丁圖像製作部56亦可相對於學習裝置4所具備之補丁圖像製作部45而改變切出補丁圖像之間隔。 The control unit 52 includes an image inversion unit 53, a front-back image registration unit 54, a density standardization processing unit 55, a patch image creation unit 56, and an inference unit 57. The image inversion unit 53, the front-back image registration unit 54, and the density standardization processing unit 55 are the same as the image inversion unit 42, the front-back image registration unit 43, and the density standardization processing unit 44 of the learning device 4. The patch image creation unit 56 has a function of cutting out patch images from the input subject's scintillation images (front image and back image). The patch image creation unit 56 cuts out corresponding regions from the front and back scintillation images to generate paired patch images. The patch image creation unit 56 may also use a patch-extraction interval different from that of the patch image creation unit 45 of the learning device 4.
推論部57自學習完成模型記憶部58讀出學習完成模型,並向學習完成模型之輸入層輸入一對補丁圖像,求出補丁圖像之各像素屬於骨轉移區域、非惡性病變區域、其他區域之各類別之概率。 The
圖15係表示學習裝置4之動作之圖。學習裝置4輸入複數個受驗者之閃爍圖(前方圖像及後方圖像)及與其對應之正確標籤(骨轉移區域、非惡性病變區域、其他區域)作為指導資料(S30)。學習裝置4使後方圖像反轉(S31),進行前方圖像與反轉後之後方圖像之對位(S32)。其次,學習裝置4進行輸入之前方圖像與後方圖像之濃度標準化(S33),並切出前後之圖像對應之區域而產生數對補丁圖像(S34)。 15 is a diagram showing the operation of the
繼而,學習裝置4使用補丁圖像及與其對應之正確標籤,進行類神經網路模型之學習(S35)。如上所述,於此處之學習中,將一對前後之補丁圖像輸入至Butterfly-Net之輸入層,基於自輸出層輸出之類別與正確資料進行學習。學習裝置4將藉由學習而獲得之類神經網路模型記憶於記憶部47(S36)。再者,於在骨轉移檢測裝置50中使用學習完成模型之情形時,讀出記憶於記憶部47之學習模型,並將其輸出至其他裝置等。 Then, the
第3實施形態之學習裝置4構成為使用Butterfly-Net之類神經網路模型作為學習模型,向其輸入層輸入一對前後之補丁圖像而進行學習,因此,藉由同時對關聯性較高之補丁圖像進行處理,可產生能夠精度良好地檢測骨轉移區域之類神經網路之模型。 The
再者,於第3實施形態之學習裝置4中,亦與第1實施形態相同,亦可藉由使補丁圖像反轉並使多個補丁圖像中之一部分補丁圖像左右反轉,而增加指導資料之變化。但是,於本實施形態中,作為指導資料之補丁圖像為前後一對圖像,因此,於使前方或後方之補丁圖像反轉之情形時,使另一補丁圖像亦向相同方向反轉。藉由如此使補丁圖像反轉而增加指導資料,能夠進行可靠之學習。 Furthermore, in the learning device 4 of the third embodiment, as in the first embodiment, the variation of the guidance data may be increased by flipping patch images, that is, by left-right flipping some of the plurality of patch images. In this embodiment, however, the patch images used as guidance data form front-back pairs; therefore, when the front or back patch image is flipped, the other patch image of the pair is flipped in the same direction. By increasing the guidance data through such flipping of patch images, reliable learning can be performed.
又,於上述第3實施形態之學習裝置4中,列舉了產生分類為與第1實施形態不同之類別之學習完成模型之例,但當然亦可產生分類為與第1實施形態相同之類別之學習完成模型。反之,於上述第1實施形態或第2實施形態中,將注射洩漏或尿洩漏除外,但亦可將腎臟或膀胱等生理性集聚、注射洩漏、尿洩漏、及背景設為其他區域,產生分類為與第3實施形態相同之類別之模型。 In addition, in the
(實施例1) (Example 1)
對使用利用第1實施形態之學習裝置1產生之學習完成模型檢測骨轉移區域之實施例進行說明。作為學習完成模型,藉由使用使補丁圖像反轉而得之指導資料產生之學習完成模型、及使用不使補丁圖像反轉之指導資料產生之學習完成模型進行骨轉移區域之檢測。 An example of detecting a bone metastasis region using the learning completion model generated by the
(用於實驗之試樣) (Sample used for experiment)
‧前面骨閃爍圖濃度值標準化圖像:103個病例 ‧Standardized image of concentration value of anterior bone scintillation chart: 103 cases
‧圖像尺寸:512×1024[pixels] ‧Image size: 512×1024[pixels]
‧解析度:2.8×2.8[mm/pixel] ‧Resolution: 2.8×2.8[mm/pixel]
‧補丁尺寸:64×64[pixels] ‧Patch size: 64×64[pixels]
(評價法) (Evaluation method)
‧3-fold交叉驗證(學習:68個病例、驗證:17個病例、測試:17~18個病例) ‧3-fold cross-validation (learning: 68 cases, verification: 17 cases, test: 17-18 cases)
再者,驗證資料係用以決定學習之重複次數之資料。 Furthermore, the verification data is used to determine the number of repetitions of learning.
(評價值) (Evaluation value)
‧FP(P):像素單位之過度拾取 ‧FP(P): excessive picking up of pixel units
‧FN(P):骨轉移區域之像素單位之遺漏 ‧FN(P): omission of pixel units in bone metastasis area
‧靈敏度:骨轉移區域之區域單位之檢測率 =(檢測出之骨轉移區域數)/(骨轉移區域數) ‧Sensitivity: detection rate of the regional unit of bone metastasis area = (number of detected bone metastasis areas)/(number of bone metastasis areas)
‧FROC曲線:靈敏度vs.FP(P)或FP(R) ‧FROC curve: Sensitivity vs. FP(P) or FP(R)
‧FP(P)/背景+FN(P)/骨轉移區域 ‧FP(P)/background+FN(P)/bone metastasis area
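The region-level quantities in the list above can be sketched as one function that sweeps a score threshold over candidate regions and returns (FP, sensitivity) points of an FROC curve. This is an illustrative computation, not code from the study.

```python
def froc_points(scores, is_metastasis, thresholds):
    """For each threshold: number of false-positive regions picked up, and
    region-level sensitivity = detected metastases / total metastases."""
    n_pos = sum(1 for m in is_metastasis if m)
    points = []
    for th in thresholds:
        picked = [m for s, m in zip(scores, is_metastasis) if s >= th]
        tp = sum(1 for m in picked if m)
        fp = len(picked) - tp
        points.append((fp, tp / n_pos))
    return points
```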
(模型之學習條件) (Learning conditions of the model)
‧優化器(Optimizer):Adam(α=0.001、β1=0.9、β2=0.999) ‧Optimizer: Adam (α=0.001, β 1 =0.9, β 2 =0.999)
‧批尺寸(batch size)64
‧重複次數:10000次 ‧Number of repetitions: 10000 times
‧藉由驗證選擇FP(P)+FN(P)為最少之網路 ‧Select the network with the least number of FP(P)+FN(P) through verification
(比較例) (Comparative example)
作為比較例,與使用基於多個弱分類器之分類結果進行檢測之MadaBoost(C.Domingo and O.Watanabe「MadaBoost:A modification of AdaBoost」Proc.Thirteenth Annual Conference on Computational Learning Theory,pp.180-189,2000)求出之檢測結果進行比較。MadaBoost之演算法使用由發明者等人之研究室開發之方法(南勇太「骨閃爍圖上之骨轉移檢測處理之改良」第3次腫瘤核醫學圖像解析軟體開發會議)。 As a comparative example, the results were compared with detection results obtained with MadaBoost (C. Domingo and O. Watanabe, "MadaBoost: A modification of AdaBoost", Proc. Thirteenth Annual Conference on Computational Learning Theory, pp. 180-189, 2000), which performs detection using the classification results of multiple weak classifiers. The MadaBoost algorithm uses a method developed in the inventors' laboratory (Yuta Minami, "Improvement of bone metastasis detection processing on bone scintigrams", 3rd Oncology Nuclear Medicine Image Analysis Software Development Meeting).
(實驗結果) (Experimental results)
圖10係表示藉由實驗而獲得之靈敏度與FP(P)之關係之FROC曲線。圖10所示之FROC曲線表示若縱軸之靈敏度變高,則相應地將非骨轉移區域作為骨轉移區域而拾取之「過度拾取」變多。於實施例之方法中,例如若將靈敏度設為0.8,則過度拾取為200像素以下。與於使用MadaBoost之習知法中產生500像素以上之過度拾取之情況相比,於實施例中,能夠抑制拾取過度。於圖10中,U-Net(Flip)之圖表示使用利用使一部分補丁圖像反轉而得之指導資料產生之學習完成模型進行檢測之結果的圖,U-Net之圖係表示使用未進行補丁圖像之反轉而產生之學習完成模型進行檢測之結果的圖。 FIG. 10 shows the FROC curves of the relationship between sensitivity and FP(P) obtained through the experiments. The FROC curves in FIG. 10 show that as the sensitivity on the vertical axis becomes higher, the "over-picking" in which non-bone-metastasis regions are picked up as bone metastasis regions correspondingly increases. With the method of the example, for instance, when the sensitivity is set to 0.8, over-picking is 200 pixels or fewer. Compared with the conventional method using MadaBoost, in which over-picking of 500 pixels or more occurs, the example suppresses over-picking. In FIG. 10, the U-Net (Flip) curve shows the detection results obtained using the learned model generated from guidance data in which some of the patch images were flipped, and the U-Net curve shows the detection results obtained using the learned model generated without flipping patch images.
(實施例2) (Example 2)
對使用利用第3實施形態之學習裝置4產生之學習完成模型檢測骨轉移區域之實施例進行說明。作為學習完成模型,使用了於第3實施形態中所說明之ResButterfly-Net、及將ResButterfly-Net之瓶頸替換為捲積層之Butterfly-Net。又,使用U-Net作為比較例。 An example of detecting a bone metastasis region using the learning completion model generated by the
(用於實驗之試樣) (Sample used for experiment)
‧52歲~95歲之前列腺癌之日本男性:246個病例 ‧52-95 year old Japanese men with prostate cancer: 246 cases
(評價法) (Evaluation method)
‧3-fold交叉驗證(學習:164個病例、驗證:41個病例、測試:41個病例) ‧3-fold cross-validation (learning: 164 cases, verification: 41 cases, test: 41 cases)
再者,驗證資料係用以決定學習之最佳之重複次數之資料。 Furthermore, the verification data is used to determine the best number of repetitions for learning.
(評價值) (Evaluation value)
‧FP(P):像素單位之過度拾取 ‧FP(P): excessive picking up of pixel units
‧FP(R):區域單位之過度拾取 ‧FP(R): Excessive pickup of regional units
‧FP(P)+FN(P):像素單位之過度拾取與骨轉移區域之像素單位之遺漏 ‧FP(P)+FN(P): over-picking of pixel units and omission of pixel units in bone metastasis area
(模型之學習條件) (Learning conditions of the model)
‧優化器:Adam(α=0.001、β1=0.9、β2=0.999) ‧Optimizer: Adam (α=0.001, β 1 =0.9, β 2 =0.999)
‧批尺寸256 ‧
‧重複次數:設為最大50000次,將被誤分類之像素之總數成為最小時作為最佳重複次數。 ‧Number of repetitions: a maximum of 50,000; the iteration at which the total number of misclassified pixels becomes smallest is taken as the optimal number of repetitions.
(實驗結果) (Experimental results)
表1表示骨轉移區域之感度為0.9時之各評價值。上為前方圖像之結果,下為後方圖像之結果。 Table 1 shows the evaluation values when the sensitivity of the bone metastatic region is 0.9. The top is the result of the front image, and the bottom is the result of the rear image.
[表1]
如表1所示,若使用ResButterfly-Net、Butterfly-Net作為學習模型,則相較於使用U-Net之模型,可於多個指標中確認出熱點檢測時之錯誤少。 As shown in Table 1, if ResButterfly-Net and Butterfly-Net are used as the learning model, compared with the model using U-Net, it can be confirmed that there are fewer errors in hotspot detection in multiple indicators.
上述實施形態及實施例包含以下(1)至(11)所示之技術思想。 The above embodiments and examples include the technical ideas shown in (1) to (11) below.
(1)一種學習裝置,其係產生用於自受驗者之閃爍圖檢測異常集聚之類神經網路之模型者;其具備:輸入部,其輸入複數個受驗者之閃爍圖及各閃爍圖中之正常集聚與異常集聚之正確標籤作為指導資料;及學習部,其使用上述指導資料進行用以檢測骨閃爍圖之異常集聚之類神經網路之模型的學習。 (1) A learning device for generating a neural network model for detecting abnormal accumulation from a subject's scintillation image, comprising: an input unit that inputs, as guidance data, the scintillation images of a plurality of subjects and the correct labels of normal accumulation and abnormal accumulation in each scintillation image; and a learning unit that uses the guidance data to learn a neural network model for detecting abnormal accumulation in bone scintillation images.
(2)如(1)之學習裝置,其具備自上述複數個受驗者之閃爍圖切出拍攝到受驗者之骨之區域而製作補丁圖像之補丁圖像製作部, 上述學習部使用上述補丁圖像及與其對應之正確標籤作為指導資料而進行學習。 (2) The learning device according to (1), which includes a patch image creation unit that cuts out the area where the bones of the subject are photographed from the scintillation pictures of the plurality of subjects to create a patch image, and the learning unit uses The above patch image and the corresponding correct label are used as guidance materials for learning.
(3)如(2)之學習裝置,其中上述補丁圖像製作部於上述受驗者之閃爍圖上對既定大小之視窗進行掃描,於該視窗內拍攝到受驗者之骨時,切出上述視窗之區域作為上述補丁圖像。 (3) The learning device according to (2), wherein the patch image creation part scans a window of a predetermined size on the scintillation graph of the subject and cuts out the subject's bone in the window The area of the window is the patch image.
(4)如(2)或(3)之學習裝置,其具備於藉由上述補丁圖像製作部製作之補丁圖像中,求出包含正常集聚或異常集聚之補丁圖像與不包含正常集聚及異常集聚之任一者之補丁圖像之構成比的指導資料分析部。 (4) The learning device according to (2) or (3), further comprising a guidance-data analysis unit that obtains, among the patch images created by the patch image creation unit, the composition ratio between patch images containing normal or abnormal accumulation and patch images containing neither normal nor abnormal accumulation.
(5)如(4)之學習裝置,其具備以使藉由上述指導資料分析部求出之構成比包含於既定之範圍之方式,自藉由補丁圖像製作部製作之補丁圖像抽取不包含正常集聚與異常集聚之任一者之補丁圖像的補丁圖像選擇部。 (5) The learning device according to (4), further comprising a patch image selection unit that extracts, from the patch images created by the patch image creation unit, patch images containing neither normal accumulation nor abnormal accumulation so that the composition ratio obtained by the guidance-data analysis unit falls within a predetermined range.
(6)如(1)至(5)中任一項之學習裝置,其具備使藉由上述補丁圖像製作部製作之補丁圖像之至少一部分補丁圖像左右反轉或上下反轉之補丁圖像反轉部。 (6) The learning device according to any one of (1) to (5), further comprising a patch image inversion unit that flips at least some of the patch images created by the patch image creation unit left-right or up-down.
(7)如(1)至(6)中任一項之學習裝置,其中,上述類神經網路包含具有編碼器-解碼器構造,且將藉由編碼器構造而獲得之特徵映射輸入至解碼器構造之構造。 (7) The learning device according to any one of (1) to (6), wherein the neural network has an encoder-decoder structure and includes a structure in which the feature map obtained by the encoder structure is input to the decoder structure.
(8)一種學習方法,其係產生用於自受驗者之閃爍圖檢測異常集聚之類神經網路之模型者;其具備以下步驟:輸入複數個受驗者之閃爍圖及各閃爍圖中之正常集聚與異常集聚之正確標籤作為指導資料之步驟;及使用上述指導資料進行用以檢測骨閃爍圖之異常集聚之類神經網路之模型的學習之步驟。 (8) A learning method for generating a neural network model for detecting abnormal accumulation from a subject's scintillation image, comprising the steps of: inputting, as guidance data, the scintillation images of a plurality of subjects and the correct labels of normal accumulation and abnormal accumulation in each scintillation image; and learning, using the guidance data, a neural network model for detecting abnormal accumulation in bone scintillation images.
(9)一種程式製品,其係用以產生用於自受驗者之閃爍圖檢測異常集聚之類神經網路之模型者;其執行以下步驟:輸入複數個受驗者之閃爍圖及各閃爍圖中之正常集聚與異常集聚之正確標籤作為指導資料之步驟;及使用上述指導資料進行用以檢測骨閃爍圖之異常集聚之類神經網路之模型的學習之步驟。 (9) A program product for generating a neural network model for detecting abnormal accumulation from a subject's scintillation image, the program product executing the steps of: inputting, as guidance data, the scintillation images of a plurality of subjects and the correct labels of normal accumulation and abnormal accumulation in each scintillation image; and learning, using the guidance data, a neural network model for detecting abnormal accumulation in bone scintillation images.
(10)一種記憶有學習完成模型的記憶媒體,其係用於以自受驗者之閃爍圖檢測異常集聚之方式使電腦發揮功能者;其由類神經網路構成,該類神經網路具有捲積層、及逆捲積層,且該類神經網路包含將藉由捲積層而獲得之特徵映射輸入至逆捲積層之構造,上述記憶有學習完成模型的記憶媒體將複數個受驗者之閃爍圖及各閃爍圖中之正常集聚與異常集聚之正確標籤作為指導資料而進行學習,以自輸入至上述類神經網路之受驗者之閃爍圖檢測異常集聚之方式使電腦發揮功能。 (10) A memory medium storing a learned model for causing a computer to function so as to detect abnormal accumulation from a subject's scintillation image; the learned model is constituted by a neural network that has a convolutional layer and a deconvolutional layer and includes a structure in which the feature map obtained by the convolutional layer is input to the deconvolutional layer, the learned model having been trained using, as guidance data, the scintillation images of a plurality of subjects and the correct labels of normal accumulation and abnormal accumulation in each scintillation image, so that the computer functions to detect abnormal accumulation from a subject's scintillation image input to the neural network.
(11)一種異常集聚檢測裝置,其具備:記憶部,其記憶有藉由(2)至(7)中任一項之學習裝置學習而得之類神經網路之學習完成模型;輸入部,其輸入受驗者之閃爍圖;補丁圖像製作部,其自上述閃爍圖製作補丁圖像;推論部,其向自上述記憶部讀出之學習完成模型之輸入層輸入上述補丁圖像,並求出上述補丁圖像中所包含之異常集聚之區域;及輸出部,其輸出表示上述異常集聚區域之資料。 (11) An abnormal accumulation detection device, comprising: a memory unit that stores a learned neural network model obtained by the learning device according to any one of (2) to (7); an input unit that inputs a subject's scintillation image; a patch image creation unit that creates patch images from the scintillation image; an inference unit that inputs the patch images to the input layer of the learned model read from the memory unit and obtains abnormal-accumulation regions contained in the patch images; and an output unit that outputs data indicating the abnormal-accumulation regions.
本申請係主張以於2018年5月18日提出申請之日本申請特願2018-096186號為基礎之優先權,並將其揭示之全部內容引入於此。 This application claims priority based on Japanese Application No. 2018-096186 filed on May 18, 2018, and incorporates all the contents disclosed therein.
1‧‧‧學習裝置 1‧‧‧Learning device
10‧‧‧輸入部 10‧‧‧Input
11‧‧‧控制部 11‧‧‧Control Department
12‧‧‧濃度標準化處理部 12‧‧‧Concentration Standardization Department
13‧‧‧補丁圖像製作部 13‧‧‧ Patch image production department
14‧‧‧補丁圖像反轉部 14‧‧‧ Patch image reversal section
15‧‧‧學習部 15‧‧‧Learning Department
16‧‧‧記憶部 16‧‧‧ Memory Department
17‧‧‧輸出部 17‧‧‧ Output