
TWI307058B - Method for identifying objects in an image and computer readable medium - Google Patents


Info

Publication number
TWI307058B
Authority
TW
Taiwan
Prior art keywords
image
objects
pixel
range
pixels
Prior art date
Application number
TW095100920A
Other languages
Chinese (zh)
Other versions
TW200703148A (en)
Inventor
Michael Zahniser
Original Assignee
Cytyc Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cytyc Corp filed Critical Cytyc Corp
Publication of TW200703148A
Application granted
Publication of TWI307058B

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00 Image analysis
            • G06T7/10 Segmentation; Edge detection
              • G06T7/12 Edge-based segmentation
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
              • G06T2207/10072 Tomographic images
            • G06T2207/30 Subject of image; Context of image processing
              • G06T2207/30004 Biomedical image processing
                • G06T2207/30024 Cell structures in vitro; Tissue sections in vitro
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00 Arrangements for image or video recognition or understanding
            • G06V10/20 Image preprocessing
              • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
                • G06V10/267 Segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
            • G06V10/40 Extraction of image or video features
          • G06V20/00 Scenes; Scene-specific elements
            • G06V20/60 Type of objects
              • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
                • G06V20/695 Preprocessing, e.g. image segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

An improved image segmentation algorithm is provided that identifies each object in an image. Pixels of the image are sorted based on a range of attribute values of the image. These pixels are then added to a labeling image one by one starting with an extreme point in the range of attribute values. Features are calculated for each object located and these features are matched with pre-defined acceptance criteria. If there is a match, the object is output to an output image. The steps of adding pixels to the image, evaluating the features of the resulting objects and outputting the objects are repeated until a stopping point is reached.

Description

IX. Description of the Invention

[Brief Description of the Drawings]

Figure 1 illustrates a brightness "contour map" overlaid on the cells of a cell cluster.

Figure 2 illustrates the steps of an image segmentation algorithm according to one embodiment of the invention.

Figures 3a-3n collectively illustrate the process of adding pixels to a labeling image according to one embodiment of the invention.

Figure 4 illustrates the steps of an image segmentation algorithm according to a preferred embodiment of the invention.

Figure 5 illustrates the growth of the objects in an image at five different thresholds according to one embodiment of the invention.

Figure 6 illustrates an output image, according to one embodiment of the invention, showing the objects found at different thresholds when only each object's best state is retained.

[Technical Field of the Invention]

The present invention relates to the field of image analysis. More particularly, it relates to a novel image segmentation method.
[Prior Art]

In pathology and cytology it is often desirable to use automated or semi-automated instruments to find and measure cells or cell nuclei. Such instruments can be used for research or for screening; an example of the latter is screening of cervical samples using the Papanicolaou test (or Pap test). These instruments acquire and analyze digital images to find the cells of interest or to classify slides as normal or suspect.

To analyze the objects in a digital image, it is necessary to distinguish the objects from the image background. Before a cell or other object can be characterized, it must first be found. The process of finding the objects within a digital image is known as "segmentation". A variety of techniques are used in the segmentation process to locate the objects of interest so that subsequent computer analysis can characterize them; segmentation of an image containing cells, for example, may locate the nuclei and/or the cytoplasm of the cells.

The traditional approach to finding and classifying the objects within an image involves several stages: first, the image is segmented to produce a binary mask of the objects; next, the objects in the mask are labeled, each connected set of pixels being assigned a different label; and finally, various features of the labeled objects are measured.

One technique used to segment an image is "thresholding". In this technique a threshold value of image brightness is selected, and every pixel in the image is then compared with that threshold. Pixels whose brightness lies above the threshold are considered background pixels; pixels whose value lies below the threshold are considered object pixels. The threshold used to find objects can be selected from the image histogram, which is the frequency distribution of the darkness values found in the image. A thresholding algorithm may use the histogram to find a single threshold value — for example, the value halfway between the darkest and brightest pixels. Alternatively, the threshold may be chosen as the turning point between the numerous "background" pixels and the rarer "object" pixels. Finding an ideal threshold for every object in an image is a difficult task; usually no single threshold value is optimal for the several objects of different darkness values within a whole image.

Once a threshold has been selected and the thresholding process is completed, the "object" pixels form a binary mask of the objects in the image. The boundary around such a mask can be used to represent each object, although it may or may not represent the object accurately. Many methods have been developed to refine the boundary once an object has been found; such methods may use darkness information near the boundary, or constraints such as gradient, curvature, "similarity to a circle", and so on, to refine the boundary.

The techniques currently known for image segmentation are generally complex and time-consuming, and they do not always achieve high accuracy in the segmentation process, particularly when there is very little contrast between the object to be found and the background surrounding it. Current segmentation algorithms therefore often fail to locate objects properly. In cell image analysis, for example, a nucleus may be segmented improperly because the boundary found is too large or too small. This can lead to false positive events (the instrument incorrectly treats a normal object as suspect) or false negative events (the instrument misses a truly suspect object).

There is thus a need for improved segmentation for automated imaging, and for automated imaging devices, especially with accurate identification of object boundaries. Whatever the precise merits, features and advantages of the currently known segmentation techniques, none of them achieves or fulfills the purposes of the present invention.
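As a concrete illustration of the prior-art thresholding technique just described (not of the invention itself), the sketch below picks a single threshold halfway between the darkest and brightest pixels and builds a binary object mask. The toy image and the convention "object = darker than the threshold" follow the description above; everything else is an illustrative assumption.

```python
# A minimal sketch of prior-art "thresholding": one brightness threshold,
# chosen halfway between the darkest and brightest pixels, splits the image
# into "object" pixels (below the threshold) and "background" pixels.

def midpoint_threshold(image):
    """Pick a threshold halfway between the darkest and brightest pixel."""
    flat = [p for row in image for p in row]
    return (min(flat) + max(flat)) / 2.0

def binary_mask(image, threshold):
    """1 marks an object pixel (darker than the threshold), 0 background."""
    return [[1 if p < threshold else 0 for p in row] for row in image]

# Toy 5x5 image: a dark blob (value 40) on a bright background (value 200).
image = [
    [200, 200, 200, 200, 200],
    [200,  40,  40, 200, 200],
    [200,  40,  40, 200, 200],
    [200, 200, 200, 200, 200],
    [200, 200, 200, 200, 200],
]

t = midpoint_threshold(image)               # (40 + 200) / 2 = 120.0
mask = binary_mask(image, t)
print(t)                                    # 120.0
print(sum(p for row in mask for p in row))  # 4 object pixels
```

As the passage above notes, the weakness of this scheme is visible even here: a single global threshold cannot be optimal for several objects of different darkness values in the same image.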
SUMMARY OF THE INVENTION

The present invention provides a method of identifying each object in an image, the method comprising the steps of: (a) sorting pixels based on a range of attribute values of the image; (b) starting with an extreme point in the range of attribute values, adding the sorted pixels one by one to a labeling image to find the objects in the labeling image; (c) outputting the objects to an output image if the features of the objects match a predefined acceptance criterion; and (d) repeating steps (b) and (c) until a stopping point is reached, the stopping point representing another extreme point in the range of attribute values.

The present invention also provides an article of manufacture comprising a computer-readable medium with computer-readable program code embodied therein for identifying each object in an image, the medium comprising: (a) computer-readable program code for sorting pixels based on a range of attribute values of the image; (b) computer-readable program code for adding the sorted pixels one by one to a labeling image, starting with an extreme point in the range of attribute values, to find the objects in the labeling image; (c) computer-readable program code for outputting the objects to an output image if the features of the objects match a predefined acceptance criterion; and (d) computer-readable program code for repeating steps (b) and (c) until a stopping point is reached, the stopping point representing another extreme point in the range of attribute values.

The present invention further provides a method of identifying each object in an image under several thresholds, the method comprising the steps of: (a) sorting the pixels based on a range of attribute values of the pixels in the image, the range of attribute values matching the several thresholds; (b) starting with an extreme point in the range of attribute values, adding pixels one by one to a labeling image to produce new objects or updated old objects; (c) calculating features of the new objects produced and the old objects updated; (d) matching the calculated features of the new objects produced and the old objects updated against a predefined criterion; (e) outputting the new objects produced and the old objects updated to an output image if the features satisfy an acceptance criterion; and (f) repeating steps (b) through (e) until a stopping point is reached, the stopping point being selected from any of the following: another extreme point in the range of attribute values, a point representing the background pixel values in the range of attribute values, or a point representing pixel values unrelated to the new objects and the updated old objects.

[Embodiments]

Although the invention is illustrated in terms of preferred embodiments, it can be applied to and embodied in many different configurations. Preferred embodiments of the invention are described in detail below with reference to the drawings. It should be understood that the present disclosure is to be regarded as an exemplification of the principles of the invention together with a functional description of its construction, and that there is no intention to limit the invention to the embodiments illustrated; those skilled in the art will envision many other possible variations within the scope of the invention.

An improved automated image segmentation technique that identifies object boundaries in a digital two-dimensional image is described herein.
Although the image segmentation technique described here identifies cell nuclei, the technique itself can be applied to identify any object in a digital image — such as cytoplasm or tissue structures in biological applications, or, in non-biological applications, the different components on a circuit board or the man-made and natural features in satellite images. The technique can also be extended to three-dimensional images, such as those produced by X-ray, CAT (computerized axial tomography) scan or MRI (magnetic resonance imaging) devices. The three-dimensional pixel elements of such images are known as "voxels"; groups of voxels can represent an organ or a tumor in three dimensions.

A digital two-dimensional image can be thought of as a three-dimensional surface in which the height dimension represents the gray value (that is, the brightness) of each pixel. In a contour map of this surface, nuclei can be identified by finding the contours that fall within a certain size range and are round enough to be nuclei. If one contour is contained within another, the "better" of the two contours should be chosen. Figure 1 shows such a contour map overlaid on the cells of a cell cluster; an object of interest in the image is a connected set of pixels.

Finding these contours, which is equivalent to generating all the objects at all possible thresholds, is a mathematically complex and time-consuming task. A naïve approach would simply segment the image at every threshold, label the objects, measure their features, and keep the objects with the best features, eliminating objects that are subsets or supersets of the "best" objects. That approach would be computationally far too expensive.

Figure 2 illustrates an image segmentation algorithm according to one embodiment of the invention; the algorithm identifies each object found in the image by performing the following steps. First, the algorithm sorts the pixels based on a range of attribute values of the image (step 202). It then adds the sorted pixels one by one to a "labeling image" to find the objects in the labeling image, starting with an extreme point in the range of attribute values (step 204). A labeling image is an image in which the pixels of each distinct object are assigned a unique value, or "label". The extreme point used as the starting value may lie at either the lowest or the highest end of the attribute range, and the pixels may be sorted on attributes such as, but not limited to, brightness, hue or gradient. The features of the objects found are then compared with predefined criteria; if an object matches the criteria, the algorithm outputs it to an output image, as shown in step 206. A stopping point is also defined for the image: it represents the point in the range of attribute values at which the algorithm should stop repeating steps 204 and 206 (step 208). The stopping point may be another extreme point in the range of attribute values; in one embodiment it is the point in the range representing background pixel values, and in another embodiment it is a point representing pixel values unrelated to the objects being found.

Figure 4 illustrates the algorithm according to a preferred embodiment of the invention. In step 402, the pixels of the image are sorted based on a range of attribute values of the image, the range of attribute values matching a number of thresholds in the image. The thresholds are determined from a histogram of the attribute used. The pixels may be sorted on attributes such as, but not limited to, brightness, hue or gradient, and the histogram may likewise be indexed by brightness, hue, gradient and so on.
Instead of segmenting the image at each threshold, the sorted pixels are added one by one to a blank labeling image, starting with an extreme point in the range of attribute values, as shown in step 404; new objects are created, or old objects updated, in this process. If an added pixel is not adjacent to another pixel or to an object already placed on the labeling image (an old object), a new object is created (see Figures 3a, 3b and 3e). If the added pixel is adjacent to an old object, it is joined to that object, updating it (see Figures 3c, 3d, 3f, 3g, 3h, 3i, 3j, 3k, 3m and 3n). Further, if the added pixel connects two old objects, the two old objects are updated and merged into a single object (see Figure 3l). In step 406 the features of these new or updated objects are calculated, and in step 408 the features are matched against predefined acceptance criteria. If there is a match (that is, the acceptance criteria are satisfied), the new or updated object — representing a found nucleus — is output to the output image in step 410. The algorithm repeats steps 404-410 until the stopping point is reached (step 412). It should be noted that objects can be output to the output image at any time during the processing of the thresholds (that is, of the attribute values). Again, the stopping point may be another extreme point in the range of attribute values; in one embodiment it is the point representing the background pixel values in the range, and in another embodiment it is a point representing pixel values unrelated to the objects found.

The algorithm keeps track of the objects created or updated in the preceding steps, together with their features, and the objects are assigned labels (see, for example, Figures 3a and 3b). Whenever a pixel touching an existing object is added, that object's label is assigned to the pixel, and the object's features are updated to reflect the addition of the new pixel (see, for example, Figures 3d and 3g). When a pixel connecting two objects is added, the labels and features of the two objects are merged into one (see, for example, Figure 3l). Figures 3a-3n show how pixels are added one by one to an initially blank mask/labeling image, with labels assigned to the pixels as needed.

Because of the way the algorithm is designed, at any given time during the processing of a particular threshold there exists a record of the mask of the image at that threshold, together with the labels and features of all the objects found at it. Instead of regenerating this data for each threshold, however, the algorithm builds on the data from the previous threshold, which greatly increases its efficiency.

In one embodiment of the invention, an object is output to the output image whenever its features calculated at the current threshold are a better match to the acceptance criteria than the features calculated at all previous thresholds; the data from the previous best match is used to decide whether the new match is better. To compute the set of objects that best match the acceptance criteria among all the objects at all the thresholds, a method known as dynamic programming is used. The procedure passes through the thresholds one by one, recording the best set of objects that can be formed using only the objects found so far; each object "remembers" the best state it has reached. Figure 5 illustrates the growth of the objects in an image at five different thresholds: objects that are separate at one threshold may join at another. Each object in the figure "points to" its previous best state; an object with no arrow points to itself (not shown), meaning that its current state is better than any of its previous states.
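The labeling pass of steps 404-412 can be sketched as follows. This is a simplified illustration, not the patented implementation: pixel darkness is the only attribute, object area is the only feature maintained, the acceptance window (2 to 4 pixels) is an invented example, the merge bookkeeping uses a small union-find, and the "best state" output of the later embodiments is reduced to recording the last accepted area per label.

```python
# Sketch of the sorted-pixel labeling pass: sort pixels from darkest to
# brightest, add them one by one to a labeling image, create / join / merge
# objects, update a feature, and record objects that meet an acceptance
# criterion. Area stands in for the patent's richer feature set.

def segment(image, accept):
    h, w = len(image), len(image[0])
    # Step 402: sort pixel coordinates from darkest (objects) to brightest.
    order = sorted(((x, y) for y in range(h) for x in range(w)),
                   key=lambda p: image[p[1]][p[0]])
    label = {}       # (x, y) -> label: the labeling image
    parent = {}      # union-find over labels
    area = {}        # per-object feature
    best = {}        # last accepted area, keyed by label at acceptance time
    next_label = 1

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for x, y in order:                      # step 404: add pixels one by one
        roots = {find(label[(nx, ny)])
                 for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1))
                 if (nx, ny) in label}
        if not roots:                       # isolated pixel: new object
            lab = next_label; next_label += 1
            parent[lab] = lab; area[lab] = 0
        else:                               # join, merging objects if needed
            it = iter(roots); lab = next(it)
            for other in it:
                parent[other] = lab
                area[lab] += area.pop(other)
        label[(x, y)] = lab
        area[lab] += 1                      # step 406: update features
        if accept(area[lab]):               # steps 408-410: record acceptance
            best[lab] = area[lab]
    return best

image = [
    [9, 9, 9, 9],
    [9, 1, 2, 9],
    [9, 3, 9, 9],
    [9, 9, 9, 9],
]
# Hypothetical criterion: accept objects of 2 to 4 pixels.
result = segment(image, lambda a: 2 <= a <= 4)
print(result)   # {1: 3} — the dark blob, last accepted when its area was 3
```

Once the bright background pixels start joining, the object outgrows the acceptance window and is never accepted again, so the recorded best state (area 3) survives — a toy version of the "remember the best state reached" behavior described above.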
Therefore, the new state of an object need only be compared against the state its arrow points to, since that state represents the best state reached so far. If two objects merge at a given threshold, the arrow of the newly merged object points to the better of the two objects that formed it.

As it passes each threshold, the procedure maintains an output image containing the best objects from the set of objects formed so far. Whenever an object grows into a new state that is better than its previously recorded best state, the updated object is copied into the output image. Figure 6 shows the output image as it grows: first it contains the objects from the first threshold, then the best objects from the first and second thresholds, then the best objects from the first three thresholds, and so on. The final output image contains all the objects that were recorded as best and never later rejected in favor of a better one (compare with Figure 5).

Thus, according to one embodiment of the algorithm, when an object matches the acceptance criteria it is not output to the output image directly; rather, it is output conditionally, based on a comparison between the object's current state and its previous states.

Although the algorithm of the invention as described here identifies nuclei based on features such as area and perimeter, and accepts objects matching a roundness acceptance criterion (the square of the perimeter divided by the area), other features and criteria can be used to identify objects, and the acceptance criteria can be tuned to a particular imaging application (biological or non-biological). When objects are identified on criteria other than roundness, other features can be used instead. The moments of inertia, the eccentricity, and the lengths of the major and minor elliptical axes are well suited to ellipses; the contrast with surrounding pixels, the mean or standard deviation of gray value or optical density, and structural features are a few of the many other features that can easily be measured and used to specify the acceptance criteria for the best segmentation. Other acceptance criteria can also be used, such as size, shape, structure, color, density, contrast and the like. Multiple sets of criteria allow different kinds of objects — nuclei, nuclei of different shapes, cytoplasm, nuclear or cytoplasmic "inclusions", and clusters of cells — to be found simultaneously in a single image. It should be understood, however, that the invention is not limited to the use of these features or acceptance criteria.

The perimeter measurement of an object is updated as pixels are added to it; if only a reasonable estimate of the perimeter is required, this can be done using just the four pixels closest to the added pixel. Thus, whenever a pixel is "placed", only that pixel and its four nearest neighbors need to be examined to decide how the object's features should be updated. The only situation in which more than these four neighbors are affected is when two objects are merged and one of the two must be assigned the other's label; nevertheless, the merging process allows the algorithm's running time to approach the optimum of a running time linearly proportional to the number of pixels. This perimeter measurement shows how object features can be computed from the previous objects — that is, it shows how the algorithm builds on the data from the previous threshold and increases its efficiency.

As another example, the ellipticity (a comparison of the measured object with the ellipse defined by its moments of inertia) can be used as a feature for identifying nuclei.
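The incremental feature update described above can be sketched as follows — a hedged illustration, not the patent's code. On a square grid, adding a pixel changes the object's perimeter by 4, minus 2 for each of its four nearest neighbors already in the object, so only those neighbors need be examined; the roundness criterion named in the text (perimeter squared divided by area) then falls out of the running totals.

```python
# Incremental area/perimeter update when a pixel joins an object: each of the
# new pixel's 4-neighbors already in the object hides one edge of the new
# pixel and one edge of its own, hence the "4 - 2 * neighbors" delta.

def add_pixel(obj, x, y):
    """Update obj's area and perimeter for the added pixel (x, y)."""
    neighbors = sum((nx, ny) in obj["pixels"]
                    for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)))
    obj["pixels"].add((x, y))
    obj["area"] += 1
    obj["perimeter"] += 4 - 2 * neighbors

def roundness(obj):
    """Perimeter squared over area: 4*pi (~12.57) for an ideal continuous
    circle, 16 for a square of pixels, larger for less compact shapes."""
    return obj["perimeter"] ** 2 / obj["area"]

obj = {"pixels": set(), "area": 0, "perimeter": 0}
for x, y in [(0, 0), (1, 0), (0, 1), (1, 1)]:   # grow a 2x2 square
    add_pixel(obj, x, y)

print(obj["area"], obj["perimeter"])   # 4 8
print(roundness(obj))                  # 16.0
```

Because each addition touches only five pixels, the feature update is constant-time per pixel, consistent with the near-linear running time claimed above.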
Segmentation based on ellipticity improves segmentation results because this segmentation distinguishes elongated nuclei from irregularly shaped articles. It can thus be seen that the algorithm of the present invention provides various improvements: reduced processing time for image segmentation, time-consuming pre-processing to find possible related objects and then establishing relevant regions to more accurately define an object. 'Erase' processing of s with ambiguous darkness and contrast images to minimize incorrect negative examples 'Processing images with abnormal groups to minimize incorrect negative examples' Using multiple acceptance criteria to identify a single at the same time Many objects in the image. It should be noted that the algorithm of the present invention differs from other published repetitive image analysis techniques such as zone growing algorithms and effective contours (also known as snakes) by first dividing an image into images. The function of the region growth algorithm of many separate regions (different pixels, small groups of pixels, adjacent images with the same gray level, -13-1307058, etc.). Then, by contacting these separate regions and having similar characteristics In other areas, the area “grows.” Usually, this is a repeated process that stops once the combination of similar areas no longer can be completed. The goal of this technique is to match the last set of areas to the object being segmented. In the segmentation algorithm of the present invention, the manner in which an object will "grow" is predetermined: there is only one path, that is, growth can be taken and the algorithm simply "notes" the object that changes as it travels along the path. feature. The measured features do not affect which pixel is added or which regions are merged. After a single object passes through the image, the "best object" observed in the path is reported. 
In region growing techniques, the characteristics of the regions determine which regions are grown or merged. Instead of following a linear path, region growing resembles a search through a tree with many branches, where object features are used to decide which branch to take. Typically, multiple passes are made through the image, and the final state of the regions is reported. A region growing algorithm does not store the best state achieved by an object; indeed, the algorithm may have to "back up" to return to a best state. "Active contours", also known as "snakes", are object shapes represented by polygons or parametric equations overlaid on the image. In active contour image analysis techniques, these outlines "evolve" to improve how well their shape matches the underlying image, and the iterative process ends when the shape can no longer be improved by further evolution. The present invention, by contrast, operates on individual pixels, using only the immediate neighbors of each pixel; in the active contour technique, the contour is not represented by pixels at all but by a mathematical function. The differences between the present algorithm and the active contour technique are otherwise the same as those noted for region growing. The main difference between the algorithm of the present invention and other segmentation algorithms is that the other algorithms involve an iterative process that uses object features to choose which direction to move next, finding the best path through a tree or graph. Sometimes this requires backtracking, because such an algorithm tries to end at the best possible outcome. The present algorithm follows a linear, predetermined path, noting and remembering all the "best" objects encountered along the way.
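A minimal end-to-end sketch of this single-pass behavior: pixels are sorted by gray value and added one at a time from the dark extreme; union-find labeling updates each component's area and gray-value sum incrementally; whenever a component satisfies the acceptance criteria, its current state is remembered as its "best". The acceptance criteria used here (an area range plus a maximum mean gray value) are hypothetical simplifications standing in for the richer feature sets the patent allows, and all names and structure are illustrative:

```python
import numpy as np

def segment(image, min_area, max_area, max_mean):
    """Sweep from darkest to brightest pixel, growing objects along the
    one predetermined path and remembering each object's best accepted
    state; returns the sorted list of best areas found."""
    h, w = image.shape
    # (a) classify pixels by attribute value; stable sort keeps raster
    # order among equal values.
    order = np.argsort(image, axis=None, kind="stable")
    parent, area, gray, saved = {}, {}, {}, {}
    results = []

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]     # path halving
            p = parent[p]
        return p

    placed = np.zeros((h, w), dtype=bool)
    for flat in order:                        # (b) add pixels one by one
        y, x = divmod(int(flat), w)
        p = (y, x)
        parent[p], area[p], gray[p] = p, 1, float(image[y, x])
        placed[y, x] = True
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and placed[ny, nx]:
                ra, rb = find(p), find((ny, nx))
                if ra != rb:                  # two objects merge
                    parent[ra] = rb
                    area[rb] += area[ra]
                    gray[rb] += gray[ra]
                    # Simplification: treat the absorbed object's best
                    # state as final and report it.
                    if ra in saved:
                        results.append(saved.pop(ra))
        r = find(p)
        if min_area <= area[r] <= max_area and gray[r] / area[r] <= max_mean:
            saved[r] = area[r]                # (c) remember best accepted state
    results.extend(saved.values())            # (d) sweep ran to the other extreme
    return sorted(results)
```

On an image containing two dark blobs on a bright background, the sweep reports the best state of each blob before the background floods in and merges everything, which is the "noting the best object along the path" behavior described above.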
In addition, the present invention provides an article of manufacture comprising computer readable program code, embodied in one or more modules, for segmenting an image and identifying objects. That is, the invention includes a computer program product: a storage medium with program code stored on it. The program code can be used to instruct a computer to perform any of the methods associated with the present invention. Computer storage media include, but are not limited to, CD-ROM, DVD, tape, compact disc, hard drive, floppy disk, ferroelectric memory, flash memory, ferromagnetic memory, optical storage, charge coupled devices, magnetic or optical cards, smart cards, EEPROM, EPROM, RAM, ROM, DRAM, SRAM, SDRAM, or any other suitable static or dynamic memory or data storage device. The software module is implemented as a computer-program-code-based product that performs the following steps: (a) classifying pixels based on a range of an attribute value of an image; (b) starting at one extreme of the attribute value range, adding the classified pixels one by one to a labeled image to find objects in the labeled image; (c) if the features of the objects match predetermined acceptance criteria, outputting the objects to an output image; and (d) repeating steps (b) and (c) until a stopping point is reached, the stopping point representing the other extreme of the attribute value range. Conclusion: A system and method have been shown in the above embodiments for efficiently implementing improved image segmentation. While preferred embodiments have been shown and described, it will be understood that they are not intended to limit the invention; rather, the invention is intended to cover all modifications within the spirit and scope of the invention as defined in the appended claims. For example, the invention should not be limited to a particular software/program, computing environment, or specific hardware.
Moreover, the algorithm of the present invention should not be limited by the type of image (biological, non-biological, two-dimensional, three-dimensional, etc.), the number of thresholds applied to the image, the type of objects identified in the image, or the features and acceptance criteria used to identify the objects.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 illustrates a brightness "contour map" superimposed on an image of a population of cells.
Fig. 2 illustrates the steps of an image segmentation algorithm in accordance with an embodiment of the present invention.
Figs. 3a-3n collectively illustrate the process of adding pixels to a labeled image in accordance with one embodiment of the present invention.
Fig. 4 illustrates the steps of an image segmentation algorithm in accordance with a preferred embodiment of the present invention.
Fig. 5 illustrates the growth of objects in an image at five different thresholds in accordance with one embodiment of the present invention.
Fig. 6 illustrates an output image of objects found at different thresholds, in which only the best state of each object is kept, in accordance with one embodiment of the present invention.

Claims (1)

1. A method of identifying each object in an image, the method comprising the steps of: (a) classifying pixels based on a range of an attribute value of the image; (b) starting at one extreme of the attribute value range, adding the classified pixels one by one to a labeled image to find objects in the labeled image, wherein each pixel added to the labeled image creates a new object or an updated old object; (c) generating an updated list of objects for each pixel added to the labeled image, wherein the updated list contains features of individual objects and identifies the best match of the features against predetermined acceptance criteria for the new object or updated old object; (d) if the features of the objects match the predetermined acceptance criteria better than a previous best match, outputting the objects to an output image; and (e) repeating steps (b) and (c) until a stopping point is reached, the stopping point representing the other extreme of the attribute value range.
2. The method of claim 1, wherein the pixels are classified based on any of the attribute values of brightness, hue, or gradient.
3. The method of claim 1, wherein the new object is created if the added pixel is not adjacent to another pixel or object already placed on the labeled image.
4. The method of claim 1, wherein the updated old object is created if the added pixel is adjacent to an existing old object, or if the added pixel joins two old objects already present on the image.
5. The method of claim 1, wherein the objects are assigned labels, and the labels are updated for each added pixel.
6. The method of claim 5, wherein the labels of the objects build on labels from a previously added pixel.
7. The method of claim 1, wherein the features of the objects are computed from features of objects found when a pixel was previously added.
8. The method of claim 1, wherein the method is performed on two-dimensional or three-dimensional images from biological or non-biological applications.
9. The method of claim 8, wherein the objects of the biological applications are any of: cells, cell nuclei, cytoplasm, inclusions of the nucleus or cytoplasm, clumps of cells, or objects within images produced by an imaging device such as X-ray, CAT scan, or magnetic resonance imaging.
10. The method of claim 8, wherein the objects of the non-biological applications are any of: components on a circuit board, or man-made or natural objects in a satellite image.
11. The method of claim 1, wherein the features are any of: perimeter, area, moment of inertia, eccentricity, elliptical axes, best fit of an ellipse, contrast, gray value, optical density mean, standard deviation, or texture.
12. The method of claim 1, wherein the acceptance criteria are any of: size, shape, texture, color, or density.
13. The method of claim 1, wherein the method uses multiple sets of acceptance criteria to simultaneously identify different kinds of objects in the image.
14. The method of claim 1, wherein the stopping point is a point in the attribute value range representing background pixel values, or a point representing pixel values unrelated to the objects being found.
15. A computer readable medium comprising computer readable program code embodied therein for identifying each object in an image, the medium comprising: (a) computer readable program code for classifying pixels based on a range of an attribute value of the image; (b) computer readable program code for starting at one extreme of the attribute value range and adding the classified pixels one by one to a labeled image to find objects in the labeled image, wherein each pixel added to the labeled image creates a new object or an updated old object; (c) computer readable program code configured to update a list of objects for each pixel added to the labeled image, wherein the updated list contains features of individual objects and identifies the best match of the features against predetermined acceptance criteria for the new object or updated old object; (d) computer readable program code for outputting the objects to an output image if the features of the objects match the predetermined acceptance criteria better than a previous best match; and (e) computer readable program code for repeating steps (b) through (d) until a stopping point is reached, the stopping point representing the other extreme of the attribute value range.
16. The computer readable medium of claim 15, wherein the stopping point is a point in the attribute value range representing background pixel values, or a point representing pixel values unrelated to the objects being found.
17. A method of identifying each object in an image at a plurality of thresholds, the method comprising the steps of: (a) classifying the pixels based on a range of an attribute value of the pixels in the image, wherein the attribute value range corresponds to the plurality of thresholds; (b) starting at one extreme of the attribute value range, adding each pixel one by one to a labeled image to create new objects or updated old objects; (c) computing features of the created new objects and the updated old objects; (d) comparing the computed features of the created new objects and the updated old objects against predetermined acceptance criteria; (e) if the computed features match the predetermined acceptance criteria for the new object or updated old object better than before, outputting the created new objects and the updated old objects to an output image; and (f) repeating steps (b) through (e) until a stopping point is reached, the stopping point selected from any of: the other extreme of the attribute value range, a point representing background pixel values in the attribute value range, or a point representing pixel values unrelated to the new objects and the updated old objects.
18. The method of claim 17, wherein the pixels are classified based on any of the attribute values of brightness, hue, or gradient.
19. The method of claim 17, wherein the plurality of thresholds are determined from a histogram computed using the attribute values of brightness, hue, or gradient.
20. The method of claim 17, wherein the new objects are created if the added pixel is not adjacent to another pixel or object already placed on the labeled image.
21. The method of claim 17, wherein the updated old objects are created if the added pixel is adjacent to existing old objects or if the added pixel joins any of the old objects.
22. The method of claim 17, wherein the method is performed on two-dimensional or three-dimensional images from biological or non-biological applications.
23. The method of claim 22, wherein the objects of the biological applications are any of: cells, cell nuclei, cytoplasm, inclusions of the nucleus or cytoplasm, clumps of cells, or objects within images produced by an imaging device such as X-ray, CAT scan, or magnetic resonance imaging.
24. The method of claim 22, wherein the objects of the non-biological applications are any of: components on a circuit board, or man-made or natural objects in a satellite image.
25. The method of claim 17, wherein the features are any of: perimeter, area, moment of inertia, eccentricity, elliptical axes, best fit of an ellipse, contrast, gray value, optical density mean, standard deviation, or texture.
26. The method of claim 17, wherein the acceptance criteria are any of: size, shape, texture, color, or density.
27. The method of claim 17, wherein the method uses multiple sets of acceptance criteria to simultaneously identify different kinds of objects in the image.
28. The method of claim 17, wherein the created new objects and the updated old objects are assigned labels, and the labels are updated for each added pixel.
29. The method of claim 28, wherein the labels build on labels from a previously added pixel.
TW095100920A 2005-01-10 2006-01-10 Method for identifying objects in an image and computer readable medium TWI307058B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US64211005P 2005-01-10 2005-01-10
US11/328,354 US7689038B2 (en) 2005-01-10 2006-01-09 Method for improved image segmentation

Publications (2)

Publication Number Publication Date
TW200703148A TW200703148A (en) 2007-01-16
TWI307058B true TWI307058B (en) 2009-03-01

Family

ID=36593741

Family Applications (1)

Application Number Title Priority Date Filing Date
TW095100920A TWI307058B (en) 2005-01-10 2006-01-10 Method for identifying objects in an image and computer readable medium

Country Status (15)

Country Link
US (2) US7689038B2 (en)
EP (1) EP1836680B1 (en)
JP (1) JP4825222B2 (en)
KR (1) KR101255865B1 (en)
CN (1) CN101103373B (en)
AT (1) ATE477556T1 (en)
AU (1) AU2006205061B2 (en)
BR (1) BRPI0606705B1 (en)
CA (1) CA2591241C (en)
DE (1) DE602006016066D1 (en)
DK (1) DK1836680T3 (en)
MX (1) MX2007008363A (en)
PT (1) PT1836680E (en)
TW (1) TWI307058B (en)
WO (1) WO2006076312A2 (en)

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7480412B2 (en) * 2003-12-16 2009-01-20 Siemens Medical Solutions Usa, Inc. Toboggan-based shape characterization
US7689038B2 (en) * 2005-01-10 2010-03-30 Cytyc Corporation Method for improved image segmentation
US20070177799A1 (en) * 2006-02-01 2007-08-02 Helicos Biosciences Corporation Image analysis
US9020964B1 (en) * 2006-04-20 2015-04-28 Pinehill Technology, Llc Generation of fingerprints for multimedia content based on vectors and histograms
US7773807B2 (en) * 2006-08-29 2010-08-10 Siemens Medical Solutions Usa, Inc. Seed segmentation using l∞ minimization
JP5088329B2 (en) * 2007-02-13 2012-12-05 日本電気株式会社 Cell feature amount calculating apparatus and cell feature amount calculating method
JP2009082463A (en) * 2007-09-28 2009-04-23 Fujifilm Corp Image analysis apparatus, image processing apparatus, image analysis program, image processing program, image analysis method, and image processing method
US8175390B2 (en) * 2008-03-28 2012-05-08 Tandent Vision Science, Inc. System and method for illumination invariant image segmentation
US9602777B2 (en) 2008-04-25 2017-03-21 Roche Diagnostics Hematology, Inc. Systems and methods for analyzing body fluids
WO2012030313A1 (en) * 2008-04-25 2012-03-08 James Winkelman Method of determining a complete blood count and a white blood cell differential count
TWI385595B (en) * 2008-08-22 2013-02-11 Univ Ishou Image segmentation method using image region merging algorithm
US8260002B2 (en) * 2008-09-26 2012-09-04 Axis Ab Video analytics system, computer program product, and associated methodology for efficiently using SIMD operations
TWI405145B (en) 2008-11-20 2013-08-11 Ind Tech Res Inst Image segmentation marking method and system based on regional characteristics of pixels, and computer recordable medium
US8379960B2 (en) * 2009-03-30 2013-02-19 Ge Healthcare Bio-Sciences Corp. System and method for distinguishing between biological materials
TWI381321B (en) * 2009-04-30 2013-01-01 Ind Tech Res Inst Method for image recombination of multiple images and identifying image and system for identifying image and outputting identification result
US8335374B2 (en) 2009-08-12 2012-12-18 Genetix Corporation Image segmentation
US9047674B2 (en) * 2009-11-03 2015-06-02 Samsung Electronics Co., Ltd. Structured grids and graph traversal for image processing
TWI419061B (en) 2010-01-18 2013-12-11 Pixart Imaging Inc Method for recognizing multiple objects
US8593457B2 (en) * 2010-05-27 2013-11-26 National Tsing Hua University Method of three-dimensional image data processing
WO2012029817A1 (en) 2010-08-30 2012-03-08 三洋電機株式会社 Observation device, observation program, and observation system
US9111343B2 (en) 2011-01-18 2015-08-18 Roche Diagnostics Hematology, Inc. Microscope slide coordinate system registration
CN102103744A (en) * 2011-01-28 2011-06-22 武汉大学 Image segmentation method based on minimum spanning trees and statistical learning theory
CN102096816B (en) * 2011-01-28 2012-12-26 武汉大学 Multi-scale multi-level image segmentation method based on minimum spanning tree
CN111610138B (en) 2011-04-15 2023-06-09 罗氏血液诊断股份有限公司 Measure cell volume and composition
WO2013016039A1 (en) * 2011-07-22 2013-01-31 Constitution Medical, Inc. Identifying and measuring reticulocytes
US9123120B2 (en) * 2012-08-24 2015-09-01 DR Vision Technologies LLC Progressive decision for cellular process selection
US9292763B2 (en) * 2013-07-25 2016-03-22 Analog Devices Global System, method, and medium for image object and contour feature extraction
TWI496112B (en) * 2013-09-13 2015-08-11 Univ Nat Cheng Kung Cell image segmentation method and a nuclear-to-cytoplasmic ratio evaluation method using the same
EP3213504B1 (en) * 2014-10-28 2022-04-13 Hewlett-Packard Development Company, L.P. Image data segmentation
JP6635108B2 (en) 2015-02-23 2020-01-22 コニカミノルタ株式会社 Image processing apparatus, image processing method, and image processing program
WO2016172612A1 (en) 2015-04-23 2016-10-27 Cedars-Sinai Medical Center Automated delineation of nuclei for three dimensional (3-d) high content screening
EP3299811B1 (en) * 2015-05-22 2020-04-15 Konica Minolta, Inc. Image processing device, image processing method, and program for image processing
US9754378B2 (en) * 2016-01-21 2017-09-05 Molecular Devices, Llc System and method for segmentation of three-dimensional microscope images
DE102016105102A1 (en) * 2016-03-18 2017-09-21 Leibniz-Institut für Photonische Technologien e. V. Method for examining distributed objects
WO2017207373A1 (en) * 2016-06-03 2017-12-07 Koninklijke Philips N.V. Biological object detection
US11253852B2 (en) 2017-08-17 2022-02-22 Abbott Point Of Care Inc. Devices, systems, and methods for performing optical assays
US11067526B2 (en) 2017-08-17 2021-07-20 Abbott Point Of Care Inc. Devices, systems, and methods for performing optical and electrochemical assays
US10402623B2 (en) 2017-11-30 2019-09-03 Metal Industries Research & Development Centre Large scale cell image analysis method and system
US11244459B2 (en) * 2018-12-16 2022-02-08 Masahiko Sato Method for segmentation of grayscale images and segmented area tracking
KR102260169B1 (en) * 2019-10-08 2021-06-02 주식회사 수아랩 Method to generate data
CN112862829B (en) * 2019-11-27 2024-03-12 武汉Tcl集团工业研究院有限公司 Label image segmentation method, device and storage medium
CN114445421B (en) * 2021-12-31 2023-09-29 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Identification and segmentation method, device and system for nasopharyngeal carcinoma lymph node region
CN117522760B (en) * 2023-11-13 2024-06-25 书行科技(北京)有限公司 Image processing method, device, electronic equipment, medium and product
WO2025131829A1 (en) 2023-12-18 2025-06-26 F. Hoffmann-La Roche Ag Method and system for applying a liquid sample onto a substrate for image analysis
WO2025196261A1 (en) 2024-03-22 2025-09-25 Roche Diagnostics International Ag Automated method for microscopic analysis of a biological sample

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4484081A (en) * 1980-09-19 1984-11-20 Trw Inc. Defect analysis system
US6021213A (en) * 1996-06-13 2000-02-01 Eli Lilly And Company Automatic contextual segmentation for imaging bones for osteoporosis therapies
WO1999057683A1 (en) * 1998-05-04 1999-11-11 The Johns Hopkins University Method and apparatus for segmenting small structures in images
GB9918681D0 (en) 1999-08-09 1999-10-13 Smithkline Beecham Plc Novel method
US7167583B1 (en) * 2000-06-28 2007-01-23 Landrex Technologies Co., Ltd. Image processing system for use with inspection systems
US7187389B2 (en) * 2001-04-12 2007-03-06 International Business Machines Corporation System and method for simultaneous display of multiple object categories
GB0115615D0 (en) * 2001-06-27 2001-08-15 Univ Coventry Image segmentation
US6985612B2 (en) * 2001-10-05 2006-01-10 Mevis - Centrum Fur Medizinische Diagnosesysteme Und Visualisierung Gmbh Computer system and a method for segmentation of a digital image
US7123766B2 (en) * 2002-02-11 2006-10-17 Cedara Software Corp. Method and system for recognizing and selecting a region of interest in an image
GB0226787D0 (en) * 2002-11-18 2002-12-24 Qinetiq Ltd Measurement of mitotic activity
US7689038B2 (en) * 2005-01-10 2010-03-30 Cytyc Corporation Method for improved image segmentation

Also Published As

Publication number Publication date
DK1836680T3 (en) 2010-12-06
CN101103373A (en) 2008-01-09
US7881532B2 (en) 2011-02-01
CA2591241C (en) 2014-07-22
CA2591241A1 (en) 2006-07-20
KR101255865B1 (en) 2013-04-17
EP1836680B1 (en) 2010-08-11
KR20070097053A (en) 2007-10-02
US20070036436A1 (en) 2007-02-15
CN101103373B (en) 2010-06-02
MX2007008363A (en) 2007-09-06
BRPI0606705A2 (en) 2009-07-07
EP1836680A2 (en) 2007-09-26
AU2006205061B2 (en) 2010-08-12
WO2006076312A3 (en) 2006-09-21
JP4825222B2 (en) 2011-11-30
PT1836680E (en) 2010-11-16
DE602006016066D1 (en) 2010-09-23
BRPI0606705B1 (en) 2018-06-19
ATE477556T1 (en) 2010-08-15
AU2006205061A1 (en) 2006-07-20
JP2008527546A (en) 2008-07-24
US7689038B2 (en) 2010-03-30
US20100150443A1 (en) 2010-06-17
TW200703148A (en) 2007-01-16
WO2006076312A2 (en) 2006-07-20

Similar Documents

Publication Publication Date Title
TWI307058B (en) Method for identifying objects in an image and computer readable medium
CN113780259B (en) A pavement defect detection method, device, electronic device and readable storage medium
US12026875B2 (en) Machine learning using distance-based similarity labels
CN114998220B (en) Tongue image detection and positioning method based on improved Tiny-YOLO v4 natural environment
Kovalev et al. Multidimensional co-occurrence matrices for object recognition and matching
CN113454733A (en) Multi-instance learner for prognostic tissue pattern recognition
CN110211108A (en) A kind of novel abnormal cervical cells automatic identifying method based on Feulgen colouring method
CN111310800B (en) Image classification model generation method, device, computer equipment and storage medium
CN111886630A (en) Three-dimensional cell and tissue image analysis for cellular and subcellular morphological modeling and classification
CN110084270A (en) Pathological slice image recognition method and device
CN107194319B (en) Mitosis positioning and identifying method based on support vector machine sequencing
JP2015508501A (en) Supervised classification method for classifying cells contained in microscopic images
CN118132786A (en) Method, device, equipment and storage medium for retrieving similar pictures
CN113255721B (en) Tumor peripheral surface auditory nerve recognition method based on machine learning
ES2351052T3 (en) METHOD FOR IMPROVED SEGMENTATION OF IMAGES.
JPS60171575A (en) Feature representation method using dimension reduction method
El-Zehiry et al. Cell segmentation in digital holographic images
Mantau et al. Detecting ellipses in embryo images using arc detection method with particle swarm for Blastomere-quality measurement system
HK40053751A (en) Machine learning using distance-based similarity labels
CN120912557A (en) Neuron three-dimensional morphological reconstruction quality assessment method
CN121033088A (en) A method and apparatus for cell segmentation in bright-field images
CN120219283A (en) Method and system for processing cell images based on DistSeg model
HK40051109A (en) Multiple instance learner for prognostic tissue pattern identification
Li Locality sensitive modelling approach for object detection, tracking and segmentation in biomedical images
Aotake et al. Automated Dynamic Cellular Analysis in Time-Lapse Microscopy