
TW201225003A - System and method for marking differences of two images - Google Patents


Info

Publication number
TW201225003A
TW201225003A TW99142793A
Authority
TW
Taiwan
Prior art keywords
image
primitive
channel matrix
points
point
Prior art date
Application number
TW99142793A
Other languages
Chinese (zh)
Inventor
Guang-Jian Wang
Dai-Gang Zhang
Jin-rong ZHAO
xiao-mei Liu
Original Assignee
Hon Hai Prec Ind Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hon Hai Prec Ind Co Ltd filed Critical Hon Hai Prec Ind Co Ltd
Priority to TW99142793A priority Critical patent/TW201225003A/en
Publication of TW201225003A publication Critical patent/TW201225003A/en

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a system and method for marking the differences between two images. The system is installed in and run by a computer connected to an image capturing device. The system includes an image obtaining module, an array construction module, an array calculation module, and an image marking module. The system obtains a captured image of a target object using the image capturing device, and obtains a standard image of the target object from a storage device of the computer. The system then automatically compares the captured image with the standard image to find the parts where the two images differ, and marks those parts in the captured image with a special shape.

Description

VI. Description of the Invention:

【Technical Field】

[0001] The present invention relates to an image processing system and method, and more particularly to a system and method for marking the parts where two images differ.
【Prior Art】

[0002] When an image of an object under test (for example, a motherboard) is captured under ordinary ambient lighting, the light source may change, for example from one fluorescent lamp to two, and the resulting change in brightness makes the captured image differ from the actual image. The industry therefore uses automatic optical inspection (AOI) equipment to optically inspect the captured image of the object and locate the parts that differ from the actual image. When comparing and outputting the image under test against the actual image, the industry commonly uses iterative point-by-point analysis to solve for the center points of the differing parts of the two images. However, when a large image, such as a 4102x3384 image of a CPU socket, is analyzed point by point, it often takes several minutes to find the differing parts, so the analysis is inefficient and costly.

【Summary of the Invention】

[0003] In view of the above, it is necessary to provide an image-difference marking system and method that can mark the parts of a captured image of an object under test that differ from its actual image when changes in the external light source cause light-and-shadow interference.

[0004] The image-difference marking system is installed in and runs on a computer connected to an image capturing device. The system includes: an image obtaining module, which captures an image of the object under test through the image capturing device, obtains a standard image of the object under test from the memory of the computer, obtains the difference image of the two images by comparing their image average energy densities, and separates the difference image into an R grayscale image, a G grayscale image, and a B grayscale image along the three RGB channels; a matrix construction module, which constructs an R channel matrix group from the brightness values of each pixel and its surrounding pixels in the R grayscale image, a G channel matrix group from the brightness values of each pixel and its surrounding pixels in the G grayscale image, and a B channel matrix group from the brightness values of each pixel and its surrounding pixels in the B grayscale image, and which selects the negative definite R channel matrices from the R channel matrix group, the negative definite G channel matrices from the G channel matrix group, and the negative definite B channel matrices from the B channel matrix group; a matrix calculation module, which solves, in the negative definite R channel matrices, for the first set of pixels corresponding to local maxima, solves, in the negative definite G channel matrices, for the second set of pixels corresponding to local maxima, solves, in the negative definite B channel matrices, for the third set of pixels corresponding to local maxima, and takes the intersection of the three pixel sets to obtain the sequence of pixels at the positions to be marked; and an image marking module, which, for each pixel in the sequence, searches in the four directions around that pixel for pixels whose brightness value is zero, takes the distance between the zero-brightness pixel and the center pixel as the marking radius of that center pixel, and, with each pixel in the sequence as the center and its corresponding marking radius as the marking range, marks in the captured image the parts that differ from the standard image.

[0005] The image-difference marking method includes the steps of: capturing an image of the object under test through the image capturing device, and obtaining a standard image of the object under test from the memory;
obtaining the difference image of the two images by comparing the image average energy densities of the captured image and the standard image; separating the difference image into an R grayscale image, a G grayscale image, and a B grayscale image along the three RGB channels; constructing an R channel matrix group from the brightness values of each pixel and its surrounding pixels in the R grayscale image, a G channel matrix group from the brightness values of each pixel and its surrounding pixels in the G grayscale image, and a B channel matrix group from the brightness values of each pixel and its surrounding pixels in the B grayscale image; selecting the negative definite R channel matrices from the R channel matrix group, the negative definite G channel matrices from the G channel matrix group, and the negative definite B channel matrices from the B channel matrix group; solving, in the negative definite R channel matrices, for the first set of pixels corresponding to local maxima, solving, in the negative definite G channel matrices, for the second set of pixels corresponding to local maxima, and solving, in the negative definite B channel matrices, for the third set of pixels corresponding to local maxima; taking the intersection of the three pixel sets to obtain the sequence of pixels at the positions to be marked; searching, with each pixel in the sequence as the center, in the four directions around it for pixels whose brightness value is zero, and taking the distance between the zero-brightness pixel and the center pixel as the marking radius of that center pixel; and, with each pixel in the sequence as the center and its corresponding marking radius as the marking range, marking in the captured image the parts that differ from the standard image.

[0006] Compared with the prior art, the image-difference marking system and method of the present invention use computer software to mark the parts of a captured image of an object under test that differ from the actual image when changes in the external light source cause light-and-shadow interference, thereby improving analysis efficiency and reducing analysis cost.

【Embodiment】

[0007] FIG. 1 is a block diagram of a preferred embodiment of the image-difference marking system 11 of the present invention. In this embodiment, the image-difference marking system 11 is installed in and runs on the computer 1, and marks the parts where a captured image of the object under test 3, taken while changes in the external light source cause light-and-shadow interference, differs from the actual image of the object under test 3 taken without such interference. The computer 1 is connected to an image capturing device 2, which captures images of the object under test 3, such as the captured image shown in FIG. 3. When an image of the object under test 3 (for example, a motherboard) is captured under ordinary ambient lighting, the light source may change, for example from one fluorescent lamp to two, and the resulting change in brightness makes the captured image differ from the actual image, for example at the differing parts a1, a2, and a3 shown in FIG. 3.

[0008] The computer 1 includes a memory 12, a central processing unit 13, and a display 14. The memory 12 stores the standard image compared with the captured image, such as the standard image b shown in FIG. 3.
The central processing unit 13 executes the image-difference marking system 11 to automatically find the parts where the captured image of the object under test 3 differs from the standard image, and marks those parts in the captured image. The display 14 displays the image in which the differing parts have been marked.

[0009] The image-difference marking system 11 includes an image obtaining module 111, a matrix construction module 112, a matrix calculation module 113, and an image marking module 114. A module, as the term is used in the present invention, is a computer program segment consisting of a series of computing instructions. In this embodiment, each module is a computer program segment that can be executed by the central processing unit 13 to perform a fixed function, and is stored in the memory 12.

[0010] The image obtaining module 111 captures an image of the object under test 3 through the image capturing device 2, and obtains a standard image of the object under test 3 from the memory 12. The captured image is an image of the object under test 3 taken while changes in the external light source cause light-and-shadow interference, and the standard image is the actual image of the object under test 3 taken without such interference.
[0011] The image obtaining module 111 also obtains the difference image of the two images, such as the difference image c shown in FIG. 3, by comparing the Image Average Energy Density (IAED) values of the captured image and the standard image, and separates the difference image into an R grayscale image, a G grayscale image, and a B grayscale image along the three RGB channels. The three RGB channels are the R, G, and B grayscale channels of the image. The IAED is the average energy density of each pixel of an NxN image, used as a benchmark for measuring the pixel energy of the image, and is computed by the formula IAED = (R+G+B)/N/N, where R is the sum of the R brightness values of all pixels in the image, G is the sum of the G brightness values of all pixels in the image, and B is the sum of the B brightness values of all pixels in the image.

[0012] The matrix construction module 112 constructs an R channel matrix group from the brightness values of each pixel and its surrounding pixels in the R grayscale image, a G channel matrix group from the brightness values of each pixel and its surrounding pixels in the G grayscale image, and a B channel matrix group from the brightness values of each pixel and its surrounding pixels in the B grayscale image. The R channel matrix group contains as many R channel matrices as there are pixels in the R grayscale image, the G channel matrix group contains as many G channel matrices as there are pixels in the G grayscale image, and the B channel matrix group contains as many B channel matrices as there are pixels in the B grayscale image. In this embodiment, the RGB channel matrices can all be constructed as Hessian matrices. Each channel matrix group contains both positive definite and negative definite matrices, and each matrix is the second-order partial derivative matrix of the multivariate function formed by the brightness values of the pixels.
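As a rough illustration, the IAED formula and the channel separation described above might be sketched as follows. This is a minimal sketch assuming same-sized NumPy RGB arrays; the patent does not spell out how comparing IAED values yields the difference image, so the per-pixel absolute difference and the function names here are assumptions, not the patent's implementation:

```python
import numpy as np

def iaed(img):
    """Image Average Energy Density for an N x N RGB image,
    per the formula IAED = (R+G+B)/N/N, where R, G, B are the
    sums of the per-channel brightness values over all pixels."""
    n = img.shape[0]
    r, g, b = (img[:, :, c].astype(np.int64).sum() for c in range(3))
    return (r + g + b) / n / n

def difference_image(captured, standard):
    """Assumed per-pixel absolute difference of the two images."""
    return np.abs(captured.astype(np.int16) - standard.astype(np.int16)).astype(np.uint8)

captured = np.zeros((4, 4, 3), dtype=np.uint8)
standard = np.full((4, 4, 3), 9, dtype=np.uint8)
diff = difference_image(captured, standard)

# Separate the difference image into R, G and B grayscale images.
r_gray, g_gray, b_gray = diff[:, :, 0], diff[:, :, 1], diff[:, :, 2]

print(iaed(standard))  # prints 27.0 (9 * 16 pixels * 3 channels / 4 / 4)
```

Comparing `iaed(captured)` against `iaed(standard)` gives a cheap whole-image energy check before the per-pixel work, which is presumably why the patent uses it as a benchmark.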
[0013] The matrix construction module 112 also selects the negative definite R channel matrices from the R channel matrix group, the negative definite G channel matrices from the G channel matrix group, and the negative definite B channel matrices from the B channel matrix group. In this embodiment, if the position of a pixel is denoted (X,Y) and its brightness value is denoted E(X,Y), the matrix construction module 112 examines, with that pixel as the center, the brightness values of the surrounding pixels step by step; for a pixel that satisfies E(Xn-1,Ym)<E(Xn,Ym) and E(Xn,Ym)>E(Xn+1,Ym), and at the same time E(Xn,Ym-1)<E(Xn,Ym) and E(Xn,Ym)>E(Xn,Ym+1), the matrix formed by the pixel (X,Y) and its surrounding pixels is a negative definite matrix.

[0014] The matrix calculation module 113 solves, in the negative definite R channel matrices, for the first set of p pixels corresponding to local maxima, {R(X1,Y1), R(X2,Y2), ... R(Xp,Yp)}; solves, in the negative definite G channel matrices, for the second set of q pixels corresponding to local maxima, {G(M1,N1), G(M2,N2), ... G(Mq,Nq)}; and solves, in the negative definite B channel matrices, for the third set of r pixels corresponding to local maxima, {B(T1,S1), B(T2,S2), ... B(Tr,Sr)}. In this embodiment, the matrix calculation module 113 determines the local maxima of the pixel brightness values in each channel matrix by checking whether the second-order partial derivatives of the variables in the negative definite R, G, and B channel matrices are equal to zero; the pixels corresponding to the local maxima in each R, G, and B channel matrix form the respective pixel sets above.
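The neighbor-comparison test in [0013] can be sketched as follows. This is a minimal sketch on a 2-D brightness array; the helper name `is_local_max` is an illustration, not taken from the patent:

```python
import numpy as np

def is_local_max(e, x, y):
    """True when pixel (x, y) is strictly brighter than its four
    horizontal and vertical neighbors, i.e.
    E(x-1,y) < E(x,y) > E(x+1,y) and E(x,y-1) < E(x,y) > E(x,y+1)."""
    if not (0 < x < e.shape[0] - 1 and 0 < y < e.shape[1] - 1):
        return False  # border pixels have no full neighborhood
    c = e[x, y]
    return e[x-1, y] < c > e[x+1, y] and e[x, y-1] < c > e[x, y+1]

# A small brightness field with a single peak at (2, 2).
e = np.zeros((5, 5), dtype=np.int32)
e[2, 2] = 10
e[1, 2] = e[3, 2] = e[2, 1] = e[2, 3] = 4

print(is_local_max(e, 2, 2))  # True
print(is_local_max(e, 1, 2))  # False
```

Pixels passing this test are the candidates whose surrounding brightness matrix the patent treats as negative definite.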
[0015] The matrix calculation module 113 also takes the intersection of the above three pixel sets to obtain the sequence of pixels at the positions to be marked, {(X1,Y1), (X2,Y2), ... (Xi,Yj)}, where i and j are less than the minimum of p, q, and r. In this embodiment, the matrix calculation module 113 searches the three pixel sets for the pixels at the positions to be marked; a pixel with X=M=T and Y=N=S is a center pixel (Xi,Yj) of a position to be marked.

[0016] The image marking module 114 searches in the four directions around each center pixel (Xi,Yj) for pixels whose brightness value is zero, and takes the farthest distance between a zero-brightness pixel and the center pixel (Xi,Yj) as the marking radius of that center pixel. In this embodiment, the image marking module 114 searches the four directions around the center pixel (Xi,Yj) for zero-brightness pixels and obtains the marking radius from the distance between the center pixel (Xi,Yj) and the zero-brightness pixel.

[0017] The image marking module 114 also marks in the captured image, with each pixel (Xi,Yj) as the center and its corresponding marking radius as the marking range, the parts that differ from the standard image. In this embodiment, a differing part can be marked with a rectangle or circle in a color that stands out from the captured image; for example, the rectangles marked in red in FIG. 3 are the parts a1, a2, and a3 where the captured image differs from the standard image.

[0018] FIG. 2 is a flowchart of a preferred embodiment of the image-difference marking method of the present invention.
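The intersection step in [0015] can be sketched as follows, using Python sets of (x, y) coordinates; the coordinate values and variable names are illustrative, not from the patent:

```python
# Pixel sets found as local maxima in the R, G and B channel matrices.
r_points = {(10, 12), (40, 7), (63, 63)}
g_points = {(10, 12), (5, 5), (63, 63)}
b_points = {(10, 12), (63, 63), (80, 2)}

# Pixels present in all three sets (X=M=T and Y=N=S) are the centers
# of the positions to mark.
marked_sequence = sorted(r_points & g_points & b_points)
print(marked_sequence)  # [(10, 12), (63, 63)]
```

Requiring agreement across all three channels filters out maxima caused by noise in a single channel.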
In this embodiment, the method marks the parts where a captured image of the object under test 3, taken while changes in the external light source cause light-and-shadow interference, differs from the actual image of the object under test 3 taken without such interference.

[0019] In step S20, the image obtaining module 111 captures an image of the object under test 3 through the image capturing device 2, and obtains a standard image of the object under test 3 from the memory 12. The captured image is an image of the object under test 3 taken while changes in the external light source cause light-and-shadow interference, and the standard image is the actual image of the object under test 3 taken without such interference.

[0020] In step S21, the image obtaining module 111 obtains the difference image of the two images, such as the difference image c shown in FIG. 3, by comparing the image average energy density (IAED) values of the captured image and the standard image. The IAED is the average energy density of each pixel of an NxN image, used as a benchmark for measuring the pixel energy of the image, and is computed by the formula IAED = (R+G+B)/N/N.

[0021] In step S22, the image obtaining module 111 separates the difference image into an R grayscale image, a G grayscale image, and a B grayscale image along the three RGB channels. In this embodiment, the three RGB channels are the R, G, and B grayscale channels of the image, and the RGB values are represented by the brightness values of the image pixels.
[0022] In step S23, the matrix construction module 112 constructs an R channel matrix group from the brightness values of each pixel and its surrounding pixels in the R grayscale image, a G channel matrix group from the brightness values of each pixel and its surrounding pixels in the G grayscale image, and a B channel matrix group from the brightness values of each pixel and its surrounding pixels in the B grayscale image. The matrices in the RGB channel matrix groups can all be constructed as Hessian matrices.
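The Hessian (second-order partial derivative) matrix construction of step S23 can be sketched as follows. This is a minimal sketch that approximates the derivatives with central finite differences and tests negative definiteness with Sylvester's criterion; both choices are assumptions for illustration, not the patent's stated implementation:

```python
import numpy as np

def hessian_at(e, x, y):
    """2x2 matrix of second-order partial derivatives of the brightness
    function E at pixel (x, y), via central finite differences."""
    exx = e[x+1, y] - 2.0 * e[x, y] + e[x-1, y]
    eyy = e[x, y+1] - 2.0 * e[x, y] + e[x, y-1]
    exy = (e[x+1, y+1] - e[x+1, y-1] - e[x-1, y+1] + e[x-1, y-1]) / 4.0
    return np.array([[exx, exy], [exy, eyy]])

def is_negative_definite(h):
    """A symmetric 2x2 matrix is negative definite when its top-left
    entry is negative and its determinant is positive (Sylvester)."""
    return h[0, 0] < 0 and np.linalg.det(h) > 0

# Brightness field with a smooth bump: the Hessian at the peak (4, 4)
# is negative definite, marking the peak as a local maximum.
xs = np.arange(9) - 4
e = np.exp(-(xs[:, None] ** 2 + xs[None, :] ** 2) / 8.0)
print(is_negative_definite(hessian_at(e, 4, 4)))  # True
```

A negative definite Hessian is the standard second-derivative test for a strict local maximum, which is consistent with how the patent uses these matrices in steps S24 and S25.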

In this embodiment, each channel matrix group contains both positive definite and negative definite matrices, and each matrix is the second-order partial derivative matrix of the multivariate function formed by the brightness values of the pixels.

[0023] In step S24, the matrix construction module 112 selects the negative definite R channel matrices from the R channel matrix group, the negative definite G channel matrices from the G channel matrix group, and the negative definite B channel matrices from the B channel matrix group.
In this embodiment, if the position of a pixel is denoted (X,Y) and its brightness value is denoted E(X,Y), the matrix construction module 112 examines, with that pixel as the center, the brightness values of the surrounding pixels step by step; for a pixel that satisfies E(Xn-1,Ym)<E(Xn,Ym) and E(Xn,Ym)>E(Xn+1,Ym), and at the same time E(Xn,Ym-1)<E(Xn,Ym) and E(Xn,Ym)>E(Xn,Ym+1), the matrix formed by the pixel (X,Y) and its surrounding pixels is a negative definite matrix.

[0024] In step S25, the matrix calculation module 113 solves, in the negative definite R channel matrices, for the first set of p pixels corresponding to local maxima, {R(X1,Y1), R(X2,Y2), ... R(Xp,Yp)}; solves, in the negative definite G channel matrices, for the second set of q pixels corresponding to local maxima, {G(M1,N1), G(M2,N2), ... G(Mq,Nq)}; and solves, in the negative definite B channel matrices, for the third set of r pixels corresponding to local maxima, {B(T1,S1), B(T2,S2), ... B(Tr,Sr)}. In this embodiment, the matrix calculation module 113 determines the local maxima of the pixel brightness values in each channel matrix by checking whether the second-order partial derivatives of the variables in the negative definite R, G, and B channel matrices are equal to zero; the pixels corresponding to the local maxima in each R, G, and B channel matrix form the respective pixel sets above.

[0025] In step S26, the matrix calculation module 113 takes the intersection of the above three pixel sets to obtain the sequence of pixels at the positions to be marked, {(X1,Y1), (X2,Y2), ... (Xi,Yj)}, where i and j are less than the minimum of p, q, and r.
In this embodiment, the matrix calculation module 113 searches the three pixel sets for the pixels at the positions to be marked; a pixel with X=M=T and Y=N=S is a pixel at a position to be marked.

[0026] In step S27, the image marking module 114 searches in the four directions around each center pixel (Xi,Yj) for pixels whose brightness value is zero, and takes the farthest distance between a zero-brightness pixel and the center pixel as the marking radius of that center pixel (Xi,Yj). In this embodiment, the image marking module 114 searches the four directions around the center pixel (Xi,Yj) for zero-brightness pixels and obtains the marking radius by computing the distance between the center pixel (Xi,Yj) and the zero-brightness pixel.

[0027] In step S28, the image marking module 114 marks in the captured image, with each pixel (Xi,Yj) as the center and its corresponding marking radius as the marking range, the parts that differ from the standard image. In this embodiment, a differing part can be marked with a rectangle or circle in a color that stands out from the captured image; for example, the rectangles marked in red in FIG. 3 are the parts a1, a2, and a3 where the captured image differs from the standard image.

[0028] The above is only a preferred embodiment of the present invention and has achieved broad practical effect; all other equivalent changes or modifications completed without departing from the spirit disclosed by the present invention shall be included within the scope of the patent application below.
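Steps S27 and S28 can be sketched as follows, on a single-channel difference image. This is a minimal sketch: walking outward only until the first zero-brightness pixel in each direction, and drawing a hollow square marker, are illustrative choices rather than the patent's exact procedure:

```python
import numpy as np

def marking_radius(diff, cx, cy):
    """Walk outward from (cx, cy) in the four axis directions and
    return the farthest distance at which a zero-brightness pixel
    is first found."""
    h, w = diff.shape
    best = 0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        x, y, d = cx, cy, 0
        while 0 <= x + dx < h and 0 <= y + dy < w:
            x, y, d = x + dx, y + dy, d + 1
            if diff[x, y] == 0:
                best = max(best, d)
                break
    return best

def mark(image, cx, cy, radius):
    """Draw a hollow square of half-side `radius` around the center."""
    x0, x1 = max(cx - radius, 0), min(cx + radius, image.shape[0] - 1)
    y0, y1 = max(cy - radius, 0), min(cy + radius, image.shape[1] - 1)
    image[x0, y0:y1 + 1] = 255
    image[x1, y0:y1 + 1] = 255
    image[x0:x1 + 1, y0] = 255
    image[x0:x1 + 1, y1] = 255

diff = np.zeros((9, 9), dtype=np.uint8)
diff[3:6, 3:6] = 50            # a bright differing blob around (4, 4)
r = marking_radius(diff, 4, 4)
print(r)                       # 2: the nearest zero pixel lies 2 steps away

canvas = np.zeros((9, 9), dtype=np.uint8)
mark(canvas, 4, 4, r)          # square marker enclosing the blob
```

Because the difference image is zero wherever the two images agree, the distance to the surrounding zero-brightness pixels bounds the extent of the differing blob, so the marker just encloses it.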

【Brief Description of the Drawings】

[0029] FIG. 1 is a block diagram of a preferred embodiment of the image-difference marking system of the present invention.

[0030] FIG. 2 is a flowchart of a preferred embodiment of the image-difference marking method of the present invention.

[0031] FIG. 3 is a schematic diagram of marking, in a captured image, the parts that differ from a standard image.

【Description of Main Element Symbols】

[0032] Computer 1

[0033] Image-difference marking system 11

[0034] Image obtaining module 111

[0035] Matrix construction module 112

[0036] Matrix calculation module 113

[0037] Image marking module 114

[0038] Memory 12

[0039] Central processing unit 13

[0040] Display 14

[0041] Image capturing device 2

[0042] Object under test 3

Claims (1)

VII. Scope of Claims:
1. An image difference labeling system, installed and running in a computer, the computer being connected to an image capturing device, the image difference labeling system comprising:
an image acquisition module, for capturing a captured image of an object to be tested through the image capturing device, obtaining a standard image of the object to be tested from a memory of the computer, obtaining a difference image of the two images by comparing the average image energy densities of the captured image and the standard image, and separating the difference image into an R grayscale image, a G grayscale image, and a B grayscale image according to the three RGB channels;
a matrix construction module, for constructing an R-channel matrix group according to the luminance values of each primitive point and its surrounding primitive points in the R grayscale image, constructing a G-channel matrix group according to the luminance values of each primitive point and its surrounding primitive points in the G grayscale image, constructing a B-channel matrix group according to the luminance values of each primitive point and its surrounding primitive points in the B grayscale image, and selecting the negative definite R-channel matrices from the R-channel matrix group, the negative definite G-channel matrices from the G-channel matrix group, and the negative definite B-channel matrices from the B-channel matrix group;
a matrix calculation module, for solving in the negative definite R-channel matrices a first set of primitive points corresponding to local maxima, solving in the negative definite G-channel matrices a second set of primitive points corresponding to local maxima, solving in the negative definite B-channel matrices a third set of primitive points corresponding to local maxima, and solving the intersection of the three sets of primitive points to obtain a sequence of primitive points at the required label positions; and
an image labeling module, for searching in the four directions around each primitive point in the sequence of primitive points for primitive points whose luminance values are all zero, taking the farthest distance between a zero-luminance primitive point and the central primitive point as the label radius of the central primitive point, and marking in the captured image, with each primitive point in the sequence of primitive points as a center and its corresponding label radius as the label range, the parts that differ from the standard image.
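As a rough illustration of the matrix calculation described in claim 1, the per-channel search for local maxima with a negative definite second-order partial derivative (Hessian) matrix can be sketched as follows. This is an assumed finite-difference reading of the claim; the function names and the 2×2 Hessian formulation are illustrative choices, not the patented implementation.

```python
# Hedged sketch: per-channel local-maximum candidates, i.e. interior
# pixels where the luminance gradient vanishes and the 2x2 Hessian of
# the luminance surface is negative definite.

def hessian(img, x, y):
    """Second-order partial derivatives (fxx, fxy, fyy) of the luminance
    surface at (x, y), by central finite differences."""
    fxx = img[x + 1][y] - 2 * img[x][y] + img[x - 1][y]
    fyy = img[x][y + 1] - 2 * img[x][y] + img[x][y - 1]
    fxy = (img[x + 1][y + 1] - img[x + 1][y - 1]
           - img[x - 1][y + 1] + img[x - 1][y - 1]) / 4
    return fxx, fxy, fyy

def is_negative_definite(fxx, fxy, fyy):
    # a symmetric 2x2 matrix is negative definite exactly when
    # fxx < 0 and its determinant fxx*fyy - fxy^2 > 0
    return fxx < 0 and fxx * fyy - fxy * fxy > 0

def local_maxima(img):
    """Interior pixels whose gradient vanishes and whose Hessian is
    negative definite -- one channel's candidate set of primitive points."""
    pts = []
    for x in range(1, len(img) - 1):
        for y in range(1, len(img[0]) - 1):
            fx = (img[x + 1][y] - img[x - 1][y]) / 2
            fy = (img[x][y + 1] - img[x][y - 1]) / 2
            if fx == 0 and fy == 0 and is_negative_definite(*hessian(img, x, y)):
                pts.append((x, y))
    return pts
```

Running this on each of the R, G and B grayscale images and intersecting the three resulting point sets would then give the label-position sequence of the claim.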
2. The image difference labeling system of claim 1, wherein the average image energy density is the average energy density of each primitive of an image with resolution N×N, calculated according to the formula (R+G+B)/N/N, where R is the sum of the R luminance values of all primitive points in the image, G is the sum of the G luminance values of all primitive points in the image, and B is the sum of the B luminance values of all primitive points in the image.
3. The image difference labeling system of claim 1, wherein the R-channel matrix group, the G-channel matrix group, and the B-channel matrix group each contain both positive definite matrices and negative definite matrices.
4. The image difference labeling system of claim 1, wherein the R-channel matrix, the G-channel matrix, and the B-channel matrix are each a second-order partial derivative matrix of a multidimensional variable function formed by the luminance values of all primitive points in the corresponding grayscale image.
5. The image difference labeling system of claim 4, wherein the local maxima are determined by judging whether the second-order partial derivatives of the variables within the negative definite RGB three-channel matrices are equal to zero.
6. An image difference labeling method, which uses a computer to mark the parts of a captured image of an object to be tested that differ from the actual image, the computer being connected to an image capturing device, the method comprising the steps of:
capturing a captured image of the object to be tested through the image capturing device, and obtaining a standard image of the object to be tested from a memory of the computer;
obtaining a difference image of the two images by comparing the average image energy densities of the captured image and the standard image;
separating the difference image into an R grayscale image, a G grayscale image, and a B grayscale image according to the three RGB channels;
constructing an R-channel matrix group according to the luminance values of each primitive point and its surrounding primitive points in the R grayscale image, constructing a G-channel matrix group according to the luminance values of each primitive point and its surrounding primitive points in the G grayscale image, and constructing a B-channel matrix group according to the luminance values of each primitive point and its surrounding primitive points in the B grayscale image;
selecting the negative definite R-channel matrices from the R-channel matrix group, the negative definite G-channel matrices from the G-channel matrix group, and the negative definite B-channel matrices from the B-channel matrix group;
solving in the negative definite R-channel matrices a first set of primitive points corresponding to local maxima, solving in the negative definite G-channel matrices a second set of primitive points corresponding to local maxima, and solving in the negative definite B-channel matrices a third set of primitive points corresponding to local maxima;
solving the intersection of the three sets of primitive points to obtain a sequence of primitive points at the required label positions;
searching in the four directions around each primitive point in the sequence of primitive points for primitive points whose luminance values are all zero, and taking the farthest distance between a zero-luminance primitive point and the central primitive point as the label radius of the central primitive point; and
marking in the captured image, with each primitive point in the sequence of primitive points as a center and its corresponding label radius as the label range, the parts that differ from the standard image.
7. The image difference labeling method of claim 6, wherein the average image energy density is the average energy density of each primitive of an image with resolution N×N, calculated according to the formula (R+G+B)/N/N, where R is the sum of the R luminance values of all primitive points in the image, G is the sum of the G luminance values of all primitive points in the image, and B is the sum of the B luminance values of all primitive points in the image.
8. The image difference labeling method of claim 6, wherein the R-channel matrix group, the G-channel matrix group, and the B-channel matrix group each contain both positive definite matrices and negative definite matrices.
9. The image difference labeling method of claim 6, wherein the R-channel matrix, the G-channel matrix, and the B-channel matrix are each a second-order partial derivative matrix of a multidimensional variable function formed by the luminance values of all primitive points in the corresponding grayscale image.
10. The image difference labeling method of claim 9, wherein the local maxima are determined by judging whether the second-order partial derivatives of the variables within the negative definite RGB three-channel matrices are equal to zero.
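The average energy density of claims 2 and 7, (R+G+B)/N/N, and the RGB channel separation of claims 1 and 6 can be illustrated with a minimal sketch. The image representation here (a flat list of (r, g, b) tuples) and the function names are assumptions made for illustration only.

```python
# Hedged sketch of two claimed steps: the per-image average energy
# density (R+G+B)/N/N, and splitting an RGB difference image into
# R, G and B grayscale images.

def average_energy_density(img, n):
    """(R+G+B)/N/N, where R, G, B are the sums of the respective
    channel values over all N*N pixels of the image."""
    r = sum(p[0] for p in img)
    g = sum(p[1] for p in img)
    b = sum(p[2] for p in img)
    return (r + g + b) / n / n

def split_channels(diff):
    """Separate a difference image into R, G and B grayscale images."""
    return ([p[0] for p in diff],
            [p[1] for p in diff],
            [p[2] for p in diff])
```

Comparing the two values returned by `average_energy_density` for the captured and the standard image would indicate whether a difference image needs to be computed and split at all.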
TW99142793A 2010-12-08 2010-12-08 System and method for marking differences of two images TW201225003A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW99142793A TW201225003A (en) 2010-12-08 2010-12-08 System and method for marking differences of two images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW99142793A TW201225003A (en) 2010-12-08 2010-12-08 System and method for marking differences of two images

Publications (1)

Publication Number Publication Date
TW201225003A true TW201225003A (en) 2012-06-16

Family

ID=46726044

Family Applications (1)

Application Number Title Priority Date Filing Date
TW99142793A TW201225003A (en) 2010-12-08 2010-12-08 System and method for marking differences of two images

Country Status (1)

Country Link
TW (1) TW201225003A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI496111B (en) * 2013-06-19 2015-08-11 Inventec Corp Bent pin inspection method
US10510145B2 (en) 2017-12-27 2019-12-17 Industrial Technology Research Institute Medical image comparison method and system thereof


Similar Documents

Publication Publication Date Title
US11002839B2 (en) Method and apparatus for measuring angular resolution of multi-beam lidar
CN109919933B (en) VR equipment and its screen detection method, device, and computer-readable storage medium
EP2339292A1 (en) Three-dimensional measurement apparatus and method thereof
CN112687231B (en) Brightness and chrominance data extraction method, equipment and computer readable storage medium
CN104034516B (en) Machine vision based LED detection device and detection method thereof
CN104581135A (en) Light source brightness detection method and system
US20130170756A1 (en) Edge detection apparatus, program and method for edge detection
WO2020110560A1 (en) Inspection assistance device, inspection assistance method, and inspection assistance program for concrete structure
KR20140075042A (en) Apparatus for inspecting of display panel and method thereof
JP2020008502A (en) Depth acquisition device by polarization stereo camera, and method of the same
US9305366B2 (en) Portable electronic apparatus, software and method for imaging and interpreting pressure and temperature indicating
WO2020259416A1 (en) Image collection control method and apparatus, electronic device, and storage medium
JP2008185526A (en) Color discrimination device and method
TW201225003A (en) System and method for marking differences of two images
TWI590196B (en) Method for detecting of liquid
CN117788726A (en) Map data rendering method and device, electronic equipment and storage medium
KR20150009842A (en) System for testing camera module centering and method for testing camera module centering using the same
CN117745838A (en) Monocular camera ranging calibration method based on affine transformation and gridding thought
JP6650829B2 (en) Image retrieval apparatus, method, and program
JPWO2020170291A1 (en) Meter detector, meter detection method, and meter detection program
TW201518695A (en) System and method for testing stability of light source
TW201250219A (en) Detecting method and system for 3D micro-retardation film
CN111985498A (en) Canopy density measurement method and device, electronic device and storage medium
CN111932599A (en) Cylinder two-dimensional image generation method based on multiple RGB-D cameras
RU2010152364A (en) STEREOSCOPIC MEASURING SYSTEM AND METHOD