TWI618031B - Image edge detection method - Google Patents
- Publication number: TWI618031B (application TW106112116A)
- Authority: TW (Taiwan)
- Prior art keywords: edge, edge contour, mask, image, contour
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
An image edge detection method. A depth image and a color image of a scene containing a target object are obtained. A first edge contour of the target in the depth image and a second edge contour of the target in the color image are extracted. Based on the first edge contour, a third edge contour is derived from the second edge contour. One or more outer edge blocks are identified based on the second edge contour and the third edge contour, and a second mask is obtained by deleting the outer edge blocks from a first mask generated based on the third edge contour.
Description
The present invention relates to an image processing method, and in particular to an image edge detection method.
When most depth cameras on the market acquire the depth information of an object, the depth values obtained at the object's edges are often inaccurate. For example, when the fingers of a hand are spread apart, a web-shaped artifact frequently appears between two fingers, as shown in FIG. 11. This happens because the background between two fingers lies immediately beside them, so the depth camera judges the background and the fingers to be at the same depth. Likewise, errors from the depth camera can leave the edges of the human body poorly cropped. FIG. 11 is a schematic diagram of the result obtained in the prior art when edge detection is performed using only a depth image. In FIG. 11 it can be clearly seen that, after the background is removed, web-shaped blocks between the fingers are misjudged as part of the portrait contour.
In general, the white fringe along the body edge or the web-shaped blocks between the fingers can be filtered out by extracting edge information from the picture, or by applying the erosion and dilation operations commonly used in image processing. However, for a large misjudged background area between the fingers, erosion and dilation work poorly on the hand region: they may even break up the image, or remove fine parts such as the fingers along with the artifact.
Another approach is to locate the hand by skin-color detection and then eliminate the web-shaped blocks between the fingers. Skin-color detection works on the RGB values of the pixels, or converts the picture into a color space such as YCbCr or HSV, and defines a color range that is likely to be skin in order to find the face, hands, or other skin-colored parts of the picture before removing the background around those blocks. The problem with this approach is that the skin color in a picture often varies with the brightness and color of the ambient light, with ethnicity, and with each person's actual skin tone, so choosing the skin-color range easily introduces large errors. Moreover, to locate the portrait region and remove the background in real time while recording video, more complex algorithms tend to slow down the whole program, burden the machine's power consumption and performance, and lower the frame rate of the video.
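As a reference point for the comparison above, here is a minimal sketch of YCbCr-based skin-color detection, the approach the invention moves away from. The conversion follows the common BT.601 formulas; the Cb/Cr ranges are illustrative values often quoted in the literature, not thresholds taken from this patent, and the function names are our own.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary mask of pixels whose Cb/Cr fall inside a commonly cited
    (but purely illustrative) skin-tone range."""
    ycbcr = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1])).astype(np.uint8)
```

Because the fixed Cb/Cr box cannot follow changes in lighting or individual skin tone, such a mask exhibits exactly the errors the paragraph above describes.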
The invention provides an image edge detection method that can accurately remove the background and find the edge contour closest to the target (for example, a portrait).
The image edge detection method of the invention includes the following steps. A depth image and a color image of a scene containing a target object are obtained. Edge detection is performed on the depth image and the color image respectively, to obtain a first edge contour of the target in the depth image and a second edge contour of the target in the color image. The first edge contour and the second edge contour are stacked; two adjacent pixels are taken from the first edge contour in sequence, and the intersection where the normal through one of the two pixels, perpendicular to the line connecting them, crosses the second edge contour is extracted; the collected intersections form a third edge contour. The second edge contour and the third edge contour are stacked; a fourth edge contour is formed from the overlapping and non-overlapping regions of the second and third edge contours, at least one outer edge block of the fourth edge contour is formed from at least one non-overlapping region of the second and third edge contours, and a second mask is obtained by deleting the image corresponding to the at least one outer edge block from a first mask generated based on the third edge contour.
In an embodiment of the invention, the steps after obtaining the second mask further include performing a background-removal action on the color image based on the first mask and the second mask.
In an embodiment of the invention, performing the background-removal action on the color image based on the first mask and the second mask includes: performing an erosion-dilation operation on the second mask to obtain a plurality of pattern blocks; grouping the centroids of the pattern blocks, taking the group heart of each group as a center point and setting a region of interest around it; deleting from the first mask the images of the outer edge blocks located at the positions of the regions of interest to produce a third mask; and performing a background-removal action on the color image based on the third mask, so as to retain the image of the target in the color image.
In an embodiment of the invention, the step of extracting the intersection where the normal through one of the two pixels, perpendicular to the line connecting them, crosses the second edge contour further includes: when that normal does not intersect the second edge contour, converting the color image into a grayscale image and applying gradient detection along the extension direction of the normal in the grayscale image to obtain the intersection.
In an embodiment of the invention, the step of stacking the second and third edge contours, forming the fourth edge contour from their at least one overlapping region and at least one non-overlapping region, and forming at least one outer edge block of the fourth edge contour from the at least one non-overlapping region, further includes: performing a dilation operation on the second and third edge contours, then stacking the dilated second and third edge contours, forming the fourth edge contour from at least one overlapping region and at least one non-overlapping region of the dilated contours, and forming at least one outer edge block of the fourth edge contour from at least one non-overlapping region of the dilated contours.
In an embodiment of the invention, the step of performing the erosion-dilation operation on the second mask to obtain the pattern blocks includes first performing an erosion operation on the second mask and then performing a dilation operation.
In an embodiment of the invention, the step of obtaining the second mask by deleting the image corresponding to the at least one outer edge block from the first mask generated based on the third edge contour includes: generating the first mask based on the third edge contour; stacking the first mask and the fourth edge contour; and deleting the image in the first mask corresponding to the at least one outer edge block to obtain the second mask.
Based on the above, the edge information of the depth image and the color image is used as the basis for removing the excess background and finding the boundary of the target (for example, a portrait). This avoids the edge-misjudgment problems caused by ambient light sources, lighting, and individual differences in skin color, so the target can be separated from the background more accurately.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
FIG. 1 is a block diagram of an electronic device in accordance with an embodiment of the invention. Referring to FIG. 1, the electronic device 100 includes a processor 110, a storage device 120, and an image capture device 130. The processor 110 is coupled to the storage device 120 and the image capture device 130.
The processor 110 is, for example, a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a programmable microprocessor, an embedded control chip, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or another similar device.
The storage device 120 is, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk, or another similar device, or a combination of these devices. The storage device 120 stores a plurality of code snippets which, after being installed, are executed by the processor 110 to implement the image edge detection method described below.
The image capture device 130 is, for example, a color depth camera (RGB-D camera). Besides color images, a color depth camera can also capture depth images of the objects it sees.
In this embodiment, the processor 110 executes the code snippets in the storage device 120 to process the depth image and the color image of a scene containing a target captured by the image capture device 130, thereby finding the boundary between the target and the background so that the background can be removed more accurately. That is, the edge information of the depth image and the color image is used as the basis for removing the excess background, replacing skin-color detection as the way to find the boundary between background and portrait. This avoids the edge-misjudgment problems caused by ambient light sources, lighting, and each person's skin-color differences.
The image edge detection method is further described below.
FIG. 2A is a flowchart of an image edge detection method according to an embodiment of the invention. Referring to FIG. 2A, in step S20, the image capture device 130 captures a scene containing a target to obtain a depth image and a color image of the scene. In this embodiment the target is, for example, at least one portrait.
Next, in step S21, edge detection is performed on the depth image and the color image respectively, to obtain a first edge contour of the target in the depth image and a second edge contour of the target in the color image. For ease of explanation, only one palm is illustrated below; note, however, that this embodiment performs edge detection on the entire image. The edge detection can be performed with, for example, the Sobel, Prewitt, or Canny algorithm.
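As a minimal sketch of the Sobel operator named above, the following is written directly in NumPy rather than with an image-processing library. The 3x3 kernels and the gradient-magnitude threshold are the standard textbook formulation; the function name and threshold value are our own, not taken from the patent.

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Return a binary edge map via Sobel gradient magnitude.
    Border pixels are left at zero for simplicity."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()   # horizontal gradient
            gy[i, j] = (patch * ky).sum()   # vertical gradient
    mag = np.hypot(gx, gy)                  # gradient magnitude
    return (mag > thresh).astype(np.uint8)
```

Applying the same routine to the depth image and the color image (converted to grayscale) yields the first and second edge contours described in step S21.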
FIG. 3 is a schematic diagram of the edge contours of a local part in the depth image and the color image according to an embodiment of the invention. In FIG. 3, only the edge contour of a local part of the target (i.e., the palm) is shown. Here, the first edge contour from the depth image is drawn as a chain (dash-dot) line, and the second edge contour from the color image as a solid line (including the black blocks).
Next, in step S22, the first edge contour and the second edge contour are stacked. Two adjacent pixels are taken from the first edge contour in sequence, and the intersection where the normal through one of the two pixels, perpendicular to the line connecting them, crosses the second edge contour is taken as a point of a third edge contour; these intersections form the third edge contour. In other words, the first edge contour of the depth image and the second edge contour of the color image are used together to obtain an edge contour (the third edge contour) that lies closer to the target (i.e., the portrait).
For example, FIG. 4 is a schematic diagram of extracting the third edge contour according to an embodiment of the invention. Referring to FIG. 4, suppose points A1 to A5 are pixels of the first edge contour and points B1 to B7 are pixels of the second edge contour. Taking point A1 as an example, to find its corresponding intersection on the second edge contour, points A1 and A2 (two adjacent pixels) are taken from the first edge contour, a straight line L is obtained from the direction of the line connecting A1 and A2, and the normal N of line L through point A1 is derived. On the normal N, the point B1 where it crosses the second edge contour is found; that is, the intersection of the normal N and the second edge contour is point B1, and point B1 is taken as a point of the third edge contour.
Next, the point on the second edge contour corresponding to point A2 is found. As above, the adjacent points A2 and A3 are taken from the first edge contour, and the intersection where the normal through A2, perpendicular to the line connecting A2 and A3, crosses the second edge contour is found; that intersection is taken as the point of the third edge contour corresponding to A2. The remaining intersections are obtained in the same way, and together they form the third edge contour.
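The normal-intersection search of step S22 can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the second contour is modeled as a set of pixel coordinates, the normal is obtained by rotating the A1-to-A2 direction by 90 degrees, and both sides of the edge are searched up to a fixed number of steps.

```python
import numpy as np

def normal_intersection(p, q, contour2, max_steps=20):
    """From contour-1 point p, step along the normal of segment p->q
    until a contour-2 pixel is hit; returns that pixel, or None if the
    normal never meets the second contour within max_steps."""
    if tuple(p) in contour2:              # overlapping point: use it directly
        return tuple(p)
    d = np.array(q, float) - np.array(p, float)
    n = np.array([-d[1], d[0]])           # 90-degree rotation gives the normal
    n = n / np.linalg.norm(n)
    for sign in (1, -1):                  # search both sides of the edge
        for t in range(1, max_steps + 1):
            pt = tuple(np.round(np.array(p) + sign * t * n).astype(int))
            if pt in contour2:
                return pt
    return None
```

Running this for every consecutive pixel pair of the first edge contour collects the intersections that make up the third edge contour.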
In addition, if a point of the first edge contour overlaps a point of the second edge contour, the above steps can be omitted and the overlapping point of the second edge contour is used directly as a point of the third edge contour. Also, when the normal does not intersect the second edge contour, gradient detection can be used to obtain the intersection. More specifically, the color image is converted into a grayscale image, and gradient detection along the extension direction of the normal in the grayscale image finds a point where the gray level changes rapidly, which is used as the intersection.
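The gradient-detection fallback can be sketched in the same spirit. This is again an assumption-laden illustration: the "rapid change" point is taken to be the largest grey-level jump among a fixed number of steps along the normal direction, which is one simple way to realize the step the patent describes.

```python
import numpy as np

def gradient_point_along_normal(gray, start, direction, steps=10):
    """Walk from `start` along `direction` in a grayscale image and return
    the pixel after the largest grey-level jump, a proxy for the missing
    edge crossing when the normal hits no second-contour pixel."""
    d = np.array(direction, float)
    d = d / np.linalg.norm(d)
    pts = [np.round(np.array(start) + t * d).astype(int) for t in range(steps)]
    vals = [float(gray[p[0], p[1]]) for p in pts]
    jumps = [abs(vals[i + 1] - vals[i]) for i in range(len(vals) - 1)]
    k = int(np.argmax(jumps))             # index of the steepest transition
    return tuple(pts[k + 1])
```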
FIG. 5 is a schematic diagram of the first, second, and third edge contours of a local part according to the invention. In FIG. 5, the first edge contour of the depth image is drawn as a dotted line, the second edge contour of the color image as a solid line (including the black blocks), and the third edge contour as a dashed line. Since the third edge contour is obtained from the second edge contour, FIG. 5 shows that some points of the third edge contour (dashed line) overlap points of the second edge contour (solid line).
After the third edge contour is obtained, in step S23 the second edge contour and the third edge contour are stacked: a fourth edge contour is formed from at least one overlapping region and at least one non-overlapping region of the second and third edge contours, and at least one outer edge block of the fourth edge contour is formed from the at least one non-overlapping region of the second and third edge contours. Then, in step S24, a second mask is obtained by deleting the image corresponding to the at least one outer edge block from a first mask generated based on the third edge contour.
FIG. 6 is a schematic diagram of the fourth edge contour obtained by stacking the second edge contour and the third edge contour according to an embodiment of the invention. In this embodiment, a dilation operation is performed on the second edge contour and the third edge contour, and the dilated second and third edge contours are then stacked: the fourth edge contour is formed from at least one overlapping region and at least one non-overlapping region of the dilated contours, and at least one outer edge block of the fourth edge contour is formed from the at least one non-overlapping region of the dilated contours. Here, block 600 (the hatched block) is a block obtained after the dilation operation; since block 600 also belongs to a region where the dilated second and third edge contours overlap, it is not judged to be an outer edge block.
FIG. 6 is used below as an example of how the outer edge blocks are obtained. First, the regions r1 to r10 of the dilated second and third edge contours are extracted from the fourth edge contour. Next, it is determined whether the outermost edge contour of each of the regions r1 to r10 contains a portion where the second edge contour overlaps the third edge contour. A region whose outermost edge contour contains no such overlapping portion is then judged to be an outer edge block.
In FIG. 6, black represents points of the second edge contour, and dark gray represents points where only the third edge contour exists. Light gray represents points where the second edge contour and the third edge contour exist at the same time. Since the regions r2, r3, r4, r6, r7, r8, and r9 contain no portion where the second edge contour overlaps the third edge contour (that is, no overlapping region), they are judged to be outer edge blocks. The regions r5 and r10, whose outermost edge contours do contain portions where the second and third edge contours overlap, do not belong to the outer edge blocks.
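The overlap test described above can be sketched as follows, with each region represented simply as a list of its outline pixels and the two contours as sets of pixel coordinates. This is a simplification of the patent's image-based representation, intended only to make the decision rule concrete.

```python
def classify_outer_blocks(region_outlines, contour2, contour3):
    """A region counts as an outer edge block when its outermost outline
    contains no pixel where the second and third contours overlap."""
    overlap = set(contour2) & set(contour3)
    return {name: not (set(outline) & overlap)
            for name, outline in region_outlines.items()}
```

In the FIG. 6 example, a region such as r5 whose outline touches an overlap pixel would map to False, while r2 through r4 and r6 through r9 would map to True.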
In step S24, the second mask is obtained by deleting the images corresponding to the outer edge blocks from the first mask generated based on the third edge contour. For example, FIG. 7 is a schematic diagram of the first mask according to an embodiment of the invention, and FIG. 8 is a schematic diagram of the second mask according to an embodiment of the invention. FIG. 7 shows the first mask M1 and FIG. 8 shows the second mask M2. The first mask M1 and the fourth edge contour are stacked, and the images of the regions r2, r3, r4, r6, r7, r8, and r9 of the fourth edge contour shown in FIG. 6 (i.e., the outer edge blocks) are deleted from the first mask M1 to obtain the second mask M2. After the second mask M2 is obtained, a background-removal action can be performed on the color image based on the first mask and the second mask.
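Step S24 itself, deleting the outer edge blocks from the first mask, reduces to zeroing pixels in a binary mask. A minimal NumPy sketch, with blocks given as lists of pixel coordinates (an illustrative representation, not the patent's):

```python
import numpy as np

def delete_outer_blocks(mask1, outer_blocks):
    """Second mask = first mask with all outer-edge-block pixels zeroed.
    The input mask is left untouched; a modified copy is returned."""
    mask2 = mask1.copy()
    for block in outer_blocks:
        for (r, c) in block:
            mask2[r, c] = 0
    return mask2
```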
In addition, to avoid mistakenly deleting the outer edge blocks of other kinds of targets, which would leave missing pieces in the target of the background-removed color image, the invention further proposes how to perform edge detection more precisely, illustrated with another embodiment below. FIG. 2B is a flowchart of an image edge detection method according to another embodiment of the invention. Referring to FIG. 2B, in this embodiment steps S205 to S227 are the same as steps S20 to S25 of FIG. 2A above, so refer to the description above for the details.
After the second mask M2 is obtained, a background-removal action can be performed on the color image based on the first mask and the second mask; the detailed steps are as follows. In step S230, an erosion-dilation operation is performed on the second mask M2 to obtain a plurality of pattern blocks. Here, an erosion operation is performed on the second mask M2 first, followed by a dilation operation. For example, the erosion operation is used to find the parts that may be fingers. This is because fingers are relatively slender compared with the other parts of a portrait, so the erosion operation deforms them relatively strongly (for example, their area within the second mask shrinks, and they may even break apart). Afterwards, the dilation operation is used to repair the breaks caused by the erosion operation, so as to obtain the pattern blocks. Steps S225 to S230 serve to locate the hand region, so that small parts of the portrait are not misjudged as outer edge blocks.
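The erosion-then-dilation sequence of step S230 is a morphological opening. A self-contained 3x3 version (kernel size is our own choice for illustration) shows why slender finger-like structures vanish while bulkier regions survive:

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole
    3x3 neighbourhood is set; border pixels are dropped."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = mask[i - 1:i + 2, j - 1:j + 2].all()
    return out

def dilate(mask):
    """3x3 binary dilation: set a pixel if anything in its
    3x3 neighbourhood is set."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            out[i, j] = mask[max(0, i - 1):i + 2, max(0, j - 1):j + 2].any()
    return out

def opening(mask):
    """Erosion followed by dilation: thin structures (one or two pixels
    wide) disappear; regions wider than the kernel are restored."""
    return dilate(erode(mask))
```

A 5x5 blob passes through the opening intact, while a one-pixel-wide line is wiped out, which is exactly the behavior the paragraph above relies on to isolate finger-like parts.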
Next, in step S235, the centroids of the pattern blocks are divided into groups, and the group heart of each group is taken as a center point and set as a region of interest (ROI). Then, in step S240, the images of the outer edge blocks located at the positions of the regions of interest are deleted from the first mask M1 to produce a third mask. In this embodiment, this step can be performed only on a local part of the portrait (for example, the palm).
An example is given below. FIG. 9 is a schematic diagram of the blocks obtained by performing the erosion-dilation operation according to an embodiment of the invention. FIG. 10 is a schematic diagram of the regions of interest according to an embodiment of the invention.
As shown in FIG. 9, the second mask M2 yields the pattern blocks 901 to 906 after the erosion-dilation operation. The centroids of the pattern blocks 901 to 905 are c1 to c5; the centroid of the pattern block 906 is not shown here. Since the methods above process the entire image, the part of block 906 is connected to the body, so its centroid falls at a position that is not drawn. Here, a k-means algorithm can be used to group these centroids. Suppose the grouping result is that the centroids c1 and c2 are grouped into group 1, while the centroids c3, c4, and c5 are grouped into group 2. The group heart of group 1 is G1, and the group heart of group 2 is G2.
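A minimal k-means sketch for grouping centroids such as c1 to c5 into group hearts G1 and G2; the deterministic farthest-point initialisation is our own choice to keep the example reproducible, not something specified by the patent:

```python
import numpy as np

def kmeans(points, k=2, iters=20):
    """Lloyd's k-means on 2-D points; returns (labels, centers).
    The cluster centers play the role of the 'group hearts'."""
    pts = np.asarray(points, float)
    # Deterministic farthest-point initialisation.
    centers = [pts[0]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centers], axis=0)
        centers.append(pts[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pts[labels == c].mean(axis=0)
    return labels, centers
```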
Next, a region of interest of the same size is set for every group heart. As shown in FIG. 10, the corresponding region of interest R1 is set with the group heart G1 as its center point, and the corresponding region of interest R2 is set with the group heart G2 as its center point. After the regions of interest R1 and R2 are obtained, the images of the outer edge blocks located at the positions of R1 and R2 are deleted from the first mask M1, that is, the images corresponding to the outer edge blocks r2', r3', r4', r6', r7', r8', and r9' are deleted, and the third mask is produced.
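Steps S235 to S240 can be sketched as follows; the square ROI of half-width `roi_half` centred on each group heart is an illustrative stand-in for the fixed-size ROI the patent describes, and the coordinate conventions are our own assumptions.

```python
import numpy as np

def make_third_mask(mask1, group_hearts, outer_blocks, roi_half=3):
    """Delete from mask1 only the outer edge blocks that fall inside a
    square ROI centred on a group heart; blocks outside every ROI survive."""
    mask3 = mask1.copy()
    rois = [(gy - roi_half, gy + roi_half, gx - roi_half, gx + roi_half)
            for (gy, gx) in group_hearts]
    for block in outer_blocks:
        inside = any(r0 <= y <= r1 and c0 <= x <= c1
                     for (y, x) in block
                     for (r0, r1, c0, c1) in rois)
        if inside:
            for (y, x) in block:
                mask3[y, x] = 0
    return mask3
```

Keeping the deletion confined to the ROIs is what prevents outer edge blocks belonging to other parts of the scene from being removed by mistake.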
Afterwards, in step S245, a background-removal action is performed on the color image based on the third mask, so as to retain the image of the target in the color image.
FIG. 11 is a schematic diagram of the result obtained in the prior art when edge detection is performed using only a depth image. FIG. 12 is a schematic diagram of the result obtained with the image edge detection method according to an embodiment of the invention. FIG. 12 clearly shows that the known shortcomings are significantly improved.
In summary, the depth information of the depth camera is used to find the block mask of the whole portrait, edge detection is then performed on the color image to find a closer portrait edge, and the positions of the outer edge blocks are found from these two kinds of information. The edge contour of the portrait is then eroded; in this processing the finger parts are very easily truncated by the erosion operation, which makes it possible to locate the fine parts of the portrait, such as the fingers. Once the truncated finger parts are obtained, this information is used to find the pattern blocks of the truncated finger parts, the center point of each pattern block is computed, the center points are divided into groups, and after the position of each group heart is found, an ROI block is set around it. Finally, it is determined which of the outer edge blocks initially found from the depth information and the edge contour of the color image fall inside these ROIs, and the images of the outer edge blocks falling inside the ROIs are correspondingly deleted from the color image, so that the web-shaped blocks between the fingers that belong to the background are cleared as far as possible. In this way, the edge-misjudgment problems caused by ambient light sources, lighting, and individual skin-color differences are avoided, and the background can be separated from the target (for example, a portrait) more accurately.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary skill in the art may make modifications and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.
100‧‧‧electronic device
110‧‧‧processor
120‧‧‧storage device
130‧‧‧image capturing device
S20~S25‧‧‧steps of the image edge detection method
S205~S245‧‧‧steps of the image edge detection method
600, 901~906‧‧‧blocks
A1~A5, B1~B8‧‧‧points
c1~c5‧‧‧centroids
G1, G2‧‧‧group centers
L‧‧‧straight line
M1‧‧‧first mask
M2‧‧‧second mask
N‧‧‧normal line
r1~r10‧‧‧regions
R1, R2‧‧‧regions of interest
FIG. 1 is a block diagram of an electronic device according to an embodiment of the invention.
FIG. 2A and FIG. 2B are flowcharts of image edge detection methods according to different embodiments of the invention.
FIG. 3 is a schematic diagram of edge contours of a local portion in a depth image and a color image according to an embodiment of the invention.
FIG. 4 is a schematic diagram of extracting the third edge contour according to an embodiment of the invention.
FIG. 5 is a schematic diagram of the stacked first, second, and third edge contours of a local portion according to the invention.
FIG. 6 is a schematic diagram of a fourth edge contour obtained by stacking the second edge contour and the third edge contour according to an embodiment of the invention.
FIG. 7 is a schematic diagram of a first mask according to an embodiment of the invention.
FIG. 8 is a schematic diagram of a second mask according to an embodiment of the invention.
FIG. 9 is a schematic diagram of blocks obtained by performing erosion and dilation operations on the second mask according to an embodiment of the invention.
FIG. 10 is a schematic diagram of regions of interest according to an embodiment of the invention.
FIG. 11 is a schematic diagram of the result obtained in the prior art by performing edge detection using only depth images.
FIG. 12 is a schematic diagram of the result obtained by the image edge detection method according to an embodiment of the invention.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW106112116A TWI618031B (en) | 2017-04-12 | 2017-04-12 | Image edge detection method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI618031B true TWI618031B (en) | 2018-03-11 |
| TW201837856A TW201837856A (en) | 2018-10-16 |
Family
ID=62189334
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW106112116A TWI618031B (en) | 2017-04-12 | 2017-04-12 | Image edge detection method |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI618031B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI753332B (en) * | 2019-12-12 | 2022-01-21 | 萬里雲互聯網路有限公司 | Method for processing pictures |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI439962B (en) * | 2010-08-23 | 2014-06-01 | Univ Nat Cheng Kung | Intuitive depth image generation system and depth image generation method |
| US20150206318A1 (en) * | 2013-02-14 | 2015-07-23 | Lsi Corporation | Method and apparatus for image enhancement and edge verificaton using at least one additional image |
| TWI511079B (en) * | 2014-04-30 | 2015-12-01 | Au Optronics Corp | Three-dimension image calibration device and method for calibrating three-dimension image |
| TWI517091B (en) * | 2013-10-31 | 2016-01-11 | 國立臺北科技大學 | Method of 2d-to-3d depth image construction and device thereof |
2017-04-12: TW application TW106112116A granted as patent TWI618031B (status: not active, IP right cessation)
Also Published As
| Publication number | Publication date |
|---|---|
| TW201837856A (en) | 2018-10-16 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | MM4A | Annulment or lapse of patent due to non-payment of fees | |