TW201816661A - Automatic electric scooter identification and part-based outer defect detection method and system thereof - Google Patents
Automatic electric scooter identification and part-based outer defect detection method and system thereof
- Publication number
- TW201816661A (application TW105133934A)
- Authority
- TW
- Taiwan
- Prior art keywords
- image
- license plate
- algorithm
- original body
- body part
- Prior art date
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 43
- 230000007547 defect Effects 0.000 title abstract description 6
- 238000000034 method Methods 0.000 claims abstract description 19
- 238000007689 inspection Methods 0.000 claims abstract description 7
- 238000010845 search algorithm Methods 0.000 claims description 12
- 230000011218 segmentation Effects 0.000 claims description 12
- 238000000605 extraction Methods 0.000 claims description 9
- 238000006243 chemical reaction Methods 0.000 claims description 6
- 238000012937 correction Methods 0.000 claims description 6
- 230000001172 regenerating effect Effects 0.000 claims 1
- 238000010792 warming Methods 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 9
- 238000004364 calculation method Methods 0.000 description 3
- 230000009466 transformation Effects 0.000 description 3
- 230000000052 comparative effect Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 238000012423 maintenance Methods 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 238000012790 confirmation Methods 0.000 description 1
- 230000004807 localization Effects 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
Landscapes
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
The present invention relates to a method and system for automatic electric scooter identification and vehicle-body defect detection. It proposes an automated exterior defect detection module combined with a vehicle identification module, which can identify the electric scooter and, once a defect is detected, clarify who is responsible so that compensation for the damage to the vehicle can be claimed.
With population growth placing an enormous burden on the Earth and its resources, and with advances in technology, the smart city has emerged: it manages urban infrastructure through innovative technology and applies and manages energy resources efficiently by analyzing that infrastructure. Smart cities promote urban development and allow limited energy to be used more effectively. Within the smart city, intelligent transportation systems (ITS) have always received great attention; in response to global warming, green transportation is strongly promoted, and mass transit, electric scooter rental systems, and electric bicycle rental systems such as YouBike are all very popular.
Furthermore, in a public rental system the vehicle body suffers damage in many places through repeated use, which can render the vehicle unrideable and raises maintenance costs. Manual inspection usually cannot provide a fixed inspection standard and is prone to misjudgment caused by human error.
The main object of the present invention is to provide a method for automatic electric scooter identification and vehicle-body defect detection. While the user is shooting with a mobile capture device, the method computes in real time whether the current viewing angle is the one best suited to the automated defect detection system, which improves shooting efficiency and ensures that the object under test is not occluded in the captured image. Feature-point localization is used to find the regions where defects may appear, so that the whole image need not be processed and efficiency is improved. The method also computes the tilt angle from character projections to rectify tilted license plates, and performs automatic license plate recognition with a character classifier trained by a support vector machine.
To achieve the above object, the automatic electric scooter identification and vehicle-body defect detection method of the present invention comprises: a) capturing an image to be tested, the image containing an original body-part image and a license plate image, and setting a first region of interest corresponding to the original body-part image and a second region of interest corresponding to the license plate image; b) converting the original body-part image of the image to be tested into a feature value with a feature extraction algorithm, the feature value being used by a nearest-neighbor search algorithm that operates between the original body-part image and a template image to produce a unique set of two-dimensional codes, the differences between the codes measuring the similarity between the original body-part image and the template image, thereby forming an image similarity unit; c) locating a plurality of positioning points on the original body-part image of the image to be tested and capturing the color features of each positioning point, thereby forming a body-part positioning unit; d) converting the plurality of headlights near the license plate image into a binarized headlight image and finding the paired features of the headlights in that image, thereby forming a paired-headlight unit; e) combining the image similarity unit, the body-part positioning unit, and the paired-headlight unit to compute an optimized viewing module, so that the first region of interest captures the original body-part image and the second region of interest captures the license plate image; f) providing a vehicle-body defect inspection module that searches for defect features on the original body-part image captured in the first region of interest with a feature-based defect localization algorithm and, together with a non-original body-part image, compares the original and non-original body-part images with a defect detection algorithm to find defect features that appear in the non-original image but not in the original one; and g) providing a vehicle identification module that recognizes the identity on the license plate image captured in the second region of interest with a license plate rectification algorithm, a character segmentation algorithm, and a character recognition algorithm.
According to the foregoing features, the nearest-neighbor search algorithm is the KD-ferns algorithm, and the feature extraction algorithm is the HOG algorithm, whose steps comprise: inputting the original body-part image; converting the original body-part image into a grayscale image; computing the gradient direction of the grayscale image; dividing the grayscale image into a plurality of cells; computing a histogram of gradient directions for each cell; and combining neighboring cells into blocks, so that the feature value formed within each block is the HOG feature value.
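For illustration only, and not as part of the claimed subject matter, the HOG steps above can be sketched in Python with OpenCV and NumPy as follows; the cell size, bin count, and block size are assumed example values rather than values specified by the patent.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def hog_descriptor(bgr_image, cell=8, bins=9, block=2):
    """Minimal HOG sketch following steps S21-S26: grayscale, gradients,
    per-cell orientation histograms, block-wise concatenation."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)
    ang = ang % 180.0                              # unsigned gradient direction
    h, w = gray.shape
    ch, cw = h // cell, w // cell                  # number of cells
    hist = np.zeros((ch, cw, bins), np.float32)
    for i in range(ch):
        for j in range(cw):
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j], _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
    feats = []
    for i in range(ch - block + 1):                # group neighbouring cells into blocks
        for j in range(cw - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(feats)
```

The concatenated, normalized block vectors correspond to the per-block HOG feature value formed in the final step.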
According to the foregoing features, the step of capturing the color features of each positioning point comprises: converting the original body-part image from the RGB color space to the YCbCr color space; shrinking the converted image to a predetermined size; scanning the pixels of the resized image with a mask of predetermined size and filtering out some pixels as a plurality of regional maxima; setting a distance range for each regional maximum, taking that maximum as the center and comparing it against the other regional maxima within the range, so that a regional maximum larger than the others is kept as a range maximum while a smaller one is filtered out, whereby a plurality of range maxima are selected; and matching each range maximum against the relative positions defined for the positioning points, so that the location of each positioning point can be found among the range maxima.
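A minimal sketch of this positioning-point search is given below, assuming OpenCV is available and that the positioning stickers stand out in the Cr channel of YCbCr; the mask size, resize target, and distance range are illustrative assumptions.

```python
import numpy as np
import cv2

def find_sticker_points(bgr_image, size=(320, 240), mask=15, min_dist=40):
    """Sketch of the positioning-point search: convert to YCbCr, shrink,
    keep local maxima under a sliding mask, then suppress maxima that lie
    within min_dist of a stronger one (range maxima)."""
    small = cv2.resize(bgr_image, size)
    ycrcb = cv2.cvtColor(small, cv2.COLOR_BGR2YCrCb)
    score = ycrcb[:, :, 1].astype(np.float32)        # Cr channel; assumes red-ish stickers
    local_max = score == cv2.dilate(score, np.ones((mask, mask), np.uint8))
    ys, xs = np.nonzero(local_max)
    order = np.argsort(score[ys, xs])[::-1]          # strongest first
    kept = []
    for k in order:
        p = np.array([xs[k], ys[k]], np.float32)
        if all(np.linalg.norm(p - q) > min_dist for q in kept):
            kept.append(p)                           # range maximum
    return kept
```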
According to the foregoing features, the feature-based defect localization algorithm is the SURF algorithm, and the defect detection algorithm is a projective transformation (homography) algorithm.
According to the foregoing features, the license plate rectification algorithm comprises: inputting a license plate image with a tilt angle; converting the license plate image into a binarized license plate image; filtering out the characters in a plurality of independent regions defined on the binarized license plate image; computing the tilt angle of each independent region, rectifying the license plate image by that angle, and recomputing the tilt angle of each rectified independent region; if the tilt angle stays between -1 and +1, outputting the rectified license plate image as a corrected license plate image; otherwise, regenerating the tilt angle to be corrected from the license plate image and rectifying the image again with the newly generated tilt angle.
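The rectification loop can be sketched as follows. Note that the tilt here is estimated from a baseline fit over the connected character regions, a simplification of the projection-based estimate described in the patent, and the thresholds and iteration limit are assumed values.

```python
import numpy as np
import cv2

def rectify_plate(gray_plate, max_iter=5):
    """Sketch of the rectification loop: binarize, take the character regions as
    connected components, estimate their tilt, rotate, and repeat until the
    residual tilt stays within -1..+1 degrees."""
    img = gray_plate.copy()
    for _ in range(max_iter):
        # assumes dark characters on a light plate; flip to THRESH_BINARY otherwise
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
        boxes = [s for s in stats[1:] if s[cv2.CC_STAT_AREA] > 50]  # drop noise blobs
        if len(boxes) < 2:
            return img
        # crude tilt estimate: slope of the character baselines (box bottoms)
        xs = np.array([b[cv2.CC_STAT_LEFT] + b[cv2.CC_STAT_WIDTH] / 2 for b in boxes])
        ys = np.array([b[cv2.CC_STAT_TOP] + b[cv2.CC_STAT_HEIGHT] for b in boxes])
        angle = np.degrees(np.arctan(np.polyfit(xs, ys, 1)[0]))
        if -1.0 <= angle <= 1.0:
            return img                               # tilt within tolerance
        h, w = img.shape
        # sign convention may need flipping depending on the coordinate frame
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        img = cv2.warpAffine(img, rot, (w, h), flags=cv2.INTER_LINEAR)
    return img
```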
According to the foregoing features, the character segmentation algorithm comprises: converting the corrected license plate image into a binarized corrected license plate image, in which the characters are padded with a mask of predetermined size to form a padded image; deciding whether the "-" symbol in the padded image can be segmented on its own; if it can, removing the "-" position and splitting the padded image into two partial images; if it cannot, computing the vertical histogram of the positions before and after the midpoint of the padded image, applying a threshold to remove pixels below it, and splitting the padded image into two parts at the midpoint; computing the vertical histogram of the padded image and checking whether the character "1" is present; if it is, cutting with appropriate white space before and after the "1"; if it is not, cutting the two partial images into three equal parts so that the individual characters are separated; and computing the features of each character. The character recognition algorithm is an SVM classifier algorithm whose step comprises recognizing each character with a previously trained SVM classifier.
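A hedged sketch of projection-based segmentation and SVM recognition follows, using NumPy, OpenCV, and scikit-learn. The gap threshold, grid size, and the training data names (`train_chars`, `train_labels`) are assumptions, and the simple column-gap rule stands in for the "-" and "1" handling described above.

```python
import numpy as np
import cv2
from sklearn import svm

def split_characters(binary_plate, min_width=8):
    """Sketch of projection-based segmentation: columns whose vertical sum
    falls below a threshold are treated as gaps between characters."""
    col_sum = binary_plate.sum(axis=0)
    gaps = col_sum < 0.05 * col_sum.max()            # threshold on the vertical histogram
    chars, start = [], None
    for x, is_gap in enumerate(gaps):
        if not is_gap and start is None:
            start = x
        elif is_gap and start is not None:
            if x - start >= min_width:
                chars.append(binary_plate[:, start:x])
            start = None
    if start is not None:
        chars.append(binary_plate[:, start:])
    return chars

def char_feature(char_img, grid=(8, 4)):
    """Average pixel value over a grid of small regions, concatenated (cf. step S728)."""
    resized = cv2.resize(char_img.astype(np.float32), (grid[1] * 4, grid[0] * 4))
    return resized.reshape(grid[0], 4, grid[1], 4).mean(axis=(1, 3)).ravel()

# Training and prediction with a linear SVM (train_chars / train_labels are assumed):
# clf = svm.SVC(kernel="linear").fit([char_feature(c) for c in train_chars], train_labels)
# plate_text = clf.predict([char_feature(c) for c in split_characters(binary_plate)])
```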
According to the foregoing features, the license plate image is captured from the left viewing angle with a tilt tolerance of 46° to 65° in yaw, 23° to 41° in roll, and 21° to 37° in pitch.
To achieve the above object, the automatic electric scooter identification and vehicle-body defect detection system of the present invention comprises: a capture module that captures an image to be tested, the image containing an original body-part image and a license plate image, and sets a first region of interest corresponding to the original body-part image and a second region of interest corresponding to the license plate image; an optimized viewing module, computed from the combination of an image similarity unit, a body-part positioning unit, and a paired-headlight unit, which lets the first region of interest capture the original body-part image and the second region of interest capture the license plate image, wherein the image similarity unit uses a feature extraction algorithm to convert the original body-part image into a feature value used by a nearest-neighbor search algorithm that operates between the original body-part image and a template image to produce a unique set of two-dimensional codes whose differences measure the similarity between the two images, the body-part positioning unit locates a plurality of positioning points on the original body-part image and captures the color features of each positioning point, and the paired-headlight unit converts the plurality of headlights near the license plate image into a binarized headlight image and finds the paired features of the headlights in that image; a vehicle-body defect inspection module that searches for defect features on the original body-part image captured in the first region of interest with a feature-based defect localization algorithm and, together with a non-original body-part image, compares the original and non-original body-part images with a defect detection algorithm to find defect features that appear in the non-original image but not in the original one; and a vehicle identification module that recognizes the identity on the license plate image captured in the second region of interest with a license plate rectification algorithm, a character segmentation algorithm, and a character recognition algorithm.
With the above technical means, the method proposed in the present invention performs automated detection of exterior defects on electric scooters and automated license plate recognition for plates captured at large tilt angles. Before defect detection, the system computes real-time image information at capture time to ensure that the shooting angle is the optimal viewing angle; the optimized viewing module consists of three parts: the image similarity unit, the body-part positioning unit (with positioning stickers as targets), and the paired-headlight unit. In automated defect detection, feature localization is used to locate possible defects, and the images taken before and after use are compared to judge whether a defect is present. In automated license plate recognition, the character tilt angle is corrected on a projection basis so that a tilted license plate is straightened, and the segmented characters are recognized by a trained character classifier.
First, the steps of a preferred embodiment of the automatic electric scooter identification and vehicle-body defect detection method of the present invention are described with reference to the flowchart of FIG. 1, together with FIG. 2, FIGS. 3A-3B, FIGS. 4A-4G, FIGS. 5A-5C, FIG. 6, FIGS. 7A-7D, FIGS. 8A-8G, and FIGS. 9A-9B.
Step S1: capture an image to be tested (A) having an original body-part image (C1) and a license plate image (D), and set a first region of interest (B1) corresponding to the original body-part image (C1) and a second region of interest (B2) corresponding to the license plate image (D), as shown in FIG. 2.
Step S2: a feature extraction algorithm converts the original body-part image (C1) of the image to be tested into a feature value, which is used by a nearest-neighbor search (NNS) algorithm. The NNS algorithm operates between the original body-part image (C1) of the image to be tested (A) and a template image (X) to produce a unique set of two-dimensional codes, and the similarity between the original body-part image (C1) and the template image (X) is judged from the differences between the codes, forming an image similarity unit 121, as shown in FIG. 3A. A database (Y) stores a plurality of template images (X), and the nearest-neighbor search algorithm is the KD-ferns algorithm, which finds the template image (X) in the database most similar to the original body-part image (C1); this is the similarity search performed by the image similarity unit 121. The KD-ferns algorithm has the property that, at the same classification level, all nodes share the same split value and split dimension; at each level, the dimension that produces the largest variation over the whole data set is chosen, and the split value that yields the maximum entropy after splitting is selected from that dimension. The KD-ferns algorithm follows the reference "Fast multiple-part based object detection using KD-Ferns"; it is prior art, not the patented subject matter of the present invention, and is not described further. The feature extraction algorithm is the HOG algorithm and comprises: step S21: input the original body-part image (C1); step S22: convert the original body-part image (C1) into a grayscale image; step S23: compute the gradient direction of the grayscale image; step S24: divide the grayscale image into a plurality of cells; step S25: compute a histogram of gradient directions for each cell; step S26: combine neighboring cells into blocks, so that the feature value formed within each block is the HOG feature value, as shown in FIG. 3B. The per-cell orientation statistics within each block are concatenated into the descriptor of the image. The HOG algorithm follows the reference "Histograms of oriented gradients for human detection"; it is prior art, not the patented subject matter of the present invention, and is not described further.
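For illustration, the similarity search against the template database (Y) can be approximated with an ordinary KD-tree nearest-neighbor query over HOG descriptors, as sketched below. This is a stand-in for the KD-ferns algorithm of the cited reference, and the descriptor length, template count, and distance threshold are placeholder assumptions.

```python
import numpy as np
from sklearn.neighbors import KDTree

# HOG descriptors of the template images X stored in database Y (assumed precomputed,
# e.g. with the hog_descriptor sketch above); a plain KD-tree replaces KD-ferns here.
template_feats = np.random.rand(200, 1764).astype(np.float32)   # placeholder data
tree = KDTree(template_feats)

def most_similar_template(query_feat, max_dist=5.0):
    """Return the index of the closest template and whether it is similar enough."""
    dist, idx = tree.query(query_feat.reshape(1, -1), k=1)
    return int(idx[0, 0]), float(dist[0, 0]) <= max_dist
```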
Step S3: the original body-part image (C1) of the image to be tested (A) carries a plurality of positioning points (E), and the color features of each positioning point (E) are captured to form a body-part positioning unit 122. In this embodiment, capturing the color features of each positioning point (E) comprises: converting the original body-part image (C1) from the RGB color space to the YCbCr color space, as shown in FIG. 4A; shrinking the converted original body-part image (C1) to a predetermined size, as shown in FIG. 4B; scanning the pixels of the resized image (C1) with a mask (F) of predetermined size and filtering out some pixels as a plurality of regional maxima (G), as shown in FIGS. 4C-4D; setting a distance range for each regional maximum (G), taking that maximum as the center and comparing it against the other regional maxima within the range, so that a regional maximum (G) larger than the others is set as a range maximum (H) while a smaller one is filtered out, whereby a plurality of range maxima (H) are selected, as shown in FIG. 4E; and matching each range maximum (H) against the relative positions defined for the positioning points (E), so that the location of each positioning point (E) can be found among the range maxima (H), as shown in FIG. 4F. Finally, FIG. 4G shows the positioning result of the body-part positioning unit 122.
Step S4: there are a plurality of headlights (I) near the license plate image (D) of the image to be tested (A); they are converted into a binarized headlight image (J), and the paired features of the headlights (I) are found in the binarized headlight image (J) to form a paired-headlight unit 123. In this embodiment, Cb and Cr images (D1, D2) are generated from the Cb and Cr channels of the YCbCr color space, as shown in FIG. 5A; a threshold is applied to produce the binarized headlight image (J) and locate the headlights (I), as shown in FIG. 5B; the connected regions of the binarized headlight image (J) are then filtered by area; and finally, as shown in FIG. 5C, the headlight (I) positions are extracted from the spatial relationship between the pair of headlights, which is the search result of the paired-headlight unit 123.
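A minimal sketch of the paired-headlight search follows, assuming the lamps appear as bright blobs in the Cr channel; the threshold and area filter are assumed values, and the pairing score (vertical alignment plus area similarity) is an illustrative simplification of the spatial relationship described above.

```python
import numpy as np
import cv2

def find_headlight_pair(bgr_image, cr_thresh=150, min_area=200):
    """Sketch of the paired-headlight unit: threshold the Cr channel, filter
    connected regions by area, then pick two blobs of similar size that lie
    roughly on the same horizontal line."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    _, binary = cv2.threshold(ycrcb[:, :, 1], cr_thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    blobs = [(centroids[i], stats[i, cv2.CC_STAT_AREA])
             for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    best, best_score = None, np.inf
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (c1, a1), (c2, a2) = blobs[i], blobs[j]
            # heuristic score: vertical misalignment plus relative area difference
            score = abs(c1[1] - c2[1]) + abs(a1 - a2) / max(a1, a2)
            if score < best_score:
                best, best_score = (tuple(c1), tuple(c2)), score
    return best
```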
Step S5: the image similarity unit 121, the body-part positioning unit 122, and the paired-headlight unit 123 are combined to compute an optimized viewing module 12, which lets the first region of interest (B1) capture the original body-part image (C1) and the second region of interest (B2) capture the license plate image (D). If the condition of the optimal viewing angle is satisfied, the first region of interest (B1) and the second region of interest (B2) turn yellow, as shown in FIG. 6.
Step S6: a vehicle-body defect inspection module 13 is provided, which uses a feature-based defect localization algorithm to search for defect features (N) on the original body-part image (C1) captured in the first region of interest (B1). In this embodiment, the localization algorithm is the SURF algorithm: the original body-part image (C1) is processed with the integral image, the Hessian matrix, and the Haar wavelet transform to detect SURF feature points (K), as shown in FIG. 7A. The SURF algorithm follows the reference "Speeded-Up Robust Features (SURF)"; it is prior art, not the patented subject matter of the present invention, and is not described further. Together with a non-original body-part image (C2), a defect detection algorithm compares the difference between the original body-part image (C1) and the non-original body-part image (C2); in this embodiment, the detection algorithm is a projective transformation (homography) algorithm that maps the relative positions of the original body-part image (C1) and the non-original body-part image (C2), as shown in FIG. 7B. The corresponding positions of the SURF feature points (K) in the two images are then found, the viewpoint change between images taken at different times is fixed to a single viewpoint, and the non-original body-part image (C2) is converted into a transformed image (M) at the same viewpoint as the original body-part image (C1). Small yellow boxes (L) of predetermined size frame the surroundings of the differing SURF feature points (K) for comparison, so as to find defect features (N) present in the non-original body-part image (C2) but not in the original body-part image (C1), as shown in FIG. 7C. The defect features (N) of the non-original body-part image (C2) are described with HOG features over the regions to be compared, and the defect feature (N) is decided in two steps: (1) compare the difference between the two corresponding comparison regions and apply a threshold to obtain the first-stage defect candidates; however, because a perfectly accurate positioning-point center cannot be found for the transformation matrix, a small error is introduced that easily causes many false positives in the first stage; (2) compute the orientation statistics within the comparison regions, compute the difference in orientation, and apply a threshold to the candidates from the first stage to decide the defect features (N), as shown in FIG. 7D. The projective transformation algorithm is prior art, not the patented subject matter of the present invention, and is not described further.
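The first stage of the defect comparison in step S6 can be sketched as below, assuming an OpenCV build with the non-free SURF module (ORB may be substituted otherwise). Only the patch-difference stage (1) is shown, the orientation-based second stage (2) is omitted, and the Hessian threshold, patch size, and difference threshold are assumed values.

```python
import numpy as np
import cv2

def defect_candidates(ref_img, test_img, patch=24, diff_thresh=30.0):
    """Sketch of step S6: match SURF keypoints between the before-use (reference)
    and after-use (test) images, warp the test image onto the reference view with
    a homography, then flag patches around keypoints whose appearance changed."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    k1, d1 = surf.detectAndCompute(ref_img, None)
    k2, d2 = surf.detectAndCompute(test_img, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    if len(matches) < 4:
        return []
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return []
    warped = cv2.warpPerspective(test_img, H, ref_img.shape[1::-1])
    candidates = []
    for kp in k1:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        if x < patch or y < patch:
            continue
        a = ref_img[y - patch:y + patch, x - patch:x + patch]
        b = warped[y - patch:y + patch, x - patch:x + patch]
        if a.size and np.mean(cv2.absdiff(a, b)) > diff_thresh:  # first-stage threshold
            candidates.append((x, y))
    return candidates
```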
Step S7: a vehicle identification module 14 is provided, which uses a license plate rectification algorithm, a character segmentation algorithm, and a character recognition algorithm to recognize the identity on the license plate image (D) captured in the second region of interest (B2). In this embodiment, as shown in FIG. 8A, the license plate rectification algorithm comprises: step S711: input a license plate image (D) with a tilt angle; step S712: convert the license plate image (D) into a binarized license plate image (O); step S713: filter out the characters in the plurality of independent regions (P) defined on the binarized license plate image (O). Under the optimal viewing angle, the headlight (I) positions are captured and a predetermined range is extended downward from them to find the approximate position of the license plate and obtain the license plate image (D); the license plate image (D) is then converted into a grayscale image, a brightness threshold is applied to produce the binarized license plate image (O), the independent regions (P) are formed by 8-connected labeling, and the area and aspect ratio of each connected independent region (P) are computed to obtain the character positions, as shown in FIG. 8B. Step S714: compute the tilt angle (r) of each independent region (P), rectify the license plate image (D) by that angle, and recompute the tilt angle of each rectified independent region (P). The rotation angle (θrotation) between the first and last characters is computed, the grayscale license plate image is rotated by (θrotation) to form a rotated image (Q), as shown in FIG. 8C, a rotation threshold is applied to form a binarized rotated license plate image (R), and the cut is determined by the first character, forming a cut binarized license plate image (S), as shown in FIG. 8D. The tilt angle (r) of each character in the cut binarized license plate image (S) is then sought: a character is divided into three parts, as shown in FIG. 8E, and the upper and lower halves are projected onto the horizontal axis. For a license plate image (D) without tilt, the midpoint (up0) of the upper projection range and the midpoint (down0) of the lower projection range lie at the same position, with the upper projection range (up1, up2) and the lower projection range (down1, down2) set respectively; if the character is tilted, the line through the projection midpoints (up0, down0) of the upper and lower halves is inclined. The independent regions (P) of the cut binarized license plate image (S) are found with 8-connected labeling, and the tilt angle (r) of each independent region (P) is computed, as shown in FIG. 8F. Steps S715, S716: if the tilt angle (r) stays between -1 and +1, the mode of the tilt angles (r) is taken as the rotation angle (θrotation) of the cut binarized license plate image (S), and the rectified license plate image (D) is output as a corrected license plate image (T), as shown in FIG. 8G. Otherwise, steps S715, S717: if the tilt angle (r) is not between -1 and +1, the license plate image (D) regenerates the tilt angle to be corrected; step S718: rectify the license plate image (D) again with the newly generated tilt angle.
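The per-character tilt estimate of FIG. 8E can be sketched as follows; this assumes a binarized character image and reduces the upper and lower projection ranges to their midpoints (up0, down0).

```python
import numpy as np

def character_tilt(char_binary):
    """Sketch of the tilt estimate from FIG. 8E: project the upper and lower
    halves of a binarized character onto the horizontal axis and take the angle
    of the line through the two projection midpoints."""
    h, w = char_binary.shape
    upper, lower = char_binary[: h // 2], char_binary[h // 2:]

    def mid(half):
        cols = np.nonzero(half.sum(axis=0))[0]
        return (cols[0] + cols[-1]) / 2.0 if cols.size else w / 2.0

    up0, down0 = mid(upper), mid(lower)
    return np.degrees(np.arctan2(up0 - down0, h / 2.0))   # tilt angle r in degrees
```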
In this embodiment, as shown in FIG. 9A, the character segmentation algorithm comprises: step S721: convert the corrected license plate image (T) into a binarized corrected license plate image (W), in which the characters are padded with a mask (U) of predetermined size to form a padded image (V), as shown in FIG. 9B; step S722: decide whether the "-" symbol in the padded image (V) can be segmented on its own; step S723: if it can, remove the "-" position and split the padded image (V) into two partial images; otherwise, step S724: if the "-" symbol cannot be segmented on its own, compute the vertical histogram of the positions before and after the midpoint of the padded image (V), apply a threshold to remove pixels below it, and split the padded image (V) into two parts at the midpoint; step S725: compute the vertical histogram of the padded image (V) and check whether the character "1" is present; step S726: if the character "1" is present, cut with appropriate white space before and after it; otherwise, step S727: if the character "1" is not present, cut the two partial images into three equal parts so that the individual characters are separated; step S728: compute the features of each character. The features of the character images are then determined: the rectified and segmented character images have no fixed shape, so each character is cut into small regions, the average pixel value of each small region is computed, and the averages of the small regions are concatenated to form the image feature of that character. The character recognition algorithm is an SVM classifier, and its step comprises: step S729: recognize each character with a previously trained SVM classifier. The SVM classifier algorithm is prior art, not the patented subject matter of the present invention, and is not described further. In addition, the license plate image (D) is captured from the left viewing angle with a tilt tolerance of 46° to 65° in yaw, 23° to 41° in roll, and 21° to 37° in pitch.
The above method for automatic electric scooter identification and vehicle-body defect detection is applied in the automatic electric scooter identification and vehicle-body defect detection system 10 shown in FIG. 10, which comprises: a capture module 11 that captures an image to be tested (A) having an original body-part image (C1) and a license plate image (D), and sets a first region of interest (B1) corresponding to the original body-part image (C1) and a second region of interest (B2) corresponding to the license plate image (D); an optimized viewing module 12, computed from the combination of an image similarity unit 121, a body-part positioning unit 122, and a paired-headlight unit 123, which lets the first region of interest (B1) capture the original body-part image (C1) and the second region of interest (B2) capture the license plate image (D); a vehicle-body defect inspection module 13 that searches for defect features (N) on the original body-part image (C1) captured in the first region of interest with a feature-based defect localization algorithm and, together with a non-original body-part image (C2), compares the original body-part image (C1) and the non-original body-part image (C2) with a defect detection algorithm to find defect features (N) present in the non-original body-part image (C2) but not in the original body-part image (C1); and a vehicle identification module 14 that recognizes the identity on the license plate image (D) captured in the second region of interest (B2) with a license plate rectification algorithm, a character segmentation algorithm, and a character recognition algorithm.
The automatic electric scooter identification and vehicle-body defect detection system 10 is thus applied to damage liability in electric scooter rental. The user captures images before use; "before use" means that the user has just completed the rental procedure and has not yet ridden the vehicle, so the photographs serve as evidence. The system 10 computes the optimal viewing angle in real time while the user is shooting, performs license plate recognition once shooting is complete, and, after the user confirms, archives the images under the recognized license plate to complete the rental procedure, so that they can later be compared with the images taken at return. When returning the vehicle the user captures images a second time; "after use" means that the user wants to return the rented vehicle, at which point the system 10 performs defect detection against the images taken before use. As before, the system computes the optimal viewing angle in real time during capture and performs license plate recognition once capture is complete; after the user confirms, the corresponding pre-rental images (those archived in the first stage) are read from the database according to the license plate recognition result, and the subsequent automated defect detection is carried out to complete the return procedure.
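As a sketch of the rental and return workflow described above, the before-use images can be archived under the recognized plate and retrieved at return time for comparison; the in-memory storage and method names below are placeholders of this sketch, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RentalRecord:
    plate: str
    before_images: List[bytes] = field(default_factory=list)

class RentalLedger:
    """Archive before-use images under the recognized plate; hand them back at
    return time so the defect inspection module can compare before/after."""
    def __init__(self) -> None:
        self._records: Dict[str, RentalRecord] = {}

    def rent(self, plate: str, before_images: List[bytes]) -> None:
        self._records[plate] = RentalRecord(plate, list(before_images))

    def give_back(self, plate: str) -> List[bytes]:
        record = self._records.pop(plate)      # images archived at rental time
        return record.before_images            # caller runs defect detection on these
```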
Based on this construction, the main contributions of the present invention are: (1) real-time judgment of the optimal viewing angle; (2) efficient, feature-localization-based automated defect detection; and (3) tilted-license-plate rectification based on character projection together with an automated license plate recognition system. To maintain basic rideability, an automated defect detection system is very important: it not only significantly reduces vehicle-body maintenance costs but also reduces errors of human judgment.
In summary, the structure disclosed by the present invention is unprecedented, can indeed achieve an improvement in efficacy, and has industrial applicability, fully satisfying the requirements for an invention patent; the Office is therefore respectfully requested to grant a patent so as to encourage innovation.
However, the drawings and descriptions disclosed above are merely preferred embodiments of the present invention; modifications or equivalent changes made by those skilled in the art within the spirit of this case should still be included within the scope of the patent application of this case.
S1~S7‧‧‧Steps
S21~S26‧‧‧Steps
S711~S718‧‧‧Steps
S721~S729‧‧‧Steps
A‧‧‧Image to be tested
B1‧‧‧First region of interest
B2‧‧‧Second region of interest
C1‧‧‧Original body-part image
C2‧‧‧Non-original body-part image
D‧‧‧License plate image
D1‧‧‧Cb image
D2‧‧‧Cr image
E‧‧‧Positioning point
F, U‧‧‧Masks
G‧‧‧Regional maximum
H‧‧‧Range maximum
I‧‧‧Headlight
J‧‧‧Binarized headlight image
K‧‧‧SURF feature point
L‧‧‧Small yellow box
N‧‧‧Defect feature
O‧‧‧Binarized license plate image
P‧‧‧Independent region
Q‧‧‧Rotated image
R‧‧‧Binarized rotated license plate image
S‧‧‧Cut binarized license plate image
T‧‧‧Corrected license plate image
V‧‧‧Padded image
W‧‧‧Binarized corrected license plate image
X‧‧‧Template image
Y‧‧‧Database
r‧‧‧Tilt angle
θrotation‧‧‧Rotation angle
up0‧‧‧Midpoint of the upper projection range
down0‧‧‧Midpoint of the lower projection range
up1, up2‧‧‧Upper projection range
down1, down2‧‧‧Lower projection range
10‧‧‧Automatic electric scooter identification and vehicle-body defect detection system
11‧‧‧Capture module
12‧‧‧Optimized viewing module
121‧‧‧Image similarity unit
122‧‧‧Body-part positioning unit
123‧‧‧Paired-headlight unit
13‧‧‧Vehicle-body defect inspection module
14‧‧‧Vehicle identification module
FIG. 1 is a flowchart of the present invention.
FIG. 2 is a schematic diagram of image capture according to the present invention.
FIG. 3A is a schematic diagram of the KD-ferns search of the present invention.
FIG. 3B is a schematic diagram of the HOG descriptor of the present invention.
FIG. 4A is a schematic diagram of the color-space conversion of the original body-part image.
FIG. 4B is a schematic diagram of the original body-part image shrunk to a predetermined size.
FIG. 4C is a schematic diagram of the definition of a regional maximum.
FIG. 4D is a schematic diagram of the regional-maximum search result.
FIG. 4E is a schematic diagram of the range-maximum search result.
FIG. 4F is a schematic diagram of the positioning-point search result.
FIG. 4G is a schematic diagram of the detection result on the original body-part image.
FIG. 5A is a schematic diagram of the license plate image in the Cb and Cr channels.
FIG. 5B is a schematic diagram of the conversion into the binarized headlight image.
FIG. 5C is a schematic diagram of the paired-headlight search.
FIG. 6 is a schematic diagram of capturing the original body-part image and the license plate image.
FIG. 7A is a schematic diagram of the SURF feature localization result on the non-original body-part image.
FIG. 7B is a schematic diagram of the correspondence between the non-original and original body-part images.
FIG. 7C is a schematic diagram of the defect comparison step.
FIG. 7D is a schematic diagram of the defect comparison result.
FIG. 8A is a flowchart of license plate rectification according to the present invention.
FIG. 8B is a schematic diagram of the license plate image and the character positions.
FIG. 8C is a schematic diagram of the rotation angle and the rotated image.
FIG. 8D is a schematic diagram of the binarized rotated license plate image and the cut binarized license plate image.
FIG. 8E is a schematic diagram of the character projection.
FIG. 8F is a schematic diagram of the tilt angles of the independent regions.
FIG. 8G is a schematic diagram of the corrected license plate image.
FIG. 9A is a flowchart of license plate recognition according to the present invention.
FIG. 9B is a schematic diagram of the padded image.
FIG. 10 is a diagram of the system of the present invention in use.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW105133934A TWI603271B (en) | 2016-10-20 | 2016-10-20 | Automatic electric scooter identification an d part-based outer defect detection method and system thereof |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW105133934A TWI603271B (en) | 2016-10-20 | 2016-10-20 | Automatic electric scooter identification an d part-based outer defect detection method and system thereof |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI603271B TWI603271B (en) | 2017-10-21 |
| TW201816661A true TW201816661A (en) | 2018-05-01 |
Family
ID=61011029
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW105133934A TWI603271B (en) | 2016-10-20 | 2016-10-20 | Automatic electric scooter identification an d part-based outer defect detection method and system thereof |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI603271B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI697425B (en) * | 2019-02-26 | 2020-07-01 | 國立臺灣科技大學 | Car body repair system and method thereof |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109325488A (en) * | 2018-08-31 | 2019-02-12 | 阿里巴巴集团控股有限公司 | For assisting the method, device and equipment of car damage identification image taking |
| CN111639645A (en) * | 2020-06-03 | 2020-09-08 | 北京首汽智行科技有限公司 | License plate number identification method |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TW201123097A (en) * | 2009-12-23 | 2011-07-01 | Gorilla Technology Inc | Automatic traffic violation detection system and method of the same |
| JP2013093013A (en) * | 2011-10-06 | 2013-05-16 | Ricoh Co Ltd | Image processing device and vehicle |
| TWI497422B (en) * | 2012-12-25 | 2015-08-21 | Univ Nat Chiao Tung | A system and method for recognizing license plate image |
- 2016-10-20 TW TW105133934A patent/TWI603271B/en not_active IP Right Cessation
Also Published As
| Publication number | Publication date |
|---|---|
| TWI603271B (en) | 2017-10-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR101403876B1 (en) | Method and Apparatus for Vehicle License Plate Recognition | |
| US8724885B2 (en) | Integrated image processor | |
| CN103971128B (en) | A kind of traffic sign recognition method towards automatic driving car | |
| WO2021195563A1 (en) | Arrangements for digital marking and reading of items, useful in recycling | |
| Mishra et al. | Segmenting “simple” objects using RGB-D | |
| CN104036480B (en) | Quick elimination Mismatching point method based on surf algorithm | |
| CN103052968A (en) | Object detection device, object detection method, and program | |
| CN107563330B (en) | A method for correcting horizontally tilted license plates in surveillance video | |
| CN105225281B (en) | A kind of vehicle checking method | |
| TWI603271B (en) | Automatic electric scooter identification an d part-based outer defect detection method and system thereof | |
| CN109146859A (en) | A kind of pavement crack detection system based on machine vision | |
| CN114898153B (en) | A two-stage surface defect recognition method combining classification and detection | |
| CN120598966B (en) | Rare metal processing quality detection method based on machine vision | |
| CN102901735B (en) | System for carrying out automatic detections upon workpiece defect, cracking, and deformation by using computer | |
| JP2013089129A (en) | Mobile object detection device, computer program, and mobile object detection method | |
| CN111476230B (en) | License plate positioning method for improving combination of MSER and multi-feature support vector machine | |
| TW202318338A (en) | Detection method | |
| CN113869292B (en) | Target detection method, device and equipment for automatic driving | |
| CN115375679A (en) | Edge finding and point searching positioning method and device for defective chip | |
| CN111815725B (en) | QR code region positioning method | |
| CN110717910B (en) | CT image target detection method based on convolutional neural network and CT scanner | |
| CN113920055A (en) | Defect detection method | |
| CN119379583A (en) | A method and device for detecting surface defects of industrial parts | |
| CN118397489A (en) | Power scene defect identification de-duplication method and device, electronic terminal and medium | |
| Yang et al. | Vehicle detection from low quality aerial LIDAR data |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | MM4A | Annulment or lapse of patent due to non-payment of fees | |