
TW201117132A - Method for identifying moving foreground objects in an orthorectified photographic image - Google Patents

Method for identifying moving foreground objects in an orthorectified photographic image

Info

Publication number
TW201117132A
TW201117132A (application TW98137967A)
Authority
TW
Taiwan
Prior art keywords
tile
value
region
image
grayscale
Prior art date
Application number
TW98137967A
Other languages
Chinese (zh)
Inventor
Tim Bekaert
Pawel Kaczanowski
Marcin Cuprjak
Original Assignee
Tele Atlas Bv
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tele Atlas Bv filed Critical Tele Atlas Bv
Priority to TW98137967A priority Critical patent/TW201117132A/en
Publication of TW201117132A publication Critical patent/TW201117132A/en

Landscapes

  • Image Processing (AREA)

Abstract

Photographic images recorded with mobile mapping vehicles (20) in real life situations usually contain cars or other moving objects (34) that cover visual information on the road surface (24). According to the techniques of this invention, moving objects (34) are detected by grayscale differencing in overlapping pixels or sections of two or more orthorectified image tiles. Based on moving object identification, masks are generated for each orthorectified tile. The masks are then compared and priorities established based on grayscale values associated with the masks. Mosaics of a large surface of interest such as the Earth can be assembled from a plurality of overlapping photographic images with moving objects (34) largely removed from the resulting mosaic.

Description

201117132 VI. Description of the Invention:

[Technical Field of the Invention]

The present invention relates to a method for identifying moving foreground objects in an orthorectified photographic image.

[Prior Art]

Digital maps and digital map databases are used in navigation systems. Digital maps can be obtained by various methods, including high-resolution imaging from space and orthorectified images acquired from land-based mobile vehicles. In the latter case, the images obtained by the land-based mapping system must be converted into orthorectified images that are scale-corrected and depict ground features as if viewed from directly above their precise ground positions. An orthorectified image is an aerial photograph that has been geometrically corrected so that the scale of the photograph is uniform, meaning that the photograph can be considered equivalent to a map. An orthorectified image can be used to measure true distances, because it is an accurate representation of the surface of interest (for example, the Earth's surface). Orthorectified images are adjusted for topographic relief, lens distortion and camera tilt.

Orthorectified images can be obtained very efficiently from aerial photographs. However, errors are often introduced, which can lead to inaccurate mapping of geo-positioned data. One problem is that aerial images are normally not acquired exactly perpendicular to the Earth's surface. Even when an image is acquired close to vertical, only its exact centerline will be vertical. To orthorectify such an image, terrain height information must additionally be obtained. The lack of accurate height information for objects in the image, combined with the triangulation methods used to determine the orthorectification, can lead to inaccuracies in these images of a dozen meters or more. Accuracy can be improved by acquiring overlapping images and comparing the same surface as obtained in subsequent images; however, the accuracy obtainable from this approach is limited relative to its cost.

As used herein, the term "horizontal" data or information corresponds to objects having surfaces parallel or substantially parallel to the Earth's surface. The term "vertical" data or information corresponds to objects that can be observed along a viewing axis substantially parallel to the Earth's surface. Vertical information cannot be obtained from typical top-down aerial or satellite imagery.

Mobile mapping vehicles (normally land vehicles, such as vans or cars, though possibly also aerial vehicles) are used to collect mobile data for enhancing digital map databases. A mobile mapping vehicle is usually fitted with a number of cameras, some of them possibly stereoscopic, all of which are accurately geo-positioned because the vehicle carries a precise GPS receiver and further position and orientation determination equipment (for example, an inertial navigation system, INS). While driving along the road network or an established route, geocoded image sequences are captured as successive frames or images. Geocoding means that the position computed by the GPS receiver and (possibly) the INS, together with (possibly) additional heading and/or orientation data associated with the image, is attached as metadata to each image captured by a camera. The mobile mapping vehicle records one or more image sequences of the surface of interest (for example, the road surface), and for each image of an image sequence the geographic position in a geographic coordinate reference system is accurately determined, together with position and orientation data of the image sequence relative to that geographic position. An image sequence with corresponding geographic position information is referred to as a geocoded image sequence. Other data may also be collected by other sensors and geocoded simultaneously and in a similar way.

Prior techniques are known for obtaining orthorectified tiles for assembling a bird's-eye mosaic (BEM) of a large surface of interest, such as the Earth. An excellent example of such a technique is described in the applicant's International Publication No. WO/2008/044927, published on 17 July 2008. To the extent that incorporation by reference is permitted, the entire disclosure of that international publication is hereby incorporated by reference and relied upon.

According to the known techniques, orthorectified images are combined together to produce a mosaic without regard to the quality of the image content they contain. Rather, the images are typically stitched one after another in sequence, much as shingles on a roof are laid so that each overlaps the one before it. Although generally effective, it frequently happens that a moving object captured in a photographic image (for example, a motor vehicle passing the mobile mapping vehicle, or being passed by it) appears in an overlying tile rather than an underlying tile, so that a less desirable tile overlies a more desirable one. As a result, moving foreground objects that partially obscure the map of the road surface may appear in the finished BEM.

The applicant's co-pending application P6015247 PCT, entitled "Method Of An Apparatus For Producing A Multi-Viewpoint Panorama", describes a method of producing vertical panoramas using vertical image sequences acquired from multiple viewpoints of a mobile mapping vehicle. While the panoramas are produced, laser scanner data are used to detect objects close to the cameras. Undesirable objects captured in the images are removed by marking the portions of the vertical images that should not be used; the portions that should be used are then projected onto the panorama surface.

The use of laser data, particularly in combination with vertical images, is an expensive, cumbersome and less desirable technique for producing orthorectified horizontal images for generating a bird's-eye mosaic (BEM). There is therefore a need for identifying moving foreground objects in orthorectified photographic images of a surface of interest without relying on laser scanners or other cumbersome techniques, especially when existing image data may be available without accompanying laser scan data.

[Summary of the Invention]

According to the present invention, a method is described for identifying moving foreground objects in an orthorectified photographic image of a surface of interest. The method comprises the step of providing a first tile from a first orthorectified photographic image of the surface of interest. The first tile is divided into discrete regions (for example, pixels) and is associated with an absolute coordinate position and orientation relative to the surface of interest. A second tile, at least partially overlapping the first tile, is provided from a second orthorectified photographic image of the surface of interest. The second tile is likewise divided into discrete regions and associated with an absolute coordinate position and orientation relative to the surface of interest. Coincident regions of the first and second tiles (that is, regions associated with the same absolute coordinate position relative to the surface of interest) are compared. A grayscale value of the coincident region in the first tile is determined, together with the grayscale value of the coincident region in the second tile. The absolute difference between these grayscale values is computed, and a moving foreground object is identified when the absolute difference of the grayscale values of a coincident region exceeds a predetermined threshold. Thus, when the first and second tiles are compared and their grayscale values are very similar, the difference between the two is small; a small difference in grayscale values falls below the predetermined threshold, whereas a large difference between the two grayscale values exceeds the predetermined threshold and indicates the presence of a foreground moving object.

The invention therefore does not require any laser data or vertical images to detect moving objects in orthorectified photographic images. Rather, the detection of moving objects (for example, passing cars) is based on change detection in the orthorectified horizontal images.

[Embodiments]

Referring to the figures, in which like numerals indicate like or corresponding parts throughout the several views, a mobile mapping vehicle is generally indicated at 20. The mobile mapping vehicle 20 is preferably, but not necessarily, a van or car fitted with one or more cameras 22 of the type commonly used in geographic mapping applications. The cameras 22 are highly calibrated so that acquired images of the surface of interest 24, such as a road, can be geocoded with a particular position and orientation. This is typically accomplished via a GPS receiver 26 that receives position data from a plurality of satellites 28 orbiting the Earth. In addition, orientation determination equipment (for example, an INS), represented by feature 30, provides heading data for each image acquired by the cameras 22. By means of these devices, every photographic image acquired by a camera 22 is geocoded, meaning that its position as computed by the GPS receiver 26 and orientation equipment 30, together with possibly other heading information, is associated with the image as metadata. As the mobile mapping vehicle 20 traverses the road surface 24, successive images of the road surface are captured at times t, t + Δt and t + 2Δt, where Δt is the time interval between successive images. Δt is established small enough that successive images of the surface 24 overlap one another at a region 32.

As shown in Figures 2A to 2C, several cameras 22 may be used in combination on the mobile mapping vehicle 20 so as to record photographic images of the surface over a wide area and from different viewpoints. While the surface of interest 24 is being photographed, a moving foreground object 34, such as the sports car illustrated in Figures 2A to 2C, may temporarily obstruct the image of the surface 24 relative to each camera 22 at different times. Obscured images are particularly troublesome when they occur at lane merges, intersections and other related road features, owing to the importance of those features in mapping applications.

Figure 3 illustrates another example of the mobile mapping vehicle 20 encountering a moving foreground object 34. In this instance, forward-facing and rearward-facing cameras 22 photograph the same overlapping region 32 at different times. The overlapping region 32 is obstructed by the moving object 34 at only one of the two moments. When a mosaic (for example, a BEM) is assembled from a plurality of small overlapping photographic images, the best-quality imagery available for each location is needed. Where the same region 32 of the surface of interest 24 is photographed more than once, as in Figure 3, the present invention describes a method by which a moving foreground object can be identified in an image so that the better-quality imagery is used to produce the mosaic.

Figure 4 illustrates the view looking forward from the mobile mapping vehicle 20 as depicted in Figure 3. The trapezoidal dashed lines indicate the boundary of the photograph of the surface of interest 24 taken by the forward-facing camera 22. The foreground moving object 34 is captured in the upper-left quadrant of the image. Figure 5 shows the photograph after it has been orthorectified using one of the techniques described above. An orthorectified image is referred to as a tile, and in this particular example as the "second" tile 36, although that label is somewhat arbitrary. For any given time t, the orthorectified image corresponding to t is placed in a reference coordinate system according to the geocoded data embedded in it. The orthorectified images corresponding to t − Δt (the first tile) and t + Δt (the third tile) are placed in the same coordinate system, so that the overlapping portions between the images can be found.
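For illustration only, the decision rule of the summary — flag a coincident region as containing a moving foreground object when the absolute grayscale difference between the two tiles exceeds a predetermined threshold — can be sketched as follows. This sketch is not part of the patent text; the function and variable names are invented, and only the threshold value of 60 is taken from the description below.

```python
# Illustrative sketch of the claimed comparison step: two orthorectified
# tiles are compared over their coincident region, and a region is flagged
# as containing a moving foreground object when the absolute difference of
# its grayscale values exceeds a predetermined threshold.

def has_moving_object(gray_tile_a, gray_tile_b, threshold=60):
    """Return a per-region list of booleans: True where the absolute
    grayscale difference between coincident regions exceeds the threshold."""
    assert len(gray_tile_a) == len(gray_tile_b), "coincident regions must align"
    return [abs(a - b) > threshold for a, b in zip(gray_tile_a, gray_tile_b)]

# Grayscale values 86 vs 15 differ by 71 (> 60), suggesting a moving
# object; 100 vs 95 differ by 5 (<= 60), suggesting stable road surface.
flags = has_moving_object([86, 100], [15, 95])
```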

This overlap is depicted graphically in the figures. Referring again specifically to Figure 5, at this stage it is not known which regions of the second tile 36 represent the moving object and which parts of the image relate to the surface of interest 24. For clarity, the term "region" is used here to describe a defined portion or area of the overall tile. In practice, a region will be assigned to each pixel of the digital photograph; however, resolution at that fine a scale is not always necessary. Figure 6 shows the second tile 36 of Figure 5 together with the "first" tile 38 and their overlapping portion 32. The first tile 38 represents a photographic image acquired by a camera 22 at time t − Δt, that is, immediately before the time at which the photographic image giving rise to the orthorectified second tile 36 was acquired. As will be described subsequently, it is also possible for the first tile 38 and the second tile 36 to be acquired simultaneously by two different cameras 22, or by two different cameras at two different times (as suggested in Figure 3).

When the tiles 36, 38 are overlapped in the manner shown in Figure 6, the non-moving surface of interest 24 appears substantially the same in both, so that the images can be superimposed with little to no distortion. This is confirmed by the fully aligned lane markings in the overlapping region 32. The moving object 34, however, occupies different positions at times t and t − Δt, and can therefore be seen at different positions along the road in the overlapping portions of the orthorectified tiles 36, 38. The overlapping region 32 may also be referred to as the coincident region 32, because the respective regions (or pixels) of the tiles 36, 38 within it are associated with the same absolute coordinate positions relative to the surface of interest 24.

Figures 7 and 8 depict the coincident region 32 of the second tile 36 and of the first tile 38, respectively. That is, Figure 7 is a fragmentary view of the second tile 36 showing only its coincident region 32, and Figure 8 is a fragmentary view of the first tile 38 showing only its coincident region 32. In comparing the coincident regions 32 of the first tile 38 and the second tile 36, it is apparent that the road surface 24 is unobstructed in Figure 7, whereas in Figure 8 part of the road is obstructed by the moving object 34. By comparing the overlapping portions of the tiles, it is possible to determine whether an object in motion 34 is present. This is done by computing the absolute difference of the grayscale values region by region or pixel by pixel. These differences are then thresholded to obtain a black/white image referred to as the mask 40, as depicted in Figure 9. Whether the analysis proceeds pixel by pixel or over coarser regions, grayscale values are determined across the entire coincident region 32 for each of the first tile 38 and the second tile 36.

Grayscale values typically lie in the range 0 to 255, where 0 corresponds to black and 255 to white. For color photographs, a grayscale value can be computed by simply averaging the individual red, green and blue color values of each region or pixel. Under this simple averaging technique, red, blue and green color values of 155, 14 and 90 yield a grayscale value of about 86. In practice, however, the grayscale value is usually computed as a weighted sum, for example 0.2989 × R + 0.5870 × G + 0.1140 × B. Other grayscale determination techniques may of course also be used. A suitable threshold is predetermined between 0 and 255; for example, the threshold may be chosen as 60. In that case, if the absolute difference between the grayscale values in a pixel or region of the coincident regions of the first tile 38 and the second tile 36 exceeds the threshold (for example, 60), a moving foreground object 34 is identified as present in that pixel or region. As an example, if a particular pixel or region within the coincident region 32 of the first tile 38 has a grayscale value of 86 and the corresponding pixel or region of the second tile 36 has a grayscale value of 15, the absolute difference between the values equals 86 minus 15, or 71. The difference of 71 is above the exemplary threshold of 60, and it is therefore concluded that a moving foreground object 34 is depicted or captured in that particular pixel or region of the coincident region 32.

By comparing the two tiles 36, 38 in this manner, a mask 40 can be produced, which may be called the first mask 40 because it is associated with the first tile 38. When the absolute difference in grayscale values between the first tile 38 and the second tile 36 is below the predetermined threshold, the first mask 40 assigns the white grayscale value (that is, 255) to the corresponding pixel or region of the mask. However, when the computation of the absolute difference yields a number above the predetermined threshold, so that a moving foreground object 34 is identified as present in that pixel or region of the second tile 36, the corresponding pixel or region of the mask 40 is assigned the black grayscale value (that is, 0). Thus, in the example mentioned above in which the grayscale difference was 71, that particular pixel or region of the mask 40 is assigned a black grayscale value and appears black, as shown in Figure 9. By this method, the mask 40 clearly identifies the pixels or regions in which the moving foreground object 34 was captured.

Of course, this "white" and "black" convention can easily be inverted by assigning 255 rather than 0 to a pixel when the absolute difference between two corresponding pixels (or regions) exceeds the threshold. An entirely different way of explaining this feature of the invention avoids the potentially confusing use of the terms "white" and "black" altogether, and instead considers only pixel priority or importance. In that view, pixel (or region) priorities are assessed strictly on the basis of the grayscale value comparison: absolute differences falling on one side of the threshold setting (suggested, for purposes of discussion only, as "60" in the preceding example) are given priority over those falling on the opposite side of the threshold. Thus, in one approach lower values (that is, values below the threshold) signify the more important pixels, while in another approach higher values do. This is merely another way of explaining the use and implementation of mask values.

Alternatively, rather than assigning the black (0) or white (255) grayscale value to the corresponding pixel or region of the mask 40, it may be preferable to assign some intermediate grayscale value to the corresponding pixel or region of the mask, which may equal the grayscale value computed for the coincident region 32 of the first tile 38. In other words, if the corresponding pixel or region in the coincident region 32 of the first tile 38 has a grayscale value of 71 and the computed absolute difference exceeds the predetermined threshold, the corresponding region or pixel of the mask 40 is given the intermediate grayscale value 71. Under this alternative to the method described above and shown in Figure 9, the mask 40 displays grayscale values lying between the threshold (for example, 60) and 0 (or 255, when the white/black convention is inverted as previously described). In any case, it is notable that the mask 40 is produced by comparing two tiles 36, 38, with moving foreground objects 34 identified by computing the absolute differences of the grayscale values of corresponding pixels or regions in the coincident region 32.
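The grayscale conversion and mask-generation steps described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the names are invented, while the weighted-sum coefficients, the white/black convention and the exemplary threshold of 60 are taken from the text.

```python
# Sketch of mask generation: grayscale is computed from RGB with the
# weighted sum given in the text (0.2989 R + 0.5870 G + 0.1140 B), and the
# mask assigns white (255) where the tiles agree and black (0) where the
# absolute difference exceeds the threshold, i.e. where a moving
# foreground object is identified.

def to_grayscale(rgb):
    r, g, b = rgb
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

def make_mask(tile_a_rgb, tile_b_rgb, threshold=60):
    """Compare coincident pixels of two tiles and return a black/white mask."""
    mask = []
    for rgb_a, rgb_b in zip(tile_a_rgb, tile_b_rgb):
        diff = abs(to_grayscale(rgb_a) - to_grayscale(rgb_b))
        mask.append(0 if diff > threshold else 255)
    return mask
```

The intermediate-value variant described above would simply append the first tile's grayscale value instead of 0 when the threshold is exceeded.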

圖10提供使用(諸如)可用於使用以賦能軟體(enabling software)程式化之電腦處理器之實際應用中的功能模組之 方法步驟之概述。以簡化方式展示一用於使用光罩產生馬 賽克之方法流程。根據此技術,收集道路24之正糾正影 像,該等正糾正影像已藉由安裝於行動製圖車2〇上之經校 準視覺設備22而記錄。每—影像丧人有與正糾正拼片對應 之位置資料。接著藉由比較重疊拼片而產生光罩,藉此提 供關於正糾正拼片中之—致區32的每一區域或像素之品質 的資訊。接著,可使用此等光罩來產生極大所關注表面 24(諸如,地球之表面)之馬赛克。 以此方式’藉由比較重叠正糾正影像而針對每一正糾正 拼片產生光罩。然而,如下文令較全面地描述,可使用特 J44236.doc -13- 201117132 定模型化或預測技術來預測移動物體34何時將處於特定拼 片影像中且接著僅針對彼等拼片產生光罩。可藉由比較光 罩之序列而增強或改進移動物體34之偵測,如可能最佳展 示於圖11至圖13中。舉例而言,圊丨丨描繪如圖5中所展示 之正糾正第二拼片36。為了改良原始偵測結果,可模型化 移動物體34之行為。移動物體34通常屬於兩種類別:相對 於行動製圖車20處於大體上恆定速度之物體,及正追上行 動製圖車20或正被行動製圖車2〇追上之移動物體34。雖然 在行動製圖車20前方行進之第一類別中的物體34確實變得 〇 在拼接影像之頂部部分中可見,但其亦自該影像之相同部 分中消失不見。此等物體34在實際處理上並不困難,因為 當連續拼片一者重疊於另一者上以產生所得馬赛克時,其 被「拼除(tile away)」且在最終馬賽克中幾乎始終不可 見,此係因為不含有該物體之下一拼片經繪製於其上,與 屋頂瓦片極其相似。因此,第二類別(追上)中之物體“傾 向於引起較大困難。此等物體34傾向於出現於所得馬赛克 (BEM)或拼片中且幾乎始終行駛於與行動製圖車不同之 ◎ 車道中,此係歸因於追趕汽車之特定性質(見圖2a 以用於說明)。 圖12A描繪展示在時間t中在正糾正影像或拼片之四個不 , 同區域(A、B、c、D)中之光罩資料的原始偵測資料。因 此,沿水平轴,圖13中之數W、2、3·..表示關於特定 相機22之時間或訊框編號。根據圖Π,垂直軸表示水平方 向上之區域八至!)之光罩。在此處,影像之頂部部分中的 144236.doc 14 201117132 Ο Ο 黑色意謂有物體34存在於拼片之左邊(見圖11}。在影像之 底部部分中的黑色意謂在光罩產生之第一步驟(圖6)期間债 測到物體34,此偵測到原始移動障礙%。因此,在特定地 參看圖以、圖12Β及圖13時,水平訊框經劃分成四個垂 直區域Α至D。一個區域為全部黑色或全部白色。在於完 成原始障礙谓測之後具有特定值之彼區域(八至…中,藉Z 對像素之總數定P艮而選擇值(用於黑色之〇及用於白色曰之 255) g)此’為了改良強健性,基於移動通過訊框之物體 34的經模型化行為而調整資料。結果為如圖ΐ2β中所說明 之資料’圖UB隨時間推移較清楚地描繪在前⑸固訊框中 相虽快速地追上行動製圖車2()之物體34 ;及接著在訊框Μ 至約50中追上行動製圖車2〇的慢得多之移動物體μ。接下 來的10個訊框(大約)不含有㈣測移動物體,然而,訊框 7〇至大約)展示追上移動物體34之行動製圖車2〇。Figure 10 provides an overview of the method steps using a functional module in a practical application, such as a computer processor programmed with an enabling software. A flow of methods for producing a mosaic using a reticle is shown in a simplified manner. According to this technique, the positive corrected image of the road 24 is collected, which has been recorded by the calibrated visual device 22 mounted on the mobile cart 2. Each image victim has a location data corresponding to the correcting tile. 
A reticle is then created by comparing the overlapping tiles, thereby providing information about the quality of each region or pixel of the region 32 in the positive correction tile. These masks can then be used to create a mosaic of surfaces 24 of great interest, such as the surface of the earth. In this way, a reticle is produced for each positive correction tile by comparing the overlapping positive correction images. However, as described more fully below, the J44236.doc -13 - 201117132 modeling or prediction technique can be used to predict when moving objects 34 will be in a particular tile image and then only produce masks for those tiles. . The detection of moving object 34 can be enhanced or improved by comparing the sequence of reticle, as best shown in Figures 11-13. For example, 圊丨丨 depicts the positive correction of the second tile 36 as shown in FIG. To improve the original detection results, the behavior of the moving object 34 can be modeled. The moving object 34 generally falls into two categories: an object at a substantially constant speed relative to the mobile cart 20, and a moving object 34 that is being chased by the upstream cart 20 or being captured by the cart. Although the object 34 in the first category traveling in front of the action cart 20 does become visible in the top portion of the stitched image, it also disappears from the same portion of the image. Such objects 34 are not difficult to handle in practice because when a continuous tile overlaps the other to produce the resulting mosaic, it is "tile away" and is almost always invisible in the final mosaic. This is because it does not contain a piece of the object below it and is drawn on it, which is very similar to the roof tile. Therefore, objects in the second category (catch up) tend to cause greater difficulty. These objects 34 tend to appear in the resulting mosaic (BEM) or tiles and almost always travel in a different lane than the action cart. 
This is due to the specific nature of catching up with the car (see Figure 2a for illustration). Figure 12A depicts four areas in the same time (A, B, C) that correct the image or tile at time t. The original detection data of the reticle data in D). Therefore, along the horizontal axis, the numbers W, 2, 3, .. in Fig. 13 indicate the time or frame number with respect to the specific camera 22. According to the figure, the vertical The axis represents the occlusion of the area eight to !) in the horizontal direction. Here, 144236.doc 14 201117132 顶部 Ο in the top part of the image means that an object 34 exists on the left side of the tile (see Figure 11). The black color in the bottom portion of the image means that the object 34 is measured during the first step of the mask generation (Fig. 6), which detects the original movement obstacle %. Therefore, in particular, reference is made to Fig. 12 and In Figure 13, the horizontal frame is divided into four vertical areas Α to D. The area is all black or all white. It is the area with a specific value after the original obstacle is predicted (in eight to..., the value is selected by Z to the total number of pixels (for black 〇 and for white) 255) g) This 'in order to improve robustness, the data is adjusted based on the modeled behavior of the object 34 moving through the frame. The result is the data described in Figure 2β. Figure UB is more clearly depicted over time. In the front (5) solid frame, it quickly catches up with the object 34 of the action cart 2; and then catches up with the much slower moving object μ of the action cart 2 in the frame 约 to about 50. The next 10 frames (approximately) do not contain (d) the measured moving object, however, the frame 7〇 to approximately) shows the action cart 2 chasing the moving object 34.

Each mask can be described as a data set indicating which regions or pixels of an orthorectified image (tile) contain moving objects 34. The refinement of the detection data illustrated by the foregoing example of Figures 12A and 12B produces better results. These steps need not be performed for every component of the vision system; they can be performed for a particular subset. For that subset, mask data are readily available as an output of the detection itself. However, based on the detections for that subset, together with knowledge of the layout of the mobile mapping vehicle's vision system, mask data can also be generated for every orthorectified image of all of the components. The reason is that the different components of the recording vision system (including the cameras 22) are not mounted at the same location on the mobile mapping vehicle 20. This means that, at a given time t, an object 34 on the road surface 24 can be seen at different positions in the several perspective images recorded by the different components of the vision system. Therefore, given knowledge of the position and movement on the road surface 24 of an object 34 in motion for at least one component 22 of the vision system, it can be predicted where, and whether, the moving object 34 will be visible in the images of the other components of the vision system, and mask data can be generated for those components as well.

As an example, the subset of the cameras 22 can be the two side cameras (left/right), with the masks produced by differencing in orthorectified space for only those two cameras. Based on these results, masks can be generated for the other cameras (for example, the front and rear cameras), provided the moving objects 34 conform to the following assumptions: for each component of the vision system, if an object in motion is visible in the orthorectified image at time t1 and at time t2, it is expected to be visible for all t with t1 < t < t2; and an object that becomes visible in one portion of the orthorectified image at time t1 is expected to move out of visibility in the opposite portion of the image at a later time t2.

Thus, an object 34 that becomes visible in the right side camera 22 generates a mask for the right front camera so that the latter is used. Owing to the difference in viewing angle, the portion of the road 24 that is blocked in the side camera 22 remains visible in the front camera 22, so the image from the front camera can be used. Once the catching-up car also becomes visible in the left part of the right camera's image and the right part becomes unusable again, a mask can be generated for the front camera so that it is not used in this situation (because the obstruction 34 will become increasingly visible there). Because the heading of each camera 22, and of the cameras in the subset, is known, masks for the other cameras can be produced based solely on the corresponding angle in the subset cameras' masks. As long as the common portion between frames in orthorectified space is large enough, it is possible to generate an unambiguous mask for every camera. Using only a selected subset, however, greatly increases processing speed while only slightly degrading the results; the more closely the behavior of the obstructions conforms to the assumptions stated above, the smaller the observed loss in performance.

As stated above, a mask can be interpreted as a weighted image. Black (that is, a grayscale value of 255) signifies the lowest priority, and white the highest priority. The first two steps of the mask-generation flow produce only black or white values. As previously suggested, the third step can produce grayscale values smaller than 255, thereby giving different priorities to different cameras on the basis of the subset cameras' masks and the camera angles. By these methods it is possible to optimize the generation of the orthorectified tiles 36, 38 obtained from the perspective images so as to improve the visibility of the road surface and the road shoulder. Because the same point on the surface of interest 24 can be seen from two different cameras 22 at the same time or at different times (or from the same camera 22 at different times), improved visibility can be achieved using the concepts of this invention.
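The first, black-or-white stage of mask generation amounts to thresholded grayscale differencing over coincident regions, as the claims of this document also spell out. A minimal sketch follows; the threshold of 60 is taken from the claims, while the plain RGB average is an assumption (the claims only say the color channels are "blended").

```python
def grayscale(rgb):
    """Blend R, G and B into one grayscale value.

    The claims state only that the red, green and blue values of a
    region are blended; a plain average is an assumption used here.
    """
    r, g, b = rgb
    return (r + g + b) / 3


def binary_mask(regions_a, regions_b, threshold=60):
    """First stage of mask generation for two overlapping tiles.

    regions_a / regions_b: grayscale values of coincident regions of
    the first and second orthorectified tiles.  A region is marked
    'black' (moving object assumed, lowest priority in the document's
    convention) when the absolute grayscale difference exceeds the
    threshold, and 'white' (highest priority) otherwise.
    """
    return ['black' if abs(a - b) > threshold else 'white'
            for a, b in zip(regions_a, regions_b)]
```

Because the comparison happens in orthorectified space, a static road surface produces near-identical grayscale values in both tiles, so only moving objects trip the threshold.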

Figure 14 is another flow chart illustrating this technique. A first tile, provided from a first orthorectified image of the surface of interest, is read (step 42) together with the first tile's mask (step 44), which identifies any known moving foreground objects in the first tile. For purposes of discussion, it can be assumed that the first tile, together with its first mask, constitutes an existing portion of a mosaic of the Earth's surface (such as a BEM). In step 46, a second tile, representing a new orthorectified image that at least partially overlaps the first tile, is read by the system together with its position data and projected onto the mosaic frame, as indicated at step 48. Likewise, in step 50, the second tile's mask is projected onto a temporary mask frame. At step 52 the camera distance of the temporary tile is computed: the Euclidean distance measured from the focal point of the camera 22 to the pixel or region under consideration. The coincident region 32 in which the first and second tiles overlap is then compared region by region, or possibly pixel by pixel. If the first, destination tile has an empty portion, the corresponding region or pixel from the second, temporary tile is used; this is shown in query 54 and step 56. If the grayscale value of the corresponding pixel or region in the temporary mask is greater than that of the corresponding pixel or region in the destination mask, that pixel or region of the first tile is replaced with the pixel or region of the temporary tile; this is shown in query 58. If the grayscale values are equal, or lie within a predetermined range of one another (as suggested earlier), another query is made at 62 to determine whether the camera distance of the second, temporary pixel is smaller than that of the first, destination pixel. If the temporary pixel was acquired from a closer distance, then, in accordance with step 56, the second, temporary pixel (or region) is copied to, that is, replaces, the first, destination pixel (or region). The mask value is then updated (step 64), as is the camera distance (step 66). At 68 a query is made as to whether the last region or pixel of the coincident region has been considered; if not, method steps 52 through 66 are repeated. Once the last pixel (or region) of the coincident region has been analyzed in this way, the updated tile is saved in step 70 together with the updated mask, and the updated tile and updated mask become part of the mosaic (BEM).

Referring to Figures 15 through 18C, the flow chart of Figure 14 is represented graphically. In these examples, the first tile 38 comes from a forward-pointing camera 22 and the second tile 36 from an angled camera 22. It must be understood, however, that the particular camera orientations shown in Figure 15 are strictly for illustrative purposes. The orthorectified first tile 38 is shown in Figure 16A, and the orthorectified second tile 36 in Figure 17A. The mask 40 produced for the first tile 38 is shown in Figure 16B, while the mask 72 for the second tile 36 is shown in Figure 17B. In this simplified example, a moving object 34 is detected only in the second tile 36 (Figure 17A), and its corresponding mask 72 reflects the identified moving object 34. Both the tile and mask images are preferably stored in AVI files. As shown in Figure 16B, there is nothing to be masked in the tile 38 of Figure 16A, because no moving object was detected in the perspective image; the mask 40 is therefore entirely white. The second tile 36 and its mask 72 are shown in Figures 17A and 17B. Next, the tiles 36, 38 are overlapped without masks, as shown in Figure 18A, so that when the second tile 36 overlays the first tile 38, the moving object 34 obscures part of the road-surface image seen clearly in Figure 16A. The masks 40, 72 are then shown combined in Figure 18B. If comparison of the coincident (that is, overlapping) regions of the first tile 38 and the second tile 36 showed the grayscale value in the second mask 72 to be greater than that in the first mask 40, the coincident region from the second tile 36 would be used to replace that of the first tile 38. In this particular example, however, the opposite holds: comparison of the corresponding coincident regions of the two masks shows that the grayscale value of the second mask 72 is lower than that of the corresponding region of the first mask 40. The underlying portion of the first tile 38 image is therefore used, as represented by the resulting Figure 18C.

Figure 18C nevertheless shows the moving object 34, because the second tile 36 contains image data, appearing in a necessary portion of the resulting mosaic, for which no corresponding region exists in the first tile. Thus, where no corresponding pixel or region exists in the first tile, the second tile's data are used even where they contain a known moving object 34. Where comparison of the coincident regions indicates that the grayscale values of the first and second masks are substantially equal, the system evaluates the distances at which the respective first and second images were acquired. The photographic distance here denotes the distance between the image in the orthorectified tile and the focal point of the camera 22. The image with the smallest photographic distance is considered the more reliable, and its imagery is therefore given priority in the overlap region.

Once the overlay is complete, the mosaic mask is updated together with the photographic distances recorded in the mosaic, so that in any subsequent stitching operation a new orthorectified tile is compared against the recorded mask data. In this way the orthorectified tiles are combined into a mosaic whose overlap regions are selected on the basis of image content, specifically with regard to the presence of moving objects 34.

Thus, through the techniques of this invention, moving objects 34 are identified and masks are then generated from the orthorectified tiles; the masks can be used to determine which regions of overlapping tiles should be given priority when producing a mosaic of a large surface of interest 24 such as the Earth. Under the prior art, indiscriminately overlaying orthorectified tiles can give less useful results, because an obstruction 34 may cover part of the surface of interest 24. According to the present invention, however, the use of masks helps select the best available image, the one carrying the most relevant information about horizontal features such as lane dividers, lane corridors, gutter placement and the like. The use of masks therefore helps improve the legibility of the resulting mosaic (BEM). And because these masks can be generated strictly on the basis of the compared image data, no additional imaging or laser-data techniques are needed to identify the moving objects 34; only a pair of overlapping perspective (orthorectified) images is required to produce a bird's-eye mosaic (BEM). Moving objects 34 are detected by grayscale differencing of the common regions or pixels of multiple orthorectified tiles. In contrast to change detection on perspective frames, the detection is performed in orthorectified space, so the method can directly distinguish the background from a moving object 34.

Figure 19 shows two alternative applications of the invention, in which the orthorectified tiles are produced from images acquired by cameras 122, 222 carried on aerial vehicles such as a satellite 120 or an aircraft 220. In these cases as well, moving foreground obstructions 134, 234 can create obstructions in the resulting imagery. Through direct application of the concepts described herein, it is possible to improve the image quality of the mosaics obtained from such aerial photographs.
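The region-by-region decision chain of Figure 14 (queries 54, 58 and 62) can be sketched roughly as follows. The dictionary layout is an assumption made for the example; the decision order itself (empty destination first, then higher mask value, then smaller camera distance on a tie) follows the flow chart as described.

```python
def merge_region(dest, temp):
    """Decide, for one coincident region, whether the mosaic keeps the
    destination tile's data or takes the temporary (new) tile's data.

    dest / temp are dicts with keys:
      'pixel'    - image data for the region (None if dest is empty)
      'mask'     - mask/priority grayscale value for the region
      'distance' - camera distance (focal point to region)
    Returns the updated destination region (steps 56, 64 and 66 of
    Figure 14 collapse into copying the whole temporary record).
    """
    take_temp = (
        dest['pixel'] is None                        # query 54: empty destination
        or temp['mask'] > dest['mask']               # query 58: higher mask value
        or (temp['mask'] == dest['mask']             # query 62: tie on mask value,
            and temp['distance'] < dest['distance'])  # closer camera wins
    )
    return dict(temp) if take_temp else dest
```

Looping this function over every region of the coincident area, then saving the updated tile and mask, corresponds to steps 52 through 70 of the flow chart.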
The foregoing invention has been described in accordance with the relevant legal standards; the description is therefore exemplary rather than limiting in nature. Variations and modifications of the disclosed embodiments may become apparent to those skilled in the art and fall within the scope of the invention. Accordingly, the scope of legal protection afforded this invention can only be determined by studying the following claims.

[Brief Description of the Drawings]

Figure 1 is a highly simplified illustration of a mobile mapping vehicle traversing a road surface and using suitable photographic equipment to acquire a series of sequential images, which are geocoded using GPS positioning data together with orientation data obtained from suitable telemetry equipment;

Figures 2A-2C illustrate a time-sequenced view in which a mobile mapping vehicle according to the invention is overtaken by a moving foreground object, depicted in this case as a sports car;

Figure 3 shows a time-lapse sequence of the mobile mapping vehicle following a moving foreground obstruction that partially obscures the image of the desired road surface acquired by one (forward-facing) camera but does not obscure the image of the same surface acquired by a different (rearward-facing) camera;

Figure 4 is a simplified perspective view as seen from a forward-facing camera on top of a mobile mapping vehicle such as that depicted in Figure 3, in which a moving foreground obstruction appears in the left lane ahead and the broken lines represent the boundary of the photographic image acquired by the forward camera;

Figure 5 represents an orthorectified view of the photographic image of Figure 4, in which the obstruction, shown as the blackened portion in the upper left corner, obscures the view of the road surface;

Figure 6 shows the tile of Figure 5 together with the preceding first tile, arranged to show how a moving foreground obstruction can change relative position from one tile to the next and can create a view obstruction in one tile while creating none in another;

Figure 7 depicts the coincident region of the second tile as shown in Figure 5;

Figure 8 is a view of the coincident region of the first tile of Figure 6, in which the moving obstruction is shown blocking a portion of the road surface;

Figure 9 depicts a mask for the coincident region of the first tile (Figure 8);

Figure 10 is a flow chart describing the generation of a mosaic using the method of the invention;

Figure 11 represents an orthorectified tile similar to that of Figure 5, subdivided into four columns (A to D) for purposes of post-processing image refinement;

Figure 12A is a time diagram of raw data collected with the invention, in which the rows represent the subdivided regions (A to D) of each tile and the columns represent sequential tiles or images (t, t+1, and so on);

Figure 12B is the time diagram of Figure 12A, illustrating how behavior modeling can be used to improve the detection of foreground moving objects;

Figure 13 is an enlarged view of the region indicated at 13 in Figure 12A;

Figure 14 is a flow chart depicting the sequence of steps for using masks to improve the visibility of the road surface in stitched orthorectified images taken along a road;

Figure 15 is a simplified top view of a mobile mapping vehicle fitted with a plurality of cameras, two of which simultaneously photograph an overlapping area on the surface of interest;

Figure 16A depicts a first tile captured by the forward-pointing first camera of the mobile mapping vehicle of Figure 15;

Figure 16B is the mask produced for the first tile of Figure 16A;

Figure 17A is an orthorectified second tile acquired by an angled second camera of the mobile mapping vehicle of Figure 15;

Figure 17B represents the second mask produced for the second tile of Figure 17A;

Figure 18A represents the stitching of the first and second tiles, in which the overlapping second tile obscures part of the visible road surface owing to the moving foreground obstruction;

Figure 18B depicts the comparison between the first and second masks, in which the mask priorities are evaluated and used to determine which portions of the first and second tiles contain the more accurate data for the surface of interest;

Figure 18C is a view like that of Figure 18A; however, Figure 18C shows a mosaic produced using the improved data obtained from the mask comparison; and

Figure 19 is a highly simplified view illustrating how the concepts of the invention can be applied to other image acquisition and mosaicking, in which the orthorectified tiles may be derived from satellite images and/or aerial photographs.

[Description of Reference Numerals]

13 region
20 mobile mapping vehicle
22 camera
24 road surface
26 GPS receiver
28 satellite
30 orientation device
32 overlap region
34 moving foreground object
36 second tile
38 first tile
40 mask
72 mask
120 satellite
122 camera
134 moving foreground obstruction
220 aircraft
222 camera
234 moving foreground obstruction
A region
B region
C region
D region

Claims (1)

VII. Claims

1. A method for identifying a moving foreground object in an orthorectified photographic image of a surface of interest, the method comprising the steps of:
providing a first tile from a first orthorectified image of the surface of interest, the first tile being divided into discrete regions and associated with an absolute coordinate position and orientation relative to the surface of interest;
providing, from a second orthorectified image of the surface of interest, a second tile at least partially overlapping the first tile, the second tile being divided into discrete regions and associated with an absolute coordinate position and orientation relative to the surface of interest;
comparing coincident regions of the first tile and the second tile that are associated with the same absolute coordinate position relative to the surface of interest;
determining the grayscale value of the coincident region in the first tile;
determining the grayscale value of the coincident region in the second tile; and
computing the absolute difference between the grayscale values of the coincident regions of the respective first and second tiles;
characterized by identifying a moving foreground object when the absolute difference between the grayscale values of the coincident regions exceeds a predetermined threshold.

2. The method of claim 1, further comprising the steps of: generating a first mask for the first tile; dividing the first mask into discrete regions corresponding to the regions of the first tile; assigning a white grayscale value to the corresponding region of the first mask when the absolute difference of the grayscale values of the coincident region in the first tile is below the predetermined threshold; and assigning a non-white grayscale value to the corresponding region of the first mask when the absolute difference of the grayscale values of the coincident region in the first tile exceeds the predetermined threshold.

3. The method of claim 2, wherein the step of assigning a non-white grayscale value comprises assigning a black grayscale value to the corresponding region of the first mask when the absolute difference of the grayscale values of the coincident region in the first tile exceeds the predetermined threshold.

4. The method of claim 2, wherein the step of assigning a non-white grayscale value comprises transferring the value of the coincident region in the first tile to the corresponding region of the first mask when the absolute difference of the grayscale values of the coincident region in the first tile exceeds the predetermined threshold.

5. The method of any one of claims 1 to 4, wherein the steps of determining grayscale values comprise blending the red, green and blue color values represented in the coincident regions of the respective first and second tiles.

6. The method of any one of claims 1 to 4, wherein the step of identifying a moving foreground object comprises setting a predetermined threshold between [...].

7. The method of any one of claims 1 to 4, wherein the step of identifying a moving foreground object comprises setting a predetermined threshold value between about 60 and 255.

8. The method of any one of claims 2 to 4, further comprising the steps of: generating a second mask for the second tile; dividing the second mask into discrete regions corresponding to the regions of the second tile; assigning a low-priority grayscale value to the corresponding region of the second mask when the absolute difference of the grayscale values of the coincident region in the second tile is below the predetermined threshold; assigning a high-priority grayscale value to the corresponding region of the second mask when the absolute difference of the grayscale values of the coincident region in the second tile exceeds the predetermined threshold; and modeling the behavior of a moving foreground object between the first mask and the second mask.

9. The method of claim 8, further comprising the steps of: providing a third mask associated with a third tile at least partially overlapping the second tile; and predicting the position of a moving foreground object based on the steps of modeling its behavior.

10. The method of claim 9, wherein the step of predicting the position of a moving foreground object further comprises overriding the assigning step by appending a high-priority grayscale value to a region of the second mask even where the absolute difference of the grayscale values of the coincident region in the second tile falls below the predetermined threshold.

11. The method of claim 9, wherein the step of predicting the position of a moving foreground object further comprises overriding the assigning step by appending a low-priority grayscale value to a region of the second mask even where the absolute difference of the grayscale values of the coincident region in the second tile exceeds the predetermined threshold.

12. The method of any one of claims 1 to 4, wherein the steps of providing the respective first and second tiles comprise mounting at least one camera on a mobile vehicle that moves relative to the surface of interest.

13. The method of any one of claims 1 to 4, wherein the steps of associating the first tile and the second tile comprise imprinting coordinate data from a GPS satellite receiver on the respective first and second tiles.

14. The method of any one of claims 1 to 4, wherein the steps of providing the respective first and second tiles comprise acquiring the first image and the second image at different times.

15. The method of any one of claims 1 to 4, wherein the steps of providing the respective first and second tiles comprise acquiring the first image and the second image at the same time.
TW98137967A 2009-11-09 2009-11-09 Method for identifying moving foreground objects in an orthorectified photographic image TW201117132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98137967A TW201117132A (en) 2009-11-09 2009-11-09 Method for identifying moving foreground objects in an orthorectified photographic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98137967A TW201117132A (en) 2009-11-09 2009-11-09 Method for identifying moving foreground objects in an orthorectified photographic image

Publications (1)

Publication Number Publication Date
TW201117132A true TW201117132A (en) 2011-05-16

Family

ID=44935149

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98137967A TW201117132A (en) 2009-11-09 2009-11-09 Method for identifying moving foreground objects in an orthorectified photographic image

Country Status (1)

Country Link
TW (1) TW201117132A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI776025B (en) * 2018-10-09 2022-09-01 先進光電科技股份有限公司 Panorama image system and driver assistance system


Similar Documents

Publication Publication Date Title
US9230300B2 (en) Method for creating a mosaic image using masks
US11080911B2 (en) Mosaic oblique images and systems and methods of making and using same
US11614338B2 (en) Method and apparatus for improved location decisions based on surroundings
NL2010463C2 (en) METHOD FOR GENERATING A PANORAMA IMAGE
JP5714940B2 (en) Moving body position measuring device
US20120155744A1 (en) Image generation method
US20100086174A1 (en) Method of and apparatus for producing road information
JP6060682B2 (en) Road surface image generation system, shadow removal apparatus, method and program
JP6833668B2 (en) Image feature enhancement device, road surface feature analysis device, image feature enhancement method and road surface feature analysis method
GB2557398A (en) Method and system for creating images
CN106464847A (en) Image synthesis system, image synthesis device therefor, and image synthesis method
JP2012503817A (en) Method and composition for blurring an image
JP2011170599A (en) Outdoor structure measuring instrument and outdoor structure measuring method
JP2018205264A (en) Image processing apparatus, image processing method, and image processing program
JP2009140402A (en) INFORMATION DISPLAY DEVICE, INFORMATION DISPLAY METHOD, INFORMATION DISPLAY PROGRAM, AND RECORDING MEDIUM CONTAINING INFORMATION DISPLAY PROGRAM
JP7315216B2 (en) Corrected Distance Calculation Device, Corrected Distance Calculation Program, and Corrected Distance Calculation Method
CN110023988A (en) For generating the method and system of the combination overhead view image of road
NL2016718B1 (en) A method for improving position information associated with a collection of images.
CN111899512B (en) Vehicle trajectory extraction method, system and storage medium combined with skyline observation
WO2011047732A1 (en) Method for identifying moving foreground objects in an orthorectified photographic image
TW201117132A (en) Method for identifying moving foreground objects in an orthorectified photographic image
TW201117131A (en) Method for creating a mosaic image using masks
JP7030732B2 (en) Same structure detection device, same structure detection method and same structure detection program
Enami et al. Image matching robust to changes in imaging conditions with a car-mounted camera
WO2012089262A1 (en) Method and apparatus for use in forming an image