
WO2016129091A1 - Object detection system and object detection method - Google Patents


Info

Publication number
WO2016129091A1
WO2016129091A1 (PCT/JP2015/053888)
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
detection
information
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2015/053888
Other languages
English (en)
Japanese (ja)
Inventor
智明 吉永
信尾 額賀
裕樹 渡邉
敬介 藤本
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to PCT/JP2015/053888 priority Critical patent/WO2016129091A1/fr
Publication of WO2016129091A1 publication Critical patent/WO2016129091A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B61RAILWAYS
    • B61LGUIDING RAILWAY TRAFFIC; ENSURING THE SAFETY OF RAILWAY TRAFFIC
    • B61L25/00Recording or indicating positions or identities of vehicles or trains or setting of track apparatus
    • B61L25/06Indicating or recording the setting of track apparatus, e.g. of points, of signals

Definitions

  • the present invention relates to an object detection apparatus and an object detection method for detecting an object from an image.
  • Patent Document 1 discloses that a video image taken by the video camera 1 while the vehicle is running is recorded on the video tape 2, reproduced on the video deck 4, and automatically recognized by the road sign recognition device 10 (a personal computer), so that road signs are detected in real time.
  • Patent Document 1 assumes that only a sign having a fixed shape is to be detected. For this reason, a process of estimating an approximate shape from the existence probability of the contour line and recognizing it as a sign is performed. However, assets such as signs actually installed outdoors are often discolored or deformed by various conditions such as the weather, and detection accuracy is therefore lowered.
  • In one aspect, an object detection system includes: a camera provided on a moving body; a first storage unit that stores an image and image position information, which is information on the position where the camera captured the image;
  • a second storage unit that stores, as information on the target object to be detected, a target image feature quantity, which is the image feature quantity of the target object, together with its installation position information;
  • an image selection unit that selects an image as a selected image based on the image position information and the installation position information, and determines a specified region in the selected image as a detection region;
  • and a detection unit that performs object detection by determining whether or not the object exists in the detection region, using the similarity between the image feature quantity extracted from the detection region and the target image feature quantity; and an output unit that notifies the user when it is determined that the object does not exist.
  • The corresponding method includes a fourth step of performing object detection by making a determination using the similarity between the image feature quantity extracted from the detection area and the target image feature quantity, and a step of notifying the user when it is determined that the target does not exist.
  • FIG. 1 is a diagram showing the configuration of the present invention.
  • The camera 110 is installed on a moving body and acquires images ahead of and behind the moving body along the track (or road).
  • the GPS receiver 120 is a receiver that acquires GPS information, and can thereby acquire position information of a moving moving body.
  • The video storage unit (first storage unit) 130 is connected to the camera 110 and the GPS receiver 120, acquires video from the camera and GPS information from the GPS receiver 120, and records them. The camera 110 and the GPS receiver 120 attach time stamps to the acquired video and GPS information, respectively, before transmitting them to the video storage unit 130. The video storage unit synchronizes the two streams by their time stamps and records position information for each frame image constituting the video.
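As an illustrative sketch (not part of the patent), the time-stamp synchronization described above could pair each frame with its nearest GPS fix; the function name and data layout are assumptions:

```python
from bisect import bisect_left

def tag_frames_with_position(frame_times, gps_fixes):
    """Attach the nearest GPS fix (by time stamp) to each frame.

    frame_times: sorted frame time stamps in seconds
    gps_fixes:   sorted list of (timestamp, lat, lon) tuples
    Returns a list of (frame_time, lat, lon).
    """
    gps_times = [t for t, _, _ in gps_fixes]
    tagged = []
    for ft in frame_times:
        i = bisect_left(gps_times, ft)
        # consider the fixes just before and just after the frame time
        candidates = [c for c in (i - 1, i) if 0 <= c < len(gps_fixes)]
        best = min(candidates, key=lambda c: abs(gps_times[c] - ft))
        _, lat, lon = gps_fixes[best]
        tagged.append((ft, lat, lon))
    return tagged
```

A real implementation might interpolate between fixes rather than snap to the nearest one, but nearest-fix matching already gives per-frame positions at typical GPS rates.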
  • the asset information DB 144 stores asset images and installed location information.
  • the asset feature DB 145 stores search image feature amounts extracted from asset images. In the present specification, these may be collectively referred to as a second storage unit.
  • The video confirmation unit 140 detects assets in the video stored in the video storage unit 130, confirms the status of each asset, and issues notifications. Each part constituting the video confirmation unit is described below.
  • The process selection unit 141 collates the asset information DB 144 against the position information of the video stored in the video storage unit 130, selects the frame images to be processed, and selects the content of the processing to be performed by the detection unit 142.
  • the detection unit 142 performs image recognition processing for detecting assets from the video stored in the video storage unit according to the selection made by the processing selection unit 141.
  • The position estimation unit 143 estimates how far the detected asset is from the camera. By combining the estimated relative position of the asset with the GPS information, the actual position where the detected asset is installed is estimated.
  • the detection result is output to the determination unit 147 and stored in the detection result DB 146. In the detection result DB 146, information on the detected asset is recorded.
  • The determination unit 147 compares the asset position obtained by the detection unit 142 and the position estimation unit 143 with the asset information DB 144 to determine whether the detected asset is installed at the correct position. If an abnormality is determined, such as a missing asset, the result is output to the notification unit 148.
  • the notification unit 148 notifies the user by using voice, video, or the like when the determination unit 147 finds an abnormality.
  • the video is acquired from the video storage unit 130 or the detection result DB (third storage unit) 146 and displayed.
  • FIG. 2 is an outline of a moving body 200 in which the object detection device of the present invention is installed.
  • the moving body is described as an example of a train, but it goes without saying that the present invention can also be applied to a vehicle running on a road.
  • FIG. 2A shows an example in which all components are installed in a moving body.
  • The video confirmation unit confirms the status of the asset and notifies the person aboard the moving body of the result.
  • Only the abnormal state and the video at that time are notified to the management center through a network such as the Internet. In this case, since only the video at the time of the abnormality and the determination result need to be sent to the management center, the real-time data transmission volume can be kept small.
  • FIG. 2B shows an example in which the video confirmation unit is installed on a server in the management center.
  • the moving body has only a camera, a GPS, and a video storage unit, and transmits video to the video confirmation unit in the management center using real-time communication via the Internet or data transfer means such as a USB cable.
  • asset management is performed by the video confirmation unit in the management center in real time or offline.
  • FIG. 3 is a diagram showing a physical device configuration of the video confirmation unit.
  • The video confirmation unit is realized as software that runs on one or more PCs or servers.
  • an I/F for input devices such as a keyboard and mouse
  • a network I/F that performs image acquisition and result notification over the network
  • a CPU that performs the detection processing and the position estimation processing
  • a storage unit that stores data such as the asset information DB and the asset feature DB
  • a memory that holds the programs for each process constituting the video confirmation unit and manages temporary data; these components are connected via a bus.
  • FIG. 4 is a flowchart showing the processing flow of the entire video confirmation unit.
  • a certain sequence of videos is acquired from the video storage unit.
  • The asset information DB is queried as to whether any asset to be detected exists in the section of actual positions where the sequence was captured.
  • If no asset exists in the section, no processing is performed and the flow returns to S401 to acquire the next video sequence. If assets do exist in the section covered by the video sequence, the flow proceeds to the next step.
  • In S404-407, detection processing is performed for every asset that should exist in the video.
  • The processing for asset n is selected. If the detection target is a sign, an area on the right side of the screen is set as the detection area; if it is a track branch, an area around the track is set. In this way the area of the frame image to be examined is determined.
  • detection processing suitable for the selected asset n is performed.
  • the process returns to S404 to perform the same process for the next asset.
  • In S408 it is determined whether the asset to be detected was found by the processing of S403-407; if it was, the position of the asset is estimated in S409. If no asset was found, the flow proceeds to S410.
  • the installation position in the world coordinate system can be calculated by estimating the installation position of the asset from the image and the speed information of the moving body, and combining the GPS information and the information on the moving direction of the moving body.
  • It is determined whether an asset exists at the position registered in the asset information DB, or within a specific section around it.
  • The length of the specific section here can be determined by multiplying the error of the GPS or camera time measurement by the current moving speed.
  • Although detection processing time can be shortened by making this section as short as possible, it must be set wide enough that detections are not missed because of these errors.
  • Alternatively, the length of the specific section may be determined from the error in installation position that can occur during installation. This value is set based on past maintenance experience, so that an asset can be detected even if its installation position is slightly off.
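The two ways of sizing the search section described above can be sketched as follows; all numeric defaults are illustrative assumptions, not values fixed by the patent:

```python
def search_sections(center_m, gps_error_m=2.0, time_error_s=0.1,
                    speed_mps=20.0, install_error_m=10.0):
    """Return (specific, peripheral) search ranges, in metres along the
    track, centred on the position registered in the asset information DB.

    The specific section covers the measurement error (GPS error plus
    time-stamp error multiplied by the current speed); the peripheral
    section covers the assumed installation error.
    """
    half_specific = gps_error_m + time_error_s * speed_mps
    specific = (center_m - half_specific, center_m + half_specific)
    peripheral = (center_m - install_error_m, center_m + install_error_m)
    return specific, peripheral
```

With the defaults above, an asset registered at the 100 m mark would be searched for in roughly the 96-104 m range, widening to 90-110 m when installation error is considered.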
  • the process returns to S401.
  • the process proceeds to S411 to notify the user that there is no asset.
  • The object detection method described in the present embodiment includes a first step of accumulating an image captured by the camera 110 provided on the moving body 200 together with image position information, which is information on the position where the camera captured the image (S401),
  • a second step of storing the target image feature quantity and installation position information, and a third step of selecting an image as a selected image based on the image position information and the installation position information and determining a designated area in the selected image as a detection area (S403-407),
  • a fourth step of performing object detection by determining whether or not the object exists in the detection area, using the similarity between the image feature quantity extracted from the detection area and the target image feature quantity (S408),
  • and a fifth step of notifying the user when it is determined that the object does not exist (S410-411).
  • Thus, the detection accuracy can be improved by using the position information to identify and detect only the assets in the current section, instead of matching against all assets in the asset information DB. Furthermore, since the frame images used for asset detection and the detection areas within them can be limited, processing time is reduced. By preparing image feature quantities extracted from various variations of each asset in the asset feature DB 145, even the same sign A can be detected robustly despite changes such as tilting due to installation problems or differences in the described contents.
  • FIG. 5 is an example of a table 500 held by the asset information DB. It stores information about what is installed as an asset and where. It also stores the date on which the installation was confirmed and the worker who confirmed it.
  • FIG. 6 is an example of a frame image indicating the processing region and the detection target determined by the processing selection unit (image selection unit) 141.
  • The video frames on which asset detection is performed are limited to those acquired near the position where the asset to be detected is located. Further, the detection target area is limited per asset: for example, the area at the right edge of the track for a sign, and the area on the track surface for a branch.
  • FIG. 7 is a diagram showing details of processing of the detection unit.
  • FIG. 7A shows the search window used in the detection process; the search window 709 is moved over the detection area 708 designated by the process selection unit using the sliding window method. The size and aspect ratio of the search window are also determined by the process selection unit according to the asset.
  • the search window extraction unit 701 cuts out partial images using each search window 709.
  • the feature amount extraction unit 702 extracts image feature amounts from the extracted partial images.
  • an image feature amount extraction process corresponding to the asset to be detected designated by the process selection unit 141 is performed.
  • As the image feature quantity, an intensity distribution in the luminance gradient direction, a histogram in the RGB or CIELab color system, or the like is extracted.
  • The same feature quantity extraction method is used for assets whose detection target areas lie close together in the image: for example, the same feature quantity F1 for signs A and B, and the same feature quantity F2 for branches and sleepers on the track.
  • When signs A and B are installed side by side, it is then unnecessary to extract different image feature quantities from the same region multiple times, which makes the detection process more efficient.
  • the image search unit 703 determines the presence / absence of an asset by comparing this with the feature amount of the specific asset in the asset feature DB 145.
  • Image feature quantities extracted from images of a plurality of instances of sign A are recorded; an image search is performed against them, and the distance value to the most similar image is taken as the similarity to sign A. The smaller the distance value, the higher the similarity.
  • The search result integration unit 704 integrates the similarity results between the image feature quantities of sign A and all the areas obtained with the plurality of search windows 709; if any search window has a distance value at or below a certain threshold, it is determined that sign A is present in that window. Conversely, if no image resembles sign A's entries in the asset feature DB 145, it is determined that sign A is absent, and the user is notified via the notification unit 148.
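A minimal sketch of the sliding-window search and distance-based matching described in this passage is given below; the colour-histogram feature, window size, stride, and threshold are illustrative stand-ins for the feature quantities the text names, not values from the patent:

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Per-channel intensity histogram, L1-normalised (a simple stand-in
    for the gradient/colour feature quantities named in the text)."""
    hist = np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
        for c in range(patch.shape[-1])
    ]).astype(float)
    return hist / max(hist.sum(), 1.0)

def detect_in_region(region, templates, win=(32, 32), stride=16,
                     threshold=0.5):
    """Slide a window over the detection region; report presence if any
    window's feature lies within `threshold` L1 distance of any template
    feature.  Returns (found, best_distance)."""
    h, w = region.shape[:2]
    best = float("inf")
    for y in range(0, h - win[0] + 1, stride):
        for x in range(0, w - win[1] + 1, stride):
            f = color_histogram(region[y:y + win[0], x:x + win[1]])
            for t in templates:
                best = min(best, float(np.abs(f - t).sum()))
    return best <= threshold, best
```

Keeping several templates per sign mirrors the idea of storing feature quantities from multiple variations of the same asset in the asset feature DB.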
  • The object detection system described in the present embodiment includes the camera 110 provided on the moving body 200; the GPS 120, which acquires image position information, i.e., information on the position where the camera 110 captured an image; the first storage unit 130, which stores the image and the image position information; and the second storage unit (144, 145), which stores, as information on the object to be detected, the target image feature quantity and its installation position information.
  • It further includes the image selection unit 141, which selects an image as a selected image based on the image position information and the installation position information and determines a specified area in the selected image as a detection area;
  • a detection unit that performs object detection by determining whether the object exists in the detection area, using the similarity between the image feature quantity extracted from the detection area and the target image feature quantity; and an output unit that notifies the user when it is determined that the object does not exist.
  • Thus, the detection accuracy can be improved by using the position information to identify and detect only the assets in the current section, instead of matching against all assets in the asset information DB. Furthermore, since the frame images used for asset detection and the detection areas within them can be limited, processing time is reduced. In addition, by preparing image feature quantities extracted from various variations of each asset in the asset feature DB 145, even the same sign A can be detected robustly despite changes such as tilting due to installation problems or differences in the described contents.
  • FIG. 8 is an image diagram showing an example of position estimation processing in the position estimation unit.
  • FIG. 8A shows an example of the result of asset detection by the detection unit. Signs and track branches are detected. Now, since the position of the moving body (the position of the camera that captured the image) is known from the GPS information, in order to know the position of this asset, the relative distance of the asset to the camera may be estimated.
  • FIG. 8B is a diagram showing an image when the installation position of the asset detected by the image processing is estimated.
  • The distance Z from the camera to the asset in the world coordinate system can be calculated from the vertical position y in the image coordinate system.
  • FIG. 8C is an example in which broken lines are drawn on the image at constant intervals of the distance Z. The distance from the camera can thus be estimated from the lower-end position of the detected sign. Furthermore, performing position estimation with a laser range sensor, a stereo camera, or the like can reduce errors compared with distance estimation from a single image.
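Under a flat-ground, pinhole-camera assumption (which the patent does not spell out, so this is a sketch), the row-to-distance mapping behind FIG. 8C reduces to Z = f·H/(y − cy); every named parameter below is an illustrative assumption:

```python
def distance_from_row(y, f_px=1000.0, cy=540.0, cam_height_m=3.0):
    """Distance Z (metres) to a point on flat ground imaged at row y by
    a forward-looking pinhole camera mounted cam_height_m above the
    ground: Z = f * H / (y - cy).  Rows at or above the horizon row cy
    have no ground intersection.  All parameter values are illustrative."""
    if y <= cy:
        return float("inf")  # at or above the horizon
    return f_px * cam_height_m / (y - cy)
```

This is why the broken lines in FIG. 8C bunch together toward the horizon: equal steps in Z map to ever-smaller steps in image row y.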
  • FIG. 8D is an example illustrating a sensor signal that can be acquired from a range sensor when a laser range sensor is used as a position estimation device separately from the camera.
  • A laser range sensor emits a laser toward the front and side of the moving body and estimates distance from its reflection. A sensor signal can therefore be collected as a distance value for each direction θ from the moving body, as shown in FIG. 8D. From the result of FIG. 8A, the direction θ of the detected sign relative to the moving body is known, and by reading the distance value in that direction, the relative distance from the camera to the sign is obtained.
  • FIG. 8E shows an example of a distance image obtained from a stereo camera when a stereo camera is used as the camera.
  • In a stereo camera, two cameras are installed side by side, and the distance to each pixel in the image can be estimated from the disparity between the images obtained by the two cameras.
  • In FIG. 8E the distance is indicated by a gray value, and a distance value is available for each pixel. The distance can therefore be estimated by reading the pixel values within the area of the detected sign.
  • the relative position of the asset with respect to the moving object (camera) can be estimated by the above method.
  • the absolute position of the asset in the world coordinate system can be estimated from the relative distance thus obtained and the information on the position of the moving body and the direction of the moving body based on the GPS information.
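The combination of relative distance, travel direction, and GPS position described above might be sketched with a local flat-earth approximation (adequate over tens of metres); the function and the metres-per-degree constant are illustrative assumptions, not from the patent:

```python
import math

def asset_world_position(cam_lat, cam_lon, heading_deg, dist_m,
                         bearing_off_deg=0.0):
    """Project a relative (distance, bearing) measurement into a
    latitude/longitude using a local flat-earth approximation.

    heading_deg:     travel direction, clockwise from north
    bearing_off_deg: asset direction relative to the heading
                     (e.g. the laser-range direction theta)
    """
    bearing = math.radians(heading_deg + bearing_off_deg)
    dn = dist_m * math.cos(bearing)  # metres toward north
    de = dist_m * math.sin(bearing)  # metres toward east
    lat = cam_lat + dn / 111_320.0   # approx. metres per degree latitude
    lon = cam_lon + de / (111_320.0 * math.cos(math.radians(cam_lat)))
    return lat, lon
```

For higher accuracy over longer baselines a proper geodesic calculation would be used, but the flat-earth form shows how heading and relative distance combine with the GPS fix.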
  • FIG. 9 is a flowchart showing the determination process in the determination unit.
  • step S901 the assets detected from the detection unit and the position estimation unit and their installation positions are input.
  • In step S902 the asset information DB is collated. If the asset specified in the asset information DB was not detected, the process goes to S903 and the determination result "no asset" is output. If the asset is present, the process proceeds to S904.
  • the position of the detected asset is collated with the asset information DB.
  • the distance value is calculated by comparing the detected asset with the past detection result image stored in the detection result DB.
  • FIG. 10 is a diagram showing an example in which the notification unit notifies the user not only by voice but also by screen display.
  • Sign A is shown as the asset to be detected.
  • the determination result notification unit 1001 displays the determination result of the determination unit.
  • the current video display unit 1002 acquires and displays a video of a peripheral area where the asset A is not detected from the video storage unit.
  • The playback control unit 1003 controls playback and stopping of this video. When an abnormality has occurred in an asset to be detected, it can thus be confirmed on the video whether the notification result is really correct. When an abnormality has actually occurred, the user can check the status of the asset without traveling to the shooting site.
  • the map information display unit 1005 displays the current playback position on the map based on the GPS information of the captured video. As a result, the user can easily grasp at which position the abnormality has occurred in the asset.
  • FIG. 11 shows an example in which two cameras are used for asset management. Since other configurations are the same as those of the first embodiment, the description thereof will be omitted as appropriate.
  • one camera is installed as the peripheral camera 111 so that the periphery of the front or rear of the railway can be widely viewed.
  • the other camera is installed as a road surface camera 112 facing downward from the camera 111 so that only the road surface portion is photographed.
  • FIG. 11B shows an image taken by the peripheral camera 111. The angle of view is adjusted so that all assets beside the track, such as signs, tracks, and tunnels, fit within the screen.
  • FIG. 11C is an example of an image photographed by the road surface camera 112.
  • FIG. 11C shows an example in which the camera 112 is rotated 90 degrees relative to the camera 111 to capture a vertically long image. By shooting downward, the individual sleepers on the track can be photographed.
  • Road markings such as speed restrictions and stop lines can also be photographed in a form that is easy to recognize. Since only the road surface portion is photographed, it is captured at high resolution, and the asset position can be estimated with higher accuracy.
  • the process selection unit 141 determines which video of the two cameras is used based on the GPS information. In a place where there is an asset on the road surface such as a branch of a track, an image captured by the road surface camera 112 is selected as an image used for detection.
  • the line branching portion detection process is performed on the video from the peripheral camera 111.
  • Parameters such as the frame rate of the road surface camera 112 are also controlled. Specifically, the distance interval at which images should be captured is set as a parameter, and the frame rate is controlled according to the current speed of the moving body, which varies. For example, for a camera that can photograph a 5 m range, suppose that an image is to be acquired every 2 m, allowing some margin.
  • Then, when the moving speed is 4 m/s, images are stored at 0.5-second intervals (2 fps), and when it is 20 m/s, at 0.1-second intervals (10 fps). The faster the moving speed, the more blurred the image becomes and the harder detection is; narrowing the preset 2 m acquisition interval as the speed rises can therefore further improve asset detection accuracy.
  • Since the road surface image covers only a narrow section, a high moving speed may blur the image or cause the asset to be missed entirely. The camera parameters can therefore be controlled according to the situation by recognizing the target in advance from the peripheral camera image or by estimating the current traveling speed. Omissions can also be prevented by raising the frame rate only while a road-surface asset is entering the road camera's view.
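The speed-dependent frame-rate control worked through above (2 m spacing, giving 2 fps at 4 m/s and 10 fps at 20 m/s) can be sketched as follows; the clamp limits are illustrative assumptions:

```python
def frame_rate_for_interval(speed_mps, interval_m=2.0,
                            min_fps=1.0, max_fps=30.0):
    """Frame rate needed to store one road-surface image every
    interval_m metres at the current speed, clamped to the camera's
    supported range (the clamp limits here are assumed values)."""
    fps = speed_mps / interval_m
    return min(max(fps, min_fps), max_fps)
```

The clamp reflects that a real camera has a fixed supported frame-rate range; above the maximum, the only remedy is a wider capture interval or a faster camera.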
  • FIG. 13 is an example of a processing flow of the asset management apparatus that performs redetermination processing according to the third embodiment of the present invention.
  • The main processing flow is the same as in FIG. 4 of the first embodiment, so its description is omitted; the newly added steps S420 to S422 are described here.
  • The specific section and peripheral section designated here are described in detail.
  • As the specific section, a measurement error of about 1-5 m caused by GPS or camera time information is assumed.
  • As the peripheral section, an error of about 10 m that can occur when assets are installed is assumed.
  • the value in the specific section is a value for correctly detecting in anticipation of the error range of the present apparatus when the asset information DB is correct.
  • the value of the peripheral section is a value for finding an error in this apparatus when there is an error in the asset information DB.
  • Specifically, the specific section is a range of about 1-5 m centered on the position registered in the asset information DB as the installation position.
  • The peripheral section is the part of a roughly 10 m range centered on the installation position that excludes the specific section.
  • In S420-422, the detection process is also performed for the peripheral section.
  • Thus, even when an asset is actually installed at a location away from the position registered in the asset information DB because of an error made when the DB was created, it can still be detected in this second determination process.
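One way to express the specific-section/peripheral-section re-determination described above, with illustrative half-widths matching the 1-5 m and 10 m figures in the text (the exact values and function shape are assumptions):

```python
def classify_detection(pos_m, registered_m,
                       specific_half_m=4.0, peripheral_half_m=10.0):
    """Classify a detected asset position against its registered position:
    'specific'   - within the expected measurement error,
    'peripheral' - only within the wider installation-error band,
                   suggesting the DB entry may need updating,
    'outside'    - beyond both sections."""
    d = abs(pos_m - registered_m)
    if d <= specific_half_m:
        return "specific"
    if d <= peripheral_half_m:
        return "peripheral"
    return "outside"
```

A 'peripheral' result would trigger the second determination process: the asset exists, but the registered installation position is likely wrong.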
  • FIG. 14 is a diagram showing a configuration of a detection unit that automatically switches DBs and detects assets according to the fourth embodiment of the present invention.
  • A feature of this embodiment is that a search DB selection unit 706 is newly provided.
  • the search DB selection unit 706 selects an asset image to be compared with the image selected for detection from either the asset feature DB 145 or the detection result DB 146 (or both).
  • If no image of sign A photographed at the exact same place has been accumulated in the detection result DB, the search DB selection unit collates against the data for sign A in the asset feature DB. On the other hand, if the same point has been audited many times in the past and multiple images of the sign therefore exist for that location, images of sign A taken on other days at the same point are selected from the detection result DB for collation.
  • FIG. 15 shows examples of variations that can occur in the same type of sign.
  • The characters on the plate differ depending on the location, and the size of the plate may change accordingly.
  • The shape of the pole to which the plate is fixed may also differ, or the pole may be tilted depending on the condition of the ground surface, as shown in FIG. 15.
  • Therefore, the search DB selection unit selects which DB to collate against. In places where the vehicle has traveled in the past, the detection rate improves because collation can use images of the very same sign.
  • When a detection result is stored in the detection result DB, the shooting date and time, weather conditions, and the like are stored with it. At detection time, accuracy can then be kept high even in places that have never been audited, by collating only against images of signs of the same type that were taken elsewhere but under similar conditions.
  • If the video storage unit 130 is connected to the network 210 via the Internet or the like, the weather information for the shooting location is obtained based on the GPS information and stored in the storage unit as the weather condition. For example, when the weather information of an image captured at a certain time is "sunny", detection accuracy can be improved by using only the images tagged "sunny" among those stored in the detection result DB.
  • When the camera performs gain control, the gain value used at that time can also serve as a kind of weather-condition information.
  • The same effect can be obtained by collating only against images whose stored gain value differs from the gain value of the photographed image by no more than a threshold.
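The condition-based filtering of reference images (weather tag plus gain difference) could look like the following sketch; the record layout, field names, and threshold are assumptions for illustration:

```python
def select_reference_images(records, shot_gain, gain_threshold=2.0,
                            weather=None):
    """Pick past detection-result records captured under conditions
    similar to the current shot: the same weather tag (when one is
    given) and a camera gain within gain_threshold of the shot's gain.
    Each record is assumed to be a dict with 'gain' and 'weather' keys."""
    selected = []
    for rec in records:
        if weather is not None and rec.get("weather") != weather:
            continue
        if abs(rec["gain"] - shot_gain) <= gain_threshold:
            selected.append(rec)
    return selected
```

Filtering the reference set this way keeps the image search comparing like with like, so illumination differences are less likely to inflate the feature distance.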
  • 703 ... Image search unit 704 ... Search result integration unit 705 ... Search result output unit 706 ... Search DB selection unit 1001 ... Determination result notification unit 1002 ... Current video display unit 1003 ... Playback control unit 1004 ... Past video display unit 1005 ... Map display unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

This invention concerns an object detection method characterized in that it comprises: a camera disposed on a moving unit; a GPS that acquires image position information, which is information on the position at which an image was captured by the camera; a first storage unit that stores the image and the image position information; a second storage unit that stores, as information on an object to be detected, an object image feature quantity, which is the image feature quantity of the object to be detected, together with installation position information; an image selection unit that selects the image as a selected image on the basis of the image position information and the installation position information and determines a specified region in the selected image as a detection region; a detection unit that performs object detection by determining the presence or absence of the object to be detected in the detection region, using the degree of similarity between the image feature quantity extracted from the detection region and the object image feature quantity; and an output unit that transmits a notification to a user if it is determined that the object to be detected is not present.
PCT/JP2015/053888 2015-02-13 2015-02-13 Object detection system and object detection method Ceased WO2016129091A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/053888 WO2016129091A1 (fr) 2015-02-13 2015-02-13 Object detection system and object detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/053888 WO2016129091A1 (fr) 2015-02-13 2015-02-13 Object detection system and object detection method

Publications (1)

Publication Number Publication Date
WO2016129091A1 true WO2016129091A1 (fr) 2016-08-18

Family

ID=56615522

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/053888 Ceased WO2016129091A1 (fr) 2015-02-13 2015-02-13 Object detection system and object detection method

Country Status (1)

Country Link
WO (1) WO2016129091A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008502538A * 2004-06-11 2008-01-31 Stratech Systems Limited Railway track scanning system and method
JP2008250687A * 2007-03-30 2008-10-16 Aisin Aw Co Ltd Feature information collection device and feature information collection method
JP2014092875A * 2012-11-01 2014-05-19 Toshiba Corp Image synchronization device and system


Similar Documents

Publication Publication Date Title
US10607090B2 Train security system
CN107563419B Train positioning method combining image matching and two-dimensional codes
CA2560800C Automated asset detection, location measurement and recognition
Ai et al. Critical assessment of an enhanced traffic sign detection method using mobile LiDAR and INS technologies
KR101339354B1 Position detection system and method for railway vehicles using video
JP2019084881A Obstacle detection device
US20200034637A1 Real-Time Track Asset Recognition and Position Determination
WO2016132587A1 Information processing device, road structure management system, and road structure management method
EP3415400A1 System and method for measuring the position of a guided vehicle
JP2018018461A Information processing device, display device, information processing method, and program
WO2020210960A1 Method and system for reconstructing a digital panorama of a traffic route
KR102163208B1 Hybrid unmanned traffic surveillance system and method
KR102257078B1 Fog detection device using a coordinate system and method therefor
CN111696365A Vehicle tracking system
KR102200204B1 Three-dimensional image analysis system using CCTV video
JP2018017101A Information processing device, information processing method, and program
CN109826668B Underground multi-source precise personnel positioning system and method
CN113420726A Region-deduplicated passenger flow statistics method based on overhead images
JP2002008019A Track recognition device and railway vehicle using the same
WO2016129091A1 Object detection system and object detection method
JP2002002485A Track recognition device and railway vehicle using the same
KR101509105B1 Augmented reality providing device and method therefor
CN111767872A Interactive emergency vehicle passage method based on CIM and multi-view image processing
TWI811954B Positioning system and method for correcting object position
JP7721406B2 Information processing device and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15881966

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15881966

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP