
HK1197294B - Coordinate conversion table creation system and coordinate conversion table creation method - Google Patents


Info

Publication number
HK1197294B
Authority
HK
Hong Kong
Prior art keywords
image
world
vehicle
information
roadside
Prior art date
Application number
HK14110849.6A
Other languages
Chinese (zh)
Other versions
HK1197294A1 (en)
Inventor
Fukumoto Go
Original Assignee
NEC Corporation
Priority date
Filing date
Publication date
Application filed by NEC Corporation
Priority claimed from PCT/JP2012/006774 (WO2013088626A1)
Publication of HK1197294A1
Publication of HK1197294B

Description

Coordinate conversion table creation system and coordinate conversion table creation method
Technical Field
The present invention relates to a coordinate conversion table creation system and a coordinate conversion table creation method.
Background
In recent years, safe driving support systems have been proposed that photograph a vehicle with a camera device arranged on the roadside and detect the position and speed of the vehicle on the basis of the photographed image, in order to measure traffic volume and prevent collisions at curves or intersections with poor visibility.
In such a safe driving support system, the TTC (time to collision) is estimated on the basis of the position and speed of the vehicle acquired from the captured image. Then, on the basis of the estimation result, a warning is issued to the driver or braking control of the vehicle is performed. Accordingly, it is necessary to acquire the position and speed of the vehicle with excellent accuracy.
In order to acquire the position and speed of the vehicle from the captured image with excellent accuracy, it is necessary to convert the position of the vehicle according to coordinates (hereinafter, referred to as image coordinates) set for the captured image into the position of the vehicle according to coordinates (hereinafter, referred to as world coordinates) set for a real space such as a road or the like. That is, it is necessary to perform coordinate transformation between the image coordinates and the world coordinates.
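For illustration only, the conversion between image coordinates and world coordinates for points on a planar road surface is often modeled as a homography. The following sketch (not part of the invention; the 3x3 matrix H here is a placeholder identity) shows how such a mapping is applied to a point:

```python
def apply_homography(h, u, v):
    """Map image coordinates (u, v) to world coordinates (x, y) using a
    3x3 homography h given as a nested list (planar-road assumption)."""
    w = h[2][0] * u + h[2][1] * v + h[2][2]
    x = (h[0][0] * u + h[0][1] * v + h[0][2]) / w
    y = (h[1][0] * u + h[1][1] * v + h[1][2]) / w
    return x, y

# The identity homography leaves coordinates unchanged; a real H would be
# estimated from image/world correspondences.
H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(apply_homography(H, 320.0, 240.0))  # -> (320.0, 240.0)
```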
Therefore, when an image pickup device such as a surveillance camera or the like that monitors a vehicle is newly arranged, a coordinate conversion table that performs conversion between image coordinates and world coordinates is created.
As a technique for creating a coordinate conversion table, Japanese Patent Application Laid-Open No. 2010-236891 describes the following method. That is, a vehicle on which a target object is mounted is photographed by a roadside camera (image pickup device), and the position of the target object according to image coordinates is found on the basis of the photographed image. Meanwhile, by using a GPS (Global Positioning System), the position of the target object according to world coordinates is found. Then, a conversion table is created by comparing and associating the position of the target object according to the world coordinates with the position of the target object according to the image coordinates.
Disclosure of Invention
(problems to be solved by the invention)
However, the technique described in Japanese Patent Application Laid-Open No. 2010-236891 has a problem in that a coordinate conversion table cannot be created in the case where the roadside camera is arranged in a space where GPS is not available, such as a tunnel.
A main object of the present invention is to provide a coordinate conversion table creation system and a coordinate conversion table creation method that can acquire a coordinate conversion table between image coordinates and world coordinates with excellent accuracy even in an environment such as a tunnel where GPS is unavailable.
(means for solving the problem)
To solve this problem, a coordinate transformation table creating system includes: an image-based information acquisition unit that photographs a running vehicle, and acquires an image-based vehicle position according to image-based coordinates set to the photographed image, and outputs the acquired image-based vehicle position as image-based information; a world-based information acquisition unit that acquires a world-based vehicle position of the vehicle according to the world-based coordinates and outputs the acquired world-based vehicle position as world-based information; and a coordinate conversion information creating unit that creates a coordinate conversion table between the image-based information and the world-based information on the basis of the image-based information and the world-based information.
Further, a coordinate conversion table creating method includes: an image-based information acquisition process of photographing a running vehicle, and acquiring an image-based vehicle position according to image-based coordinates set to the photographed image, and outputting the acquired image-based vehicle position as image-based information; a world-based information acquisition process that acquires a world-based vehicle position of the vehicle according to the world-based coordinates and outputs the acquired world-based vehicle position as world-based information; and a coordinate conversion information creating process of creating a coordinate conversion table between the image-based information and the world-based information on the basis of the image-based information and the world-based information.
(advantageous effects of the invention)
According to the present invention, a coordinate conversion table between image coordinates and world coordinates can be acquired with excellent accuracy even in an environment where GPS is not available, such as a tunnel section.
Drawings
FIG. 1 is a schematic view showing a coordinate conversion table creating system according to the present invention;
FIG. 2 is a block diagram showing a roadside image pickup device and an in-vehicle device;
FIG. 3 is a flowchart showing a process of creating a coordinate conversion table;
FIG. 4A is a roadside captured image captured by the roadside camera when the vehicle is passing the judgment line; and
FIG. 4B is a roadside captured image captured by the roadside camera after the vehicle has passed the judgment line.
Detailed Description
Hereinafter, exemplary embodiments according to the present invention will be described. Fig. 1 is a schematic diagram showing a coordinate conversion table creating system 2 according to the present invention. The coordinate conversion table creating system 2 includes a roadside image pickup device 10 arranged on the roadside, and an in-vehicle device 20 mounted on a vehicle 30. Fig. 2 is a block diagram showing the roadside image pickup device 10 and the in-vehicle device 20.
The roadside image capturing device 10 includes a roadside camera 11, a vehicle detection unit 12, an image-based vehicle detection unit 13, and a coordinate conversion table creation unit 14. Among them, the roadside camera 11, the vehicle detection unit 12, and the image-based vehicle detection unit 13 are included in the image-based information acquisition unit 3, and the coordinate conversion table creation unit 14 is included in the coordinate conversion information creation unit 4.
The roadside camera 11 is a camera such as a road monitoring camera arranged on the roadside. The running vehicle 30 is photographed by the roadside camera 11, and the photographed image is output to the vehicle detection unit 12 and the image-based vehicle detection unit 13 as a roadside captured image. Here, it is assumed that a lane-distinguishing boundary line K, drawn as a broken line, is present on the road (see FIG. 1).
The lane-distinguishing boundary line K serves as an immovable object (hereinafter referred to as a reference object) for the roadside image pickup device 10 and the in-vehicle device 20. The reference object is not limited to the lane-distinguishing boundary line; it may be a reflecting plate or the like disposed on the road. By using the reference object K, a reference point for coordinate conversion between the image coordinates and the world coordinates is found.
Further, the roadside camera 11 photographs the rear of the vehicle body as the vehicle becomes distant from the roadside camera 11. The arrows in FIG. 1 and FIG. 4 indicate the traveling direction of the vehicle 30.
The vehicle detection unit 12 extracts the vehicle 30 from the roadside captured image, and determines whether the extracted vehicle 30 exists in a measurement region set in advance. The judgment is transmitted to the world-based vehicle detection unit 22 as the area judgment information by wireless means using radio waves or the like. Here, the measurement area refers to a range for detecting the position of the vehicle 30. In the case where the size of the vehicle in the roadside captured image is small (in the case of the far point capture), the accuracy of the vehicle position is reduced. Therefore, the measurement region is set in advance by taking the resolution of the roadside camera 11 and the like into consideration.
Further, the vehicle detection unit 12 determines whether the extracted vehicle 30 is close to the reference object K. In the case where the vehicle 30 approaches the reference object K, the vehicle detection unit 12 transmits vehicle approach information indicating that the vehicle 30 approaches the reference object K to the world-based vehicle detection unit 22.
As shown in FIG. 1, the lane-distinguishing boundary line K is composed of a plurality of white lines K1 drawn intermittently, each of which has a predetermined length. Therefore, the judgment of whether the vehicle 30 is approaching depends on which white line K1 is taken as the reference point. Here, an end point K2 (K2_i, where i is a positive integer) of a white line K1 is assumed to be the reference point for determining whether the vehicle 30 is approaching. Since there are a plurality of white lines K1, there are a plurality of end points K2. Therefore, the judgment as to whether the vehicle 30 is approaching is performed for each end point K2.
The image-based vehicle detection unit 13 acquires the position of the light source unit 23 from the roadside captured image. Since the light source unit 23 is mounted in the in-vehicle apparatus 20, the position of the light source unit 23 corresponds to the position of the vehicle. Here, the position of the vehicle acquired by the image-based vehicle detection unit 13 is found from the roadside captured image, and is therefore based on the image coordinates.
The image-based vehicle detection unit 13 defines the position of the vehicle as the image-based vehicle position, and the time at which the position of the vehicle is acquired as the image-based position acquisition time. Then, the image-based vehicle detection unit 13 outputs these two pieces of information to the coordinate conversion table creating unit 14 as image-based information. The image-based position acquisition time is measured by a timer (not shown in the figure) mounted on the roadside camera 11, the image-based vehicle detection unit 13, or the like.
The coordinate conversion table creating unit 14 creates a coordinate conversion table between the image coordinates and the world coordinates on the basis of the image-based information received from the image-based vehicle detecting unit 13 and the world-based information received from the world-based vehicle detecting unit 22 described later.
Next, the configuration of the in-vehicle apparatus 20 will be described. Among them, the in-vehicle apparatus 20 is included in the world-based information acquisition unit 5. The vehicle-mounted device 20 includes a vehicle-mounted camera 21, a world-based vehicle detection unit 22, and a light source unit 23, and the vehicle-mounted device 20 is mounted on a vehicle 30.
The in-vehicle camera 21 photographs the reference object K. The world-based vehicle detection unit 22 detects the end point K2 of the reference object K on the basis of the image captured by the in-vehicle camera 21 (the vehicle-mounted captured image) and the vehicle approach information received from the vehicle detection unit 12, and then acquires the position of the vehicle relative to the position of the end point K2 as the world-based vehicle position. Further, the time at which the world-based vehicle position is acquired is defined as the world-based position acquisition time. The world-based vehicle position and the world-based position acquisition time are transmitted to the coordinate conversion table creation unit 14 as world-based information. The world-based position acquisition time is measured by a timer (not shown in the figure) mounted on the in-vehicle camera 21, the world-based vehicle detection unit 22, or the like.
Further, when the world-based vehicle detection unit 22 acquires the world-based information, the world-based vehicle detection unit 22 outputs a trigger signal to the light source unit 23. Once the light source unit 23 receives the trigger signal from the world-based vehicle detection unit 22, the light source unit 23 including a light source such as an LED or the like is turned on and off once.
Next, a process of creating a coordinate conversion table by using the coordinate conversion table creating system 2 as described above will be described with reference to the flowchart shown in FIG. 3. Here, for convenience of description, it is assumed that the roadside camera is arranged inside a tunnel; however, the exemplary embodiments are not limited to use in this environment.
Step SA 1: the vehicle 30 travels on a road in a tunnel. The roadside camera 11 of the roadside imaging device 10 images the road. Thus, the vehicle 30 entering the shooting area is shot.
Step SA 2: the vehicle detection unit 12 detects the vehicle 30 by performing predetermined image processing on the roadside captured image captured by the roadside camera 11, and acquires the position of the vehicle 30. The position of the vehicle 30 acquired at this time is a position according to the image coordinates.
As a method for detecting the vehicle 30, the following method can be exemplified. That is, an image in which the vehicle 30 is not present is acquired in advance as a background image, and the difference between the roadside captured image captured by the roadside camera 11 and the background image is found. By finding this difference, the vehicle 30 can be extracted. The position of the vehicle is then calculated relative to the origin of the preset image coordinates. The origin may be defined as a point set in the captured image (e.g., a corner of the roadside captured image). As described later, the position acquired at this time is used for determining whether the vehicle 30 is present in the measurement area and for determining whether the vehicle 30 is approaching. Note that the exemplary embodiments are not limited to the background difference processing; a well-known method such as pattern matching is also possible.
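For illustration only, the background-difference detection described above can be sketched as follows. This is a toy example over small luminance grids; the function names and threshold are ours, not from the specification:

```python
def extract_vehicle_pixels(frame, background, threshold):
    """Return (row, col) pixels whose absolute luminance difference from the
    background image exceeds `threshold` (simple background subtraction)."""
    pixels = []
    for r, (frow, brow) in enumerate(zip(frame, background)):
        for c, (f, b) in enumerate(zip(frow, brow)):
            if abs(f - b) > threshold:
                pixels.append((r, c))
    return pixels

def centroid(pixels):
    """Image-coordinate position of the extracted region (its centroid),
    measured from the image-coordinate origin (e.g., a corner of the image)."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)

background = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame      = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
print(centroid(extract_vehicle_pixels(frame, background, 50)))  # -> (1.0, 1.0)
```

A real implementation would operate on camera frames and typically clean the difference image (thresholding, morphology) before locating the vehicle.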
Step SA 3: next, the vehicle detection unit 12 determines whether the acquired vehicle position is in a measurement area set in advance, and transmits the determination result to the world-based vehicle detection unit 22 as area determination information.
Step SA 4: further, in the case where the vehicle detection unit 12 determines that the vehicle 30 is present in the measurement area, the vehicle detection unit 12 determines whether the vehicle 30 is approaching by comparing the position of the vehicle 30 and the position of the end point K2 of the reference object K, both within the roadside captured images, according to a procedure described later.
For example, FIG. 1 shows a state in which the vehicle 30 is becoming far from the end point K2_4 but close to the end point K2_5. Since the roadside camera 11 is fixed at an immovable position relative to the road, the roadside camera 11 is also at an immovable position relative to each end point K2. Therefore, if the position of each end point K2 existing in the measurement area is acquired in advance according to the image-based coordinates, it can be determined whether the vehicle 30 is close to each end point K2.
The vehicle detection unit 12 selects the nearest end point among the plurality of end points K2, and determines whether the vehicle 30 is close to that end point. For example, FIG. 1 shows a state in which the in-vehicle device 20 is near the end points K2_4 to K2_7, and the nearest among them is the end point K2_4. Therefore, the vehicle detection unit 12 determines whether the vehicle 30 approaches the end point K2_4. In the case where the vehicle detection unit 12 determines that the vehicle 30 is approaching, the process proceeds to step SA5, and in the case where the vehicle detection unit 12 determines that the vehicle 30 is not approaching, the process returns to step SA2. The case where the vehicle detection unit 12 determines that the vehicle 30 is not close to the end point K2 is the case where the vehicle 30 has moved outside the measurement area.
The determination as to whether the vehicle 30 is approaching is performed as follows. That is, the reference object K is photographed in the shape of a white line or the like in the roadside captured image. Then, the vehicle detection unit 12 acquires the reference object K by performing a process of extracting an edge and a process of extracting a strong luminance portion on the roadside captured image. That is, the vehicle detection unit 12 acquires the reference object K including the plurality of white lines K1.
Each of FIGS. 4A and 4B is a diagram showing the reference object K acquired as described above and the vehicle 30 acquired in step SA2. Here, a broken line (judgment line) L shown in the figures is a line passing through an end point K2 of the reference object K and perpendicular to the reference object K. Further, the mark X indicates the position of the in-vehicle device 20 within the vehicle 30. FIG. 4A is a roadside captured image captured at time t, when the vehicle 30 is just passing the judgment line L, and FIG. 4B is a roadside captured image captured at time t + Δt (Δt > 0), after the vehicle 30 has passed the judgment line L. In the case where the vehicle 30 passes through the judgment line L as described above, it is determined that the vehicle 30 is approaching. Setting the judgment line L corresponds to specifying, among the end points K2_4 to K2_7, the end point K2_4 that the vehicle 30 is approaching.
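The judgment of whether the vehicle has crossed the judgment line L can be sketched as follows, under the simplifying assumption (ours, for illustration) that positions are measured as distances along the travel direction:

```python
def passed_judgment_line(prev_y, curr_y, line_y):
    """True when the vehicle crossed the judgment line L between two frames:
    it was short of line_y in the previous frame and at or beyond it now.
    All values are distances along the travel direction."""
    return prev_y < line_y <= curr_y

# Vehicle at 3.2 m in the previous frame, 4.6 m in the current frame;
# judgment line L at 4.0 m: the vehicle has just crossed it.
print(passed_judgment_line(3.2, 4.6, 4.0))  # -> True
print(passed_judgment_line(2.0, 3.5, 4.0))  # -> False
```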
Step SA 5: in the case where it is determined that the vehicle 30 approaches, the vehicle detection unit 12 transmits vehicle approach information to the world-based vehicle detection unit 22.
Here, the transmission of information is performed by using a wireless LAN or the like, but another transmission method may be used on the condition that the time required for transmitting the information is short. "Short" here means that the delay time of the information transmission does not cause a problem from the viewpoint of the required accuracy.
In the case where the information is transmitted in packets, the world-based vehicle detection unit 22 performs processing so that a packet including the vehicle approach information can be identified. For example, a specific bit of the packet data is defined as a flag, and a packet is transmitted with the flag set to indicate that the packet contains vehicle approach information. Of course, the exemplary embodiments are not limited to the above-described method.
Step SA 6: on the other hand, in the case where the vehicle detection unit 12 determines that the vehicle 30 is not present in the measurement area, the image-based vehicle detection unit 13 selects, among the roadside captured images, the roadside captured image captured at the timing when the light source unit 23 is turned on and off. Then, the image-based vehicle detection unit 13 acquires the position of the vehicle (image-based vehicle position) from the selected roadside captured image, and further acquires the capturing time as the image-based position acquisition time. As described above, the image-based vehicle detecting unit 13 acquires the position based on the image coordinates and the position acquisition time based on the image, and outputs both information to the coordinate conversion table creating unit 14 as the image-based information. The acquisition of the image-based position information is repeatedly performed. The number of repetitions corresponds to the number of times the light source unit 23 is turned on and off.
In the roadside captured image, it can be determined whether or not the roadside captured image is captured at the timing when the light source unit 23 is turned on and off, on the basis of the luminance of the region including the light source unit 23. That is, the contour of the vehicle 30 is found by performing a contour extraction process on the roadside captured image. Since the position of the light source unit 23 relative to the contour of the vehicle 30 is known in advance, the area of the light source unit 23 can be specified by finding the contour of the vehicle 30. Then, in the case where the luminance of the specified area is not less than the predetermined luminance, the image-based vehicle detection unit 13 determines that the roadside captured image was captured at the time when the light source unit 23 was turned on.
The predetermined luminance is appropriately set according to the environment. For example, in the case where the coordinate conversion table creating system 2 is arranged in an environment such as a tunnel, the vehicle 30 turns on its headlights or taillights. Therefore, the predetermined luminance is set somewhat high in order to prevent an erroneous determination caused by those lamps when the light source unit 23 is not actually turned on. As described above, the image-based vehicle detection unit 13 determines whether or not the roadside captured image was captured at the time when the light source unit 23 was turned on.
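A minimal sketch of this luminance-based judgment is given below. The region bounds, the pixel values, and the luminance threshold are illustrative assumptions, not values from the specification:

```python
def light_source_on(image, region, min_luminance):
    """Judge whether the light source unit is on: the mean luminance of the
    specified region (r0, c0, r1, c1) must reach min_luminance.
    min_luminance is set above headlight/taillight levels so that the lamps
    of the vehicle itself do not cause a false positive."""
    r0, c0, r1, c1 = region
    values = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(values) / len(values) >= min_luminance

# Bright 2x2 patch where the light source unit is expected.
image = [[0, 0, 0],
         [0, 255, 255],
         [0, 255, 255]]
print(light_source_on(image, (1, 1, 3, 3), 200))  # -> True
```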
Of course, the exemplary embodiments are not limited to the above-described method. For example, when a region whose luminance is not less than a predetermined luminance is found in the roadside captured image, it can also be determined that the luminance of the region is due to the light source unit 23 being turned on. In this case, since it is unnecessary to perform the process of finding the contour of the vehicle 30, the processing speed becomes high.
Step SA 7: the coordinate conversion table creating unit 14 waits for the reception of the image-based position information from the image-based vehicle detecting unit 13, and receives the world-based information from the world-based vehicle detecting unit 22.
Step SA 8: the coordinate conversion table creation unit 14 associates the image-based vehicle position with the world-based vehicle position. As described above, the image-based vehicle position and the world-based vehicle position are vehicle positions according to the image coordinates at the time when the light source unit 23 is turned on. That is, the image-based vehicle position and the world-based vehicle position are vehicle positions under the condition of the time when the light source unit 23 is turned on at the same time. Therefore, a coordinate conversion table between the image coordinates and the world coordinates can be created. Wherein the coordinate transformation table can be realized by function approximation.
Incidentally, the image-based vehicle detecting unit 13 outputs the image-based vehicle position and the image-based position acquisition time to the coordinate conversion table creating unit 14 as the image-based information, and the world-based vehicle detecting unit 22 outputs the world-based vehicle position and the world-based position acquisition time to the coordinate conversion table creating unit 14 as the world-based information.
However, when the above-described coordinate conversion table is created, the image-based position acquisition time and the world-based position acquisition time are not used. The reason is that the image-based vehicle detection unit 13 extracts the vehicle position from the roadside captured image captured when the light source unit 23 is turned on, and defines the extracted vehicle position as the image-based vehicle position, and therefore, the image-based vehicle position and the world-based vehicle position are position-synchronized.
In contrast to the above-described method, the coordinate conversion table creating unit 14 can create the coordinate conversion table by using the image-based vehicle position and the world-based vehicle position at which the image-based position acquisition time and the world-based position acquisition time are the same. That is, in this case, the image-based vehicle position and the world-based vehicle position are time-synchronized.
Which of position synchronization and time synchronization should be employed is determined in consideration of the processing speed and accuracy, and the determined synchronization is set. Furthermore, both synchronizations may be used together. With regard to position synchronization, it is preferable to detect the exact timing at which the light source unit 23 is turned on. However, in some cases, due to the shooting conditions (e.g., the frame rate), an image-based vehicle position at a time different from the time when the light source unit 23 was turned on is selected. Then, the image-based vehicle positions in the frames preceding and succeeding the frame at which the light source unit 23 is determined to be turned on are regarded as candidates, and are also output to the coordinate conversion table creation unit 14. The coordinate conversion table creating unit 14 calculates an image-based position acquisition time equal to the world-based position acquisition time. Then, the coordinate conversion table creating unit 14 performs interpolation processing such as linear interpolation on the plurality of image-based vehicle positions received as candidates, and calculates the image-based vehicle position corresponding to the calculated image-based position acquisition time. According to the above method, the coordinate conversion table can be created with excellent accuracy.
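The linear-interpolation step can be sketched as follows (the times and positions are illustrative values, not from the specification):

```python
def interpolate_position(t0, p0, t1, p1, t):
    """Linearly interpolate the image-based vehicle position at time t from
    the candidate positions p0 (captured at t0) and p1 (captured at t1)."""
    a = (t - t0) / (t1 - t0)
    return (p0[0] + a * (p1[0] - p0[0]), p0[1] + a * (p1[1] - p0[1]))

# Candidate frames at t = 10.0 s and t = 10.1 s; the world-based position
# acquisition time falls between them at t = 10.04 s.
print(interpolate_position(10.0, (100.0, 50.0), 10.1, (110.0, 52.0), 10.04))
```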
Next, the process performed by the in-vehicle apparatus 20 will be described. The in-vehicle apparatus 20 acquires the above-described world-based information and transmits it to the coordinate conversion table creation unit 14.
Step SB 1: first, the in-vehicle camera 21 captures a reference object K.
Step SB 2: the world-based vehicle detection unit 22 detects the endpoint K2 of the reference object K by performing predetermined image processing on the vehicle-mounted captured image provided by the vehicle-mounted camera 21. Therefore, the position of the apparatus itself (vehicle 30) with respect to the position of the reference object K can be acquired. Since the acquired position of the device itself is a position relative to the position of the reference object K arranged on the road, the acquired position of the device itself is based on world coordinates.
As the predetermined image processing method, a known method such as an edge extraction method, a highlight extraction method, or the like is available. For example, in the case of using the edge extraction method, it is assumed that the vehicle-mounted captured image captured by the vehicle-mounted camera 21 includes a plurality of pixels. In this case, a luminance difference (differential) between pixels adjacent to each other is calculated. In a region where the luminance of an edge or the like greatly changes, the differential value becomes large. Therefore, the region can be extracted. Of course, the above is merely an example, and another method is also possible.
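The differential edge extraction described above can be sketched in one dimension as follows (the pixel values and threshold are illustrative assumptions):

```python
def edge_pixels(row, threshold):
    """Return indices where the luminance difference between horizontally
    adjacent pixels exceeds threshold: a 1-D differential edge detector.
    Large differences occur where the luminance changes greatly, e.g. at
    the boundary between the road surface and a white line."""
    return [i for i in range(len(row) - 1)
            if abs(row[i + 1] - row[i]) > threshold]

# Dark road surface, then a bright white line, then dark again:
# edges at the transitions after indices 1 and 3.
print(edge_pixels([20, 22, 200, 201, 25], 100))  # -> [1, 3]
```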
Step SB 3: next, the world-based vehicle detection unit 22 determines whether the device itself (vehicle 30) is present in the measurement area on the basis of the area determination information received from the vehicle detection unit 12. Then, in the case where the device itself exists outside the measurement region, the process proceeds to step SB 10. On the other hand, in the case where the device itself exists within the measurement region, the process proceeds to step SB 4.
Step SB 4: the world-based vehicle detection unit 22 waits to receive vehicle approach information from the vehicle detection unit 12.
Step SB 5: when the world-based vehicle detection unit 22 receives the vehicle approach information, the world-based vehicle detection unit 22 stops detecting the reference object K and changes the reference object detection parameter so that the detection accuracy is high.
For example, suppose that 10 frames per second are used for image processing as the normal accuracy. When the world-based vehicle detection unit 22 receives the vehicle approach information, it changes the reference object detection parameter (in this case, the number of frames per second) so that the end point K2 of the reference object K is detected by using 15 frames per second. This suppresses the error in which the vehicle 30 passes the end point K2 of the reference object K in the interval between the capture times of two adjacent frames. That is, the end point K2 can be detected with excellent accuracy.
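The relation between vehicle speed and the frame rate needed so that the vehicle cannot move too far between consecutive frames can be checked with a small calculation. The speed and distance values below are illustrative assumptions, not values from the specification:

```python
import math

def min_frame_rate(speed_kmh, max_travel_m):
    """Smallest whole frame rate such that a vehicle at speed_kmh moves at
    most max_travel_m between consecutive frames."""
    speed_ms = speed_kmh * 1000.0 / 3600.0  # km/h -> m/s
    return math.ceil(speed_ms / max_travel_m)

# At 108 km/h (30 m/s), limiting inter-frame travel to 2 m requires 15 fps.
print(min_frame_rate(108.0, 2.0))  # -> 15
```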
Step SB 6: the world-based vehicle detection unit 22 restarts detecting the reference object K by using the newly set reference detection parameters.
Step SB 7: then, the world-based vehicle detection unit 22 determines whether the end point K2 of the reference object K is detected. In the case where the endpoint K2 is detected, the process proceeds to step SB8, and in the case where the endpoint K2 is not detected, the process returns to step SB 6.
Step SB 8: in the case where the end point K2 of the reference object K is detected, the world-based vehicle detection unit 22 outputs a trigger signal to the light source unit 23. By receiving the trigger signal, the light source unit 23 turns on and off the light source such as the LED or the like once.
Step SB 9: the world-based vehicle detection unit 22 defines a time when the world-based vehicle detection unit 22 outputs the trigger signal as a world-based position acquisition time, and acquires a position where the vehicle exists when the trigger signal is output as a world-based vehicle position. Then, the process returns to step SB 2.
Here, the world-based vehicle position is acquired by the following method. In the case of an expressway or the like, since the length of each white line K1 is set to 8 meters and the interval between white lines K1 is set to 12 meters, the position of each end point K2 according to world coordinates can be calculated. Furthermore, the world-based vehicle detection unit 22 calculates the position of the device itself (vehicle 30) relative to the position of the end point K2 by using a Tsai camera model (R. Y. Tsai: "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, pp. 323-344, 1987) or the like. Therefore, the position of the vehicle according to the world coordinates can be acquired.
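Given the 8-meter line length and 12-meter interval mentioned above, the world-coordinate distance of each end point K2 along the road can be computed as follows (the convention that the first white line starts at the origin is ours, for illustration):

```python
def end_point_positions(n, line_len=8.0, gap_len=12.0):
    """Distance along the road (in metres) of the trailing end point K2_i of
    each of the first n white lines K1, assuming the first line starts at 0.
    Expressway lane markings from the text: 8 m line, 12 m gap."""
    period = line_len + gap_len  # one line/gap cycle is 20 m
    return [i * period + line_len for i in range(n)]

print(end_point_positions(4))  # -> [8.0, 28.0, 48.0, 68.0]
```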
Step SB10: when it is determined in step SB3 that the vehicle 30 is outside the measurement area (that is, the vehicle has moved away), the world-based vehicle detection unit 22 transmits the stored world-based information to the coordinate conversion table creating unit 14.
According to the above-described procedure, the coordinate conversion table creating unit 14 creates the coordinate conversion table on the basis of the relationship between the image-based vehicle position and the world-based vehicle position.
As described above, it is possible to create the coordinate conversion table that can indicate the relationship between the image-based vehicle position acquired from the roadside captured image and the world-based vehicle position acquired from the onboard captured image with excellent accuracy even in an environment in which GPS is not available, such as a tunnel or the like.
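The table-creation step pairs image-based and world-based samples whose acquisition times coincide, as in claim 5. The sketch below is a hedged illustration of that pairing; the field layout, the tolerance value, and the sample data are assumptions for illustration, not taken from the description.

```python
# Hedged sketch of the coordinate conversion table creation: pair image-based
# and world-based samples whose acquisition times coincide (here, within an
# assumed tolerance), and record each pixel-to-world correspondence.

def build_conversion_table(image_samples, world_samples, tol=0.05):
    """image_samples: [(t, (u, v))], world_samples: [(t, (x, y))].

    Returns a list of ((u, v), (x, y)) correspondences for samples whose
    timestamps agree within `tol` seconds.
    """
    table = []
    for t_img, uv in image_samples:
        for t_world, xy in world_samples:
            if abs(t_img - t_world) <= tol:
                table.append((uv, xy))
                break
    return table

img = [(1.00, (320, 240)), (2.00, (300, 250))]  # roadside camera samples
wld = [(1.01, (15.0, 3.5)), (2.02, (35.0, 3.5))]  # in-vehicle samples
print(build_conversion_table(img, wld))
# [((320, 240), (15.0, 3.5)), ((300, 250), (35.0, 3.5))]
```

A real implementation would interpolate many such correspondences over the whole measurement area to fill in the table; this sketch shows only the time-matching step.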
This application is based on and claims priority from Japanese Patent Application No. 2011-272449, filed on December 13, 2011, the entire contents of which are incorporated herein by reference.
REFERENCE SIGNS LIST
2 coordinate conversion table creation system
3 image-based information acquisition unit
4 coordinate conversion information creation unit
5 world-based information acquisition unit
10 roadside image pickup device
11 roadside camera
12 vehicle detection unit
13 image-based vehicle detection unit
14 coordinate conversion table creating unit
20 vehicle-mounted device
21 vehicle-mounted camera
22 world-based vehicle detection unit
23 light source unit
30 vehicle

Claims (10)

1. A coordinate conversion table creation system that creates a coordinate conversion table between image-based coordinates set to a captured image and world coordinates set to an immovable object, the coordinate conversion table creation system comprising:
an image-based information acquisition unit that photographs a running vehicle, and acquires an image-based vehicle position according to image-based coordinates set to the photographed image, and outputs the acquired image-based vehicle position as image-based information;
a world-based information acquisition unit that is mounted on a traveling vehicle and calculates a world-based vehicle position of the traveling vehicle relative to a predetermined reference object as world-based information from a vehicle-mounted captured image including the reference object captured by an in-vehicle camera; and
a coordinate transformation information creating unit that creates a coordinate transformation table between the image-based coordinates and the world-based coordinates on the basis of the image-based information and the world-based information.
2. The coordinate transformation table creating system according to claim 1, wherein the world-based information obtaining unit includes:
a world-based vehicle detection unit that outputs a trigger signal when a world-based vehicle position of the running vehicle is acquired; and
a light source unit that turns on and off a light source when the trigger signal is input.
3. The coordinate transformation table creation system according to claim 2, wherein the image-based information acquisition unit includes:
a roadside camera that is arranged on a roadside and that captures an image including a running vehicle as a roadside captured image; and
an image-based vehicle detection unit that determines whether the vehicle exists in a measurement area set in advance on the basis of the roadside captured image and outputs the determination to the world-based vehicle detection unit as area determination information, and further determines whether the vehicle approaches the reference object and outputs the determination to the world-based vehicle detection unit as vehicle approach information.
4. The coordinate transformation table creating system according to claim 3, wherein
The image-based information acquisition unit extracts the roadside captured image in which the light source unit is turned on from among the plurality of roadside captured images, and acquires the image-based vehicle position from the extracted roadside captured image.
5. The coordinate transformation table creating system according to claim 1, wherein
When the image-based information acquisition unit acquires the image-based vehicle position, the image-based information acquisition unit acquires an acquisition time of the image-based vehicle position as an image-based position acquisition time, and outputs the image-based vehicle position and the image-based position acquisition time as image-based information to the coordinate transformation information creation unit;
when the world-based information acquisition unit acquires the world-based vehicle position, the world-based information acquisition unit acquires an acquisition time of the world-based vehicle position as a world-based position acquisition time, and outputs the world-based vehicle position and the world-based position acquisition time to the coordinate transformation information creation unit as world-based information; and is
The coordinate conversion information creating unit creates the coordinate conversion table on the basis of the image-based vehicle position and the world-based vehicle position whose respective image-based position acquisition time and world-based position acquisition time coincide.
6. A coordinate conversion table creation method that creates a coordinate conversion table between image-based coordinates set to a captured image and world coordinates set to an immovable object, the coordinate conversion table creation method comprising:
an image-based information acquisition process that photographs a running vehicle, and acquires an image-based vehicle position according to image-based coordinates set to the photographed image, and outputs the acquired image-based vehicle position as image-based information;
a world-based information acquisition process for calculating a world-based vehicle position of the running vehicle relative to a predetermined reference object as world-based information from a vehicle-mounted captured image including the reference object captured by an in-vehicle camera; and
a coordinate transformation information creation process that creates a coordinate transformation table between the image-based coordinates and the world-based coordinates on the basis of the image-based information and the world-based information.
7. The coordinate transformation table creating method according to claim 6, wherein the world-based information obtaining process includes:
a world-based vehicle detection process that outputs a trigger signal when a world-based vehicle position of the running vehicle is acquired; and
a light emitting process that turns a light source on and off when the trigger signal is input.
8. The coordinate transformation table creating method according to claim 7, wherein the image-based information obtaining process includes:
a roadside photographing process in which a roadside camera arranged on the roadside photographs an image including a running vehicle as a roadside photographed image; and
an image-based vehicle detection process of determining whether the vehicle exists in a measurement area set in advance on the basis of the roadside captured image and outputting the determination to the world-based vehicle detection process as area determination information, and further, determining whether the vehicle approaches the reference object and outputting the determination to the world-based vehicle detection process as vehicle approach information.
9. The coordinate transformation table creating method according to claim 8,
the image-based information acquisition process extracts the roadside captured image in which the light emission process turns on the light source from among the plurality of roadside captured images, and acquires the image-based vehicle position from the extracted roadside captured image.
10. The coordinate transformation table creating method according to claim 6,
when the image-based information acquisition process acquires the image-based vehicle position, the image-based information acquisition process acquires an acquisition time of the image-based vehicle position as an image-based position acquisition time, and outputs the image-based vehicle position and the image-based position acquisition time as image-based information to the coordinate transformation information creation process;
when the world-based information acquisition process acquires the world-based vehicle position, the world-based information acquisition process acquires an acquisition time of the world-based vehicle position as a world-based position acquisition time, and outputs the world-based vehicle position and the world-based position acquisition time as world-based information to the coordinate transformation information creation process; and is
The coordinate conversion information creating process creates the coordinate conversion table on the basis of the image-based vehicle position and the world-based vehicle position whose respective image-based position acquisition times and world-based position acquisition times coincide.
HK14110849.6A 2011-12-13 2012-10-23 Coordinate conversion table creation system and coordinate conversion table creation method HK1197294B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2011-272449 2011-12-13
JP2011272449 2011-12-13
PCT/JP2012/006774 WO2013088626A1 (en) 2011-12-13 2012-10-23 Coordinate conversion table creation system and coordinate conversion table creation method

Publications (2)

Publication Number Publication Date
HK1197294A1 HK1197294A1 (en) 2015-01-09
HK1197294B true HK1197294B (en) 2017-07-14

Similar Documents

Publication Publication Date Title
CN106663374B (en) Signal detection device and signal detection method
KR101758576B1 (en) Method and apparatus for detecting object with radar and camera
TWI534764B (en) Apparatus and method for vehicle positioning
KR101988811B1 (en) Signaling device and signaling device recognition method
CN105580358B (en) Information presentation system
US11214248B2 (en) In-vehicle monitoring camera device
US9679208B2 (en) Traffic light detecting device and traffic light detecting method
US9077907B2 (en) Image processing apparatus
EP3470780B1 (en) Object distance detection device
CN103975221B (en) Coordinate transform table creates system and coordinate transform table creation method
JP2010276583A (en) Position measuring device for vehicles
KR101972690B1 (en) Signal device detection device and signal device detection method
WO2017059527A1 (en) Camera-based speed estimation and system calibration therefor
US20150138324A1 (en) Apparatus for detecting vehicle light and method thereof
KR102428765B1 (en) Autonomous driving vehicle navigation system using the tunnel lighting
JP2019078700A (en) Information processor and information processing system
JP4609467B2 (en) Peripheral vehicle information generation device, peripheral vehicle information generation system, computer program, and peripheral vehicle information generation method
JP6701153B2 (en) Position measurement system for moving objects
JP6530782B2 (en) Vehicle control device
KR102385907B1 (en) Method And Apparatus for Autonomous Vehicle Navigation System
KR101420242B1 (en) vehicle detector and method using stereo camera
HK1197294B (en) Coordinate conversion table creation system and coordinate conversion table creation method
KR100844640B1 (en) Object recognition and distance measurement method
JP2017049666A (en) Jumping-out object detection device and jumping-out object detection method
US20250278856A1 (en) Data generation device, data generation method, non-transitory storage medium storing data generation program, and traffic service providing system