HK1180818A - Vehicle system - Google Patents
Description
Technical Field
The present invention relates to a vehicle system, and more particularly, to an image processing system for a vehicle.
Background
Conventionally, vehicles have been used to travel around large areas such as historic sites, nature trails, and theme parks. Examples of such vehicles include buses capable of carrying many passengers at a time, passenger cars, and bicycles on which a person can move about independently. While traveling around a historic site, a theme park, or the like, a passenger can take in the surrounding scenery, feel the weather and atmosphere of the place, and enjoy sights different from those of everyday life.
However, even when a vehicle travels around a historic site, the buildings that once stood there are often gone and only the land remains, so the tour can seem uninteresting. On the other hand, if one attempts to restore the site by building replicas, not only may the environment and the remains themselves be damaged, but temporary structures are difficult to modify once built and must be repeatedly repaired and maintained.
In recent years, devices that provide mixed reality (which encompasses augmented reality and augmented virtuality), fusing real space and virtual space in real time, have been actively studied (see, for example, Japanese Patent Laid-Open Nos. 2008-293209 (hereinafter, Patent Document 1) and 2008-275391 (hereinafter, Patent Document 2)). A device providing mixed reality typically captures an image of real space with an imaging device such as a camera, superimposes a virtual image on the captured image (the actual image) to generate a composite image, and outputs the composite image, giving the user who views it the sensation that the real and virtual images are fused. In the devices disclosed in Patent Documents 1 and 2, an imaging device is mounted on a head-mounted display (HMD), a virtual image is superimposed in real time on the actual image captured by the imaging device to generate a composite image, and the composite image is displayed on the head-mounted display. Thus, when the user looks around while wearing the head-mounted display, the virtual image displayed over the actual image of the scenery in front of the user's line of sight makes it feel as if the virtual image existed in real space.
However, even with a device providing mixed reality, the user can experience the mixed reality only within the range of scenery visible from the user's current position.
Disclosure of Invention
The present invention has been made in view of the above circumstances, and an object thereof is to provide a vehicle system that enables a passenger to view scenery in which a building or the like appears to stand around the vehicle even when no such building is actually installed, and that allows the scenery visible to the passenger to be changed easily.
The vehicle system of the present invention is characterized by comprising: a photographing device that photographs scenery from a vehicle to generate an actual image; a display device disposed in the vehicle; and a control device that superimposes a virtual image corresponding to a predetermined position on the actual image to generate a composite image, and causes the display device to display the composite image.
According to this configuration, the passenger views a composite image in which a virtual image is superimposed on the actual image, and can therefore see scenery in which a building or the like stands, as seen from the vehicle, without the building actually being installed in the real scenery. Moreover, the scenery visible to the passenger can be changed simply by changing the contents of the virtual image.
In the vehicle system according to the present invention, it is preferable that the control device is a device mounted on a vehicle.
In the vehicle system according to the present invention, it is preferable that the control device includes: a storage unit that stores the virtual image; an image combining unit that superimposes, at a predetermined position in the actual image, the virtual image corresponding to that position, to generate a composite image; and a display unit that causes the display device to display the composite image.
In addition, the vehicle system according to the present invention preferably further includes a GPS receiver, provided in the vehicle, for receiving GPS signals from GPS satellites, wherein the storage unit stores a plurality of virtual images associated with specific positions in an absolute coordinate system, and the image combining unit includes: a coordinate assigning unit that associates position information of the absolute coordinate system with specific positions in the actual image; and a positioning unit that superimposes the actual image and the virtual image on each other based on the position information of the absolute coordinate system.
In addition, the vehicle system according to the present invention preferably further includes a direction detection unit that detects the direction in which a passenger is facing, wherein the display device comprises a plurality of head-mounted display devices, the imaging device is an omnidirectional camera, and the display unit includes: a display image determining unit that extracts, from the composite image, the region corresponding to the direction in which the passenger faces as detected by the direction detection unit, thereby determining a display image to be displayed on each head-mounted display device; and an image display unit that causes each head-mounted display device to display its display image.
In the vehicle system according to the present invention, it is preferable that the direction detection unit is provided in the head-mounted display device.
In the vehicle system according to the present invention, it is preferable that the virtual image is a computer graphic image simulating a landscape of an era different from the present era.
Drawings
Fig. 1 is a perspective view of a vehicle system according to embodiment 1 of the present invention.
Fig. 2 is a diagram illustrating the field of view of the passenger according to this embodiment: Fig. 2(a) shows the real landscape, and Fig. 2(b) shows a composite image in which a virtual image is superimposed on the actual image.
Fig. 3 is a block diagram of this embodiment.
Fig. 4 is a flowchart showing the operation of this embodiment.
Fig. 5 is a block diagram of embodiment 2.
Fig. 6 is a block diagram of embodiment 3.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
The vehicle system according to embodiment 1 is a vehicle image processing system that carries a passenger 9 around a certain area and gives the passenger 9 a sense of mixed reality (including a sense of augmented reality and a sense of augmented virtuality). In particular, in the vehicle system of the present embodiment, the vehicle 1 travels around an area containing historic sites and ruins, a virtual image 21 consisting of a CG image simulating a historical building or landscape is superimposed on an actual image 20 obtained by capturing the real scenery, and the resulting composite image is shown to the passenger 9 of the vehicle 1. By performing this processing continuously, the vehicle system of the present embodiment can give the passenger 9 the feeling of actually being in that era.
The vehicle 1 of the present embodiment is an automobile (bus) that travels automatically at low speed (about 10 to 20 km/h) on a road, using an engine, a motor, or the like as a drive source. As shown in Fig. 1, a driver's seat is provided at the front of the interior of the vehicle 1, and a plurality of seats 12 on which passengers 9 can ride are provided behind it. The upper half of the sides and rear of the vehicle 1 is open and the upper half of the front is formed of transparent glass 16, so that a passenger 9 seated on a seat 12 can look around substantially the entire circumference in the horizontal direction. The vehicle 1 includes: an imaging device 13 that is provided on the upper surface of the ceiling and images the surrounding scenery; a display device 3 disposed inside the vehicle 1; a control device 5 mounted on the vehicle 1 to control the display of the display device 3; and a GPS receiver 4 that measures the position of the vehicle 1 by receiving GPS signals from GPS (Global Positioning System) satellites.
The display device 3 of the present embodiment is configured by a plurality of head-mounted display devices 31 disposed in the vehicle 1. The head-mounted display device 31 is a so-called head-mounted display (HMD) that covers the field of view of the passenger 9 when worn. The head-mounted display device 31 includes a direction detection unit 32 that detects the direction in which the passenger 9 is facing; a 3-axis geomagnetic sensor, a gyro sensor, or the like is preferably used as the direction detection unit 32. The detection signal from the direction detection unit 32 (a signal carrying information on the direction in which the passenger 9 is facing) is transmitted to the display unit 54 of the control device 5, described later.
The imaging device 13 is constituted by an omnidirectional camera 14 and images the landscape, for example, around the entire circumference in the horizontal direction and over a range of 120° in the vertical direction (−60° to +60° with respect to the horizontal). The imaging device 13 is mounted on and fixed to a base 15, and the base 15 is magnetically attached to the upper surface of the ceiling of the vehicle 1. The imaging device 13 captures the scenery in an area centered on itself, thereby generating an actual image 20. The actual image 20 captured by the imaging device 13 is given position coordinate (absolute coordinate, so-called world coordinate) information within the area based on the positioning information from the GPS receiver 4. The information on the scenery around the vehicle 1 captured by the imaging device 13 is transmitted to the control device 5.
The control device 5 applies a technique for producing mixed reality (including augmented reality and augmented virtuality): it superimposes a specific virtual image 21 at the corresponding predetermined position in the actual image 20 captured by the imaging device 13 to generate a composite image, and displays the generated composite image on the display device 3. As shown in Fig. 3, the control device 5 of the present embodiment includes: a storage unit 51 that stores the virtual image 21; an image warping unit 52 that deforms the virtual image 21 to fit the actual image 20 according to the relative position of the vehicle 1 and the virtual image 21; an image synthesizing unit 53 that superimposes the virtual image 21 deformed by the image warping unit 52 on the actual image 20; and a display unit 54 that causes the display device 3 to display the composite image generated by the image synthesizing unit 53. The control device 5 is a computer having a microprocessor as its main component. The control device 5 of the present embodiment is housed in, that is, mounted on, the vehicle 1, and is therefore able to move together with the vehicle 1.
The storage unit 51 is configured as a virtual-image memory and stores each virtual image 21 in advance in association with the position coordinate (absolute coordinate) information of the position at which it is to be superimposed. The virtual image 21 and the actual image 20 are aligned on the basis of the position coordinate information stored in the virtual-image memory and the position coordinate information given to the actual image 20. As shown in Fig. 2, the virtual image 21 of the present embodiment is a computer graphics image (hereinafter, CG image) simulating a historical landscape (e.g., a town or a castle). The position coordinate information given to a virtual image 21 preferably comprises the coordinates of a plurality of positions.
The image warping unit 52 changes the size and posture of the virtual image 21 stored in the storage unit 51 according to the position of the vehicle 1. The image warping unit 52 includes a virtual image warping unit 55, which calculates the relative distance and relative angle of the virtual image 21 with respect to the vehicle 1 and deforms the virtual image 21 based on the calculated values. The virtual image warping unit 55 calculates the relative distance and relative angle from the position coordinates (absolute coordinates) of the vehicle 1 obtained by the GPS receiver 4 provided on the vehicle 1 and the position coordinates (absolute coordinates) given to the virtual image 21. It then determines the size of the virtual image 21 from the calculated distance and its posture from the calculated relative angle, thereby deforming the virtual image. In other words, the virtual image warping unit 55 of the present embodiment affine-transforms the two-dimensional virtual image 21 to fit the actual image 20, based on the calculated distance and relative angle.
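As an illustrative sketch only (the function name and the reference distance are assumptions for illustration, not part of the disclosure), the scale and relative angle that a unit like the virtual image warping unit 55 derives from absolute coordinates might be computed as follows:

```python
import math

def warp_params(vehicle_xy, image_xy, heading_deg, ref_distance=100.0):
    """Compute a scale factor and relative angle for a 2-D virtual image.

    vehicle_xy, image_xy: world (absolute) coordinates in metres.
    heading_deg: vehicle heading in degrees (0 = north, clockwise).
    ref_distance: distance at which the stored image is shown at scale 1.0
                  (an illustrative assumption).
    """
    dx = image_xy[0] - vehicle_xy[0]
    dy = image_xy[1] - vehicle_xy[1]
    distance = math.hypot(dx, dy)
    # Bearing of the virtual image, then angle relative to the heading,
    # normalised into (-180, 180].
    bearing = math.degrees(math.atan2(dx, dy))
    relative_angle = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    # Apparent size falls off with distance (simple pinhole assumption).
    scale = ref_distance / max(distance, 1e-6)
    return scale, relative_angle
```

For example, a virtual image 200 m ahead of a north-facing vehicle would be drawn at half the reference size with zero relative angle.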
Although the image warping unit 52 of the present embodiment has been described using an image-based method, in which a target image is created by deforming a two-dimensional image stored in the storage unit 51, a model-based method may be used instead, in which the target image is created by rendering a virtual image from an arbitrary viewpoint of a three-dimensional virtual object held in the storage unit 51.
The image warping unit 52 further includes a correction unit 56, which, in addition to the deformation of size and posture, corrects brightness and adds shadows to the virtual image 21 according to the current time and the brightness of the landscape. The correction unit 56 thus allows the virtual image 21 to be superimposed on the actual image 20 more harmoniously. For the shadow and brightness correction, the correction unit 56 uses the high-speed shadow representation technique for mixed reality using shadow planes described in the Journal of the Institute of Image Information and Television Engineers, 62(5), pp. 788-795, May 2008.
In this way, the virtual image 21 deformed by the virtual image warping unit 55 is sent to the image synthesizing unit 53.
The image synthesizing unit 53 superimposes the virtual image 21 deformed by the image warping unit 52 on the actual image 20. The image synthesizing unit 53 acquires the actual image 20 captured by the imaging device 13 and uses it to draw a real-space image in the main memory. Then, after acquiring the information of the virtual image 21 from the image warping unit 52, it generates a new image (the composite image) by drawing the virtual image 21 over the previously drawn real-space image, positioned according to the absolute coordinates of the actual image 20 and the position coordinates given to the virtual image 21. The information of the composite image generated here is sent to the display unit 54.
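The superimposed drawing performed by the image synthesizing unit 53 can be pictured, for example, as a masked overwrite of the real-space image; the interface below (function name, mask, and placement arguments) is a hypothetical sketch, not the disclosed implementation:

```python
import numpy as np

def composite(real_img, virtual_img, mask, top_left):
    """Draw a warped virtual image over the real image at a given position.

    real_img:    HxWx3 array, the camera frame drawn in main memory.
    virtual_img: hxwx3 array, the deformed CG image.
    mask:        hxw boolean array, True where the CG image is opaque.
    top_left:    (row, col) placement derived from the shared absolute
                 coordinates (illustrative; assumes the CG image fits
                 inside the frame).
    """
    out = real_img.copy()           # leave the original frame untouched
    r, c = top_left
    h, w = mask.shape
    region = out[r:r + h, c:c + w]  # view into the output frame
    region[mask] = virtual_img[mask]
    return out
```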
The display unit 54 causes the display device 3 to display the image generated by the image synthesizing unit 53. The display unit 54 includes: a display image determining unit 57 that determines a display area from the composite image drawn in the main memory based on the information from the direction detecting unit 32; and an image display unit 58 that causes each head-mounted display device 31 to display the image of the display area determined by the display image determination unit 57.
The display image determining unit 57 calculates the field-of-view region of the passenger 9 from the direction in which the passenger 9 is facing, as detected by the direction detection unit 32 provided in the head-mounted display device 31, and cuts that region out of the composite image as the display region, thereby determining the image to be output to the head-mounted display device 31. The display image determining unit 57 performs this processing for each head-mounted display device 31, calculating a separate field-of-view region and determining the corresponding image for each. The determined information is sent to the image display unit 58.
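Cutting a per-headset display region out of the omnidirectional composite image might look like the following simplified sketch, which assumes an equirectangular composite and performs a flat crop rather than a true perspective reprojection (both assumptions for illustration):

```python
import numpy as np

def extract_view(panorama, yaw_deg, fov_deg=90.0):
    """Cut the display region for one headset out of a composite image
    covering 360 degrees horizontally.

    panorama: HxW (or HxWx3) array whose columns span 0..360 degrees.
    yaw_deg:  direction the passenger is facing, from the direction sensor.
    fov_deg:  horizontal field of view of the headset (assumed value).
    """
    h, w = panorama.shape[:2]
    view_w = int(w * fov_deg / 360.0)
    # Column of the panorama the passenger is facing.
    centre = int((yaw_deg % 360.0) / 360.0 * w)
    # Wrap around the seam at 0/360 degrees with modular column indices.
    cols = np.arange(centre - view_w // 2, centre + view_w // 2) % w
    return panorama[:, cols]
```

Each headset would call this with its own yaw, yielding a different crop of the same composite.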
Upon receiving the image information from the display image determining unit 57, the image display unit 58 causes each head-mounted display device 31 disposed in the vehicle 1 to display the composite image determined for it.
The operation of the vehicle system thus configured will be described. Fig. 4 is a flowchart showing an example of the operation of the vehicle system.
When the vehicle 1 starts driving and the processing of the control device 5 is started (S1), the control device 5 operates the imaging device 13 to capture the surroundings of the vehicle 1 (S2). At substantially the same time as the imaging device 13 captures the scenery around the vehicle 1, the control device 5 obtains the position information of the vehicle 1 by measuring its position with the GPS receiver 4 (S3). Next, the control device 5 deforms the size and posture of the virtual image 21 with the image warping unit 52 based on the position information of the vehicle 1 (S4). The image synthesizing unit 53 draws the real-space image from the actual image 20 in the main memory (S5), and then draws the virtual image 21 over the real-space image based on the position coordinate information given to the virtual image 21 (S6). Next, the control device 5 acquires the posture information of each display device 3 from the direction detection unit 32 (S7), calculates the field-of-view region of each passenger 9 from the detected direction, and determines the display area corresponding to that region (S8). The control device 5 then causes each head-mounted display device 31 to display the image determined by the display image determining unit 57 (S9). Next, the control device 5 determines whether a signal to end the processing has been received (S10); if not, it returns to step S1 and repeats steps S1 to S10. When the end signal is received, the imaging device 13 stops capturing images and the processing of the control device 5 ends (S11).
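The flow of steps S1 to S11 could be sketched as the loop below; every interface (camera, gps, headsets, and the controller methods) is hypothetical and merely stands in for the units described above:

```python
def run(controller, camera, gps, headsets, stop_requested):
    """One possible shape of the S1-S11 loop (hypothetical interfaces)."""
    while not stop_requested():                         # S10: end signal?
        frame = camera.capture()                        # S2: image surroundings
        position = gps.position()                       # S3: locate the vehicle
        virtual = controller.warp_virtual(position)     # S4: size and posture
        scene = controller.draw_real(frame)             # S5: real-space image
        composite = controller.overlay(scene, virtual)  # S6: superimpose CG
        for hmd in headsets:
            yaw = hmd.direction()                       # S7: headset posture
            view = controller.clip(composite, yaw)      # S8: field-of-view area
            hmd.show(view)                              # S9: display
    camera.stop()                                       # S11: end imaging
```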
In the vehicle system configured as described above, the image displayed on the display device 3 changes continuously with the movement of the traveling vehicle 1 and with the movement of the passenger 9's field of view, so the virtual image 21 appears to exist in the real scene. The passenger 9 therefore experiences a stronger sense of reality than with conventional devices, in which the field of view can only be moved from the current position and no mixed reality beyond that can be obtained. Moreover, because the virtual image 21 responds to viewpoint movement that combines the motion of the vehicle 1 with the passenger 9's free change of gaze, an even greater sense of being personally on the scene can be given.
In the vehicle system according to the present embodiment, the imaging device 13 is configured by the omnidirectional camera 14 and the display area is determined after the composite image of the vehicle's surroundings is generated, so even when a plurality of head-mounted display devices 31 are used, there is no need to provide a CCD camera for each head-mounted display device 31, and a significant cost reduction can be achieved. The traveling speed of the vehicle 1 is slow and nearly constant, while the movement of the passenger 9's field of view is not; nevertheless, the processing load is kept as low as possible because the display area for each head-mounted display device 31 is cut out of the composite image after the processing-heavy superimposition step. By contrast, if a CCD camera were mounted on each head-mounted display device 31, the superimposition processing would have to be performed at high speed for each display device 3, which is a heavy burden.
Since the image displayed on the display device 3 is an image in which the virtual image 21 is superimposed on the actual image 20 of the real landscape, the surrounding environment, such as the weather and the brightness, is reflected as it is. This further enhances the feeling of being personally on the scene compared, for example, to simply playing back a pre-recorded image in accordance with the movement of the vehicle 1.
In the vehicle system according to the present embodiment, the control device 5 that performs the above processing at high speed is fairly large; because it is mounted on the vehicle 1, however, the control device 5 can move together with the vehicle 1 and carry out the high-speed processing while moving.
In the vehicle system according to the present embodiment, changing the content requires only correcting or replacing the virtual image 21 stored in the storage unit 51; by periodically changing the content, repeat visitors can therefore be expected.
The following examples are given as examples of the contents of the virtual image 21.
The images are not limited to historical scenery such as the virtual image 21 of the present embodiment; other examples include CG images simulating scenery expected in the future and CG images simulating battle scenes from the Warring States period. In such a case, the battle scene can be presented as an animation, providing both a sense of being personally on the scene and entertainment. A Stone Age dwelling or the like can also be represented as a CG image.
Further, a virtual image 21 carrying the name, sightseeing information, and explanatory text may be displayed for an ancient tomb, a tower, or the like, so that the display of the display device 3 can serve as a video guide. In addition, so-called digital signage (electronic advertising), such as a business name displayed in front of a building or a specific business, product, or brand name written on an advertising balloon floating in the air, can be used as the virtual image 21.
The virtual image 21 may also be a CG image that shows the underground or the interior of a building in a see-through manner, or a CG image for nighttime use, such as an aurora, lightning, outer space, or constellations.
The control device 5 may also apply a 3D system so that the passenger 9 perceives depth and a stereoscopic effect in the content displayed on the display device 3. That is, the control device 5 may be provided with a 3D image forming unit that generates separate images for the right eye and the left eye so that the display content of the display device 3 appears stereoscopic to the passenger 9. This makes it possible to provide content even richer in the feeling of being personally on the scene.
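A crude sketch of how a 3D image forming unit might derive left- and right-eye images from one display image is given below; the uniform horizontal shift is an assumption made purely for illustration, whereas a real unit would need per-pixel depth:

```python
import numpy as np

def stereo_pair(view, disparity=4):
    """Produce left/right-eye images from one display image by shifting
    the whole image horizontally by half the disparity in each direction
    (a crude parallax sketch; assumes a single constant depth).
    """
    left = np.roll(view, disparity // 2, axis=1)
    right = np.roll(view, -disparity // 2, axis=1)
    return left, right
```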
Next, embodiment 2 will be described with reference to fig. 5. Since most parts of this embodiment are the same as embodiment 1, the same parts are denoted by the same reference numerals and description thereof is omitted, and different parts will be mainly described.
The vehicle system of the present embodiment is a vehicle image processing system mounted on the vehicle 1, as in embodiment 1. The vehicle system of the present embodiment includes an imaging device 13, a display device 3, and a control device 5.
The imaging device 13 captures the scenery outside the vehicle 1 from the vehicle 1, thereby generating an actual image 20. The imaging device 13 is mounted on the vehicle 1; specifically, it is provided on the ceiling of the vehicle 1 and is constituted by an omnidirectional camera 14. A projection plane is set between the imaging device 13 and the scenery outside the vehicle 1, and the scenery outside the vehicle 1 is projected onto this plane. The projection plane is set at a position a predetermined distance away from the imaging device 13.
The imaging device 13 may form a projection surface in all directions, or may form a projection surface only in a predetermined region.
The actual image 20 is an image in which a landscape outside the vehicle 1 is projected on the projection surface. In other words, the actual image 20 is constituted by a two-dimensional plane on which a three-dimensional object constituted by a landscape outside the vehicle 1 is projected.
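The projection of the outside scenery onto a projection plane at a fixed distance from the camera can be illustrated as follows; the camera is placed at the origin, and the function and its angle-plus-height output are assumptions made for illustration only:

```python
import math

def project_to_plane(point_xyz, plane_distance=10.0):
    """Project a 3-D point around the vehicle onto a vertical projection
    surface at a fixed distance from the omnidirectional camera.

    point_xyz:      (x, y, z) of a scenery point, camera at the origin.
    plane_distance: distance from the camera to the projection surface
                    (assumed value).
    Returns the horizontal angle at which the point appears and its
    projected height on the surface.
    """
    x, y, z = point_xyz
    horiz = math.hypot(x, y)
    s = plane_distance / horiz              # shrink onto the surface
    azimuth = math.degrees(math.atan2(y, x))
    return azimuth, z * s
```

A point twice as far away as the projection surface thus appears at half its real height, which is how the three-dimensional scenery collapses onto the two-dimensional actual image 20.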
The GPS receiver 4 supplies a GPS signal to the imaging device 13. The imaging device 13 outputs the image data of the actual image 20 and the GPS signal to the image synthesizing unit 53 of the control device 5.
The control device 5 includes a vehicle position recognition unit 61, an image warping unit 52, a storage unit 51, an image synthesis unit 53, and a display unit 54. The control device 5 is mounted on the vehicle 1.
The vehicle position recognition unit 61 receives the GPS signal output from the GPS receiver 4. The vehicle position recognition unit 61 recognizes the current position of the vehicle 1 in an absolute coordinate system (so-called world coordinate system) from the GPS signal. The vehicle position recognition unit 61 outputs recognition information of the position of the vehicle 1 to the image warping unit 52.
The storage unit 51 is constituted by a virtual image memory 510 and stores the data of a plurality of virtual images 21 associated with predetermined positions in the world coordinate system; in other words, it stores a plurality of virtual images 21 associated with specific positions in the absolute coordinate system. Upon receiving a request signal from the virtual image acquisition unit 63, the storage unit 51 outputs the data of the corresponding group of virtual images 21 to the virtual image acquisition unit 63.
Here, a group of virtual images 21 is composed of a plurality of virtual images 21. The group of virtual images 21 is a plurality of virtual images 21 determined by the position of the vehicle 1.
The image warping unit 52 includes a virtual image acquisition unit 63, a correction unit 56, and a virtual image warping unit 55. The correcting means 56 is the same as the correcting means 56 in embodiment 1, and therefore, description thereof is omitted.
Upon receiving the signal from the vehicle position recognition unit 61, the virtual image acquisition unit 63 requests from the storage unit 51 the group of virtual images 21 corresponding to the position of the vehicle 1 in the world coordinate system. The storage unit 51 receives this request and outputs the group of virtual images 21 to the virtual image acquisition unit 63, which then passes the image data of the virtual images 21 to the correction unit 56.
Upon receiving the data from the virtual image acquisition unit 63, the correction unit 56 performs correction in the same manner as in embodiment 1. The correcting unit 56 outputs the corrected data to the virtual image warping unit 55.
The virtual image warping unit 55 calculates the distance and relative angle of the virtual image 21 with respect to the vehicle 1 from the specific position and the position of the vehicle 1 associated with each virtual image 21. The virtual image warping unit 55 transforms the virtual image 21 based on the calculated information.
In other words, the virtual image warping unit 55 calculates the distance and relative angle between the vehicle 1 and the virtual image 21 in the world coordinate system from the position information of each, and then deforms the virtual image 21 based on the calculated values by applying an affine transformation, a projective transformation, or the like to the virtual image 21 in its local coordinate system.
Here, the relative angle is an angle measured with respect to a reference, namely an axis of the world coordinate system taken with the vehicle 1 as the origin.
The storage unit 51 and the virtual image warping unit 55 may be configured as follows. The storage unit 51 stores a plurality of three-dimensional virtual objects. The virtual image warping unit 55 rotates the three-dimensional virtual object within the local coordinate system according to the distance and relative angle between the vehicle 1 and the virtual image 21, thereby generating the virtual image 21. In other words, the virtual image warping unit 55 transforms the virtual image 21 stored in the storage unit 51 into an image to be superimposed on the real image 20, according to the distance and relative angle between the vehicle 1 and the virtual image 21.
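The model-based variant, in which a three-dimensional virtual object is rotated in its local coordinate system and then projected for superimposition, might be sketched as below; the orthographic projection and the reference distance are illustrative assumptions, not the disclosed method:

```python
import math

def render_model(vertices, relative_angle_deg, distance, ref_distance=100.0):
    """Model-based alternative: rotate a 3-D virtual object about the
    vertical axis by the relative angle, then project it to 2-D and
    scale it by distance (orthographic projection for simplicity).

    vertices: list of (x, y, z) in the object's local coordinate system.
    """
    a = math.radians(relative_angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    scale = ref_distance / max(distance, 1e-6)
    projected = []
    for x, y, z in vertices:
        xr = x * cos_a + y * sin_a                 # yaw rotation
        projected.append((xr * scale, z * scale))  # drop the depth axis
    return projected
```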
The virtual image warping unit 55 warps the virtual image 21 and then outputs the image data to the image synthesizing unit 53.
The image combining means 53 includes coordinate assigning means 59 and aligning means 60.
The coordinate assigning unit 59 associates a coordinate system (so-called screen coordinate system) within the real image 20 with a world coordinate system based on the image data of the real image 20 input from the photographing device 13 and the GPS signal. The coordinate assigning unit 59 transforms the screen coordinate system into the world coordinate system. In other words, the coordinate assigning unit 59 associates the position information of the world coordinate system with a specific position within the actual image 20. The coordinate assigning unit 59 outputs a signal for associating the position information of the world coordinate system with a specific position in the real image 20 to the aligning unit 60.
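The association the coordinate assigning unit 59 makes between world coordinates and positions in the 360-degree actual image could, in a much simplified horizontal-only form, look like this (the function name and the north-referenced bearing convention are assumptions for illustration):

```python
import math

def world_to_screen(world_xy, vehicle_xy, heading_deg, image_width):
    """Map a world (absolute) coordinate to a column of the 360-degree
    actual image (horizontal axis only; a real unit would also handle
    the vertical axis).

    world_xy:    absolute coordinates of the point of interest.
    vehicle_xy:  absolute coordinates of the vehicle.
    heading_deg: vehicle heading (0 = north, clockwise).
    image_width: width of the actual image in pixels.
    """
    dx = world_xy[0] - vehicle_xy[0]
    dy = world_xy[1] - vehicle_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))  # bearing from north
    angle = (bearing - heading_deg) % 360.0     # relative to heading
    return int(angle / 360.0 * image_width)
```

With such a mapping, the alignment step reduces to drawing each virtual image at the column returned for its stored absolute coordinates.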
The alignment unit 60 superimposes the virtual image 21 (including images warped by the virtual image warping unit 55) on the real image 20 with reference to the position information of the world coordinate system, that is, the absolute coordinate system. The alignment unit 60 thereby generates a composite image and outputs its image data to the display unit 54.
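The superimposition step itself can be sketched as follows, assuming the alignment has already produced a top-left screen position for the warped virtual image (the per-pixel `alpha` mask is an assumption of this example, not part of the specification):

```python
import numpy as np


def superimpose(real_image, virtual_image, top_left, alpha=None):
    """Overlay the (warped) virtual image onto the real image at the screen
    position obtained from the world-coordinate alignment. `alpha` is an
    optional per-pixel mask in [0, 1] (1 = fully virtual)."""
    out = real_image.copy()
    y, x = top_left
    h, w = virtual_image.shape[:2]
    region = out[y:y + h, x:x + w]
    if alpha is None:
        alpha = np.ones((h, w))
    out[y:y + h, x:x + w] = (alpha[..., None] * virtual_image
                             + (1.0 - alpha[..., None]) * region)
    return out
```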
The display unit 54 includes a display image determining unit 57 and an image display unit 58.
The display image determining unit 57 receives the signal output from the direction detection unit 32 of the head-mounted display device 31 and the signal output from the alignment unit 60. Based on the signal from the direction detection unit 32, the display image determining unit 57 calculates the passenger's field of view, and determines the image to be output to the head-mounted display device 31 (the display image) by extracting the region of the composite image corresponding to that field of view.
The display image determining unit 57 performs this processing for each head-mounted display device 31, calculating a separate visual field region for each and extracting the corresponding image to determine its display image. The display image determining unit 57 outputs the display image data to the image display unit 58.
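The per-passenger extraction could, for instance, cut a yaw-centred window out of a 360-degree panoramic composite. This sketch assumes one image column per degree of azimuth, which is not specified in the patent:

```python
import numpy as np


def extract_display_image(composite, head_yaw_deg, fov_deg=90):
    """Cut the display image for one head-mounted display out of a
    360-degree panoramic composite, centred on the direction the passenger
    is facing (as reported by the direction detection unit)."""
    h, w = composite.shape[:2]
    center = int((head_yaw_deg % 360) / 360 * w)
    half = int(fov_deg / 360 * w) // 2
    cols = np.arange(center - half, center + half) % w  # wrap around 360 deg
    return composite[:, cols]
```

Calling this once per head-mounted display with each passenger's detected yaw yields the independent display images described above.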
The image display unit 58 causes each head-mounted display device 31 to display its image based on the display image data output from the display image determining unit 57. In other words, whenever data is input from the display image determining unit 57, the image display unit 58 outputs data for displaying that image on the display device 3.
Next, embodiment 3 will be described. Since most of this embodiment is the same as embodiment 1, description of the common parts is omitted and mainly the differences are described.
The vehicle system of the present embodiment is a vehicle image processing system mounted on the vehicle 1 for use, as in embodiment 1. The vehicle system of the present embodiment includes an imaging device 13, a display device 3, and a control device 5. The configurations of the imaging device 13 and the display device 3 are the same as in embodiment 1.
The control device 5 is a device to which a marker recognition type mixed reality (including augmented reality and augmented virtuality) technique is applied. The control device 5 includes a marker recognition unit 62, an image warping unit 52, a storage unit 51, an image synthesizing unit 53, and a display unit 54. The control device 5 superimposes the virtual image 21 corresponding to a predetermined position on the real image 20 to generate a composite image, and causes the display device 3 to display the composite image. The control device 5 is mounted on the vehicle 1.
The marker includes a first recognition portion formed in a square frame shape in plan view, and a second recognition portion formed inside the first recognition portion. The first recognition portion is a black frame of constant width over its entire circumference. The second recognition portion carries a different pattern for each virtual image 21 and, like the first recognition portion, is formed in black.
The marker recognition unit 62 recognizes the presence of a marker within the real image 20 generated by the photographing device 13. The marker recognition unit 62 recognizes the presence of a marker by detecting its first recognition portion. The marker recognition unit 62 then detects the second recognition portion and identifies it by comparison with the markers stored in the marker memory 511 of the storage unit 51. The marker recognition unit 62 also recognizes the size and angle of the marker from the projected shapes of the first and second recognition portions.
The marker recognition unit 62 outputs the position information of the marker within the screen coordinate system of the real image 20 to the image synthesizing unit 53. The marker recognition unit 62 also outputs the identification result of the second recognition portion, together with the size and angle of the marker, to the image warping unit 52.
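The two-stage recognition (black frame, then interior pattern matched against the marker memory 511) can be sketched on a binarized image region as follows; the frame width, the dictionary interface, and all names are assumptions of this illustration, not from the specification:

```python
import numpy as np


def has_black_frame(region, width=2):
    """First recognition portion: a black (0) frame of constant width."""
    return not (region[:width, :].any() or region[-width:, :].any()
                or region[:, :width].any() or region[:, -width:].any())


def identify_marker(region, marker_memory, width=2):
    """Recognize a marker in a binarized image region (0 = black, 1 = white):
    detect the black frame, then match the interior pattern (the second
    recognition portion) against the patterns held in the marker memory."""
    if not has_black_frame(region, width):
        return None
    inner = region[width:-width, width:-width]
    for name, pattern in marker_memory.items():
        if inner.shape == pattern.shape and np.array_equal(inner, pattern):
            return name
    return None
```

A practical detector would first locate candidate quadrilaterals in the full image and rectify them before this comparison; only the matching stage is sketched here.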
The image warping unit 52 includes a virtual image acquisition unit 63, a correction unit 56, and a virtual image warping unit 55.
The virtual image acquisition unit 63 acquires the virtual image 21 from the virtual image memory 510 based on the information input from the marker recognition unit 62.
The storage unit 51 includes a virtual image memory 510 and a marker memory 511. The virtual image memory 510 stores data of a plurality of virtual images 21, each associated with a second-recognition-portion pattern. Upon receiving a signal from the virtual image acquisition unit 63, the virtual image memory 510 outputs the data of the corresponding virtual image 21 to the virtual image acquisition unit 63. The marker memory 511 stores a plurality of patterns corresponding to the second recognition portions of the markers arranged in the world coordinate system.
The virtual image warping unit 55 receives the size and angle of the marker input from the marker recognition unit 62, and transforms the virtual image 21 into a shape to be superimposed on the real image 20 according to the size and angle of the marker. The virtual image warping unit 55 outputs the transformed image data to the image synthesizing unit 53.
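The size-and-angle warp can be sketched as an inverse-mapped rotation-and-scale transform; the base marker size and nearest-neighbour resampling are assumptions of this example, and the patent does not prescribe a particular transform:

```python
import numpy as np


def warp_to_marker(virtual_image, marker_size_px, marker_angle_rad,
                   base_size_px=64):
    """Warp the virtual image to the apparent size and in-plane angle of the
    recognized marker: rotation and scaling about the image centre."""
    scale = marker_size_px / base_size_px
    h, w = virtual_image.shape[:2]
    out_h, out_w = max(int(h * scale), 1), max(int(w * scale), 1)
    c, s = np.cos(marker_angle_rad), np.sin(marker_angle_rad)
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    # Inverse map: undo the rotation and the scaling about the centre.
    yr = (ys - (out_h - 1) / 2.0) / scale
    xr = (xs - (out_w - 1) / 2.0) / scale
    src_y = (c * yr + s * xr + (h - 1) / 2.0).round().astype(int).clip(0, h - 1)
    src_x = (-s * yr + c * xr + (w - 1) / 2.0).round().astype(int).clip(0, w - 1)
    return virtual_image[src_y, src_x]
```

A full implementation would use the marker's projected shape to recover a homography rather than an in-plane rotation and scale alone.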
The image synthesizing unit 53 generates a composite image by superimposing the warped virtual image 21 at the position of the marker recognized by the marker recognition unit 62. In other words, the virtual image 21 corresponding to a predetermined position of the real image 20 is superimposed at that position to generate the composite image. The image synthesizing unit 53 outputs the composite image data to the display unit 54.
The display unit 54 causes the display device 3 to display the composite image. The display unit 54 includes a display image determining unit 57 and an image display unit 58, as in embodiment 2. The display unit 54 has the same structure as that of embodiment 2, and therefore, description thereof is omitted.
Further, the marker may be a specific three-dimensional object existing outside the vehicle 1, for example a stone, a stone monument, a plant, or a building having a specific shape. In this case, the marker recognition unit 62 identifies the specific marker by recognizing a plurality of feature points on it, such as its corners and straight edges.
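Feature points such as corners could be found with a Harris-style detector. The following minimal sketch (per-pixel gradient products summed over a 3x3 window; the threshold and the constant `k` are illustrative choices, not from the specification) is one way to obtain such points:

```python
import numpy as np


def _box3(a):
    """3x3 box sum: the windowing step of the Harris detector."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))


def corner_points(gray, threshold=1000.0, k=0.04):
    """Find candidate feature points (corners) in a grayscale image with a
    minimal Harris-style detector: strong gradients in both directions."""
    gy, gx = np.gradient(gray.astype(float))
    sxx = _box3(gx * gx)
    syy = _box3(gy * gy)
    sxy = _box3(gx * gy)
    r = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    ys, xs = np.where(r > threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```

The response is large only where gradients are strong in two directions (a corner) and negative along plain edges, which is why an edge pixel alone does not qualify as a feature point.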
The vehicle 1 according to embodiments 1 to 3 is an automobile capable of carrying a plurality of passengers, but the vehicle according to the present invention is not limited to an automobile; it may be, for example, a train in which a plurality of cars are coupled, or a light vehicle such as a bicycle ridden by a single passenger.
In the vehicle system according to the present embodiment, information is exchanged among the control device 5, the display device 3, and the imaging device 13 by wire, but the vehicle system according to the present invention may also use wireless communication. In other words, the control device 5 may be provided outside the vehicle 1. In this case, the control device 5, the imaging device 13, and the display device 3 are each provided with a signal transmission/reception unit for wireless communication.
Description of the reference numerals
1 … vehicle; 11 … driver's seat; 12 … seats; 13 … camera; 14 … omni-directional camera; 15 … base station; 16 … glass; 20 … actual image; 21 … virtual image; 3 … display device; 31 … head-mounted display device; 32 … direction detection unit; 4 … GPS device; 5 … control device; 51 … storage unit; 52 … image warping unit; 53 … image synthesizing unit; 54 … display unit; 55 … position and posture calculation unit; 56 … correction unit; 57 … display image determining unit; 58 … image display unit; 9 … passenger.
Claims (8)
1. A vehicle system used in a vehicle, the vehicle system comprising:
a photographing device that photographs a scene from the vehicle to thereby generate an actual image;
a display device disposed in the vehicle; and
a control device that superimposes a virtual image corresponding to a predetermined position on the actual image to generate a composite image, and causes the display device to display the composite image.
2. The vehicle system of claim 1,
the control device is mounted on the vehicle.
3. The vehicle system according to claim 1 or 2,
the control device is provided with:
a storage unit that stores the virtual image;
an image combining unit that generates a combined image by superimposing the virtual image corresponding to a predetermined position of the actual image on the position; and
a display unit that causes the display device to display the composite image.
4. The vehicle system of claim 3,
further comprising a GPS receiver that is provided on the vehicle and receives GPS signals from GPS satellites, wherein
the storage unit stores a plurality of the virtual images, each associated with a specific position in an absolute coordinate system,
the image synthesizing unit has:
a coordinate assigning unit that associates position information of an absolute coordinate system with a specific position within the actual image; and
an alignment unit that superimposes the actual image and the virtual image on each other with reference to position information of the absolute coordinate system.
5. The vehicle system of claim 4,
the control device is provided with:
a vehicle position identification unit that identifies the current position of the vehicle in the absolute coordinate system based on the GPS signals received by the GPS receiver; and
a virtual image warping unit that calculates a distance and a relative angle of the virtual image with respect to the vehicle based on the position of the vehicle and the specific position associated with each virtual image, and warps the virtual image based on the distance and the relative angle,
wherein the virtual image superimposed on the actual image by the alignment unit is an image generated by the virtual image warping unit.
6. The vehicle system according to claim 5,
further comprising a direction detection unit that detects the direction in which a passenger is facing, wherein
the display device is a plurality of head-mounted display devices,
the photographing device is an omni-directional camera,
the display unit includes: a display image determining unit that determines a display image to be displayed on each head-mounted display device by extracting, from the composite image, a region corresponding to the direction in which the passenger is facing as detected by the direction detection unit; and an image display unit that causes the head-mounted display device to display the display image.
7. The vehicle system of claim 6,
the direction detection unit is provided to the head-mounted display device.
8. The vehicle system according to any one of claims 1 to 7,
the virtual image is a computer graphics image simulating a landscape of an age different from the present.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2010-199146 | 2010-09-06 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1180818A true HK1180818A (en) | 2013-10-25 |