
HK1181690B - Tourist car - Google Patents


Info

Publication number
HK1181690B
HK1181690B (application HK13109061.0A)
Authority
HK
Hong Kong
Prior art keywords
unit
image
imaging
virtual image
sight
Prior art date
Application number
HK13109061.0A
Other languages
Chinese (zh)
Other versions
HK1181690A1 (en)
Inventor
山田三郎 (Saburo Yamada)
Original Assignee
泉阳兴业株式会社 (Senyo Kogyo Co., Ltd.)
Filing date
Publication date
Priority claimed from CN201110303517.4A external-priority patent/CN103028252B/en
Application filed by 泉阳兴业株式会社 filed Critical 泉阳兴业株式会社
Publication of HK1181690A1 publication Critical patent/HK1181690A1/en
Publication of HK1181690B publication Critical patent/HK1181690B/en

Abstract

The invention provides a sightseeing vehicle having a plurality of cabins. The sightseeing vehicle comprises imaging units, display units, and a control unit. An imaging unit is provided on each cabin and captures the actual scenery outside the cabin. A display unit is provided in each cabin. The control unit superimposes a virtual image at a predetermined position on the actual image captured by the imaging unit to generate a composite image, and causes the display unit to display the composite image.

Description

Sightseeing vehicle
Technical Field
The present invention relates to a sightseeing vehicle.
Background
Conventionally, a sightseeing vehicle installed in an amusement park or the like is known (see, for example, Patent Document 1). The sightseeing vehicle has a rotating wheel and a plurality of gondolas provided at predetermined intervals in the circumferential direction of the wheel. The wheel rotates slowly while passengers are seated in the cabins. The passengers can enjoy scenery whose viewing angle gradually changes as the cabin slowly moves, including the view from around the apex of the wheel.
However, this existing sightseeing vehicle offers only the view that is actually visible outside the cabin, so passengers may become bored. In particular, for people who are familiar with the place or who have already ridden it, the existing sightseeing vehicle lacks fun.
In recent years, meanwhile, there has been growing research on devices that present mixed reality, in which real space and virtual space are fused in real time; this includes augmented reality and augmented virtuality (see, for example, Patent Documents 2 and 3). In such a device, an imaging unit such as a video camera (including a still camera) first captures an image of real space. The device then superimposes a virtual image on the captured image (the actual image) to generate a composite image, and outputs the composite image. In this way, the device can give a user who sees the composite image a sense of mixed reality in which the real image and the virtual image are merged. In the apparatuses of Patent Documents 2 and 3, the imaging unit is mounted on a head-mounted display (HMD). A virtual image is superimposed in real time on the actual image captured by the imaging unit to generate a composite image, which is then shown on the head-mounted display. Therefore, when the user walks around wearing the head-mounted display, the virtual image is superimposed on video corresponding to the user's viewpoint, giving the user the feeling that the virtual image exists in real space.
However, a user who merely walks or travels on the ground with such a device can experience mixed reality only within the range of surrounding scenery visible from the ground.
[Patent Document 1] JP 2007-167215 A
[Patent Document 2] JP 2008-293209 A
[Patent Document 3] JP 2008-275391 A
Disclosure of Invention
The present invention has been made in view of the above circumstances, and an object thereof is to provide a sightseeing vehicle which can widen the presentation range of mixed reality beyond a viewpoint from the ground and give passengers in the cabins a fun that cannot be obtained merely by looking out at the landscape.
In order to solve the above problem, the sightseeing vehicle of the present invention has the following configuration.
The sightseeing vehicle of the invention comprises a plurality of cabins 63, a camera part 1, a display part 2 and a control part 3. The imaging unit 1 is provided in each cabin 63. The imaging unit 1 is configured to image an actual scene outside the cabin 63. The display unit 2 is provided in each cabin 63. The control unit 3 is configured to generate a composite image by superimposing a virtual image at a predetermined position on the actual image captured by the imaging unit 1. The control unit causes the display unit 2 to display the composite image.
With this configuration, the passengers in the cabins 63 can be given a mixed reality (including augmented reality and augmented virtuality) that makes use of the wide field of view from the cabin 63 of the sightseeing vehicle. Moreover, the cabin 63 constantly moves not only in the horizontal direction but also in the height direction, so a mixed reality that cannot be obtained from a viewpoint on the ground alone can be given.
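The compositing step performed by the control unit 3 can be pictured with a minimal alpha-blending sketch. This is not the patent's actual implementation; the NumPy image representation, the function name, and the fixed insertion position are illustrative assumptions only.

```python
import numpy as np

def make_composite(actual: np.ndarray, virtual: np.ndarray,
                   position: tuple[int, int], alpha: np.ndarray) -> np.ndarray:
    """Superimpose `virtual` onto `actual` at `position` (top-left corner).

    `alpha` is a per-pixel opacity mask in [0, 1] with the same height and
    width as `virtual`; 0 keeps the actual scenery, 1 shows the virtual image.
    """
    y, x = position
    h, w = virtual.shape[:2]
    composite = actual.astype(np.float32).copy()
    region = composite[y:y + h, x:x + w]
    # Blend the virtual image into the selected region of the actual image.
    composite[y:y + h, x:x + w] = (alpha[..., None] * virtual
                                   + (1 - alpha[..., None]) * region)
    return composite.astype(actual.dtype)
```

In a real system the display unit 2 would be fed the returned composite every frame, with `position` and `alpha` supplied by the marker-based registration described later in the document.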
In the sightseeing vehicle of the present invention, the control unit 3 preferably includes a storage unit 34, a virtual image warping unit 36, an image synthesizing unit 33, and an image display unit 35. The storage unit 34 stores the virtual image. The virtual image warping unit 36 changes the size and angle of the virtual image stored in the storage unit 34 according to the position of the cabin 63. The image synthesizing unit 33 generates a composite image by superimposing the image changed by the virtual image warping unit 36 at a predetermined position on the actual image captured by the imaging unit 1. The image display unit 35 causes the display unit to display the image generated by the image synthesizing unit 33.
With this configuration, the virtual image can be deformed in accordance with the movement of the cabin 63, making the composite image closer to a real scene.
The sightseeing vehicle of the present invention preferably includes a position detection unit and an imaging direction detection unit. The position detection unit detects the position of the imaging unit 1 and generates detection information indicating that position. The imaging direction detection unit detects the imaging direction of the imaging unit 1 and generates detection information indicating that direction. The virtual image warping unit 36 changes the virtual image based on the detection information generated by the position detection unit and the detection information generated by the imaging direction detection unit.
With this configuration, the position matching between the actual image and the virtual image can be performed with high accuracy.
In the sightseeing vehicle according to the present invention, it is preferable that the control unit 3 further includes a marker recognition unit 30 and a position/orientation deriving unit 32. The marker recognition unit 30 recognizes a specific object contained in the actual image as a marker. The position/orientation deriving unit 32 is configured to derive the position/orientation of the marker recognized by the marker recognition unit 30. The storage unit 34 stores in advance the virtual image associated with each marker. The virtual image warping unit 36 changes the virtual image based on the position/orientation information of the marker derived by the position/orientation deriving unit 32. When a marker is recognized by the marker recognition unit 30, the control unit 3 acquires the virtual image corresponding to the marker from the storage unit 34. The acquired virtual image is changed by the virtual image warping unit 36, and the changed image is superimposed by the image synthesizing unit 33 on the portion of the actual image corresponding to the marker to generate a composite image. Then, the image display unit 35 causes the display unit 2 to display the composite image.
With this configuration, the accuracy of position matching between the actual image and the virtual image can be improved.
It is also preferable that the virtual image is a computer graphic image simulating a landscape of an era different from the present.
Accordingly, for example, a building that once stood on the land, or a future development plan, can be superimposed on the actual image to generate a composite image, giving the passengers of the cabin 63 fun.
In the sightseeing vehicle of the present invention, it is preferable that the storage unit 34 stores virtual images of a plurality of different eras. Each cabin 63 further includes a setting unit 52 with which a landscape of any era can be selected. The control unit 3 acquires the virtual image corresponding to the era set by the setting unit 52 from the storage unit 34, superimposes it on the actual image, and generates a composite image. The image display unit 35 causes the display unit 2 to display the composite image.
Accordingly, the scenery of whatever era a passenger in the cabin 63 desires can be displayed on the display unit 2, giving the passenger further enjoyment.
Further, it is preferable that the virtual image is a computer graphic image simulating a landscape different from the current landscape.
In this case, for example, a landscape different from the current landscape can be superimposed on the actual image to generate a composite image, and the passenger of the cabin 63 can be given fun.
In the sightseeing vehicle according to the present invention, the storage unit preferably stores a plurality of virtual images of scenery. In this case, the cabin further includes a setting unit that can be set to an arbitrary landscape. The control unit acquires a virtual image corresponding to the scene set by the setting unit from the storage unit, and generates a composite image by superimposing the virtual image on the actual image. The control unit causes the display unit to display the composite image via the image display unit.
In this case, the scenery desired by the passengers of the cabin can be displayed on the display unit. Therefore, the passenger can be given further pleasure.
In the sightseeing vehicle according to the present invention, the imaging unit 1 is preferably a video camera (including a still camera). The camera is arranged outside the cabin 63 and is configured to be capable of changing its imaging direction. Preferably, the cabin 63 further includes an operation unit 5 for changing the imaging direction of the camera.
Accordingly, the actual image and the virtual image at the viewpoint desired by a passenger of the cabin 63 can be displayed in an overlapping manner, giving the passenger a sense of mixed reality over a wider field of view.
Further, in the sightseeing vehicle of the present invention, the display unit 2 is preferably included in a head-mounted display device.
Preferably, the mark identifying means includes a position detecting unit. The position detection unit is configured to detect a position of the imaging unit and output imaging unit position information. The marker recognition unit derives position/orientation information of a marker indicating at least one of a position and an orientation of the marker from the imaging unit position information.
In addition, the sightseeing vehicle is provided with a driving device configured to move each cabin. Preferably, the position detection unit has a timer. The timer measures the elapsed time from the time point when the imaging unit passes a predetermined position, and outputs elapsed time information indicating the elapsed time. The position detection unit outputs imaging unit position information indicating the position of the imaging unit based on the elapsed time information and the moving speed of the cabin.
Preferably, the position detection unit further includes a transmitter and a receiver. The transmitter is configured to emit a signal. The receiver is configured to receive the signal. The timer is configured to measure an elapsed time from a time when the imaging portion is located at a predetermined position and the signal is received.
Preferably, the driving device is configured to move each cabin at a constant moving speed.
The driving device is configured to move each cabin along a predetermined track. The timer is configured to measure an elapsed time from a time when the imaging unit moves to a predetermined position on the predetermined track, and to output elapsed time information indicating the elapsed time.
Preferably, the mark recognition means includes an imaging direction detection unit. The imaging direction detection unit is configured to detect an imaging direction of the imaging unit and output imaging direction information. The marker recognition unit derives position/orientation information of a marker indicating at least one of a position and an orientation of the marker from the imaging direction information.
Preferably, the control unit further includes a luminance detection unit and a correction unit. The brightness detection unit is configured to detect the brightness of the outside-cabin scenery captured by the imaging unit. The luminance detected by the luminance detecting unit is defined as a first luminance. The correction unit adjusts the brightness of the image stored in the storage unit so that the brightness of the image stored in the storage unit approaches a first brightness. The image synthesizing unit synthesizes the virtual image with the adjusted brightness and the actual image captured by the imaging unit to generate a synthesized image.
(effect of the invention)
According to the sightseeing vehicle of the present invention, mixed reality can be presented over a range wider than that available from a viewpoint on the ground, and passengers in the cabins can be given a fun that cannot be obtained merely by looking out at the landscape.
Drawings
Fig. 1 is a block diagram showing control according to an embodiment of the present invention.
Fig. 2 is an overall front view of a sight-seeing bus showing an embodiment of the present invention.
Fig. 3 is a perspective view showing the interior of the cabin of the sightseeing vehicle according to one embodiment of the present invention.
Fig. 4 is a perspective view showing the inside of the cabin of other embodiments.
Description of the symbols
1 image pickup part
2 display part
3 control part
30 mark recognition unit
31 mark storage part
32 position/orientation deriving unit
33 image synthesizing unit
34 storage part
35 image display unit
36 virtual image warping unit
5 operating part
51 operating rod
52 setting unit
60 leg unit
61 rotating shaft
62 rotating wheel
63 cabin
64 platform (boarding platform)
65 entrance and exit
66 door
67 seat
68 see-through window
G setting surface
Detailed Description
Hereinafter, embodiments of the present invention will be described based on the drawings.
As shown in fig. 2, the sightseeing vehicle according to embodiment 1 includes a pair of leg units 60, erected at a distance from each other on the front and rear of the installation surface G of the sightseeing vehicle. A rotating shaft 61 is horizontally provided between the upper ends of the pair of leg units 60. The rotating wheel 62 is pivotally supported on the rotating shaft 61 and is thereby freely rotatable; it is driven to rotate by a driving device (not shown) provided on the leg units 60. The rotating wheel 62 is a skeleton body formed in a substantially circular shape. A plurality of gondolas 63 are hung at the outer peripheral end portion of the rotating wheel 62 at predetermined intervals in the circumferential direction. Each gondola 63 suspended at the outer peripheral end is rotatably mounted to the rotating wheel 62, so that the vertical posture of each gondola 63 is always maintained regardless of its position around the wheel. The rotation locus of each gondola 63 is substantially circumferential. A platform 64 for passengers to get on and off the cabins 63 is provided at the lower end portion of this rotation locus.
Further, since the rotating wheel 62 is driven to rotate by the driving device, each cabin moves on a circular track. That is, this circumferential trajectory is defined as the predetermined track along which the gondolas move.
Each cabin 63 has a substantially circular outer shape in front view and has a space inside for passengers to board. Each cabin 63 is provided with an entrance 65 through which passengers can enter and exit, and a door 66 configured to open and close the doorway 65. The cabin 63 is provided, at its outer lower portion, with an imaging unit 1 for imaging the landscape (actual landscape) outside the cabin 63. The imaging unit 1 of the present embodiment is constituted by, for example, a CCD (Charge Coupled Device) camera. The imaging unit 1 is mounted on an imaging direction driving device (not shown), and its imaging direction, which points slightly downward from the horizontal, can be changed in all directions (360° in the horizontal plane) by that driving device. In other words, the imaging unit 1 is rotatable through 360 degrees in the horizontal direction by the imaging direction driving device. The imaging unit 1 is connected, through a control unit 3 described later, to an operation unit 5 provided in the cabin 63; by operating the operation unit 5, the imaging direction of the imaging unit 1 can be changed.
As shown in fig. 3, the operation unit 5 is provided on the inner side surface of the cabin 63 opposite to the doorway 65, adjacent to the display unit 2. The operation unit 5 has an operation lever 51 that can be tilted freely; when the operation lever 51 is tilted, the imaging direction of the imaging unit 1 changes. The display unit 2 is constituted by a monitor and is disposed inclined slightly upward, so that the display surface faces the position of the passenger's face. The display unit 2 is connected to and controlled by the control unit 3.
The display unit 2 is provided at a substantially middle portion of the pair of seats 67 disposed to face each other. A see-through window 68 is installed above the seat 67 in the cabin 63, and passengers can look out outside scenery (actual scenery) through the see-through window 68.
The operation unit 5 is not limited to the one fixedly attached to the inner surface of the cabin 63 as described above. As another example, the operation unit may be connected by electric wiring and placed beside the display unit 2; in this case, the operation unit can be moved freely. As yet another example, a receiving unit (not shown) may be provided on the inner surface of the cabin 63, and an operation unit having a transmitting unit may be used, so that the operation unit is operated wirelessly.
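The link between the operation lever 51 and the imaging direction can be pictured as a simple incremental pan command, wrapping around because the imaging unit 1 can rotate through a full 360°. The function, its parameters, and the control law are hypothetical; the patent does not specify how lever tilt maps to camera motion.

```python
def update_pan_angle(current_deg: float, lever_tilt: float,
                     rate_deg_per_s: float = 30.0, dt: float = 0.1) -> float:
    """Advance the camera pan angle from a lever tilt in [-1, 1].

    The imaging unit can rotate through the full 360 degrees, so the
    angle wraps around rather than clamping at the ends of a range.
    """
    return (current_deg + lever_tilt * rate_deg_per_s * dt) % 360.0
```

Called once per control tick, this would let a passenger sweep the camera smoothly in either direction by holding the lever tilted.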
The control unit 3 is configured to generate a composite image by superimposing a virtual image on a specific object in the actual image captured by the imaging unit 1, and to cause the display unit 2 to display the composite image. That is, the control unit 3 superimposes a virtual image on the image (actual image) captured by the imaging unit 1 to generate a new image and causes the display unit 2 to display it; this is a system to which so-called mixed reality (including augmented reality and augmented virtuality) technology is applied. The control unit 3 of the present embodiment is configured to recognize a specific object in the landscape as a marker and to superimpose the virtual image on the portion corresponding to the marker. That is, the control unit 3 uses a marker-recognition-based mixed reality technique. As shown in fig. 1, the control unit 3 is constituted by a computer built mainly around a microprocessor. The computer includes a marker recognition unit 30, a marker storage unit 31, a position/orientation derivation unit 32, a virtual image warping unit 36, a storage unit 34, an image synthesis unit 33, and an image display unit 35. The computer of the control unit 3 according to the present embodiment is housed inside the seat 67 of each cabin 63.
The marker recognition unit 30 recognizes a specific object included in the actual landscape imaged by the imaging unit 1 as a marker. A marker is an object that serves as the reference for relative position and orientation when a virtual image is superimposed on an image of the actual landscape. Markers are recognized based on the characteristics of the three-dimensional shape of a specific object (e.g., a castle ruin, an observation deck, etc.) contained in the actual landscape. Information on the characteristics of the three-dimensional shape of each specific object is recorded in the marker storage unit 31 in advance. The marker storage unit 31 is formed of a memory and stores shape information of a plurality of different markers. The marker recognition unit 30 identifies which of the plurality of markers stored in the marker storage unit 31 an object corresponds to by comparing the shape information stored in the marker storage unit 31 with the shape information of the specific object included in the image captured by the imaging unit 1. The processing performed by the marker recognition unit may be realized by an image comparison algorithm such as DP (dynamic programming) matching. Increasing the resolution of the imaging unit 1 allows a more detailed comparison with the recorded markers, so the number of markers that can be recorded can be increased. Examples of markers in the present embodiment include castles, castle ruins, observation decks, towers, apartment buildings, amusement facilities, stations, highways, and the like included in the actual image. Markers can also be recognized at a plurality of places, with a plurality of objects included in the actual image treated as individual markers, or a plurality of objects included in the actual image can be recognized together as a single set forming one marker.
Information on the marker recognized by the marker recognition unit 30 is sent to the position/orientation derivation unit 32 and the virtual image warping unit 36.
The position/orientation deriving unit 32 obtains the three-dimensional position and orientation of the marker recognized by the marker recognition unit 30. For example, the position/orientation deriving unit 32 calculates the position/orientation of the marker with respect to the imaging unit 1 based on (a) the information on the three-dimensional shape of the marker received from the marker recognition unit 30 and (b) the three-dimensional position/shape of the object corresponding to the marker in the actual image. That is, the position/orientation deriving unit 32 derives the distance and angle of the marker as viewed from the imaging unit 1: the distance from the imaging unit 1 to the marker, and the angle between the direction from the imaging unit 1 toward the marker and the horizontal direction. As the information specifying the three-dimensional position/shape of the object corresponding to the marker, for example, the coordinate values within the marker of at least three feature points of the marker can be used. A known method may be used to calculate the three-dimensional position/orientation of the marker; for example, the registration method for augmented reality based on matching of templates generated online from texture images, described in the Transactions of the Virtual Reality Society of Japan, Vol. 7, No. 2, pp. 119-128 (2002), can be used.
Further, the position/orientation of the marker may be derived using a position detection unit and an imaging direction detection unit. The position detection unit is configured to detect the position of the cabin (i.e., the position of the imaging unit 1) and accordingly generates detection information (imaging unit position information) indicating the position of the cabin. The imaging direction detection unit detects the imaging direction of the imaging unit 1 and accordingly generates detection information indicating that imaging direction. The position and orientation of the marker with respect to the imaging unit 1 are derived from the detection information of the position detection unit and the detection information of the imaging direction detection unit; the derived position/posture of the marker is defined as the position/posture information of the marker. The position detection unit is constituted by, for example, a trigger transmitter, a trigger receiver, and a timer. The trigger transmitter is disposed on the platform 64, and a trigger receiver is arranged in each cabin. The timer measures the time elapsed from the moment the signal transmitted from the trigger transmitter is received by the trigger receiver; the elapsed time measured by the timer is defined as the elapsed time information. The control unit 3 provided in each cabin 63 calculates the position of that cabin 63 based on the elapsed time information measured by the timer and the rotational speed information of the rotating wheel 62, which rotates at a constant speed. As the imaging direction detection unit, a sensor such as a geomagnetic sensor or a gyro sensor attached to the imaging unit 1 can be used. Note that the rotating wheel 62 of the sightseeing vehicle rotates continuously, so each cabin 63 moves at a constant speed.
Therefore, the position/orientation of the marker included in the actual image changes all the time.
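The position calculation described above, from the elapsed time since the trigger signal combined with the constant rotation speed, can be sketched as follows. The parameter names and the choice of the boarding platform at the bottom of the wheel as the zero point are assumptions for illustration.

```python
import math

def cabin_position(elapsed_s: float, period_s: float, radius_m: float,
                   center_height_m: float) -> tuple[float, float]:
    """Position of a cabin on the circular track, given the elapsed time
    since it passed the boarding platform at the bottom of the wheel.

    Assumes constant rotation: one full revolution every `period_s` seconds.
    Returns (horizontal offset from the axle, height above the ground).
    """
    # Angle swept since passing the platform at the bottom of the wheel.
    angle = 2 * math.pi * elapsed_s / period_s
    x = radius_m * math.sin(angle)
    y = center_height_m - radius_m * math.cos(angle)
    return x, y
```

The resulting coordinates, together with the imaging direction from a geomagnetic or gyro sensor, are what the control unit would use to locate markers relative to the imaging unit.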
It is needless to say that a so-called Global Positioning System (GPS) can be used as the position detecting unit.
The position/orientation information derived by the position/orientation deriving unit 32 is sent to the virtual image warping unit 36.
Further, it is not necessary to use a position detection unit and an imaging direction detection unit in combination. That is, the position detection unit may be used independently of the imaging direction detection unit.
Further, in the above description, the trigger transmitter is provided on the platform and the trigger receiver in each cabin. However, the trigger transmitter need not necessarily be provided on the platform, and the trigger receiver need not necessarily be provided in each cabin. That is, the trigger transmitter and the trigger receiver may be provided at any location.
For example, the trigger transmitter may be arranged at the center of the rotating wheel. The trigger receivers may be arranged on the rotating wheel adjacent to the gondolas, or a trigger receiver may be provided at at least one location on the rotating wheel.
The drive device of the sightseeing vehicle is configured to move the rotating wheel at a constant speed. However, it is not essential that the driving device move the wheel at a constant speed; the driving device may appropriately change the rotation speed of the rotating wheel. In this case, the sightseeing vehicle may have a rotation speed detection unit, and the position detection unit detects the position of the cabin based on the elapsed time information output by the timer and the rotation speed detected by the rotation speed detection unit.
In the above description, the position and orientation of the marker with respect to the imaging unit are derived from the detection information of the position detection unit (imaging unit position information) and the detection information of the imaging direction detection unit (imaging direction information). However, the virtual image may be superimposed on the actual image based only on the detection information of the position detection unit (imaging unit position information); in this case, the storage unit stores virtual images associated with imaging unit position information. Alternatively, the virtual image may be superimposed on the actual image based only on the detection information of the imaging direction detection unit (imaging direction information); in this case, the storage unit stores virtual images associated with imaging direction information.
The timer may be configured to measure the elapsed time from the moment the imaging unit moves to a predetermined position. That is, it may measure the elapsed time from when the gondola passes the platform, or the elapsed time from when the gondola passes the uppermost portion of the rotating wheel.
When receiving the information on the marker transmitted from the marker recognition unit 30, the virtual image warping unit 36 acquires the virtual image corresponding to the marker from the storage unit 34. The storage unit 34 is formed of a memory and stores each specific marker and a specific virtual image in association with each other in advance. Examples of the virtual image include a computer graphic image (hereinafter referred to as a CG image) simulating a past landscape (for example, a castle or a castle town), a CG image simulating a future landscape, and various other images simulating landscapes of eras different from the present. In particular, when a historic site is recognized as a marker, as a scene of the past, a CG image of a castle or the like that once stood at that place may be superimposed on the actual image as a virtual image, or the appearance of a battle of the Warring States period may be superimposed on the actual image as a CG image. As for the appearance of such a battle, expressing it as a moving image is preferable because it provides a sense of presence and enjoyment. Needless to say, it is also possible to go further back in time and show, for example, a Stone Age dwelling as a CG image.
The virtual image warping unit 36 changes the size and angle of the virtual image stored in the storage unit 34 according to the position of the cabin 63. Specifically, the virtual image warping unit 36 changes the virtual image based on the position/orientation information of the marker derived by the position/orientation deriving unit 32. After receiving the information on the marker transmitted from the marker recognition unit 30, the virtual image warping unit 36 acquires the virtual image corresponding to the marker from the storage unit 34 as described above. Then, the virtual image warping unit 36 specifies the size and angle of the virtual image based on the information calculated by the position/orientation deriving unit 32, and deforms the virtual image based on this size/angle specification. For this deformation, it is preferable to assume that the object represented by the three-dimensional virtual image (e.g., a castle or the like) stands on the ground, and to change the virtual image so that it matches the appearance of that object on the ground as viewed from the imaging unit 1 of the cabin 63 in the air.
The virtual image deforming unit 36 may further include a correcting unit that corrects the brightness of the virtual image based on the brightness of the actual landscape, in addition to deforming the size and angle of the virtual image as described above. By adjusting the brightness of the virtual image, the correcting unit allows the virtual image to be superimposed on the actual image more naturally.
That is, the control unit may include a luminance detection unit and a correction unit. The luminance detection unit is configured to detect the luminance of the scenery outside the cabin captured by the imaging unit; the luminance detected by the luminance detection unit is defined as a first luminance. The correction unit adjusts the luminance of the image stored in the storage unit so that it approaches the first luminance. More specifically, the correction unit adjusts the luminance of the image stored in the storage unit so that it is the same as the first luminance. The image synthesizing unit then generates a synthesized image by combining the luminance-adjusted virtual image with the actual image captured by the imaging unit.
The correction unit may adjust the luminance of the entire image stored in the storage unit so that the luminance of a part of that image is equal to the first luminance.
The correction unit may adjust the brightness of the entire image stored in the storage unit so that the average brightness of the entire image stored in the storage unit is equal to the first brightness.
The correction unit may adjust the luminance of the entire image stored in the storage unit so that the average luminance of a part of the image stored in the storage unit is equal to the first luminance.
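The simplest of these variants, matching the average luminance of the virtual image to the first luminance, can be sketched as follows. The function name and the 8-bit pixel representation are illustrative assumptions:

```python
def match_brightness(virtual_pixels, first_luminance):
    """Scale virtual-image pixel values so that their average luminance
    equals the luminance measured from the real scene (the first
    luminance). Pixels are 8-bit grayscale values for simplicity."""
    avg = sum(virtual_pixels) / len(virtual_pixels)
    if avg == 0:
        return list(virtual_pixels)            # nothing to scale
    gain = first_luminance / avg
    return [min(255, max(0, round(p * gain))) for p in virtual_pixels]

# A bright CG image is dimmed to match a dusk scene measured at luminance 60.
adjusted = match_brightness([200, 180, 220], 60)
```

The adjusted pixels average to the first luminance, so the virtual image blends into the actual evening scenery rather than standing out as an artificially bright overlay.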
The information of the virtual image thus deformed by the virtual image deforming unit 36 is sent to the image synthesizing unit 33.
When the image synthesizing unit 33 acquires the information of the virtual image deformed by the virtual image deforming unit 36, it superimposes the deformed virtual image on the portion of the actual image corresponding to the marker. In this manner, the image synthesizing unit 33 generates a new image (a synthesized image). The information of the synthesized image generated here is sent to the image display unit 35.
The image display unit 35 is configured to display the image generated by the image synthesizing unit 33 on the display unit 2. When receiving the information of the synthesized image from the image synthesizing unit 33, the image display unit 35 causes the display unit 2 disposed in the cabin 63 to display the synthesized image.
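This compositing step can be sketched as a simple region overwrite. The images are represented as row-major lists for illustration; real implementations would typically also blend using a transparency mask, which is omitted here as an assumption of the sketch:

```python
def composite(real_image, virtual_image, top, left):
    """Overlay the (already deformed) virtual image onto the real image
    at the region corresponding to the recognized marker. The real image
    is copied so the camera frame itself is left untouched."""
    out = [row[:] for row in real_image]
    for i, vrow in enumerate(virtual_image):
        for j, v in enumerate(vrow):
            out[top + i][left + j] = v
    return out

real = [[0] * 4 for _ in range(4)]             # stand-in for a camera frame
virt = [[9, 9], [9, 9]]                        # stand-in for a deformed CG image
merged = composite(real, virt, 1, 1)           # marker region starts at (1, 1)
```

Only the marker region of the output carries virtual-image pixels; everything outside it remains the actual captured scenery, which is what keeps the displayed image synchronized with weather and brightness.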
The series of control operations in the control unit 3 is performed continuously as the cabin 63 of the sightseeing vehicle moves. For example, as the cabin 63 rises, the relative attitude of the marker also changes gradually. The virtual image deforming unit 36, however, changes the virtual image according to the position of the cabin. The virtual image therefore gradually becomes smaller as the cabin 63 rises, and its orientation changes to that of an object looked down upon from above. Accordingly, even though the image superimposed on the actual image is a virtual one, the image displayed on the display unit 2 can closely approximate a real view.
The image displayed on the display unit 2 is an image in which a virtual image is superimposed on an actual image, so it remains synchronized with the surrounding environment, such as the weather and brightness. Accordingly, the sense of presence can be increased compared with, for example, simply playing back a pre-recorded image in accordance with the movement of the sightseeing vehicle.
That is, the sightseeing vehicle according to the present embodiment can give the passengers in the cabin 63 a mixed reality experience that makes use of the wide field of view seen from the cabin 63. Moreover, the cabin 63 moves not only in the horizontal direction but also in the height direction, so a mixed reality that cannot be obtained from a viewpoint on the ground can be provided.
The sightseeing vehicle of the present embodiment is provided, in the operation unit 5, with an operation lever 51 that allows the imaging direction of the imaging unit 1 to be changed freely. A mixed reality experience can therefore be given for the landscape in whatever direction the passenger desires, which further enhances the passenger's enjoyment.
In addition, the virtual image may not be a landscape of a different age. That is, the virtual image may be a landscape different from the current landscape.
Further, virtual images of a plurality of scenes may be stored in the storage section. In this case, the setting unit may set any scenery in the virtual image of the plurality of scenery included in the storage unit.
Here, the sightseeing vehicle according to the present embodiment may be provided, in the operation unit 5, with a setting unit 52 capable of setting the contents of the virtual image, in addition to the operation lever 51. The setting unit 52 has setting buttons for various settings, such as: a year setting button for displaying, among virtual images created by simulating each era, the virtual image of the set era; and a character setting button for displaying various characters that do not actually exist (for example, monsters from a world of fantasy). The setting unit 52 is connected to the control unit 3, and when the year setting button is operated, for example, the display unit 2 can present a selection of past scenes, such as the Heian period, or future scenes. The storage unit 34 stores in advance a plurality of virtual images corresponding to each era. For example, when the Heian period is set by the year setting button, the setting unit 52 transmits a signal indicating that the Heian period has been selected to the image synthesizing unit 33. The control unit 3 then acquires from the storage unit 34 a virtual image corresponding to the Heian period (for example, a CG image simulating Heian-kyo) and corresponding to the marker recognized by the marker recognition unit 30. The virtual image deforming unit 36 deforms the size and orientation (angle) of the virtual image based on the information of the position/orientation deriving unit 32 (that is, based on the position of the cabin 63), and the image synthesizing unit 33 superimposes the deformed virtual image on the actual image to generate an image. The image generated here is sent to the image display unit 35 and displayed on the display unit 2.
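The selection performed when a setting button is operated amounts to a lookup keyed by both the recognized marker and the chosen era. A minimal sketch, in which the key names and stored values are purely illustrative:

```python
def select_virtual_image(store, marker_id, era):
    """Look up the stored virtual image for a recognized marker under the
    era chosen with the year setting button. Returns None when no image
    is registered for that combination."""
    return store.get((marker_id, era))

# Illustrative storage-unit contents: one marker, two selectable eras.
store = {
    ("castle_ruin", "heian"):  "cg_heian_city",
    ("castle_ruin", "future"): "cg_future_city",
}
img = select_virtual_image(store, "castle_ruin", "heian")
```

The same marker thus yields different CG images depending on the era set by the passenger, which is the behavior the setting unit 52 provides.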
As described above, the setting unit 52 allows the image displayed on the display unit 2 to be set to the era the passenger desires, so the passenger can learn the history of the land, which cannot be grasped from the view alone, and an interest in history can be aroused. Moreover, since a historic building or the like can be looked down upon from above, the passenger can experience an enjoyment not previously available.
In the sightseeing vehicle according to the present embodiment, the virtual images stored in the storage unit 34 must be corrected and updated every time the content is changed; on the other hand, changing the content regularly can attract repeat visitors.
Further, if an image indicating the name of a specific object corresponding to a marker (a tourist attraction, a famous place, or the like), or an image describing sightseeing information and explanatory text for the specific object, is used as the virtual image, the display on the display unit 2 can serve as a video guide for the sightseeing vehicle. The virtual image may also be used as so-called digital signage (electronic advertising), in which a business name is displayed in front of a building, or a business name, product name, and brand name are shown on an advertising balloon floating in the air.
The virtual image may also be a CG image representing the scene under the sea, underwater, or underground, or, when the actual scenery is at night, a CG image of an aurora, lightning, the universe, constellations, or the like.
As described above, the sightseeing vehicle according to embodiment 1 includes a plurality of cabins, an imaging unit, a display unit, and a control unit. The imaging part is arranged on each cabin. The image pickup unit is configured to pick up an image of a landscape outside the cabin. The display part is arranged in each cabin. The control unit is configured to generate a composite image by superimposing the virtual image on a predetermined position of the actual image captured by the imaging unit. The control unit is configured to cause the display unit to display the composite image.
With such a configuration, the passengers in the cabin 63 can be given a mixed reality using a wide range of visual fields from the cabin 63 of the sight-seeing car. Moreover, the cabin 63 always moves not only in the horizontal direction but also in the height direction, so that it is possible to give a mixed reality that cannot be obtained only from a viewpoint from the ground.
The control unit further includes a storage unit, a virtual image deforming unit, an image synthesizing unit, and an image display unit. The storage unit stores a virtual image. The virtual image deforming unit is configured to change the shape of the virtual image stored in the storage unit in accordance with the position of the imaging unit. More specifically, the virtual image deforming unit is configured to change the size and angle of the virtual image stored in the storage unit according to the position of the cabin (imaging unit). The image synthesizing unit generates a synthesized image by superimposing the image changed by the virtual image deforming unit on a predetermined position of the actual image captured by the imaging unit. The image display unit causes the display unit to display the image generated by the image synthesizing unit.
In this case, the virtual image can be deformed in accordance with the movement of the cabin 63. This makes it possible to bring the composite image closer to a real image.
The sightseeing vehicle is provided with a position detection unit and an imaging direction detection unit. The position detection unit is configured to detect the position of the imaging unit. The imaging direction detection unit is configured to detect the imaging direction of the imaging unit. The virtual image deforming unit is configured to deform the virtual image based on the detection information of the position detection unit and the detection information of the imaging direction detection unit.
In this case, the position matching between the actual image and the virtual image can be performed with high accuracy.
The control unit further includes a marker recognition unit and a position/orientation deriving unit. The marker recognition unit is configured to recognize a specific object included in the actual image as a marker. The position/orientation deriving unit is configured to derive the position and orientation of the marker recognized by the marker recognition unit. The storage unit stores the virtual image associated with the marker in advance. The virtual image deforming unit changes the virtual image based on the position/orientation information of the marker derived by the position/orientation deriving unit. After the marker is recognized by the marker recognition unit, the control unit acquires the virtual image corresponding to the marker from the storage unit.
The control unit causes the virtual image acquired from the storage unit to be changed by the virtual image deforming unit. In other words, the virtual image deforming unit changes the virtual image that the control unit acquired from the storage unit.
Then, the control unit superimposes the changed image on the portion of the actual image corresponding to the marker by means of the image synthesizing unit to generate a synthesized image. The image display unit causes the display unit to display the synthesized image.
In this case, the accuracy of the position matching between the actual image and the virtual image can be improved.
The imaging unit 1 is a camera that is provided outside the cabin 63 and can change the imaging direction. The cabin 63 further includes an operation unit 5 for changing the imaging direction of the camera.
Accordingly, the virtual image can be displayed in a superimposed manner with the actual image at the viewpoint desired by the passenger of the cabin 63. Therefore, the passenger can be given a mixed reality feeling using a wider range of visual field.
Further, the marker recognition unit may further include a position detection unit. In this case, the position detection unit is configured to detect the position of the imaging unit and output imaging unit position information. The marker recognition unit derives position/orientation information indicating at least one of the position and the orientation of the marker, based on the imaging unit position information.
In this case, the composite image can be made closer to a real image.
In addition, the sightseeing vehicle is provided with a driving device configured to move each cabin. The position detection unit may further include a timer. In this case, the timer is configured to measure the elapsed time from the time point at which the imaging unit moves to a predetermined position, and to output elapsed time information indicating the elapsed time. The position detection unit outputs imaging unit position information indicating the position of the imaging unit based on the elapsed time information and the movement speed of the cabin.
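For a wheel that rotates at a constant speed, deriving the cabin position from the elapsed time reduces to elementary circle geometry. The following is a sketch under that assumption; the function name and the reference point (the bottom of the wheel) are illustrative choices:

```python
import math

def cabin_position(elapsed_s, period_s, radius_m, hub_height_m):
    """Derive a cabin's (x, z) position from the elapsed time since it
    passed the bottom of the wheel, assuming the driving device moves
    the cabins at a constant rotation period along the fixed track."""
    theta = 2.0 * math.pi * elapsed_s / period_s   # angle turned from the bottom
    x = radius_m * math.sin(theta)                 # horizontal offset from the hub
    z = hub_height_m - radius_m * math.cos(theta)  # height above the ground
    return x, z

# A quarter turn after the bottom, the cabin is level with the hub.
x, z = cabin_position(elapsed_s=150, period_s=600, radius_m=30, hub_height_m=35)
```

Because the trajectory and speed are fixed, a single timer reading is enough to recover the imaging unit position, which is exactly why the embodiment can dispense with a dedicated position sensor on each cabin.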
In this case, the virtual image can be deformed more appropriately according to the movement of the cabin 63. This makes it possible to bring the composite image closer to a real image.
The position detection unit may further include a transmitter and a receiver. The transmitter is configured to emit a signal. The receiver is configured to receive the signal. The timer is configured to measure an elapsed time from a time when the imaging portion is located at a predetermined position and the signal is received.
In this case, too, the composite image can be made closer to a real image.
The driving device is configured to move each cabin at a constant (constant) moving speed.
The driving device is configured to move each cabin along a predetermined trajectory. The timer is configured to measure the elapsed time from the time point at which the imaging unit moves to a predetermined position on the predetermined trajectory, and to output elapsed time information indicating the elapsed time.
Further, the marker recognition unit may further include an imaging direction detection unit. The imaging direction detection unit is configured to detect the imaging direction of the imaging unit and output imaging direction information. The marker recognition unit derives position/orientation information indicating at least one of the position and the orientation of the marker from the imaging direction information.
The control unit may further include a luminance detection unit and a correction unit. The brightness detection unit is configured to detect the brightness of the outside-cabin scenery captured by the imaging unit. The luminance detected by the luminance detecting unit is defined as a first luminance. The correction unit adjusts the brightness of the image stored in the storage unit so that the brightness of the image stored in the storage unit approaches a first brightness. The image synthesizing unit synthesizes the brightness-adjusted virtual image with the actual image captured by the imaging unit to generate a synthesized image.
In this case, the composite image can be made closer to a real image.
The position detection unit may be used independently of other components. That is, the sight-seeing bus may have a position detecting unit. In this case, the position detection unit is configured to detect the position of the imaging unit and output imaging unit position information. The control unit includes a storage unit, an image combining unit, and an image display unit. The storage unit stores a virtual image associated with the imaging position information. The image combining unit is configured to read out a virtual image associated with the imaging position information from the storage unit based on the imaging position information. The image combining unit is configured to generate a combined image by superimposing the virtual image read from the storage unit on the actual image captured by the imaging unit. The image display unit is configured to cause the display unit to display the composite image.
In this case, the composite image can be made closer to a real image.
Further, the control section may have a virtual image deformation unit. In this case, the virtual image deforming means deforms the image stored in the storage unit based on the imaging unit position information. The image synthesizing means is configured to generate a synthesized image by superimposing the virtual image deformed by the virtual image deforming means on the actual image captured by the imaging unit.
In this case, too, the composite image can be made closer to a real image.
Next, embodiment 2 will be described with reference to fig. 4. Since this embodiment is largely the same as embodiment 1 described above, the same reference numerals are given to the same portions to omit descriptions, and different portions will be mainly described.
The sightseeing vehicle of the present embodiment differs from embodiment 1 in that the display unit 2 is provided on a head-mounted display device 7. The head-mounted display device 7 is constituted by a head-mounted display (HMD) 70, a display that can be worn on the head. When a passenger wears the head-mounted display 70, the display serving as the display unit 2 covers the area in front of the passenger's eyes.
The sightseeing vehicle of the present embodiment is provided with a connector unit 71 for connecting the head-mounted display 70 to the operation unit 5. The connector unit 71 is connected to the control unit 3. The head-mounted display 70, connected via the connector unit 71, receives the image signal transmitted from the image display unit 35 of the control unit 3 and displays the composite image on the display unit 2 covering the area in front of the passenger's eyes.
Accordingly, if a past or future scene is displayed, the passenger can gaze out over a landscape beyond space and time, as if riding a time machine.
Further, the content displayed on the display unit 2 of the head-mounted display 70 may be a 3D image from which the passenger can perceive depth and a stereoscopic effect. That is, the control unit 3 may be provided with a 3D image forming unit that generates images differing between the right eye and the left eye, so that the passenger perceives the display content of the display unit 2 as stereoscopic. A sightseeing vehicle with an even richer sense of presence can thereby be provided.
Note that the configuration of this embodiment mode can be combined with the configuration of embodiment mode 1. Therefore, when the configuration of the present embodiment is combined with the configuration of embodiment 1, the same effects as those obtained in embodiment 1 can be obtained.
Although the sightseeing vehicle of the present invention has been described above based on a vehicle using marker-recognition mixed reality technology, the sightseeing vehicle of the present invention may instead employ so-called markerless tracking (exemplified in JP 2007-207251), which uses feature points of an arbitrary image instead of a marker. In this case, the marker recognition unit 30 of embodiment 1 is not necessarily required.
Further, since the virtual image deforming unit of the present invention may deform the image according to the position of the cabin, the size and angle of the virtual image may be determined from the steady movement of the cabin of the sightseeing vehicle. For this reason, the position/orientation deriving unit 32 of embodiment 1 is not necessarily required in the present invention.
In the embodiment using the head-mounted display 70 according to embodiment 2, the configuration may be such that an imaging unit 1 is provided on the head-mounted display 70 in each cabin 63, the imaging direction of the imaging unit 1 is aligned with the passenger's line of sight, and a position detection unit and an imaging direction detection unit are provided in the imaging unit 1. In this way, a mixed reality corresponding to the direction in which the passenger is actually facing can be presented. In this case, it goes without saying that an imaging unit 1 provided on the head-mounted display 70 falls within the meaning of "provided on each cabin" in the present invention. Further, a wireless device may be employed as the head-mounted display 70.
In the sightseeing vehicle according to embodiment 1, the control unit 3 is provided in each cabin 63, but in the sightseeing vehicle of the present invention, the control unit may instead be provided at a location separate from the cabins (for example, a room in a building) and remotely control the display unit of each cabin. Also, in the sightseeing vehicle according to embodiment 1, the year setting buttons of the setting unit 52 correspond to individual historical periods (for example, the Heian period), but the setting unit of the sightseeing vehicle according to the present invention may instead allow selection by Western calendar year or by historical event.

Claims (20)

1. A sight-seeing bus having a plurality of cabins, said sight-seeing bus comprising:
an imaging unit provided in each cabin and configured to image a landscape outside the cabin;
a display unit provided in each of the cabins;
a control unit that generates a composite image by superimposing the virtual image on a predetermined position of the actual image captured by the imaging unit, and causes the display unit to display the composite image;
a position detection unit that detects a position of the image pickup unit and outputs image pickup unit position information indicating the position of the image pickup unit;
a drive device configured to move each cabin,
the position detecting section has a timer provided therein,
the timer is configured to measure an elapsed time from a time point when the imaging unit moves to a predetermined position, and output elapsed time information indicating the elapsed time,
the position detection unit is configured to output the imaging unit position information based on the elapsed time information and the movement speed of the cabin.
2. The sight-seeing vehicle of claim 1,
the control unit includes:
a storage unit that stores a virtual image;
a virtual image deforming unit which changes the shape of the virtual image stored in the storage unit according to the position of the imaging unit;
an image synthesizing means for generating a synthesized image by superimposing the image changed by the virtual image deforming means on a predetermined position of the actual image captured by the imaging unit; and
and an image display unit that causes the display unit to display the synthesized image generated by the image synthesis unit.
3. The sight-seeing bus of claim 2, further comprising: an imaging direction detecting section for detecting an imaging direction of the imaging section,
the virtual image deformation unit changes the virtual image based on the detection information of the position detection unit and the detection information of the imaging direction detection unit.
4. The sight-seeing vehicle of claim 2,
the control section further includes:
a marker recognition unit that recognizes a specific object included in the actual image as a marker; and
a position/posture deriving unit that derives at least one of a position and a posture of the marker recognized by the marker recognizing unit,
the storage unit stores the virtual image in advance in association with the mark,
the virtual image deforming unit changes the virtual image based on the position/orientation information of the marker derived by the position/orientation deriving unit,
wherein, when the mark is recognized by the mark recognition unit, the control unit acquires a virtual image corresponding to the mark from the storage unit, changes the acquired virtual image by means of the virtual image deforming unit, superimposes the changed image on the portion of the actual image corresponding to the mark by means of the image synthesizing unit to generate a synthesized image, and causes the image display unit to display the synthesized image.
5. The sight-seeing vehicle of claim 1,
the virtual image is a computer graphic image simulating a landscape different from the current one.
6. The sight-seeing vehicle of claim 5,
the control unit has a storage unit in which a plurality of virtual images of scenery are stored,
each cabin further comprises a setting part which can be set to any scenery,
the control unit generates a composite image by acquiring a virtual image corresponding to the scene set by the setting unit from the storage unit and superimposing the virtual image on the real image, and causes the display unit to display the composite image by the image display means.
7. The sight-seeing vehicle of claim 1,
the camera part is a camera which is arranged outside the cabin and can change the camera shooting direction,
the cabin further includes an operation portion that changes an imaging direction of the camera.
8. The sight-seeing vehicle of claim 1,
the display section is included in a head-mounted display device.
9. The sight-seeing vehicle of claim 4,
the marker recognition unit derives position/orientation information of a marker indicating at least one of a position and an orientation of the marker, based on the imaging unit position information.
10. The sight-seeing vehicle of claim 9,
the position detection part further includes a transmitter and a receiver,
the transmitter is configured to emit a signal that,
the receiver is configured to receive the signal and,
the timer is configured to measure an elapsed time from a time when the imaging portion is located at a predetermined position and the signal is received.
11. The sight-seeing vehicle of claim 9,
the driving device is configured to move each cabin at a constant moving speed.
12. The sight-seeing vehicle of claim 9,
the driving device is configured to move each cabin along a predetermined trajectory,
the timer is configured to measure an elapsed time from a time when the imaging unit moves to a predetermined position on the predetermined track, and to output elapsed time information indicating the elapsed time.
13. The sight-seeing vehicle of claim 4,
the mark recognition unit has an image pickup direction detection section,
the image pickup direction detecting unit is configured to detect an image pickup direction of the image pickup unit and output image pickup direction information,
the marker recognition unit derives position/orientation information of a marker indicating at least one of a position and an orientation of the marker, based on the imaging direction information.
14. The sight-seeing vehicle of claim 2,
the control part is also provided with a brightness detection unit and a correction unit,
the brightness detection means is configured to detect the brightness of the outside-cabin scenery captured by the imaging unit, and define the brightness detected by the brightness detection means as a first brightness,
the correction unit adjusts the brightness of the image stored in the storage unit so that the brightness of the image stored in the storage unit approaches a first brightness,
the image synthesizing unit synthesizes the virtual image with the adjusted brightness and the actual image captured by the image capturing unit to generate a synthesized image.
15. The sight-seeing vehicle of claim 1,
the control section has a storage section, an image synthesizing unit and an image display unit,
the storage unit stores a virtual image, and the virtual image is associated with imaging position information,
the image combining means is configured to read out a virtual image associated with the imaging position information from the storage unit based on the imaging position information,
the image synthesizing unit is configured to generate a synthesized image by superimposing the virtual image read out from the storage unit on the actual image captured by the imaging unit,
the image display unit is configured to cause the display unit to display the composite image.
16. The sight-seeing vehicle of claim 15,
the control section further has a virtual image deformation unit,
the virtual image deforming means deforms the image stored in the storage unit based on the imaging unit position information,
the image synthesizing means is configured to generate a synthesized image by superimposing the virtual image deformed by the virtual image deforming means on the actual image captured by the imaging unit.
17. The sight-seeing vehicle of claim 15,
the position detection part further includes a transmitter and a receiver,
the transmitter is configured to emit a signal that,
the receiver is configured to receive the signal and,
the timer is configured to measure an elapsed time from a time when the imaging portion is located at a predetermined position and the signal is received.
18. The sight-seeing vehicle of claim 15,
the driving device is configured to move each cabin at a constant moving speed.
19. The sight-seeing vehicle of claim 15,
the driving device is configured to move each cabin along a predetermined trajectory,
the timer is configured to measure an elapsed time from a point in time when the imaging unit moves to a predetermined position on the predetermined track, and to output elapsed time information indicating the elapsed time.
20. The sight-seeing vehicle of claim 15,
the sightseeing bus also comprises a camera shooting direction detection part,
the image pickup direction detecting unit is configured to detect an image pickup direction of the image pickup unit and output image pickup direction information,
the storage unit has a virtual image associated with imaging unit position information and imaging direction information,
the image combining means is configured to read out a virtual image associated with the imaging unit position information and the imaging direction information from the storage unit based on the imaging unit position information and the imaging direction information.
HK13109061.0A 2013-08-02 Tourist car HK1181690B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110303517.4A CN103028252B (en) 2011-09-29 2011-09-29 Tourist car

Publications (2)

Publication Number Publication Date
HK1181690A1 HK1181690A1 (en) 2013-11-15
HK1181690B true HK1181690B (en) 2015-07-10


Similar Documents

Publication Publication Date Title
JP5759110B2 (en) Ferris wheel
JP5804571B2 (en) Vehicle system
JP6995799B2 (en) Systems and methods for generating augmented reality and virtual reality images
CN104781873B (en) Image display device, method for displaying image, mobile device, image display system
KR101748401B1 (en) Method for controlling virtual reality attraction and system thereof
JP6026088B2 (en) Remote control system
EP2039402B1 (en) Input instruction device, input instruction method, and dancing simultation system using the input instruction device and method
US8142277B2 (en) Program, game system, and movement control method for assisting a user to position a game object
JP6396027B2 (en) Program and game device
US20210192851A1 (en) Remote camera augmented reality system
CN103028252B (en) Tourist car
CN112150885A (en) Cockpit system based on mixed reality and scene construction method
JP2020204856A (en) Image generation system and program
US20210158623A1 (en) Information processing device, information processing method, information processing program
US11273374B2 (en) Information processing system, player-side apparatus control method, and program
JP6566209B2 (en) Program and eyewear
HK1181690B (en) Tourist car
JPH07306956A (en) Virtual space experience system using closed space equipment
JP2017099790A (en) Traveling object operation system
HK1180818A (en) Vehicle system
JP2001154571A (en) Method for selecting construction site
HK40087933A (en) Systems and methods for generating augmented and virtual reality images
JP2021159108A (en) Real image generation device, viewing device and real image generation method