
WO2018030320A1 - Vehicle display device - Google Patents


Info

Publication number
WO2018030320A1
WO2018030320A1 (PCT application PCT/JP2017/028520)
Authority
WO
WIPO (PCT)
Prior art keywords
image
user
vehicle
virtual image
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2017/028520
Other languages
French (fr)
Japanese (ja)
Inventor
勇希 舛屋 (Yuki Masuya)
誠 秦 (Makoto Hata)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Seiki Co Ltd
Original Assignee
Nippon Seiki Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Seiki Co Ltd filed Critical Nippon Seiki Co Ltd
Priority to JP2018533019A (granted as JP6874769B2)
Publication of WO2018030320A1

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60R — VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00 — Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/20 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/22 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R 1/23 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R 1/24 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view in front of the vehicle
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60K — ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K 35/00 — Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K 35/10 — Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60K — ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K 35/00 — Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K 35/20 — Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K 35/21 — Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor using visual output, e.g. blinking lights or matrix displays
    • B60K 35/23 — Head-up displays [HUD]
    • B60K 35/233 — Head-up displays [HUD] controlling the size or position in display areas of virtual images depending on the condition of the vehicle or the driver
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60K — ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K 35/00 — Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K 35/20 — Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K 35/28 — Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; characterised by the purpose of the output information, e.g. for attracting the attention of the driver
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60K — ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K 35/00 — Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K 35/60 — Instruments characterised by their location or relative disposition in or on vehicles
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60R — VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 11/00 — Arrangements for holding or mounting articles, not otherwise provided for
    • B60R 11/02 — Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • G — PHYSICS
    • G02 — OPTICS
    • G02B — OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 — Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 — Head-up displays

Definitions

  • the present invention relates to a vehicle display device.
  • the present invention relates to a vehicle display device that can provide appropriate information to a user without being affected by a change in the position of the user's viewpoint.
  • In such a device, a virtual image is visually recognized by the user sitting in the driver's seat by means of the light of the display image reflected by the front windshield.
  • The virtual image is visually recognized by the user sitting in the driver's seat as if it were formed on the vehicle traveling direction side (vehicle front side) of the front windshield of the vehicle.
  • Such a vehicle display device includes an image display unit that displays a display image, and a projection unit including an optical system with a concave mirror that projects the display image onto the front windshield of the vehicle.
  • A user sitting in the driver's seat of a vehicle equipped with such a vehicle display device can visually recognize a virtual image, giving information on the presence of other vehicles, obstacles, and the like, superimposed on the road ahead of the vehicle seen through the front windshield.
  • As the position at which the virtual image is visually recognized moves upward in the vertical direction of the front windshield, the virtual image is superimposed on the farther side of the landscape seen through the front windshield.
  • Conversely, as that position moves downward, the virtual image is superimposed on the nearer side of the landscape seen through the front windshield.
  • However, the position of the viewpoint of the user sitting in the driver's seat is not constant; it varies with the user's sitting height, sitting posture, and the like.
  • Specifically, as the position of the viewpoint of the user sitting in the driver's seat rises, the virtual image is superimposed on the nearer side of the landscape seen through the front windshield.
  • the object in the landscape on which the virtual image is superimposed shifts, which may give the user a sense of incongruity.
  • Patent Document 1 discloses a head-up display device (vehicle display) that adjusts the projection direction of the optical system including the concave mirror of the projection unit according to the position in the vertical direction of the viewpoint of the user sitting in the driver's seat of the vehicle.
  • the vehicle display device disclosed in Patent Document 1 includes a concave mirror actuator that adjusts the projection angle of the concave mirror of the projection unit, and a viewpoint detection camera that acquires the position of the viewpoint of the user sitting in the driver's seat of the vehicle.
  • The vehicle display device disclosed in Patent Document 1 controls the concave mirror actuator so that, when the position of the viewpoint of the user sitting in the driver's seat of the vehicle acquired by the viewpoint detection camera is high, the display image is projected toward the upper side of the front windshield in the vertical direction.
  • Likewise, it controls the concave mirror actuator so that, when the position of the viewpoint of the user acquired by the viewpoint detection camera is low, the display image is projected toward the lower side of the front windshield in the vertical direction.
  • In this way, the vehicle display device disclosed in Patent Document 1 is configured so that, even when the position of the viewpoint of the user sitting in the driver's seat changes, the target in the landscape seen through the front windshield on which the virtual image is superimposed does not shift greatly.
  • FIG. 10 is a schematic diagram for explaining, for the vehicle display device described in Patent Document 1, the relationship between the viewpoint position of the user, the virtual image visually recognized by the user, and the range of distance on the road surface of the landscape on which the virtual image is superimposed. To make this relationship easy to see, FIG. 10 exaggerates the amount of change in the user's viewpoint position.
  • the vertical distances between the user viewpoint position 101u, the user viewpoint position 101r, and the user viewpoint position 101d shown in FIG. 10 are actually shorter than the example shown in FIG.
  • In FIG. 10, the z-axis positive direction represents the vehicle front direction,
  • the y-axis positive direction represents the upper side in the vertical direction,
  • and the x-axis positive direction (perpendicular to the drawing) represents the vehicle left direction.
  • FIG. 10 shows three viewpoint positions of a user viewpoint position 101u, a user viewpoint position 101r, and a user viewpoint position 101d as examples of the viewpoint positions of the user sitting in the driver's seat of the vehicle.
  • The virtual image area 301u shown in FIG. 10 is the area in which the virtual image visually recognized by the user is displayed when, for example, the viewpoint of the user sitting in the driver's seat is at the user viewpoint position 101u and the projection angle of the display image has been adjusted by the vehicle display device described in Patent Document 1.
  • The virtual image area 301r shown in FIG. 10 is, likewise, the area in which the virtual image is displayed when the user's viewpoint is at the user viewpoint position 101r.
  • The virtual image area 301d shown in FIG. 10 is, likewise, the area in which the virtual image is displayed when the user's viewpoint is at the user viewpoint position 101d.
  • In the vehicle display device described in Patent Document 1, when the position of the viewpoint of the user sitting in the driver's seat changes, the direction in which the display image is projected changes, but the area in which the image display unit displays the display image does not change. Therefore, the vertical sizes of the virtual image area 301u, the virtual image area 301r, and the virtual image area 301d are all the same.
  • The superimposed distance range 401u shown in FIG. 10 is the range of distance on the road surface 91 on which the virtual image area 301u is superimposed, among the landscape seen through the front windshield 2, when the viewpoint of the user sitting in the driver's seat is at the user viewpoint position 101u.
  • The superimposed distance range 401r shown in FIG. 10 is, likewise, the range of distance on the road surface 91 on which the virtual image area 301r is superimposed when the user's viewpoint is at the user viewpoint position 101r.
  • The superimposed distance range 401d shown in FIG. 10 is, likewise, the range of distance on the road surface 91 on which the virtual image area 301d is superimposed when the user's viewpoint is at the user viewpoint position 101d.
  • Here, the amount of vertical change of the virtual image is smaller than the amount of vertical change of the user viewpoint position. As the user viewpoint position moves vertically upward, the angle between the horizontal plane and the line of sight along which the user views the virtual image increases; conversely, as the user viewpoint position moves vertically downward, that angle decreases. Therefore, the superimposed distance range 401u at the user viewpoint position 101u, which is higher than the user viewpoint position 101r, is shorter than the superimposed distance range 401r at the user viewpoint position 101r, and the superimposed distance range 401d at the user viewpoint position 101d, which is lower than the user viewpoint position 101r, is longer than the superimposed distance range 401r.
  • Note that in FIG. 10 only the vehicle-rear-side ends of the superimposed distance range 401u, the superimposed distance range 401r, and the superimposed distance range 401d are shown as varying, but the positions of the vehicle-front-side ends can also vary.
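  • The geometry behind these changes can be sketched with a simplified model that is not taken from Patent Document 1: an eye at height h looks through a fixed virtual image plane a few metres ahead whose visible band spans two heights above the road, and each sight line is extended until it meets the road surface. All numeric values below are illustrative assumptions.

```python
def ground_distance(eye_h, image_d, point_h):
    """Distance at which the sight line from an eye at height eye_h (m),
    passing through a virtual-image point at height point_h (m) located
    image_d (m) ahead, meets the road surface (requires point_h < eye_h)."""
    return eye_h * image_d / (eye_h - point_h)

def superimposed_range(eye_h, image_d=3.0, v_bottom=0.6, v_top=1.0):
    """Length of the road-surface band covered by the virtual image."""
    near = ground_distance(eye_h, image_d, v_bottom)  # lower image edge
    far = ground_distance(eye_h, image_d, v_top)      # upper image edge
    return far - near
```

With these assumed numbers, raising the eye from 1.2 m to 1.4 m shrinks the band from 12.0 m to 5.25 m, matching the behaviour described above: a higher viewpoint steepens the sight line and shortens the superimposed distance range.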
  • the display device for a vehicle described in Patent Document 1 may give a sense of incongruity to the user when the position of the user's viewpoint changes.
  • One object of the present invention is to provide a vehicle display device that can provide appropriate information to a user without being affected by a change in the position of the user's viewpoint.
  • Other objects of the present invention will become apparent to those skilled in the art by referring to the aspects and preferred embodiments exemplified below and the accompanying drawings.
  • The vehicle display device of the present invention includes: a viewpoint position acquisition unit that acquires the position of the viewpoint of a user sitting in the driver's seat of a vehicle; a road shape information acquisition unit that acquires road shape information, that is, information on the shape of the road ahead of the vehicle on the travel route of the vehicle; an image display unit having a display surface capable of displaying an image; an image generation unit that determines, according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit, the position and length of a use area, which is the part of the display surface used to display the image, and that displays the image in the use area of the display surface; and a projection unit that projects light from the display surface toward a translucent member so as to generate a virtual image region corresponding to the use area and display, in the virtual image region, a virtual image corresponding to the image.
  • The image generation unit determines the position and length of the use area, which correspond to the vertical position and length of the virtual image region, according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit, and corrects the length of the use area according to the road shape information acquired by the road shape information acquisition unit so that the upper end of the virtual image region approaches its lower end and the virtual image region is shortened in the vertical direction.
  • FIG. 1A is a block diagram showing a configuration example of the vehicle display device of the present invention; FIG. 1B shows a configuration example of the image display unit shown in FIG. 1A. Further figures provide a cross-sectional view of the projection unit shown in FIG. 1A, an example of the landscape and virtual image seen by the user sitting in the driver's seat of a vehicle equipped with the vehicle display device shown in FIG. 1A, and a flowchart showing an example of its operation.
  • Four figures show the relationship between the position of the user's viewpoint and the image displayed by the image display unit of the vehicle display device shown in FIG. 1A.
  • A schematic diagram explains, for the vehicle display device of the present invention, the relationship between the viewpoint position of the user, the virtual image visually recognized by the user, and the range of distance on the road surface of the landscape on which the virtual image is superimposed; three further figures show the relationship between the position of the user's viewpoint and the image displayed by the image display unit shown in FIG. 1A.
  • A further schematic diagram explains, for the vehicle display device of the present invention, the same relationship, together with a diagram showing the correction of the use area.
  • Patent Document 1 Japanese Patent Laid-Open No. 2014-210537
  • The following is a schematic explanation of the relationship between the viewpoint position of the user, the virtual image visually recognized by the user, and the range of distance on the road surface of the landscape on which the virtual image is superimposed.
  • In the following description, the z-axis is defined along the vehicle front-rear direction with the traveling direction of the vehicle 1 as the vehicle front direction, the y-axis is defined along the vertical direction (the vertical direction when the road surface on which the vehicle 1 travels is horizontal), and the x-axis is defined along the left-right direction as seen facing the vehicle front direction (vehicle left-right direction).
  • the x-axis positive direction represents the left direction of the vehicle
  • the y-axis positive direction represents the upper side in the vertical direction (upward in real space)
  • the z-axis positive direction represents the front direction of the vehicle.
  • the vehicle display device 10 includes an image display unit 20, an image generation unit 30, a viewpoint position acquisition unit 40, a projection unit 50, and a front information acquisition unit 60.
  • the image display unit 20 has a display surface 21 that can display an image, as shown in FIG. 1B.
  • An area 210 on the display surface 21 where an image can be displayed is referred to as a display area 210, for example.
  • An example of the display surface 21 is a liquid crystal panel 21 having a plurality of pixels 22, as shown in FIG. 1B.
  • In this case, the display area 210 is, for example, the entire set of pixels 22 of the liquid crystal panel 21.
  • An example of the image display unit 20 is the liquid crystal panel module 20 including, for example, a liquid crystal panel 21 and a drive circuit 26 for the liquid crystal panel 21.
  • When a signal representing an image generated by the image generation unit 30 is input, the image display unit 20 displays an image in accordance with the input signal, using the pixels 22 in at least a part of the display area 210 of the display surface 21.
  • the liquid crystal panel module 20 is used as an example of the image display unit 20 as appropriate, but the image display unit 20 may be another display device.
  • For example, the image display unit 20 may be a self-luminous display panel module using organic EL (Electro-Luminescence) elements, a reflective display panel module such as a DMD (Digital Micromirror Device) or LCoS (Liquid Crystal on Silicon, registered trademark), or a projection display device such as a scanning display device.
  • When the image display unit 20 is a projection display device, the display surface 21 corresponds to a screen on which an image is formed by the projection light from the projection display device.
  • As shown in FIG. 1B, an Ix axis is defined in the lateral direction of the display surface 21 as seen when the display surface 21 of the image display unit 20 is viewed from the front, and an Iy axis is defined in the vertical direction of the display surface 21.
  • the positive direction of the Ix axis represents the left direction of the display surface 21, and the positive direction of the Iy axis represents the upward direction of the display surface 21.
  • the Ix-axis positive direction on the display surface 21 corresponds to, for example, the above-described x-axis positive direction, that is, the vehicle left direction in real space.
  • the Iy-axis positive direction on the display surface 21 corresponds to, for example, the above-described y-axis positive direction, that is, the vertical direction upper side (vertical upward direction) in real space.
  • the viewpoint position acquisition unit 40 includes, for example, a vehicle interior image acquisition unit 41 and a vehicle interior image analysis unit 42.
  • the viewpoint position acquisition unit 40 acquires the position 100 of the viewpoint of the user sitting in the driver's seat of the vehicle 1.
  • the position 100 of the viewpoint of the user sitting in the driver's seat of the vehicle 1 is also referred to as the user viewpoint position 100.
  • the viewpoint position acquisition unit 40 is configured to be able to acquire the user viewpoint position 100 in at least the y-axis direction.
  • the vehicle interior image acquisition unit 41 is, for example, a vehicle camera that captures an image of the vehicle interior.
  • the vehicle interior image acquisition unit 41 may be, for example, a common vehicle camera attached for the purpose of preventing vehicle theft or the like, or a vehicle camera dedicated to the vehicle display device 10.
  • The vehicle interior image acquisition unit 41 preferably captures the user viewpoint position 100 from vertically below it, and may be attached to the dashboard 4 or the like, for example.
  • the vehicle interior image acquisition unit 41 is preferably capable of infrared imaging so that the user viewpoint position 100 can be acquired even when the vehicle interior is dark.
  • the vehicle interior image acquisition unit 41 outputs the acquired vehicle interior image to the vehicle interior image analysis unit 42, for example.
  • The vehicle interior image analysis unit 42 analyzes the input vehicle interior image using, for example, known image processing, pattern matching methods, and the like.
  • As a result of analyzing the input vehicle interior image, when the face of the user sitting in the driver's seat is included in the image, the vehicle interior image analysis unit 42 acquires the user viewpoint position 100 by specifying, for example, the coordinate (y) of the user viewpoint position 100 in real space.
  • The vehicle interior image analysis unit 42 outputs the acquired user viewpoint position 100 to the image generation unit 30 via, for example, a bus 5 such as a CAN (Controller Area Network) bus.
  • the vehicle interior image analysis unit 42 may be provided, for example, in a vehicle camera, and the image generation unit 30 may include the function of the vehicle interior image analysis unit 42. Further, the image generation unit 30 may directly input the user viewpoint position 100 from the vehicle interior image analysis unit 42 without using the bus 5.
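  • The viewpoint-acquisition step above can be sketched as a single calibration function: a hypothetical face detector reports the pixel row of the user's face in the cabin image, and a linear calibration (all constants below are assumptions, not values from this publication) maps that row to a real-space viewpoint height y. With a dashboard camera looking up at the driver, a smaller row index (face higher in the image) corresponds to a higher viewpoint.

```python
def pixel_row_to_viewpoint_y(face_row_px, img_height_px=480,
                             y_at_top_m=1.40, y_at_bottom_m=1.00):
    """Map the detected face row in the cabin image to a real-space
    viewpoint height y (metres) by linear interpolation between two
    assumed calibration points (top row -> 1.40 m, bottom row -> 1.00 m)."""
    frac = face_row_px / (img_height_px - 1)  # 0 at top row, 1 at bottom row
    return y_at_top_m + frac * (y_at_bottom_m - y_at_top_m)
```

In the device described here, this value would play the role of the user viewpoint position 100 sent to the image generation unit 30.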
  • the front information acquisition unit (road shape information acquisition unit) 60 includes, for example, a front image acquisition unit 61 and a front image analysis unit 62.
  • The forward information acquisition unit 60 acquires information on the area ahead of the vehicle, such as the shape of the road in the vehicle front direction, position information of other vehicles and obstacles present in the vehicle front direction, and information on road signs in the vehicle front direction.
  • the front image acquisition unit 61 is, for example, a camera outside the vehicle that captures an image in front of the vehicle.
  • the front image acquisition unit 61 may be, for example, a shared vehicle camera used for a drive recorder or the like, or a vehicle camera dedicated to the vehicle display device 10.
  • The camera outside the vehicle may be a monocular camera, but is preferably a stereo camera in order to accurately acquire the distance between an object ahead of the vehicle and the host vehicle 1.
  • the camera outside the vehicle may be capable of infrared imaging so that an image ahead of the vehicle can be taken even when the vehicle front is dark.
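  • The preference for a stereo camera comes from the standard disparity relation Z = f·B/d: with a calibrated stereo pair, the distance to an object follows directly from its pixel disparity. The focal length and baseline below are illustrative assumptions, not parameters from this publication.

```python
def stereo_distance_m(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Distance Z = f * B / d for a rectified stereo pair:
    focal_px is the focal length in pixels, baseline_m the camera
    separation, disparity_px the horizontal pixel offset of the object
    between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or invalid disparity")
    return focal_px * baseline_m / disparity_px
```

Note the reciprocal relationship: halving the disparity doubles the estimated distance, which is why distant objects (small disparity) are harder to range accurately with a monocular fallback.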
  • the front image acquisition unit 61 outputs, for example, the acquired front image of the vehicle to the front image analysis unit 62.
  • the front image analysis unit 62 analyzes the input image ahead of the vehicle using, for example, known image processing, a pattern matching method, or the like.
  • The front image analysis unit 62 acquires road shape information (lanes, white lines, stop lines, pedestrian crossings, road width, number of lanes, intersections, curves, branch roads, and the like) by analyzing the input image ahead of the vehicle. The front image analysis unit 62 also acquires, by analyzing the input image ahead of the vehicle, obstacle information such as the positions and sizes of other vehicles and obstacles present ahead of the vehicle, their distances from the host vehicle 1, and their speeds relative to the host vehicle 1. The front image analysis unit 62 outputs the acquired forward information to the image generation unit 30 via, for example, the bus 5.
  • the front image analysis unit 62 may be provided, for example, in a camera outside the vehicle, and the image generation unit 30 may include the function of the front image analysis unit 62. Further, the image generation unit 30 may directly input the forward information including the road shape information from the forward image analysis unit 62 without using the bus 5.
  • The front information acquisition unit 60 may include a laser radar, a millimeter-wave radar, an ultrasonic sensor, or another known sensor instead of, or in combination with, the front image acquisition unit 61.
  • In that case, the front image analysis unit 62 may acquire the forward information described above by analyzing data output from the laser radar, millimeter-wave radar, ultrasonic sensor, or other known sensor instead of, or in combination with, the image ahead of the vehicle.
  • Although the vehicle interior image acquisition unit 41 and the front image acquisition unit 61 are illustrated as being attached to different places on the vehicle 1, this is not necessarily the case; the vehicle interior image acquisition unit 41 and the front image acquisition unit 61 may be attached to the same place on the vehicle 1.
  • Furthermore, the vehicle interior image acquisition unit 41 and the front image acquisition unit 61 may be provided in one and the same housing.
  • The road shape information acquisition unit 60, which acquires road shape information ahead of the vehicle 1 according to the present invention, may be replaced by a navigation device that can obtain road shape information ahead of the vehicle 1 based on the position information of the vehicle 1, either from map data it stores and reads itself or over a network, or it may be configured in combination with the image analysis means described above.
  • the image generation unit 30 includes a processing unit 31 and a storage unit 32.
  • The processing unit 31 includes, for example, one or more of a microprocessor, a microcontroller, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), and any other IC (Integrated Circuit).
  • The storage unit 32 has one or more memories capable of storing programs and/or data, such as rewritable RAM (Random Access Memory), read-only ROM (Read Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), and nonvolatile flash memory.
• The image generation unit 30 determines a use area 220, which is the part of the display area 210 of the display surface 21 of the image display unit 20 that is used for displaying an image. Specifically, the use area 220 is the range of pixels 22 used for displaying an image within the display area 210, which consists of all of the pixels 22 of the liquid crystal panel 21.
• The storage unit 32 of the image generation unit 30 stores a table in which user viewpoint positions 100 are associated with parameters for determining the use area 220 corresponding to each user viewpoint position 100. The processing unit 31 refers to this table, whereby the image generation unit 30 determines the use area 220 corresponding to the input user viewpoint position 100.
• Alternatively, the storage unit 32 of the image generation unit 30 stores an arithmetic expression for determining the use area 220 corresponding to the user viewpoint position 100, and the image generation unit 30 determines the use area 220 corresponding to the input user viewpoint position 100 by having the processing unit 31 evaluate the expression. The relationship between the user viewpoint position 100 and the use area 220 corresponding to it will be described later.
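The table-lookup variant described above can be sketched as follows. This is a minimal illustration, not part of the patent: the calibration values, the function name `use_area_for_viewpoint`, and the use of linear interpolation between stored entries are all assumptions made for the sketch.

```python
from bisect import bisect_left

# Hypothetical calibration table: vertical user viewpoint offset (mm, relative
# to the reference user viewpoint position 100r) -> (Iy offset of the use
# area 220 in pixels, Iy length of the use area 220 in pixels).
VIEWPOINT_TABLE = [
    (-40.0, (40, 300)),   # viewpoint below the reference position
    (0.0,   (80, 340)),   # reference user viewpoint position 100r
    (40.0,  (120, 380)),  # viewpoint above the reference position
]

def use_area_for_viewpoint(viewpoint_y: float) -> tuple[int, int]:
    """Return (Iy offset, Iy length) of the use area by linearly
    interpolating the table, clamping outside the calibrated range."""
    keys = [k for k, _ in VIEWPOINT_TABLE]
    if viewpoint_y <= keys[0]:
        return VIEWPOINT_TABLE[0][1]
    if viewpoint_y >= keys[-1]:
        return VIEWPOINT_TABLE[-1][1]
    i = bisect_left(keys, viewpoint_y)
    (y0, (off0, len0)), (y1, (off1, len1)) = VIEWPOINT_TABLE[i - 1], VIEWPOINT_TABLE[i]
    t = (viewpoint_y - y0) / (y1 - y0)
    return (round(off0 + t * (off1 - off0)), round(len0 + t * (len1 - len0)))
```

The same interface could instead evaluate a stored arithmetic expression, matching the alternative the passage describes; the table form is shown only because it is the first variant mentioned.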
• In accordance with the road shape information about the road ahead of the vehicle input from the front information acquisition unit 60, the image generation unit 30 corrects the size of the use area 220 so that the upper end of the visually recognized virtual image region 300 is brought closer to the lower end, that is, so that the region is shortened. The processing for correcting the size of the use area 220 in accordance with the road shape will be described later.
  • the projection unit 50 projects the image displayed by the image display unit 20 toward the light transmissive member 2 such as the front window shield 2 of the vehicle 1.
  • the light 80 constituting the projected image is reflected by the front window shield 2 into the vehicle interior.
  • the light 80 constituting the image is also referred to as image light 80.
  • the projection unit 50 projects an image so that the image light 80 reflected by the front window shield 2 enters toward the user viewpoint position 100.
  • the light transmissive member 2 of the vehicle 1 may be a combiner provided in the vehicle 1.
• When the image light 80 enters the user viewpoint position 100, a user sitting in the driver's seat visually recognizes a virtual image 310 (see FIG. 2) on a virtual image region 300 generated on the vehicle front side of the front window shield 2. The user can visually recognize the virtual image 310 on the virtual image region 300 in a state where the virtual image region 300 overlaps at least part of the scenery seen through the front window shield 2. The virtual image 310 visually recognized here is a virtual image of the image displayed on the display surface 21 of the image display unit 20.
  • the projection unit 50 houses, for example, an optical system such as a plane mirror 54 and a concave mirror 55 and an actuator 56 inside the housing 51.
• The housing 51 includes, for example, an upper case 52 and a lower case 53 that are arranged in the dashboard 4 of the vehicle 1 and are formed of a black light-shielding synthetic resin or the like.
  • An upper case opening 52a is provided substantially in the middle of the upper case 52 in the z-axis direction.
  • the upper case opening 52a is covered with a transparent cover 57 formed of, for example, a transparent translucent synthetic resin.
• A lower case opening 53a is provided, for example, on the vehicle rear side of the lower case 53.
  • the lower case opening 53 a is provided in the lower case 53 so that, for example, image light 80 emitted from the display surface 21 of the image display unit 20 attached to the outside of the housing 51 can enter.
  • the flat mirror 54 is attached to the vehicle rear side of the lower case 53 via an attachment member (not shown), for example.
• The mounting position and the mounting angle of the flat mirror 54 are fixed so that, for example, the image light 80 emitted from the display surface 21 and entering from the lower case opening 53a is reflected toward the front of the vehicle.
  • the concave mirror 55 is attached to the front side of the vehicle from the plane mirror 54 of the lower case 53 via an actuator 56, for example.
  • the mounting angle of the concave mirror 55 can be rotated by the actuator 56, for example, with the x axis as a rotation axis.
• The position of the concave mirror 55 is fixed so that the image light 80 reflected by the plane mirror 54 is incident on it, and its mounting angle is finely adjusted so that the incident image light 80 is reflected toward the front window shield 2.
• When the mounting angle of the concave mirror 55 is adjusted, the table or arithmetic expression stored in the storage unit 32 of the image generation unit 30 for determining the use area 220 corresponding to the user viewpoint position 100 is corrected accordingly.
  • the actuator 56 includes, for example, a motor, a speed reduction mechanism, a concave mirror rotating member, and a support member for the concave mirror 55, all of which are not shown.
  • the actuator 56 is attached to the lower case 53 on the lower side in the vertical direction of the concave mirror 55 via an attachment member (not shown).
  • the actuator 56 rotates the motor in accordance with a signal input from an actuator control unit (not shown), decelerates the rotation of the motor by the speed reduction mechanism, transmits it to the concave mirror rotating member, and rotates the concave mirror 55.
  • the actuator 56 is not necessarily provided.
  • a light shielding portion 52b is provided between the upper case opening 52a and the plane mirror 54.
• The light shielding portion 52b is provided, for example, to prevent light entering from the upper case opening 52a from outside the housing 51 from traveling to the image display unit 20.
  • the example of the structure of the projection unit 50 described with reference to FIG. 1C is merely an example, and does not limit the structure of the projection unit 50 of the vehicle display device 10 at all.
  • FIG. 2 shows an example of a landscape and a virtual image 310 that a user sitting in the driver's seat of the vehicle 1 can see through the front window shield 2.
  • a three-lane road (road 91) extending in front of the vehicle and another vehicle (front vehicle 92) existing in front of the vehicle are shown as examples of the scenery that can be seen through the front window shield 2.
  • the superimposed object 90 on which the virtual image 310 is superimposed is a road 91 and a forward vehicle 92.
• The virtual image 310 includes a navigation mark 311 that is superimposed on the road 91 and visually recognized by the user, and a notification mark 312 that is superimposed on the forward vehicle 92 and visually recognized by the user.
• The region 300 corresponding to the use area 220 on the display surface 21 of the image display unit 20, that is, the region used for displaying the virtual image 310, is also referred to as the virtual image region 300. In other words, the virtual image region 300 is the region in which the user can visually recognize the virtual image 310.
  • the Ix-axis positive direction on the display surface 21 of the image display unit 20 in FIG. 1B corresponds to, for example, the x-axis positive direction in the virtual image region 300, that is, the vehicle left direction.
  • the Iy-axis positive direction on the display surface 21 of the image display unit 20 in FIG. 1B corresponds to, for example, the y-axis positive direction in the virtual image region 300, that is, the upper side in the vertical direction.
• The operation of the vehicle display device 10 starts, for example, when the power of the vehicle 1 is turned on, when an engine (not shown) is driven, or after a predetermined waiting time has passed since the power of the vehicle 1 was turned on or the engine was driven.
• In step S01, the forward information acquisition unit 60 acquires forward information (road shape information and obstacle information). In step S02, the viewpoint position acquisition unit 40 acquires the user viewpoint position 100. Note that step S01 and step S02 need not be executed in this order; the order may be changed. In step S03, the image generation unit 30 generates an image including, for example, a notification mark, a navigation mark, and other marks according to the forward information acquired by the forward information acquisition unit 60 in step S01. In step S04, the image generation unit 30 determines the use area 220 in the display area 210 of the display surface 21 of the image display unit 20 according to the user viewpoint position 100 acquired by the viewpoint position acquisition unit 40 in step S02. Note that step S03 and step S04 also need not be executed in this order; the order may be changed.
• In step S04, the image generation unit 30 also corrects the size of the use area 220 according to the road shape information acquired by the front information acquisition unit 60 in step S01. In step S05, the image display unit 20 displays the image generated in step S03 using the pixels 22 in the use area 220 determined by the image generation unit 30 in step S04. After the process of step S05 is executed, the flow returns to Start. A predetermined waiting time may be inserted after the execution of step S05 before the flow returns to Start.
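The flow of steps S01 through S05 can be sketched as one display cycle. This is an illustrative outline only; the unit objects and every method name below (`acquire`, `generate`, `determine_use_area`, `correct_for_road_shape`, `show`) are hypothetical stand-ins for the patent's functional blocks, not a real API.

```python
import time

def display_cycle(front_info_unit, viewpoint_unit, image_gen, image_display,
                  wait_s: float = 0.0) -> None:
    """One pass through the flow of FIG. 3 (hypothetical interfaces)."""
    forward_info = front_info_unit.acquire()        # S01: road shape + obstacles
    viewpoint = viewpoint_unit.acquire()            # S02: user viewpoint position 100
    image = image_gen.generate(forward_info)        # S03: marks per forward info
    use_area = image_gen.determine_use_area(viewpoint)  # S04: use area 220
    # S04 (continued): shrink the use area per the road shape ahead
    use_area = image_gen.correct_for_road_shape(use_area, forward_info)
    image_display.show(image, use_area)             # S05: display within use area 220
    if wait_s:                                      # optional wait before returning to Start
        time.sleep(wait_s)
```

As the passage notes, S01/S02 and S03/S04 could be swapped, and S02/S04 could be skipped on repeat cycles while the driver is unchanged.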
• Next, referring to FIGS. 4A, 4B, 4C, 4D, 4E, and 5, the relationship between the user viewpoint position 100 and the use area 220 corresponding to the user viewpoint position 100 will be described. On the left side of FIGS. 4A to 4E, coordinate axes representing the user viewpoint position 100 on the y axis and the z axis in real space are shown. On the right side, the use area 220 used for displaying an image, as determined by the image generation unit 30 in the display area 210 of the display surface 21 of the image display unit 20 corresponding to that user viewpoint position 100, is shown.
• FIG. 5 is a schematic diagram for explaining the relationship among the user viewpoint position 100 in the vertical direction, the virtual image region 300, and the range of distance on the road 91 in the landscape over which the virtual image region 300 is superimposed in the vehicle display device 10. In FIG. 5, the amount of change in the user viewpoint position 100 is exaggerated so that this relationship can be easily understood; in reality, the vertical distances between the user viewpoint position 100r and the user viewpoint position 100u, and between the user viewpoint position 100r and the user viewpoint position 100d, are smaller than shown.
• Although the virtual image region 300r, the virtual image region 300u, and the virtual image region 300d are drawn in FIG. 5 without overlapping portions, in reality the virtual image region 300r and the virtual image region 300u, and the virtual image region 300r and the virtual image region 300d, partially overlap.
  • the range of the distance on the road 91 in the landscape where the virtual image region 300 is superimposed is also referred to as a superimposed distance range 400.
• FIG. 5 shows the virtual image region 300r at the user viewpoint position 100r shown in FIG. 4A, the virtual image region 300u at the user viewpoint position 100u shown in FIG. 4B, and the virtual image region 300d at the user viewpoint position 100d shown in FIG. 4C. FIG. 5 also shows the superimposition distance range 400r, which is the range of distance on the road 91 of the scenery that overlaps the virtual image region 300r among the scenery seen through the front window shield 2 at the user viewpoint position 100r; the superimposition distance range 400u, which is the range of distance on the road 91 of the scenery that overlaps the virtual image region 300u at the user viewpoint position 100u; and the superimposition distance range 400d, which is the range of distance on the road 91 of the scenery that overlaps the virtual image region 300d at the user viewpoint position 100d.
• The user viewpoint position 100r shown in FIG. 4A is located at the intersection of the y axis and the z axis of the coordinate axes shown in FIG. 4A. The user viewpoint position 100r shown in FIG. 4A is also referred to as the reference user viewpoint position 100r.
• When the user viewpoint position 100 acquired in step S02 is the reference user viewpoint position 100r, the image generation unit 30 determines, in step S04 shown in FIG. 3, the use area 220 in the display area 210 of the display surface 21 of the image display unit 20 to be the use area 220r shown in FIG. 4A.
  • the use area 220r corresponding to the reference user viewpoint position 100r illustrated in FIG. 4A is also referred to as a reference use area 220r.
• The user viewpoint position 100u shown in FIG. 4B is an example of a user viewpoint position 100 located vertically above the reference user viewpoint position 100r. When the user viewpoint position 100 acquired in step S02 is the user viewpoint position 100u, the use area 220 is determined to be the use area 220u shown in FIG. 4B.
• The use area 220u shown in FIG. 4B is located on the Iy-axis positive direction side compared with the reference use area 220r. The length 221u in the Iy-axis direction of the use area 220u is longer than the length 221r in the Iy-axis direction of the reference use area 220r. As a result, as shown in FIG. 5, the virtual image region 300u corresponding to the use area 220u is positioned vertically above the virtual image region 300r corresponding to the reference use area 220r in real space, and its vertical length in real space is longer. The use area 220u overlaps part of the reference use area 220r.
• That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically upward, the position of the use area 220 on the display surface 21 is determined to be further on the Iy-axis positive direction side, and the length of the use area 220 in the Iy-axis direction is determined to be longer. As a result, as the user viewpoint position 100 moves vertically upward, the virtual image region 300 is positioned vertically higher in real space and its vertical length in real space becomes longer.
• The user viewpoint position 100d shown in FIG. 4C is an example of a user viewpoint position 100 located vertically below the reference user viewpoint position 100r. When the user viewpoint position 100 acquired in step S02 is the user viewpoint position 100d, the use area 220 is determined to be the use area 220d shown in FIG. 4C.
• The use area 220d shown in FIG. 4C is located on the Iy-axis negative direction side compared with the reference use area 220r. The length 221d in the Iy-axis direction of the use area 220d is shorter than the length 221r in the Iy-axis direction of the reference use area 220r. As a result, as shown in FIG. 5, the virtual image region 300d corresponding to the use area 220d is positioned vertically below the virtual image region 300r corresponding to the reference use area 220r in real space, and its vertical length in real space is shorter. The use area 220d overlaps part of the reference use area 220r.
• That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically downward, the position of the use area 220 on the display surface 21 is determined to be further on the Iy-axis negative direction side, and the length of the use area 220 in the Iy-axis direction is determined to be shorter. As a result, as the user viewpoint position 100 moves vertically downward, the virtual image region 300 is positioned vertically lower in real space and its vertical length in real space becomes shorter.
• As a result, the superimposition distance range 400r, the superimposition distance range 400u, and the superimposition distance range 400d coincide. Note that the amount of vertical change of the virtual image region 300 is smaller than the amount of vertical change of the user viewpoint position 100. As the user viewpoint position 100 moves vertically upward, the angle between the horizontal plane and the line of sight along which the user views the virtual image region 300 increases; conversely, as the user viewpoint position 100 moves vertically downward, that angle decreases.
• Therefore, in order to keep the superimposition distance range 400 constant, it is necessary, as the user viewpoint position 100 moves vertically upward, to move the vertical position of the virtual image region 300 upward and also to lengthen its vertical extent, and, as the user viewpoint position 100 moves vertically downward, to move the vertical position of the virtual image region 300 downward and also to shorten its vertical extent.
• By appropriately determining the position and the Iy-axis length of the use area 220 in this way, the superimposition distance range 400 can be kept constant without being influenced by the user viewpoint position 100 in the vertical direction. Keeping the superimposition distance range 400 constant makes it possible to prevent the virtual image 310 visually recognized by the user from shifting relative to the object in the landscape on which it is superimposed.
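The geometric claim above (a higher viewpoint needs a virtual image region that is vertically longer to cover the same superimposition distance range 400) can be checked with a simple pinhole sketch. The flat-road model, the function name `virtual_image_span`, and all numeric values are illustrative assumptions, not figures from the patent.

```python
def virtual_image_span(eye_height: float, plane_dist: float,
                       z_near: float, z_far: float) -> tuple[float, float]:
    """Vertical extent (bottom, top), in metres relative to the eye, that a
    virtual image region on a plane `plane_dist` ahead of the eye must cover
    so that it overlays a flat road from z_near to z_far ahead.  A ray from
    the eye to the road point (z, -eye_height) crosses the plane at
    y = -eye_height * plane_dist / z."""
    top = -eye_height * plane_dist / z_far      # ray to the far road point
    bottom = -eye_height * plane_dist / z_near  # ray to the near road point
    return bottom, top

# Raising the eye (viewpoint moving vertically upward) requires a taller
# region to keep the same 10 m - 50 m superimposition distance range:
b1, t1 = virtual_image_span(1.2, 2.5, 10.0, 50.0)
b2, t2 = virtual_image_span(1.4, 2.5, 10.0, 50.0)
assert (t2 - b2) > (t1 - b1)
```

In a ground-fixed frame the region's lower edge (eye height plus `bottom`) also rises with the eye, matching the passage's statement that the region moves vertically upward as well as lengthening.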
• The user viewpoint position 100f shown in FIG. 4D is an example of a user viewpoint position 100 located further in the vehicle front direction than the reference user viewpoint position 100r. When the user viewpoint position 100 acquired in step S02 is the user viewpoint position 100f, the use area 220 is determined to be the use area 220f shown in FIG. 4D.
• Both the length 222f in the Ix-axis direction and the length 221f in the Iy-axis direction of the use area 220f shown in FIG. 4D are shorter than the length 222r in the Ix-axis direction and the length 221r in the Iy-axis direction of the reference use area 220r. As a result, the virtual image region 300 corresponding to the use area 220f is shorter than the virtual image region 300 corresponding to the reference use area 220r in both the vehicle left-right direction length and the vertical length in real space.
• That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves in the vehicle front direction, both the Ix-axis length and the Iy-axis length of the use area 220 on the display surface 21 are determined to be shorter. As a result, the virtual image region 300 becomes shorter in both the vehicle left-right direction length and the vertical length in real space.
• The user viewpoint position 100b shown in FIG. 4E is an example of a user viewpoint position 100 located further in the vehicle rear direction than the reference user viewpoint position 100r. When the user viewpoint position 100 acquired in step S02 is the user viewpoint position 100b, the image generation unit 30 determines, in step S04 shown in FIG. 3, the use area 220 in the display area 210 to be the use area 220b shown in FIG. 4E.
• Both the length 222b in the Ix-axis direction and the length 221b in the Iy-axis direction of the use area 220b shown in FIG. 4E are longer than the length 222r in the Ix-axis direction and the length 221r in the Iy-axis direction of the reference use area 220r. As a result, the virtual image region 300 corresponding to the use area 220b is longer than the virtual image region 300 corresponding to the reference use area 220r in both the vehicle left-right direction length and the vertical length in real space.
• That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves in the vehicle rear direction, both the Ix-axis length and the Iy-axis length of the use area 220 on the display surface 21 are determined to be longer. As a result, the virtual image region 300 becomes longer in both the vehicle left-right direction length and the vertical length in real space.
• As the distance between the user viewpoint position 100 and the virtual image region 300 (the distance in the vehicle front-rear direction) becomes shorter, the range of the landscape that overlaps the virtual image region 300 among the scenery seen from the user viewpoint position 100 through the front window shield 2 becomes wider; conversely, as this distance becomes longer, that range becomes narrower.
• Therefore, in order to keep the range of the landscape on which the virtual image region 300 is superimposed constant, it is necessary to shorten both the vehicle left-right direction length and the vertical length of the virtual image region 300 as the user viewpoint position 100 moves in the vehicle front direction, and to lengthen both as the user viewpoint position 100 moves in the vehicle rear direction. By appropriately determining the Ix-axis length and the Iy-axis length of the use area 220, the range of the superimposed landscape can be kept constant without being influenced by the user viewpoint position 100 in the vehicle front-rear direction. Since the range of the superimposed landscape is constant, it is possible to prevent the virtual image 310 visually recognized by the user from shifting relative to the object in the landscape on which it is superimposed.
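The front-rear scaling above follows from similar triangles: the width the virtual image region must have is proportional to its distance from the eye, for a fixed landscape strip further ahead. The sketch below is illustrative only; the function name `required_region_width` and all numeric values are assumptions, and it holds the landscape distance fixed while varying only the eye-to-region distance.

```python
def required_region_width(plane_dist: float, scene_dist: float,
                          scene_width: float) -> float:
    """Width the virtual image region must have on a plane `plane_dist`
    ahead of the eye so that it just covers a landscape strip of width
    `scene_width` located `scene_dist` ahead (similar triangles)."""
    return scene_width * plane_dist / scene_dist

# Moving the viewpoint toward the windshield (smaller plane_dist) shrinks the
# required region; moving it rearward (larger plane_dist) enlarges it:
w_fwd = required_region_width(2.3, 30.0, 6.0)   # viewpoint moved forward
w_ref = required_region_width(2.5, 30.0, 6.0)   # reference viewpoint
w_back = required_region_width(2.7, 30.0, 6.0)  # viewpoint moved rearward
assert w_fwd < w_ref < w_back
```

The same ratio applies to the vertical dimension, which is why both the Ix-axis and Iy-axis lengths of the use area 220 change together in this case.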
• Next, referring to FIGS. 6A, 6B, 6C, and 7, the relationship between the user viewpoint position 100 and the use area 220 corresponding to the forward information including the road shape information acquired by the forward information acquisition unit 60 will be described. On the left side of FIGS. 6A, 6B, and 6C, coordinate axes representing the user viewpoint position 100 on the y axis and the z axis in real space are shown. On the right side of FIGS. 6A, 6B, and 6C, the use area 220 used for displaying an image, as determined by the image generation unit 30 in the display area 210 of the display surface 21 of the image display unit 20 corresponding to that user viewpoint position 100, is shown.
• The image generation unit 30 corrects the use area 220 determined from the user viewpoint position 100 into a correction use area 221, in accordance with the forward information including the road shape acquired by the front information acquisition unit 60 in step S01, so that the upper end of the virtual image region 300 is brought closer to the lower end, that is, so that the region is shortened. Specifically, when the image generation unit 30 determines from the road shape information acquired in step S01 that the road shape ahead of the vehicle 1 is a curve, a T-junction, or the like, it sets the part of the use area 220 corresponding to the upper end side of the virtual image region 300 that does not overlap the road as a non-use area 222 in which no image is displayed, and sets the remaining part as the correction use area 221 in which an image can be displayed.
• FIG. 7 is a schematic diagram for explaining the relationship among the user viewpoint position 100 in the vertical direction, the corrected virtual image region 301 corrected according to the road shape ahead of the vehicle 1, and the range of distance on the road 91 in the landscape over which the corrected virtual image region 301 is superimposed, in the vehicle display device 10. In FIG. 7, the amount of change in the user viewpoint position 100 is exaggerated so that this relationship can be easily understood.
• FIG. 7 shows the corrected virtual image region 301r at the user viewpoint position 100r shown in FIG. 6A, the corrected virtual image region 301u at the user viewpoint position 100u shown in FIG. 6B, and the corrected virtual image region 301d at the user viewpoint position 100d shown in FIG. 6C. FIG. 7 also shows the corrected superimposition distance range 401r, which is the range of distance on the road 91 of the scenery superimposed on the corrected virtual image region 301r among the scenery seen through the front window shield 2 at the user viewpoint position 100r; the corrected superimposition distance range 401u, which is the range of distance on the road 91 of the scenery superimposed on the corrected virtual image region 301u at the user viewpoint position 100u; and the corrected superimposition distance range 401d, which is the range of distance on the road 91 of the scenery superimposed on the corrected virtual image region 301d at the user viewpoint position 100d. Further, FIG. 7 shows the non-superimposition distance range 402r, which is the range of distance on the road 91 of the scenery in which no virtual image is displayed because of the road shape ahead of the vehicle 1, within the virtual image region 300r determined from the user viewpoint position 100r; the non-superimposition distance range 402u, which is the corresponding range within the virtual image region 300u determined from the user viewpoint position 100u; and the non-superimposition distance range 402d, which is the corresponding range within the virtual image region 300d determined from the user viewpoint position 100d.
  • the relationship between the user viewpoint position 100 in the vertical direction and the use area 220 corresponding to the road shape ahead of the vehicle 1 will be described.
• When the user viewpoint position 100 acquired in step S02 shown in FIG. 3 is the reference user viewpoint position 100r, the image generation unit 30 sets the area 222r of the use area 220r corresponding to the part of the virtual image region 300r that does not overlap the road as a non-use area 222r in which no image is displayed, and sets the other area 221r as a correction use area 221r in which an image can be displayed. The correction use area 221r corresponding to the reference user viewpoint position 100r shown in FIG. 6A is also referred to as the reference correction use area 221r, and the non-use area 222r is also referred to as the reference non-use area 222r.
• The user viewpoint position 100u shown in FIG. 6B is an example of a user viewpoint position 100 located vertically above the reference user viewpoint position 100r. When the user viewpoint position 100 acquired in step S02 shown in FIG. 3 is the user viewpoint position 100u, the area 222u of the use area 220u corresponding to the part of the virtual image region 300u that is not to be superimposed on the road, based on the forward information indicating the road shape ahead of the vehicle 1 acquired in step S01, is set as a non-use area 222u in which no image is displayed, and the other area 221u is set as a correction use area 221u in which an image can be displayed.
• The length 221ua in the Iy-axis direction of the correction use area 221u shown in FIG. 6B is longer than the length 221ra in the Iy-axis direction of the reference correction use area 221r. As a result, the vertical length of the corrected virtual image region 301u in real space is longer. That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically upward, the position of the correction use area 221 on the display surface 21 is determined to be further on the Iy-axis positive direction side, and its length in the Iy-axis direction is determined to be longer. Consequently, as the user viewpoint position 100 moves vertically upward, the corrected virtual image region 301 is positioned vertically higher in real space and its vertical length in real space increases.
• The user viewpoint position 100d shown in FIG. 6C is an example of a user viewpoint position 100 located vertically below the reference user viewpoint position 100r. When the user viewpoint position 100 acquired in step S02 shown in FIG. 3 is the user viewpoint position 100d, the area 222d of the use area 220d corresponding to the part of the virtual image region 300d that is not to be superimposed on the road, based on the forward information indicating the road shape ahead of the vehicle 1 acquired in step S01, is set as a non-use area 222d in which no image is displayed, and the other area 221d is set as a correction use area 221d in which an image can be displayed.
• The length 221da in the Iy-axis direction of the correction use area 221d shown in FIG. 6C is shorter than the length 221ra in the Iy-axis direction of the reference correction use area 221r. As a result, the vertical length of the corrected virtual image region 301d in real space is shorter. That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically downward, the position of the correction use area 221 on the display surface 21 is determined to be further on the Iy-axis negative direction side, and its length in the Iy-axis direction is determined to be shorter. Consequently, as the user viewpoint position 100 moves vertically downward, the corrected virtual image region 301 is positioned vertically lower in real space and its vertical length in real space decreases.
• By determining the correction use area 221 in this way, the corrected superimposition distance range 401 can be kept constant without being influenced by the user viewpoint position 100 in the vertical direction.
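The correction into a correction use area 221 and a non-use area 222 can be sketched as trimming the upper rows of the use area 220. This is an illustrative assumption about the bookkeeping only: the function name `correct_use_area`, the pixel values, and the `road_end_ratio` parameter (the fraction of the virtual image region's height, from its lower end, that still overlaps the road before a curve or T-junction cuts it off) are all hypothetical.

```python
def correct_use_area(iy_offset: int, iy_length: int,
                     road_end_ratio: float) -> tuple[int, int, int]:
    """Split a use area, given by its Iy offset and Iy length in pixels,
    into a correction use area and a non-use area.  The lower end (the
    offset) is kept; shortening the length brings the upper end of the
    virtual image region closer to the lower end.
    Returns (offset, corrected length, non-use length)."""
    usable = round(iy_length * max(0.0, min(1.0, road_end_ratio)))
    return iy_offset, usable, iy_length - usable

# A curve that leaves only 75 % of the region over the road shortens the
# upper end; the trimmed rows become the non-use area 222:
assert correct_use_area(80, 340, 0.75) == (80, 255, 85)
```

A straight open road (`road_end_ratio = 1.0`) leaves the use area unchanged, matching the uncorrected behaviour described earlier.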
• As described above, the image generation unit 30 of the vehicle display device 10 of the present invention determines the use area 220, which is used for displaying an image on the display surface 21 of the image display unit 20, according to the user viewpoint position 100 acquired by the viewpoint position acquisition unit 40. As a result, compared with a vehicle display device that can adjust only the position of the virtual image region 300, for example by changing the projection angle of the concave mirror 55 of the projection unit 50, the vehicle display device 10 eliminates the shift, caused by changes in the user viewpoint position 100, of the virtual image 310 relative to the object in the landscape on which it is superimposed. Therefore, the vehicle display device 10 of the present invention can provide appropriate information to the user without being influenced by the user viewpoint position 100.
• The image generation unit 30 may determine the use area 220 only according to the user viewpoint position 100 in the vertical direction, or only according to the user viewpoint position 100 in the vehicle front-rear direction. However, a change in the user viewpoint position 100 in the vertical direction has a greater influence on the shift of the virtual image 310 relative to the object in the landscape on which it is superimposed than a change in the user viewpoint position 100 in the vehicle front-rear direction. Therefore, it is preferable that the image generation unit 30 determines the use area 220 according to at least the user viewpoint position 100 in the vertical direction.
  • the image generation unit 30 of the vehicle display device 10 corrects the length of the use area 220 according to the road shape information acquired by the road shape information acquisition unit 60, so that the upper end of the virtual image region 300 approaches its lower end and the vertical extent of the virtual image region 300 is shortened. This prevents the viewer from feeling discomfort caused by the virtual image 310 being displayed at a position deviating from the road 91 (for example, on a shoulder or a wall around the road), and the corrected overlapping distance range 401, the distance range over which the virtual image region 300 displaying the virtual image 310 overlaps the road 91, can be kept constant without being affected by the user viewpoint position 100 in the vertical direction.
  • steps S02 and S04 shown in FIG. 3 do not necessarily have to be executed every time.
  • steps S02 and S04 may be executed only when the flow shown in FIG. 3 is executed for the first time after the vehicle 1 is powered on; when the flow is executed for the second and subsequent times, the processing of steps S02 and S04 may be omitted. For example, as long as the user driving the vehicle 1 does not change, the user viewpoint position 100 in the vertical direction is unlikely to change significantly.
  • the use area 220 includes at least a first use area 230 corresponding to a first virtual image area (not shown) in the virtual image region 300, and a second use area 240 corresponding to a second virtual image area (not shown) that is visually recognized below the first virtual image area in the vertical direction. The image generation unit 30 may determine the position and size of the first use area 230 according to the user viewpoint position 100 in the vertical direction acquired by the viewpoint position acquisition unit 40, while determining only the position of the second use area 240, without changing its size, according to that same viewpoint position.
  • as a result, the vertical size in real space of the second virtual image area corresponding to the second use area 240 hardly changes. Even when the user's viewpoint position changes in the vertical direction, the information represented in the virtual image displayed in the second virtual image area can thus be prevented from becoming difficult to recognize.
  • alternatively, the size of the second use area 240 may also be changed according to the user viewpoint position 100 in the vertical direction acquired by the viewpoint position acquisition unit 40. In this case, the rate of change of the second use area 240 with respect to a change in the user viewpoint position 100 is set smaller than that of the first use area 230.
  • the image 250 displayed in the display area 210 includes a plurality of images 251, 252, and 253 arranged substantially along the Iy-axis direction from the Iy-axis positive direction side of the display surface 21.
  • when the size of the use area 220 in the Iy-axis direction is reduced and corrected to the corrected use area 221 according to the road shape information, only some of the images, for example the images 251 and 252, may be displayed in the corrected use area 221 as shown in FIG. 9B.
  • the vehicle display device of the present invention is suitable as a head-up display that is mounted on a moving body such as a vehicle and allows a viewer to visually recognize a virtual image.
  • SYMBOLS: 1 … vehicle, 2 … front windshield, 10 … vehicle display device, 20 … image display unit (liquid crystal panel module), 21 … display surface (liquid crystal panel), 30 … image generation unit, 40 … viewpoint position acquisition unit, 41 … vehicle interior image acquisition unit, 42 … vehicle interior image analysis unit, 50 … projection unit, 60 … front information acquisition unit (road shape information acquisition unit), 80 … image light, 100 … user viewpoint position, 210 … display area, 220 … use area, 300 … virtual image region, 310 … virtual image, 400 … overlapping distance range
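The two-region behavior described in the bullets above — the first use area 230 changing both position and size with the vertical viewpoint position while the second use area 240 only translates — can be sketched as follows. This is a minimal illustration; the function names, pixel coordinates, and scaling coefficients are assumptions made for the example, not values from this disclosure.

```python
# Hypothetical sketch of the use-area behavior described above: the first
# use area 230 changes both position and size with the vertical user
# viewpoint position, while the second use area 240 changes position only,
# so the information it shows stays legible. All names and coefficients
# are illustrative, not taken from the patent.

from dataclasses import dataclass


@dataclass
class UseArea:
    iy_bottom: float  # lower edge on the Iy axis (display-surface pixels)
    iy_height: float  # vertical length along the Iy axis


def layout_use_areas(viewpoint_y: float, ref_y: float = 1.2) -> tuple[UseArea, UseArea]:
    """Return (first_use_area, second_use_area) for a viewpoint height in metres.

    Illustrative rule: relative to a reference viewpoint height `ref_y`,
    a lower viewpoint shifts both areas down the Iy axis and shrinks the
    first area; the second area keeps a fixed size.
    """
    dy = viewpoint_y - ref_y            # positive when the viewpoint is higher
    shift = 80.0 * dy                   # Iy-axis translation in px per metre
    first = UseArea(iy_bottom=200.0 + shift,
                    iy_height=max(40.0, 120.0 + 100.0 * dy))  # scales with dy
    second = UseArea(iy_bottom=60.0 + shift,                  # translates only
                     iy_height=50.0)                          # fixed size
    return first, second


hi_first, hi_second = layout_use_areas(1.4)
lo_first, lo_second = layout_use_areas(1.0)
print(hi_first.iy_height, lo_first.iy_height)    # first area shrinks as viewpoint drops
print(hi_second.iy_height, lo_second.iy_height)  # second area size unchanged
```

Under this sketch, lowering the viewpoint moves both areas toward the Iy-axis negative side, but only the first use area gets shorter, mirroring the behavior attributed to the first and second use areas above.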

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Instrument Panels (AREA)

Abstract

Provided is a vehicle display device capable of providing appropriate information to a user without being affected by a change in the user's viewpoint position. The vehicle display device 10 comprises: an image display unit 20 having a display surface 21 capable of displaying an image; an image generation unit 30 that generates the image to be displayed by the image display unit 20; a viewpoint position acquisition unit 40 that acquires the viewpoint position 100 of a user sitting in the driver's seat of a vehicle 1; a road shape information acquisition unit 60 that acquires information on the shape of the road ahead of the vehicle 1; and a projection unit 50 that projects the image onto the front windshield 2 of the vehicle 1 so that the user sitting in the driver's seat can visually recognize a virtual image 310. The image generation unit 30 determines the position and size of a use area 220, the part of the display surface 21 used to display the image, according to the user viewpoint position 100 acquired by the viewpoint position acquisition unit 40, and corrects the length of the use area 220 according to the road shape acquired by the road shape information acquisition unit 60.

Description

Vehicle display device

 The present invention relates to a vehicle display device. In particular, the present invention relates to a vehicle display device that can provide appropriate information to a user without being affected by a change in the position of the user's viewpoint.

 As a vehicle display device, there is a so-called head-up display, which projects a display image onto a translucent member such as the front windshield of a vehicle and uses the light of the display image reflected by the front windshield to let a user sitting in the driver's seat visually recognize a virtual image. In such a vehicle display device, the virtual image is visually recognized by the user sitting in the driver's seat as if it were formed on the vehicle traveling direction side (vehicle front side) of the front windshield. A general configuration of such a vehicle display device includes, for example, an image display unit that displays the display image, and a projection unit composed of an optical system including a concave mirror that projects the display image onto the front windshield of the vehicle.

 A user sitting in the driver's seat of a vehicle equipped with such a vehicle display device can visually recognize, superimposed on the scenery seen through the front windshield, a virtual image that conveys information such as the presence of other vehicles or obstacles on the road ahead. As the position at which the virtual image is visually recognized moves upward in the vertical direction of the front windshield, the virtual image is superimposed on a more distant part of the scenery seen through the front windshield. Conversely, as the position at which the virtual image is visually recognized moves downward in the vertical direction of the front windshield, the virtual image is superimposed on a nearer part of that scenery.

 Here, the position of the viewpoint of the user sitting in the driver's seat is not constant; it depends on the user's sitting height, seating posture, and so on. For example, when the position at which the display image is projected is fixed, the higher the viewpoint of the user sitting in the driver's seat, the nearer the part of the scenery seen through the front windshield on which the virtual image is superimposed. Because a change in the viewpoint position of the user sitting in the driver's seat thus shifts the object in the landscape on which the virtual image is superimposed, the user may be given a sense of incongruity.

 To address this, for example, Patent Document 1 discloses a head-up display device (vehicle display device) that adjusts the projection direction of an optical system including a concave mirror of a projection unit according to the vertical position of the viewpoint of the user sitting in the driver's seat of the vehicle. The vehicle display device disclosed in Patent Document 1 includes a concave mirror actuator that adjusts the projection angle of the concave mirror of the projection unit, and a viewpoint detection camera that acquires the position of the viewpoint of the user sitting in the driver's seat.

 The vehicle display device disclosed in Patent Document 1 controls the concave mirror actuator so that the display image is projected onto the vertically upper side of the front windshield when the viewpoint position of the user sitting in the driver's seat, acquired by the viewpoint detection camera, is high. Conversely, it controls the concave mirror actuator so that the display image is projected onto the vertically lower side of the front windshield when that viewpoint position is low. The vehicle display device of Patent Document 1 is thus configured to prevent the object in the scenery seen through the front windshield on which the virtual image is superimposed from shifting greatly even when the viewpoint position of the user sitting in the driver's seat changes.

JP 2014-210537 A

 However, the present inventor has recognized that the vehicle display device described in Patent Document 1 may give the user a sense of incongruity when the position of the user's viewpoint changes. This point will be described below with reference to FIG. 10. FIG. 10 is a schematic diagram for explaining, in the vehicle display device described in Patent Document 1, the relationship between the user's viewpoint position, the virtual image visually recognized by the user, and the range of distances on the road surface of the landscape on which the virtual image is superimposed. To make this relationship easier to understand, FIG. 10 exaggerates the amount of change in the user's viewpoint position; specifically, the vertical distances between the user viewpoint position 101u, the user viewpoint position 101r, and the user viewpoint position 101d shown in FIG. 10 are actually smaller than depicted. In the coordinate axes shown in FIG. 10, the z-axis positive direction represents the vehicle front direction, the y-axis positive direction represents the vertically upward direction, and the x-axis positive direction (perpendicular to the plane of the drawing) represents the vehicle left direction.

 FIG. 10 shows three viewpoint positions, the user viewpoint position 101u, the user viewpoint position 101r, and the user viewpoint position 101d, as examples of the viewpoint position of the user sitting in the driver's seat of the vehicle. The virtual image region 301u shown in FIG. 10 is the region in which the virtual image visually recognized by the user is displayed when, for example, the user's viewpoint is at the user viewpoint position 101u and the projection angle of the display image has been adjusted accordingly by the vehicle display device described in Patent Document 1. Similarly, the virtual image region 301r is the region in which the virtual image visually recognized by the user is displayed when the user's viewpoint is at the user viewpoint position 101r, and the virtual image region 301d is the corresponding region when the user's viewpoint is at the user viewpoint position 101d. In the vehicle display device described in Patent Document 1, when the viewpoint position of the user sitting in the driver's seat changes, only the direction in which the display image is projected is changed; the area in which the display device (image display unit) displays the display image is not changed. Therefore, the vertical sizes of the virtual image region 301u, the virtual image region 301r, and the virtual image region 301d are all the same.

 The overlapping distance range 401u shown in FIG. 10 is the range of distances on the road 91 over which the virtual image region 301u is superimposed on the scenery seen through the front windshield 2 when, for example, the user's viewpoint is at the user viewpoint position 101u. Similarly, the overlapping distance range 401r is the range of distances on the road 91 over which the virtual image region 301r is superimposed when the user's viewpoint is at the user viewpoint position 101r, and the overlapping distance range 401d is the range of distances on the road 91 over which the virtual image region 301d is superimposed when the user's viewpoint is at the user viewpoint position 101d.

 As in the example shown in FIG. 10, the amount of vertical change of the virtual image is smaller than the amount of vertical change of the user's viewpoint position. Consequently, as the user viewpoint position moves upward in the vertical direction, the angle between the horizontal plane and the line of sight along which the user views the virtual image increases, while as the user viewpoint position moves downward, that angle decreases. Therefore, the length of the overlapping distance range 401u at the user viewpoint position 101u, which is higher than the user viewpoint position 101r, is smaller than the length of the overlapping distance range 401r at the user viewpoint position 101r. Likewise, the length of the overlapping distance range 401d at the user viewpoint position 101d, which is lower than the user viewpoint position 101r, is larger than the length of the overlapping distance range 401r. Although FIG. 10 shows only the vehicle-rear-side ends of the overlapping distance ranges 401u, 401r, and 401d as varying, in practice the positions of the vehicle-front-side ends can also vary.
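The sight-line geometry just described can be checked with a short calculation. In this sketch the eye sits at a given height above the road, the virtual image stands on a vertical plane a fixed distance ahead, and each point of the virtual image is projected onto the road where the corresponding line of sight meets the ground. All numeric values and function names are illustrative assumptions, not figures from Patent Document 1.

```python
def road_distance(eye_h: float, plane_d: float, image_y: float) -> float:
    """Distance ahead at which the sight line from an eye at height `eye_h`
    (metres above the road), through a virtual-image point at height `image_y`
    on a vertical plane `plane_d` metres ahead, intersects the road surface.
    From similar triangles: z = plane_d * eye_h / (eye_h - image_y).
    Requires image_y < eye_h (the sight line must slope downward)."""
    assert image_y < eye_h
    return plane_d * eye_h / (eye_h - image_y)


def overlap_range(eye_h: float, plane_d: float,
                  y_bottom: float, y_top: float) -> tuple[float, float]:
    """(near, far) ends of the overlapping distance range on the road:
    the bottom edge of the virtual image maps to the near end, the top
    edge to the far end."""
    return road_distance(eye_h, plane_d, y_bottom), road_distance(eye_h, plane_d, y_top)


# Reference viewpoint vs a higher viewpoint for which the projection angle
# raises the virtual image by a *smaller* amount than the eye moved, as in
# the device of Patent Document 1.
near_r, far_r = overlap_range(1.2, 10.0, 0.6, 0.9)  # reference viewpoint
near_u, far_u = overlap_range(1.4, 10.0, 0.7, 1.0)  # higher viewpoint
print(far_r - near_r)  # about 20 m at the reference viewpoint
print(far_u - near_u)  # about 15 m at the higher viewpoint: a shorter range
```

With these illustrative numbers the higher viewpoint steepens the depression angles and shortens the range on the road, which is exactly the 401u < 401r < 401d ordering described above.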

 As described above, in the vehicle display device described in Patent Document 1, a change in the viewpoint position of the user sitting in the driver's seat changes the range of distances on the road surface over which the region displaying the virtual image is superimposed on the scenery seen through the front windshield. As a result, when the user's viewpoint position moves vertically upward, for example, a situation can arise in which the user sees a virtual image that is too small for the object in the scenery on which it is to be superimposed. Similarly, when the user's viewpoint position moves vertically downward, a situation can arise in which the user sees a virtual image that is too large for that object. The present inventor has thus recognized that the vehicle display device described in Patent Document 1 may give the user a sense of incongruity when the position of the user's viewpoint changes.

 One object of the present invention is to provide a vehicle display device that can provide appropriate information to a user without being affected by a change in the position of the user's viewpoint. Other objects of the present invention will become apparent to those skilled in the art by referring to the aspects and preferred embodiments exemplified below and the accompanying drawings.

 A vehicle display device of the present invention includes: a viewpoint position acquisition unit that acquires the position of the viewpoint of a user sitting in the driver's seat of a vehicle; a road shape information acquisition unit that acquires road shape information, which is information on the shape of the road ahead of the vehicle, from the travel route of the vehicle; an image display unit having a display surface capable of displaying an image; an image generation unit that determines, according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit, the position and length of a use area, a part of the display surface used to display the image, and displays the image within the use area of the display surface; and a projection unit that projects light from the display surface toward a translucent member, thereby generating a virtual image region corresponding to the use area and displaying a virtual image corresponding to the image in the virtual image region. The image generation unit determines the position and length of the use area corresponding to the vertical direction of the virtual image region according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit, and corrects the length of the use area according to the road shape information acquired by the road shape information acquisition unit so that the upper end of the virtual image region approaches the lower end and the vertical extent of the virtual image region is shortened.
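As a numeric illustration of this correction, the top edge of the virtual image region can be lowered until the far end of the overlapping distance range no longer extends beyond the distance to which the road shape information indicates the road surface is actually present (for example, short of a crest or a curve). This is a sketch under assumed flat-road geometry; the function and parameter names are assumptions, not terms from the disclosure.

```python
def corrected_image_top(eye_h: float, plane_d: float, y_top: float,
                        road_visible_to: float) -> float:
    """Lower the top edge of the virtual image region (height `y_top` metres
    on a vertical plane `plane_d` metres ahead, eye at height `eye_h`) so
    that the far end of the overlapping distance range does not exceed
    `road_visible_to` metres. Inverting z = plane_d*eye_h/(eye_h - y)
    gives the required edge height y = eye_h * (1 - plane_d / z).
    All numbers are illustrative."""
    far_end = plane_d * eye_h / (eye_h - y_top)
    if far_end <= road_visible_to:
        return y_top                       # already on the road: no correction
    return eye_h * (1.0 - plane_d / road_visible_to)


# Eye 1.2 m above the road, image plane 10 m ahead, uncorrected top edge at
# 0.9 m: the range would reach about 40 m, but the road shape information
# says the road is only visible to 30 m, so the top edge is lowered.
y_top = corrected_image_top(1.2, 10.0, 0.9, 30.0)
print(y_top)  # lowered top edge; the far end of the range is now 30 m
```

Lowering only the top edge in this way brings the upper end of the virtual image region toward the lower end, shortening its vertical extent exactly as the correction above requires, while the near end of the overlapping distance range is left unchanged.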

 • A block diagram showing an example of the configuration of the vehicle display device of the present invention.
 • A diagram showing an example of the configuration of the image display unit shown in FIG. 1A.
 • A cross-sectional view of the projection unit shown in FIG. 1A.
 • A diagram showing an example of the scenery and the virtual image seen by a user sitting in the driver's seat of a vehicle equipped with the vehicle display device shown in FIG. 1A.
 • A flowchart showing an example of the operation of the vehicle display device shown in FIG. 1A.
 • Five diagrams, each showing the relationship between the position of the user's viewpoint and the image displayed by the image display unit of the vehicle display device shown in FIG. 1A.
 • A schematic diagram for explaining, in the vehicle display device of the present invention, the relationship between the user's viewpoint position, the virtual image visually recognized by the user, and the range of distances on the road surface of the landscape on which the virtual image is superimposed.
 • Three further diagrams, each showing the relationship between the position of the user's viewpoint and the image displayed by the image display unit of the vehicle display device shown in FIG. 1A.
 • A schematic diagram for explaining, in the vehicle display device of the present invention, the relationship between the user's viewpoint position, the virtual image visually recognized by the user, and the range of distances on the road surface of the landscape on which the virtual image is superimposed.
 • A diagram showing the corrected use area set on the image display unit of the vehicle display device in a second embodiment.
 • Two diagrams showing the corrected use area and the image set on the image display unit of the vehicle display device in a third embodiment.
 • A schematic diagram for explaining, in the vehicle display device disclosed in Patent Document 1 (JP 2014-210537 A), the relationship between the user's viewpoint position, the virtual image visually recognized by the user, and the range of distances on the road surface of the landscape on which the virtual image is superimposed.

 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings as appropriate. However, detailed descriptions of matters that are already well known and of substantially identical configurations may be omitted. The gist of the present invention is not limited to the accompanying drawings and the following description, and various modifications are possible without departing from the present invention.

 <<First Embodiment>> An example of the overall configuration of the vehicle display device 10 of the present invention will be described with reference to FIGS. 1A, 1B, and 1C. To facilitate the following description, as shown in FIG. 1A, in real space the z-axis is defined along the vehicle front-rear direction, with the traveling direction of the vehicle 1 as the vehicle front direction; the y-axis is defined along the up-down direction (the vertical direction when the road surface on which the vehicle 1 travels is horizontal); and the x-axis is defined along the left-right direction (vehicle left-right direction) as seen facing the vehicle front direction. At this time, the x-axis positive direction represents the vehicle left direction, the y-axis positive direction represents the vertically upward direction (upward in real space), and the z-axis positive direction represents the vehicle front direction.

 As shown in FIG. 1A, the vehicle display device 10 includes an image display unit 20, an image generation unit 30, a viewpoint position acquisition unit 40, a projection unit 50, and a front information acquisition unit 60.

 As shown in FIG. 1B, the image display unit 20 has a display surface 21 capable of displaying an image. The area 210 of the display surface 21 in which an image can be displayed is referred to as, for example, the display area 210. An example of the display surface 21 is, as shown in FIG. 1B, a liquid crystal panel 21 having a plurality of pixels 22. In the liquid crystal panel 21, the display area 210 is, for example, the entire set of pixels 22 of the liquid crystal panel 21. An example of the image display unit 20 is a liquid crystal panel module 20 having, for example, the liquid crystal panel 21 and a drive circuit 26 for the liquid crystal panel 21.

 For example, when a signal representing an image generated by the image generation unit 30 is input, the image display unit 20 displays the image using at least some of the pixels 22 within the display area 210 of the display surface 21, according to the input signal. In the following description, the liquid crystal panel module 20 is used as an example of the image display unit 20 where appropriate, but the image display unit 20 may be another display device. For example, the image display unit 20 may be a self-luminous display panel module such as an organic EL (Electro Luminescence) element, a reflective display panel module such as a DMD (Digital Micromirror Device) or LCoS (Liquid Crystal on Silicon) (registered trademark), or a scanning display device that scans laser light. When the image display unit 20 is a projection display device such as a reflective display panel module or a scanning display device, the display surface 21 corresponds to a screen on which an image is generated by the projection light from the projection display device.

 To simplify the following description, as shown in FIG. 1B, with the display surface 21 of the image display unit 20 viewed from the front, an Ix axis is defined in the lateral direction of the display surface 21 and an Iy axis is defined in the vertical direction of the display surface 21. The Ix-axis positive direction represents the left direction of the display surface 21, and the Iy-axis positive direction represents the upward direction of the display surface 21. The Ix-axis positive direction on the display surface 21 corresponds, for example, to the x-axis positive direction described above, that is, the vehicle left direction in real space. Similarly, the Iy-axis positive direction on the display surface 21 corresponds, for example, to the y-axis positive direction described above, that is, the vertically upward direction in real space.

 The viewpoint position acquisition unit 40 includes, for example, a vehicle interior image acquisition unit 41 and a vehicle interior image analysis unit 42. The viewpoint position acquisition unit 40 acquires the position 100 of the viewpoint of the user sitting in the driver's seat of the vehicle 1. Hereinafter, the position 100 of the viewpoint of the user sitting in the driver's seat of the vehicle 1 is also referred to as the user viewpoint position 100. The viewpoint position acquisition unit 40 is configured to be able to acquire the user viewpoint position 100 at least in the y-axis direction.

 The vehicle interior image acquisition unit 41 is, for example, an in-vehicle camera that captures an image of the vehicle interior. The vehicle interior image acquisition unit 41 may be, for example, a shared in-vehicle camera installed for the purpose of preventing vehicle theft or the like, or an in-vehicle camera dedicated to the vehicle display device 10. The vehicle interior image acquisition unit 41 preferably images the user viewpoint position 100 from vertically below the user viewpoint position 100, and may be mounted, for example, on the dashboard 4 or the like. The vehicle interior image acquisition unit 41 is also preferably capable of infrared imaging so that the user viewpoint position 100 can be acquired even when the vehicle interior is dark. The vehicle interior image acquisition unit 41 outputs the acquired vehicle interior image to the vehicle interior image analysis unit 42, for example.

 The vehicle interior image analysis unit 42 analyzes the input vehicle interior image using, for example, known image processing, pattern matching, or the like. When, as a result of analyzing the input vehicle interior image, the image includes the face of the user sitting in the driver's seat, the vehicle interior image analysis unit 42 acquires the user viewpoint position 100 by specifying, for example, its coordinate (y) in real space. The vehicle interior image analysis unit 42 outputs the acquired user viewpoint position 100 to the image generation unit 30 via a bus 5 such as CAN (Controller Area Network) bus communication, for example. Here, the vehicle interior image analysis unit 42 may, for example, be incorporated in the in-vehicle camera, and the image generation unit 30 may include the function of the vehicle interior image analysis unit 42. Further, the image generation unit 30 may receive the user viewpoint position 100 directly from the vehicle interior image analysis unit 42 without going through the bus 5.
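The step from a face found by pattern matching to a real-space y coordinate can be sketched as follows. This is an illustration only: the patent does not specify the detection method or camera calibration, so the eye-offset fraction, camera height, and pixel-to-millimetre factor below are assumptions.

```python
def viewpoint_y_from_face_box(box_top_px, box_height_px, image_height_px,
                              camera_y_mm=900.0, mm_per_px=1.2):
    """Estimate the user viewpoint height (real-space y, in mm) from a face
    bounding box detected in the cabin image.

    Illustrative assumptions, not from the patent: the eyes sit about 35%
    down from the top of the face box, and a fixed pixel-to-mm factor
    stands in for a proper camera calibration.
    """
    eye_row_px = box_top_px + 0.35 * box_height_px
    # Image rows above the centre line map to heights above the camera axis.
    return camera_y_mm + (image_height_px / 2 - eye_row_px) * mm_per_px

# A face box spanning rows 200-300 of a 720-row image puts the eyes
# above the image centre, hence above the assumed camera height.
print(viewpoint_y_from_face_box(200, 100, 720))
```

A higher face box (smaller row index) yields a larger y, matching the convention that y-axis positive is vertically upward.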

 The front information acquisition unit (road shape information acquisition unit) 60 includes, for example, a front image acquisition unit 61 and a front image analysis unit 62. The front information acquisition unit 60 acquires information on the area ahead of the vehicle, such as, for example, the shape of the road ahead of the vehicle, position information of other vehicles and obstacles present ahead of the vehicle, and information on road signs ahead of the vehicle.

 The front image acquisition unit 61 is, for example, an exterior camera that captures an image of the area ahead of the vehicle. The front image acquisition unit 61 may be, for example, a shared exterior camera used by a drive recorder or the like, or an exterior camera dedicated to the vehicle display device 10. The exterior camera may be a monocular camera, but is preferably a stereo camera in order to accurately acquire the distance between an object present ahead of the vehicle and the host vehicle 1. The exterior camera may also be capable of infrared imaging so that an image ahead of the vehicle can be captured even when the area ahead of the vehicle is dark. The front image acquisition unit 61 outputs the acquired image of the area ahead of the vehicle to the front image analysis unit 62, for example.

 The front image analysis unit 62 analyzes the input image of the area ahead of the vehicle using, for example, known image processing, pattern matching, or the like. By analyzing the input image ahead of the vehicle, the front image analysis unit 62 acquires road shape information on the shape of the road ahead of the vehicle (lanes, white lines, stop lines, pedestrian crossings, road width, number of lanes, intersections, curves, branch roads, and the like). Also by analyzing the input image ahead of the vehicle, the front image analysis unit 62 acquires obstacle information such as the positions and sizes of other vehicles, obstacles, and the like present ahead of the vehicle, their distances from the host vehicle 1, and their speeds relative to the host vehicle 1. The front image analysis unit 62 outputs the acquired front information to the image generation unit 30 via the bus 5, for example. Here, the front image analysis unit 62 may, for example, be incorporated in the exterior camera, and the image generation unit 30 may include the function of the front image analysis unit 62. Further, the image generation unit 30 may receive the front information including the road shape information directly from the front image analysis unit 62 without going through the bus 5.

 The front information acquisition unit 60 may also have a laser radar, a millimeter-wave radar, an ultrasonic sensor, or another known sensor, either instead of or together with the front image acquisition unit 61. In that case, the front image analysis unit 62 may acquire the front information described above by receiving and analyzing the data output by the laser radar, millimeter-wave radar, ultrasonic sensor, or other known sensor, either instead of or together with the image ahead of the vehicle.

 Further, although the vehicle interior image acquisition unit 41 and the front image acquisition unit 61 are depicted in FIG. 1A as being attached at different locations on the vehicle 1, this is not necessarily the case; the vehicle interior image acquisition unit 41 and the front image acquisition unit 61 may be attached at the same location on the vehicle 1. The vehicle interior image acquisition unit 41 and the front image acquisition unit 61 may also be provided in one and the same housing.

 Note that the road shape information acquisition unit 60, which acquires the road shape information on the road ahead of the vehicle 1 in the present invention, may be replaced by a navigation device that, based on the position information of the vehicle 1, can itself store and read out the road shape information ahead of the vehicle 1 or acquire it over a network, and may also be configured in combination with the image analysis means described above.

 The image generation unit 30 includes a processing unit 31 and a storage unit 32. The processing unit 31 has, for example, one or more microprocessors, microcontrollers, ASICs (Application Specific Integrated Circuits), FPGAs (Field-Programmable Gate Arrays), or any other ICs (Integrated Circuits). The storage unit 32 has one or more memories capable of storing programs and/or data, such as, for example, a rewritable RAM (Random Access Memory), a read-only ROM (Read Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), or a flash memory, which is a non-volatile memory.

 In accordance with the user viewpoint position 100 input from the viewpoint position acquisition unit 40, the image generation unit 30 determines the use area 220, which is the portion of the display area 210 of the display surface 21 of the image display unit 20 used for displaying an image. In the example of the image display unit 20 shown in FIG. 1B, for example, the use area 220 is the range 220 of the pixels 22 used for displaying an image within the display area 210, which consists of all of the pixels 22 of the liquid crystal panel 21.

 For example, the storage unit 32 of the image generation unit 30 stores a table in which user viewpoint positions 100 are associated with parameters for determining the use area 220 corresponding to each user viewpoint position 100. The image generation unit 30 determines the use area 220 corresponding to the input user viewpoint position 100 by, for example, having the processing unit 31 refer to the table.
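As a sketch of this table-based determination, the processing unit 31 could pick the entry whose stored viewpoint height is nearest to the measured one. All keys and parameter values below are illustrative assumptions; the patent gives no concrete numbers.

```python
# Hypothetical table: viewpoint height y (mm, relative to the reference
# viewpoint 100r) -> use-area parameters (bottom-edge offset and length
# along the Iy axis, in pixels). Values are illustrative only.
USE_AREA_TABLE = {
    -40: {"iy_offset": 80,  "iy_length": 300},
    0:   {"iy_offset": 120, "iy_length": 340},  # reference viewpoint 100r
    40:  {"iy_offset": 160, "iy_length": 380},
}

def lookup_use_area(viewpoint_y_mm):
    """Return the use-area parameters stored for the table key nearest
    to the measured viewpoint height."""
    nearest_key = min(USE_AREA_TABLE, key=lambda k: abs(k - viewpoint_y_mm))
    return USE_AREA_TABLE[nearest_key]

print(lookup_use_area(35))  # nearest stored key is y = 40
```

A real implementation could interpolate between adjacent entries instead of snapping to the nearest one; the nearest-entry form keeps the sketch minimal.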

 Alternatively, for example, the storage unit 32 of the image generation unit 30 stores an arithmetic expression for determining the use area 220 corresponding to the user viewpoint position 100. The image generation unit 30 determines the use area 220 corresponding to the input user viewpoint position 100 by, for example, having the processing unit 31 evaluate the arithmetic expression. The relationship between the user viewpoint position 100 and the corresponding use area 220 will be described later.

 In addition, in accordance with the road shape information on the shape of the road ahead of the vehicle input from the front information acquisition unit 60, the image generation unit 30 corrects the size of the use area 220 so that the upper end of the visually recognized virtual image area 300 is shortened toward its lower end. The processing for correcting the size of the use area 220 in accordance with the road shape will be described later.
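A minimal sketch of this correction, under the assumption (not stated numerically in the patent) that the road shape information yields the distance up to which road surface is actually visible ahead, for example before a crest or curve cuts it off:

```python
def correct_iy_length(iy_length_px, visible_road_end_m, nominal_far_m=100.0):
    """Shorten the Iy-direction length of the use area 220 when the road
    shape limits how far ahead road surface is visible, so that the top
    edge of the virtual image area 300 moves down toward its bottom edge.

    The 100 m nominal far distance and linear scaling are illustrative
    assumptions, not values from the patent."""
    scale = min(1.0, visible_road_end_m / nominal_far_m)
    return int(iy_length_px * scale)

# Road visible only to 60 m: the use area is shortened proportionally.
print(correct_iy_length(340, 60.0))
# Road visible beyond the nominal far distance: no correction applied.
print(correct_iy_length(340, 150.0))
```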

 The projection unit 50 projects the image displayed by the image display unit 20 toward a translucent member 2 such as the front windshield 2 of the vehicle 1. The light 80 constituting the projected image is reflected by the front windshield 2 into the vehicle interior. Hereinafter, the light 80 constituting the image is also referred to as the image light 80. The projection unit 50 projects the image so that the image light 80 reflected by the front windshield 2 is incident toward the user viewpoint position 100. The translucent member 2 of the vehicle 1 may also be a combiner provided on the vehicle 1.

 When the image light 80 is incident on the user viewpoint position 100, the user sitting in the driver's seat can visually recognize a virtual image 310 (see FIG. 2) on a virtual image area 300 generated on the vehicle front side of the front windshield 2. The user can visually recognize the virtual image 310 on the virtual image area 300 in a state in which, for example, at least part of the scenery seen through the front windshield 2 and the virtual image area 300 overlap. In this virtual image area 300, for example, the virtual image 310, which is a virtual image of the image displayed on the display surface 21 of the image display unit 20, is visually recognized.

 An example of the structure of the projection unit 50 will be described with reference to FIG. 1C. The projection unit 50 houses, for example, an optical system including a plane mirror 54 and a concave mirror 55, together with an actuator 56, inside a housing 51. The housing 51 is disposed, for example, in the dashboard 4 of the vehicle 1 and includes an upper case 52 and a lower case 53 formed of a black light-shielding synthetic resin or the like. An upper case opening 52a is provided approximately in the middle of the upper case 52 in the z-axis direction. The upper case opening 52a is covered with a transparent cover 57 formed of, for example, a transparent light-transmitting synthetic resin. A lower case opening 53a is provided, for example, on the vehicle rear side of the lower case 53. The lower case opening 53a is provided in the lower case 53 so that, for example, the image light 80 emitted from the display surface 21 of the image display unit 20 attached to the outside of the housing 51 can enter.

 The plane mirror 54 is attached to the vehicle rear side of the lower case 53 via, for example, an attachment member not shown. Its attachment position and attachment angle are fixed so that, for example, the image light 80 emitted from the display surface 21 and entering through the lower case opening 53a is reflected toward the front of the vehicle.

 The concave mirror 55 is attached, for example, via the actuator 56 on the vehicle front side of the plane mirror 54 in the lower case 53. The attachment angle of the concave mirror 55 can be rotated by the actuator 56, for example, about the x axis as the rotation axis. The position of the concave mirror 55 is fixed so that, for example, the image light 80 reflected by the plane mirror 54 is incident on it, and its attachment angle is finely adjusted so that the incident image light 80 is reflected toward the front windshield 2. In accordance with the attachment angle, for example, the table or arithmetic expression stored in the storage unit 32 of the image generation unit 30 for determining the use area 220 corresponding to the user viewpoint position 100 is corrected.

 The actuator 56 includes, for example, a motor, a speed reduction mechanism, a concave mirror rotating member, and a support member for the concave mirror 55, none of which are shown. The actuator 56 is attached to the lower case 53, for example, via an attachment member not shown, on the vertically lower side of the concave mirror 55. The actuator 56 rotates the motor in accordance with a signal input from an actuator control unit not shown, decelerates the rotation of the motor through the speed reduction mechanism, and transmits it to the concave mirror rotating member, thereby rotating the concave mirror 55. Note that the actuator 56 does not necessarily have to be provided.

 Further, in the upper case 52 of the housing 51 of FIG. 1C, a light shielding portion 52b is provided between the upper case opening 52a and the plane mirror 54. The light shielding portion 52b is provided, for example, to prevent light from outside the housing 51 entering through the upper case opening 52a from traveling to the image display unit 20. The example of the structure of the projection unit 50 described with reference to FIG. 1C is merely one example and in no way limits the structure of the projection unit 50 of the vehicle display device 10.

 FIG. 2 shows an example of the scenery and the virtual image 310 that the user sitting in the driver's seat of the vehicle 1 sees through the front windshield 2. In the example shown in FIG. 2, a three-lane road (road 91) extending ahead of the vehicle and another vehicle present ahead of the vehicle (forward vehicle 92) are shown as an example of the scenery seen through the front windshield 2. In the example of the scenery seen through the front windshield 2 shown in FIG. 2, the superimposition targets 90 on which the virtual image 310 is superimposed are the road 91 and the forward vehicle 92. In the example shown in FIG. 2, the virtual image 310 consists of a navigation mark 311 visually recognized superimposed on the road 91 and a notification mark 312 visually recognized by the user superimposed on the forward vehicle 92.

 Also, in the example shown in FIG. 2, the area 300 is the area used for displaying the virtual image 310 and corresponds to the use area 220 on the display surface 21 of the image display unit 20. Hereinafter, the area 300 corresponding to the use area 220 on the display surface 21 of the image display unit 20 is also referred to as the virtual image area 300. That is, the virtual image area 300 is the area in which the user can visually recognize the virtual image 310.

 Also, the Ix-axis positive direction on the display surface 21 of the image display unit 20 of FIG. 1B corresponds, for example, to the x-axis positive direction in the virtual image area 300, that is, the vehicle left direction. Similarly, the Iy-axis positive direction on the display surface 21 of the image display unit 20 of FIG. 1B corresponds, for example, to the y-axis positive direction in the virtual image area 300, that is, the vertically upward direction.

 An example of the operation of the vehicle display device 10 will be described with reference to FIG. 3. The operation of the vehicle display device 10 starts, for example, when the power of the vehicle 1 is turned on, when an engine not shown is driven, or after a predetermined waiting time has elapsed since the power of the vehicle 1 was turned on or the engine was driven.

 In step S01, the front information acquisition unit 60 acquires the front information (road shape information and obstacle information). In step S02, the viewpoint position acquisition unit 40 acquires the user viewpoint position 100. Note that steps S01 and S02 do not necessarily have to be in this order and may be interchanged.

 In step S03, the image generation unit 30 generates an image including, for example, a notification mark, a navigation mark, and other marks in accordance with the front information acquired by the front information acquisition unit 60 in step S01.

 In step S04, the image generation unit 30 determines the use area 220 within the display area 210 of the display surface 21 of the image display unit 20 in accordance with the user viewpoint position 100 acquired by the viewpoint position acquisition unit 40 in step S02. Note that steps S03 and S04 do not necessarily have to be in this order and may be interchanged. The image generation unit 30 also corrects the size of the use area 220 in accordance with the road shape information acquired by the front information acquisition unit 60 in step S01.

 In step S05, the image display unit 20 displays the image generated in step S03 using the pixels 22 within the use area 220 determined by the image generation unit 30 in step S04. When the processing of step S05 has been executed, the flow returns to Start. Here, a predetermined waiting time may be inserted between the end of the processing of step S05 and the return of the flow to Start so that the flowchart shown in FIG. 3 is executed repeatedly at predetermined intervals set in advance.
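One pass of steps S01 through S05 can be sketched as a function over pluggable stand-ins for the units of FIG. 1A. The callables below are hypothetical placeholders, not an API defined by the patent; the outer repetition and waiting time are omitted.

```python
def display_cycle_once(get_front_info, get_viewpoint,
                       make_image, decide_use_area, show):
    """Run one iteration of the flowchart of FIG. 3 (steps S01-S05)."""
    front_info = get_front_info()                   # S01: front information
    viewpoint = get_viewpoint()                     # S02: user viewpoint 100
    image = make_image(front_info)                  # S03: generate the image
    area = decide_use_area(viewpoint, front_info)   # S04: decide use area 220
    return show(image, area)                        # S05: display the image

# Stub run with placeholder data standing in for the real units:
result = display_cycle_once(
    get_front_info=lambda: {"road": "straight"},
    get_viewpoint=lambda: 0,
    make_image=lambda info: "image",
    decide_use_area=lambda vp, info: "area-220r",
    show=lambda img, area: (img, area),
)
print(result)
```

Because S01/S02 and S03/S04 are order-independent per the text, the corresponding calls could be swapped without changing the result.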

 The relationship between the user viewpoint position 100 and the use area 220 corresponding to the user viewpoint position 100 will be described with reference to FIGS. 4A, 4B, 4C, 4D, 4E, and 5. On the left side of FIGS. 4A, 4B, 4C, 4D, and 4E, coordinate axes representing the user viewpoint position 100 on the y axis and the z axis in real space are shown. On the right side of FIGS. 4A, 4B, 4C, 4D, and 4E, the use area 220, that is, the part of the display area 210 of the display surface 21 of the image display unit 20 that is determined by the image generation unit 30 and used for displaying an image, is shown corresponding to the user viewpoint position 100 on the y axis and the z axis in real space.

 FIG. 5 is a schematic diagram for explaining, in the vehicle display device 10, the relationship between the user viewpoint position 100 in the vertical direction, the virtual image area 300, and the range of distances on the road 91 of the scenery on which the virtual image area 300 is superimposed. Note that FIG. 5 exaggerates the amount of change of the user viewpoint position 100 in order to explain this relationship clearly. Specifically, the vertical distances between the user viewpoint position 100r and the user viewpoint position 100u, and between the user viewpoint position 100r and the user viewpoint position 100d, shown in FIG. 5 are actually much smaller. As a result, the virtual image area 300r, the virtual image area 300u, and the virtual image area 300d are drawn in FIG. 5 as if they had no overlapping portions. In reality, however, as shown in FIGS. 4B and 4C, at least the virtual image areas 300r and 300u, and the virtual image areas 300r and 300d, partially overlap. The range of distances on the road 91 of the scenery on which the virtual image area 300 is superimposed is hereinafter also referred to as the superimposition distance range 400.

 FIG. 5 shows the virtual image area 300r for the user viewpoint position 100r shown in FIG. 4A, the virtual image area 300u for the user viewpoint position 100u shown in FIG. 4B, and the virtual image area 300d for the user viewpoint position 100d shown in FIG. 4C. FIG. 5 also shows the superimposition distance range 400r, which is the range of distances on the road 91 of the scenery seen through the front windshield 2 at the user viewpoint position 100r that overlaps the virtual image area 300r; the superimposition distance range 400u, which is the range of distances on the road 91 of the scenery seen through the front windshield 2 at the user viewpoint position 100u that overlaps the virtual image area 300u; and the superimposition distance range 400d, which is the range of distances on the road 91 of the scenery seen through the front windshield 2 at the user viewpoint position 100d that overlaps the virtual image area 300d.
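The geometric link between a point in the virtual image area and the road distance it overlaps can be sketched with similar triangles over a flat road. This formula is a simplifying illustration and is not given explicitly in the patent.

```python
def superimposed_road_distance(eye_height_m, image_point_height_m,
                               image_distance_m):
    """Distance ahead at which the sight line from the eye through a
    virtual-image point meets a flat road, by similar triangles.

    image_point_height_m is the height of the virtual-image point above
    the road surface, image_distance_m its distance ahead of the eye;
    the point must lie below eye level (image_point_height_m < eye_height_m).
    """
    return image_distance_m * eye_height_m / (eye_height_m
                                              - image_point_height_m)

# The same image point overlaps different road distances as the eye
# height changes, which is why the superimposition distance range 400
# shifts with the user viewpoint position 100:
print(superimposed_road_distance(1.2, 0.6, 2.0))
print(superimposed_road_distance(1.1, 0.6, 2.0))
```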

 First, the relationship between the user viewpoint position 100 in the vertical direction and the corresponding use area 220 will be described. The user viewpoint position 100r shown in FIG. 4A is located at the intersection of the y axis and the z axis on the coordinate axes shown in FIG. 4A. Hereinafter, the user viewpoint position 100r shown in FIG. 4A is also referred to as the reference user viewpoint position 100r. For example, when the user viewpoint position 100 acquired in step S02 of FIG. 3 is the reference user viewpoint position 100r, the image generation unit 30 determines, in step S04 of FIG. 3, the use area 220 within the display area 210 of the display surface 21 of the image display unit 20 to be the use area 220r shown in FIG. 4A. Hereinafter, the use area 220r corresponding to the reference user viewpoint position 100r shown in FIG. 4A is also referred to as the reference use area 220r.

 The user viewpoint position 100u shown in FIG. 4B is an example of a user viewpoint position 100 located vertically above the reference user viewpoint position 100r. For example, when the user viewpoint position 100 acquired in step S02 of FIG. 3 is the user viewpoint position 100u, the image generation unit 30 determines, in step S04 of FIG. 3, the use area 220 within the display area 210 of the display surface 21 of the image display unit 20 to be the use area 220u shown in FIG. 4B.

 The use area 220u shown in FIG. 4B lies on the positive Iy-axis side of the reference use area 220r. In addition, the length 221u of the use area 220u in the Iy-axis direction shown in FIG. 4B is longer than the length 221r of the reference use area 220r in the Iy-axis direction. As a result, as shown in FIG. 5, the virtual image area 300u corresponding to the use area 220u lies vertically above the virtual image area 300r corresponding to the reference use area 220r in real space, and its vertical length in real space is longer. Note that the use area 220u overlaps part of the reference use area 220r.

 That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically upward, the position of the use area 220 of the display surface 21 is determined so as to shift toward the positive Iy-axis side, and the length of the use area 220 in the Iy-axis direction is determined so as to become longer. As a result, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically upward, the virtual image area 300 lies higher in the vertical direction in real space and its vertical length in real space becomes longer.

 The user viewpoint position 100d shown in FIG. 4C is an example of a user viewpoint position 100 located vertically below the reference user viewpoint position 100r. For example, when the user viewpoint position 100 acquired in step S02 of FIG. 3 is the user viewpoint position 100d, the image generation unit 30, in step S04 of FIG. 3, sets the use area 220 within the display area 210 of the display surface 21 of the image display unit 20 to the use area 220d shown in FIG. 4C.

 The use area 220d shown in FIG. 4C lies on the negative Iy-axis side of the reference use area 220r. In addition, the length 221d of the use area 220d in the Iy-axis direction shown in FIG. 4C is shorter than the length 221r of the reference use area 220r in the Iy-axis direction. As a result, as shown in FIG. 5, the virtual image area 300d corresponding to the use area 220d shown in FIG. 4C lies vertically below the virtual image area 300r corresponding to the reference use area 220r in real space, and its vertical length in real space is shorter. Note that the use area 220d overlaps part of the reference use area 220r.

 That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically downward, the position of the use area 220 of the display surface 21 is determined so as to shift toward the negative Iy-axis side, and the length of the use area 220 in the Iy-axis direction is determined so as to become shorter. As a result, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically downward, the virtual image area 300 lies lower in the vertical direction in real space and its vertical length in real space becomes shorter.
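 The symmetric rules above (eye up: use area toward the positive Iy side and longer; eye down: toward the negative Iy side and shorter) can be sketched as a single monotone mapping. This is a minimal illustration, not the patent's implementation: the linear form and the constants `pos_gain` and `len_gain` are assumptions chosen only to make the monotonicity explicit.

```python
def use_area_for_vertical_viewpoint(dy, base_pos=0.0, base_len=100.0,
                                    pos_gain=0.5, len_gain=0.2):
    """Return (Iy position, Iy length) of the use area 220 for a vertical
    viewpoint offset dy from the reference viewpoint 100r (dy > 0: upward).

    Hypothetical linear model: an upward eye position shifts the use area
    toward the positive Iy side and lengthens it; a downward eye position
    does the opposite, matching the behavior described for FIGS. 4B and 4C.
    """
    iy_pos = base_pos + pos_gain * dy
    iy_len = base_len + len_gain * dy
    return iy_pos, iy_len
```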

 Here, referring to FIG. 5, the superimposition distance range 400r, the superimposition distance range 400u, and the superimposition distance range 400d coincide with one another. As in the example shown in FIG. 5, the amount of change of the virtual image area 300 in the vertical direction is smaller than the amount of change of the user viewpoint position 100 in the vertical direction. Then, for example, as the user viewpoint position 100 moves vertically upward, the angle between the horizontal plane and the line of sight along which the user views the virtual image area 300 becomes larger. Conversely, as the user viewpoint position 100 moves vertically downward, that angle becomes smaller.

 As a result, to keep the superimposition distance range 400 constant regardless of the vertical user viewpoint position 100, it is necessary not only to shift the position of the virtual image area 300 vertically upward as the user viewpoint position 100 moves upward, but also to lengthen its vertical extent. Similarly, it is necessary not only to shift the virtual image area 300 vertically downward as the user viewpoint position 100 moves downward, but also to shorten its vertical extent.

 That is, by appropriately determining the Iy-axis position and the Iy-axis length of the use area 220 according to the vertical user viewpoint position 100, the superimposition distance range 400 can be kept constant without being affected by the vertical user viewpoint position 100. Keeping the superimposition distance range 400 constant counteracts the shift of the object in the scenery on which the virtual image 310 visually recognized by the user is superimposed.
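 The geometric reason the virtual image area must both rise and lengthen with the eye can be checked with a flat-road ray model. This is a simplified sketch, not the patent's optics: it assumes a flat road at height 0, an eye at `eye_height`, and a vertical virtual image plane at `plane_dist` ahead of the eye; the numeric values below are illustrative only.

```python
def image_band_for_distance_range(eye_height, plane_dist, x_near, x_far):
    """Return (y_bottom, y_top): heights on the virtual image plane whose
    sight rays land on the road at distances x_near and x_far from the eye.

    A ray from the eye through height y on the plane reaches the ground at
    x = eye_height * plane_dist / (eye_height - y), hence
    y = eye_height * (1 - plane_dist / x).
    """
    y_bottom = eye_height * (1.0 - plane_dist / x_near)
    y_top = eye_height * (1.0 - plane_dist / x_far)
    return y_bottom, y_top
```

For a fixed road distance range, raising the eye raises both band edges and widens the band, which is exactly the combined shift-and-lengthen correction the text derives.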

 Next, the relationship between the user viewpoint position 100 in the vehicle front-rear direction and the use area 220 corresponding to that position will be described. The user viewpoint position 100f shown in FIG. 4D is an example of a user viewpoint position 100 located forward of the reference user viewpoint position 100r. For example, when the user viewpoint position 100 acquired in step S02 of FIG. 3 is the user viewpoint position 100f, the image generation unit 30, in step S04 of FIG. 3, sets the use area 220 within the display area 210 of the display surface 21 of the image display unit 20 to the use area 220f shown in FIG. 4D.

 Both the length 222f in the Ix-axis direction and the length 221f in the Iy-axis direction of the use area 220f shown in FIG. 4D are shorter than the length 222r in the Ix-axis direction and the length 221r in the Iy-axis direction of the reference use area 220r. As a result, the virtual image area 300 corresponding to the use area 220f shown in FIG. 4D is shorter than the virtual image area 300 corresponding to the reference use area 220r in both the vehicle left-right direction and the vertical direction in real space.

 That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves forward in the vehicle, the use area 220 of the display surface 21 is determined so that both its length in the Ix-axis direction and its length in the Iy-axis direction become shorter. As a result, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves forward, the virtual image area 300 becomes shorter in both the vehicle left-right direction and the vertical direction in real space.

 The user viewpoint position 100b shown in FIG. 4E is an example of a user viewpoint position 100 located rearward of the reference user viewpoint position 100r. For example, when the user viewpoint position 100 acquired in step S02 of FIG. 3 is the user viewpoint position 100b, the image generation unit 30, in step S04 of FIG. 3, sets the use area 220 within the display area 210 of the display surface 21 of the image display unit 20 to the use area 220b shown in FIG. 4E.

 Both the length 222b in the Ix-axis direction and the length 221b in the Iy-axis direction of the use area 220b shown in FIG. 4E are longer than the length 222r in the Ix-axis direction and the length 221r in the Iy-axis direction of the reference use area 220r. As a result, the virtual image area 300 corresponding to the use area 220b shown in FIG. 4E is longer than the virtual image area 300 corresponding to the reference use area 220r in both the vehicle left-right direction and the vertical direction in real space.

 That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves rearward in the vehicle, the use area 220 of the display surface 21 is determined so that both its length in the Ix-axis direction and its length in the Iy-axis direction become longer. As a result, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves rearward, the virtual image area 300 becomes longer in both the vehicle left-right direction and the vertical direction in real space.

 For example, if the virtual image area 300 were fixed, then the shorter the distance (in the vehicle front-rear direction) between the user viewpoint position 100 and the virtual image area 300, the wider the range of the scenery, seen from the user viewpoint position 100 through the front windshield 2, that overlaps the virtual image area 300. Conversely, the longer that distance, the narrower the range of the scenery that overlaps the virtual image area 300.

 As a result, to keep the range of scenery overlapping the virtual image area 300 constant regardless of the user viewpoint position 100 in the vehicle front-rear direction, it is necessary to shorten both the vehicle left-right length and the vertical length of the virtual image area 300 as the user viewpoint position 100 moves forward. Similarly, it is necessary to lengthen both of them as the user viewpoint position 100 moves rearward.

 That is, by appropriately determining the Ix-axis length and the Iy-axis length of the use area 220 according to the user viewpoint position 100 in the vehicle front-rear direction, the range of the overlapping scenery can be kept constant without being affected by that viewpoint position. Keeping the range of the overlapping scenery constant counteracts the shift of the object in the scenery on which the virtual image 310 visually recognized by the user is superimposed.
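 The fore-aft rule reduces to similar triangles: to keep framing the same patch of scenery at a fixed distance, the virtual image area must scale with the eye-to-image distance. The following is a minimal sketch under assumed geometry (pinhole eye, target patch at `target_dist`); the parameter values in use are illustrative only.

```python
def image_area_size_for_longitudinal_viewpoint(d_eye_to_image, target_width,
                                               target_height, target_dist):
    """Return (width, height) of the virtual image area needed so that it
    always frames a target_width x target_height patch of scenery located
    target_dist ahead of the eye.

    Similar triangles: the required size scales with the eye-to-image
    distance, so a forward eye (smaller d_eye_to_image) needs a smaller
    area, and a rearward eye needs a larger one.
    """
    scale = d_eye_to_image / target_dist
    return target_width * scale, target_height * scale
```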

 Next, with reference to FIGS. 6A, 6B, 6C and 7, the relationship between the user viewpoint position 100 and the use area 220 corresponding to the forward information, including the road shape information acquired by the forward information acquisition unit 60, will be described. The left side of each of FIGS. 6A, 6B and 6C shows coordinate axes representing the user viewpoint position 100 on the y-axis and z-axis in real space. The right side of each of FIGS. 6A, 6B and 6C shows, for that user viewpoint position 100, the use area 220 used for image display, determined by the image generation unit 30 within the display area 210 of the display surface 21 of the image display unit 20. The image generation unit 30 corrects the use area 220 determined from the user viewpoint position 100, shortening it in accordance with the forward information including the road shape acquired by the forward information acquisition unit 60 in step S01 so that the upper end of the virtual image area 300 approaches its lower end, thereby obtaining the corrected use area 221. Specifically, when the image generation unit 30 determines from the road shape information acquired by the forward information acquisition unit 60 in step S01 that the road shape ahead of the vehicle 1 is a curve, a T-junction, or the like and that part of the virtual image area 300 is not superimposed on the road, it sets the part of the use area 220 corresponding to the upper end side of the virtual image area 300 not superimposed on the road as a non-use area 222 in which no image is displayed, and sets the remaining area as a corrected use area 221 in which an image can be displayed.
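 The road-shape correction just described can be illustrated with the same flat-road ray model: any display row whose sight ray lands beyond the point where the road ahead bends away (a curve or T-junction) is assigned to the non-use area. This is a hypothetical sketch, not the patent's algorithm; `road_end_dist`, the row heights, and the flat-road geometry are all assumptions.

```python
def split_rows_by_road_end(row_heights, eye_height, plane_dist, road_end_dist):
    """Partition rows of the use area (each given by its height on the
    virtual image plane) into (usable, unused).

    A row at height y is viewed along a ray that lands on the ground at
    x = eye_height * plane_dist / (eye_height - y); if x exceeds
    road_end_dist, the row would overlay scenery off the road, so it is
    assigned to the non-use area (no image is drawn there).
    """
    usable, unused = [], []
    for y in row_heights:
        x = eye_height * plane_dist / (eye_height - y)  # ground hit distance
        (usable if x <= road_end_dist else unused).append(y)
    return usable, unused
```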

 FIG. 7 is a schematic diagram for explaining, in the vehicle display device 10, the relationship between the vertical user viewpoint position 100, the corrected virtual image area 301 corrected according to the road shape ahead of the vehicle 1, and the range of distance on the road 91 in the scenery on which the corrected virtual image area 301 is superimposed. Note that FIG. 7 exaggerates the amount of change of the user viewpoint position 100 so that this relationship can be understood easily.

 FIG. 7 shows the corrected virtual image area 301r for the user viewpoint position 100r of FIG. 6A, the corrected virtual image area 301u for the user viewpoint position 100u of FIG. 6B, and the corrected virtual image area 301d for the user viewpoint position 100d of FIG. 6C. FIG. 7 also shows the corrected superimposition distance range 401r, which is the range of distance on the road 91 over which the corrected virtual image area 301r overlaps the scenery seen through the front windshield 2 from the user viewpoint position 100r, as well as the corresponding corrected superimposition distance ranges 401u and 401d for the user viewpoint positions 100u and 100d and the corrected virtual image areas 301u and 301d. FIG. 7 further shows the non-superimposition distance range 402r, which is the range of distance on the road 91, within the scenery seen through the front windshield 2 from the user viewpoint position 100r, over which no virtual image is displayed because of the road shape ahead of the vehicle 1 even though it falls within the virtual image area 300r determined from the user viewpoint position 100r, as well as the corresponding non-superimposition distance ranges 402u and 402d for the user viewpoint positions 100u and 100d and the virtual image areas 300u and 300d.

 First, the relationship between the vertical user viewpoint position 100 and the use area 220 corresponding to the road shape ahead of the vehicle 1 will be described. For example, when the user viewpoint position 100 acquired in step S02 of FIG. 3 is the reference user viewpoint position 100r, the area 222r of the use area 220r corresponding to the part of the virtual image area 300r not superimposed on the road is set, on the basis of the forward information indicating the road shape ahead of the vehicle 1 acquired in step S01, as a non-use area 222r in which no image is displayed, and the remaining area 221r is set as a corrected use area 221r in which an image can be displayed. Hereinafter, the corrected use area 221r corresponding to the reference user viewpoint position 100r shown in FIG. 6A is also referred to as the reference corrected use area 221r, and the non-use area 222r is also referred to as the reference non-use area 222r.

 The user viewpoint position 100u shown in FIG. 6B is an example of a user viewpoint position 100 located vertically above the reference user viewpoint position 100r. For example, when the user viewpoint position 100 acquired in step S02 of FIG. 3 is the user viewpoint position 100u, the area 222u of the use area 220u corresponding to the part of the virtual image area 300u not superimposed on the road is set, on the basis of the forward information indicating the road shape ahead of the vehicle 1 acquired in step S01, as a non-use area 222u in which no image is displayed, and the remaining area 221u is set as a corrected use area 221u in which an image can be displayed.

 The length 221ua of the corrected use area 221u in the Iy-axis direction shown in FIG. 6B is longer than the length 221ra of the reference corrected use area 221r in the Iy-axis direction. As a result, as shown in FIG. 7, the vertical length of the corrected virtual image area 301u in real space becomes longer.

 That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically upward, the position of the corrected use area 221 of the display surface 21 is determined so as to shift toward the positive Iy-axis side, and the length of the corrected use area 221 in the Iy-axis direction is determined so as to become longer. As a result, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically upward, the corrected virtual image area 301 lies higher in the vertical direction in real space and its vertical length in real space becomes longer.

 The user viewpoint position 100d shown in FIG. 6C is an example of a user viewpoint position 100 located vertically below the reference user viewpoint position 100r. For example, when the user viewpoint position 100 acquired in step S02 of FIG. 3 is the user viewpoint position 100d, the area 222d of the use area 220d corresponding to the part of the virtual image area 300d not superimposed on the road is set, on the basis of the forward information indicating the road shape ahead of the vehicle 1 acquired in step S01, as a non-use area 222d in which no image is displayed, and the remaining area 221d is set as a corrected use area 221d in which an image can be displayed.

 The length 221da of the corrected use area 221d in the Iy-axis direction shown in FIG. 6C is shorter than the length 221ra of the reference corrected use area 221r in the Iy-axis direction. As a result, as shown in FIG. 7, the vertical length of the corrected virtual image area 301d in real space becomes shorter.

 That is, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically downward, the position of the corrected use area 221 of the display surface 21 is determined so as to shift toward the negative Iy-axis side, and the length of the corrected use area 221 in the Iy-axis direction is determined so as to become shorter. As a result, as the user viewpoint position 100 detected by the viewpoint position acquisition unit 40 moves vertically downward, the corrected virtual image area 301 lies lower in the vertical direction in real space and its vertical length in real space becomes shorter.

 That is, by shortening the vertical length of the virtual image area 300 according to the road shape so that its upper end approaches its lower end (that is, by correcting the use area 220 to the reduced-size corrected use area 221), it is possible to prevent the viewer from feeling a sense of incongruity caused by the virtual image 310 being displayed at a position off the road 91 (for example, on the road shoulder or a wall alongside the road). Further, by appropriately determining the Iy-axis position and the Iy-axis length of the corrected use area 221 according to the user viewpoint position 100, the corrected superimposition distance range 401 can be kept constant without being affected by the vertical user viewpoint position 100. Keeping the corrected superimposition distance range 401 constant counteracts the shift of the object in the scenery on which the virtual image 310 visually recognized by the user is superimposed.

 As described above, the image generation unit 30 of the vehicle display device 10 of the present invention determines the use area 220 of the display surface 21 of the image display unit 20 used for displaying an image according to the user viewpoint position 100 acquired by the viewpoint position acquisition unit 40. As a result, not only the position but also the size of the virtual image area 300, which is the area corresponding to the use area 220 in which the user can visually recognize the virtual image 310, can be adjusted. Therefore, compared with a vehicle display device in which only the position of the virtual image area 300 can be adjusted, for example by changing the projection angle of the concave mirror 55 of the projection unit 50, the vehicle display device 10 eliminates the shift of the object in the scenery on which the virtual image 310 is superimposed when the user viewpoint position 100 changes. The vehicle display device 10 of the present invention can therefore provide appropriate information to the user without being affected by the user viewpoint position 100.

 Here, the image generation unit 30 may determine the use area 220 according to only the vertical user viewpoint position 100, or according to only the user viewpoint position 100 in the vehicle front-rear direction. However, a change in the vertical user viewpoint position 100 has a greater influence on the shift of the object in the scenery on which the virtual image 310 is superimposed than a change in the user viewpoint position 100 in the vehicle front-rear direction. Therefore, the image generation unit 30 preferably determines the use area 220 according to at least the vertical user viewpoint position 100.

 The image generation unit 30 of the vehicle display device 10 of the present invention further corrects the length of the use area 220 according to the road shape information acquired by the road shape information acquisition unit 60, so that the upper end of the virtual image region 300 approaches the lower end and the virtual image region 300 becomes shorter in the vertical direction. This prevents the viewer from feeling a sense of incongruity caused by the virtual image 310 being displayed at a position off the road 91 (for example, on the road shoulder or a wall alongside the road), and keeps the corrected superimposing distance range 401, that is, the range of distances over which the virtual image region 300 in which the virtual image 310 is displayed overlaps the road 91, constant without being affected by the user viewpoint position 100 in the vertical direction.
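The correction above can be sketched as shrinking the area from the top only, by the fraction of the nominal superimposing distance that the road ahead actually covers. The geometry, names, and distances here are assumptions, not the patent's actual method.

```python
# Illustrative sketch: shortening the use area so the virtual image does
# not spill past the road (e.g. onto a wall before a curve). The bottom
# edge is kept fixed; only the top edge is pulled toward it.

def correct_use_area(top, bottom, visible_road_end_m, nominal_end_m=100.0):
    """Pull the top edge toward the bottom when the road ahead is shorter
    than the nominal superimposing distance."""
    if visible_road_end_m >= nominal_end_m:
        return top, bottom                       # straight road: no change
    ratio = visible_road_end_m / nominal_end_m   # usable fraction of the road
    new_top = bottom - (bottom - top) * ratio    # shrink from the top only
    return new_top, bottom

t, b = correct_use_area(70.0, 380.0, visible_road_end_m=50.0)
```

Because the bottom edge is unchanged, the near end of the superimposed range stays put while the far end is clipped to the visible road, which is the effect the corrected superimposing distance range 401 describes.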

 Steps S02 and S04 shown in FIG. 3 described above need not be executed every time. For example, they may be executed only the first time the flow of FIG. 3 runs after the vehicle 1 is powered on, and omitted on the second and subsequent runs. While the user driving the vehicle 1 does not change, the user viewpoint position 100, particularly in the vertical direction, is unlikely to change greatly. Acquiring the viewpoint position 100 of the user driving the vehicle 1 only once after power-on therefore reconciles handling the misalignment of the object in the scenery on which the virtual image 310 is superimposed with speeding up the operation of the vehicle display device 10.
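A one-shot acquisition of this kind is essentially a cache that is filled on the first frame after ignition and reused afterwards. The sketch below is hypothetical; the class and function names do not come from the patent.

```python
# Sketch of the one-shot viewpoint acquisition: steps S02/S04 run only on
# the first pass after power-on, later frames reuse the stored value.

class ViewpointCache:
    def __init__(self, acquire_fn):
        self._acquire = acquire_fn   # e.g. camera capture + image analysis
        self._cached = None
        self.acquisitions = 0        # how many times S02/S04 actually ran

    def get(self):
        if self._cached is None:             # first frame after ignition ON
            self._cached = self._acquire()
            self.acquisitions += 1
        return self._cached                  # later frames: no re-acquisition

cache = ViewpointCache(lambda: 1234.5)       # stand-in for the real sensor
heights = [cache.get() for _ in range(5)]    # five frames of the FIG. 3 loop
```

The expensive acquisition runs once regardless of how many display-update cycles follow, which is the speed-up the embodiment points at.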

 The above is the description of the vehicle display device 10 of the present embodiment, but the present invention is not limited to the embodiment or the drawings; modifications (including deletion of components) may of course be made. An example of a modification is shown below.

 As shown in FIG. 8, the use area 220 of the present invention includes at least a first use area 230 corresponding to a first virtual image region (not shown) of the virtual image region 300, and a second use area 240 corresponding to a second virtual image region (not shown) that is visually recognized vertically below the first virtual image region. The image generation unit 30 may determine the position and size of the first use area 230 according to the vertical user viewpoint position 100 acquired by the viewpoint position acquisition unit 40, while determining only the position of the second use area 240, without changing its size, according to that same viewpoint position. In this case, when the user viewpoint position 100 changes in the vertical direction, the vertical size in real space of the second virtual image region corresponding to the second use area 240 hardly changes. As a result, even when an image composed of characters or the like is displayed in the second use area 240, the information represented by the virtual image displayed in the second virtual image region does not become hard for the user to recognize when the user's viewpoint position changes in the vertical direction.
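A minimal sketch of this modification: the first use area moves and resizes with eye height, while the second only moves, keeping its size (and hence the legibility of any text shown in it). All positions and gains are illustrative values, not from the patent.

```python
# Hypothetical layout of the two sub-areas as (top_row, height) pairs.
# Both areas translate with eye height; only the first one also resizes.

def layout(delta_eye_mm):
    area1 = (100.0 - 0.2 * delta_eye_mm,     # first use area: moves...
             200.0 + 0.3 * delta_eye_mm)     # ...and resizes
    area2 = (320.0 - 0.2 * delta_eye_mm,     # second use area: moves...
             80.0)                           # ...but its size stays fixed
    return area1, area2

up = layout(+100.0)   # eyes 100 mm above reference
dn = layout(-100.0)   # eyes 100 mm below reference
```

Since the second area's height is constant, text rendered into it keeps the same apparent size for any eye height, which is the effect the modification is after.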

 Alternatively, the size of the second use area 240 may also be changed according to the vertical user viewpoint position 100 acquired by the viewpoint position acquisition unit 40. In this case, the rate of change of the second use area 240 with respect to a change in the user viewpoint position 100 is made smaller than that of the first use area 230. Compared with changing the size of the first use area 230 alone, this keeps the amount of change in the size of the first use area 230 small, so that the information represented by the virtual image displayed in the first virtual image region is less likely to become hard for the user to recognize when the user's viewpoint position changes in the vertical direction.
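The variant above can be sketched with two rates, one per sub-area, where the second rate is deliberately smaller. The base heights and rates are made-up numbers for illustration.

```python
# Hedged sketch: both sub-areas resize with eye height, but the second
# (e.g. text) area uses a smaller rate so its virtual image stays legible.

def area_heights(delta_eye_mm, base1=200.0, base2=80.0,
                 rate1=0.3, rate2=0.05):       # rate2 < rate1 by design
    h1 = base1 + rate1 * delta_eye_mm          # first use area height
    h2 = base2 + rate2 * delta_eye_mm          # second use area height
    return h1, h2

h1_up, h2_up = area_heights(+100.0)
h1_dn, h2_dn = area_heights(-100.0)
```

Splitting the resizing between both areas also means the first area itself changes less than it would if it absorbed the whole adjustment alone, which is the benefit the paragraph describes.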

 As shown in FIG. 9A, the image 250 displayed in the display area 210 may consist of a plurality of images 251, 252, and 253 arranged roughly along the Iy-axis direction from the Iy-axis positive side of the display surface 21. When the size of the use area 220 in the Iy-axis direction is reduced to the corrected use area 221 according to the road shape information, only some of the images, namely images 251 and 252, may be displayed in the corrected use area 221, as shown in FIG. 9B.
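The FIG. 9B behaviour amounts to keeping, from the Iy-positive side, only the images that still fit in the shortened area. The image heights and the area budget below are made-up numbers used purely to illustrate the clipping rule.

```python
# Sketch: when the use area is shortened, keep only the leading images
# (251, 252, ...) whose cumulative height fits the corrected area, and
# drop the rest (253 in FIG. 9B).

def fit_images(image_heights, corrected_height):
    kept, used = [], 0.0
    for i, h in enumerate(image_heights):
        if used + h > corrected_height:
            break                        # this and later images no longer fit
        kept.append(i)
        used += h
    return kept

kept = fit_images([40.0, 40.0, 40.0], corrected_height=90.0)
```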

 The vehicle display device of the present invention is mounted on a moving body such as a vehicle and is suitable as a head-up display that allows a viewer to visually recognize a virtual image.

 1: vehicle; 2: front windshield; 10: vehicle display device; 20: image display unit (liquid crystal panel module); 21: display surface (liquid crystal panel); 30: image generation unit; 40: viewpoint position acquisition unit; 41: vehicle interior image acquisition unit; 42: vehicle interior image analysis unit; 50: projection unit; 60: forward information acquisition unit (road shape information acquisition unit); 80: image light; 100: user viewpoint position; 210: display area; 220: use area; 300: virtual image region; 310: virtual image; 400: superimposing distance range

Claims (5)

1. A vehicle display device comprising:
 a viewpoint position acquisition unit (40) that acquires the position of the viewpoint of a user sitting in a driver's seat of a vehicle;
 a road shape information acquisition unit (60) that acquires road shape information, which is information on the shape of the road ahead of the vehicle;
 an image display unit (20) having a display surface (21) capable of displaying an image;
 an image generation unit (30) that determines, according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit, the position and length of a use area, which is a part of the display surface of the image display unit used for displaying the image, and displays the image within the use area of the display surface; and
 a projection unit (50) that projects light from the display surface toward a translucent member (2) to generate a virtual image region corresponding to the use area and displays a virtual image corresponding to the image in the virtual image region,
 wherein the image generation unit determines the position and length of the use area corresponding to the vertical direction of the virtual image region according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit, and corrects the length of the use area according to the road shape information acquired by the road shape information acquisition unit so that an upper end of the virtual image region approaches a lower end thereof and the virtual image region is shortened in the vertical direction.
2. The vehicle display device according to claim 1, wherein, as the position of the user's viewpoint acquired by the viewpoint position acquisition unit moves upward in the vertical direction, the image generation unit determines the position of the use area on the display surface such that the virtual image region moves upward, and determines the size of the use area to be larger in the direction corresponding to the vertical direction of the virtual image region, whereas, as the position of the user's viewpoint moves downward, the image generation unit determines the position of the use area such that the virtual image region moves downward, and determines the size of the use area to be smaller in that direction.
3. The vehicle display device according to claim 1 or 2, wherein the image generation unit determines the position and the size of the use area on the display surface such that, without being affected by a change in the vertical position of the user's viewpoint, the range of distances on the road surface over which the virtual image region is superimposed on the scenery the user sees through the translucent member remains constant.
4. The vehicle display device according to any one of claims 1 to 3, wherein the use area includes at least a first use area corresponding to a first virtual image region of the virtual image region and a second use area corresponding to a second virtual image region visually recognized below the first virtual image region, and the image generation unit determines the position and size of the first use area according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit, and determines the position of the second use area according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit.
5. The vehicle display device according to any one of claims 1 to 3, wherein the use area includes at least a first use area corresponding to a first virtual image region of the virtual image region and a second use area corresponding to a second virtual image region visually recognized below the first virtual image region, the image generation unit determines the position and size of the first use area according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit and determines the position and size of the second use area according to the vertical position of the user's viewpoint acquired by the viewpoint position acquisition unit, and a rate of change of the second use area with respect to a change in the position of the user's viewpoint is made smaller than the rate of change of the first use area.
PCT/JP2017/028520 2016-08-10 2017-08-07 Vehicle display device Ceased WO2018030320A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2018533019A JP6874769B2 (en) 2016-08-10 2017-08-07 Vehicle display device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016157512 2016-08-10
JP2016-157512 2016-08-10

Publications (1)

Publication Number Publication Date
WO2018030320A1 2018-02-15

Family

ID=61162149

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/028520 Ceased WO2018030320A1 (en) 2016-08-10 2017-08-07 Vehicle display device

Country Status (2)

Country Link
JP (1) JP6874769B2 (en)
WO (1) WO2018030320A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10203199A (en) * 1997-01-17 1998-08-04 Nissan Motor Co Ltd Display device for vehicles
JP2008296701A (en) * 2007-05-30 2008-12-11 Calsonic Kansei Corp Vehicle display
JP2010002341A (en) * 2008-06-20 2010-01-07 Nissan Motor Co Ltd On-vehicle information presenting device
JP2015060180A (en) * 2013-09-20 2015-03-30 日本精機株式会社 Head-up display device
JP2016101805A (en) * 2014-11-27 2016-06-02 パイオニア株式会社 Display device, control method, program and storage medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019224922A1 (en) * 2018-05-22 2019-11-28 三菱電機株式会社 Head-up display control device, head-up display system, and head-up display control method
CN114503010A (en) * 2019-10-04 2022-05-13 株式会社小糸制作所 Head-up display
CN111231833A (en) * 2020-01-30 2020-06-05 华东交通大学 A car assisted driving system based on the combination of holographic projection and AR
JP2025087848A (en) * 2023-11-10 2025-06-10 京セラ株式会社 Display device, imaging device, display system and vehicle
JP7801516B2 (en) 2023-11-10 2026-01-16 京セラ株式会社 Display device, imaging device, display system and vehicle

Also Published As

Publication number Publication date
JPWO2018030320A1 (en) 2019-06-13
JP6874769B2 (en) 2021-05-19

Similar Documents

Publication Publication Date Title
JP6443122B2 (en) Vehicle display device
CN107848417B (en) Display device for vehicle
JP7194906B2 (en) Video display system, video display method, program, and moving body provided with video display system
US11803053B2 (en) Display control device and non-transitory tangible computer-readable medium therefor
JP5723106B2 (en) Vehicle display device and vehicle display method
US11004424B2 (en) Image display system, image display method, movable object including the image display system, and non-transitory computer-readable medium
CN112424570A (en) Head-up display
CN110001400A (en) Display apparatus
JP2016159656A (en) Display device for vehicle
JP6874769B2 (en) Vehicle display device
WO2018180596A1 (en) Vehicular display device
JP6481445B2 (en) Head-up display
US20200152157A1 (en) Image processing unit, and head-up display device provided with same
WO2022209439A1 (en) Virtual image display device
JP7145509B2 (en) VIDEO DISPLAY SYSTEM, VIDEO DISPLAY METHOD, PROGRAM AND MOBILE BODY
JP2009005054A (en) Driving support device, driving support method, and program
JP2018167669A (en) Head-up display device
JP2018103888A (en) Head-up display device
WO2018037887A1 (en) Vehicular display device
WO2018139433A1 (en) Vehicle display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17839391; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2018533019; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17839391; Country of ref document: EP; Kind code of ref document: A1)