
WO2014049787A1 - Display device, display method, program, and recording medium - Google Patents

Display device, display method, program, and recording medium

Info

Publication number
WO2014049787A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
combiner
touch
user
control unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2012/074943
Other languages
French (fr)
Japanese (ja)
Inventor
哲也 藤栄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pioneer Corp
Original Assignee
Pioneer Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corp
Priority to JP2014537959A (JP5813243B2)
Priority to PCT/JP2012/074943
Publication of WO2014049787A1
Anticipated expiration
Current legal status: Ceased

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G06F3/0418 Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/044 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • The present invention relates to the technical field of causing a virtual image to be visually recognized.
  • Patent Document 1 describes a technique of forming a touch panel on a windshield so as to correspond to a telephone push-button display that a head-up display presents ahead of and outside the windshield.
  • The main object of the present invention is to make touch operations easy to perform in a display device that causes a virtual image to be visually recognized.
  • According to the invention of claim 1, the display device includes: projection means for projecting light constituting an image; a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image; operation acquisition means for acquiring information on a touch operation performed on the combiner by the user; and determination means for determining the user's operation based on the information on the touch operation.
  • According to the invention of claim 6, a display method is executed by a display device having projection means for projecting light constituting an image and a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image. The display method includes an operation acquisition step of acquiring information on a touch operation performed on the combiner by the user, and a determination step of determining the user's operation based on the information on the touch operation.
  • According to the invention of claim 7, a program is executed by a display device that has a computer, projection means for projecting light constituting an image, and a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image. The program causes the computer to function as: operation acquisition means for acquiring information on a touch operation performed on the combiner by the user; and determination means for determining the user's operation based on the information on the touch operation.
  • According to the invention of claim 8, a recording medium records the program according to claim 7.
  • FIG. 1 shows a schematic configuration of the head-up display according to the present embodiment.
  • FIG. 2 is a diagram for explaining image correction.
  • FIG. 3 is a diagram for explaining a method of obtaining the second touch reaction area.
  • FIG. 4 shows a processing flow according to the present example.
  • FIG. 5 shows a schematic configuration of a system according to a modification.
  • In one aspect of the present invention, the display device includes: projection means for projecting light constituting an image; a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image; operation acquisition means for acquiring information on a touch operation performed on the combiner by the user; and determination means for determining the user's operation based on the information on the touch operation.
  • In the above display device, the projection means projects the light constituting the image, and the combiner reflects the projected light so that the user (for example, the driver of a moving body) visually recognizes the image as a virtual image.
  • The operation acquisition means acquires information on a touch operation performed on the combiner by the user, for example from a touch panel arranged integrally with the combiner.
  • The determination means then determines the user's operation based on the touch operation information acquired by the operation acquisition means.
  • Compared with, for example, the technique described in Patent Document 1, the touch operation is performed on a combiner arranged at a position closer to the user, so the user can perform the touch operation easily.
  • In one aspect, the display device further includes correction control means that corrects the shape of the image and causes the projection means to project light constituting the corrected image, and the determination means determines the user's operation based on at least the touch operation information and the image correction amount applied by the correction control means. Taking this correction amount into account makes it possible to determine the user's operation accurately.
  • In another aspect, the combiner has a concave shape with a predetermined curvature, recessed toward the projection means, and the determination means determines the user's operation based on the touch operation information, the image correction amount applied by the correction control means, and information related to the curvature of the combiner. Considering the curvature of the combiner in addition to the image correction amount makes it possible to determine the user's operation with even higher accuracy.
  • Preferably, the correction control means determines the image correction amount so that distortion of the virtual image visually recognized by the user is corrected. This allows the user to perform touch operations while viewing a virtual image free of distortion.
  • Preferably, the combiner is configured so that its tilt angle can be adjusted, and the correction control means determines the image correction amount according to the tilt angle of the combiner. This makes it possible to appropriately correct the image according to the tilt angle of the combiner.
  • In another aspect of the present invention, a display method is executed by a display device having projection means for projecting light constituting an image and a combiner that reflects the projected light and causes a user to visually recognize the image as a virtual image. The display method includes an operation acquisition step of acquiring information on a touch operation performed on the combiner by the user, and a determination step of determining the user's operation based on the information on the touch operation.
  • In still another aspect of the present invention, a program is executed by a display device that has a computer, projection means for projecting light constituting an image, and a combiner that reflects the projected light and causes a user to visually recognize the image as a virtual image. The program causes the computer to function as operation acquisition means for acquiring information on a touch operation performed on the combiner by the user, and determination means for determining the user's operation based on the information on the touch operation.
  • The above program can suitably be handled in a state recorded on a recording medium.
  • FIG. 1 is a schematic configuration diagram of the head-up display 2 according to the present embodiment.
  • As shown in FIG. 1, the head-up display 2 according to the present embodiment mainly includes a light source unit 3 and a combiner 9, and is mounted on a vehicle having a front window 25, a ceiling portion 27, a hood 28, a dashboard 29, and the like.
  • The head-up display 2 is an example of the "display device" in the present invention.
  • The light source unit 3 is installed on the ceiling portion 27 in the passenger compartment via support members 5a and 5b, and emits light constituting the image to be displayed toward the combiner 9. Specifically, under the control of the control unit 4, the light source unit 3 generates an original image (a real image) of the display image inside the unit and emits light constituting that image toward the combiner 9.
  • The driver thereby visually recognizes the virtual image "Iv" via the combiner 9.
  • For example, a laser, DLP (Digital Light Processing), or LCOS (Liquid Crystal on Silicon) may be applied to the light source unit 3 ("DLP" and "LCOS" are registered trademarks).
  • The light source unit 3 corresponds to an example of the "projection means" in the present invention.
  • The combiner 9 is configured as a half mirror having both a reflection function and a transmission function.
  • The display image emitted from the light source unit 3 is projected onto the combiner 9, which reflects it toward the driver's eye point Pe so that the display image is visually recognized as the virtual image Iv.
  • The combiner 9 has a concave shape with a predetermined curvature, recessed toward the light source unit 3. This allows the driver to visually recognize a virtual image Iv in which the display image is enlarged.
  • Further, the combiner 9 has a support shaft portion 8 installed on the ceiling portion 27 and rotates about the support shaft portion 8. That is, the combiner 9 is configured so that its tilt angle can be adjusted about the support shaft portion 8.
  • The support shaft portion 8 is installed, for example, on the ceiling portion 27 near the upper end of the front window 25, in other words, near the position where a sun visor (not shown) for the driver would be installed.
  • The support shaft portion 8 may be installed in place of the above-described sun visor.
  • In this embodiment the light source unit 3 and the combiner 9 are separate bodies, but they may instead be integrated. In that case as well, the combiner is attached to the light source unit via a support shaft portion that allows the tilt angle of the combiner to be adjusted.
  • An electrostatic sheet 9a is provided on the surface of the combiner 9.
  • The electrostatic sheet 9a is a capacitive touch panel and outputs to the control unit 4 a signal corresponding to a touch operation by the driver.
  • For example, the electrostatic sheet 9a outputs to the control unit 4 a signal corresponding to the position touched on the electrostatic sheet 9a (synonymous with the position touched on the combiner 9; the same applies hereinafter).
  • The electrostatic sheet 9a has a shape that follows the curved surface of the combiner 9 and is attached to the surface of the combiner 9 onto which the light from the light source unit 3 is projected.
  • In addition, the electrostatic sheet 9a is a transparent sheet, so the reflection and transmission functions of the combiner 9 are preserved.
  • The control unit 4 is built into the light source unit 3, has a CPU, RAM, ROM, and the like (not shown), and performs overall control of the head-up display 2.
  • In this embodiment, the control unit 4 causes the light source unit 3 to project light so that the driver visually recognizes the image as a virtual image via the combiner 9, acquires from the electrostatic sheet 9a a signal corresponding to a touch operation by the driver, and determines the driver's operation based on that signal.
  • The image to be displayed may be generated by the control unit 4, or may be generated by a device outside the head-up display 2 and acquired by the control unit 4.
  • As described in detail later, the control unit 4 corresponds to an example of the "operation acquisition means", "determination means", and "correction control means" in the present invention.
  • In FIG. 1, the electrostatic sheet 9a is provided on the surface of the combiner 9 onto which the light from the light source unit 3 is projected, but it may instead be provided on the opposite surface.
  • In that case, detection sensitivity can be increased for touch operations from the side opposite to the projection surface of the combiner 9 (that is, the windshield side). Moreover, a touch operation performed from the windshield side does not block the light projected from the light source unit 3 onto the combiner 9.
  • A touch panel having both a reflection function and a transmission function may be used instead of the electrostatic sheet 9a. In that case, if the touch panel has the same function as the combiner 9, a separate combiner 9 is unnecessary. The touch panel is also not limited to the capacitive type; various other known types (for example, the resistive film type) can be applied.
  • The light source unit 3 is not limited to being installed on the ceiling portion 27; it may instead be installed inside the dashboard 29.
  • The control unit 4 causes the light source unit 3 to project light constituting an image that includes an image, such as a button, on which the driver is to perform touch operations (hereinafter a "touch image"), and determines the driver's operation on the touch image based on the signal acquired from the electrostatic sheet 9a. Specifically, the control unit 4 first obtains the area corresponding to the touch image in the image to be displayed (hereinafter the "first touch reaction area") and then obtains the area on the combiner 9 onto which the first touch reaction area is projected (in other words, the area formed on the combiner 9 corresponding to the first touch reaction area, hereinafter the "second touch reaction area").
  • The second touch reaction area is derived from the first touch reaction area in order to compensate for the difference between the virtual image visually recognized by the driver and the image formed on the combiner 9. This corresponds to performing a correction relating to the touch panel coordinates of the electrostatic sheet 9a (in other words, a calibration, hereinafter also called "touch panel correction").
  • Thereafter, based on the signal acquired from the electrostatic sheet 9a, the control unit 4 obtains the position on the combiner 9 at which the driver performed the touch operation (which is uniquely the position on the electrostatic sheet 9a) and determines the driver's operation on the touch image by comparing that position with the second touch reaction area. When the position at which the touch operation was performed is included in the second touch reaction area, the control unit 4 determines that a touch operation was performed on the touch image and executes the predetermined operation associated with that touch image.
  • In this embodiment, the control unit 4 also corrects the shape of the image to be displayed (the original image) so that distortion of the virtual image visually recognized by the driver is corrected, and causes the light source unit 3 to project light constituting the corrected original image (hereinafter the "corrected image").
  • For example, the control unit 4 performs various image corrections such as rotation correction and keystone correction.
  • The control unit 4 then obtains the second touch reaction area based on the correction amount used for this image correction (hereinafter the "image correction amount"). This is because the image formed on the combiner 9 corresponds to the corrected image.
  • Furthermore, in this embodiment, the control unit 4 obtains the second touch reaction area in consideration not only of the image correction amount but also of the curvature of the combiner 9 (the curvature of its concave shape). This is because the image formed on the combiner 9 is the corrected image enlarged by the curvature of the combiner 9.
  • If the virtual image is not distorted, it is unnecessary to correct the original image, that is, to generate a corrected image; in that case the image correction amount need not be considered when obtaining the second touch reaction area. Likewise, when the combiner 9 is planar rather than concave (that is, has no curvature), the curvature of the combiner 9 need not be considered when obtaining the second touch reaction area.
  • Next, the control method performed by the control unit 4 will be described more specifically with reference to FIGS. 2 and 3.
  • FIG. 2 is a diagram for explaining image correction.
  • FIG. 2A shows an example of the original image 70.
  • The original image 70 includes two buttons 70a and 70b as touch images.
  • The control unit 4 obtains the areas corresponding to the buttons 70a and 70b in the original image 70 as the first touch reaction areas.
  • FIG. 2B shows an example of a virtual image 71 visually recognized when the uncorrected original image 70 is used. The virtual image 71 is curved in one direction compared with the original image 70; that is, the virtual image 71 is distorted. Such distortion is caused by, for example, the concave shape of the combiner 9, the inclination of the combiner 9 with respect to the light source unit 3 (that is, light from the light source unit 3 entering the combiner 9 obliquely), or the driver's eye point being out of its proper position.
  • FIG. 2C shows a corrected image 72 for correcting the distortion of the virtual image 71 shown in FIG. 2B.
  • The corrected image 72 is an image obtained by curving the original image 70 in the direction opposite to the distortion arising in the virtual image 71.
  • The corrected image 72 is the image projected by the light source unit 3.
  • For example, the control unit 4 generates the corrected image 72 in response to the driver's operation of an input device (a switch, button, remote controller, or the like, not shown in FIG. 1) of the head-up display 2. That is, in this example, the driver operates the input device so that the distortion of the visually recognized virtual image is eliminated, and the control unit 4 generates the corrected image 72 according to that input operation.
  • In another example, an image corresponding to the virtual image visually recognized by the driver is captured by a camera, and the control unit 4 analyzes the captured image to generate a corrected image 72 that can eliminate the distortion arising in the virtual image.
  • FIG. 2D shows an example of a virtual image 73 visually recognized when the corrected image 72 shown in FIG. 2C is used.
  • In the virtual image 73, the distortion shown in FIG. 2B has been eliminated; that is, a virtual image 73 that substantially matches the original image 70 is visually recognized.
  • FIG. 3 is a diagram for specifically explaining the method of obtaining the second touch reaction area.
  • FIG. 3A shows the combiner 9 including the electrostatic sheet 9a.
  • FIG. 3B shows an example of an image 75 formed by the light projected from the light source unit 3 on a plane S1 facing the light source unit 3.
  • The image 75 corresponds to the image projected by the light source unit 3.
  • FIG. 3B illustrates the image 75 obtained when the corrected image 72 shown in FIG. 2C is used; this image 75 essentially matches the corrected image 72.
  • FIG. 3C shows an image 76 formed by the light projected from the light source unit 3 on the surface (curved surface) S2 of the combiner 9 facing the light source unit 3.
  • This image 76 corresponds to the virtual image visually recognized by the driver.
  • Like FIG. 3B, FIG. 3C illustrates the image 76 obtained when the corrected image 72 shown in FIG. 2C is used.
  • The image 76 formed on the surface S2 of the combiner 9 is an enlargement of the image 75 formed on the plane S1 shown in FIG. 3B; specifically, it can be seen to be enlarged in the left-right direction. This is because the combiner 9 has a concave shape with a curvature, as shown in FIG. 3A.
  • In this embodiment, the areas corresponding to the buttons 76a and 76b included in the image 76 formed on the surface S2 of the combiner 9 are obtained as the second touch reaction areas.
  • Specifically, the image 76 is obtained by enlarging, according to the curvature of the combiner 9, the corrected image 72 obtained by the procedure described with reference to FIG. 2, and the second touch reaction areas corresponding to the buttons 76a and 76b included in the image 76 are obtained.
  • That is, the control unit 4 determines the second touch reaction areas by transforming the first touch reaction areas corresponding to the buttons 70a and 70b in the original image 70, based on the image correction amount used for the image correction and the curvature of the combiner 9.
  • In this case, a coordinate system whose origin is a predetermined position on the combiner 9 is defined, and the control unit 4 obtains the positions of the second touch reaction areas defined in this coordinate system.
  • The coordinate system defining the positions of the second touch reaction areas is preferably the same as the coordinate system used to determine, from the signal of the electrostatic sheet 9a, the position at which a touch operation was performed.
  • In FIG. 3C, the image 76 formed on the surface S2 of the combiner 9 is shown enlarged in the left-right direction, but depending on the curvature of the combiner 9, the image 76 may instead be enlarged in the up-down direction, or in both the left-right and up-down directions.
  • In step S101 of the processing flow (FIG. 4), the control unit 4 generates the image to be displayed (the original image).
  • In this case, the control unit 4 generates an original image that includes a touch image.
  • The control unit 4 need not generate the original image itself; a device outside the head-up display 2 may generate the original image, and the control unit 4 may acquire it.
  • The process then proceeds to step S102.
  • In step S102, the control unit 4 obtains the first touch reaction area corresponding to the touch image in the original image.
  • When the original image includes a plurality of touch images, the control unit 4 obtains a first touch reaction area corresponding to each of them. The process then proceeds to step S103.
  • In step S103, the control unit 4 corrects the shape of the original image, for example by rotation correction or keystone correction, so that distortion of the virtual image visually recognized by the driver is corrected.
  • For example, the driver operates the input device (switch, button, remote controller, or the like) of the head-up display 2 so that the distortion of the visible virtual image is eliminated, and the control unit 4 corrects the original image according to that input operation.
  • The process then proceeds to step S104.
  • In step S104, the control unit 4 acquires the curvature of the combiner 9 (the curvature of its concave shape) stored in a memory or the like in the head-up display 2. Instead of the curvature itself, the control unit 4 may acquire the degree to which the image is enlarged according to the curvature of the combiner 9 (for example, the degree of enlargement in the left-right and/or up-down direction), or the image size after enlargement. All of these are information related to the curvature of the combiner.
  • After step S104, the process proceeds to step S105.
  • In step S105, the control unit 4 obtains the second touch reaction area by transforming the first touch reaction area obtained in step S102 based on the image correction amount of step S103 and the curvature of the combiner 9 acquired in step S104. That is, the control unit 4 obtains the second touch reaction area corresponding to the first touch reaction area, taking into account that the corrected image obtained by correcting the original image is enlarged according to the curvature when it is formed on the combiner 9. In this case, the control unit 4 obtains the position of the second touch reaction area defined in the coordinate system whose origin is a predetermined position on the combiner 9.
  • For example, the control unit 4 obtains the second touch reaction area corresponding to the first touch reaction area from the image correction amount and the curvature of the combiner 9 using a predetermined arithmetic expression.
  • In another example, a table associating the parameters for obtaining the second touch reaction area from the first touch reaction area (that is, the correction amounts used for touch panel correction) with image correction amounts and combiner curvatures is created in advance, and the control unit 4 refers to this table to obtain the second touch reaction area corresponding to the first touch reaction area.
  • When a plurality of first touch reaction areas were obtained in step S102 (that is, when the original image includes a plurality of touch images), the control unit 4 obtains a second touch reaction area corresponding to each of them. The control unit 4 then stores the obtained second touch reaction areas in the RAM or the like, and the process proceeds to step S106.
  • In step S106, the control unit 4 determines whether the combiner 9 has been touched, according to whether a signal has been acquired from the electrostatic sheet 9a. If it has been touched (step S106: Yes), the process proceeds to step S107; if not (step S106: No), the process ends.
  • In step S107, the control unit 4 obtains the touched position on the combiner 9 based on the signal acquired from the electrostatic sheet 9a and determines whether that position is included in the second touch reaction area stored in the RAM or the like in step S105. In this case, the control unit 4 compares the touched position with the second touch reaction area in the coordinate system whose origin is a predetermined position on the combiner 9. If the touched position is included in the second touch reaction area (step S107: Yes), the process proceeds to step S108; when a plurality of second touch reaction areas were obtained in step S105 (that is, when the original image includes a plurality of touch images), the control unit 4 identifies which second touch reaction area was touched. If the touched position is not included in any second touch reaction area (step S107: No), the process ends.
  • In step S108, the control unit 4 executes the operation corresponding to the touched second touch reaction area, that is, the predetermined operation associated with the touch image corresponding to that area. For example, the control unit 4 outputs a control signal for controlling components within the head-up display 2, or a control signal for controlling a device outside the head-up display 2. The process then ends.
  • As described above, in this embodiment the electrostatic sheet 9a is provided on the combiner 9, which is arranged closer to the driver than, for example, the windshield used in the technique of Patent Document 1.
  • The driver can therefore perform touch operations easily, and driving safety during a touch operation can be ensured.
  • In addition, by using the second touch reaction area obtained in consideration of the correction of the image shape and the curvature of the combiner 9 (that is, by performing touch panel correction), touch operations can be determined with high accuracy.
  • Furthermore, touch panel correction is completed automatically once the driver performs the operation for correcting the image shape; that is, according to this embodiment, no operation other than the image correction operation needs to be newly imposed on the driver for touch panel correction.
  • (Modification 1) In the above-described embodiment, the second touch reaction area is obtained based on the image correction amount and the curvature of the combiner 9; that is, the optimal correction amount for touch panel correction (hereinafter the "touch panel correction amount" as appropriate) is obtained based on the image correction amount and the curvature of the combiner 9.
  • Here, if the curvature of the combiner 9 is constant, the touch panel correction amount can basically be obtained once the image correction amount is determined. Therefore, in another example, a table associating image correction amounts with touch panel correction amounts is created in advance, and the control unit 4 obtains the touch panel correction amount by referring to this table.
  • Also, since the image correction amount is generally determined by the driver's eye point (that is, the image correction amount tends to be substantially constant for the same eye point), a table associating eye points with touch panel correction amounts can be created by obtaining the image correction amount for each eye point.
  • In yet another example, the control unit 4 obtains the touch panel correction amount by referring to such a table associating eye points with touch panel correction amounts.
  • In this way, both image correction and touch panel correction can be performed appropriately merely by inputting information identifying the driver (for example, an ID) into the head-up display 2.
  • In that case, a table associating drivers with touch panel correction amounts may be used.
  • As described above, the combiner 9 is configured so that its tilt angle can be adjusted.
  • The image correction amount is generally determined by the tilt angle of the combiner 9 (that is, the image correction amount is substantially constant for the same tilt angle), so a table associating tilt angles with touch panel correction amounts can be created by obtaining the image correction amount for each tilt angle. In yet another example, therefore, the control unit 4 obtains the touch panel correction amount by referring to such a table. Both image correction and touch panel correction can thereby be performed appropriately as the driver adjusts the tilt angle.
  • The tilt angle of the combiner 9 can be obtained by providing an angle sensor or the like capable of detecting it.
  • Further, the touch panel correction amount can be obtained using a table associating eye points, tilt angles, and touch panel correction amounts. This example takes into account that different tilt angles may be set even for the same eye point, making it possible to perform both image correction and touch panel correction with even higher accuracy. (A minimal code sketch of such a table-based lookup is given after this list.)
  • (Modification 2) In the above-described embodiment, an example in which the present invention is applied to the head-up display 2 alone was shown, but the present invention is not limited to this.
  • For example, the present invention can also be applied to a system including the head-up display 2 and a navigation device capable of communicating with it.
  • In this case, the system including the head-up display 2 and the navigation device corresponds to an example of the "display device" in the present invention.
  • FIG. 5 is a block diagram showing a schematic configuration of the system according to Modification 2.
  • As shown in FIG. 5, the system includes a navigation device 100 and the head-up display 2.
  • The navigation device 100 is configured to communicate with the head-up display 2 (by either wireless or wired communication) and includes a CPU 100a and the like.
  • The navigation device 100 may be a stationary navigation device installed in a vehicle, a portable terminal such as a PND (Portable Navigation Device), or a smartphone.
  • The CPU 100a in the navigation device 100 performs, for example, route guidance from a departure point to a destination.
  • The head-up display 2 has a configuration similar to that shown in FIG. 1.
  • In Modification 2, the CPU 100a in the navigation device 100 generates the image to be displayed, including a touch image, obtains the first touch reaction area corresponding to the touch image in that image, and obtains the second touch reaction area, formed on the combiner 9, corresponding to the first touch reaction area. More specifically, the CPU 100a corrects the shape of the image to be displayed (the original image) and obtains the second touch reaction area based on the image correction amount and the curvature of the combiner 9. In this case, the CPU 100a generates a corrected image and supplies it to the head-up display 2.
  • The CPU 100a also acquires the signal from the electrostatic sheet 9a of the head-up display 2, obtains from that signal the position on the combiner 9 at which the touch operation was performed, and determines the driver's operation on the touch image by comparing that position with the second touch reaction area.
  • In this case, the CPU 100a in the navigation device 100 functions as the "operation acquisition means", "determination means", and "correction control means" in the present invention.
  • In another example, the control unit 4 in the head-up display 2 may perform the image correction.
  • In that case, the control unit 4 supplies information on the image correction amount used for the image correction to the navigation device 100, and the CPU 100a obtains the second touch reaction area based on that image correction amount.
  • In this example, the control unit 4 in the head-up display 2 functions as the "correction control means" in the present invention, and the CPU 100a in the navigation device 100 functions as the "operation acquisition means" and "determination means".
  • (Modification 3) Although the examples above apply the present invention to a vehicle, the application of the present invention is not limited to this.
  • The present invention can be applied to various moving bodies such as ships, helicopters, and airplanes in addition to vehicles.
  • The present invention can also be applied to a head-up display, a navigation device (including a mobile phone such as a smartphone), and the like.
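
As a loose illustration of the table-based approach of Modification 1 above, the following Python sketch looks up a touch panel correction amount from a table keyed by the tilt angle of the combiner 9, choosing the nearest tabulated angle. The table contents, the nearest-neighbor choice, and the use of a single scale factor as the "correction amount" are illustrative assumptions; the patent only states that such tables (keyed by image correction amount, eye point, tilt angle, or combinations of these) are created in advance and referred to.

```python
# Hypothetical table: tilt angle (degrees) -> touch panel correction amount.
# A single horizontal scale factor stands in for the correction amount here;
# real entries would be measured or computed in advance for the device.
TILT_TO_CORRECTION = {20.0: 1.15, 25.0: 1.20, 30.0: 1.26}

def touch_panel_correction(tilt_angle_deg):
    """Return the correction amount for the tabulated tilt angle nearest
    to the measured one (e.g., reported by an angle sensor on the
    support shaft portion 8)."""
    nearest = min(TILT_TO_CORRECTION, key=lambda t: abs(t - tilt_angle_deg))
    return TILT_TO_CORRECTION[nearest]

print(touch_panel_correction(24.0))  # -> 1.2 (nearest tabulated angle: 25.0)
```

A table keyed by both eye point and tilt angle, as in the last example above, would simply use a composite key such as `(eye_point_id, tilt_angle)`.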

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Instrument Panels (AREA)
  • User Interface Of Digital Computer (AREA)

Description

Display device, display method, program, and recording medium

The present invention relates to the technical field of causing a virtual image to be visually recognized.

Conventionally, display devices such as head-up displays that cause an image to be visually recognized as a virtual image are known. For example, Patent Document 1 describes a technique of forming a touch panel on a windshield so as to correspond to a telephone push-button display that a head-up display presents ahead of and outside the windshield.

JP 7-307775 A

However, in the technique described in Patent Document 1, the touch panel is formed on the windshield, which is somewhat distant from the driver, making touch operations difficult to perform. For example, during driving, touch operations basically could not be performed, for reasons such as safety.

The problems to be solved by the present invention include the above as one example. The main object of the present invention is to make touch operations easy to perform in a display device that causes a virtual image to be visually recognized.

According to the invention of claim 1, the display device includes: projection means for projecting light constituting an image; a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image; operation acquisition means for acquiring information on a touch operation performed on the combiner by the user; and determination means for determining the user's operation based on the information on the touch operation.

According to the invention of claim 6, a display method is executed by a display device having projection means for projecting light constituting an image and a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image. The display method includes an operation acquisition step of acquiring information on a touch operation performed on the combiner by the user, and a determination step of determining the user's operation based on the information on the touch operation.

According to the invention of claim 7, a program is executed by a display device that has a computer, projection means for projecting light constituting an image, and a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image. The program causes the computer to function as: operation acquisition means for acquiring information on a touch operation performed on the combiner by the user; and determination means for determining the user's operation based on the information on the touch operation.

According to the invention of claim 8, a recording medium records the program according to claim 7.

FIG. 1 shows a schematic configuration of the head-up display according to the present embodiment. FIG. 2 is a diagram for explaining image correction. FIG. 3 is a diagram for explaining a method of obtaining the second touch reaction area. FIG. 4 shows a processing flow according to the present example. FIG. 5 shows a schematic configuration of a system according to a modification.

In one aspect of the present invention, the display device includes: projection means for projecting light constituting an image; a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image; operation acquisition means for acquiring information on a touch operation performed on the combiner by the user; and determination means for determining the user's operation based on the information on the touch operation.

In the above display device, the projection means projects the light constituting the image, and the combiner reflects the projected light so that the user (for example, the driver of a moving body) visually recognizes the image as a virtual image. The operation acquisition means acquires information on a touch operation performed on the combiner by the user, for example from a touch panel arranged integrally with the combiner. The determination means then determines the user's operation based on the acquired touch operation information. Compared with, for example, the technique described in Patent Document 1, the touch operation is performed on a combiner arranged at a position closer to the user, so the user can perform the touch operation easily.

In one aspect of the above display device, the device further includes correction control means that corrects the shape of the image and causes the projection means to project light constituting the corrected image, and the determination means determines the user's operation based on at least the touch operation information and the image correction amount applied by the correction control means. Taking this correction amount into account makes it possible to determine the user's operation accurately.

In another aspect of the above display device, the combiner has a concave shape with a predetermined curvature, recessed toward the projection means, and the determination means determines the user's operation based on the touch operation information, the image correction amount applied by the correction control means, and information related to the curvature of the combiner. Considering the curvature of the combiner in addition to the image correction amount makes it possible to determine the user's operation with even higher accuracy.

Preferably, in the above display device, the correction control means determines the image correction amount so that distortion of the virtual image visually recognized by the user is corrected. This allows the user to perform touch operations while viewing a virtual image free of distortion.

Preferably, in the above display device, the combiner is configured so that its tilt angle can be adjusted, and the correction control means determines the image correction amount according to the tilt angle of the combiner. This makes it possible to appropriately correct the image according to the tilt angle of the combiner.

In another aspect of the present invention, a display method is executed by a display device having projection means for projecting light constituting an image and a combiner that reflects the projected light and causes a user to visually recognize the image as a virtual image. The display method includes an operation acquisition step of acquiring information on a touch operation performed on the combiner by the user, and a determination step of determining the user's operation based on the information on the touch operation.

In still another aspect of the present invention, a program is executed by a display device that has a computer, projection means for projecting light constituting an image, and a combiner that reflects the projected light and causes a user to visually recognize the image as a virtual image. The program causes the computer to function as operation acquisition means for acquiring information on a touch operation performed on the combiner by the user, and determination means for determining the user's operation based on the information on the touch operation.

The above program can suitably be handled in a state recorded on a recording medium.

Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings.

[Device configuration]
FIG. 1 is a schematic configuration diagram of the head-up display 2 according to the present embodiment. As shown in FIG. 1, the head-up display 2 according to the present embodiment mainly includes a light source unit 3 and a combiner 9, and is mounted on a vehicle having a front window 25, a ceiling portion 27, a hood 28, a dashboard 29, and the like. The head-up display 2 is an example of the "display device" in the present invention.

The light source unit 3 is installed on the ceiling portion 27 in the passenger compartment via support members 5a and 5b, and emits light constituting the image to be displayed toward the combiner 9. Specifically, under the control of the control unit 4, the light source unit 3 generates an original image (a real image) of the display image inside the unit and emits light constituting that image toward the combiner 9, so that the driver visually recognizes the virtual image "Iv" via the combiner 9. For example, a laser, DLP (Digital Light Processing), or LCOS (Liquid Crystal on Silicon) may be applied to the light source unit 3 ("DLP" and "LCOS" are registered trademarks). The light source unit 3 corresponds to an example of the "projection means" in the present invention.

The combiner 9 is configured as a half mirror having both a reflection function and a transmission function. The display image emitted from the light source unit 3 is projected onto the combiner 9, which reflects it toward the driver's eye point Pe so that the display image is visually recognized as the virtual image Iv. The combiner 9 has a concave shape with a predetermined curvature, recessed toward the light source unit 3; this allows the driver to visually recognize a virtual image Iv in which the display image is enlarged. Further, the combiner 9 has a support shaft portion 8 installed on the ceiling portion 27 and rotates about the support shaft portion 8; that is, the combiner 9 is configured so that its tilt angle can be adjusted about the support shaft portion 8. The support shaft portion 8 is installed, for example, on the ceiling portion 27 near the upper end of the front window 25, in other words, near the position where a sun visor (not shown) for the driver would be installed. The support shaft portion 8 may be installed in place of the above-described sun visor. In this embodiment the light source unit 3 and the combiner 9 are separate bodies, but they may instead be integrated; in that case as well, the combiner is attached to the light source unit via a support shaft portion that allows the tilt angle of the combiner to be adjusted.

An electrostatic sheet 9a is provided on the surface of the combiner 9. The electrostatic sheet 9a is a capacitive touch panel and outputs to the control unit 4 a signal corresponding to a touch operation by the driver. For example, the electrostatic sheet 9a outputs to the control unit 4 a signal corresponding to the position touched on the electrostatic sheet 9a (synonymous with the position touched on the combiner 9; the same applies hereinafter). The electrostatic sheet 9a has a shape that follows the curved surface of the combiner 9 and is attached to the surface of the combiner 9 onto which the light from the light source unit 3 is projected. In addition, the electrostatic sheet 9a is a transparent sheet, so the reflection and transmission functions of the combiner 9 are preserved.

The control unit 4 is built into the light source unit 3, has a CPU, RAM, ROM, and the like (not shown), and performs overall control of the head-up display 2. In this embodiment, the control unit 4 causes the light source unit 3 to project light so that the driver visually recognizes the image as a virtual image via the combiner 9, acquires from the electrostatic sheet 9a a signal corresponding to a touch operation by the driver, and determines the driver's operation based on that signal. The image to be displayed may be generated by the control unit 4, or may be generated by a device outside the head-up display 2 and acquired by the control unit 4. As described in detail later, the control unit 4 corresponds to an example of the "operation acquisition means", "determination means", and "correction control means" in the present invention.

In FIG. 1, the electrostatic sheet 9a is provided on the surface of the combiner 9 onto which the light from the light source unit 3 is projected, but it may instead be provided on the opposite surface. In that case, detection sensitivity can be increased for touch operations from the side opposite to the projection surface of the combiner 9 (that is, the windshield side). Moreover, a touch operation performed from the windshield side does not block the light projected from the light source unit 3 onto the combiner 9.

A touch panel having both a reflection function and a transmission function may be used instead of the electrostatic sheet 9a. In that case, if the touch panel has the same function as the combiner 9, a separate combiner 9 is unnecessary. The touch panel is also not limited to the capacitive type; various other known types (for example, the resistive film type) can be applied.

Furthermore, as shown in FIG. 1, the light source unit 3 is not limited to being installed on the ceiling portion 27; it may instead be installed inside the dashboard 29.

[Control Method]
Next, the control method performed by the control unit 4 of the head-up display 2 in this embodiment will be described.

In this embodiment, the control unit 4 causes the light source unit 3 to project light constituting an image that includes an image intended to be touch-operated by the driver, such as a button (hereinafter called a "touch image"), and determines the driver's operation on the touch image based on the signal acquired from the electrostatic sheet 9a. Specifically, the control unit 4 first obtains the area corresponding to the touch image within the image to be displayed (hereinafter called the "first touch reaction area"), and then obtains the area on the combiner 9 onto which the first touch reaction area is projected (in other words, the area formed on the combiner 9 that corresponds to the first touch reaction area; hereinafter called the "second touch reaction area"). The second touch reaction area is derived from the first touch reaction area in order to compensate for the difference between the virtual image visually recognized by the driver and the image formed on the combiner 9; this compensation corresponds to a correction of the touch panel coordinates of the electrostatic sheet 9a (in other words, a calibration; hereinafter also called "touch panel correction").

Thereafter, based on the signal acquired from the electrostatic sheet 9a, the control unit 4 obtains the position on the combiner 9 at which the driver performed the touch operation (which is uniquely the position on the electrostatic sheet 9a) and compares that position with the second touch reaction area to determine the driver's operation on the touch image. If the touched position is included in the second touch reaction area, the control unit 4 determines that a touch operation was performed on the touch image and executes the predetermined operation associated with that touch image.
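
This comparison can be pictured with a minimal sketch. The representation is an assumption introduced here, not the patented implementation: reaction areas are taken to be axis-aligned rectangles (x, y, width, height) in combiner coordinates, keyed by a name.

    def contains(area, px, py):
        """True if point (px, py) lies inside an axis-aligned area (x, y, w, h)."""
        x, y, w, h = area
        return x <= px <= x + w and y <= py <= y + h

    def judge_touch(touch_pos, second_areas):
        """Return the name of the second touch reaction area containing the touch,
        or None if the touch landed outside every area."""
        px, py = touch_pos
        for name, area in second_areas.items():
            if contains(area, px, py):
                return name
        return None

    # Example: a touch at (150, 400) on the combiner hits "button_70a".
    hit = judge_touch((150.0, 400.0), {"button_70a": (100, 380, 220, 90)})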

In this embodiment, the control unit 4 also corrects the shape of the image to be displayed (the original image) so that the distortion of the virtual image visually recognized by the driver is compensated, and causes the light source unit 3 to project light constituting the image obtained by correcting the original image (hereinafter called the "corrected image"). For example, the control unit 4 performs various image corrections such as rotation correction and keystone correction. The control unit 4 then obtains the second touch reaction area described above based on the correction amount used for this image correction (hereinafter called the "image correction amount"), because the image formed on the combiner 9 corresponds to the corrected image. Furthermore, in this embodiment, the control unit 4 obtains the second touch reaction area in consideration not only of the image correction amount but also of the curvature of the combiner 9 (the curvature of its concave shape), because the image formed on the combiner 9 is the corrected image magnified by that curvature.
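
A rough sketch of this derivation follows. It treats the image correction as a point warp derived from the image correction amount and the curvature effect as per-axis magnification factors; both are simplifying assumptions for illustration, since the document leaves the concrete arithmetic open.

    def to_second_area(first_area, warp, gain_x=1.0, gain_y=1.0):
        """Map a first touch reaction area (x, y, w, h) in source-image pixels to a
        second touch reaction area in combiner coordinates: apply the image-shape
        correction to the rectangle's corners, then the curvature magnification,
        and take the bounding box of the result."""
        x, y, w, h = first_area
        corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
        warped = [warp(cx, cy) for cx, cy in corners]   # same warp as the image
        xs = [cx * gain_x for cx, _ in warped]          # curvature magnification
        ys = [cy * gain_y for _, cy in warped]
        return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

    # Example with an identity warp and a 1.2x horizontal magnification:
    second = to_second_area((80, 360, 200, 80), lambda cx, cy: (cx, cy), gain_x=1.2)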

If the virtual image is not distorted, there is no need to correct the original image, that is, no corrected image needs to be generated; in that case the image correction amount need not be considered when obtaining the second touch reaction area. Likewise, if the combiner 9 is planar rather than concave (that is, it has no curvature), the curvature of the combiner 9 need not be considered when obtaining the second touch reaction area.

Next, the control method performed by the control unit 4 will be described more specifically with reference to FIGS. 2 and 3.

FIG. 2 is a diagram for explaining the image correction. FIG. 2(a) shows an example of the original image 70, which includes two buttons 70a and 70b as touch images. In this embodiment, the control unit 4 obtains the areas corresponding to the buttons 70a and 70b in the original image 70 as the first touch reaction areas.
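
For instance, the first touch reaction areas might simply be recorded alongside the original image as the pixel rectangles occupied by the buttons. The names and coordinates below are illustrative assumptions, not values from the document.

    # Source image 70 with two touch images; rectangles are (x, y, width, height)
    # in source-image pixels.
    ORIGINAL_IMAGE_SIZE = (800, 480)
    FIRST_TOUCH_REACTION_AREAS = {
        "button_70a": (80, 360, 200, 80),
        "button_70b": (520, 360, 200, 80),
    }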

FIG. 2(b) shows an example of the virtual image 71 visually recognized when the uncorrected original image 70 is used. Compared with the original image 70, the virtual image 71 is curved in one direction; in other words, the virtual image 71 is distorted. Such distortion can arise, for example, because the combiner 9 is concave, because the combiner 9 is inclined with respect to the light source unit 3 (that is, the light from the light source unit 3 strikes the combiner 9 obliquely), or because the driver's eye point deviates from the proper position.

FIG. 2(c) shows a corrected image 72 for compensating the distortion of the virtual image 71 shown in FIG. 2(b). The corrected image 72 is obtained by curving the original image 70 in the direction opposite to the distortion appearing in the virtual image 71, and it is the image that the light source unit 3 projects. In one example, the control unit 4 generates the corrected image 72 according to the driver's operation of an input device of the head-up display 2 (a switch, button, remote controller, or the like; not shown in FIG. 1): the driver operates the input device until the distortion of the visually recognized virtual image disappears, and the control unit 4 generates the corrected image 72 accordingly. In another example, a camera captures an image corresponding to the virtual image visually recognized by the driver, and the control unit 4 analyzes the captured image to generate a corrected image 72 that eliminates the distortion appearing in the virtual image.
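
The interactive adjustment could be pictured as a loop in which the driver nudges warp parameters until the virtual image looks straight. The parameterization below (a rotation plus a parabolic bend along one axis) stands in for the rotation and keystone corrections named in the text; the function names and the read_user_input interface are invented for the sketch.

    import math

    def warp_point(x, y, rotation_deg=0.0, bend=0.0):
        """Warp one source-image point by a rotation followed by a parabolic bend
        along x, which curves the image opposite to the observed distortion."""
        r = math.radians(rotation_deg)
        xr = x * math.cos(r) - y * math.sin(r)
        yr = x * math.sin(r) + y * math.cos(r)
        return xr, yr + bend * xr * xr

    def adjust_until_straight(points, read_user_input):
        """Let the user nudge the warp parameters until the virtual image looks
        straight; read_user_input() returns (d_rotation, d_bend) or None when done."""
        rotation, bend = 0.0, 0.0
        while (step := read_user_input()) is not None:
            rotation, bend = rotation + step[0], bend + step[1]
        return [warp_point(x, y, rotation, bend) for x, y in points]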

FIG. 2(d) shows an example of the virtual image 73 visually recognized when the corrected image 72 shown in FIG. 2(c) is used. In the virtual image 73, the distortion shown in FIG. 2(b) has been eliminated; that is, a virtual image 73 that substantially matches the original image 70 is visually recognized.

FIG. 3 is a diagram for specifically explaining how the second touch reaction area is obtained. Here, as shown in FIG. 3(a), consider a case in which the combiner 9 (including the electrostatic sheet 9a) has a concave shape recessed toward the light source unit 3 with a predetermined curvature. FIG. 3(b) shows an example of an image 75 (a hypothetical image) formed, by the light projected from the light source unit 3, on a plane S1 facing the light source unit 3; the image 75 corresponds to the image projected by the light source unit 3. FIG. 3(b) illustrates the image 75 obtained when the corrected image 72 shown in FIG. 2(c) is used, and this image 75 basically matches that corrected image 72.

FIG. 3(c) shows an image 76 formed, by the light projected from the light source unit 3, on the surface (curved surface) S2 of the combiner 9 that faces toward the light source unit 3; this image 76 corresponds to the virtual image visually recognized by the driver. As in FIG. 3(b), FIG. 3(c) illustrates the image 76 obtained when the corrected image 72 shown in FIG. 2(c) is used. As shown in FIG. 3(c), the image 76 formed on the surface S2 of the combiner 9 is the image 75 formed on the plane S1 of FIG. 3(b) magnified in the left-right direction (that is, the corrected image 72 magnified in the left-right direction). This is because, as shown in FIG. 3(a), the combiner 9 has a concave shape with a curvature.

In this embodiment, the areas corresponding to the buttons 76a and 76b included in the image 76 formed on the surface S2 of the combiner 9 are obtained as the second touch reaction areas. Specifically, the image 76 obtained by magnifying the corrected image 72, produced by the procedure of FIG. 2, according to the curvature of the combiner 9 is determined, and the second touch reaction areas corresponding to the buttons 76a and 76b included in that image 76 are obtained. For example, the control unit 4 obtains the second touch reaction areas by transforming the first touch reaction areas corresponding to the buttons 70a and 70b in the original image 70 based on the image correction amount used for the image correction and the curvature of the combiner 9. In this case, a coordinate system whose origin is a predetermined position on the combiner 9 is defined in advance, and the control unit 4 obtains the positions of the second touch reaction areas expressed in that coordinate system. The coordinate system used to define the positions of the second touch reaction areas is desirably the same as the coordinate system used to obtain, from the signal of the electrostatic sheet 9a, the position at which a touch operation was performed.
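
Expressing a point of the projected image in the coordinate system shared with the touch sensor might look like the following sketch; the origin choice and the per-axis magnification factors standing in for the curvature are assumptions for illustration.

    def projected_to_combiner(x, y, origin=(0.0, 0.0), gain_x=1.2, gain_y=1.0):
        """Express a point of the projected (corrected) image in the coordinate
        system whose origin is a fixed position on the combiner 9, applying the
        per-axis magnification caused by the curvature. The same coordinate system
        is assumed when decoding positions from the electrostatic sheet 9a."""
        ox, oy = origin
        return (x * gain_x - ox, y * gain_y - oy)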

Although FIG. 3 shows the image 76 formed on the surface S2 of the combiner 9 as being magnified in the left-right direction, depending on the curvature of the combiner 9 the image 76 may instead be magnified in the up-down direction, or in both the left-right and up-down directions.

[Processing Flow]
Next, the processing flow according to this embodiment will be described with reference to FIG. 4. This processing flow is repeatedly executed by the control unit 4 in the head-up display 2.

First, in step S101, the control unit 4 generates the image to be displayed (the original image); here it is assumed that the control unit 4 generates an original image including a touch image. The original image need not be generated by the control unit 4: a device outside the head-up display 2 may generate it, and the control unit 4 may acquire it. After step S101, the process proceeds to step S102.

In step S102, the control unit 4 obtains the first touch reaction area corresponding to the touch image in the original image. If the original image includes a plurality of touch images, the control unit 4 obtains a first touch reaction area for each of them. The process then proceeds to step S103.

In step S103, the control unit 4 corrects the shape of the original image, for example by rotation correction or keystone correction, so that the distortion of the virtual image visually recognized by the driver is compensated. For example, the driver operates the input device of the head-up display 2 (a switch, button, remote controller, or the like) until the distortion of the visually recognized virtual image disappears, and the control unit 4 corrects the original image according to that operation. The process then proceeds to step S104.

In step S104, the control unit 4 acquires the curvature of the combiner 9 (the curvature of its concave shape) stored in a memory or the like in the head-up display 2. Instead of the curvature itself, the control unit 4 may acquire the degree to which an image is magnified according to the curvature of the combiner 9 (for example, the degree of magnification in the left-right and/or up-down direction) or the image size after magnification; all of these are information related to the curvature of the combiner. After step S104, the process proceeds to step S105.

In step S105, the control unit 4 obtains the second touch reaction area by transforming the first touch reaction area obtained in step S102 based on the image correction amount from step S103 and the curvature of the combiner 9 acquired in step S104. That is, the control unit 4 obtains the second touch reaction area corresponding to the first touch reaction area while taking into account that the corrected image is magnified according to the curvature when it forms an image on the combiner 9. In this case, the control unit 4 obtains the position of the second touch reaction area in the coordinate system whose origin is a predetermined position on the combiner 9. In one example, the control unit 4 obtains the second touch reaction area corresponding to the first touch reaction area from the image correction amount and the curvature of the combiner 9 using a predetermined arithmetic expression. In another example, a table associating the parameters for deriving the second touch reaction area from the first touch reaction area (that is, the correction amounts used for the touch panel correction) with the image correction amount and the curvature of the combiner 9 is prepared in advance, and the control unit 4 refers to that table to obtain the second touch reaction area. If a plurality of first touch reaction areas were obtained in step S102 (that is, if the original image includes a plurality of touch images), the control unit 4 obtains a second touch reaction area for each of them. The control unit 4 then stores the second touch reaction areas thus obtained in the RAM or the like, for example in units of image pixels. The process then proceeds to step S106.
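
The table-based variant of step S105 could look like the sketch below. The key granularity and the (scale, offset) form of the entries are assumptions made for illustration; the document only states that the table relates the touch panel correction to the image correction amount and the curvature.

    # Table prepared in advance, keyed by (image correction amount, curvature).
    TOUCHPANEL_TABLE = {
        (0.05, 0.02): (1.20, 1.00, 15.0, 0.0),   # (scale_x, scale_y, dx, dy)
        (0.10, 0.02): (1.25, 1.02, 18.0, 1.0),
    }

    def second_areas_from_table(first_areas, image_correction, curvature):
        sx, sy, dx, dy = TOUCHPANEL_TABLE[(image_correction, curvature)]
        # One second area per first area; the result would be kept in RAM,
        # e.g. rasterized to image pixels, until the next image update.
        return {name: (x * sx + dx, y * sy + dy, w * sx, h * sy)
                for name, (x, y, w, h) in first_areas.items()}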

In step S106, the control unit 4 determines whether or not the combiner 9 has been touched, according to whether or not a signal has been acquired from the electrostatic sheet 9a. If it has been touched (step S106: Yes), the process proceeds to step S107; if not (step S106: No), the process ends.

In step S107, the control unit 4 obtains the touched position on the combiner 9 based on the signal acquired from the electrostatic sheet 9a and determines whether or not that position is included in a second touch reaction area stored in the RAM or the like in step S105. In this case, the control unit 4 compares the touched position with the second touch reaction area using the coordinate system whose origin is the predetermined position on the combiner 9. If the touched position is included in a second touch reaction area (step S107: Yes), the process proceeds to step S108; if a plurality of second touch reaction areas were obtained in step S105 (that is, if the original image includes a plurality of touch images), the control unit 4 identifies which second touch reaction area was touched. If the touched position is not included in any second touch reaction area (step S107: No), the process ends.

In step S108, the control unit 4 executes the operation corresponding to the touched second touch reaction area, that is, the predetermined operation associated with the touch image corresponding to that area. In this case, the control unit 4 outputs, for example, a control signal for controlling a component in the head-up display 2 or a control signal for controlling a device outside the head-up display 2. The process then ends.
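
Putting steps S101 to S108 together, one pass of the repeatedly executed flow could be sketched as below. Every hud.* method named here is a placeholder for the corresponding step rather than an interface defined in the document.

    def run_once(hud):
        """One pass of the flow of FIG. 4 (S101 to S108)."""
        original = hud.generate_image()                              # S101
        first_areas = hud.first_reaction_areas(original)             # S102
        corrected, correction_amount = hud.correct_shape(original)   # S103
        curvature = hud.read_curvature()                             # S104
        second_areas = hud.second_reaction_areas(                    # S105
            first_areas, correction_amount, curvature)
        hud.store(second_areas)                                      # kept in RAM
        hud.project(corrected)
        touch = hud.poll_touch()                                     # S106
        if touch is None:
            return
        hit = hud.hit_test(touch, second_areas)                      # S107
        if hit is not None:
            hud.execute(hit)                                         # S108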

[Operation and Effects of this Embodiment]
As described above, according to this embodiment, providing the electrostatic sheet 9a on the combiner 9, which is positioned closer to the driver than the windshield, allows the driver to perform touch operations easily and ensures driving safety during touch operations, in contrast to, for example, the technique described in Patent Document 1.

Also, according to this embodiment, using the second touch reaction area obtained in consideration of the correction of the image shape and the curvature of the combiner 9 (that is, performing the touch panel correction) makes it possible to determine touch operations with high accuracy.

Furthermore, according to this embodiment, the touch panel correction can be completed automatically merely by, for example, the driver performing the operation for correcting the shape of the image; that is, when the touch panel correction is performed, no operation separate from the image correction operation needs to be newly imposed on the driver.

[Modifications]
Modifications suitable for the embodiment described above are described below. These modifications can be applied to the embodiment in any combination.

(Modification 1)
As described above, the second touch reaction area is obtained based on the image correction amount and the curvature of the combiner 9; that is, the optimal correction amount for the touch panel correction (hereinafter called the "touch panel correction amount" as appropriate) is determined by the image correction amount and the curvature of the combiner 9. Since the curvature of the combiner 9 is constant, the touch panel correction amount is essentially determined uniquely once the image correction amount is fixed. Therefore, in another example, a table associating image correction amounts with touch panel correction amounts can be created in advance, and the control unit 4 can obtain the touch panel correction amount by referring to that table, as in the sketch below.

Also, because the image correction amount is largely determined by the driver's eye point (that is, the image correction amount tends to be roughly constant for the same eye point), obtaining the image correction amount for each eye point makes it possible to create a table associating eye points with touch panel correction amounts. Therefore, in yet another example, the control unit 4 can obtain the touch panel correction amount by referring to such a table. In that case, both the image correction and the touch panel correction can be performed appropriately merely by entering information identifying the driver (for example, an ID) into the head-up display 2. Instead of a table associating eye points with touch panel correction amounts, a table associating drivers with touch panel correction amounts may be used.

Also, as described above, the combiner 9 is configured so that its tilt angle is adjustable, and because the image correction amount is largely determined by the tilt angle of the combiner 9 (that is, the image correction amount tends to be roughly constant for the same tilt angle), obtaining the image correction amount for each tilt angle makes it possible to create a table associating tilt angles with touch panel correction amounts. Therefore, in yet another example, the control unit 4 can obtain the touch panel correction amount by referring to such a table, so that both the image correction and the touch panel correction can be performed appropriately as the driver adjusts the tilt angle. The tilt angle of the combiner 9 can be acquired by providing an angle sensor or the like capable of detecting it.

In yet another example, the touch panel correction amount can be obtained using a table associating eye points, tilt angles, and touch panel correction amounts, as in the sketch below. This example takes into account that different tilt angles may be set even for the same eye point, and makes it possible to perform both the image correction and the touch panel correction with higher accuracy.
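
Every identifier and number in this composite-table sketch is invented for illustration; the eye point could come from an entered driver ID, and the tilt angle from an angle sensor.

    BY_EYEPOINT_AND_TILT = {
        ("driver_A", 20): (1.10, 1.00, 5.0, 0.0),
        ("driver_A", 25): (1.14, 1.01, 7.0, 1.0),
        ("driver_B", 20): (1.06, 1.00, 3.0, 0.0),
    }

    def touchpanel_correction_for(eyepoint_id, tilt_deg):
        return BY_EYEPOINT_AND_TILT[(eyepoint_id, tilt_deg)]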

(Modification 2)
In the embodiment described above, the present invention is applied to the head-up display 2, but the invention is not limited to this. The present invention can also be applied to a system composed of the head-up display 2 and a navigation device capable of communicating with it; in that case, the system composed of the head-up display 2 and the navigation device corresponds to an example of the "display device" in the present invention.

FIG. 5 is a block diagram showing the schematic configuration of the system according to Modification 2. As shown in FIG. 5, the system includes a navigation device 100 and the head-up display 2. The navigation device 100 is configured to communicate with the head-up display 2 (by either wireless or wired communication) and includes a CPU 100a and the like. For example, the navigation device 100 can be a stationary navigation device installed in a vehicle, a PND (Portable Navigation Device), or a portable terminal such as a smartphone. The CPU 100a in the navigation device 100 performs, for example, route guidance from a departure point to a destination. The head-up display 2 has the same configuration as that shown in FIG. 1.

In Modification 2, the CPU 100a in the navigation device 100 generates the image to be displayed including a touch image, obtains the first touch reaction area corresponding to the touch image in that image, and obtains the second touch reaction area, corresponding to the first touch reaction area, as formed on the combiner 9. More specifically, the CPU 100a corrects the shape of the image to be displayed (the original image) and obtains the second touch reaction area based on the image correction amount and the curvature of the combiner 9. In this case, the CPU 100a generates the corrected image and supplies it to the head-up display 2. The CPU 100a then acquires the signal from the electrostatic sheet 9a of the head-up display 2, obtains from that signal the position on the combiner 9 at which the touch operation was performed, and compares that position with the second touch reaction area to determine the driver's operation on the touch image. Thus, in Modification 2, the CPU 100a in the navigation device 100 functions as the "operation acquisition means", "determination means", and "correction control means" in the present invention.

Although the CPU 100a in the navigation device 100 performs the image correction in the example above, the control unit 4 in the head-up display 2 may perform it instead. In that case, the control unit 4 supplies information on the image correction amount used for the image correction to the navigation device 100, and the CPU 100a obtains the second touch reaction area based on that image correction amount. In this example, the control unit 4 in the head-up display 2 functions as the "correction control means" in the present invention, and the CPU 100a in the navigation device 100 functions as the "operation acquisition means" and "determination means" in the present invention.
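
The division of labor in this variant could be pictured as a pair of steps across the link between the two devices; the message format and every function name below are invented for the sketch and are not an interface defined in the document.

    def hud_side(original, correct_shape, project, send):
        """Head-up display side: the control unit 4 corrects the image shape,
        projects the corrected image, and reports the correction amount."""
        corrected, amount = correct_shape(original)
        project(corrected)
        send({"image_correction_amount": amount})

    def nav_side(message, first_areas, curvature, to_second_areas):
        """Navigation device side: CPU 100a derives the second touch reaction
        areas from the reported image correction amount."""
        return to_second_areas(first_areas,
                               message["image_correction_amount"], curvature)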

(Modification 3)
Although the examples above apply the present invention to a vehicle, the application of the invention is not limited to this. Besides vehicles, the present invention can be applied to various moving bodies such as ships, helicopters, and airplanes.

The present invention can be applied to head-up displays, navigation devices (including mobile phones such as smartphones), and the like.

2 Head-up display
3 Light source unit
4 Control unit
9 Combiner
9a Electrostatic sheet
100 Navigation device

Claims (8)

1. A display device comprising:
projection means for projecting light constituting an image;
a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image;
operation acquisition means for acquiring information on a touch operation performed on the combiner by the user; and
determination means for determining the user's operation based on the information on the touch operation.
2. The display device according to claim 1, further comprising correction control means for correcting the shape of the image and causing the projection means to project light constituting the corrected image,
wherein the determination means determines the user's operation based on at least the information on the touch operation and the image correction amount used by the correction control means.
3. The display device according to claim 2, wherein the combiner has a concave shape with a predetermined curvature, recessed toward the projection means, and
the determination means determines the user's operation based on the information on the touch operation, the image correction amount used by the correction control means, and information related to the curvature of the combiner.
4. The display device according to claim 2 or 3, wherein the correction control means determines the image correction amount so that distortion of the virtual image visually recognized by the user is corrected.
5. The display device according to any one of claims 2 to 4, wherein the combiner is configured such that its tilt angle is adjustable, and the correction control means determines the image correction amount according to the tilt angle of the combiner.
6. A display method executed by a display device having projection means for projecting light constituting an image and a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image, the method comprising:
an operation acquisition step of acquiring information on a touch operation performed on the combiner by the user; and
a determination step of determining the user's operation based on the information on the touch operation.
7. A program executed by a display device that has a computer, projection means for projecting light constituting an image, and a combiner that reflects the light projected from the projection means and causes a user to visually recognize the image as a virtual image, the program causing the computer to function as:
operation acquisition means for acquiring information on a touch operation performed on the combiner by the user; and
determination means for determining the user's operation based on the information on the touch operation.
8. A recording medium on which the program according to claim 7 is recorded.

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2014537959A JP5813243B2 (en) 2012-09-27 2012-09-27 Display device
PCT/JP2012/074943 WO2014049787A1 (en) 2012-09-27 2012-09-27 Display device, display method, program, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/074943 WO2014049787A1 (en) 2012-09-27 2012-09-27 Display device, display method, program, and recording medium

Publications (1)

Publication Number Publication Date
WO2014049787A1 true WO2014049787A1 (en) 2014-04-03

Family

ID=50387252

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/074943 Ceased WO2014049787A1 (en) 2012-09-27 2012-09-27 Display device, display method, program, and recording medium

Country Status (2)

Country Link
JP (1) JP5813243B2 (en)
WO (1) WO2014049787A1 (en)

Citations (4)

Publication number Priority date Publication date Assignee Title
JPH1021007A (en) * 1996-07-02 1998-01-23 Hitachi Ltd Touch position image projection method for front projection type touch panel and front projection type touch panel system
JP2001125740A (en) * 1999-10-29 2001-05-11 Seiko Epson Corp Pointing position detection device, image display device, presentation system, and information storage medium
JP2007310285A (en) * 2006-05-22 2007-11-29 Denso Corp Display device
JP4907744B1 (en) * 2010-09-15 2012-04-04 パイオニア株式会社 Display device

Also Published As

Publication number Publication date
JP5813243B2 (en) 2015-11-17
JPWO2014049787A1 (en) 2016-08-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12885520

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014537959

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12885520

Country of ref document: EP

Kind code of ref document: A1