
WO2018092674A1 - Display apparatus and gesture input method - Google Patents


Info

Publication number
WO2018092674A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
unit
mode
reception mode
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2017/040404
Other languages
French (fr)
Japanese (ja)
Inventor
泰 谷河
善行 小川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Konica Minolta Inc
Original Assignee
Konica Minolta Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta Inc filed Critical Konica Minolta Inc
Publication of WO2018092674A1 publication Critical patent/WO2018092674A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Definitions

  • the present invention relates to a display device capable of gesture input and a gesture input method.
  • Gesture input means operating a display device (for example, a terminal device or a game machine) by hand or body gestures.
  • In some devices, gesture input can be performed by touching the screen.
  • Patent Literature 1 discloses a display device that includes a display, a motion reception unit, and a display control unit that controls display information displayed on the display according to the motion received by the motion reception unit.
  • The motion reception unit is hereinafter referred to as a detection unit.
  • A wrong gesture input means that a gesture not intended by the user is detected by the detection unit. For example, the detection unit may pick up a hand movement that the user makes unconsciously. In particular, in the case of a head-mounted display, the detection unit is not visible to the user, so an erroneous gesture input is likely to occur.
  • Suppose that gesture input is possible only during the reception mode period for accepting gesture input, and not possible during other periods. If an incorrect gesture is input during one reception mode, the intended gesture cannot be input until the next reception mode arrives. In other words, when a wrong gesture is input, the waiting time before the gesture can be re-input is stressful for the user.
  • An object of the present invention is to provide a display device and a gesture input method capable of accepting gesture input without waiting for the next reception mode period, even if an incorrect gesture is input during the reception mode period for receiving gesture input.
  • a display device reflecting one aspect of the present invention includes a display unit, a detection unit, a mode control unit, and a processing unit.
  • the detection unit has a detection region at a position different from that of the display unit, and can detect and distinguish two or more predetermined gestures.
  • the mode control unit executes a reception mode for receiving a gesture input for a predetermined period.
  • After the reception mode ends, the processing unit performs predetermined processing using the input information pre-assigned to the last detected gesture among the one or more gestures detected by the detection unit during the reception mode.
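This "last gesture wins" behavior of the reception mode can be sketched as follows. This is a minimal illustration of the idea described above, not the patent's implementation; the class and method names are assumptions:

```python
import time

class ReceptionMode:
    """Sketch of the reception-mode behavior: gestures are collected only
    while the mode is open, and when the mode ends only the LAST detected
    gesture is acted upon.  A wrong gesture can therefore be corrected by
    simply gesturing again within the same reception period."""

    def __init__(self, duration_s):
        self.duration_s = duration_s
        self.detected = []        # gestures seen during the current mode
        self.started_at = None

    def start(self):
        """Open the reception mode for a predetermined period."""
        self.started_at = time.monotonic()
        self.detected = []

    def is_open(self):
        return (self.started_at is not None and
                time.monotonic() - self.started_at < self.duration_s)

    def on_gesture(self, gesture):
        """Record a gesture reported by the detection unit."""
        if self.is_open():
            self.detected.append(gesture)

    def finish(self):
        """Called when the mode ends: return the last gesture, or None."""
        return self.detected[-1] if self.detected else None
```

Because only the final gesture is processed, an erroneous input made early in the period is silently superseded by the corrected one.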
  • FIG. 10 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 1 is made.
  • FIG. 9 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 2 is made. A flowchart explains the operation at the time of gesture input in the first mode of the HMD according to this embodiment.
  • FIG. 10 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD provided in the first mode when a gesture input is made to switch the screen to the next screen. An explanatory drawing shows an example of a screen containing an image indicating whether the reception mode is active and an image indicating the reception time. A flowchart explains the operation at the time of gesture input in the second mode of the HMD according to this embodiment.
  • FIG. 12 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD provided in the second mode when a gesture input is made to switch the screen to the next screen. An explanatory drawing shows an example of a screen containing an image indicating whether the reception mode is active and an image indicating the time regarding a non-detection state.
  • FIG. 10 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD provided in the third mode when a gesture input is made to switch the screen to the next screen. An explanatory drawing shows an example of a screen containing an image indicating whether the reception mode is active, an image indicating the reception time, and an image indicating the time regarding a non-detection state.
  • the display device is, for example, a wearable terminal (head mounted display (HMD), wristwatch type terminal, etc.), or a smart terminal (smart phone, tablet terminal, etc.).
  • In the present embodiment, a head mounted display (HMD) is described as an example of the display device.
  • FIG. 1 is a perspective view showing a structural configuration of the HMD 100 according to the present embodiment.
  • FIG. 2 is a front view showing a structural configuration of the HMD 100 according to the present embodiment.
  • FIG. 3 is a schematic cross-sectional view showing the configuration of the display unit 104 provided in the HMD 100 according to the present embodiment.
  • FIG. 4 is a diagram illustrating a configuration of the proximity sensor 105 provided in the HMD 100 according to the present embodiment.
  • FIG. 5 is a block diagram showing an electrical configuration of the HMD 100 according to the present embodiment.
  • the right side and the left side of the HMD 100 refer to the right side and the left side for the user wearing the HMD 100.
  • The HMD 100 according to the present embodiment includes a frame 101.
  • the frame 101 includes a front part 101a to which two spectacle lenses 102 are attached, and side parts 101b and 101c extending rearward from both ends of the front part 101a.
  • the two spectacle lenses 102 attached to the frame 101 may or may not have refractive power (optical power, reciprocal of focal length).
  • The cylindrical main body 103 is fixed to the front part 101a of the frame 101 above the right eyeglass lens 102 (it may be on the left side, depending on the user's dominant eye, etc.).
  • the main body 103 is provided with a display unit 104.
  • Inside the main body 103, a display control unit 104DR (FIG. 5) is arranged that controls the display of the display unit 104 based on instructions from a control processing unit 121 (FIG. 5) described later.
  • a display unit may be disposed in front of both eyes as necessary.
  • the display unit 104 includes an image forming unit 104A and an image display unit 104B.
  • the image forming unit 104A is incorporated in the main body unit 103, and includes a light source 104a, a one-way diffusing plate 104b, a condenser lens 104c, and a display element 104d.
  • The image display unit 104B, which is a so-called see-through display member, is generally plate-shaped and is disposed so as to extend downward from the main body unit 103, parallel to one eyeglass lens 102 (FIG. 1).
  • The image display unit 104B includes an eyepiece prism 104f, a deflecting prism 104g, and a hologram optical element 104h.
  • the light source 104a has a function of illuminating the display element 104d.
  • The light source 104a is composed of RGB integrated light emitting diodes (LEDs) that emit light in three wavelength bands, with peak wavelengths of light intensity and half widths of 462 ± 12 nm (blue light (B light)), 525 ± 17 nm (green light (G light)), and 635 ± 11 nm (red light (R light)).
  • The display element 104d displays an image by modulating the light emitted from the light source 104a in accordance with image data, and is configured as a transmissive liquid crystal display element having pixels serving as light-transmitting regions arranged in a matrix. Note that the display element 104d may be of a reflective type.
  • The eyepiece prism 104f totally reflects the image light from the display element 104d, which enters through the base end face PL1, between the opposed parallel inner side face PL2 and outer side face PL3, and guides it to the user's pupil via the hologram optical element 104h; at the same time, it transmits external light and guides it to the user's pupil.
  • the eyepiece prism 104f is formed of, for example, an acrylic resin together with the deflecting prism 104g.
  • the eyepiece prism 104f and the deflection prism 104g are joined by an adhesive with the hologram optical element 104h sandwiched between inclined surfaces PL4 and PL5 inclined with respect to the inner surface PL2 and the outer surface PL3.
  • the deflection prism 104g is joined to the eyepiece prism 104f, and becomes a substantially parallel flat plate integrated with the eyepiece prism 104f.
  • The hologram optical element 104h is a volume-phase reflection hologram that diffracts and reflects the image light (light with wavelengths corresponding to the three primary colors) emitted from the display element 104d, guiding it to the pupil B so that the image displayed on the display element 104d is enlarged and guided to the user's pupil as a virtual image.
  • The hologram optical element 104h diffracts (reflects) light in, for example, three wavelength ranges of 465 ± 5 nm (B light), 521 ± 5 nm (G light), and 634 ± 5 nm (R light), where each range is defined by the peak wavelength of diffraction efficiency and the wavelength width at half the diffraction efficiency.
  • The peak wavelength of diffraction efficiency is the wavelength at which the diffraction efficiency reaches its peak, and the wavelength width at half maximum is the wavelength width over which the diffraction efficiency is at half of that peak.
  • In the display unit 104 having such a configuration, light emitted from the light source 104a is diffused by the unidirectional diffusion plate 104b, condensed by the condenser lens 104c, and incident on the display element 104d.
  • the light incident on the display element 104d is modulated for each pixel based on the image data input from the display control unit 104DR, and is emitted as image light. Thereby, a color image is displayed on the display element 104d.
  • the image light from the display element 104d enters the eyepiece prism 104f from its base end face PL1, is totally reflected a plurality of times by the inner side face PL2 and the outer side face PL3, and enters the hologram optical element 104h.
  • the light incident on the hologram optical element 104h is reflected there, passes through the inner side surface PL2, and reaches the pupil B.
  • the user can observe an enlarged virtual image of the image displayed on the display element 104d, and can visually recognize it as a screen formed on the image display unit 104B.
  • the eyepiece prism 104f, the deflecting prism 104g, and the hologram optical element 104h transmit almost all of the external light, the user can observe the external image (real image) through these. Therefore, the virtual image of the image displayed on the display element 104d is observed so as to overlap with a part of the external image. In this way, the user of the HMD 100 can simultaneously observe the image provided from the display element 104d and the external image via the hologram optical element 104h.
  • When no image is displayed, the image display unit 104B is transparent, and the user can observe only the external image.
  • the display unit is configured by combining a light source, a liquid crystal display element, and an optical system.
  • A self-luminous display element (for example, an organic EL display element) may be used instead.
  • a transmissive organic EL display panel having transparency in a non-light emitting state may be used.
  • A proximity sensor 105 disposed near the center of the frame 101 and a lens 106a of the camera 106 disposed near the side part 101b are provided so as to face forward.
  • The "proximity sensor" outputs a signal by detecting whether an object, for example a part of a human body (hand, finger, etc.), exists within a proximity range (detection region) in front of the detection surface of the proximity sensor, in order to detect that the object is in front of the user's eyes.
  • the proximity range may be set as appropriate according to the characteristics and preferences of the user. For example, the proximity range from the detection surface of the proximity sensor can be within a range of 200 mm.
  • Because the user can move his or her palm or finger into and out of the field of view with the arm bent, the user can easily perform gesture operations using a hand, finger, or pointing tool (for example, a rod-shaped member), and the possibility of erroneously detecting a human body other than the user, or furniture, is reduced.
  • a passive proximity sensor has a detection device that detects invisible light or electromagnetic waves emitted from an object when the object approaches.
  • a passive proximity sensor there are a pyroelectric sensor that detects invisible light such as infrared rays emitted from an approaching human body, and a capacitance sensor that detects a change in electrostatic capacitance between the approaching human body and the like.
  • the active proximity sensor includes a projection device that projects invisible light (or sound waves), and a detection device that receives invisible light (or sound waves) reflected back from an object.
  • an infrared sensor that projects infrared rays and receives infrared rays reflected by an object
  • a laser sensor that projects laser beams and receives laser light reflected by an object
  • There is also an ultrasonic sensor that projects ultrasonic waves and receives ultrasonic waves reflected by an object.
  • A passive proximity sensor does not need to project energy toward an object, and therefore excels in low power consumption.
  • An active proximity sensor makes it easier to improve the certainty of detection; for example, it can detect hand movement even when the user wears a glove that blocks detection light emitted from the human body, such as infrared light.
  • a plurality of types of proximity sensors may be combined.
  • a pyroelectric sensor including a plurality of pyroelectric elements arranged in a two-dimensional matrix is used as the proximity sensor 105.
  • the “right side” and “left side” in FIG. 4 refer to the right side and the left side for the user wearing the HMD 100.
  • the proximity sensor 105 includes four pyroelectric elements RA, RB, RC, and RD arranged in two rows and two columns, and receives invisible light such as infrared light emitted from the human body as detection light.
  • a corresponding signal is output from each of the pyroelectric elements RA to RD.
  • the outputs of the pyroelectric elements RA to RD change in intensity according to the distance from the light receiving surface of the proximity sensor 105 to the object, and the intensity increases as the distance decreases.
  • The right sub-body portion 108-R is attached to the right side portion 101b of the frame 101, and the left sub-body portion 108-L is attached to the left side portion 101c of the frame 101.
  • the right sub body 108-R and the left sub body 108-L have an elongated plate shape.
  • The main body 103 and the right sub-body 108-R are connected by a wiring HS so that signals can be transmitted, and the right sub-body 108-R is connected to the control unit CTU via a cord CD extending from its rear end.
  • the HMD 100 includes a control unit CTU, a display unit 104, a display control unit 104DR, a proximity sensor 105, and a camera 106.
  • the control unit CTU includes a control processing unit 121, an operation unit 122, a storage unit 125, a battery 126, and a power supply circuit 127.
  • the display control unit 104DR is a circuit that is connected to the control processing unit 121 and controls the image forming unit 104A of the display unit 104 according to the control of the control processing unit 121 to form an image on the image forming unit 104A.
  • the image forming unit 104A is as described above.
  • the camera 106 is an apparatus that is connected to the control processing unit 121 and generates an image of a subject under the control of the control processing unit 121.
  • The camera 106 includes, for example, an imaging optical system that forms an optical image of a subject on a predetermined imaging surface, an image sensor whose light receiving surface coincides with that imaging surface and which converts the optical image into an electrical signal, and a digital signal processor (DSP) that performs known image processing on the output of the image sensor to generate an image (image data).
  • the imaging optical system includes one or more lenses, and includes the lens 106a as one of them.
  • the camera 106 outputs the generated image data to the control processing unit 121.
  • the proximity sensor 105 is connected to the control processing unit 121.
  • The proximity sensor 105 is as described above, and supplies its output to the control processing unit 121.
  • The operation unit 122 is connected to the control processing unit 121 and is a device for inputting predetermined instructions, such as power on/off, to the HMD 100; it is, for example, one or more switches to which predetermined functions are assigned.
  • the battery 126 is a battery that accumulates electric power and supplies the electric power.
  • the battery 126 may be a primary battery or a secondary battery.
  • the power supply circuit 127 is a circuit that supplies power supplied from the battery 126 to each part of the HMD 100 that requires power at a voltage corresponding to each part.
  • the storage unit 125 is a circuit that is connected to the control processing unit 121 and stores various predetermined programs and various predetermined data under the control of the control processing unit 121.
  • Examples of the various predetermined programs include control processing programs such as a control program for controlling each unit of the HMD 100 according to its function, and a gesture processing program for determining a gesture based on the output of the proximity sensor 105.
  • the storage unit 125 includes, for example, a ROM (Read Only Memory) that is a nonvolatile storage element and an EEPROM (Electrically Erasable Programmable Read Only Memory) that is a rewritable nonvolatile storage element.
  • the storage unit 125 includes a RAM (Random Access Memory) that serves as a working memory of the control processing unit 121 that stores data generated during the execution of the predetermined program.
  • The control processing unit 121 controls each unit of the HMD 100 according to its function, determines a predetermined, preset gesture based on the output of the proximity sensor 105, and executes processing according to the determination result.
  • the control processing unit 121 includes, for example, a CPU (Central Processing Unit) and its peripheral circuits. In the control processing unit 121, a control processing program is executed, so that a control unit 1211, a gesture processing unit 1212, and a processing unit 1213 are functionally configured. Note that part or all of the functions of the control unit 1211 may be realized by processing by a DSP (Digital Signal Processor) instead of or by processing by the CPU. In addition, some or all of the functions of the control unit 1211 may be realized by processing using a dedicated hardware circuit instead of or together with processing by software. What has been described above also applies to the gesture processing unit 1212 and the processing unit 1213.
  • the control unit 1211 controls each unit of the HMD 100 according to the function of each unit.
  • the control unit 1211 has functions of a mode control unit 1214 and a storage control unit 1215. These functions will be described later.
  • the gesture processing unit 1212 determines a predetermined gesture set in advance based on the outputs of the plurality of pyroelectric elements in the proximity sensor 105, in this embodiment, the four pyroelectric elements RA to RD.
  • the gesture processing unit 1212 notifies the processing unit 1213 of the determination result.
  • the gesture processing unit 1212 and the proximity sensor 105 constitute a detection unit 128.
  • the detection unit 128 has a detection area SA (FIGS. 7A, 7B, and 8) at a position different from that of the image display unit 104B (an example of the display unit), and distinguishes and detects two or more predetermined gestures.
  • the processing unit 1213 performs a predetermined process (for example, sends a command to switch the screen to the next screen to the display control unit 104DR) using the determination result of the gesture processing unit 1212. Details of the processing unit 1213 will be described later.
  • FIG. 6 is a front view when the HMD 100 according to the present embodiment is mounted.
  • FIG. 7A is a side view when the HMD 100 according to the present embodiment is mounted.
  • FIG. 7B is a partial top view of the HMD 100 according to the present embodiment when worn. FIGS. 7A and 7B also show the hand HD of the user US.
  • FIG. 8 is a diagram illustrating an example of an image visually recognized by the user through the see-through type image display unit 104B.
  • FIG. 9 is a diagram illustrating an example of the output of the proximity sensor 105 provided in the HMD 100 according to the present embodiment. FIG. 9A shows the output of the pyroelectric element RA, FIG. 9B that of the pyroelectric element RB, FIG. 9C that of the pyroelectric element RC, and FIG. 9D that of the pyroelectric element RD.
  • the horizontal axis of each figure in FIG. 9 is time, and the vertical axis thereof is the output level (intensity).
  • A gesture input is an operation in which at least the hand HD or a finger of the user US enters or leaves the detection area SA of the proximity sensor 105, and it can be detected by the gesture processing unit 1212 of the control processing unit 121 of the HMD 100 via the proximity sensor 105.
  • screen 104i of image display unit 104B is arranged so as to overlap with effective visual field EV of the user's eye facing image display unit 104B (here, positioned within effective visual field EV).
  • the detection area SA of the proximity sensor 105 is in the visual field of the user's eye facing the image display unit 104B.
  • The detection area SA is preferably within the stable fixation field of the user's eye or within the visual field inside it (within about 90° horizontally and about 70° vertically). More preferably, the detection area SA is positioned so as to overlap the effective field EV, or the visual field inside it (within about 30° horizontally and about 20° vertically), located inside the stable fixation field.
  • FIG. 8 shows an example in which the detection area SA overlaps the screen 104i. The detection area SA of the proximity sensor 105 is set so as to be located in the visual field of the user US while the user US wears the frame 101, a head-mounting member, on the head. Accordingly, the user can reliably see the hand approaching and withdrawing from the detection area SA, observing the hand HD through the screen 104i without moving the eyes. In particular, by setting the detection area SA of the proximity sensor 105 within the stable fixation field or the visual field inside it, gesture input can be performed reliably while the user recognizes the detection area SA even when observing the screen.
  • If the detection area SA overlaps the screen 104i, gesture input can be performed even more reliably.
  • When the proximity sensor 105 has a plurality of pyroelectric elements RA to RD as in the present embodiment, the entire light-receiving area of the plurality of pyroelectric elements RA to RD is regarded as one light-receiving unit, and its maximum detection range is taken as the detection area SA.
  • While no object is in the detection area SA, the gesture processing unit 1212 of the control processing unit 121 determines that no gesture is being performed. When an output signal is received from the proximity sensor 105, the gesture processing unit 1212 determines, based on that signal, that a gesture has been performed.
  • the gesture is performed using the hand HD of the user US.
  • The gesture may instead be performed using a finger or another body part, or using a pointing tool made of a material that emits invisible light.
  • the proximity sensor 105 has four pyroelectric elements RA to RD arranged in two rows and two columns (see FIG. 4). Therefore, when the user US brings the hand HD close to the front of the HMD 100 from either the left, right, up, or down directions, the output timings of signals detected by the pyroelectric elements RA to RD are different.
  • When the hand HD approaches, the invisible light emitted from the hand HD enters the proximity sensor 105.
  • When the hand HD enters from the right side, the pyroelectric elements RA and RC receive the invisible light first. Therefore, referring to FIGS. 4 and 9, the signals of the pyroelectric elements RA and RC rise first, and the signals of the pyroelectric elements RB and RD rise after a delay. Thereafter, the signals of the pyroelectric elements RA and RC fall, and the signals of the pyroelectric elements RB and RD fall after a delay.
  • The gesture processing unit 1212 detects these signal timings and determines that the user US has made a gesture of moving the hand HD from right to left.
  • FIG. 10 is an explanatory diagram illustrating the relationship between gestures and input information in the present embodiment.
  • FIG. 10 includes arrows indicating the hand movement of each of the four gestures, together with the input information pre-assigned to each of the four gestures.
  • the gesture processing unit 1212 shown in FIG. 5 determines gestures 1 to 4 using the timing when the output levels of the pyroelectric elements RA to RD exceed the threshold value th.
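Finding the moment at which an element's output level first exceeds the threshold th can be sketched as follows. This is an illustrative helper over a sampled waveform; the function name and representation are assumptions, not taken from the patent:

```python
def first_crossing(samples, th):
    """Return the index of the first sample whose output level exceeds
    the threshold th, or None if the output never exceeds it.

    `samples` is a sequence of output levels from one pyroelectric
    element (RA, RB, RC, or RD), ordered in time."""
    for i, level in enumerate(samples):
        if level > th:
            return i
    return None
```

Comparing these crossing times across the four elements RA to RD gives the ordering the gesture processing unit 1212 uses to distinguish gestures 1 to 4.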
  • FIG. 11 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 1 is performed.
  • the horizontal axis indicates time
  • the vertical axis indicates the output level.
  • the threshold values th are all the same value.
  • the gesture 1 is a gesture in which the hand HD enters the detection area SA from the left side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the right side of the detection area SA.
  • First, the output levels of the pyroelectric elements RB and RD exceed the threshold th; later, the output levels of the pyroelectric elements RA and RC exceed the threshold th. After that, the output levels of the pyroelectric elements RB and RD fall to the threshold th or below, and later still, the output levels of the pyroelectric elements RA and RC fall to the threshold th or below.
  • In this case, the gesture processing unit 1212 (FIG. 5) determines that the gesture 1 has been made.
  • the input information previously assigned to the gesture 1 is a “command to switch to the next screen”.
  • the user can input a “command to switch to the next screen” to the HMD 100 by performing the gesture 1.
  • FIG. 12 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 2 is performed.
  • the horizontal axis and vertical axis of this waveform diagram are the same as the horizontal axis and vertical axis of the waveform diagram of FIG.
  • the gesture 2 is a gesture in which the hand HD enters the detection area SA from the right side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the left side of the detection area SA.
  • First, the output levels of the pyroelectric elements RA and RC exceed the threshold th; later, the output levels of the pyroelectric elements RB and RD exceed the threshold th. After that, the output levels of the pyroelectric elements RA and RC fall to the threshold th or below, and later still, the output levels of the pyroelectric elements RB and RD fall to the threshold th or below.
  • In this case, the gesture processing unit 1212 (FIG. 5) determines that the gesture 2 has been made.
  • the input information previously assigned to the gesture 2 is a “command to switch to the previous screen”.
  • the user can input a “command to switch to the previous screen” to the HMD 100 by performing the gesture 2.
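The two threshold-order patterns just described can be sketched as a small classifier. This is a minimal illustration under the assumption that each element's first threshold-crossing time is already known (e.g. from the waveforms in FIGS. 11 and 12); the function name and the dictionary representation are assumptions:

```python
def classify_horizontal(t_rise):
    """Distinguish gesture 1 from gesture 2 by the order in which the
    pyroelectric elements' outputs first exceed the threshold th.

    `t_rise` maps each element name ('RA'..'RD') to the time its output
    first exceeded th, or None if it never did.

    Gesture 1 (enters from the left, exits to the right): RB/RD cross
    th before RA/RC.  Gesture 2 (enters from the right, exits to the
    left): RA/RC cross th before RB/RD."""
    if None in t_rise.values():
        return None  # not all elements fired: no full horizontal sweep
    bd = (t_rise['RB'] + t_rise['RD']) / 2
    ac = (t_rise['RA'] + t_rise['RC']) / 2
    if bd < ac:
        return 'gesture1'   # left -> right
    if ac < bd:
        return 'gesture2'   # right -> left
    return None             # simultaneous: ambiguous
```

Gestures 3 and 4 would be classified the same way using the row pairs of the 2 x 2 element matrix instead of the column pairs.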
  • The gesture 3 is a gesture in which the hand HD enters the detection area SA from above the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from below.
  • the input information previously assigned to the gesture 3 is a “command to switch to the last screen”.
  • the user can input a “command to switch to the last screen” to the HMD 100 by performing the gesture 3.
  • the gesture 4 is a gesture in which the hand HD enters the detection area SA from the lower side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the upper side of the detection area SA.
  • the input information previously assigned to the gesture 4 is a “command to switch to the first screen”.
  • the user can input a “command to switch to the first screen” to the HMD 100 by performing the gesture 4.
  • The combination of the gestures 1 to 4 and the input information is not limited to the example shown in FIG. 10.
  • For example, other input information (e.g., a “command to switch to the previous screen”) may be assigned to the gesture 1.
  • Gestures that can be detected by the proximity sensor 105 are not limited to the gestures 1 to 4, and arbitrary input information may be assigned to each detectable gesture.
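  • The correspondence described above between the entry side, the exit side, and the assigned input information can be sketched as a small lookup. The following is a minimal illustrative sketch (the function name, side labels, and command strings are assumptions for illustration, not taken from the actual implementation), following the assignments described for the gestures 1 to 4:

```python
# Hypothetical mapping of (entry side, exit side) to (gesture number, command),
# following the assignments described for gestures 1-4 (the table of FIG. 10).
GESTURE_TABLE = {
    ("left", "right"): (1, "switch to the next screen"),
    ("right", "left"): (2, "switch to the previous screen"),
    ("top", "bottom"): (3, "switch to the last screen"),
    ("bottom", "top"): (4, "switch to the first screen"),
}

def classify_gesture(entry_side, exit_side):
    """Return (gesture number, command), or None for an unrecognized gesture
    (the case in which error processing would be performed)."""
    return GESTURE_TABLE.get((entry_side, exit_side))
```

An unrecognized combination (for example, entering and exiting from the same side) returns None, corresponding to the error processing mentioned later in the description.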
  • the detection unit 128 includes a proximity sensor 105 and a gesture processing unit 1212.
  • the detection unit 128 is not limited to this configuration.
  • the detection unit 128 may include a camera 106 (two-dimensional imaging device) and an image processing unit that performs predetermined image processing on an image captured by the camera 106 and recognizes a gesture.
  • FIG. 7B shows a state where the hand HD is the right hand and the hand HD is in the detection area SA.
  • Consider a case where the gesture 1 is to be performed while the hand HD is located on the right side of the detection area SA.
  • In order to make the gesture 1, the user US must first move the hand HD to the left side of the detection area SA, and this movement must be done outside the detection area SA.
  • If the hand HD passes through the detection area SA during this movement, the detection unit 128 detects a gesture of moving the hand HD from right to left (that is, the gesture 2). A gesture that is not intended by the user US is thus detected by the detection unit 128, and an incorrect gesture is input.
  • As another example, suppose the user US has a habit of touching his or her hair. Since the HMD 100 is worn on the head of the user US, touching the hair can also cause a wrong gesture input.
  • In the present embodiment, even in such cases, the gesture input can be immediately performed again.
  • the HMD 100 has a first aspect, a second aspect, and a third aspect.
  • The operation of these aspects will be described by taking as an example a gesture input (the gesture 1) for switching the screen displayed on the image display unit 104B to the next screen.
  • the mode control unit 1214 shown in FIG. 5 ends the reception mode when the length of time that has elapsed since the reception mode started reaches a predetermined value.
  • An example of the predetermined value is 5 seconds. That is, in the first aspect, the period of the reception mode is fixed (5 seconds).
  • the user operates the operation unit 122 (for example, a cross key provided in the operation unit 122) to set a predetermined value in the mode control unit 1214 in advance. Thereby, the user can determine the length of the period of the reception mode.
  • FIG. 13 is a flowchart for explaining the operation when a gesture is input in the first aspect.
  • FIG. 14 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD in the first aspect when a gesture input is made to switch the screen to the next screen. Referring to FIGS. 4, 5, 13, and 14, it is assumed that the user places a hand in front of the proximity sensor 105 and intends to perform the gesture 1, but erroneously performs the gesture 2 due to the above-described cause.
  • mode control unit 1214 starts a reception mode for accepting a gesture input (step S1 in FIG. 13 and time t1 in FIG. 14).
  • When the reception mode starts, the display control unit 104DR includes, on the screen displayed on the image display unit 104B, an image indicating whether or not the reception mode is set and an image indicating the reception time.
  • FIG. 15 is an explanatory diagram for explaining an example of the screen 10-1 including these images.
  • The image 11 has a circular shape, and its color indicates whether or not the reception mode is set: during the reception mode, the color of the image 11 is blue, and when the reception mode ends, the color of the image 11 is red. Here, since it is the reception mode, the color of the image 11 is blue.
  • The image 12 shows the reception time, which is information indicating the remaining time in the reception mode. More specifically, the mode control unit 1214 has the function of a countdown timer. When the reception mode starts, the mode control unit 1214 sets the countdown timer to 5 seconds and starts the timer, and the display control unit 104DR changes the image 12 to an image indicating 5 seconds. When the time indicated by the countdown timer reaches 4 seconds, 3 seconds, 2 seconds, 1 second, and 0 seconds, the display control unit 104DR changes the image 12 to an image indicating 4 seconds, 3 seconds, 2 seconds, 1 second, and 0 seconds, respectively. When the time indicated by the countdown timer reaches 0 seconds, the mode control unit 1214 ends the reception mode (step S8).
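  • The countdown behavior described above can be sketched as follows. This is an illustrative sketch only: the actual timer runs in real time, whereas this sketch merely enumerates the remaining-time values the image 12 would successively show.

```python
def countdown_updates(duration_s=5):
    """Yield the remaining-time values (in seconds) that the reception-mode
    countdown would display, from the initial value down to 0."""
    for remaining in range(duration_s, -1, -1):
        yield remaining

# The image 12 would successively indicate 5, 4, 3, 2, 1, and 0 seconds;
# the reception mode ends when 0 is reached.
values = list(countdown_updates(5))
```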
  • Next, the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand HD has entered the detection area SA (FIGS. 7A, 7B, and 8).
  • Here, since the output levels of the pyroelectric elements RA and RC exceed the threshold th earlier than the output levels of the pyroelectric elements RB and RD, it is determined that the hand HD entered from the right side.
  • the storage control unit 1215 stores information indicating “right side” in the storage unit 125 (step S2 in FIG. 13).
  • the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand HD has exited the detection area SA.
  • Here, since the output levels of the pyroelectric elements RB and RD become equal to or lower than the threshold th after the output levels of the pyroelectric elements RA and RC, it is determined that the hand HD exited from the left side.
  • the storage control unit 1215 stores information indicating “left side” in the storage unit 125 (step S3 in FIG. 13).
  • the gesture processing unit 1212 determines the type of gesture using the results of step S2 and step S3 (step S4 in FIG. 13).
  • the types of gestures are the four gestures described in FIG.
  • the storage unit 125 stores in advance a table (hereinafter referred to as the table of FIG. 10) indicating the correspondence between the gestures 1 to 4 and the input information assigned to them.
  • the gesture processing unit 1212 reads out the determination results of step S2 and step S3 stored in the storage unit 125.
  • Here, the determination result of step S2 is “right side”, and the determination result of step S3 is “left side”. Accordingly, since the hand HD entered the detection area SA from the right side of the detection area SA and exited the detection area SA from the left side of the detection area SA, the gesture processing unit 1212 determines that the gesture 2 has been made. Note that the gesture processing unit 1212 performs error processing when it determines that none of the gestures 1 to 4 corresponds. In that case, the display control unit 104DR causes the image display unit 104B to display a screen prompting the user to make a correct gesture.
  • the storage control unit 1215 refers to the table in FIG. 10 and stores the input information assigned to the gesture determined in step S4 in the storage unit 125 (step S5 in FIG. 13).
  • the input information assigned to the gesture 2 (“command to switch to the previous screen”) is stored in the storage unit 125.
  • At this time, the display control unit 104DR includes a character image (not shown) indicating “a command to switch to the previous screen has been input” on the screen 10-1 shown in FIG. 15. By looking at this character image, the user recognizes that he or she made a wrong gesture.
  • Next, the mode control unit 1214 determines whether or not the length of time that has elapsed since the start of the reception mode has reached 5 seconds (the predetermined value) (step S6 in FIG. 13). When the elapsed time has not reached 5 seconds (No in step S6), the mode control unit 1214 determines whether or not the output levels of the pyroelectric elements RA to RD all exceed the threshold th (step S7 in FIG. 13). That is, the mode control unit 1214 waits until the next gesture is made. When the mode control unit 1214 determines that the output levels of the pyroelectric elements RA to RD do not all exceed the threshold value th (No in step S7), the mode control unit 1214 performs the process of step S6 again.
  • Since the user has recognized that he or she made a wrong gesture, the gesture is redone. That is, the user makes the intended gesture (the gesture 1).
  • When the mode control unit 1214 determines that the output levels of the pyroelectric elements RA to RD all exceed the threshold th (Yes in step S7), the gesture processing unit 1212 performs the process of step S2.
  • the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand HD has entered the detection area SA. Here, since the output level of the pyroelectric elements RB and RD exceeds the threshold th earlier than the output level of the pyroelectric elements RA and RC, it is determined as the left side.
  • the storage control unit 1215 stores information indicating “left side” in the storage unit 125 (step S2).
  • the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand HD has exited the detection area SA.
  • Here, since the output levels of the pyroelectric elements RA and RC become equal to or lower than the threshold th after the output levels of the pyroelectric elements RB and RD, it is determined that the hand HD exited from the right side.
  • the storage control unit 1215 stores information indicating “right side” in the storage unit 125 (step S3).
  • The gesture processing unit 1212 determines the type of gesture using the results of step S2 and step S3 (step S4). More specifically, the gesture processing unit 1212 reads the determination results of step S2 and step S3 stored in the storage unit 125. Here, the determination result of step S2 is “left side”, and the determination result of step S3 is “right side”. Accordingly, since the hand HD entered the detection area SA from the left side of the detection area SA and exited the detection area SA from the right side of the detection area SA, the gesture processing unit 1212 determines that the gesture 1 has been made.
  • the storage control unit 1215 refers to the table in FIG. 10 and stores the input information assigned to the gesture determined in step S4 in the storage unit 125 (step S5).
  • the input information (“command to switch to the next screen”) assigned to gesture 1 is stored in storage unit 125.
  • At this time, the display control unit 104DR includes a character image (not shown) indicating “a command to switch to the next screen has been input” on the screen 10-1 shown in FIG. 15. The user sees this character image and knows that he or she has made the intended gesture. The user then waits for the end of the reception mode without making a gesture during the remainder of the reception mode period.
  • The mode control unit 1214 ends the reception mode when the length of time that has elapsed since the start of the reception mode reaches 5 seconds (the predetermined value) (Yes in step S6; step S8 in FIG. 13, time t2 in FIG. 14).
  • the display control unit 104DR sets the color of the image 11 included in the screen 10-1 (FIG. 15) displayed on the image display unit 104B to red, and sets the image 12 to an image indicating 0 second.
  • After the reception mode ends, the processing unit 1213 performs a predetermined process using the input information stored in the storage unit 125 (here, the input information of the gesture 2 and the input information of the gesture 1) (step S9 in FIG. 13).
  • More specifically, the predetermined process is performed using the input information previously assigned to the last detected gesture. Since the last detected gesture is the gesture 1, the processing unit 1213 sends, as the predetermined process, a command to switch to the next screen to the display control unit 104DR. Thereby, the display control unit 104DR switches the screen 10-1 displayed on the image display unit 104B to the next screen.
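  • The rule applied here, that only the input information of the last detected gesture is processed, can be sketched as follows (the function name is hypothetical, and the stored strings stand in for the input information):

```python
def process_reception_mode(stored_inputs):
    """stored_inputs: input information stored in order of detection during
    one reception-mode period. Only the last entry is processed; earlier
    (possibly erroneous) entries are ignored. Returns None if no gesture
    was detected."""
    return stored_inputs[-1] if stored_inputs else None

# Wrong gesture 2 followed by the corrected gesture 1:
command = process_reception_mode(
    ["command to switch to the previous screen",   # gesture 2 (unintended)
     "command to switch to the next screen"]       # gesture 1 (intended)
)
# Only the command of the last gesture is executed after the mode ends.
```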
  • Next, a case where the intended gesture is input the first time will be described. When the output levels of the pyroelectric elements RA to RD all exceed the threshold value th, the mode control unit 1214 starts the reception mode (step S1, time t3).
  • Here, the first gesture is the gesture intended by the user (for example, the gesture 1). Therefore, no further gesture is performed during this reception mode period.
  • the processing unit 1213 performs a predetermined process (step S9). Since the last detected gesture is gesture 1, as a predetermined process, the processing unit 1213 sends a command to switch to the next screen to the display control unit 104DR. Thereby, the display control unit 104DR switches the screen displayed on the image display unit 104B to the next screen.
  • As described above, in each of the reception mode period defined from time t1 to time t2 and the reception mode period defined from time t3 to time t4, the processing unit 1213 sends the input information previously assigned to the gesture 1 (the command to switch the screen to the next screen) to the display control unit 104DR.
  • In the first aspect, the processing unit 1213 performs the predetermined process using the input information previously assigned to the last of the one or more gestures detected by the detection unit 128 during the period of the reception mode. Since only the last gesture needs to be the gesture the user intends, the user can redo the gesture input as many times as necessary within the period of the reception mode. Therefore, according to the first aspect, even if an incorrect gesture input is made during the period of the reception mode for accepting the gesture input, the gesture input can be redone without waiting for the next reception mode period. This effect also occurs in the second and third aspects.
  • display control unit 104DR causes image display unit 104B to display image 12 indicating the reception time during the reception mode.
  • the image 12 is information indicating the remaining time in the reception mode. Therefore, according to the first aspect, the user can recognize when the period of the reception mode ends. This effect also occurs in the second and third aspects.
  • In the first aspect, the mode control unit 1214 starts the reception mode when the output levels of the pyroelectric elements RA to RD all exceed the threshold value th (time t1, time t3) during a period other than the reception mode. This has the following effect.
  • Suppose instead that the reception mode were started when the output levels of only some of the pyroelectric elements RA to RD exceed a threshold value.
  • If the gesture were stopped before the output levels of all the pyroelectric elements RA to RD exceed the threshold value, the gesture would not be determined, but the reception mode would continue.
  • For this reason, when the gesture is performed again, the reception mode might end before the gesture ends.
  • In the first aspect, since the reception mode is started when the output levels of the pyroelectric elements RA to RD all exceed the threshold value th during a period other than the reception mode, such an inconvenience does not occur. This effect also occurs in the third aspect.
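  • The start condition described above can be sketched as a simple predicate (the element names and output values below are illustrative assumptions):

```python
def should_start_reception(outputs, th):
    """outputs: mapping of pyroelectric element name to output level.
    Outside the reception mode, the mode starts only when the outputs of
    all elements exceed the threshold th at the same time."""
    return all(level > th for level in outputs.values())

th = 0.5
# All four elements above th: the reception mode starts.
assert should_start_reception({"RA": 0.8, "RB": 0.7, "RC": 0.9, "RD": 0.6}, th)
# Only some elements above th (e.g. a partial or stopped gesture): it does not.
assert not should_start_reception({"RA": 0.8, "RB": 0.2, "RC": 0.9, "RD": 0.6}, th)
```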
  • In step S5 of FIG. 13, every time a gesture is detected by the detection unit 128, the display control unit 104DR causes the image display unit 104B to display a screen including a character image indicating the input information previously assigned to the detected gesture (for example, a character image indicating “a command to switch to the previous screen has been input”). Thereby, the user can determine whether the gesture he or she made is wrong.
  • the display control unit 104DR and the image display unit 104B function as a notification unit. Every time a gesture is detected by the detection unit 128, the notification unit notifies input information pre-assigned to the detected gesture.
  • Aspect 1 includes a display unit (image display unit 104B) and a display control unit 104DR that causes the display unit (image display unit 104B) to display a screen showing input information pre-assigned to the detected gesture.
  • Aspect 2 includes an audio processing unit that converts input information pre-assigned to the detected gesture into an audio signal, an amplifier that amplifies the audio signal, and a speaker that outputs the amplified audio signal as audio.
  • the audio processing unit is realized by a combination of hardware such as a CPU, RAM, and ROM, and a program that converts input information into an audio signal.
  • Aspect 3 combines Aspect 1 and Aspect 2.
  • the second aspect will be described mainly with respect to differences from the first aspect.
  • In the second aspect, when the state changes during the reception mode from the detection state in which a gesture is detected by the detection unit 128 to the non-detection state in which no gesture is detected, the mode control unit 1214 shown in FIG. 5 starts measuring the time of the non-detection state, and ends the period of the reception mode when the length of time of the non-detection state reaches a predetermined value.
  • An example of the predetermined value is 2 seconds. That is, when the period of the non-detection state continues for 2 seconds, the reception mode ends.
  • That is, in the second aspect, the period of the reception mode is not fixed.
  • the user operates the operation unit 122 to set a predetermined value in the mode control unit 1214 in advance. Thereby, the user can determine the length of time of the non-detection state which is a condition for terminating the reception mode.
  • FIG. 16 is a flowchart for explaining the operation when a gesture is input in the second aspect.
  • FIG. 17 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD in the second aspect when a gesture input is made to switch the screen to the next screen.
  • the flowchart shown in FIG. 16 differs from the flowchart shown in FIG. 13 in the following two points.
  • step S1 of FIG. 16 a screen 10-2 shown in FIG. 18 is displayed on the image display unit 104B instead of the screen 10-1 shown in FIG.
  • Step S10 in FIG. 16 is executed instead of step S6 in FIG.
  • mode control unit 1214 starts a reception mode for accepting a gesture input (step S1 in FIG. 16, time t1 in FIG. 17).
  • the display control unit 104DR includes an image indicating whether or not the reception mode is set and an image indicating the time related to the non-detection state on the screen displayed on the image display unit 104B.
  • FIG. 18 is an explanatory diagram for explaining an example of the screen 10-2 including these images.
  • the image 11 indicates whether or not the reception mode is set as described in the screen 10-1 shown in FIG. Here, since it is a reception mode, the color of the image 11 is blue.
  • The image 13 shows the time related to the non-detection state, which is information indicating the remaining time in the reception mode. More specifically, when at least one of the output levels of the pyroelectric elements RA to RD exceeds the threshold th, the detection unit 128 is in the detection state in which a gesture is detected. When the output levels of the pyroelectric elements RA to RD are all equal to or lower than the threshold th, the detection unit 128 is in the non-detection state in which no gesture is detected.
  • the mode control unit 1214 has a function of a countdown timer, and when the detection state is changed to the non-detection state (time t10, time t11, time t12 in FIG. 17), the countdown timer is set to 2 seconds.
  • the display control unit 104DR changes the image 13 to an image indicating 2 seconds.
  • When the time indicated by the countdown timer reaches 1 second and 0 seconds, the display control unit 104DR changes the image 13 to an image indicating 1 second and an image indicating 0 seconds, respectively.
  • When the time indicated by the countdown timer reaches 0 seconds, the mode control unit 1214 ends the reception mode.
  • In the second aspect, step S10 is executed after step S5. More specifically, when the hand HD moves out of the detection area SA (FIGS. 7A, 7B, and 8) at the end of one gesture, the detection state changes to the non-detection state. This change occurs during the period of step S3.
  • When the detection state changes to the non-detection state, the mode control unit 1214 sets the countdown timer to 2 seconds and starts the timer. The mode control unit 1214 then determines whether or not the time indicated by the timer has reached 0 seconds, that is, whether or not the length of time of the non-detection state has reached the predetermined value (2 seconds) (step S10).
  • the mode control unit 1214 determines that the length of time of the non-detection state has not reached a predetermined value (No in step S10), the mode control unit 1214 performs the process of step S7.
  • the mode control unit 1214 determines that the length of time of the non-detection state has reached a predetermined value (Yes in step S10, time t2, time t4 in FIG. 17), the mode control unit 1214 Terminates the acceptance mode (step S8).
  • the reception mode does not end unless the length of time of the non-detection state reaches 2 seconds (predetermined value). Therefore, according to the second aspect, it is possible to prevent the reception mode from ending while the user is making a gesture.
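  • The idle-timeout behavior of the second aspect can be sketched as follows. This is an illustrative sketch under simplifying assumptions: real sensor sampling is continuous, whereas this sketch scans discrete (time, output levels) samples, and all names are hypothetical.

```python
def reception_mode_end_time(samples, th, idle_limit=2.0):
    """samples: chronologically ordered (time, outputs) pairs, where outputs
    maps element names to output levels. Returns the time at which the
    reception mode ends, i.e. when the non-detection state (all outputs <= th)
    has lasted idle_limit seconds, or None if that never happens."""
    idle_since = None
    for t, outputs in samples:
        detecting = any(level > th for level in outputs.values())
        if detecting:
            idle_since = None              # a gesture is in progress: reset
        elif idle_since is None:
            idle_since = t                 # the non-detection state begins
        elif t - idle_since >= idle_limit:
            return t                       # idle long enough: end the mode
    return None

# Detection until t=1.0, then idle: the mode ends 2 seconds later, at t=3.0.
samples = [(0.0, {"RA": 1.0}), (1.0, {"RA": 0.0}),
           (2.0, {"RA": 0.0}), (3.0, {"RA": 0.0})]
end = reception_mode_end_time(samples, th=0.5)   # end == 3.0
```

Because a new gesture resets the idle timer, the mode cannot end while the user is still gesturing, which is exactly the property described above.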
  • the third aspect is an aspect in which the first aspect and the second aspect are combined.
  • a first value that is an upper limit value for the period of the reception mode is set. Even if the period of the reception mode has not reached the upper limit value, the mode control unit 1214 ends the reception mode when the length of time of the non-detection state reaches a predetermined second value.
  • the second value is smaller than the first value.
  • the first value takes 5 seconds as an example.
  • the second value takes 2 seconds as an example.
  • the user operates the operation unit 122 to set the first value and the second value in the mode control unit 1214 in advance. Thereby, the user can determine the first value and the second value which are the conditions for terminating the reception mode.
  • FIG. 19 is a flowchart for explaining the operation when a gesture is input in the third aspect.
  • FIG. 20 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD in the third aspect when a gesture input is made to switch the screen to the next screen.
  • the flowchart shown in FIG. 19 differs from the flowchart shown in FIG. 13 in the following two points.
  • In step S1 of FIG. 19, a screen 10-3 shown in FIG. 21 is displayed on the image display unit 104B instead of the screen 10-1 shown in FIG. 15. When step S6 is No, step S10 is executed.
  • Step S10 is the same as step S10 in FIG.
  • mode control unit 1214 starts a reception mode for accepting a gesture input (step S1 in FIG. 19 and time t1 in FIG. 20).
  • the display control unit 104DR includes, on the screen displayed on the image display unit 104B, an image indicating whether or not the reception mode is set, an image indicating the reception time, and an image indicating the time related to the non-detection state.
  • FIG. 21 is an explanatory diagram for explaining an example of the screen 10-3 including these images.
  • the image 11 indicates whether or not the reception mode is set as described in the screen 10-1 shown in FIG. Here, since it is a reception mode, the color of the image 11 is blue.
  • the image 12 shows the reception time as described in the screen 10-1 shown in FIG. This is information indicating the remaining time in the reception mode.
  • the image 12 is an image showing 5 seconds.
  • the image 13 shows the time relating to the non-detection state as described in the screen 10-2 shown in FIG. This is information indicating the remaining time in the reception mode.
  • the display control unit 104DR changes the image 13 to an image indicating 2 seconds.
  • the time portion indicated by the image 13 is left blank.
  • mode control unit 1214 determines whether or not the length of time that has elapsed since the start of the reception mode has reached a predetermined value (first value) (FIG. 19). Step S6). When the length of the elapsed time has not reached the predetermined value (first value) (No in step S6), the mode control unit 1214 determines that the length of time in the non-detection state is predetermined. It is determined whether or not a predetermined value (second value) has been reached (step S10).
  • the mode control unit 1214 When the length of time of the non-detection state does not reach the predetermined value (second value) (No in step S10), the mode control unit 1214 outputs the levels of the pyroelectric elements RA to RD. It is determined whether or not all have exceeded the threshold value th (step S7 in FIG. 19).
  • When the length of the elapsed time has reached the first value (Yes in step S6) or the length of time of the non-detection state has reached the second value (Yes in step S10), the mode control unit 1214 ends the reception mode (step S8).
  • the first value is set to be relatively large (for example, 5 seconds) in order to allow the user a margin for gesture input.
  • The processing unit 1213 performs the predetermined process after the end of the reception mode (step S9 in FIG. 19). If the first gesture input is correct and the reception mode nevertheless ends only when 5 seconds have elapsed since the reception mode started, a relatively large waiting time (for example, 4 seconds) occurs between the end of the first gesture and the execution of the predetermined process. This is wasted time for the user. Therefore, when the length of time of the non-detection state reaches the second value (for example, 2 seconds), the mode control unit 1214 ends the reception mode even before 5 seconds have elapsed since the reception mode started (time t4).
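  • The two ending conditions of the third aspect can be sketched as a single predicate (the names are illustrative; the default values correspond to the example values of 5 seconds for the first value and 2 seconds for the second value):

```python
def mode_should_end(elapsed, idle, first_value=5.0, second_value=2.0):
    """elapsed: seconds since the reception mode started.
    idle: seconds the current non-detection state has lasted
    (treated as 0 while a gesture is being detected).
    The mode ends when either limit is reached, whichever comes first."""
    return elapsed >= first_value or idle >= second_value

assert not mode_should_end(elapsed=1.0, idle=1.0)  # mid-mode, recent gesture
assert mode_should_end(elapsed=3.0, idle=2.0)      # idle timeout ends early
assert mode_should_end(elapsed=5.0, idle=0.0)      # upper limit reached
```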
  • A display device according to a first aspect of the embodiment includes a display unit; a detection unit that has a detection region at a position different from the display unit and is capable of distinguishing and detecting two or more predetermined gestures; a mode control unit that executes a reception mode for receiving a gesture input for a predetermined period; and a processing unit that, after the reception mode ends, performs a predetermined process using input information previously assigned to the last detected gesture among the one or more gestures detected by the detection unit during the period of the reception mode.
  • The display device performs a predetermined process (for example, a command to switch the screen displayed on the display unit to the next screen) using the input information previously assigned to the last detected gesture among the one or more gestures detected during the period of the reception mode in which the gesture input is received. Since only the last gesture needs to be the gesture the user intends, the user can redo the gesture input as many times as necessary within the period of the reception mode. Therefore, according to the display device according to the first aspect of the embodiment, even if an incorrect gesture input is made during the period of the reception mode for accepting the gesture input, the gesture input can be redone without waiting for the next reception mode period.
  • the display device is, for example, a wearable terminal.
  • a wearable terminal is a terminal device that can be worn on a part of a body (for example, a head or an arm).
  • The display device has an aspect in which the period of the reception mode is fixed (the first aspect), an aspect in which the period of the reception mode is not fixed (the second aspect), and an aspect combining these (the third aspect).
  • the mode control unit ends the reception mode when the length of time that has elapsed since the reception mode started reaches a predetermined value.
  • the above configuration further includes an operation unit capable of performing an operation of setting the predetermined value.
  • the user since the user can set a predetermined value, the user can determine the length of the period of the reception mode.
  • the mode control unit changes from the detection state in which the gesture is detected by the detection unit to the non-detection state in which the gesture is not detected during the reception mode, Time measurement is started, and when the length of time in the non-detection state reaches a predetermined value, the reception mode is terminated.
  • the reception mode does not end unless the length of time of the non-detection state reaches a predetermined value (for example, 2 seconds). Therefore, according to the second aspect, it is possible to prevent the reception mode from ending while the user is making a gesture.
  • the above configuration further includes an operation unit capable of performing an operation of setting the predetermined value.
  • the user can set a predetermined value. Therefore, the user can determine the length of time of the non-detection state that is a condition for terminating the reception mode.
  • In the above configuration, the mode control unit ends the reception mode when the length of time that has elapsed since the reception mode started reaches a predetermined first value. In addition, when the state changes during the reception mode from the detection state in which a gesture is detected by the detection unit to the non-detection state in which no gesture is detected, the mode control unit starts measuring the time of the non-detection state, and ends the reception mode when the length of time of the non-detection state reaches a second value, which is a predetermined value smaller than the first value, even if the elapsed time has not reached the first value.
  • the first value is set to be relatively large (for example, 5 seconds) in order to allow the user a margin for gesture input.
  • In the above configuration, the processing unit performs the predetermined process after the reception mode ends. If the first gesture input is correct and the reception mode nevertheless ends only when 5 seconds have elapsed since the reception mode started, a relatively large waiting time (for example, 4 seconds) occurs between the end of the first gesture and the execution of the predetermined process. This is wasted time for the user. Therefore, when the length of time of the non-detection state reaches the second value (for example, 2 seconds), the mode control unit ends the reception mode even before 5 seconds have elapsed since the reception mode started.
  • the above configuration further includes an operation unit capable of performing an operation of setting the first value and the second value.
  • the user can set the first value and the second value. Therefore, the user can determine the first value and the second value that are the conditions for ending the acceptance mode.
  • the above configuration further includes a display control unit that displays information indicating the remaining time in the reception mode on the display unit during the reception mode.
  • This configuration allows the user to recognize when the acceptance mode ends.
  • the detection unit includes a plurality of pyroelectric elements arranged in a two-dimensional matrix, and a gesture processing unit that determines a gesture based on each output of the plurality of pyroelectric elements.
  • the gesture processing unit determines the gesture using a timing at which an output value of each of the plurality of pyroelectric elements exceeds a predetermined threshold, and the mode control unit When the output values of the plurality of pyroelectric elements all exceed the threshold during a period other than the acceptance mode, the acceptance mode is started.
  • This configuration is an example of a condition for starting the reception mode.
  • This configuration has the following effect with respect to the aspects (the first aspect and the third aspect) in which the reception mode is ended when a predetermined time has elapsed since the start of the reception mode.
  • Suppose, instead, that the reception mode were started when the output values of only some of the pyroelectric elements exceed the threshold.
  • If the gesture is stopped before the output values of all of the pyroelectric elements exceed the threshold, the gesture is not determined, but the reception mode continues. For this reason, when the gesture is performed again, the reception mode may end before the gesture is completed.
  • In the above configuration, since the reception mode is started only when the output values of all of the pyroelectric elements exceed the threshold during a period other than the reception mode, such an inconvenience does not occur.
  • The above configuration further includes a notification unit that, each time a gesture is detected by the detection unit, notifies the user of input information indicating the input pre-assigned to the detected gesture.
  • This configuration can notify the user of the input information every time the user makes a gesture, allowing the user to determine whether the gesture he or she made was wrong.
  • A gesture input method according to a second aspect of the embodiment is a method for performing gesture input on a display device that includes a display unit and a detection unit that has a detection region at a position different from the display unit and can distinguish and detect two or more predetermined gestures. The method includes a first step of executing a reception mode for receiving gesture input for a predetermined period, and a second step of performing, after the reception mode ends, a predetermined process using input information pre-assigned to the last detected gesture among the one or more gestures detected by the detection unit during the reception mode.
  • The gesture input method according to the second aspect of the embodiment defines, from the viewpoint of a method, the display device according to the first aspect of the embodiment, and has the same operations and effects as that display device.
  • Thus, a display device and a gesture input method can be provided.
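The reception-mode behavior summarized above can be illustrated with a small sketch. This is a hypothetical model only, not the claimed implementation: the function name, the event representation, and the default first value (5 s) and second value (2 s) are illustrative assumptions drawn from the examples in the text.

```python
def reception_mode_result(events, first_value=5.0, second_value=2.0):
    """Simulate one reception-mode period (illustrative sketch).

    events: list of (timestamp, gesture_name) pairs, sorted ascending,
            with timestamps in seconds measured from the start of the
            reception mode.
    The mode ends when either first_value seconds have elapsed since
    the mode started, or the non-detection state has lasted
    second_value seconds. Returns the gesture whose pre-assigned input
    is processed after the mode ends (the last gesture detected before
    the mode ended), or None if no gesture was detected.
    """
    last_gesture = None
    last_time = None
    for t, gesture in events:
        if t >= first_value:
            break  # mode had already ended: overall timeout (first value)
        if last_time is not None and t - last_time >= second_value:
            break  # mode had already ended: non-detection timeout (second value)
        last_gesture, last_time = gesture, t
    return last_gesture
```

For example, `reception_mode_result([(0.5, "left"), (1.2, "right")])` returns `"right"`: an incorrect first gesture is simply superseded by the re-entered gesture within the same reception mode, which is the core idea of the first aspect.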

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This display apparatus has: a display unit; a detection unit; a mode control unit; and a processing unit. The detection unit has a detection region at a position different from that of the display unit, and can discriminate and detect two or more predetermined gestures. The mode control unit executes a reception mode for receiving gesture inputs for a prescribed period of time. The processing unit executes, after the reception mode has ended, a prescribed process by using input information allocated to the last detected gesture among one or more of the gestures detected by the detection unit during the period of the reception mode.

Description

Display device and gesture input method

The present invention relates to a display device capable of gesture input, and to a gesture input method.

Gesture input means operating a display device (for example, a terminal device or a game machine) by a body gesture, a hand gesture, or the like. For example, on a smartphone, gesture input can be performed by touching the screen.

A user may not want to touch the screen, for example when the user's hands are dirty. Gesture input that does not use the screen has therefore been proposed. For example, Patent Literature 1 discloses a display device that includes a display, a motion reception unit, and a display control unit that controls display information displayed on the display according to the motion received by the motion reception unit. In gesture input that does not use the screen, a detection unit that detects gestures (hereinafter, detection unit) is provided at a position different from the display unit on which the screen is displayed. The motion reception unit is such a detection unit.

An incorrect gesture input may occur. That is, a gesture not intended by the user may be detected by the detection unit. For example, a hand movement that the user makes unconsciously may be detected by the detection unit. In particular, in the case of a head-mounted display, the detection unit is not visible to the user, so an incorrect gesture input is likely to occur.

Suppose that gesture input is possible during the period of a reception mode for accepting gesture input, and is not possible outside this period. If an incorrect gesture input is made during one reception mode period, the user cannot input a gesture again until the next reception mode period. This is stressful for the user. In other words, when an incorrect gesture input is made, any waiting time before the gesture can be re-entered is stressful for the user.

JP 2010-129069 A

An object of the present invention is to provide a display device and a gesture input method that allow gesture input without waiting for the next reception mode period even if an incorrect gesture input is made during a reception mode period for accepting gesture input.

To achieve the above object, a display device reflecting one aspect of the present invention includes a display unit, a detection unit, a mode control unit, and a processing unit. The detection unit has a detection region at a position different from the display unit, and can distinguish and detect two or more predetermined gestures. The mode control unit executes a reception mode for receiving gesture input for a predetermined period. After the reception mode ends, the processing unit performs a predetermined process using input information pre-assigned to the last detected gesture among the one or more gestures detected by the detection unit during the reception mode.

The advantages and features provided by one or more embodiments of the invention will be fully understood from the detailed description given below and the accompanying drawings. The detailed description and the accompanying drawings are given by way of example only and are not intended as a definition of the limits of the invention.

A perspective view showing the structural configuration of the HMD according to the present embodiment.
A front view showing the structural configuration of the HMD according to the present embodiment.
A schematic cross-sectional view showing the configuration of the display unit provided in the HMD according to the present embodiment.
A diagram showing the configuration of the proximity sensor provided in the HMD according to the present embodiment.
A block diagram showing the electrical configuration of the HMD according to the present embodiment.
A front view of the HMD according to the present embodiment as worn.
A side view of the HMD according to the present embodiment as worn.
A partial top view of the HMD according to the present embodiment as worn.
A diagram showing an example of an image visually recognized by the user through the see-through image display unit.
A diagram showing an example of the output of the proximity sensor provided in the HMD according to the present embodiment.
An explanatory diagram explaining the relationship between gestures and input information in the present embodiment.
A waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 1 is made.
A waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 2 is made.
A flowchart explaining the operation when a gesture input is made in the first aspect of the HMD according to the present embodiment.
A waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD in the first aspect when a gesture input is made to switch the screen to the next screen.
An explanatory diagram explaining an example of a screen that includes an image indicating whether the reception mode is active and an image indicating the reception time.
A flowchart explaining the operation when a gesture input is made in the second aspect of the HMD according to the present embodiment.
A waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD in the second aspect when a gesture input is made to switch the screen to the next screen.
An explanatory diagram explaining an example of a screen that includes an image indicating whether the reception mode is active and an image indicating the time relating to the non-detection state.
A flowchart explaining the operation when a gesture input is made in the third aspect of the HMD according to the present embodiment.
A waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD in the third aspect when a gesture input is made to switch the screen to the next screen.
An explanatory diagram explaining an example of a screen that includes an image indicating whether the reception mode is active, an image indicating the reception time, and an image indicating the time relating to the non-detection state.

Hereinafter, one or more embodiments of the present invention will be described with reference to the drawings. However, the scope of the invention is not limited to the disclosed embodiments. In the drawings, components given the same reference numerals are the same, and their description will be omitted as appropriate.

The display device according to the present embodiment is, for example, a wearable terminal (a head-mounted display (HMD), a wristwatch-type terminal, or the like) or a smart terminal (a smartphone, a tablet terminal, or the like). In this specification, a head-mounted display (HMD) is described as an example.

FIG. 1 is a perspective view showing the structural configuration of the HMD 100 according to the present embodiment. FIG. 2 is a front view showing the structural configuration of the HMD 100 according to the present embodiment. FIG. 3 is a schematic cross-sectional view showing the configuration of a display unit 104 provided in the HMD 100 according to the present embodiment. FIG. 4 is a diagram showing the configuration of a proximity sensor 105 provided in the HMD 100 according to the present embodiment. FIG. 5 is a block diagram showing the electrical configuration of the HMD 100 according to the present embodiment. Hereinafter, the right side and the left side of the HMD 100 refer to the right side and the left side as seen by the user wearing the HMD 100.

The structural configuration of the HMD 100 will be described. Referring to FIGS. 1 and 2, the HMD 100 according to the present embodiment includes a frame 101, which is an example of a head-mounting member for mounting on the head. The frame 101 includes a front part 101a to which two spectacle lenses 102 are attached, and side parts 101b and 101c extending rearward from both ends of the front part 101a. The two spectacle lenses 102 attached to the frame 101 may or may not have refractive power (optical power, the reciprocal of the focal length).

A cylindrical main body 103 is fixed to the front part 101a of the frame 101 above the right spectacle lens 102 (it may be above the left one, depending on the user's dominant eye or the like). The main body 103 is provided with the display unit 104. Arranged in the main body 103 is a display control unit 104DR (FIG. 5) that controls the display of the display unit 104 based on instructions from a control processing unit 121 (FIG. 5) described later. If necessary, a display unit may be arranged in front of each eye.

Referring to FIG. 3, the display unit 104 includes an image forming unit 104A and an image display unit 104B. The image forming unit 104A is incorporated in the main body 103 and includes a light source 104a, a unidirectional diffusion plate 104b, a condenser lens 104c, and a display element 104d. The image display unit 104B, a so-called see-through display member, is an overall plate-shaped component arranged to extend downward from the main body 103 and parallel to one of the spectacle lenses 102 (FIG. 1), and includes an eyepiece prism 104f, a deflection prism 104g, and a hologram optical element 104h.

The light source 104a has the function of illuminating the display element 104d, and is composed of, for example, an RGB-integrated light-emitting diode (LED) that emits light in three wavelength bands whose peak wavelengths and full widths at half maximum of light intensity are 462 ± 12 nm (blue light (B light)), 525 ± 17 nm (green light (G light)), and 635 ± 11 nm (red light (R light)).

The display element 104d displays an image by modulating the light emitted from the light source 104a in accordance with image data, and is composed of a transmissive liquid crystal display element having pixels, each serving as a light-transmitting region, arranged in a matrix. The display element 104d may instead be of a reflective type.

The eyepiece prism 104f totally reflects the image light from the display element 104d, which enters through its base end face PL1, between the opposed parallel inner side face PL2 and outer side face PL3, and guides it to the user's pupil via the hologram optical element 104h, while also transmitting external light to the user's pupil. The eyepiece prism 104f, together with the deflection prism 104g, is formed of, for example, an acrylic resin. The eyepiece prism 104f and the deflection prism 104g sandwich the hologram optical element 104h between inclined faces PL4 and PL5, which are inclined with respect to the inner side face PL2 and the outer side face PL3, and are joined with an adhesive.

The deflection prism 104g is joined to the eyepiece prism 104f and forms, together with the eyepiece prism 104f, a substantially parallel flat plate. When the spectacle lens 102 (FIG. 1) is mounted between the display unit 104 and the user's pupil, even a user who normally wears spectacles can observe the image.

The hologram optical element 104h is a volume-phase reflective hologram that diffracts and reflects the image light (light of wavelengths corresponding to the three primary colors) emitted from the display element 104d, guides it to the pupil B, and magnifies the image displayed on the display element 104d to guide it to the user's pupil as a virtual image. The hologram optical element 104h is made, for example, to diffract (reflect) light in three wavelength ranges whose peak wavelengths and full widths at half maximum of diffraction efficiency are 465 ± 5 nm (B light), 521 ± 5 nm (G light), and 634 ± 5 nm (R light). Here, the peak wavelength of diffraction efficiency is the wavelength at which the diffraction efficiency peaks, and the full width at half maximum of diffraction efficiency is the wavelength width over which the diffraction efficiency is at least half its peak value.

In the display unit 104 configured as described above, light emitted from the light source 104a is diffused by the unidirectional diffusion plate 104b, condensed by the condenser lens 104c, and made incident on the display element 104d. The light incident on the display element 104d is modulated pixel by pixel based on the image data input from the display control unit 104DR, and is emitted as image light. A color image is thereby displayed on the display element 104d. The image light from the display element 104d enters the eyepiece prism 104f through its base end face PL1, is totally reflected a plurality of times between the inner side face PL2 and the outer side face PL3, and enters the hologram optical element 104h. The light incident on the hologram optical element 104h is reflected there, passes through the inner side face PL2, and reaches the pupil B. At the position of the pupil B, the user can observe a magnified virtual image of the image displayed on the display element 104d, and can visually recognize it as a screen formed on the image display unit 104B.

Meanwhile, the eyepiece prism 104f, the deflection prism 104g, and the hologram optical element 104h transmit almost all external light, so the user can observe an external image (real image) through them. Therefore, the virtual image of the image displayed on the display element 104d is observed overlapping part of the external image. In this way, the user of the HMD 100 can simultaneously observe the image provided by the display element 104d and the external image via the hologram optical element 104h. When the display unit 104 is in a non-display state, the image display unit 104B becomes transparent, and only the external image can be observed. In the present embodiment, the display unit is configured by combining a light source, a liquid crystal display element, and an optical system; however, instead of the combination of a light source and a liquid crystal display element, a self-luminous display element (for example, an organic EL display element) may be used. Furthermore, instead of the combination of a light source, a liquid crystal display element, and an optical system, a transmissive organic EL display panel that is transparent in the non-light-emitting state may be used.

Referring to FIGS. 1 and 2, on the front surface of the main body 103, the proximity sensor 105 arranged near the center of the frame 101 and a lens 106a of a camera 106 arranged near the side part 101b are provided so as to face forward.

In this specification, a "proximity sensor" is a sensor that, in order to detect that an object, for example a part of the human body (a hand, a finger, or the like), is close in front of the user's eyes, detects whether the object is present within a detection region in the proximity range in front of the detection surface of the proximity sensor, and outputs a signal. The proximity range may be set as appropriate according to the characteristics and preferences of the user; for example, it can be a range within 200 mm from the detection surface of the proximity sensor. If the distance from the proximity sensor is within 200 mm, the user can move a palm or finger into and out of the user's field of view with a bent arm, so operations can easily be performed by gestures using a hand, a finger, or a pointing tool (for example, a rod-shaped member), and the risk of erroneously detecting a human body other than the user, furniture, or the like is reduced.

Proximity sensors are of two types: passive and active. A passive proximity sensor has a detection device that detects invisible light or electromagnetic waves radiated from an object when the object approaches. Passive proximity sensors include pyroelectric sensors, which detect invisible light such as infrared rays radiated from an approaching human body, and capacitance sensors, which detect a change in capacitance between the sensor and an approaching human body. An active proximity sensor has a projection device that projects invisible light (or sound waves) and a detection device that receives the invisible light (or sound waves) reflected back from an object. Active proximity sensors include infrared sensors, which project infrared rays and receive the infrared rays reflected by an object; laser sensors, which project laser light and receive the laser light reflected by an object; and ultrasonic sensors, which project ultrasonic waves and receive the ultrasonic waves reflected by an object. A passive proximity sensor does not need to project energy toward the object and therefore excels in low power consumption. An active proximity sensor can more easily ensure reliable detection; for example, even when the user wears gloves that do not transmit detection light radiated from the human body, such as infrared light, it can detect the movement of the user's hand. A plurality of types of proximity sensors may be combined.

Referring to FIG. 4, in the present embodiment, a pyroelectric sensor including a plurality of pyroelectric elements arranged in a two-dimensional matrix is used as the proximity sensor 105. The "right side" and "left side" in FIG. 4 refer to the right side and the left side as seen by the user wearing the HMD 100. The proximity sensor 105 includes four pyroelectric elements RA, RB, RC, and RD arranged in two rows and two columns; it receives invisible light, such as infrared light radiated from the human body, as detection light, and a corresponding signal is output from each of the pyroelectric elements RA to RD. The intensity of the output of each of the pyroelectric elements RA to RD changes according to the distance from the light-receiving surface of the proximity sensor 105 to the object; the shorter the distance, the greater the intensity.
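Because the later aspects determine gestures from the timing at which each element's output exceeds a threshold, the basic idea can be sketched as follows. This is an illustration only: the element layout (RA/RB in the top row, RA/RC on the user's left), the function name, and the tie-breaking rule are assumptions, not the arrangement claimed in FIG. 4.

```python
def classify_swipe(crossing_times):
    """Infer a swipe direction from the order in which each pyroelectric
    element's output first exceeds the threshold (illustrative sketch).

    crossing_times: dict mapping element name ("RA".."RD") to the time (s)
    at which its output first exceeded the threshold; elements that never
    crossed are absent.

    Assumed layout (illustration only):  RA RB   <- top row
                                         RC RD   <- bottom row
    with RA/RC on the user's left and RB/RD on the user's right.
    """
    if len(crossing_times) < 4:
        return None  # not all elements crossed: gesture cannot be determined
    left = min(crossing_times["RA"], crossing_times["RC"])
    right = min(crossing_times["RB"], crossing_times["RD"])
    top = min(crossing_times["RA"], crossing_times["RB"])
    bottom = min(crossing_times["RC"], crossing_times["RD"])
    # Whichever axis shows the larger first-crossing time difference
    # is taken as the movement axis.
    if abs(left - right) >= abs(top - bottom):
        return "left-to-right" if left < right else "right-to-left"
    return "top-to-bottom" if top < bottom else "bottom-to-top"
```

For example, if the left column crosses the threshold about 0.1 s before the right column, the function reports a left-to-right hand movement; returning `None` when fewer than four elements crossed mirrors the rule that the gesture is not determined unless all output values exceed the threshold.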

Referring to FIGS. 1 and 2, a right sub-body 108-R is attached to the right side part 101b of the frame 101, and a left sub-body 108-L is attached to the left side part 101c of the frame 101. The right sub-body 108-R and the left sub-body 108-L have an elongated plate shape.

The main body 103 and the right sub-body 108-R are connected by a wiring HS so that signals can be transmitted, and the right sub-body 108-R is connected to a control unit CTU via a cord CD extending from its rear end.

Next, the electrical configuration of the HMD 100 will be described. Referring to FIG. 5, the HMD 100 includes the control unit CTU, the display unit 104, the display control unit 104DR, the proximity sensor 105, and the camera 106. The control unit CTU includes a control processing unit 121, an operation unit 122, a storage unit 125, a battery 126, and a power supply circuit 127.

The display control unit 104DR is a circuit that is connected to the control processing unit 121 and causes the image forming unit 104A of the display unit 104 to form an image by controlling the image forming unit 104A under the control of the control processing unit 121. The image forming unit 104A is as described above.

The camera 106 is a device that is connected to the control processing unit 121 and generates an image of a subject under the control of the control processing unit 121. The camera 106 includes, for example, an imaging optical system that forms an optical image of the subject on a predetermined imaging plane, an image sensor whose light-receiving surface is arranged to coincide with the imaging plane and which converts the optical image of the subject into an electrical signal, and a digital signal processor (DSP) that performs known image processing on the output of the image sensor to generate an image (image data). The imaging optical system includes one or more lenses, one of which is the lens 106a. The camera 106 outputs the generated image data to the control processing unit 121.

The proximity sensor 105 is connected to the control processing unit 121. The proximity sensor 105 is as described above, and it sends its output to the control processing unit 121.

The operation unit 122 is a device that is connected to the control processing unit 121 and inputs predetermined preset instructions, such as power on/off, to the HMD 100; it is, for example, one or more switches assigned predetermined functions.

The battery 126 is a battery that stores electric power and supplies the electric power. The battery 126 may be a primary battery or a secondary battery. The power supply circuit 127 is a circuit that supplies the electric power from the battery 126 to each part of the HMD 100 that requires power, at a voltage appropriate for each part.

The storage unit 125 is a circuit that is connected to the control processing unit 121 and stores various predetermined programs and various predetermined data under the control of the control processing unit 121. The various predetermined programs include control processing programs such as a control program for controlling each part of the HMD 100 according to the function of that part, and a gesture processing program for determining gestures based on the output of the proximity sensor 105. The storage unit 125 includes, for example, a ROM (Read Only Memory), which is a nonvolatile storage element, and an EEPROM (Electrically Erasable Programmable Read Only Memory), which is a rewritable nonvolatile storage element. The storage unit 125 also includes a RAM (Random Access Memory) or the like that serves as the working memory of the control processing unit 121 and stores data generated during the execution of the predetermined programs.

 The control processing unit 121 controls each part of the HMD 100 according to its function, determines predetermined gestures based on the output of the proximity sensor 105, and executes processing according to the determination result. The control processing unit 121 includes, for example, a CPU (Central Processing Unit) and its peripheral circuits. By executing the control processing program, a control unit 1211, a gesture processing unit 1212, and a processing unit 1213 are functionally configured in the control processing unit 121. Part or all of the functions of the control unit 1211 may be realized by processing by a DSP (Digital Signal Processor) instead of, or together with, processing by the CPU. Likewise, part or all of the functions of the control unit 1211 may be realized by a dedicated hardware circuit instead of, or together with, processing by software. The same applies to the gesture processing unit 1212 and the processing unit 1213.

 The control unit 1211 controls each part of the HMD 100 according to its function. The control unit 1211 has the functions of a mode control unit 1214 and a storage control unit 1215. These functions will be described later.

 The gesture processing unit 1212 determines predetermined gestures based on the respective outputs of the plurality of pyroelectric elements in the proximity sensor 105 (in this embodiment, the four pyroelectric elements RA to RD), and notifies the processing unit 1213 of the determination result. The gesture processing unit 1212 and the proximity sensor 105 constitute a detection unit 128. The detection unit 128 has a detection area SA (FIGS. 7A, 7B, and 8) at a position different from the image display unit 104B (an example of a display unit), and distinguishes and detects two or more predetermined gestures.

 The processing unit 1213 performs predetermined processing (for example, sending a command to switch the screen to the next screen to the display control unit 104DR) using the determination result of the gesture processing unit 1212. Details of the processing unit 1213 will be described later.

 The basic operation of detecting a gesture in the HMD 100 will be described. FIG. 6 is a front view of the HMD 100 according to this embodiment when worn. FIG. 7A is a side view of the HMD 100 when worn, and FIG. 7B is a partial top view of the HMD 100 when worn; FIGS. 7A and 7B also show the hand HD of the user US. FIG. 8 shows an example of the image the user sees through the see-through image display unit 104B. FIG. 9 shows an example of the output of the proximity sensor 105 provided in the HMD 100 according to this embodiment: FIG. 9(A) shows the output of the pyroelectric element RA, FIG. 9(B) that of the pyroelectric element RB, FIG. 9(C) that of the pyroelectric element RC, and FIG. 9(D) that of the pyroelectric element RD. In each graph of FIG. 9, the horizontal axis is time and the vertical axis is the output level (intensity). Here, a gesture input is an operation in which at least the hand HD or a finger of the user US enters or leaves the detection area SA of the proximity sensor 105, and which the gesture processing unit 1212 of the control processing unit 121 of the HMD 100 can detect via the proximity sensor 105.

 Referring to FIG. 8, the screen 104i of the image display unit 104B is arranged so as to overlap the effective visual field EV of the user's eye facing the image display unit 104B (here, so as to be positioned within the effective visual field EV). The detection area SA of the proximity sensor 105 lies within the visual field of the user's eye facing the image display unit 104B. Preferably, the detection area SA lies within the stable fixation field of the user's eye or within the field inside it (within about 90° horizontally and about 70° vertically). More preferably, the detection area SA overlaps the effective visual field EV, which lies inside the stable fixation field, or the field inside it (within about 30° horizontally and about 20° vertically). By adjusting the position and orientation of the proximity sensor 105 when installing it, the detection area SA can be placed at these positions.

 FIG. 8 shows an example in which the detection area SA overlaps the screen 104i. The HMD is set so that, with the user US wearing the frame 101 (a head mounting member) on the head, the detection area SA of the proximity sensor 105 is located within the visual field of the user's eye. This allows the user to reliably see the hand entering and leaving the detection area SA of the proximity sensor 105 without moving the eyes, while observing the hand HD through the screen 104i. In particular, by placing the detection area SA of the proximity sensor 105 within the stable fixation field or the field inside it, the user can reliably perform gesture input while recognizing the detection area SA even while observing the screen. Further, by placing the detection area SA within the effective visual field EV or the field inside it, gesture input can be performed still more reliably, and if the detection area SA overlaps the screen 104i, gesture input becomes more reliable still. When the proximity sensor 105 has a plurality of pyroelectric elements RA to RD, as in this embodiment, the entire light receiving region of the pyroelectric elements RA to RD is regarded as one light receiving unit, and its maximum detection range is regarded as the detection area SA. When the detection area SA of the proximity sensor 105 is set to overlap the screen 104i as in FIG. 8, it is preferable to display an image indicating the detection area SA on the screen 104i (for example, to display the boundary of the area SA as a solid line). This lets the user recognize the detection area SA with certainty, so that gesture operations can be performed more reliably.

 Next, the basic principle of gesture detection will be described. Suppose that nothing is in front of the user US while the proximity sensor 105 is operating. In this case, the proximity sensor 105 receives no invisible light as detection light, so the gesture processing unit 1212 of the control processing unit 121 determines that no gesture is being performed. On the other hand, as shown in FIGS. 7A and 7B, when the user US brings his or her own hand HD close in front of the eyes, the proximity sensor 105 detects the invisible light radiated from the hand HD, and the gesture processing unit 1212 determines from the resulting output signal of the proximity sensor 105 that a gesture has been performed. In the following, gestures are described as being performed with the hand HD of the user US, but a gesture may also be performed with a finger or another body part, or with a pointing tool made of a material that can radiate invisible light.

 As described above, the proximity sensor 105 has four pyroelectric elements RA to RD arranged in two rows and two columns (see FIG. 4). Therefore, when the user US brings the hand HD toward the front of the HMD 100 from the left, right, top, or bottom, the timing of the signals detected by the pyroelectric elements RA to RD differs.

 For example, referring to FIGS. 7A, 7B, and 8, in the case of a gesture in which the user US moves the hand HD from right to left in front of the HMD 100, the invisible light radiated from the hand HD enters the proximity sensor 105, and the pyroelectric elements RA and RC receive it first. Therefore, referring to FIGS. 4 and 9, the signals of the pyroelectric elements RA and RC rise first, and the signals of the pyroelectric elements RB and RD rise with a delay. After that, the signals of the pyroelectric elements RA and RC fall, and the signals of the pyroelectric elements RB and RD fall with a delay. The gesture processing unit 1212 detects the timing of these signals and determines that the user US has performed a gesture moving the hand HD from right to left.
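 As a purely illustrative sketch of this timing logic, the swipe direction could be recovered from the threshold-crossing times as below. The function name, the timestamp format, and the axis tie-break rule are assumptions of this sketch; only the 2×2 layout of RA to RD and the rise-order reasoning come from the embodiment.

```python
# Illustrative sketch only: per FIG. 4 as described in the text, RA/RC
# form one column and RB/RD the other, while RA/RB form one row and
# RC/RD the other. rise_times maps each element name to the time (s)
# its output first exceeded the threshold th; the values are hypothetical.

def detect_direction(rise_times):
    right_col = min(rise_times['RA'], rise_times['RC'])   # fires first on a right-to-left pass
    left_col = min(rise_times['RB'], rise_times['RD'])    # fires first on a left-to-right pass
    top_row = min(rise_times['RA'], rise_times['RB'])
    bottom_row = min(rise_times['RC'], rise_times['RD'])
    # Assumed tie-break: whichever axis shows the larger timing lead wins.
    if abs(right_col - left_col) >= abs(top_row - bottom_row):
        return 'right-to-left' if right_col < left_col else 'left-to-right'
    return 'top-to-bottom' if top_row < bottom_row else 'bottom-to-top'
```

 For the right-to-left example above, RA and RC cross the threshold first, so `detect_direction({'RA': 0.0, 'RC': 0.0, 'RB': 0.2, 'RD': 0.2})` yields `'right-to-left'`.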

 In this embodiment, four gestures are used as an example of the two or more predetermined gestures. "Two or more" means plural and is not limited to four. FIG. 10 is an explanatory diagram illustrating the relationship between gestures and input information in this embodiment; it contains an arrow showing the hand movement of each of the four gestures and the input information assigned in advance to each of them. The gesture processing unit 1212 shown in FIG. 5 determines gestures 1 to 4 using the timing at which the output level of each of the pyroelectric elements RA to RD exceeds a threshold value th.

 Gesture 1 will be described with reference to FIGS. 4, 10, and 11. FIG. 11 is a waveform diagram showing the changes in the output levels of the pyroelectric elements RA to RD when gesture 1 is performed. In this waveform diagram, the horizontal axis indicates time and the vertical axis indicates the output level. The threshold values th are all the same value.

 Gesture 1 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the left side and exits from the right side. When the user performs gesture 1, the output levels of the pyroelectric elements RB and RD exceed the threshold value th; with a delay, the output levels of the pyroelectric elements RA and RC exceed the threshold value th; then the output levels of RB and RD fall to or below the threshold value th, and with a delay the output levels of RA and RC fall to or below the threshold value th. When the gesture processing unit 1212 (FIG. 5) detects such a change in the output levels, it determines that gesture 1 has been performed. The input information assigned in advance to gesture 1 is a "command to switch to the next screen"; by performing gesture 1, the user can input this command to the HMD 100.

 Gesture 2 will be described with reference to FIGS. 4, 10, and 12. FIG. 12 is a waveform diagram showing the changes in the output levels of the pyroelectric elements RA to RD when gesture 2 is performed. The horizontal and vertical axes of this waveform diagram are the same as those of the waveform diagram in FIG. 11.

 Gesture 2 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the right side and exits from the left side. When the user performs gesture 2, the output levels of the pyroelectric elements RA and RC exceed the threshold value th; with a delay, the output levels of the pyroelectric elements RB and RD exceed the threshold value th; then the output levels of RA and RC fall to or below the threshold value th, and with a delay the output levels of RB and RD fall to or below the threshold value th. When the gesture processing unit 1212 (FIG. 5) detects such a change in the output levels, it determines that gesture 2 has been performed. The input information assigned in advance to gesture 2 is a "command to switch to the previous screen"; by performing gesture 2, the user can input this command to the HMD 100.

 Gesture 3 and gesture 4 will now be described. Waveform diagrams for these gestures are omitted, and the description of the waveform changes is also omitted (the reasoning is the same as for gestures 1 and 2).

 Referring to FIGS. 4 and 10, gesture 3 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the upper side and exits from the lower side. The input information assigned in advance to gesture 3 is a "command to switch to the last screen"; by performing gesture 3, the user can input this command to the HMD 100.

 Gesture 4 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the lower side and exits from the upper side. The input information assigned in advance to gesture 4 is a "command to switch to the first screen"; by performing gesture 4, the user can input this command to the HMD 100.

 The combinations of gestures 1 to 4 and input information are not limited to the example shown in FIG. 10; different combinations may be used. For example, the input information "command to switch to the previous screen" may be assigned to gesture 1. The gestures that the proximity sensor 105 can detect are not limited to gestures 1 to 4, so arbitrary input information may also be assigned to gestures other than these.
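 The assignment of FIG. 10 amounts to a simple lookup table; a minimal sketch follows (the dict name and the command strings are illustrative paraphrases, not the patent's literal wording):

```python
# Gesture number -> assigned command, per the relationship of FIG. 10.
GESTURE_COMMANDS = {
    1: 'switch to the next screen',      # left -> right swipe
    2: 'switch to the previous screen',  # right -> left swipe
    3: 'switch to the last screen',      # top -> bottom swipe
    4: 'switch to the first screen',     # bottom -> top swipe
}
```

 A different assignment, such as mapping gesture 1 to "switch to the previous screen", is simply a different table.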

 Referring to FIG. 5, in this embodiment, two or more predetermined gestures are distinguished and detected by the detection unit 128, which comprises the proximity sensor 105 and the gesture processing unit 1212. The detection unit 128 is not limited to this configuration. For example, the detection unit 128 may comprise the camera 106 (a two-dimensional image sensor) and an image processing unit that performs predetermined image processing on images captured by the camera 106 to recognize gestures.

 There are several causes of incorrect gesture input; two of them are described here. FIG. 7B shows a state in which the hand HD is the right hand and is inside the detection area SA. Suppose that, with the hand HD positioned to the right of the detection area SA, the user intends to perform gesture 1. To perform gesture 1, the user US must first move the hand HD to the left of the detection area SA, and this movement must be made outside the detection area SA.

 However, if this movement is mistakenly made inside the detection area SA, the detection unit 128 detects a gesture moving the hand HD from right to left (that is, gesture 2). A gesture not intended by the user US is thus detected by the detection unit 128, resulting in an incorrect gesture input.

 Further, suppose that the user US has a habit of touching his or her hair. Since the HMD 100 is worn on the head of the user US, this habit can also cause incorrect gesture input.

 As described below, in this embodiment, even if an incorrect gesture input is made, the gesture input can be redone immediately.

 The HMD 100 according to this embodiment has a first aspect, a second aspect, and a third aspect. The operation of these aspects will be described using as an example the gesture input (gesture 1) that switches the screen displayed on the image display unit 104B to the next screen.

 In the first aspect, the mode control unit 1214 shown in FIG. 5 ends the reception mode when the length of time elapsed since the reception mode started reaches a predetermined value, here 5 seconds as an example. In the first aspect, the duration of the reception mode is thus fixed (5 seconds). The user operates the operation unit 122 (for example, a cross key provided on the operation unit 122) to set the predetermined value in the mode control unit 1214 in advance; the user can thereby determine the length of the reception mode.

 FIG. 13 is a flowchart explaining the operation of the first aspect when a gesture input is made. FIG. 14 is a waveform diagram showing the changes in the output levels of the pyroelectric elements RA to RD in the first aspect when a gesture input is made to switch the screen to the next screen. Referring to FIGS. 4, 5, 13, and 14, suppose the user positions a hand in front of the proximity sensor 105 intending to perform gesture 1 but, for one of the causes described above, mistakenly performs gesture 2. When the output levels of the pyroelectric elements RA to RD all exceed the threshold value th, the mode control unit 1214 starts the reception mode for accepting gesture input (step S1 in FIG. 13, time t1 in FIG. 14).

 At this time, the display control unit 104DR includes, in the screen displayed on the image display unit 104B, an image indicating whether the reception mode is active and an image indicating the reception time. FIG. 15 is an explanatory diagram showing an example of a screen 10-1 containing these images. The image 11 is circular, and its color indicates whether the reception mode is active: during the reception mode the image 11 is blue, and when the reception mode ends it turns red. Here, the image 11 is blue.

 The image 12 indicates the reception time, which serves as information showing the remaining time of the reception mode. More specifically, the mode control unit 1214 has a countdown timer function; when the reception mode starts, it sets the countdown timer to 5 seconds and starts it. The display control unit 104DR sets the image 12 to an image indicating 5 seconds, and as the countdown timer reaches 4, 3, 2, 1, and 0 seconds, updates the image 12 to indicate 4, 3, 2, 1, and 0 seconds, respectively. When the countdown timer reaches 0 seconds, the mode control unit 1214 ends the reception mode (step S8).
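 This countdown behavior can be sketched as follows. The class name, the injectable clock, and the ceiling-based display value are choices of this sketch, not of the embodiment:

```python
import math
import time

class ReceptionMode:
    """Sketch of the fixed 5-second reception window of the first aspect.
    The clock is injectable so the behavior can be checked without waiting."""
    def __init__(self, duration=5.0, clock=time.monotonic):
        self.duration = duration
        self.clock = clock
        self.start = None

    def begin(self):
        # Step S1: the reception mode starts and the countdown is set.
        self.start = self.clock()

    def active(self):
        # Becomes False once the elapsed time reaches 5 s (step S8).
        return self.clock() - self.start < self.duration

    def remaining(self):
        # Whole seconds shown on screen as image 12: 5, 4, ..., 0.
        return max(0, math.ceil(self.duration - (self.clock() - self.start)))
```

 With a fake clock, `remaining()` counts down from 5 to 0, and `active()` turns false exactly when the elapsed time reaches the predetermined value.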

 Referring to FIGS. 4, 5, 13, and 14, the gesture processing unit 1212 determines from which side, upper, lower, left, or right, the hand HD entered the detection area SA (FIGS. 7A, 7B, and 8). Here, the output levels of the pyroelectric elements RA and RC exceeded the threshold value th earlier than those of the pyroelectric elements RB and RD, so the entry side is determined to be the right side. The storage control unit 1215 stores information indicating "right side" in the storage unit 125 (step S2 in FIG. 13).

 The gesture processing unit 1212 then determines from which side, upper, lower, left, or right, the hand HD exited the detection area SA. Here, the output levels of the pyroelectric elements RB and RD fell to or below the threshold value th later than those of the pyroelectric elements RA and RC, so the exit side is determined to be the left side. The storage control unit 1215 stores information indicating "left side" in the storage unit 125 (step S3 in FIG. 13).

 The gesture processing unit 1212 determines the type of gesture using the results of steps S2 and S3 (step S4 in FIG. 13). The gesture types are the four gestures described with reference to FIG. 10. The storage unit 125 stores in advance a table (hereinafter, the table of FIG. 10) showing the correspondence between gestures 1 to 4 and the input information assigned to them.

 The gesture processing unit 1212 reads the determination results of steps S2 and S3 stored in the storage unit 125. Here, the result of step S2 is "right side" and the result of step S3 is "left side". Accordingly, the hand HD entered the detection area SA from its right side and exited from its left side, so the gesture processing unit 1212 determines that the gesture is gesture 2. When the gesture processing unit 1212 determines that the movement corresponds to none of gestures 1 to 4, it performs error processing, whereby the display control unit 104DR causes the image display unit 104B to display a screen prompting a correct gesture.
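 The classification of steps S2 to S4 reduces to a lookup on the (entry side, exit side) pair; a minimal sketch follows (the function name and side labels are illustrative), with `None` standing in for the error-processing branch:

```python
# Entry/exit sides -> gesture number, per the four gestures of FIG. 10.
# Any other combination falls through to None, i.e. error processing.
def classify_gesture(entry_side, exit_side):
    table = {
        ('left', 'right'): 1,
        ('right', 'left'): 2,
        ('top', 'bottom'): 3,
        ('bottom', 'top'): 4,
    }
    return table.get((entry_side, exit_side))
```

 The mistaken input above, entry "right side" and exit "left side", classifies as gesture 2.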

 The storage control unit 1215 refers to the table of FIG. 10 and stores in the storage unit 125 the input information assigned to the gesture determined in step S4 (step S5 in FIG. 13). Here, the input information assigned to gesture 2 (the "command to switch to the previous screen") is stored in the storage unit 125.

 The display control unit 104DR includes, in the screen 10-1 shown in FIG. 15, a character image (not shown) indicating "a command to switch to the previous screen has been input". Seeing this character image, the user recognizes that he or she made an incorrect gesture.

 The mode control unit 1214 determines whether the length of time elapsed since the reception mode started has reached 5 seconds (the predetermined value) (step S6 in FIG. 13). If the elapsed time has not reached 5 seconds (No in step S6), the mode control unit 1214 determines whether the output levels of the pyroelectric elements RA to RD have all exceeded the threshold value th (step S7 in FIG. 13); that is, it waits until the next gesture is made. When the mode control unit 1214 determines that the condition that all the output levels of the pyroelectric elements RA to RD exceed the threshold value th is not satisfied (No in step S7), it returns to the processing of step S6.

 As described above, the user has recognized that he or she made an incorrect gesture, and therefore redoes the gesture. Here, suppose the user performs the intended gesture (gesture 1).

 When the mode control unit 1214 determines that the output levels of the pyroelectric elements RA to RD have all exceeded the threshold value th (Yes in step S7), the gesture processing unit 1212 performs the processing of step S2.

 The gesture processing unit 1212 determines from which side, upper, lower, left, or right, the hand HD entered the detection area SA. Here, the output levels of the pyroelectric elements RB and RD exceeded the threshold value th earlier than those of the pyroelectric elements RA and RC, so the entry side is determined to be the left side. The storage control unit 1215 stores information indicating "left side" in the storage unit 125 (step S2).

 The gesture processing unit 1212 determines from which side, upper, lower, left, or right, the hand HD exited the detection area SA. Here, the output levels of the pyroelectric elements RA and RC fell to or below the threshold value th later than those of the pyroelectric elements RB and RD, so the exit side is determined to be the right side. The storage control unit 1215 stores information indicating "right side" in the storage unit 125 (step S3).

 The gesture processing unit 1212 determines the type of gesture using the results of steps S2 and S3 (step S4). Specifically, it reads the determination results of steps S2 and S3 stored in the storage unit 125. Here, the result of step S2 is "left side" and the result of step S3 is "right side". Accordingly, the hand HD entered the detection area SA from its left side and exited from its right side, so the gesture processing unit 1212 determines that the gesture is gesture 1.

 記憶制御部1215は、図10のテーブルを参照し、ステップS4で判定されたジェスチャーに割り当てられた入力情報を、記憶部125に記憶させる(ステップS5)。ここでは、ジェスチャー1に割り当てられた入力情報(「次の画面に切り替える命令」)が、記憶部125に記憶される。 The storage control unit 1215 refers to the table in FIG. 10 and stores the input information assigned to the gesture determined in step S4 in the storage unit 125 (step S5). Here, the input information (“command to switch to the next screen”) assigned to gesture 1 is stored in storage unit 125.

 表示制御部104DRは、図15に示す画面10-1に「次の画面に切り替える命令が入力されました」を示す文字画像(不図示)を含める。ユーザは、この文字画像を見て、意図したジェスチャーをしたことが分かる。ユーザは、この受付モードの期間において、ジェスチャーをせずに、受付モードの終了を待つ。 The display control unit 104DR includes a character image (not shown) indicating “a command to switch to the next screen has been input” on the screen 10-1 shown in FIG. 15. By looking at this character image, the user can confirm that the intended gesture was made. During this reception-mode period, the user makes no further gesture and waits for the reception mode to end.

 モード制御部1214は、受付モードが開始してから経過した時間の長さが5秒(予め定められた値)に到達したとき(ステップS6でYes)、受付モードを終了させる(図13のステップS8、図14の時刻t2)。表示制御部104DRは、画像表示部104Bに表示している画面10-1(図15)に含まれる画像11の色を赤にし、画像12を、0秒を示す画像にする。 The mode control unit 1214 ends the reception mode when the length of time that has elapsed since the reception mode started reaches 5 seconds (a predetermined value) (Yes in step S6; step S8 in FIG. 13, time t2 in FIG. 14). The display control unit 104DR changes the color of the image 11 included in the screen 10-1 (FIG. 15) displayed on the image display unit 104B to red, and changes the image 12 to an image indicating 0 seconds.

 処理部1213は、記憶部125に記憶されている、一つ以上のジェスチャーのそれぞれの入力情報(ここでは、ジェスチャー2の入力情報、ジェスチャー1の入力情報)の中で、ジェスチャー1の入力情報を用いて、所定の処理をする(図13のステップS9)。すなわち、受付モードの期間に検出部128によって検出された一つ以上のジェスチャーの中で、最後に検出されたジェスチャーに予め割り当てられた入力情報を用いて、所定の処理をする。最後に検出されたジェスチャーがジェスチャー1なので、所定の処理として、処理部1213は、次の画面に切り替える命令を表示制御部104DRに送る。これにより、表示制御部104DRは、画像表示部104Bに表示される画面10-1を次の画面に切り替える。 Among the pieces of input information of the one or more gestures stored in the storage unit 125 (here, the input information of gesture 2 and the input information of gesture 1), the processing unit 1213 performs a predetermined process using the input information of gesture 1 (step S9 in FIG. 13). In other words, among the one or more gestures detected by the detection unit 128 during the reception-mode period, the predetermined process is performed using the input information pre-assigned to the last detected gesture. Since the last detected gesture is gesture 1, the processing unit 1213 sends, as the predetermined process, a command to switch to the next screen to the display control unit 104DR. The display control unit 104DR thereby switches the screen 10-1 displayed on the image display unit 104B to the next screen.
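 The "last gesture wins" rule of the first aspect can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the gesture names and commands are hypothetical stand-ins for the assignments of FIG. 10.

```python
# Hypothetical input-information table in the spirit of FIG. 10.
INPUT_INFO = {
    "gesture 1": "switch to next screen",
    "gesture 2": "switch to previous screen",
}

def process_reception_period(detected_gestures):
    # During one reception-mode period, every detected gesture overwrites
    # the stored input information (steps S2-S5, repeated via the step S7
    # loop); only the value left at the end drives step S9.
    stored = None
    for g in detected_gestures:
        stored = INPUT_INFO[g]   # storage unit 125 keeps only the latest
    return stored                # step S9 uses the last stored value

# The user first makes gesture 2 by mistake, then redoes gesture 1:
print(process_reception_period(["gesture 2", "gesture 1"]))
# -> switch to next screen
```

 Because only the final value is used, any number of mistaken gestures earlier in the period are harmless, which is the effect the text attributes to the first aspect.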

 所定の処理(ステップS9)の終了後、ユーザがジェスチャーを開始することにより、ジェスチャー処理部1212が、焦電素子RA~RDの出力のレベルが全てしきい値thを超えたと判断したとき、モード制御部1214は受付モードを開始する(ステップS1、時刻t3)。ここでは、最初のジェスチャーが、ユーザが意図するジェスチャーとする(例えば、ジェスチャー1)。よって、この受付モードの期間では、次のジェスチャーはされない。受付モードが終了することにより(ステップS8、時刻t4)、処理部1213は、所定の処理をする(ステップS9)。最後に検出されたジェスチャーがジェスチャー1なので、所定の処理として、処理部1213は、次の画面に切り替える命令を表示制御部104DRに送る。これにより、表示制御部104DRは、画像表示部104Bに表示される画面を次の画面に切り替える。 After the predetermined process (step S9) ends, the user starts a gesture; when the gesture processing unit 1212 determines that the output levels of the pyroelectric elements RA to RD have all exceeded the threshold th, the mode control unit 1214 starts the reception mode (step S1, time t3). Here, the first gesture is assumed to be the gesture the user intends (for example, gesture 1), so no further gesture is made during this reception-mode period. When the reception mode ends (step S8, time t4), the processing unit 1213 performs the predetermined process (step S9). Since the last detected gesture is gesture 1, the processing unit 1213 sends, as the predetermined process, a command to switch to the next screen to the display control unit 104DR. The display control unit 104DR thereby switches the screen displayed on the image display unit 104B to the next screen.

 第1の態様の主な効果を説明する。図5及び図14を参照して、処理部1213は、時刻t1~時刻t2で規定される受付モードの期間、時刻t3~時刻t4で規定される受付モードの期間のそれぞれにおいて、ジェスチャー1に予め割り当てられた入力情報(画面を次の画面に切り替える命令)を表示制御部104DRに送る。このように、処理部1213は、受付モードの期間に、検出された一つ以上のジェスチャーの中で、最後に検出されたジェスチャーに予め割り当てられた入力情報を用いて、所定の処理をする。このため、最後のジェスチャーが、ユーザが意図するジェスチャーであればよいので、受付モードの期間であれば、ユーザは何回もジェスチャー入力のやり直しができる。従って、第1の態様によれば、ジェスチャー入力を受け付ける受付モードの期間に、間違ったジェスチャー入力がされても、次の受付モードの期間を待つことなく、ジェスチャー入力が可能となる。この効果は、第2の態様及び第3の態様でも生じる。 The main effect of the first aspect will be described. Referring to FIGS. 5 and 14, the processing unit 1213 sends the input information pre-assigned to gesture 1 (the command to switch the screen to the next screen) to the display control unit 104DR in each of the reception-mode period defined by times t1 to t2 and the reception-mode period defined by times t3 to t4. In this way, the processing unit 1213 performs the predetermined process using the input information pre-assigned to the last of the one or more gestures detected during the reception-mode period. Only the last gesture therefore needs to be the gesture the user intends, so the user can redo the gesture input any number of times during the reception-mode period. According to the first aspect, even if an incorrect gesture input is made during the reception mode for accepting gesture input, the gesture input can thus be redone without waiting for the next reception-mode period. This effect also arises in the second and third aspects.

 図5及び図15を参照して、表示制御部104DRは、受付モード中に、受付時間を示す画像12を画像表示部104Bに表示させる。画像12は、受付モードの残り時間を示す情報である。従って、第1の態様によれば、受付モードの期間がいつ終了するかを、ユーザに認識させることができる。この効果は、第2の態様及び第3の態様でも生じる。 Referring to FIGS. 5 and 15, display control unit 104DR causes image display unit 104B to display image 12 indicating the reception time during the reception mode. The image 12 is information indicating the remaining time in the reception mode. Therefore, according to the first aspect, the user can recognize when the period of the reception mode ends. This effect also occurs in the second and third aspects.

 図14を参照して、モード制御部1214は、受付モードでない期間において、焦電素子RA~RDの出力のレベルが、全てしきい値thを超えたとき(時刻t1、時刻t3)、受付モードを開始する。焦電素子RA~RDのうち、一部の焦電素子の出力のレベルがしきい値を超えたとき、受付モードが開始される場合を考える。この場合、受付モードの開始後、焦電素子RA~RDの出力のレベルが、全てしきい値thを超える前にジェスチャーが止められたとき、ジェスチャーの判定はされないが、受付モードが継続される。このため、再びジェスチャーをしたとき、ジェスチャーが終了する前に受付モードが終了するおそれがある。第1の態様によれば、受付モードでない期間において、焦電素子RA~RDの出力の値が、全てしきい値thを超えたとき、受付モードが開始するので、このような不都合が生じない。この効果は、第3の態様でも生じる。 Referring to FIG. 14, the mode control unit 1214 starts the reception mode when, during a period other than the reception mode, the output levels of the pyroelectric elements RA to RD all exceed the threshold th (time t1, time t3). Consider instead a case where the reception mode is started when the output levels of only some of the pyroelectric elements RA to RD exceed the threshold. In that case, if the gesture is stopped after the reception mode starts but before the output levels of the pyroelectric elements RA to RD all exceed the threshold th, no gesture is determined, yet the reception mode continues. When the gesture is made again, the reception mode may then end before the gesture finishes. According to the first aspect, the reception mode starts only when, outside the reception mode, the output values of the pyroelectric elements RA to RD all exceed the threshold th, so this inconvenience does not arise. This effect also arises in the third aspect.

 図5を参照して、図13のステップS5で説明したように、表示制御部104DRは、検出部128によってジェスチャーが検知される毎に、検知されたジェスチャーに予め割り当てられた入力情報を示す文字画像(例えば、「前の画面に切り替える命令が入力されました」を示す文字画像)を含む画面を、画像表示部104Bに表示させる。これにより、ユーザは、自身がしたジェスチャーが間違っているか否かを判断することができる。 Referring to FIG. 5, as described in step S5 of FIG. 13, every time a gesture is detected by the detection unit 128, the display control unit 104DR causes the image display unit 104B to display a screen including a character image indicating the input information pre-assigned to the detected gesture (for example, a character image indicating “a command to switch to the previous screen has been input”). This allows the user to determine whether the gesture just made was wrong.

 このように、表示制御部104DR及び画像表示部104Bは、報知部として機能する。報知部は、検出部128によってジェスチャーが検知される毎に、検知されたジェスチャーに予め割り当てられた入力情報を報知する。 In this way, the display control unit 104DR and the image display unit 104B function as a notification unit. Every time a gesture is detected by the detection unit 128, the notification unit notifies the user of the input information pre-assigned to the detected gesture.

 報知部として、画面を用いる態様1と、音声を用いる態様2と、両方を用いる態様3とがある。態様1は、表示部(画像表示部104B)と、検知されたジェスチャーに予め割り当てられた入力情報を示す画面を、表示部(画像表示部104B)に表示させる表示制御部104DRと、を備える。態様2は、検知されたジェスチャーに予め割り当てられた入力情報を音声信号に変換する音声処理部と、音声信号を増幅するアンプと、増幅された音声信号を音声として出力するスピーカと、を備える。音声処理部は、CPU、RAM及びROM等のハードウェア、並びに、入力情報を音声信号に変換するプログラム等の組み合わせにより実現される。態様3は、態様1と態様2とを備える。 The notification unit may take aspect 1, which uses a screen, aspect 2, which uses sound, or aspect 3, which uses both. Aspect 1 includes a display unit (image display unit 104B) and a display control unit 104DR that causes the display unit (image display unit 104B) to display a screen showing the input information pre-assigned to the detected gesture. Aspect 2 includes an audio processing unit that converts the input information pre-assigned to the detected gesture into an audio signal, an amplifier that amplifies the audio signal, and a speaker that outputs the amplified audio signal as sound. The audio processing unit is realized by a combination of hardware such as a CPU, RAM, and ROM and a program that converts input information into an audio signal. Aspect 3 combines aspects 1 and 2.

 第2の態様について、第1の態様と相違する点を主に説明する。第2の態様において、図5に示すモード制御部1214は、受付モード中に、検出部128によってジェスチャーが検出されている検出状態からジェスチャーが検出されていない無検出状態に変化したとき、無検出状態の時間の計測を開始し、無検出状態の時間の長さが、予め定められた値に到達したとき、受付モードの期間を終了させる。予め定められた値は、2秒を例にする。すなわち、無検出状態の期間が2秒継続したとき、受付モードが終了する。第2の態様は、受付モードの期間が固定されていない。ユーザは、操作部122を操作して、予め定められた値をモード制御部1214に予め設定する。これにより、受付モードを終了させる条件となる無検出状態の時間の長さをユーザが決定することができる。 The second aspect will be described mainly in terms of its differences from the first aspect. In the second aspect, when, during the reception mode, the state changes from the detection state in which the detection unit 128 detects a gesture to the non-detection state in which no gesture is detected, the mode control unit 1214 shown in FIG. 5 starts measuring the duration of the non-detection state and ends the reception-mode period when that duration reaches a predetermined value. The predetermined value is, for example, 2 seconds; that is, the reception mode ends when the non-detection state continues for 2 seconds. In the second aspect, the reception-mode period is not fixed. The user operates the operation unit 122 to set the predetermined value in the mode control unit 1214 in advance, so the user can decide the duration of the non-detection state that is the condition for ending the reception mode.

 図16は、第2の態様において、ジェスチャー入力がされた場合の動作を説明するフローチャートである。図17は、画面を次の画面に切り替えるために、ジェスチャー入力がされた場合において、第2の態様に備えられる焦電素子RA~RDの出力のレベルの変化を示す波形図である。図16に示すフローチャートが、図13に示すフローチャートと異なる点は、以下の二つである。図16のステップS1において、図15に示す画面10-1の替わりに、図18に示す画面10-2が、画像表示部104Bに表示される。図13のステップS6の替わりに、図16のステップS10が実行される。 FIG. 16 is a flowchart explaining the operation in the second aspect when a gesture input is made. FIG. 17 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD provided in the second aspect when a gesture input is made to switch the screen to the next screen. The flowchart shown in FIG. 16 differs from the flowchart shown in FIG. 13 in the following two points. In step S1 of FIG. 16, the screen 10-2 shown in FIG. 18 is displayed on the image display unit 104B instead of the screen 10-1 shown in FIG. 15. Step S10 of FIG. 16 is executed instead of step S6 of FIG. 13.

 図4、図5、図16及び図17を参照して、ユーザは、近接センサ105の前方に手を位置させ、ジェスチャー1をするが、上述した原因で間違ってジェスチャー2をしたとする。焦電素子RA~RDの出力のレベルが全てしきい値thを超えたとき、モード制御部1214は、ジェスチャー入力を受け付ける受付モードを開始する(図16のステップS1、図17の時刻t1)。このとき、表示制御部104DRは、画像表示部104Bに表示している画面に、受付モードか否かを示す画像と無検出状態に関する時間を示す画像とを含める。図18は、これらの画像を含む画面10-2の一例を説明する説明図である。画像11は、図15に示す画面10-1で説明したように、受付モードか否かを示す。ここでは、受付モードなので、画像11の色が青である。 Referring to FIGS. 4, 5, 16, and 17, suppose the user positions a hand in front of the proximity sensor 105 and intends gesture 1 but, for the reasons described above, mistakenly makes gesture 2. When the output levels of the pyroelectric elements RA to RD all exceed the threshold th, the mode control unit 1214 starts the reception mode for accepting gesture input (step S1 in FIG. 16, time t1 in FIG. 17). At this time, the display control unit 104DR includes, on the screen displayed on the image display unit 104B, an image indicating whether the reception mode is active and an image indicating the time related to the non-detection state. FIG. 18 is an explanatory diagram showing an example of the screen 10-2 including these images. As described for the screen 10-1 shown in FIG. 15, the image 11 indicates whether the reception mode is active; here the reception mode is active, so the image 11 is blue.

 画像13は、無検出状態に関する時間を示す。これは、受付モードの残り時間を示す情報となる。詳しく説明すると、焦電素子RA~RDの出力のレベルのうち、少なくとも一つがしきい値thを超えているとき、検出部128によってジェスチャーが検出されている検出状態とする。焦電素子RA~RDの出力のレベルが全てしきい値th以下のとき、検出部128によってジェスチャーが検出されていない無検出状態とする。モード制御部1214は、カウントダウンタイマーの機能を有しており、検出状態から無検出状態に変化したとき(図17の時刻t10、時刻t11、時刻t12)、カウントダウンタイマーを2秒にセットして、タイマーをスタートさせる。表示制御部104DRは、画像13を、2秒を示す画像にする。カウントダウンタイマーが示す時間が、1秒、0秒になると、表示制御部104DRは、画像13を、1秒を示す画像、0秒を示す画像にする。カウントダウンタイマーが示す時間が、0秒に到達したとき、モード制御部1214は、受付モードを終了させる。 The image 13 indicates the time related to the non-detection state, which serves as information indicating the remaining time of the reception mode. More specifically, the detection state, in which the detection unit 128 is detecting a gesture, is the state in which at least one of the output levels of the pyroelectric elements RA to RD exceeds the threshold th; the non-detection state, in which no gesture is detected, is the state in which the output levels of the pyroelectric elements RA to RD are all at or below the threshold th. The mode control unit 1214 has a countdown-timer function: when the detection state changes to the non-detection state (times t10, t11, and t12 in FIG. 17), it sets the countdown timer to 2 seconds and starts it. The display control unit 104DR changes the image 13 to an image indicating 2 seconds, then, as the countdown timer reaches 1 second and 0 seconds, to images indicating 1 second and 0 seconds. When the time indicated by the countdown timer reaches 0 seconds, the mode control unit 1214 ends the reception mode.

 第2の態様は、ステップS5の次に、ステップS10を実行する。詳しく説明すると、一つのジェスチャーが終了することによって、手HDが、検出領域SA(図7A、図7B、図8)の外に出たとき、検出状態から無検出状態に変化する。この変化は、ステップS3の期間に発生する。モード制御部1214は、検出状態から無検出状態に変化したとき、上記カウントダウンタイマーを2秒にセットして、タイマーをスタートさせる。モード制御部1214は、タイマーが示す時間が0秒に到達したか否かを判断する。すなわち、モード制御部1214は、無検出状態の時間の長さが、予め定められた値(2秒)に到達したか否かを判断する(ステップS10)。 In the second aspect, step S10 is executed after step S5. More specifically, when a gesture ends and the hand HD moves out of the detection area SA (FIGS. 7A, 7B, and 8), the detection state changes to the non-detection state. This change occurs during the period of step S3. When the detection state changes to the non-detection state, the mode control unit 1214 sets the above countdown timer to 2 seconds and starts it. The mode control unit 1214 then determines whether the time indicated by the timer has reached 0 seconds, that is, whether the duration of the non-detection state has reached the predetermined value (2 seconds) (step S10).
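 The second aspect's termination rule can be sketched as follows. This is an illustrative sketch under assumptions, not the patent's implementation: the timestamps are hypothetical, and the sensor state is modeled as pre-sampled (time, detecting) pairs rather than live pyroelectric outputs.

```python
NO_DETECT_LIMIT = 2.0  # seconds; user-settable via the operation unit 122

def reception_mode_end_time(samples):
    """samples: time-ordered (time, detecting) pairs, where detecting=True
    means at least one element's output exceeds th. Returns the time at
    which the reception mode ends, or None if it is still running."""
    no_detect_since = None
    for t, detecting in samples:
        if detecting:
            no_detect_since = None            # countdown is discarded
        elif no_detect_since is None:
            no_detect_since = t               # countdown (re)starts
        elif t - no_detect_since >= NO_DETECT_LIMIT:
            return t                          # step S10 Yes: mode ends
    return None

# Two gestures separated by a 1 s gap (shorter than the limit, so the mode
# survives the gap), then the hand stays away:
samples = [(0.0, True), (1.0, False), (2.0, True),
           (3.0, False), (4.0, False), (5.0, False)]
print(reception_mode_end_time(samples))  # -> 5.0
```

 The key point the sketch shows is that each return to the detection state discards the countdown, so the mode never ends mid-gesture.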

 モード制御部1214が、無検出状態の時間の長さが、予め定められた値に到達していないと判断したとき(ステップS10でNo)、モード制御部1214は、ステップS7の処理をする。 When the mode control unit 1214 determines that the length of time of the non-detection state has not reached a predetermined value (No in step S10), the mode control unit 1214 performs the process of step S7.

 モード制御部1214が、無検出状態の時間の長さが、予め定められた値に到達していると判断したとき(ステップS10でYes、図17の時刻t2、時刻t4)、モード制御部1214は、受付モードを終了させる(ステップS8)。 When the mode control unit 1214 determines that the length of time of the non-detection state has reached a predetermined value (Yes in step S10, time t2, time t4 in FIG. 17), the mode control unit 1214 Terminates the acceptance mode (step S8).

 以上説明したように、第2の態様によれば、無検出状態の時間の長さが、2秒(予め定められた値)に到達しなければ、受付モードが終了しない。従って、第2の態様によれば、ユーザがジェスチャーをしている最中に、受付モードが終了することを防止できる。 As described above, according to the second aspect, the reception mode does not end unless the length of time of the non-detection state reaches 2 seconds (predetermined value). Therefore, according to the second aspect, it is possible to prevent the reception mode from ending while the user is making a gesture.

 第3の態様を説明する。第3の態様は、第1の態様と第2の態様とを組み合わせた態様である。図5に示すモード制御部1214には、受付モードの期間の上限値となる第1の値が設定される。モード制御部1214は、受付モードの期間が上限値に到達していなくても、無検出状態の時間の長さが予め定められた第2の値に到達したとき、受付モードを終了させる。第2の値は第1の値より小さい。第1の値は、5秒を例にする。第2の値は、2秒を例にする。ユーザは、操作部122を操作して、第1の値及び第2の値をモード制御部1214に予め設定する。これにより、受付モードを終了させる条件となる第1の値及び第2の値をユーザが決定することができる。 The third aspect will be described. The third aspect combines the first and second aspects. In the mode control unit 1214 shown in FIG. 5, a first value is set as the upper limit of the reception-mode period. Even if the reception-mode period has not reached this upper limit, the mode control unit 1214 ends the reception mode when the duration of the non-detection state reaches a predetermined second value, which is smaller than the first value. As examples, the first value is 5 seconds and the second value is 2 seconds. The user operates the operation unit 122 to set the first and second values in the mode control unit 1214 in advance, so the user can decide the first and second values that are the conditions for ending the reception mode.

 図19は、第3の態様において、ジェスチャー入力がされた場合の動作を説明するフローチャートである。図20は、画面を次の画面に切り替えるために、ジェスチャー入力がされた場合において、第3の態様に備えられる焦電素子RA~RDの出力のレベルの変化を示す波形図である。図19に示すフローチャートが、図13に示すフローチャートと異なる点は、以下の二つである。図19のステップS1において、図15に示す画面10-1の替わりに、図21に示す画面10-3が、画像表示部104Bに表示される。ステップS6がNoの場合、ステップS10が実行される。ステップS10は、図16のステップS10と同じである。 FIG. 19 is a flowchart explaining the operation in the third aspect when a gesture input is made. FIG. 20 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD provided in the third aspect when a gesture input is made to switch the screen to the next screen. The flowchart shown in FIG. 19 differs from the flowchart shown in FIG. 13 in the following two points. In step S1 of FIG. 19, the screen 10-3 shown in FIG. 21 is displayed on the image display unit 104B instead of the screen 10-1 shown in FIG. 15. When step S6 is No, step S10 is executed; step S10 is the same as step S10 of FIG. 16.

 図4、図5、図19及び図20を参照して、ユーザは、近接センサ105の前方に手を位置させ、ジェスチャー1をするが、上述した原因で間違ってジェスチャー2をしたとする。焦電素子RA~RDの出力のレベルが全てしきい値thを超えたとき、モード制御部1214は、ジェスチャー入力を受け付ける受付モードを開始する(図19のステップS1、図20の時刻t1)。このとき、表示制御部104DRは、画像表示部104Bに表示している画面に、受付モードか否かを示す画像、受付時間を示す画像、及び、無検出状態に関する時間を示す画像を含める。図21は、これらの画像を含む画面10-3の一例を説明する説明図である。画像11は、図15に示す画面10-1で説明したように、受付モードか否かを示す。ここでは、受付モードなので、画像11の色が青である。 Referring to FIGS. 4, 5, 19, and 20, suppose the user positions a hand in front of the proximity sensor 105 and intends gesture 1 but, for the reasons described above, mistakenly makes gesture 2. When the output levels of the pyroelectric elements RA to RD all exceed the threshold th, the mode control unit 1214 starts the reception mode for accepting gesture input (step S1 in FIG. 19, time t1 in FIG. 20). At this time, the display control unit 104DR includes, on the screen displayed on the image display unit 104B, an image indicating whether the reception mode is active, an image indicating the reception time, and an image indicating the time related to the non-detection state. FIG. 21 is an explanatory diagram showing an example of the screen 10-3 including these images. As described for the screen 10-1 shown in FIG. 15, the image 11 indicates whether the reception mode is active; here the reception mode is active, so the image 11 is blue.

 画像12は、図15に示す画面10-1で説明したように、受付時間を示す。これは、受付モードの残り時間を示す情報となる。ここでは、画像12は、5秒を示す画像である。 The image 12 shows the reception time as described in the screen 10-1 shown in FIG. This is information indicating the remaining time in the reception mode. Here, the image 12 is an image showing 5 seconds.

 画像13は、図18に示す画面10-2で説明したように、無検出状態に関する時間を示す。これは、受付モードの残り時間を示す情報となる。受付モード中に無検出状態が開始したとき(図20の時刻t10、時刻t11、時刻t12)、表示制御部104DRは、画像13を、2秒を示す画像にする。ここでは、これに該当しないので、画像13で示される時間の箇所は、空欄にされている。 The image 13 shows the time relating to the non-detection state as described in the screen 10-2 shown in FIG. This is information indicating the remaining time in the reception mode. When the non-detection state starts during the reception mode (time t10, time t11, and time t12 in FIG. 20), the display control unit 104DR changes the image 13 to an image indicating 2 seconds. Here, since this is not the case, the time portion indicated by the image 13 is left blank.

 第3の態様において、モード制御部1214は、受付モードが開始してから経過した時間の長さが、予め定められた値(第1の値)に到達したか否かを判断する(図19のステップS6)。経過時間の長さが、予め定められた値(第1の値)に到達していない場合(ステップS6でNo)、モード制御部1214は、無検出状態の時間の長さが、予め定められた値(第2の値)に到達したか否かを判断する(ステップS10)。 In the third aspect, the mode control unit 1214 determines whether the length of time that has elapsed since the reception mode started has reached the predetermined value (the first value) (step S6 in FIG. 19). When the elapsed time has not reached the predetermined value (the first value) (No in step S6), the mode control unit 1214 determines whether the duration of the non-detection state has reached the predetermined value (the second value) (step S10).

 無検出状態の時間の長さが、予め定められた値(第2の値)に到達していない場合(ステップS10でNo)、モード制御部1214は、焦電素子RA~RDの出力のレベルが全てしきい値thを超えたか否かを判断する(図19のステップS7)。 When the duration of the non-detection state has not reached the predetermined value (the second value) (No in step S10), the mode control unit 1214 determines whether the output levels of the pyroelectric elements RA to RD have all exceeded the threshold th (step S7 in FIG. 19).

 無検出状態の時間の長さが、予め定められた値(第2の値)に到達している場合(ステップS10でYes)、モード制御部1214は、受付モードを終了させる(ステップS8)。 When the length of time of the non-detection state has reached a predetermined value (second value) (Yes in step S10), the mode control unit 1214 ends the reception mode (step S8).

 第3の態様の主な効果を説明する。図5及び図20を参照して、ジェスチャー入力に余裕をユーザに持たせるために、第1の値は、比較的大きく設定される(例えば、5秒)。処理部1213は、受付モードの終了後、所定の処理をする(図19のステップS9)。最初のジェスチャー入力が正しい場合、受付モードが開始してから5秒後に受付モードが終了するとすれば、最初のジェスチャーが終了してから、所定の処理がされるまでに比較的大きな待ち時間が発生する(例えば、4秒)。これは、ユーザにとって無駄な時間である。そこで、モード制御部1214は、無検出状態の時間の長さが第2の値(例えば、2秒)に到達すれば、受付モードが開始してから5秒に到達する前でも受付モードを終了させる(時刻t4)。 The main effect of the third aspect will be described. Referring to FIGS. 5 and 20, the first value is set relatively large (for example, 5 seconds) to give the user leeway for gesture input. The processing unit 1213 performs the predetermined process after the reception mode ends (step S9 in FIG. 19). If the first gesture input is correct and the reception mode were to end only 5 seconds after it starts, a relatively long wait (for example, 4 seconds) would occur between the end of the first gesture and the predetermined process, which is wasted time for the user. The mode control unit 1214 therefore ends the reception mode once the duration of the non-detection state reaches the second value (for example, 2 seconds), even before 5 seconds have elapsed since the reception mode started (time t4).
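 The interaction of the two timeouts in the third aspect can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function and the timing values are hypothetical, and it assumes no further gesture follows the last one.

```python
FIRST_VALUE = 5.0   # hard upper limit on the reception-mode period (step S6)
SECOND_VALUE = 2.0  # continuous non-detection limit (step S10)

def reception_mode_end(last_gesture_end):
    """Time (measured from mode start) at which the reception mode ends,
    assuming no further gesture follows the one ending at last_gesture_end:
    the mode ends at whichever comes first, the hard limit or the point
    where 2 s of continuous non-detection have accumulated."""
    return min(FIRST_VALUE, last_gesture_end + SECOND_VALUE)

# A single correct gesture ending 1 s in: the mode ends at 3 s, not 5 s,
# avoiding the ~4 s wait described in the text.
print(reception_mode_end(1.0))  # -> 3.0
# A gesture ending 4 s in: the hard 5 s limit still caps the period.
print(reception_mode_end(4.0))  # -> 5.0
```

 The first case shows the second aspect's timer cutting the wait short; the second case shows the first aspect's upper limit still bounding the period.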

(実施形態の纏め)
 実施形態の第1の局面に係る表示装置は、表示部と、前記表示部と異なる位置に検出領域を有し、予め定められた二以上のジェスチャーを区別して検出できる検出部と、ジェスチャー入力を受け付ける受付モードを所定期間実行するモード制御部と、前記受付モードが終了した後、前記受付モードの期間に前記検出部によって検出された一つ以上の前記ジェスチャーの中で、最後に検出された前記ジェスチャーに予め割り当てられた入力情報を用いて、所定の処理をする処理部と、を備える。
(Summary of embodiment)
The display device according to the first aspect of the embodiment includes: a display unit; a detection unit that has a detection region at a position different from the display unit and can distinguish and detect two or more predetermined gestures; a mode control unit that executes, for a predetermined period, a reception mode for accepting gesture input; and a processing unit that, after the reception mode ends, performs a predetermined process using the input information pre-assigned to the gesture detected last among the one or more gestures detected by the detection unit during the reception-mode period.

 表示装置は、ジェスチャー入力を受け付ける受付モードの期間に、検出された一つ以上のジェスチャーの中で、最後に検出されたジェスチャーに予め割り当てられた入力情報を用いて、所定の処理をする(例えば、表示部に表示された画面を次の画面に切り替える命令をする)。このため、最後のジェスチャーが、ユーザが意図するジェスチャーであればよいので、受付モードの期間であれば、ユーザは何回もジェスチャー入力のやり直しができる。従って、実施形態の第1の局面に係る表示装置によれば、ジェスチャー入力を受け付ける受付モードの期間に、間違ったジェスチャー入力がされても、次の受付モードの期間を待つことなく、ジェスチャー入力が可能となる。 During the reception-mode period in which gesture input is accepted, the display device performs a predetermined process using the input information pre-assigned to the last of the one or more detected gestures (for example, issuing a command to switch the screen displayed on the display unit to the next screen). Only the last gesture therefore needs to be the gesture the user intends, so the user can redo the gesture input any number of times during the reception-mode period. According to the display device of the first aspect of the embodiment, even if an incorrect gesture input is made during the reception mode for accepting gesture input, the gesture input can thus be redone without waiting for the next reception-mode period.

 表示装置は、例えば、ウェアラブル端末である。ウェアラブル端末とは、体の一部(例えば、頭、腕)に装着できる端末装置である。 The display device is, for example, a wearable terminal. A wearable terminal is a terminal device that can be worn on a part of a body (for example, a head or an arm).

 以下に説明するように、実施形態の第1の局面に係る表示装置には、受付モードの期間が固定されている態様(第1の態様)と、受付モードの期間が固定されていない態様(第2の態様)と、これら二つの態様を組み合わせた態様(第3の態様)と、がある。 As described below, the display device according to the first aspect of the embodiment has an aspect in which the reception-mode period is fixed (the first aspect), an aspect in which the reception-mode period is not fixed (the second aspect), and an aspect combining these two (the third aspect).

 上記構成において、前記モード制御部は、前記受付モードが開始してから経過した時間の長さが、予め定められた値に到達したとき、前記受付モードを終了させる。 In the above configuration, the mode control unit ends the reception mode when the length of time that has elapsed since the reception mode started reaches a predetermined value.

 これは、受付モードの期間が固定されている態様である(第1の態様)。予め定められた値が、例えば、5秒の場合、受付モードの期間は、5秒となる。 This is a mode in which the period of the reception mode is fixed (first mode). If the predetermined value is, for example, 5 seconds, the period of the reception mode is 5 seconds.

 上記構成において、前記予め定められた値を設定する操作ができる操作部をさらに備える。 The above configuration further includes an operation unit capable of performing an operation of setting the predetermined value.

 この構成によれば、ユーザが予め定められた値を設定できるので、受付モードの期間の長さをユーザが決定することができる。 According to this configuration, since the user can set a predetermined value, the user can determine the length of the period of the reception mode.

 上記構成において、前記モード制御部は、前記受付モード中に、前記検出部によって前記ジェスチャーが検出されている検出状態から前記ジェスチャーが検出されていない無検出状態に変化したとき、前記無検出状態の時間の計測を開始し、前記無検出状態の時間の長さが、予め定められた値に到達したとき、前記受付モードを終了させる。 In the above configuration, when, during the reception mode, the state changes from the detection state in which the gesture is detected by the detection unit to the non-detection state in which the gesture is not detected, the mode control unit starts measuring the duration of the non-detection state and ends the reception mode when that duration reaches a predetermined value.

 これは、受付モードの期間が固定されていない態様である(第2の態様)。第2の態様では、無検出状態の時間の長さが、予め定められた値(例えば、2秒)に到達しなければ、受付モードが終了しない。従って、第2の態様によれば、ユーザがジェスチャーをしている最中に、受付モードが終了することを防止できる。 This is a mode in which the period of the reception mode is not fixed (second mode). In the second aspect, the reception mode does not end unless the length of time of the non-detection state reaches a predetermined value (for example, 2 seconds). Therefore, according to the second aspect, it is possible to prevent the reception mode from ending while the user is making a gesture.

 上記構成において、前記予め定められた値を設定する操作ができる操作部をさらに備える。 The above configuration further includes an operation unit capable of performing an operation of setting the predetermined value.

 この構成によれば、ユーザが予め定められた値を設定できる。従って、受付モードを終了させる条件となる無検出状態の時間の長さをユーザが決定することができる。 According to this configuration, the user can set a predetermined value. Therefore, the user can determine the length of time of the non-detection state that is a condition for terminating the reception mode.

 上記構成において、前記モード制御部は、前記受付モードが開始してから経過した時間の長さが、予め定められた第1の値に到達したとき、前記受付モードを終了させ、前記モード制御部は、前記第1の値より小さい予め定められた値を第2の値とし、前記受付モード中に、前記検出部によって前記ジェスチャーが検出されている検出状態から前記ジェスチャーが検出されていない無検出状態に変化したとき、前記無検出状態の時間の計測を開始し、前記無検出状態の時間の長さが、前記第2の値に到達したとき、前記受付モードが開始してから経過した時間の長さが前記第1の値に到達する前であっても前記受付モードを終了させる。 In the above configuration, the mode control unit ends the reception mode when the length of time that has elapsed since the reception mode started reaches a predetermined first value. The mode control unit also uses a predetermined second value smaller than the first value: when, during the reception mode, the state changes from the detection state in which the gesture is detected by the detection unit to the non-detection state in which the gesture is not detected, the mode control unit starts measuring the duration of the non-detection state, and when that duration reaches the second value, it ends the reception mode even before the time elapsed since the reception mode started reaches the first value.

 これは、第1の態様と第2の態様とを組み合わせた態様である(第3の態様)。ジェスチャー入力に余裕をユーザに持たせるために、第1の値は、比較的大きく設定される(例えば、5秒)。処理部は、受付モードの終了後、所定の処理をする。最初のジェスチャー入力が正しい場合、受付モードが開始してから5秒後に受付モードが終了するとすれば、最初のジェスチャーが終了してから、所定の処理がされるまでに比較的大きな待ち時間が発生する(例えば、4秒)。これは、ユーザにとって無駄な時間である。そこで、モード制御部は、無検出状態の時間の長さが第2の値(例えば、2秒)に到達すれば、受付モードが開始してから5秒に到達する前でも受付モードを終了させる。 This is an aspect combining the first and second aspects (the third aspect). The first value is set relatively large (for example, 5 seconds) to give the user leeway for gesture input. The processing unit performs the predetermined process after the reception mode ends. If the first gesture input is correct and the reception mode were to end only 5 seconds after it starts, a relatively long wait (for example, 4 seconds) would occur between the end of the first gesture and the predetermined process, which is wasted time for the user. The mode control unit therefore ends the reception mode once the duration of the non-detection state reaches the second value (for example, 2 seconds), even before 5 seconds have elapsed since the reception mode started.

 上記構成において、前記第1の値及び前記第2の値を設定する操作ができる操作部をさらに備える。 The above configuration further includes an operation unit capable of performing an operation of setting the first value and the second value.

 この構成によれば、ユーザが第1の値及び第2の値の設定をできる。従って、受付モードを終了させる条件となる第1の値及び第2の値をユーザが決定することができる。 According to this configuration, the user can set the first value and the second value. Therefore, the user can determine the first value and the second value that are the conditions for ending the acceptance mode.

 上記構成において、前記受付モードの残り時間を示す情報を、前記受付モード中に、前記表示部に表示させる表示制御部をさらに備える。 The above configuration further includes a display control unit that displays information indicating the remaining time in the reception mode on the display unit during the reception mode.

 この構成によれば、受付モードがいつ終了するかを、ユーザに認識させることができる。 This configuration allows the user to recognize when the acceptance mode ends.

 In the above configuration, the detection unit includes a plurality of pyroelectric elements arranged in a two-dimensional matrix, and a gesture processing unit that judges a gesture based on the output of each of the plurality of pyroelectric elements.

 This is an example of the detection unit.

 In the above configuration, the gesture processing unit judges the gesture using the timing at which the output value of each of the plurality of pyroelectric elements exceeds a predetermined threshold, and the mode control unit starts the reception mode when, during a period outside the reception mode, the output values of all of the plurality of pyroelectric elements have exceeded the threshold.

 This configuration is an example of a condition for starting the reception mode. Compared with the aspects that end the reception mode when a predetermined time has elapsed since its start (the first and third aspects), it has the following advantage. Consider a case where the reception mode starts when the output values of only some of the pyroelectric elements exceed the threshold. In that case, if the gesture is stopped after the reception mode starts but before the output values of all of the elements have exceeded the threshold, no gesture is judged, yet the reception mode continues. Consequently, when the user gestures again, the reception mode may end before that gesture finishes. According to this configuration, the reception mode starts only when, outside the reception mode, the output values of all of the pyroelectric elements have exceeded the threshold, so no such inconvenience arises.
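As an illustration only (an assumption-laden sketch, not the application's actual implementation), the all-elements start condition and a timing-based gesture judgment might look like this for a hypothetical 2×2 element matrix:

```python
def should_start_reception(outputs, threshold):
    """Start the reception mode only when the output value of EVERY
    pyroelectric element exceeds the threshold (the condition above),
    avoiding a mode that starts on a partial, aborted gesture."""
    return all(v > threshold for v in outputs)


def judge_swipe(cross_times):
    """Judge a horizontal swipe from the times at which each element's
    output first exceeded the threshold.  cross_times maps
    (row, col) -> crossing time for an assumed 2x2 matrix; a
    left-to-right swipe makes the left column cross before the right."""
    left = min(cross_times[(0, 0)], cross_times[(1, 0)])
    right = min(cross_times[(0, 1)], cross_times[(1, 1)])
    return "left-to-right" if left < right else "right-to-left"
```

For example, crossing times of 0.10 s and 0.12 s in the left column against 0.30 s and 0.32 s in the right column would be judged as a left-to-right swipe.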

 The above configuration further includes a notification unit that, each time the gesture is detected by the detection unit, reports the input information indicating the input pre-assigned to the detected gesture.

 According to this configuration, the user can be notified of the input information each time the user makes a gesture. This allows the user to judge whether the gesture he or she made was wrong.

 A gesture input method according to a second aspect of the embodiment is a method of gesture input to a display device including a display unit and a detection unit that has a detection region at a position different from the display unit and is capable of distinguishing and detecting two or more predetermined gestures, the method including a first step of executing, for a predetermined period, a reception mode that accepts gesture input, and a second step of, after the reception mode ends, performing a predetermined process using input information pre-assigned to the last detected gesture among the one or more gestures detected by the detection unit during the reception mode.

 The gesture input method according to the second aspect of the embodiment defines the display device according to the first aspect of the embodiment from the viewpoint of a method, and has the same operation and effects as that display device.
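The two steps of the method, and in particular the "last detected gesture wins" behaviour that lets a user overwrite a mistaken gesture with a later one, can be sketched as follows (the function and mapping names are assumptions, not terms from the application):

```python
def run_gesture_input(collect_gestures, reception_period, input_map, process):
    """Step 1: run the reception mode for reception_period, collecting
    every gesture detected.  Step 2: after the mode ends, process only
    the input pre-assigned to the LAST detected gesture, so a later
    gesture overrides an earlier, mistaken one."""
    gestures = collect_gestures(reception_period)   # first step
    if gestures:                                    # second step
        process(input_map[gestures[-1]])
```

For example, if the user first swipes left by mistake and then swipes right within the reception period, only the input assigned to the right swipe is processed.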

 Although embodiments of the present invention have been illustrated and described in detail, they are merely illustrations and examples, not limitations. The scope of the present invention should be construed by the language of the appended claims.

 The entire disclosure of Japanese Patent Application No. 2016-225078, filed on November 18, 2016, is incorporated herein by reference.

 According to the present invention, a display device and a gesture input method can be provided.

Claims (13)

 A display device comprising:
 a display unit;
 a detection unit having a detection region at a position different from the display unit and capable of distinguishing and detecting two or more predetermined gestures;
 a mode control unit that executes, for a predetermined period, a reception mode in which gesture input is accepted; and
 a processing unit that, after the reception mode ends, performs a predetermined process using input information pre-assigned to the last detected gesture among the one or more gestures detected by the detection unit during the reception mode.
 The display device according to claim 1, wherein the mode control unit ends the reception mode when the length of time that has elapsed since the reception mode started reaches a predetermined value.

 The display device according to claim 2, further comprising an operation unit capable of performing an operation of setting the predetermined value.

 The display device according to claim 1, wherein, when the state changes during the reception mode from a detection state in which the detection unit detects the gesture to a non-detection state in which the gesture is not detected, the mode control unit starts measuring the time of the non-detection state, and ends the reception mode when the length of time of the non-detection state reaches a predetermined value.

 The display device according to claim 4, further comprising an operation unit capable of performing an operation of setting the predetermined value.

 The display device according to claim 1, wherein the mode control unit ends the reception mode when the length of time that has elapsed since the reception mode started reaches a predetermined first value, and the mode control unit treats a predetermined value smaller than the first value as a second value, starts measuring the time of the non-detection state when the state changes during the reception mode from a detection state in which the detection unit detects the gesture to a non-detection state in which the gesture is not detected, and ends the reception mode when the length of time of the non-detection state reaches the second value, even before the length of time that has elapsed since the reception mode started reaches the first value.
 The display device according to claim 6, further comprising an operation unit capable of performing an operation of setting the first value and the second value.

 The display device according to any one of claims 1 to 7, further comprising a display control unit that causes the display unit to display, during the reception mode, information indicating the remaining time of the reception mode.

 The display device according to any one of claims 1 to 8, wherein the detection unit includes a plurality of pyroelectric elements arranged in a two-dimensional matrix, and a gesture processing unit that judges a gesture based on the output of each of the plurality of pyroelectric elements.

 The display device according to claim 9, wherein the gesture processing unit judges the gesture using the timing at which the output value of each of the plurality of pyroelectric elements exceeds a predetermined threshold, and the mode control unit starts the reception mode when, during a period outside the reception mode, the output values of all of the plurality of pyroelectric elements have exceeded the threshold.
 The display device according to any one of claims 1 to 10, wherein the display device is a wearable terminal.

 The display device according to any one of claims 1 to 11, further comprising a notification unit that, each time the gesture is detected by the detection unit, reports the input information indicating the input pre-assigned to the detected gesture.

 A gesture input method for a display device including a display unit and a detection unit that has a detection region at a position different from the display unit and is capable of distinguishing and detecting two or more predetermined gestures, the method comprising:
 a first step of executing, for a predetermined period, a reception mode in which gesture input is accepted; and
 a second step of, after the reception mode ends, performing a predetermined process using input information pre-assigned to the last detected gesture among the one or more gestures detected by the detection unit during the reception mode.
PCT/JP2017/040404 2016-11-18 2017-11-09 Display apparatus and gesture input method Ceased WO2018092674A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-225078 2016-11-18
JP2016225078 2016-11-18

Publications (1)

Publication Number Publication Date
WO2018092674A1 2018-05-24

Family

ID=62145410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/040404 Ceased WO2018092674A1 (en) 2016-11-18 2017-11-09 Display apparatus and gesture input method

Country Status (1)

Country Link
WO (1) WO2018092674A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1096648A (en) * 1996-07-31 1998-04-14 Aisin Aw Co Ltd Information indicator with touch panel
JP2014078124A (en) * 2012-10-10 2014-05-01 Mitsubishi Electric Corp Gesture input device and display system
WO2016052061A1 (en) * 2014-09-30 2016-04-07 コニカミノルタ株式会社 Head-mounted display
JP2016076061A (en) * 2014-10-06 2016-05-12 三菱電機株式会社 Operation input device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17871228

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17871228

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP