
WO2017221809A1 - Display device and gesture input method - Google Patents


Info

Publication number
WO2017221809A1
WO2017221809A1 (PCT/JP2017/022165)
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
gestures
unit
input information
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2017/022165
Other languages
French (fr)
Japanese (ja)
Inventor
善行 小川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Konica Minolta Inc
Original Assignee
Konica Minolta Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta Inc filed Critical Konica Minolta Inc
Publication of WO2017221809A1 publication Critical patent/WO2017221809A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present invention relates to a display device capable of gesture input.
  • Gesture input is to operate a display device (for example, a terminal device or a game machine) by gesture or hand gesture.
  • gesture input can be performed by touching the screen.
  • Patent Literature 1 discloses a display device that includes a display, a motion reception unit, and a display control unit that controls display information displayed on the display according to the motion received by the motion reception unit.
  • However, the motion reception unit of Patent Literature 1 accepts only a hand motion to the left and a hand motion to the right, so only simple operations are possible. Since input corresponding to a plurality of key operations cannot be performed, personal authentication by password input, for example, cannot be performed.
  • Personal authentication can be performed using a device such as a hardware keyboard, a fingerprint authentication device, or a vein sensor. However, when such a device is used for personal authentication on a portable terminal or a wearable terminal, the device must be carried around, which is inconvenient. When personal authentication is performed using a barcode or a QR code (registered trademark), there is a risk of loss or theft of the barcode or QR code. When personal authentication is performed by voice recognition, there is a risk that someone else may overhear the password.
  • An object of the present invention is to provide a display device that improves gesture input without touching the screen, and a gesture input method applied to the display device.
  • the display device includes a display unit, a detection unit, a mode control unit, a storage unit, a storage control unit, and a processing unit.
  • the detection unit has a detection region at a position different from that of the display unit, and can detect and distinguish two or more predetermined gestures.
  • the mode control unit starts a reception mode for receiving a gesture input, and ends the reception mode after a predetermined period.
  • For a series of gestures performed a plurality of times after the start of the reception mode, the storage control unit stores, in the storage unit, input information indicating the inputs pre-assigned to the gestures, in order starting from the gesture first detected by the detection unit.
  • After the reception mode ends, the processing unit performs a predetermined process using the input information of each of the plurality of gestures constituting the series of gestures stored in the storage unit.
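The reception-mode flow described above (start the mode, buffer the pre-assigned input information in detection order, process the series after the mode ends) can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; all class and method names are invented for illustration:

```python
import time

class GestureReceiver:
    """Sketch of the reception-mode flow: collect input information for
    gestures detected during a fixed reception period, then process the
    accumulated series (e.g. as an entered password)."""

    def __init__(self, input_map, period_s=5.0):
        self.input_map = input_map  # gesture id -> pre-assigned input information
        self.period_s = period_s    # length of the reception mode
        self.buffer = []            # plays the role of the storage unit
        self.deadline = None

    def start_reception(self, now=None):
        # Mode control unit: start the reception mode.
        now = time.monotonic() if now is None else now
        self.deadline = now + self.period_s
        self.buffer.clear()

    def on_gesture(self, gesture_id, now=None):
        # Storage control unit: store the input information pre-assigned
        # to each detected gesture, in the order of detection.
        now = time.monotonic() if now is None else now
        if self.deadline is not None and now < self.deadline:
            self.buffer.append(self.input_map[gesture_id])

    def finish(self):
        # Processing unit: use the stored series after the mode ends.
        entered = "".join(self.buffer)
        self.deadline = None
        return entered

# The password series from the embodiment: gesture 2, gesture 1, gesture 4.
recv = GestureReceiver({1: "1", 2: "2", 4: "4"})
recv.start_reception(now=0.0)
for t, g in [(0.5, 2), (1.0, 1), (1.5, 4)]:
    recv.on_gesture(g, now=t)
assert recv.finish() == "214"
```

Gestures arriving after the deadline are simply ignored, mirroring the mode control unit ending the reception mode after the predetermined period.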
  • FIG. 11 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 1 is made.
  • FIG. 12 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 2 is made.
  • FIG. 13 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 3 is made.
  • FIG. 14 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 4 is made.
  • It is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when a series of gestures (gesture 2, gesture 1, gesture 4) is performed for password input.
  • It is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when a gesture 11 is made to switch to the next screen.
  • It is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when a series of gestures (gesture 11, gesture 12, gesture 11) indicating “bye-bye” is performed.
  • It is a flowchart explaining the operation.
  • FIG. 16 is a screen diagram illustrating a first example of a screen displayed on an image display unit in Modification 5 of the present embodiment.
  • It is a screen diagram showing a second example of the screen displayed on the image display unit.
  • It is a screen diagram showing a third example of the screen displayed on the image display unit.
  • the display device is, for example, a wearable terminal (for example, a head mounted display (HMD) or a wristwatch type terminal) or a smart terminal (for example, a smartphone or a tablet terminal).
  • FIG. 1 is a perspective view showing a structural configuration of the HMD 100 according to the present embodiment.
  • FIG. 2 is a front view showing a structural configuration of the HMD 100 according to the present embodiment.
  • FIG. 3 is a schematic cross-sectional view showing the configuration of the display unit 104 provided in the HMD 100 according to the present embodiment.
  • FIG. 4 is a diagram illustrating a configuration of the proximity sensor 105 provided in the HMD 100 according to the present embodiment.
  • FIG. 5 is a block diagram showing an electrical configuration of the HMD 100 according to the present embodiment.
  • the right side and the left side of the HMD 100 refer to the right side and the left side for the user wearing the HMD 100.
  • The HMD 100 according to the present embodiment includes a frame 101 serving as a head mounting member.
  • the frame 101 includes a front part 101a to which two spectacle lenses 102 are attached, and side parts 101b and 101c extending rearward from both ends of the front part 101a.
  • the two spectacle lenses 102 attached to the frame 101 may or may not have refractive power (optical power, reciprocal of focal length).
  • The cylindrical main body 103 is fixed to the front part 101a of the frame 101, above the right eyeglass lens 102 (it may be on the left side, depending on, for example, the user's dominant eye).
  • the main body 103 is provided with a display unit 104.
  • a display control unit 104DR (FIG. 5) that performs display control of the display unit 104 based on an instruction from a control processing unit 121 described later is disposed in the main body unit 103. Note that a display unit may be disposed in front of both eyes as necessary.
  • the display unit 104 includes an image forming unit 104A and an image display unit 104B.
  • the image forming unit 104A is incorporated in the main body unit 103, and includes a light source 104a, a one-way diffusing plate 104b, a condenser lens 104c, and a display element 104d.
  • the image display unit 104B which is a so-called see-through type display member, is generally plate-shaped and is disposed so as to extend downward from the main body unit 103 and parallel to one eyeglass lens 102 (FIG. 1).
  • The image display unit 104B includes the eyepiece prism 104f, the deflecting prism 104g, and the hologram optical element 104h.
  • the light source 104a has a function of illuminating the display element 104d.
  • The light source 104a is composed of RGB integrated light emitting diodes (LEDs) that emit light in three wavelength bands, the peak wavelengths and half-widths of the light intensity being 462 ± 12 nm (blue light (B light)), 525 ± 17 nm (green light (G light)), and 635 ± 11 nm (red light (R light)).
  • The display element 104d displays an image by modulating the light emitted from the light source 104a in accordance with image data, and is configured by a transmissive liquid crystal display element having pixels serving as light-transmitting regions arranged in a matrix. Note that the display element 104d may be of a reflective type.
  • The eyepiece prism 104f totally reflects the image light from the display element 104d, which enters through its base end face PL1, between the opposed parallel inner side face PL2 and outer side face PL3, and guides it to the user's pupil via the hologram optical element 104h; at the same time, it transmits external light and guides it to the user's pupil. Together with the deflecting prism 104g, it is formed of, for example, an acrylic resin.
  • the eyepiece prism 104f and the deflection prism 104g are joined by an adhesive with the hologram optical element 104h sandwiched between inclined surfaces PL4 and PL5 inclined with respect to the inner surface PL2 and the outer surface PL3.
  • the deflection prism 104g is joined to the eyepiece prism 104f, and becomes a substantially parallel flat plate integrated with the eyepiece prism 104f.
  • The hologram optical element 104h is a volume-phase reflection hologram that diffracts and reflects the image light (light in the wavelength bands corresponding to the three primary colors) emitted from the display element 104d, guides it to the pupil B, and thereby guides an enlarged virtual image of the image displayed on the display element 104d to the user's pupil.
  • The hologram optical element 104h diffracts (reflects) light in, for example, three wavelength ranges, the peak wavelengths of the diffraction efficiency and the wavelength widths at half maximum of the diffraction efficiency being 465 ± 5 nm (B light), 521 ± 5 nm (G light), and 634 ± 5 nm (R light).
  • the peak wavelength of diffraction efficiency is the wavelength at which the diffraction efficiency reaches a peak
  • The wavelength width at half maximum of the diffraction efficiency is the wavelength width over which the diffraction efficiency is at least half of its peak value.
  • In the display unit 104 having such a configuration, light emitted from the light source 104a is diffused by the one-way diffusing plate 104b, condensed by the condenser lens 104c, and made incident on the display element 104d.
  • the light incident on the display element 104d is modulated for each pixel based on the image data input from the display control unit 104DR, and is emitted as image light. Thereby, a color image is displayed on the display element 104d.
  • the image light from the display element 104d enters the eyepiece prism 104f from its base end face PL1, is totally reflected a plurality of times by the inner side face PL2 and the outer side face PL3, and enters the hologram optical element 104h.
  • the light incident on the hologram optical element 104h is reflected there, passes through the inner side surface PL2, and reaches the pupil B.
  • the user can observe an enlarged virtual image of the image displayed on the display element 104d, and can visually recognize it as a screen formed on the image display unit 104B.
  • the eyepiece prism 104f, the deflecting prism 104g, and the hologram optical element 104h transmit almost all of the external light, the user can observe the external image (real image) through these. Therefore, the virtual image of the image displayed on the display element 104d is observed so as to overlap with a part of the external image. In this way, the user of the HMD 100 can simultaneously observe the image provided from the display element 104d and the external image via the hologram optical element 104h.
  • When no image is displayed, the image display unit 104B is transparent, and the user can observe only the external image.
  • In the present embodiment, the display unit is configured by combining a light source, a liquid crystal display element, and an optical system, but a self-luminous display element (for example, an organic EL display element) may be used instead.
  • A transmissive organic EL display panel having transparency in the non-light-emitting state may also be used.
  • The “proximity sensor” outputs a signal by detecting whether an object, for example, a part of the human body (such as a hand or a finger), exists within a proximity range (detection region) in front of the detection surface of the proximity sensor, in order to detect that the object is close to the user's eyes.
  • the proximity range may be set as appropriate according to the characteristics and preferences of the user. For example, the proximity range from the detection surface of the proximity sensor can be within a range of 200 mm.
  • Within this proximity range, the user can move a palm or a finger into and out of the field of view with the arm bent, so an operation by a gesture using the hand, a finger, or a pointing tool (for example, a rod-shaped member) can be performed easily, and the possibility of erroneously detecting a human body other than the user, furniture, or the like is reduced.
  • a passive proximity sensor has a detection device that detects invisible light and electromagnetic waves emitted from an object when the object approaches.
  • Examples of passive proximity sensors include a pyroelectric sensor that detects invisible light such as infrared rays emitted from an approaching human body, and a capacitance sensor that detects a change in capacitance between the sensor and an approaching human body.
  • An active proximity sensor includes a projection device that projects invisible light or sound waves, and a detection device that receives the invisible light or sound waves reflected back by the object.
  • Active proximity sensors include infrared sensors that project infrared rays and receive the infrared rays reflected by an object, laser sensors that project a laser beam and receive the beam reflected by an object, and ultrasonic sensors that project ultrasonic waves and receive the waves reflected by an object. Note that a passive proximity sensor does not need to project energy toward an object, and therefore excels in low power consumption. An active proximity sensor can more easily improve the certainty of detection; for example, even when the user wears a glove that blocks the detection light (such as infrared light) emitted from the human body, the movement of the hand can still be detected. A plurality of types of proximity sensors may be combined.
  • a pyroelectric sensor including a plurality of pyroelectric elements arranged in a two-dimensional matrix is used as the proximity sensor 105.
  • the proximity sensor 105 includes four pyroelectric elements RA, RB, RC, and RD arranged in two rows and two columns, and receives invisible light such as infrared light emitted from the human body as detection light.
  • a corresponding signal is output from each of the pyroelectric elements RA to RD.
  • the outputs of the pyroelectric elements RA to RD change in intensity according to the distance from the light receiving surface of the proximity sensor 105 to the object, and the intensity increases as the distance decreases.
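Because the element outputs grow as the object approaches, presence in the detection area reduces to a simple threshold test, as the later waveform discussion does with the threshold th. A minimal sketch (the threshold value, function name, and dictionary layout are illustrative assumptions, not from the patent):

```python
def object_detected(outputs, th=100):
    """Return True if any pyroelectric element output exceeds the
    threshold th, i.e. an object is close enough to the light-receiving
    surface of the proximity sensor."""
    return any(level > th for level in outputs.values())

# Distant object: all outputs weak.  Near object: RA and RC exceed th.
assert not object_detected({"RA": 10, "RB": 5, "RC": 12, "RD": 4})
assert object_detected({"RA": 150, "RB": 20, "RC": 140, "RD": 15})
```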
  • The right sub-body portion 108-R is attached to the right side portion 101b of the frame 101, and the left sub-body portion 108-L is attached to the left side portion 101c of the frame 101.
  • the right sub body 108-R and the left sub body 108-L have an elongated plate shape.
  • the main main body 103 and the right sub main body 108-R are connected to each other by a wiring HS so that signals can be transmitted.
  • The right sub main body 108-R is connected to the control unit CTU via a cord CD extending from its rear end.
  • the HMD 100 includes a control unit CTU, a display unit 104, a display control unit 104DR, a proximity sensor 105, and a camera 106.
  • the control unit CTU includes a control processing unit 121, an operation unit 122, a storage unit 125, a battery 126, and a power supply circuit 127.
  • the display control unit 104DR is a circuit that is connected to the control processing unit 121 and controls the image forming unit 104A of the display unit 104 according to the control of the control processing unit 121 to form an image on the image forming unit 104A.
  • the image forming unit 104A is as described above.
  • the camera 106 is an apparatus that is connected to the control processing unit 121 and generates an image of a subject under the control of the control processing unit 121.
  • The camera 106 includes, for example, an imaging optical system that forms an optical image of a subject on a predetermined imaging surface, an image sensor whose light receiving surface coincides with the imaging surface and which converts the optical image into an electrical signal, and a digital signal processor (DSP) that performs known image processing on the output of the image sensor to generate an image (image data).
  • the imaging optical system includes one or more lenses, and includes the lens 106a as one of them.
  • the camera 106 outputs the generated image data to the control processing unit 121.
  • the proximity sensor 105 is connected to the control processing unit 121.
  • the proximity sensor 105 is as described above, and outputs the output to the control processing unit 121.
  • The operation unit 122 is connected to the control processing unit 121 and is a device for inputting predetermined instructions, such as power on/off, to the HMD 100; it includes, for example, one or more switches to which predetermined functions are assigned.
  • the battery 126 is a battery that accumulates electric power and supplies the electric power.
  • the battery 126 may be a primary battery or a secondary battery.
  • the power supply circuit 127 is a circuit that supplies power supplied from the battery 126 to each part of the HMD 100 that requires power at a voltage corresponding to each part.
  • the storage unit 125 is a circuit that is connected to the control processing unit 121 and stores various predetermined programs and various predetermined data under the control of the control processing unit 121.
  • Examples of the various predetermined programs include control processing programs such as a control program that controls each unit of the HMD 100 according to its function, and a gesture processing program that determines a gesture based on the output of the proximity sensor 105.
  • the storage unit 125 includes, for example, a ROM (Read Only Memory) that is a nonvolatile storage element, an EEPROM (Electrically Erasable Programmable Read Only Memory) that is a rewritable nonvolatile storage element, and the like.
  • The storage unit 125 also includes a RAM (Random Access Memory) that serves as the working memory of the control processing unit 121 and stores data generated during execution of the predetermined programs.
  • The control processing unit 121 controls each unit of the HMD 100 according to its function, determines a predetermined gesture set in advance based on the output of the proximity sensor 105, and executes processing according to the determination result.
  • the control processing unit 121 includes, for example, a CPU (Central Processing Unit) and its peripheral circuits. In the control processing unit 121, a control processing program is executed, so that a control unit 1211, a gesture processing unit 1212, and a processing unit 1213 are functionally configured. Note that some or all of the control unit 1211, the gesture processing unit 1212, and the processing unit 1213 may be configured by hardware.
  • the control unit 1211 controls each unit of the HMD 100 according to the function of each unit.
  • the control unit 1211 has functions of a mode control unit 1214 and a storage control unit 1215. These functions will be described later.
  • the gesture processing unit 1212 determines a predetermined gesture set in advance based on the outputs of the plurality of pyroelectric elements in the proximity sensor 105, in this embodiment, the four pyroelectric elements RA to RD.
  • the gesture processing unit 1212 notifies the processing unit 1213 of the determination result.
  • the gesture processing unit 1212 and the proximity sensor 105 constitute a detection unit 128.
  • the detection unit 128 has a detection area SA (FIGS. 7A, 7B, and 8) at a position different from that of the image display unit 104B (an example of the display unit), and distinguishes and detects two or more predetermined gestures.
  • the processing unit 1213 performs a predetermined process (for example, password authentication) using the determination result of the gesture processing unit 1212. Details of the processing unit 1213 will be described later.
  • FIG. 6 is a front view when the HMD 100 according to the present embodiment is mounted.
  • FIG. 7A is a side view when the HMD 100 according to the present embodiment is mounted.
  • FIG. 7B is a partial top view in this case.
  • the hand HD of the user US is also shown.
  • FIG. 8 is a diagram illustrating an example of an image visually recognized by the user through the see-through type image display unit 104B.
  • FIG. 9 is a diagram illustrating an example of the output of the proximity sensor 105 provided in the HMD 100 according to the present embodiment.
  • FIG. 9A shows the output of the pyroelectric element RA, FIG. 9B the output of the pyroelectric element RB, FIG. 9C the output of the pyroelectric element RC, and FIG. 9D the output of the pyroelectric element RD.
  • the horizontal axis of each figure in FIG. 9 is time, and the vertical axis thereof is the output level (intensity).
  • The gesture input is an operation in which at least the hand HD or a finger of the user US enters or leaves the detection area SA of the proximity sensor 105, and it can be detected by the gesture processing unit 1212 of the control processing unit 121 of the HMD 100 via the proximity sensor 105.
  • The screen 104i of the image display unit 104B is arranged so as to overlap the effective visual field EV of the user's eye facing the image display unit 104B (here, positioned within the effective visual field EV).
  • the detection area SA of the proximity sensor 105 is in the visual field of the user's eye facing the image display unit 104B.
  • The detection area SA is located within the stable field of view of the user's eye or a field inside it (within about 90° horizontally and about 70° vertically), and more preferably inside the stable field of view.
  • the proximity sensor 105 may be installed with its arrangement and orientation adjusted so as to overlap the effective visual field EV or the inner visual field (horizontal within about 30 °, vertical within about 20 °).
  • FIG. 8 shows an example in which the detection area SA overlaps the screen 104i.
  • By setting the detection region SA of the proximity sensor 105 within the visual field of the eye of the user US while the user US wears the frame 101, which is the head mounting member, on the head, the user can reliably see the hand HD approaching and leaving the detection area SA through the screen 104i without moving the eyes.
  • By setting the detection area SA of the proximity sensor 105 within the stable field of view or the field inside it, the user can reliably perform gesture input while recognizing the detection area SA even while observing the screen.
  • If the detection area SA overlaps the screen 104i, gesture input can be performed still more reliably.
  • When the proximity sensor 105 has a plurality of pyroelectric elements RA to RD as in the present embodiment, the entire light receiving area of the plurality of pyroelectric elements RA to RD is regarded as one light receiving unit, and its maximum detection range is taken as the detection area SA.
  • When none of the outputs of the pyroelectric elements RA to RD exceeds the threshold, the gesture processing unit 1212 of the control processing unit 121 determines that no gesture has been performed.
  • When the outputs exceed the threshold, the gesture processing unit 1212 determines that a gesture has been performed.
  • The gesture may also be performed by the user US using an indicator made of a material capable of emitting invisible light.
  • the proximity sensor 105 has four pyroelectric elements RA to RD arranged in two rows and two columns (see FIG. 4). Therefore, when the user US brings the hand HD close to the front of the HMD 100 from either the left, right, up, or down directions, the output timings of signals detected by the pyroelectric elements RA to RD are different.
  • When the hand HD approaches, the invisible light emitted from the hand HD enters the proximity sensor 105.
  • When the hand HD enters from the right, the pyroelectric elements RA and RC receive the invisible light first. Therefore, referring to FIGS. 4 and 9, the signals of the pyroelectric elements RA and RC rise first, and the signals of the pyroelectric elements RB and RD rise after a delay. Thereafter, the signals of the pyroelectric elements RA and RC fall, and the signals of the pyroelectric elements RB and RD fall after a delay.
  • The gesture processing unit 1212 detects this signal timing and determines that the user US has made a gesture of moving the hand HD from right to left.
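The timing comparison just described can be sketched as follows: the pair of elements whose signals rise first indicates the side from which the hand entered. The 2 × 2 layout assumed here follows the right-to-left example above (RA and RC rise first on an entry from the right); the function name and dictionary layout are illustrative, not from the patent:

```python
def entry_side(rise_times):
    """Infer the entry side of the hand from the times at which the four
    pyroelectric element signals rise.  Assumed layout: RA/RC respond
    first to an entry from the right, RB/RD from the left, RA/RB from
    the top, RC/RD from the bottom."""
    pair_avg = {
        "right":  (rise_times["RA"] + rise_times["RC"]) / 2,
        "left":   (rise_times["RB"] + rise_times["RD"]) / 2,
        "top":    (rise_times["RA"] + rise_times["RB"]) / 2,
        "bottom": (rise_times["RC"] + rise_times["RD"]) / 2,
    }
    # The side whose element pair rose earliest on average is the entry side.
    return min(pair_avg, key=pair_avg.get)

# Right-to-left movement: RA and RC rise first, RB and RD after a delay.
assert entry_side({"RA": 0.0, "RC": 0.0, "RB": 0.2, "RD": 0.2}) == "right"
```

The exit side can be inferred the same way from the fall times, which is enough to distinguish the gestures described next.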
  • FIG. 10 is an explanatory diagram for explaining the relationship between a gesture and input information.
  • FIG. 10 includes arrows indicating the hand movements of each of the 12 gestures, and input information indicating inputs pre-assigned to each of the 12 gestures.
  • FIG. 11 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 1 is performed.
  • the horizontal axis indicates time
  • the vertical axis indicates the output level.
  • the threshold values th are all the same value.
  • the gesture 1 is a gesture in which the hand HD enters the detection area SA from the upper side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the lower side of the detection area SA.
  • When the gesture 1 is made, the output levels of the pyroelectric elements RA and RB exceed the threshold value th first, and later the output levels of the pyroelectric elements RC and RD exceed the threshold value th; thereafter, the output levels of the pyroelectric elements RA and RB fall to the threshold value th or below, and later the output levels of the pyroelectric elements RC and RD fall to the threshold value th or below.
  • When the gesture processing unit 1212 detects such a change in output level, it determines that the gesture 1 has been made.
  • the input information indicating the input previously assigned to the gesture 1 is “1”.
  • the user can input the number 1 to the HMD 100 by making the gesture 1.
  • FIG. 12 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 2 is performed.
  • the horizontal axis and vertical axis of this waveform diagram are the same as the horizontal axis and vertical axis of the waveform diagram of FIG.
  • the gesture 2 is a gesture in which the hand HD enters the detection area SA from the lower side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the upper side of the detection area SA.
  • When the gesture 2 is made, the output levels of the pyroelectric elements RC and RD exceed the threshold value th first, and later the output levels of the pyroelectric elements RA and RB exceed the threshold value th; thereafter, the output levels of the pyroelectric elements RC and RD fall to the threshold value th or below, and later the output levels of the pyroelectric elements RA and RB fall to the threshold value th or below.
  • When the gesture processing unit 1212 (FIG. 5) detects such a change in output level, it determines that the gesture 2 has been made.
  • the input information indicating the input previously assigned to the gesture 2 is “2”.
  • the user can input the number 2 to the HMD 100 by making the gesture 2.
  • FIG. 13 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 3 is performed.
  • the horizontal axis and vertical axis of this waveform diagram are the same as the horizontal axis and vertical axis of the waveform diagram of FIG.
  • the gesture 3 is a gesture in which the hand HD enters the detection area SA from the right side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the lower side of the detection area SA.
  • When the gesture 3 is made, the output levels of the pyroelectric elements RA and RC exceed the threshold value th first, and later the output levels of the pyroelectric elements RB and RD exceed the threshold value th; thereafter, the output levels of the pyroelectric elements RA and RB fall to the threshold value th or below, and later the output levels of the pyroelectric elements RC and RD fall to the threshold value th or below.
  • When the gesture processing unit 1212 (FIG. 5) detects such a change in output level, it determines that the gesture 3 has been made.
  • the input information indicating the input previously assigned to the gesture 3 is “3”.
  • the user can input the number 3 to the HMD 100 by making the gesture 3.
  • FIG. 14 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 4 is performed.
  • the horizontal axis and vertical axis of this waveform diagram are the same as the horizontal axis and vertical axis of the waveform diagram of FIG.
  • the gesture 4 is a gesture in which the hand HD enters the detection area SA from the lower side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the right side of the detection area SA.
  • When the gesture 4 is made, the output levels of the pyroelectric elements RC and RD exceed the threshold value th first, and later the output levels of the pyroelectric elements RA and RB exceed the threshold value th; thereafter, the output levels of the pyroelectric elements RB and RD fall to the threshold value th or below, and later the output levels of the pyroelectric elements RA and RC fall to the threshold value th or below.
  • the gesture processing unit 1212 (FIG. 5) detects such a change in output level, the gesture processing unit 1212 determines that the gesture 4 has been made.
  • the input information indicating the input previously assigned to the gesture 4 is “4”. The user can input the number 4 to the HMD 100 by making the gesture 4.
  • Waveform diagrams are omitted for the gestures 5 to 12. Regarding these gestures, the description of the waveform changes is also omitted, since they follow the same concept as the gestures 1 to 4.
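The entry/exit determination described above (which pair of pyroelectric elements crosses the threshold value th first, and which pair drops below it last) can be sketched as a lookup. This is a hypothetical Python illustration, not code from the embodiment; the pairing of elements RA to RD to sides is inferred from the waveform descriptions of the gestures 1 to 4.

```python
# Pairing inferred from the waveforms: RA/RB react together for the upper side,
# RC/RD for the lower side, RB/RD for the left side, and RA/RC for the right side.
SIDE_OF_PAIR = {
    frozenset({"RA", "RB"}): "upper",
    frozenset({"RC", "RD"}): "lower",
    frozenset({"RB", "RD"}): "left",
    frozenset({"RA", "RC"}): "right",
}

def classify_side(elements):
    """Map the pair of pyroelectric elements that first exceeded (or last dropped
    below) the threshold th to a side of the detection area SA, or None."""
    return SIDE_OF_PAIR.get(frozenset(elements))
```

For example, if RA and RC exceed th first, `classify_side(["RA", "RC"])` yields the right side, matching the description of the gesture 3.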
  • the gesture 5 will be described with reference to FIGS. 4 and 10.
  • the gesture 5 is a gesture in which the hand HD enters the detection area SA from the right side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the upper side of the detection area SA.
  • the input information indicating the input previously assigned to the gesture 5 is “5”.
  • the user can input the number 5 to the HMD 100 by making the gesture 5.
  • the gesture 6 is a gesture in which the hand HD enters the detection area SA from the upper side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the right side of the detection area SA.
  • the input information indicating the input previously assigned to the gesture 6 is “6”.
  • the user can input the number 6 to the HMD 100 by making the gesture 6.
  • the gesture 7 is a gesture in which the hand HD enters the detection area SA from the left side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the lower side of the detection area SA.
  • the input information indicating the input previously assigned to the gesture 7 is “7”.
  • the user can input the number 7 to the HMD 100 by making the gesture 7.
  • the gesture 8 is a gesture in which the hand HD enters the detection area SA from the lower side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the left side of the detection area SA.
  • the input information indicating the input previously assigned to the gesture 8 is “8”.
  • the user can input the number 8 to the HMD 100 by making the gesture 8.
  • the gesture 9 is a gesture in which the hand HD enters the detection area SA from the left side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the upper side of the detection area SA.
  • the input information indicating the input previously assigned to the gesture 9 is “9”.
  • the user can input the number 9 to the HMD 100 by making the gesture 9.
  • the gesture 10 is a gesture in which the hand HD enters the detection area SA from the upper side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the left side of the detection area SA.
  • the input information indicating the input previously assigned to the gesture 10 is “0”.
  • the user can input the number 0 to the HMD 100 by making the gesture 10.
  • the gesture 11 will be described.
  • the gesture 11 is a gesture in which the hand HD enters the detection area SA from the left side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the right side of the detection area SA.
  • the input information indicating the input previously assigned to the gesture 11 is a “command to switch to the next screen”.
  • the user can input a “command to switch to the next screen” to the HMD 100 by making the gesture 11.
  • the gesture 12 is a gesture in which the hand HD enters the detection area SA from the right side of the detection area SA (FIGS. 7A, 7B, and 8) and exits the detection area SA from the left side of the detection area SA.
  • the input information indicating the input previously assigned to the gesture 12 is a “command to switch to the previous screen”.
  • the user can input a “command to switch to the previous screen” to the HMD 100 by making the gesture 12.
  • Input information (“0”) to input information (“9”) are elements constituting a password. This element is not limited to numbers but may be alphabets or the like.
  • the combinations of the gestures 1 to 12 and the input information are not limited to the example shown in FIG. 10, and may be different combinations (for example, the input information (“1”) may be assigned to the gesture 11).
  • the gestures that can be detected by the proximity sensor 105 are not limited to the gestures 1 to 12, and arbitrary input information may be assigned to gestures other than these gestures.
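The correspondence described above between entry/exit sides and input information can be summarized as a lookup table. The following is a hypothetical Python sketch of the table of FIG. 10, not code from the embodiment; the side names and command strings are illustrative assumptions.

```python
# Hypothetical rendering of the FIG. 10 table: (entry side, exit side) -> input information.
GESTURE_TABLE = {
    ("upper", "lower"): "1",             # gesture 1
    ("lower", "upper"): "2",             # gesture 2
    ("right", "lower"): "3",             # gesture 3
    ("lower", "right"): "4",             # gesture 4
    ("right", "upper"): "5",             # gesture 5
    ("upper", "right"): "6",             # gesture 6
    ("left",  "lower"): "7",             # gesture 7
    ("lower", "left"):  "8",             # gesture 8
    ("left",  "upper"): "9",             # gesture 9
    ("upper", "left"):  "0",             # gesture 10
    ("left",  "right"): "next screen",   # gesture 11 (command to switch to the next screen)
    ("right", "left"):  "previous screen",  # gesture 12 (command to switch to the previous screen)
}

def lookup(entry_side, exit_side):
    """Return the input information for a gesture, or None (error processing)."""
    return GESTURE_TABLE.get((entry_side, exit_side))
```

A combination not present in the table (for example, entering and exiting on the same side) returns None, corresponding to the error processing mentioned later.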
  • FIG. 15 is a flowchart for explaining this operation.
  • the user operates operation unit 122 to turn on the power.
  • the display control unit 104DR displays a password input screen on the image display unit 104B for password authentication.
  • the password will be described by taking a three-digit password as an example.
  • FIG. 16 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when a series of gestures (gesture 2, gesture 1, gesture 4) is performed for password input.
  • the user inputs a password by making a gesture.
  • the user password is “214”.
  • the user places a hand in front of the proximity sensor 105 and performs gesture 2 (FIG. 10) to which input information (“2”) is assigned.
  • the mode control unit 1214 starts a reception mode for accepting a gesture input (step S1 in FIG. 15).
  • the acceptance mode is started when the output levels of the pyroelectric elements RC and RD exceed the threshold value th.
  • the mode control unit 1214 ends the reception mode after a predetermined period (for example, 3 seconds) has elapsed since the start of the reception mode.
  • the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand has entered the detection area SA (FIGS. 7A, 7B, and 8). Here, since the output level of the pyroelectric elements RC and RD exceeds the threshold th earlier than the output level of the pyroelectric elements RA and RB, it is determined to be lower.
  • the storage control unit 1215 stores information indicating “lower side” in the storage unit 125 (step S2 in FIG. 15).
  • the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand has exited the detection area SA.
  • since the output levels of the pyroelectric elements RA and RB become equal to or lower than the threshold th after the output levels of the pyroelectric elements RC and RD, it is determined to be the upper side.
  • the storage control unit 1215 stores information indicating “upper side” in the storage unit 125 (step S3 in FIG. 15).
  • the gesture processing unit 1212 determines the type of gesture using the results of step S2 and step S3 (step S4 in FIG. 15).
  • the types of gestures are the 12 gestures described in FIG.
  • the storage unit 125 stores in advance a table (hereinafter referred to as the table of FIG. 10) indicating the correspondence between the gestures 1 to 12 and the input information assigned to them.
  • the gesture processing unit 1212 reads out the determination results of step S2 and step S3 stored in the storage unit 125.
  • the determination result of step S2 is “lower side”, and the determination result of step S3 is “upper side”. Accordingly, since the hand enters the detection area SA from the lower side of the detection area SA and exits the detection area SA from the upper side of the detection area SA, the gesture processing unit 1212 determines that the gesture is the gesture 2. Note that the gesture processing unit 1212 performs error processing when it determines that none of the gestures 1 to 12 corresponds. As a result, the display control unit 104DR causes the image display unit 104B to display a screen prompting the user to make a correct gesture.
  • the storage control unit 1215 refers to the table in FIG. 10 and stores the input information assigned to the gesture determined in step S4 in the storage unit 125 (step S5 in FIG. 15).
  • the input information (“2”) assigned to gesture 2 is stored in storage unit 125.
  • the gesture processing unit 1212 determines whether or not a predetermined period has elapsed from the start of the reception mode (step S1 in FIG. 15) (step S6 in FIG. 15). If the predetermined period has not elapsed (No in step S6), the gesture processing unit 1212 determines whether any of the output levels of the pyroelectric elements RA to RD has exceeded the threshold value th (step S7 in FIG. 15). That is, the gesture processing unit 1212 stands by until the next gesture is made. When the gesture processing unit 1212 determines that all the output levels of the pyroelectric elements RA to RD are equal to or lower than the threshold th (No in step S7), the gesture processing unit 1212 performs the process of step S6.
  • when the gesture processing unit 1212 determines that the output level of any one of the pyroelectric elements RA to RD exceeds the threshold th (Yes in step S7), the gesture processing unit 1212 performs the process of step S2. Here, since the user performs the gesture 1 to which the input information (“1”) is assigned, the gesture processing unit 1212 performs the process of step S2.
  • the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand has entered the detection area SA. Here, since the output level of the pyroelectric elements RA and RB exceeds the threshold th earlier than the output level of the pyroelectric elements RC and RD, it is determined to be the upper side.
  • the storage control unit 1215 stores information indicating “upper side” in the storage unit 125 (step S2).
  • the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand has exited the detection area SA.
  • since the output levels of the pyroelectric elements RC and RD become equal to or lower than the threshold th after the output levels of the pyroelectric elements RA and RB, it is determined to be the lower side.
  • the storage control unit 1215 stores information indicating “lower side” in the storage unit 125 (step S3).
  • the gesture processing unit 1212 determines the type of gesture using the results of step S2 and step S3 (step S4). More specifically, the gesture processing unit 1212 reads the determination results of step S2 and step S3 stored in the storage unit 125. Here, the determination result of step S2 is “upper side”, and the determination result of step S3 is “lower side”. Accordingly, since the hand enters the detection area SA from the upper side of the detection area SA and exits the detection area SA from the lower side of the detection area SA, the gesture processing unit 1212 determines that the gesture is the gesture 1.
  • the storage control unit 1215 refers to the table in FIG. 10 and stores the input information assigned to the gesture determined in step S4 in the storage unit 125 (step S5).
  • the input information (“1”) assigned to gesture 1 is stored in storage unit 125.
  • the storage control unit 1215 stores input information (“2”), input information (“1”), and input information (“4”) in the storage unit 125. As described above, the storage control unit 1215 controls the storage unit 125 to store, in order from the gesture first detected by the detection unit 128 (the gesture 2), the input information indicating the input previously assigned to each gesture of the series of gestures (gesture 2, gesture 1, gesture 4) executed a plurality of times after the start of the reception mode (step S1 in FIG. 15).
  • the mode control unit 1214 ends the reception mode after elapse of a predetermined period (for example, 3 seconds) after starting the reception mode. Therefore, the user must make a series of gestures including gesture 2, gesture 1, and gesture 4 within a predetermined period.
  • after the reception mode ends, the processing unit 1213 performs a predetermined process using the input information of each of the plurality of gestures constituting the series of gestures stored in the storage unit 125 (the input information of the gesture 2 (“2”), the input information of the gesture 1 (“1”), and the input information of the gesture 4 (“4”)) (step S8 in FIG. 15).
  • the processing unit 1213 performs password authentication using the input information stored in the storage unit 125 as a predetermined process.
  • the processing unit 1213 performs password authentication by using the input information (“2”), the input information (“1”), and the input information (“4”), assuming that the input password is “214”.
  • when the processing unit 1213 determines that the password authentication has failed, it performs error processing.
  • the display control unit 104DR causes the image display unit 104B to display a screen indicating that password authentication has failed.
  • the display control unit 104DR switches the screen displayed on the image display unit 104B from the password input screen to the initial screen.
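The flow of FIG. 15 (steps S1 to S8) for the password “214” can be sketched as follows. This is a hypothetical Python illustration in which the proximity sensor is replaced by a pre-recorded list of (time, entry side, exit side) tuples; the function and variable names, and the reduced gesture table, are assumptions rather than the embodiment's implementation.

```python
# Hypothetical subset of the FIG. 10 table, enough for the password "214".
GESTURE_TABLE = {("lower", "upper"): "2",   # gesture 2
                 ("upper", "lower"): "1",   # gesture 1
                 ("lower", "right"): "4"}   # gesture 4
RECEPTION_PERIOD = 3.0  # the predetermined period, in seconds

def run_reception_mode(events, password):
    """events: (time since mode start, entry side, exit side) tuples.
    Collects input information (steps S2-S7), then authenticates (step S8)."""
    stored = []                                   # stands in for the storage unit 125
    for t, entry, exit_ in events:
        if t >= RECEPTION_PERIOD:                 # step S6: reception mode has ended
            break
        info = GESTURE_TABLE.get((entry, exit_))  # steps S2-S4: determine the gesture
        if info is None:
            return "error"                        # none of the gestures: error processing
        stored.append(info)                       # step S5: store the input information
    return "authenticated" if "".join(stored) == password else "failed"
```

For example, `run_reception_mode([(0.2, "lower", "upper"), (1.0, "upper", "lower"), (1.8, "lower", "right")], "214")` models the series gesture 2, gesture 1, gesture 4 completed within the predetermined period.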
  • FIG. 17 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 11 is made to switch to the next screen.
  • an instruction to switch to the next screen is input to HMD 100.
  • the mode control unit 1214 starts a reception mode for accepting a gesture input (step S1 in FIG. 15). In this case, the acceptance mode is started when the output levels of the pyroelectric elements RB and RD exceed the threshold value th.
  • the mode control unit 1214 ends the reception mode after a predetermined period (for example, 3 seconds) has elapsed since the start of the reception mode.
  • the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand has entered the detection area SA (FIGS. 7A, 7B, and 8).
  • since the output levels of the pyroelectric elements RB and RD exceed the threshold th earlier than the output levels of the pyroelectric elements RA and RC, it is determined to be the left side.
  • the storage control unit 1215 stores information indicating “left side” in the storage unit 125 (step S2 in FIG. 15).
  • the gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand has exited the detection area SA.
  • since the output levels of the pyroelectric elements RA and RC become equal to or lower than the threshold th after the output levels of the pyroelectric elements RB and RD, it is determined to be the right side.
  • the storage control unit 1215 stores information indicating “right side” in the storage unit 125 (step S3 in FIG. 15).
  • the gesture processing unit 1212 determines the type of gesture using the results of step S2 and step S3 (step S4 in FIG. 15). More specifically, the gesture processing unit 1212 reads the determination results of step S2 and step S3 stored in the storage unit 125. Here, the determination result of step S2 is “left side”, and the determination result of step S3 is “right side”. Therefore, since the hand enters the detection area SA from the left side of the detection area SA and exits the detection area SA from the right side of the detection area SA, the gesture processing unit 1212 determines that it is the gesture 11.
  • the storage control unit 1215 refers to the table in FIG. 10 and stores the input information assigned to the gesture determined in step S4 in the storage unit 125 (step S5 in FIG. 15).
  • the input information (“command to switch to the next screen”) assigned to the gesture 11 is stored in the storage unit 125.
  • the gesture processing unit 1212 determines whether or not a predetermined period has elapsed since the start of the reception mode (step S1 in FIG. 15) (step S6 in FIG. 15). If the predetermined period has not elapsed (No in step S6), the gesture processing unit 1212 determines whether any of the output levels of the pyroelectric elements RA to RD has exceeded the threshold value th (step S7 in FIG. 15). When the gesture processing unit 1212 determines that all the output levels of the pyroelectric elements RA to RD are equal to or lower than the threshold th (No in step S7), the gesture processing unit 1212 performs the process of step S6. Here, since only the gesture 11 is made, the gesture processing unit 1212 performs the process of step S6.
  • when a predetermined period has elapsed from the start of the acceptance mode (Yes in step S6), the processing unit 1213 performs a predetermined process (step S8 in FIG. 15). The processing unit 1213 generates a command for switching to the next screen as the predetermined process. With this command, the display control unit 104DR switches the screen displayed on the image display unit 104B from the initial screen to the next screen.
  • the detection unit 128 does not detect a gesture using the screen displayed on the image display unit 104B; instead, it has the detection area SA (FIGS. 7A, 7B, and 8) at a position different from the image display unit 104B, and distinguishes and detects two or more predetermined gestures.
  • the processing unit 1213 performs password authentication (an example of a predetermined process) using the input information of each of the three gestures constituting the series of gestures (that is, three pieces of input information) (step S8). For this reason, it is possible to perform a process that requires a plurality of inputs, such as password authentication, using gesture input without using a screen. Therefore, according to this embodiment, it is possible to improve gesture input without using a screen.
  • the detection unit 128 includes a proximity sensor 105 and a gesture processing unit 1212.
  • the detection unit 128 is not limited to this configuration.
  • the detection unit 128 may include a camera 106 (two-dimensional imaging device) and an image processing unit that performs predetermined image processing on an image captured by the camera 106 and recognizes a gesture.
  • the first modification allows input by an intuitive operation. For example, when a series of gestures is a hand movement indicating “bye-bye”, a “command to switch to the initial screen” is input.
  • the instruction is not limited to this, and may be, for example, “an instruction to cancel the previous input”.
  • a hand movement indicating “bye-bye” is, for example, a series in which the gesture 11 shown in FIG. 10 is made, then the gesture 12, and then the gesture 11, or vice versa: the gesture 12 is made, then the gesture 11, and then the gesture 12.
  • here, the former will be described as an example.
  • FIG. 18 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when a series of gestures (gesture 11, gesture 12, and gesture 11) indicating “bye-bye” is performed.
  • the processes in steps S1 to S7 are performed as in the case of password input.
  • the processing unit 1213 performs a predetermined process (step S8).
  • the processing unit 1213 generates a command for switching to the initial screen as a predetermined process. With this command, the display control unit 104DR switches the screen to be displayed on the image display unit 104B from the current screen to the initial screen.
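A minimal, hypothetical sketch of mapping a completed gesture sequence to a command, assuming the two “bye-bye” orderings described above; the sequence tuples and the command string are illustrative assumptions, not data from the embodiment.

```python
# Hypothetical mapping from a completed gesture sequence to a command.
SEQUENCE_COMMANDS = {
    (11, 12, 11): "switch to initial screen",  # gesture 11, then 12, then 11
    (12, 11, 12): "switch to initial screen",  # the reverse ordering
}

def command_for(sequence):
    """Return the command assigned to a series of gestures, or None."""
    return SEQUENCE_COMMANDS.get(tuple(sequence))
```

An unrecognized or incomplete sequence returns None, in which case no command would be generated.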
  • the predetermined period (that is, the valid period of the reception mode) is fixed.
  • the time required for a series of three gestures for password input is set as the predetermined period. For this reason, when the processing unit 1213 performs a predetermined process using input information obtained by one gesture input or two gesture inputs (for example, generation of a command for switching to the next screen), a waiting time occurs before the process starts. Therefore, in the second modification, when the state in which the start of the next gesture is not detected continues, the acceptance mode is terminated without waiting for the elapse of the predetermined period, and the predetermined process is performed.
  • FIG. 19 is a flowchart for explaining the operation when a gesture is input in the second modification of the present embodiment.
  • FIG. 20 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when the gesture 11 is made to switch to the next screen.
  • the flowchart shown in FIG. 19 is different from the flowchart shown in FIG. 15 in that step S9 is added between steps S6 and S7.
  • the gesture processing unit 1212 determines whether or not the predetermined period has elapsed since the start of the reception mode (step S1) (step S6). If the predetermined period has not elapsed (No in step S6), the gesture processing unit 1212 determines whether the non-detection period in which the start of the next gesture has not been detected has reached a predetermined value (step S9).
  • the non-detection period is a period in which all output levels of the pyroelectric elements RA to RD are, for example, the threshold value th or less.
  • a period in which all the output levels of the pyroelectric elements RA to RD are 0 may be a non-detection period.
  • the predetermined value is a value (for example, 0.5 seconds) smaller than the predetermined period (for example, 3 seconds), and is set in consideration of the time required for the next gesture to start after the end of one gesture.
  • when the gesture processing unit 1212 determines that the non-detection period has not reached the predetermined value (No in step S9), it performs the process of step S7.
  • when the non-detection period has reached the predetermined value (Yes in step S9), the mode control unit 1214 ends the reception mode.
  • the processing unit 1213 performs a predetermined process without waiting for the elapse of the predetermined period (step S8).
  • the storage unit 125 stores the input information (“command to switch to the next screen”) illustrated in FIG. 10.
  • the processing unit 1213 generates a command for switching to the next screen as a predetermined process.
  • the display control unit 104DR switches the screen displayed on the image display unit 104B to the next screen.
  • the waiting time until the start of the predetermined process after the end of the gesture can be shortened.
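The early-termination logic of step S9 can be sketched as follows, assuming the reception mode ends either at the predetermined period (3 seconds) or at the predetermined value (0.5 seconds) after the last detected gesture ended, whichever comes first. The function name and the timestamp representation are hypothetical.

```python
RECEPTION_PERIOD = 3.0     # the predetermined period, in seconds
NON_DETECTION_LIMIT = 0.5  # the predetermined value (smaller than the predetermined period)

def reception_end_time(gesture_end_times):
    """Return when the reception mode ends: at the full predetermined period
    (step S6) or NON_DETECTION_LIMIT after the last detected gesture ended
    (step S9), whichever comes first."""
    if not gesture_end_times:
        return RECEPTION_PERIOD
    return min(RECEPTION_PERIOD, max(gesture_end_times) + NON_DETECTION_LIMIT)
```

With a single gesture ending 1.0 second after the mode starts, the mode ends at 1.5 seconds instead of waiting for the full 3 seconds, which is the shortened waiting time described above.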
  • Modification 3 will be described. As shown in FIG. 16, in the present embodiment, the length of the predetermined period is fixed, but the third modification can extend the predetermined period.
  • a series of gestures there are a first series gesture composed of a first number of gestures and a second series gesture composed of a second number of gestures greater than the first number.
  • the number of digits of a user's password is the first number (for example, 3).
  • the number of digits of a special password (for example, a password for accessing a screen dedicated to a serviceman) is the second number (for example, 4).
  • the user inputs a password by making a first series of gestures (that is, a series of gestures composed of three gestures).
  • the service person inputs a password by making a second series of gestures (that is, a series of gestures composed of four gestures).
  • Modification 3 assumes that the initial predetermined period (that is, the initial value of the predetermined period) is longer than the time required for the first series of gestures, but shorter than the time required for the second series of gestures. This makes it difficult for the user to access the screen dedicated to the service person.
  • the first part of the second series of gestures is composed of a third number of gestures equal to or less than the first number.
  • a command for extending a predetermined period is assigned in advance to the input information of the first part.
  • the first part of the second series of gestures is, for example, two gestures configured by the first and second gestures.
  • the password for accessing the screen dedicated to the serviceman is a four-digit password including, for example, “00” at the beginning (for example, 0012).
  • a command for extending a predetermined period is assigned to the input information (“00”).
  • the predetermined period is extended, and a time for inputting the remaining number “12” is secured.
  • the predetermined period is extended for the input of a four-digit password including “00”. For this reason, when there are a plurality of screens dedicated to the serviceman, a different four-digit password including “00” can be assigned to each screen (that is, a dedicated password can be given to each screen).
  • the input information to which a command for extending a predetermined period is assigned is not limited to one, and may be plural.
  • a command for extending a predetermined period may be assigned to each of the input information (“00”) and the input information “99”.
  • FIG. 21 is a flowchart for explaining the operation when a gesture is input in the third modification of the present embodiment.
  • the flowchart shown in FIG. 21 is different from the flowchart shown in FIG. 19 in that steps S10 and S11 are added between steps S5 and S6.
  • when input information is stored in the storage unit 125 (step S5), the processing unit 1213 determines whether or not the predetermined process based on the stored input information is a process of generating an extension command for the predetermined period (step S10). When input information other than “00” is stored in the storage unit 125, the processing unit 1213 determines that the predetermined process is not a process of generating an extension command for the predetermined period (No in step S10), and the gesture processing unit 1212 performs the process of step S6.
  • when the input information (“00”) is stored in the storage unit 125, the processing unit 1213 determines that the predetermined process is a process of generating an extension command for the predetermined period (Yes in step S10), and generates the extension command. Thereby, the mode control unit 1214 extends the predetermined period (step S11). Then, the gesture processing unit 1212 performs the process of step S7.
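The extension decision of steps S10 and S11 can be sketched as follows. This is a hypothetical Python illustration; the extension length and the set of extension prefixes (here “00” and “99”, following the examples above) are assumptions.

```python
INITIAL_PERIOD = 3.0  # initial value of the predetermined period, in seconds
EXTENSION = 3.0       # length of the extension; the exact value is an assumption
# Input information to which the extension command is assigned in advance.
EXTENSION_PREFIXES = {("0", "0"), ("9", "9")}

def deadline_after(stored):
    """Steps S10-S11: once the stored input information begins with an
    extension prefix, the predetermined period is extended."""
    if tuple(stored[:2]) in EXTENSION_PREFIXES:
        return INITIAL_PERIOD + EXTENSION
    return INITIAL_PERIOD
```

For the serviceman password “0012”, the first two gestures produce the prefix “00”, the deadline is extended, and time is secured for the remaining digits “12”.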
  • Modification 4 will be described.
  • the input information assigned to the first part of the second series of gestures is a command for extending the predetermined period.
  • input information assigned to one or more predetermined gestures is used as a command for extending a predetermined period.
  • the processing unit 1213 extends the predetermined period.
  • the password is “924845”.
  • “00” is a command for extending the predetermined period.
  • the command for extending the predetermined period is not limited to a plurality of digits, and may be a single digit. In this case, the predetermined period is extended by one gesture.
  • the user performs the gesture 9 to which the input information (“9”) is assigned, the gesture 2 to which the input information (“2”) is assigned, and the gesture 4 to which the input information (“4”) is assigned.
  • the gesture 10 to which the input information (“0”) is assigned is performed twice.
  • the processing unit 1213 extends the predetermined period (for example, extends 5 seconds from the current time).
  • the user performs the gesture 8 to which the input information (“8”) is assigned, the gesture 4 to which the input information (“4”) is assigned, and the gesture 5 to which the input information (“5”) is assigned.
  • among the input information (“9”, “2”, “4”, “0”, “0”, “8”, “4”, “5”) stored in the storage unit 125, the processing unit 1213 excludes “0” and “0”.
  • the processing unit 1213 uses the remaining input information (“9”, “2”, “4”, “8”, “4”, “5”) as a password.
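The exclusion of the extension command digits from the stored input information, as described above for Modification 4, can be sketched as follows; the function name and the marker parameter are hypothetical.

```python
def password_from(stored, marker=("0", "0")):
    """Drop the first occurrence of the extension-command marker from the
    stored input information and join the rest as the password."""
    stored = list(stored)
    for i in range(len(stored) - len(marker) + 1):
        if tuple(stored[i:i + len(marker)]) == marker:
            del stored[i:i + len(marker)]
            break
    return "".join(stored)
```

Applied to the stored input information of the example above, the marker “0”, “0” is removed and the password “924845” is recovered.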
  • Modification 5 will be described.
  • the user must finish a series of gestures within a predetermined period (eg, 3 seconds).
  • Modification 5 shows the remainder of the predetermined period on the image display unit 104B (FIG. 5). Thereby, the user can adjust the speed of the gesture so that the time required for a series of gestures falls within a predetermined period.
  • Modification 5 displays information indicating the remainder of the predetermined period on the image display unit 104B illustrated in FIG. 5 during the predetermined period.
  • FIG. 22 is a screen diagram illustrating a first example of a screen displayed on the image display unit 104B in Modification 5 of the present embodiment.
  • FIG. 23 is a screen diagram illustrating a second example of a screen displayed on the image display unit 104B in Modification 5 of the present embodiment.
  • display control unit 104DR causes image display unit 104B to display screen SC1.
  • the remaining time is information indicating the remaining of the predetermined period.
  • the display control unit 104DR displays the screen SC2 on the image display unit 104B.
  • Screen SC2 includes the remaining time and characters indicating that the predetermined period has been extended.
  • the user of the HMD 100 can adjust the speed of the gesture so that the time required for a series of gestures falls within a predetermined period.
  • the display control unit 104DR may cause the image display unit 104B to display information indicating the time elapsed since the start of the reception mode instead of the information indicating the remainder of the predetermined period, or display both of them on the image display unit 104B. You may let them.
  • the time elapsed from the start of the reception mode is the time elapsed from the start of the predetermined period.
  • FIG. 24 is a screen diagram illustrating a third example of a screen displayed on the image display unit 104B in Modification 5 of the present embodiment. As shown in FIG. 24, the display control unit 104DR causes the image display unit 104B to display a screen SC3 including a white region and a gray region.
  • the white area indicates the time that has elapsed since the start of the reception mode, and the display control unit 104DR increases the area of the white area as the time increases.
  • the gray area indicates the remaining time of the predetermined period, and the display control unit 104DR decreases the area of the gray area as the time becomes shorter.
  • the screen SC3 shown in FIG. 24 includes both information (gray area) indicating the remainder of the predetermined period and information (white area) indicating the time elapsed since the start of the reception mode.
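The screen SC3 behavior (a white region that grows with the elapsed time and a gray region that shrinks with the remaining time) can be sketched as a text progress bar. This is a hypothetical Python illustration; the characters standing in for the white and gray regions are assumptions.

```python
def progress_bar(elapsed, period, width=20):
    """Render the SC3-style display: '#' stands in for the white region
    (time elapsed since the start of the reception mode) and '.' for the
    gray region (remaining time of the predetermined period)."""
    elapsed = min(max(elapsed, 0.0), period)  # clamp to [0, period]
    white = round(width * elapsed / period)
    return "#" * white + "." * (width - white)
```

Halfway through a 3-second predetermined period, the bar is half white and half gray, giving the user a cue to adjust the speed of the remaining gestures.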
  • the display device includes: a display unit; a detection unit that has a detection region at a position different from the display unit and is capable of distinguishing and detecting two or more predetermined gestures; a mode control unit that starts a reception mode for receiving a gesture input and terminates the reception mode after a predetermined period; a storage unit; a storage control unit that controls the storage unit to store, in order from the gesture first detected by the detection unit among a series of gestures executed a plurality of times after the reception mode is started, input information indicating an input previously assigned to each gesture; and a processing unit that, after completion of the reception mode, performs predetermined processing using the input information, stored in the storage unit, of each of the plurality of gestures constituting the series of gestures.
  • the detection unit does not detect a gesture using the screen displayed on the display unit, but has a detection area at a position different from the display unit, and detects two or more predetermined gestures separately.
  • the display device according to the embodiment performs a predetermined process using input information (that is, a plurality of input information) of a plurality of gestures constituting a series of gestures. For this reason, a process (predetermined process) that requires a plurality of inputs such as password authentication can be performed using gesture input without using a screen. Therefore, according to the display device according to the embodiment, gesture input without using a screen can be improved.
  • the detection unit includes, for example, a plurality of pyroelectric elements arranged in a two-dimensional matrix, and a gesture processing unit that determines a gesture based on each output of the plurality of pyroelectric elements.
  • the display device is, for example, a wearable terminal that can be worn on the head or arm.
  • a wearable terminal is a terminal device that can be worn on a part of a body (for example, a head or an arm).
  • the input information of each of the two or more gestures indicates an element constituting a password
  • the processing unit performs password authentication as the predetermined processing.
  • the elements constituting the password are, for example, numbers and alphabets. According to this configuration, password authentication can be performed by gesture input without using a screen.
  • the processing unit performs the predetermined process using the input information stored in the storage unit without waiting for the predetermined period to elapse.
  • the predetermined value (for example, 0.5 seconds) is smaller than the predetermined period (for example, 3 seconds).
  • the predetermined period is set to the time required for a series of gestures. Consequently, when the processing unit performs a predetermined process using the input information of a single gesture input (for example, generating a command to switch to the next screen), a waiting time would arise between the end of the gesture and the start of the predetermined process.
  • with this configuration, the reception mode is terminated and the predetermined process is performed without waiting for the predetermined period to elapse, so the waiting time can be shortened.
  • a command for extending the predetermined period is assigned in advance to the input information of one or more predetermined gestures, and when that input information is stored in the storage unit during the predetermined period,
  • the mode control unit extends the predetermined period.
  • the predetermined period can thus be extended while it is in progress. Therefore, even when the number of gestures is large or the user performs the gestures slowly, the series of gestures can be completed within the predetermined period.
  • the series of gestures includes a first series of gestures made up of a first number of gestures, and a second series of gestures made up of a second number of gestures that is greater than the first number.
  • the first part of the second series of gestures is made up of a third number of gestures equal to or less than the first number, and the extension command is assigned in advance to the input information of this first part.
  • when the input information of the first part is stored, the mode control unit extends the predetermined period.
  • the number of gestures for the user password is the first number,
  • and the number of gestures for a special password (for example, a password for accessing a screen dedicated to service personnel) is the second number.
  • the user inputs the password by performing the first series of gestures.
  • the service person inputs the special password by performing the second series of gestures.
  • the initial predetermined period (that is, the initial value of the predetermined period)
  • is longer than the time required for the first series of gestures but shorter than the time required for the second series of gestures. This makes it difficult for an ordinary user to access the screen dedicated to the service person.
  • a command for extending the predetermined period is assigned in advance to the input information of the first part of the second series of gestures.
  • when this input information is stored, the mode control unit extends the predetermined period, which secures the time needed to perform the remaining part of the second series of gestures. Therefore, with this configuration, the processing unit can perform the predetermined process (for example, granting access to the screen dedicated to the service person) using the input information, stored in the storage unit, of each of the plurality of gestures constituting the second series of gestures.
  • a display control unit is further provided that causes the display unit to display, during the predetermined period, at least one of information indicating the remainder of the predetermined period and information indicating the time elapsed since the start of the reception mode.
  • the predetermined period is 3 seconds and 1.4 seconds have elapsed since the start of the reception mode.
  • the remainder of the predetermined period is 1.6 seconds.
  • the time elapsed from the start of the reception mode (that is, the start of the predetermined period) is 1.4 seconds.
  • the user of the display device can adjust the speed of the gesture so that the time required for a series of gestures falls within a predetermined period.
  • the gesture input method is directed to a display device including a display unit and a detection unit that has a detection region at a position different from the display unit and can distinguish and detect two or more predetermined gestures.
  • the method includes: a first step of starting a reception mode for receiving gesture input and ending the reception mode after a predetermined period; a second step of, for a series of gestures executed a plurality of times after the start of the reception mode, controlling the storage unit to store input information indicating the input pre-assigned to each gesture, in order from the gesture first detected by the detection unit; and a third step of, after the reception mode ends, performing the predetermined process using the input information, stored in the storage unit, of each of the plurality of gestures constituting the series of gestures.
  • the gesture input method according to the embodiment defines the display device according to the embodiment from the viewpoint of a method, and has the same effects as that display device.
  • a display device and a gesture input method can be provided.
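
The reception mode, storage control, and period-extension command summarized in the list above can be sketched in code. This is an illustrative sketch only: the class name, the gesture-to-input mapping, and all timing values are assumptions chosen for clarity and do not appear in the embodiment.

```python
# Hypothetical sketch of the reception-mode flow summarized above.
# The gesture-to-input mapping and the timing values are assumptions.
GESTURE_INPUTS = {"gesture1": "1", "gesture2": "2",
                  "gesture3": "3", "gesture4": "EXTEND"}
EXTEND_COMMAND = "EXTEND"  # input information carrying the extension command


class ReceptionMode:
    def __init__(self, period=3.0, extension=2.0):
        self.period = period        # predetermined period (seconds)
        self.extension = extension  # added when the extend command is stored
        self.stored = []            # storage unit: input information in order

    def store(self, gesture):
        """Storage control: store the input pre-assigned to a detected gesture."""
        info = GESTURE_INPUTS[gesture]
        self.stored.append(info)
        if info == EXTEND_COMMAND:
            self.period += self.extension  # mode control extends the period

    def run(self, detections):
        """detections: (seconds since the mode started, gesture), in time order."""
        for t, gesture in detections:
            if t > self.period:  # the reception mode has already ended
                break
            self.store(gesture)
        return self.stored       # used by the processing unit afterwards
```

With the default 3.0-second period, the series (0.5 s, gesture2), (1.2 s, gesture1), (2.0 s, gesture4), (3.5 s, gesture3) stores "2", "1", "EXTEND", and then "3", because storing the extension command stretches the period to 5.0 seconds; without that command the last gesture would fall outside the period and be ignored.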


Abstract

Provided is a display device, comprising a display unit, a detection unit, a mode control unit, a storage unit, a storage control unit, and a processing unit. The detection unit comprises a detection region which is in a different position from the display unit, and is capable of differentiating and detecting two or more predetermined gestures. The mode control unit commences an acceptance mode of accepting a gesture input, and ends the acceptance mode after a prescribed interval has elapsed. After the commencement of the acceptance mode, the storage control unit carries out a control which causes the storage unit to store, for a series of gestures which are executed a plurality of times, input information which indicates an input which has been assigned to the gestures, in sequence beginning with the gesture which has been first detected by the detection unit. After the ending of the acceptance mode, the processing unit carries out a prescribed process, using the input information which is stored in the storage unit of each of the plurality of gestures which configure the series of gestures.

Description

Display device and gesture input method

The present invention relates to a display device capable of gesture input.

Gesture input means operating a display device (for example, a terminal device or a game machine) by body or hand movements. In the case of a smartphone, for example, gesture input can be performed by touching the screen.

A user may not want to touch the screen, for example when the user's hands are dirty. Gesture input that does not use the screen has therefore been proposed. For example, Patent Literature 1 discloses a display device that includes a display, a motion reception unit, and a display control unit that controls the display information shown on the display according to the motion received by the motion reception unit.

In Patent Literature 1, the motion reception unit accepts only a hand motion moving leftward and a hand motion moving rightward, so only simple operations are possible. Because inputs corresponding to a plurality of key operations cannot be made, personal authentication by password input, for example, is impossible.

Personal authentication can be performed using devices such as a hardware keyboard, a fingerprint authentication device, or a vein sensor. However, using such a device for personal authentication on a portable or wearable terminal is inconvenient because the device must be carried around. When personal authentication uses a barcode or QR code (registered trademark), there is a risk of the code being lost or stolen. When personal authentication uses voice recognition, there is a risk that the password will be overheard by others.

Further improvement of gesture input that does not use the screen is therefore desired, so that password input and the like become possible.

JP 2010-129069 A

An object of the present invention is to provide a display device with improved screen-free gesture input, and a gesture input method applied to this display device.

A display device according to one aspect of the present invention includes a display unit, a detection unit, a mode control unit, a storage unit, a storage control unit, and a processing unit. The detection unit has a detection region at a position different from the display unit and can distinguish and detect two or more predetermined gestures. The mode control unit starts a reception mode for receiving gesture input and ends the reception mode after a predetermined period has elapsed. For a series of gestures executed a plurality of times after the reception mode starts, the storage control unit controls the storage unit to store input information indicating the input pre-assigned to each gesture, in order from the gesture first detected by the detection unit. After the reception mode ends, the processing unit performs a predetermined process using the input information, stored in the storage unit, of each of the plurality of gestures constituting the series of gestures.
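
As a minimal sketch of the processing unit's role in the password-authentication case, assume (hypothetically) that each stored piece of input information is one password element; the registered password and the function name below are illustrative assumptions, not part of the invention:

```python
# Illustrative only: the registered password value is an assumption.
REGISTERED_PASSWORD = ["2", "1", "4"]

def authenticate(stored_inputs):
    """Predetermined process: compare the input information of each gesture,
    in detection order, against the registered password elements."""
    return stored_inputs == REGISTERED_PASSWORD
```

For example, the stored sequence ["2", "1", "4"] authenticates successfully, while an incomplete sequence does not.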

The above and other objects, features, and advantages of the present invention will become apparent from the following detailed description and the accompanying drawings.

FIG. 1 is a perspective view showing the structural configuration of the HMD according to the present embodiment.
FIG. 2 is a front view showing the structural configuration of the HMD according to the present embodiment.
FIG. 3 is a schematic cross-sectional view showing the configuration of the display unit provided in the HMD according to the present embodiment.
FIG. 4 is a diagram showing the configuration of the proximity sensor provided in the HMD according to the present embodiment.
FIG. 5 is a block diagram showing the electrical configuration of the HMD according to the present embodiment.
FIG. 6 is a front view of the HMD according to the present embodiment as worn.
FIG. 7 is a side view of the HMD according to the present embodiment as worn.
FIG. 8 is a partial top view of the HMD according to the present embodiment as worn.
FIG. 9 is a diagram showing an example of an image that the user views through the see-through image display unit.
FIG. 10 is a diagram showing an example of the output of the proximity sensor provided in the HMD according to the present embodiment.
FIG. 11 is an explanatory diagram explaining the relationship between gestures and input information.
FIG. 12 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 1 is made.
FIG. 13 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 2 is made.
FIG. 14 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 3 is made.
FIG. 15 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 4 is made.
FIG. 16 is a flowchart explaining the operation of the HMD according to the present embodiment when a gesture input is made.
FIG. 17 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when a series of gestures (gesture 2, gesture 1, gesture 4) is made for password input.
FIG. 18 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 11 is made to switch to the next screen.
FIG. 19 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when a series of gestures (gesture 11, gesture 12, gesture 11) indicating "bye-bye" is made.
FIG. 20 is a flowchart explaining the operation when a gesture input is made in Modification 2 of the present embodiment.
FIG. 21 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when a gesture is made to switch to the next screen.
FIG. 22 is a flowchart explaining the operation when a gesture input is made in Modification 3 of the present embodiment.
FIG. 23 is a screen diagram showing a first example of a screen displayed on the image display unit in Modification 5 of the present embodiment.
FIG. 24 is a screen diagram showing a second example of a screen displayed on the image display unit in Modification 5 of the present embodiment.
FIG. 25 is a screen diagram showing a third example of a screen displayed on the image display unit in Modification 5 of the present embodiment.

Hereinafter, an embodiment of the present invention will be described with reference to the drawings. In each figure, components given the same reference sign are identical, and their description is omitted as appropriate.

The display device according to the present embodiment is, for example, a wearable terminal (for example, a head-mounted display (HMD) or a wristwatch-type terminal) or a smart terminal (for example, a smartphone or a tablet terminal). In this specification, a head-mounted display (HMD) is described as an example.

FIG. 1 is a perspective view showing the structural configuration of the HMD 100 according to the present embodiment. FIG. 2 is a front view showing the structural configuration of the HMD 100 according to the present embodiment. FIG. 3 is a schematic cross-sectional view showing the configuration of the display unit 104 provided in the HMD 100 according to the present embodiment. FIG. 4 is a diagram showing the configuration of the proximity sensor 105 provided in the HMD 100 according to the present embodiment. FIG. 5 is a block diagram showing the electrical configuration of the HMD 100 according to the present embodiment. Hereinafter, the right side and the left side of the HMD 100 refer to the right side and the left side as seen by the user wearing the HMD 100.

The structural configuration of the HMD 100 will now be described. Referring to FIGS. 1 and 2, the HMD 100 according to the present embodiment includes a frame 101, which is an example of a head-mounting member for mounting on the head. The frame 101 includes a front part 101a to which two spectacle lenses 102 are attached, and side parts 101b and 101c extending rearward from both ends of the front part 101a. The two spectacle lenses 102 attached to the frame 101 may or may not have refractive power (optical power, the reciprocal of focal length).

A cylindrical main body 103 is fixed to the front part 101a of the frame 101 above the right spectacle lens 102 (it may be on the left side, depending on the user's dominant eye or other factors). The main body 103 is provided with a display unit 104. A display control unit 104DR (FIG. 5), which controls the display of the display unit 104 based on instructions from a control processing unit 121 described later, is disposed in the main body 103. A display unit may be disposed in front of each eye as necessary.

Referring to FIG. 3, the display unit 104 includes an image forming unit 104A and an image display unit 104B. The image forming unit 104A is incorporated in the main body 103 and includes a light source 104a, a one-way diffusing plate 104b, a condenser lens 104c, and a display element 104d. The image display unit 104B, a so-called see-through display member, is an overall plate-shaped component disposed so as to extend downward from the main body 103 and parallel to one spectacle lens 102 (FIG. 1), and has an eyepiece prism 104f, a deflecting prism 104g, and a hologram optical element 104h.

The light source 104a has the function of illuminating the display element 104d and is composed of an RGB-integrated light-emitting diode (LED) that emits light in three wavelength bands whose peak wavelengths and half-value wavelength widths are 462 ± 12 nm (blue light (B light)), 525 ± 17 nm (green light (G light)), and 635 ± 11 nm (red light (R light)).

The display element 104d displays an image by modulating the light emitted from the light source 104a according to image data, and is composed of a transmissive liquid crystal display element having pixels, each serving as a light-transmitting region, arranged in a matrix. The display element 104d may instead be of a reflective type.

The eyepiece prism 104f totally reflects the image light from the display element 104d, which enters through its base end face PL1, between the opposing parallel inner side face PL2 and outer side face PL3, and guides it to the user's pupil via the hologram optical element 104h, while also transmitting external light to the user's pupil. Together with the deflecting prism 104g, it is formed of, for example, an acrylic resin. The eyepiece prism 104f and the deflecting prism 104g sandwich the hologram optical element 104h between inclined faces PL4 and PL5, which are inclined with respect to the inner side face PL2 and the outer side face PL3, and are joined with an adhesive.

The deflecting prism 104g is joined to the eyepiece prism 104f and forms a substantially parallel flat plate integrated with it. If the spectacle lens 102 (FIG. 1) is placed between the display unit 104 and the user's pupil, even a user who normally wears spectacles can observe the image.

The hologram optical element 104h is a volume-phase reflective hologram that diffracts and reflects the image light emitted from the display element 104d (light with wavelengths corresponding to the three primary colors), guides it to the pupil B, and magnifies the image displayed on the display element 104d so that it reaches the user's pupil as a virtual image. The hologram optical element 104h is fabricated so as to diffract (reflect) light in three wavelength ranges of, for example, 465 ± 5 nm (B light), 521 ± 5 nm (G light), and 634 ± 5 nm (R light), expressed as the peak wavelength of diffraction efficiency and the wavelength width at half the maximum diffraction efficiency. Here, the peak wavelength of diffraction efficiency is the wavelength at which the diffraction efficiency peaks, and the wavelength width at half maximum is the wavelength width over which the diffraction efficiency is at least half its peak value.

In the display unit 104 configured as described above, light emitted from the light source 104a is diffused by the one-way diffusing plate 104b, condensed by the condenser lens 104c, and enters the display element 104d. The light entering the display element 104d is modulated pixel by pixel based on the image data input from the display control unit 104DR and emitted as image light, so that a color image is displayed on the display element 104d. The image light from the display element 104d enters the eyepiece prism 104f through its base end face PL1, is totally reflected multiple times between the inner side face PL2 and the outer side face PL3, and reaches the hologram optical element 104h. The light incident on the hologram optical element 104h is reflected there, passes through the inner side face PL2, and reaches the pupil B. At the position of the pupil B, the user can observe a magnified virtual image of the image displayed on the display element 104d and perceive it as a screen formed on the image display unit 104B.

Meanwhile, the eyepiece prism 104f, the deflecting prism 104g, and the hologram optical element 104h transmit almost all external light, so the user can observe the external image (real image) through them. The virtual image of the image displayed on the display element 104d is therefore observed overlapping part of the external image. In this way, the user of the HMD 100 can simultaneously observe the image provided by the display element 104d and the external image via the hologram optical element 104h. When the display unit 104 is in the non-display state, the image display unit 104B is transparent and only the external image is observed. In the present embodiment, the display unit is configured by combining a light source, a liquid crystal display element, and an optical system; however, a self-luminous display element (for example, an organic EL display element) may be used instead of the combination of the light source and the liquid crystal display element. Alternatively, a transmissive organic EL display panel that is transparent in the non-emitting state may be used instead of the combination of the light source, the liquid crystal display element, and the optical system.

Referring to FIGS. 1 and 2, a proximity sensor 105 disposed toward the center of the frame 101 and a lens 106a of a camera 106 disposed toward the side are provided on the front surface of the main body 103, facing forward.

In this specification, a "proximity sensor" is a sensor that, in order to detect that an object, for example a part of a human body (such as a hand or finger), is close to the user's eyes, detects whether the object is present within a detection region in a proximity range in front of the sensor's detection surface and outputs a signal accordingly. The proximity range may be set as appropriate according to the characteristics and preferences of the user; for example, it can be a range within 200 mm from the detection surface of the proximity sensor. If the distance from the proximity sensor is within 200 mm, the user can move a palm or finger into and out of the field of view with a bent arm, so operations can easily be performed by gestures using a hand, a finger, or a pointing tool (for example, a rod-shaped member), and the risk of erroneously detecting a person other than the user, furniture, or the like is reduced.

Proximity sensors are of two types: passive and active. A passive proximity sensor has a detection device that detects invisible light or electromagnetic waves emitted from an object when the object approaches. Passive proximity sensors include pyroelectric sensors, which detect invisible light such as infrared rays emitted from an approaching human body, and capacitance sensors, which detect changes in capacitance between the sensor and an approaching human body. An active proximity sensor has a projection device that projects invisible light or sound waves and a detection device that receives the invisible light or sound waves reflected back from the object. Active proximity sensors include infrared sensors, which project infrared rays and receive the infrared rays reflected by an object; laser sensors, which project laser light and receive the laser light reflected by an object; and ultrasonic sensors, which project ultrasonic waves and receive the ultrasonic waves reflected by an object. A passive proximity sensor does not need to project energy toward the object and therefore excels in low power consumption. An active proximity sensor can improve detection reliability more easily; for example, even when the user wears gloves that do not transmit detection light emitted from the human body, such as infrared light, the user's hand movement can be detected. A plurality of types of proximity sensors may be combined.

Referring to FIG. 4, in the present embodiment a pyroelectric sensor including a plurality of pyroelectric elements arranged in a two-dimensional matrix is used as the proximity sensor 105. The proximity sensor 105 includes four pyroelectric elements RA, RB, RC, and RD arranged in two rows and two columns; it receives invisible light such as infrared light emitted from the human body as detection light, and a corresponding signal is output from each of the pyroelectric elements RA to RD. The intensity of each output of the pyroelectric elements RA to RD changes according to the distance from the light-receiving surface of the proximity sensor 105 to the object: the shorter the distance, the greater the intensity.
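
One plausible way to distinguish sweep gestures from the four element outputs is to compare when each element's output first rises, since a hand passing over the sensor reaches some elements before others. The sketch below is an assumption for illustration, not the embodiment's actual algorithm; in particular, the matrix layout (RA, RB in the top row; RC, RD in the bottom row) and the threshold-crossing times are hypothetical.

```python
# Hedged sketch: classify a sweep gesture from the times at which the outputs
# of pyroelectric elements RA-RD first cross a detection threshold.
# Assumed layout:  RA RB
#                  RC RD
def classify_gesture(rise_times):
    """rise_times: dict mapping element name -> threshold-crossing time (s)."""
    left = min(rise_times["RA"], rise_times["RC"])    # left column reached
    right = min(rise_times["RB"], rise_times["RD"])   # right column reached
    top = min(rise_times["RA"], rise_times["RB"])     # top row reached
    bottom = min(rise_times["RC"], rise_times["RD"])  # bottom row reached
    if abs(left - right) >= abs(top - bottom):        # horizontal sweep dominates
        return "right" if left < right else "left"
    return "down" if top < bottom else "up"           # otherwise vertical sweep
```

For example, if the left-column elements RA and RC rise about 0.2 s before the right-column elements RB and RD, the function reports a left-to-right ("right") sweep.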

Referring to FIGS. 1 and 2, a right sub-body 108-R is attached to the right side part 101b of the frame 101, and a left sub-body 108-L is attached to the left side part 101c of the frame 101. The right sub-body 108-R and the left sub-body 108-L have an elongated plate shape.

The main body 103 and the right sub-body 108-R are connected by wiring HS so that signals can be transmitted, and the right sub-body 108-R is connected to a control unit CTU via a cord CD extending from its rear end.

 Next, the electrical configuration of the HMD 100 will be described. Referring to FIG. 5, the HMD 100 includes the control unit CTU, the display unit 104, a display control unit 104DR, the proximity sensor 105, and the camera 106. The control unit CTU includes a control processing unit 121, an operation unit 122, a storage unit 125, a battery 126, and a power supply circuit 127.

 The display control unit 104DR is a circuit that is connected to the control processing unit 121 and, under its control, drives the image forming unit 104A of the display unit 104 so that the image forming unit 104A forms an image. The image forming unit 104A is as described above.

 The camera 106 is a device that is connected to the control processing unit 121 and generates an image of a subject under its control. The camera 106 comprises, for example, an imaging optical system that forms an optical image of the subject on a predetermined imaging plane, an image sensor whose light-receiving surface coincides with that imaging plane and which converts the optical image of the subject into an electrical signal, and a digital signal processor (DSP) that applies known image processing to the output of the image sensor to generate an image (image data). The imaging optical system comprises one or more lenses, one of which is the lens 106a. The camera 106 outputs the generated image data to the control processing unit 121.

 The proximity sensor 105 is connected to the control processing unit 121. The proximity sensor 105 is as described above and supplies its output to the control processing unit 121.

 The operation unit 122 is a device that is connected to the control processing unit 121 and inputs predetermined instructions, such as turning the power on and off, to the HMD 100; it is, for example, one or more switches to which predetermined functions are assigned.

 The battery 126 is a battery that stores electric power and supplies it; it may be a primary battery or a secondary battery. The power supply circuit 127 is a circuit that supplies the power from the battery 126 to each part of the HMD 100 that requires power, at a voltage appropriate to each part.

 The storage unit 125 is a circuit that is connected to the control processing unit 121 and, under its control, stores various predetermined programs and various predetermined data. The predetermined programs include control processing programs such as a control program that controls each part of the HMD 100 according to the function of that part, and a gesture processing program that determines gestures based on the output of the proximity sensor 105. The storage unit 125 comprises, for example, a ROM (Read Only Memory), which is a non-volatile storage element, and an EEPROM (Electrically Erasable Programmable Read Only Memory), which is a rewritable non-volatile storage element. The storage unit 125 also includes a RAM (Random Access Memory) that serves as the working memory of the control processing unit 121 and stores data generated during execution of the predetermined programs.

 The control processing unit 121 controls each part of the HMD 100 according to the function of that part, determines which of the predetermined gestures has been performed based on the output of the proximity sensor 105, and executes processing according to the determination result. The control processing unit 121 comprises, for example, a CPU (Central Processing Unit) and its peripheral circuits. By executing the control processing program, a control unit 1211, a gesture processing unit 1212, and a processing unit 1213 are functionally configured in the control processing unit 121. Part or all of the control unit 1211, the gesture processing unit 1212, and the processing unit 1213 may instead be implemented in hardware.

 The control unit 1211 controls each part of the HMD 100 according to the function of that part. The control unit 1211 has the functions of a mode control unit 1214 and a storage control unit 1215; these functions are described later.

 The gesture processing unit 1212 determines which of the predetermined gestures has been performed based on the outputs of the plurality of pyroelectric elements of the proximity sensor 105 (in this embodiment, the four pyroelectric elements RA to RD), and notifies the processing unit 1213 of the determination result. The gesture processing unit 1212 and the proximity sensor 105 together constitute a detection unit 128. The detection unit 128 has a detection area SA (FIGS. 7A, 7B, and 8) at a position different from that of the image display unit 104B (an example of the display unit) and detects two or more predetermined gestures while distinguishing among them.

 The processing unit 1213 performs predetermined processing (for example, password authentication) using the determination result of the gesture processing unit 1212. Details of the processing unit 1213 are described later.

 The basic operation of detecting a gesture in the HMD 100 will now be described. FIG. 6 is a front view of the HMD 100 according to this embodiment as worn, FIG. 7A is a side view in the same state, and FIG. 7B is a partial top view in the same state. These figures also show the hand HD of the user US. FIG. 8 shows an example of the image the user sees through the see-through image display unit 104B. FIG. 9 shows an example of the output of the proximity sensor 105 provided in the HMD 100 according to this embodiment: FIG. 9(A) shows the output of the pyroelectric element RA, FIG. 9(B) that of the pyroelectric element RB, FIG. 9(C) that of the pyroelectric element RC, and FIG. 9(D) that of the pyroelectric element RD. In each graph of FIG. 9, the horizontal axis is time and the vertical axis is the output level (intensity). Here, a gesture input is an operation in which at least the hand HD or a finger of the user US enters or leaves the detection area SA of the proximity sensor 105, and is an operation that the gesture processing unit 1212 of the control processing unit 121 of the HMD 100 can detect via the proximity sensor 105.

 Referring to FIG. 8, the screen 104i of the image display unit 104B is arranged so as to overlap the effective visual field EV of the user's eye facing the image display unit 104B (here, so as to lie within the effective visual field EV). The detection area SA of the proximity sensor 105 lies within the visual field of the user's eye facing the image display unit 104B. Preferably, the proximity sensor 105 is positioned and oriented so that the detection area SA lies within the stable fixation field of the user's eye or the field inside it (within about 90 degrees horizontally and about 70 degrees vertically), and more preferably so that the detection area SA overlaps the effective visual field EV, which lies inside the stable fixation field, or the field inside the effective visual field EV (within about 30 degrees horizontally and about 20 degrees vertically).

 FIG. 8 shows an example in which the detection area SA overlaps the screen 104i. By setting the detection area SA of the proximity sensor 105 so that it lies within the visual field of the eyes of the user US while the user US wears the frame 101, which is the head-mounted member, on the head, the user can reliably see the hand entering and leaving the detection area SA of the proximity sensor 105 without moving the eyes, while observing the hand HD through the screen 104i. In particular, placing the detection area SA of the proximity sensor 105 within the stable fixation field or the field inside it lets the user perform gesture input reliably while recognizing the detection area SA even when observing the screen. Placing the detection area SA of the proximity sensor 105 within the effective visual field EV or the field inside it makes gesture input still more reliable, and making the detection area SA overlap the screen 104i makes it more reliable yet.
 When the proximity sensor 105 has a plurality of pyroelectric elements RA to RD, as in this embodiment, the entire light-receiving region of the pyroelectric elements RA to RD is regarded as a single light-receiving unit, and the maximum detection range of that light-receiving unit is regarded as the detection area SA. When the detection area SA of the proximity sensor 105 is set to overlap the screen 104i as in FIG. 8, displaying an image showing the detection area SA on the screen 104i (for example, indicating the extent of the area SA with a solid line) lets the user recognize the detection area SA reliably, so that operation by gestures can be performed more reliably.

 Next, the basic principle of gesture detection will be described. While the proximity sensor 105 is operating, if nothing is present in front of the user US, the proximity sensor 105 receives no invisible light as detection light, and the gesture processing unit 1212 of the control processing unit 121 determines that no gesture is being performed. On the other hand, as shown in FIGS. 7A and 7B, when the user US brings his or her own hand HD close in front of the eyes, the proximity sensor 105 detects the invisible light radiated from the hand HD, and based on the resulting output signal from the proximity sensor 105, the gesture processing unit 1212 determines that a gesture has been performed. In the following, gestures are described as being performed with the hand HD of the user US, but a finger or another body part may be used, and the user US may also perform gestures with a pointing tool made of a material that can radiate invisible light.

 As described above, the proximity sensor 105 has four pyroelectric elements RA to RD arranged in two rows and two columns (see FIG. 4). Therefore, when the user US brings the hand HD toward the front of the HMD 100 from the left, the right, above, or below, the timing of the signals output by the pyroelectric elements RA to RD differs accordingly.

 For example, referring to FIGS. 7A, 7B, and 8, in the case of a gesture in which the user US moves the hand HD from right to left in front of the HMD 100, invisible light radiated from the hand HD enters the proximity sensor 105. In this case, the pyroelectric elements RA and RC receive the invisible light first. Therefore, referring to FIGS. 4 and 9, the signals of the pyroelectric elements RA and RC rise first, and the signals of the pyroelectric elements RB and RD rise after a delay. Thereafter, the signals of the pyroelectric elements RA and RC fall, and the signals of the pyroelectric elements RB and RD fall after a delay. The gesture processing unit 1212 detects this signal timing and determines that the user US has performed a gesture moving the hand HD from right to left.
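The timing-based direction detection described above can be illustrated with a minimal Python sketch. This is not code from the specification: the function name, the assignment of RA/RC to the right side and RB/RD to the left side (per the right-to-left example above), and the sample rise times are illustrative assumptions.

```python
# Hypothetical sketch: infer the horizontal swipe direction from the times
# at which each pyroelectric element's output first rises. Per the example
# in the text, RA and RC face the right side and RB and RD the left side.

def horizontal_direction(rise_times):
    """rise_times: dict mapping element name to the time its output rose."""
    right_side = min(rise_times["RA"], rise_times["RC"])
    left_side = min(rise_times["RB"], rise_times["RD"])
    if right_side < left_side:
        return "right-to-left"   # right-side elements responded first
    elif left_side < right_side:
        return "left-to-right"   # left-side elements responded first
    return "ambiguous"

# As in FIG. 9, RA and RC rise before RB and RD (times are illustrative):
print(horizontal_direction({"RA": 0.10, "RC": 0.11, "RB": 0.25, "RD": 0.26}))
# -> right-to-left
```

A vertical swipe could be classified the same way by comparing the RA/RB pair (upper side) against the RC/RD pair (lower side).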

 This embodiment is described using twelve gestures as an example of the two or more predetermined gestures. "Two or more" means a plurality and is not limited to twelve. FIG. 10 is an explanatory diagram of the relationship between gestures and input information; it contains arrows showing the hand movement of each of the twelve gestures and, for each of the twelve gestures, the input information indicating the input assigned to that gesture in advance.

 Gesture 1 will be described with reference to FIGS. 4, 10, and 11. FIG. 11 is a waveform diagram showing the change in the output levels of the pyroelectric elements RA to RD when gesture 1 is performed. In this waveform diagram, the horizontal axis indicates time and the vertical axis indicates the output level. All threshold values th are the same value.

 Gesture 1 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the upper side of the detection area SA and exits the detection area SA from its lower side. When the user performs gesture 1, the output levels of the pyroelectric elements RA and RB exceed the threshold th; after a delay, the output levels of the pyroelectric elements RC and RD exceed the threshold th; then the output levels of the pyroelectric elements RA and RB fall to the threshold th or below, and after a delay the output levels of the pyroelectric elements RC and RD fall to the threshold th or below. When the gesture processing unit 1212 detects this pattern of output-level changes, the gesture processing unit 1212 (FIG. 5) determines that gesture 1 has been performed. The input information indicating the input assigned in advance to gesture 1 is "1"; by performing gesture 1, the user can input the number 1 to the HMD 100.

 Gesture 2 will be described with reference to FIGS. 4, 10, and 12. FIG. 12 is a waveform diagram showing the change in the output levels of the pyroelectric elements RA to RD when gesture 2 is performed. The horizontal and vertical axes of this waveform diagram are the same as those of FIG. 11.

 Gesture 2 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the lower side of the detection area SA and exits the detection area SA from its upper side. When the user performs gesture 2, the output levels of the pyroelectric elements RC and RD exceed the threshold th; after a delay, the output levels of the pyroelectric elements RA and RB exceed the threshold th; then the output levels of the pyroelectric elements RC and RD fall to the threshold th or below, and after a delay the output levels of the pyroelectric elements RA and RB fall to the threshold th or below. When the gesture processing unit 1212 (FIG. 5) detects this pattern of output-level changes, it determines that gesture 2 has been performed. The input information indicating the input assigned in advance to gesture 2 is "2"; by performing gesture 2, the user can input the number 2 to the HMD 100.

 Gesture 3 will be described with reference to FIGS. 4, 10, and 13. FIG. 13 is a waveform diagram showing the change in the output levels of the pyroelectric elements RA to RD when gesture 3 is performed. The horizontal and vertical axes of this waveform diagram are the same as those of FIG. 11.

 Gesture 3 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the right side of the detection area SA and exits the detection area SA from its lower side. When the user performs gesture 3, the output levels of the pyroelectric elements RA and RC exceed the threshold th; after a delay, the output levels of the pyroelectric elements RB and RD exceed the threshold th; then the output levels of the pyroelectric elements RA and RB fall to the threshold th or below, and after a delay the output levels of the pyroelectric elements RC and RD fall to the threshold th or below. When the gesture processing unit 1212 (FIG. 5) detects this pattern of output-level changes, it determines that gesture 3 has been performed. The input information indicating the input assigned in advance to gesture 3 is "3"; by performing gesture 3, the user can input the number 3 to the HMD 100.

 Gesture 4 will be described with reference to FIGS. 4, 10, and 14. FIG. 14 is a waveform diagram showing the change in the output levels of the pyroelectric elements RA to RD when gesture 4 is performed. The horizontal and vertical axes of this waveform diagram are the same as those of FIG. 11.

 Gesture 4 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the lower side of the detection area SA and exits the detection area SA from its right side. When the user performs gesture 4, the output levels of the pyroelectric elements RC and RD exceed the threshold th; after a delay, the output levels of the pyroelectric elements RA and RB exceed the threshold th; then the output levels of the pyroelectric elements RB and RD fall to the threshold th or below, and after a delay the output levels of the pyroelectric elements RA and RC fall to the threshold th or below. When the gesture processing unit 1212 (FIG. 5) detects this pattern of output-level changes, it determines that gesture 4 has been performed. The input information indicating the input assigned in advance to gesture 4 is "4"; by performing gesture 4, the user can input the number 4 to the HMD 100.

 Waveform diagrams are omitted for gestures 5 to 12, and the waveform changes for these gestures are not described; the reasoning is the same as for gestures 1 to 4.

 Gesture 5 will be described with reference to FIGS. 4 and 10. Gesture 5 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the right side of the detection area SA and exits the detection area SA from its upper side. The input information indicating the input assigned in advance to gesture 5 is "5"; by performing gesture 5, the user can input the number 5 to the HMD 100.

 Gesture 6 will now be described. Gesture 6 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the upper side of the detection area SA and exits the detection area SA from its right side. The input information indicating the input assigned in advance to gesture 6 is "6"; by performing gesture 6, the user can input the number 6 to the HMD 100.

 Gesture 7 will now be described. Gesture 7 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the left side of the detection area SA and exits the detection area SA from its lower side. The input information indicating the input assigned in advance to gesture 7 is "7"; by performing gesture 7, the user can input the number 7 to the HMD 100.

 Gesture 8 will now be described. Gesture 8 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the lower side of the detection area SA and exits the detection area SA from its left side. The input information indicating the input assigned in advance to gesture 8 is "8"; by performing gesture 8, the user can input the number 8 to the HMD 100.

 Gesture 9 will now be described. Gesture 9 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the left side of the detection area SA and exits the detection area SA from its upper side. The input information indicating the input assigned in advance to gesture 9 is "9"; by performing gesture 9, the user can input the number 9 to the HMD 100.

 Gesture 10 will now be described. Gesture 10 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the upper side of the detection area SA and exits the detection area SA from its left side. The input information indicating the input assigned in advance to gesture 10 is "0"; by performing gesture 10, the user can input the number 0 to the HMD 100.

 Gesture 11 will now be described. Gesture 11 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the left side of the detection area SA and exits the detection area SA from its right side. The input information indicating the input assigned in advance to gesture 11 is a "command to switch to the next screen"; by performing gesture 11, the user can input the command to switch to the next screen to the HMD 100.

 Gesture 12 will now be described. Gesture 12 is a gesture in which the hand HD enters the detection area SA (FIGS. 7A, 7B, and 8) from the right side of the detection area SA and exits the detection area SA from its left side. The input information indicating the input assigned in advance to gesture 12 is a "command to switch to the previous screen"; by performing gesture 12, the user can input the command to switch to the previous screen to the HMD 100.

 The input information "0" through "9" are elements that make up a password. These elements are not limited to numbers and may be letters of the alphabet or the like.

 The combinations of gestures 1 to 12 with input information are not limited to the example shown in FIG. 10, and different combinations may be used (for example, the input information "1" may be assigned to gesture 11). Since the gestures that the proximity sensor 105 can detect are not limited to gestures 1 to 12, arbitrary input information may also be assigned to gestures other than these.
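The gesture-to-input assignment of FIG. 10 can be sketched as a simple lookup table. This is an illustrative assumption based on the descriptions above (the names `INPUT_INFO` and `input_for`, and the command strings, are not from the specification), and, as noted, other assignments are possible.

```python
# Hypothetical table of the FIG. 10 assignment: gesture number -> input
# information. Gestures 1-10 yield password digits; 11 and 12 yield
# screen-switching commands.
INPUT_INFO = {
    1: "1", 2: "2", 3: "3", 4: "4", 5: "5",
    6: "6", 7: "7", 8: "8", 9: "9", 10: "0",
    11: "next-screen command",
    12: "previous-screen command",
}

def input_for(gesture):
    # Returns None for a gesture with no assigned input information.
    return INPUT_INFO.get(gesture)

print(input_for(10))  # -> 0 (the digit assigned to gesture 10)
```

Reassigning inputs, as the text allows, would only change the dictionary literal, not the lookup logic.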

 The operation of the HMD 100 according to this embodiment when a gesture input is performed will now be described. FIG. 15 is a flowchart explaining this operation. Referring to FIG. 5, the user operates the operation unit 122 to turn on the power. The display control unit 104DR causes the image display unit 104B to display a password entry screen for password authentication. The description below takes a three-digit password as an example.

 FIG. 16 is a waveform diagram showing the change in the output levels of the pyroelectric elements RA to RD when a series of gestures (gesture 2, gesture 1, gesture 4) is performed to enter a password. Referring to FIGS. 4, 5, and 16, the user enters the password by performing gestures. Suppose, for example, that the user's password is "214". The user places a hand in front of the proximity sensor 105 and performs gesture 2 (FIG. 10), to which the input information "2" is assigned. When the output level of any of the pyroelectric elements RA to RD first exceeds the threshold th, the mode control unit 1214 starts an acceptance mode in which gesture input is accepted (step S1 in FIG. 15). In this case, the acceptance mode is started when the output levels of the pyroelectric elements RC and RD exceed the threshold th. The mode control unit 1214 ends the acceptance mode after a predetermined period (for example, 3 seconds) has elapsed from the start of the acceptance mode. Once the acceptance mode has ended, the user cannot perform gesture input.
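The acceptance-mode timing just described can be sketched as follows. This is a minimal illustration, not the specification's implementation: the class and method names are assumptions, and the 3-second period is the example value from the text.

```python
# Hypothetical sketch of the acceptance mode: the mode opens when any
# element's output first exceeds the threshold (step S1) and closes a
# fixed period later, after which gesture input is no longer accepted.
ACCEPT_PERIOD = 3.0  # seconds; the example value given in the text

class ModeController:
    def __init__(self):
        self.start_time = None  # None while the mode has not yet opened

    def on_sensor_sample(self, t, levels, threshold):
        # Open the acceptance mode on the first above-threshold output.
        if self.start_time is None and any(v > threshold for v in levels.values()):
            self.start_time = t

    def accepting(self, t):
        # Gesture input is accepted only while the mode is open.
        return self.start_time is not None and t - self.start_time < ACCEPT_PERIOD

mc = ModeController()
# RC and RD exceed the threshold first, as in the gesture-2 example:
mc.on_sensor_sample(0.0, {"RA": 0.1, "RB": 0.1, "RC": 0.9, "RD": 0.8}, 0.5)
print(mc.accepting(1.0), mc.accepting(3.5))  # -> True False
```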

 The gesture processing unit 1212 determines from which of the upper, lower, left, and right sides the hand entered the detection area SA (FIGS. 7A, 7B, and 8). Here, because the output levels of the pyroelectric elements RC and RD exceeded the threshold th earlier than those of the pyroelectric elements RA and RB, it determines that the hand entered from the lower side. The storage control unit 1215 stores information indicating "lower side" in the storage unit 125 (step S2 in FIG. 15).

 ジェスチャー処理部1212は、上側、下側、左側、右側のうち、どの側から、手が検出領域SAを出たかを判定する。ここでは、焦電素子RA,RBの出力のレベルが焦電素子RC,RDの出力のレベルよりも後に、しきい値th以下になったので、上側と判定する。記憶制御部1215は、「上側」を示す情報を記憶部125に記憶させる(図15のステップS3)。 The gesture processing unit 1212 determines from which side (upper, lower, left, or right) the hand exited the detection area SA. Here, since the output levels of the pyroelectric elements RA and RB fell to or below the threshold th later than those of the pyroelectric elements RC and RD, the upper side is determined. The storage control unit 1215 stores information indicating “upper side” in the storage unit 125 (step S3 in FIG. 15).

 ジェスチャー処理部1212は、ステップS2及びステップS3の結果を用いて、ジェスチャーの種類を判定する(図15のステップS4)。ジェスチャーの種類とは、図10で説明した12個のジェスチャーである。記憶部125は、ジェスチャー1~ジェスチャー12と、これらに割り当てられた入力情報との対応関係を示すテーブル(以下、図10のテーブル)を予め記憶している。 The gesture processing unit 1212 determines the type of gesture using the results of steps S2 and S3 (step S4 in FIG. 15). The types of gestures are the 12 gestures described with reference to FIG. 10. The storage unit 125 stores in advance a table (hereinafter, the table of FIG. 10) indicating the correspondence between gestures 1 to 12 and the input information assigned to them.

 ジェスチャー処理部1212は、記憶部125に記憶されている、ステップS2及びステップS3の判定結果を読み出す。ここでは、ステップS2の判定結果は、「下側」であり、ステップS3の判定結果は、「上側」である。従って、手が、検出領域SAの下側から検出領域SAに入り、検出領域SAの上側から検出領域SAを出たことになるので、ジェスチャー処理部1212は、ジェスチャー2と判定する。なお、ジェスチャー処理部1212は、ジェスチャー1~ジェスチャー12のいずれにも該当しないと判定したとき、エラー処理をする。これにより、表示制御部104DRは、正しいジェスチャーを促す画面を画像表示部104Bに表示させる。 The gesture processing unit 1212 reads the determination results of steps S2 and S3 stored in the storage unit 125. Here, the determination result of step S2 is “lower side” and the determination result of step S3 is “upper side”. Since the hand entered the detection area SA from its lower side and exited from its upper side, the gesture processing unit 1212 determines gesture 2. When the gesture processing unit 1212 determines that the gesture corresponds to none of gestures 1 to 12, it performs error processing. In that case, the display control unit 104DR causes the image display unit 104B to display a screen prompting a correct gesture.
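For illustration, steps S2 to S4 — determining the entry side, the exit side, and then the gesture type — can be sketched as follows. This is a hypothetical partial implementation: only the four gestures discussed in this section are tabulated, and all names are invented.

```python
# Hypothetical sketch of steps S2-S4. The entry side is the side whose
# pyroelectric element pair crosses the threshold th first; the exit
# side is the pair whose outputs fall below th last. The (entry, exit)
# pair then selects a gesture from the table of FIG. 10 (partial here).

SIDE_ELEMENTS = {
    "upper": ("RA", "RB"),
    "lower": ("RC", "RD"),
    "left":  ("RB", "RD"),
    "right": ("RA", "RC"),
}

# (entry side, exit side) -> gesture number (subset of the 12 gestures)
GESTURE_TABLE = {
    ("upper", "lower"): 1,    # gesture 1
    ("lower", "upper"): 2,    # gesture 2
    ("left",  "right"): 11,   # gesture 11
    ("right", "left"):  12,   # gesture 12
}

def classify_gesture(entry_side, exit_side):
    # None corresponds to "none of gestures 1 to 12" -> error processing
    return GESTURE_TABLE.get((entry_side, exit_side))

assert classify_gesture("lower", "upper") == 2   # the example above
assert classify_gesture("upper", "lower") == 1
assert classify_gesture("upper", "upper") is None
```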

 記憶制御部1215は、図10のテーブルを参照し、ステップS4で判定されたジェスチャーに割り当てられた入力情報を、記憶部125に記憶させる(図15のステップS5)。ここでは、ジェスチャー2に割り当てられた入力情報(「2」)が、記憶部125に記憶される。 The storage control unit 1215 refers to the table of FIG. 10 and stores the input information assigned to the gesture determined in step S4 in the storage unit 125 (step S5 in FIG. 15). Here, the input information (“2”) assigned to gesture 2 is stored in the storage unit 125.

 ジェスチャー処理部1212は、受付モードの開始(図15のステップS1)から所定期間が経過したか否かを判断する(図15のステップS6)。所定期間が経過していない場合(ステップS6でNo)、ジェスチャー処理部1212は、焦電素子RA~RDのうち、いずれかの出力のレベルがしきい値thを超えたか否かを判断する(図15のステップS7)。すなわち、ジェスチャー処理部1212は、次のジェスチャーがされるまで、待機する。ジェスチャー処理部1212は、焦電素子RA~RDの全ての出力のレベルがしきい値th以下と判断したとき(ステップS7でNo)、ジェスチャー処理部1212は、ステップS6の処理をする。 The gesture processing unit 1212 determines whether the predetermined period has elapsed from the start of the reception mode (step S1 in FIG. 15) (step S6 in FIG. 15). If the predetermined period has not elapsed (No in step S6), the gesture processing unit 1212 determines whether the output level of any of the pyroelectric elements RA to RD has exceeded the threshold th (step S7 in FIG. 15). That is, the gesture processing unit 1212 waits until the next gesture is made. When the gesture processing unit 1212 determines that all the output levels of the pyroelectric elements RA to RD are at or below the threshold th (No in step S7), it performs the process of step S6.

 ジェスチャー処理部1212は、焦電素子RA~RDのうち、いずれかの出力のレベルがしきい値thを超えたと判断したとき(ステップS7でYes)、ジェスチャー処理部1212は、ステップS2の処理をする。ここでは、ユーザが、入力情報(「1」)が割り当てられたジェスチャー1をするので、ジェスチャー処理部1212は、ステップS2の処理をする。 When the gesture processing unit 1212 determines that the output level of any of the pyroelectric elements RA to RD has exceeded the threshold th (Yes in step S7), it performs the process of step S2. Here, the user performs gesture 1, to which the input information (“1”) is assigned, so the gesture processing unit 1212 performs the process of step S2.

 ジェスチャー処理部1212は、上側、下側、左側、右側のうち、どの側から、手が検出領域SAに入ったかを判定する。ここでは、焦電素子RA,RBの出力のレベルが焦電素子RC,RDの出力のレベルよりも早くしきい値thを超えたので、上側と判定する。記憶制御部1215は、「上側」を示す情報を記憶部125に記憶させる(ステップS2)。 The gesture processing unit 1212 determines from which of the upper side, the lower side, the left side, and the right side the hand has entered the detection area SA. Here, since the output level of the pyroelectric elements RA and RB exceeds the threshold th earlier than the output level of the pyroelectric elements RC and RD, it is determined to be the upper side. The storage control unit 1215 stores information indicating “upper side” in the storage unit 125 (step S2).

 ジェスチャー処理部1212は、上側、下側、左側、右側のうち、どの側から、手が検出領域SAを出たかを判定する。ここでは、焦電素子RC,RDの出力のレベルが焦電素子RA,RBの出力のレベルよりも後に、しきい値th以下になったので、下側と判定する。記憶制御部1215は、「下側」を示す情報を記憶部125に記憶させる(ステップS3)。 The gesture processing unit 1212 determines from which side (upper, lower, left, or right) the hand exited the detection area SA. Here, since the output levels of the pyroelectric elements RC and RD fell to or below the threshold th later than those of the pyroelectric elements RA and RB, the lower side is determined. The storage control unit 1215 stores information indicating “lower side” in the storage unit 125 (step S3).

 ジェスチャー処理部1212は、ステップS2及びステップS3の結果を用いて、ジェスチャーの種類を判定する(ステップS4)。詳しく説明すると、ジェスチャー処理部1212は、記憶部125に記憶されている、ステップS2及びステップS3の判定結果を読み出す。ここでは、ステップS2の判定結果は、「上側」であり、ステップS3の判定結果は、「下側」である。従って、手が、検出領域SAの上側から検出領域SAに入り、検出領域SAの下側から検出領域SAを出たことになるので、ジェスチャー処理部1212は、ジェスチャー1と判定する。 The gesture processing unit 1212 determines the type of gesture using the results of steps S2 and S3 (step S4). More specifically, the gesture processing unit 1212 reads the determination results of steps S2 and S3 stored in the storage unit 125. Here, the determination result of step S2 is “upper side” and the determination result of step S3 is “lower side”. Since the hand entered the detection area SA from its upper side and exited from its lower side, the gesture processing unit 1212 determines gesture 1.

 記憶制御部1215は、図10のテーブルを参照し、ステップS4で判定されたジェスチャーに割り当てられた入力情報を、記憶部125に記憶させる(ステップS5)。ここでは、ジェスチャー1に割り当てられた入力情報(「1」)が、記憶部125に記憶される。 The storage control unit 1215 refers to the table of FIG. 10 and stores the input information assigned to the gesture determined in step S4 in the storage unit 125 (step S5). Here, the input information (“1”) assigned to gesture 1 is stored in the storage unit 125.

 ジェスチャー4については、詳しい説明を省略するが、ジェスチャー2及びジェスチャー1と同様にして、ジェスチャー4に割り当てられた入力情報(「4」)が、記憶部125に記憶される。記憶制御部1215によって、入力情報(「2」)、入力情報(「1」)、入力情報(「4」)が記憶部125に記憶されている。このように、記憶制御部1215は、受付モードの開始後(図15のステップS1)、複数回実行される一連のジェスチャー(ジェスチャー2、ジェスチャー1、ジェスチャー4)について、検出部128によって最初に検出されたジェスチャー(ジェスチャー2)から順番に、当該ジェスチャーに予め割り当てられていた入力を示す入力情報を、記憶部125に記憶させる制御をする。 A detailed description of gesture 4 is omitted; as with gesture 2 and gesture 1, the input information (“4”) assigned to gesture 4 is stored in the storage unit 125. The storage control unit 1215 has thus stored the input information (“2”), the input information (“1”), and the input information (“4”) in the storage unit 125. In this way, after the start of the reception mode (step S1 in FIG. 15), for a series of gestures performed a plurality of times (gesture 2, gesture 1, gesture 4), the storage control unit 1215 controls the storage unit 125 to store the input information indicating the input assigned in advance to each gesture, in order from the gesture first detected by the detection unit 128 (gesture 2).

 モード制御部1214は、受付モードを開始してから、所定期間(例えば、3秒)の経過後、受付モードを終了させる。従って、ユーザは、ジェスチャー2、ジェスチャー1及びジェスチャー4で構成される一連のジェスチャーを所定期間内にしなければならない。 The mode control unit 1214 ends the reception mode after elapse of a predetermined period (for example, 3 seconds) after starting the reception mode. Therefore, the user must make a series of gestures including gesture 2, gesture 1, and gesture 4 within a predetermined period.

 受付モードの開始から所定期間が経過したとき(ステップS6でYes)、処理部1213は、記憶部125に記憶されている、一連のジェスチャーを構成する複数のジェスチャーのそれぞれの入力情報(ジェスチャー2の入力情報(「2」)、ジェスチャー1の入力情報(「1」)、ジェスチャー4の入力情報(「4」))を用いて、所定の処理をする(図15のステップS8)。パスワード入力画面が画像表示部104Bに表示されているので(すなわちパスワード認証モード)、処理部1213は、所定の処理として、記憶部125に記憶されている入力情報を用いてパスワード認証をする。ここでは、処理部1213は、入力情報(「2」)、入力情報(「1」)、入力情報(「4」)を用いて、入力されたパスワードが「214」として、パスワード認証をする。 When the predetermined period has elapsed from the start of the reception mode (Yes in step S6), the processing unit 1213 performs a predetermined process using the input information of each of the gestures constituting the series of gestures stored in the storage unit 125 (the input information of gesture 2 (“2”), the input information of gesture 1 (“1”), and the input information of gesture 4 (“4”)) (step S8 in FIG. 15). Since the password input screen is displayed on the image display unit 104B (that is, in the password authentication mode), the processing unit 1213 performs, as the predetermined process, password authentication using the input information stored in the storage unit 125. Here, the processing unit 1213 uses the input information (“2”), (“1”), and (“4”) to authenticate the entered password “214”.
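A minimal sketch of step S8 in the password authentication mode (hypothetical names; the comparison logic is an assumption, since the specification only states that authentication is performed using the stored input information):

```python
# Hypothetical sketch of step S8: the input information stored for each
# gesture of the series is concatenated in detection order and compared
# with the registered password.

def authenticate(stored_inputs, registered_password):
    return "".join(stored_inputs) == registered_password

# gesture 2, gesture 1, gesture 4 -> input information "2", "1", "4"
assert authenticate(["2", "1", "4"], "214")
assert not authenticate(["2", "1", "4"], "215")
```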

 処理部1213は、パスワード認証が失敗したと判断したとき、エラー処理をする。これにより、表示制御部104DRは、パスワード認証が失敗したことを示す画面を画像表示部104Bに表示させる。 When the processing unit 1213 determines that the password authentication has failed, it performs error processing. As a result, the display control unit 104DR causes the image display unit 104B to display a screen indicating that password authentication has failed.

 処理部1213は、パスワード認証が成功したと判断したとき、表示制御部104DRは、画像表示部104Bに表示する画面を、パスワード入力画面から初期画面に切り替える。 When the processing unit 1213 determines that the password authentication is successful, the display control unit 104DR switches the screen displayed on the image display unit 104B from the password input screen to the initial screen.

 初期画面から次の画面への切り替えについて説明する。パスワード入力では、一連の三つのジェスチャー入力が実行されたが、画面の切り替えを命令する入力では、一つのジェスチャー入力が実行される。 Switching from the initial screen to the next screen will now be described. For the password input, a series of three gesture inputs was performed, whereas for the input commanding a screen switch, a single gesture input is performed.

 図17は、次の画面に切り替えるために、ジェスチャー11がされた場合において、焦電素子RA~RDの出力のレベルの変化を示す波形図である。図4、図5及び図17を参照して、ユーザがジェスチャー11(図10)をすることにより、次の画面に切り替える命令をHMD100に入力する。詳しく説明すると、パスワード入力の場合と同様に、いずれかの焦電素子RA~RDの出力のレベルがしきい値thを最初に超えたとき、モード制御部1214は、ジェスチャー入力を受け付ける受付モードを開始する(図15のステップS1)。この場合は、焦電素子RB,RDの出力のレベルがしきい値thを超えることにより、受付モードが開始される。パスワード入力の場合と同様に、モード制御部1214は、受付モードの開始から所定期間(例えば、3秒)の経過後、受付モードを終了させる。 FIG. 17 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 11 is made to switch to the next screen. Referring to FIGS. 4, 5, and 17, the user performs gesture 11 (FIG. 10) to input a command to switch to the next screen to the HMD 100. More specifically, as in the case of password input, when the output level of any of the pyroelectric elements RA to RD first exceeds the threshold th, the mode control unit 1214 starts the reception mode for accepting gesture input (step S1 in FIG. 15). In this case, the reception mode is started when the output levels of the pyroelectric elements RB and RD exceed the threshold th. As in the case of password input, the mode control unit 1214 ends the reception mode after the predetermined period (for example, 3 seconds) has elapsed from the start of the reception mode.

 パスワード入力の場合と同様に、ジェスチャー処理部1212は、上側、下側、左側、右側のうち、どの側から、手が検出領域SA(図7A、図7B、図8)に入ったかを判定する。ここでは、焦電素子RB,RDの出力のレベルが焦電素子RA,RCの出力のレベルよりも早くしきい値thを超えたので、左側と判定する。記憶制御部1215は、「左側」を示す情報を記憶部125に記憶させる(図15のステップS2)。 As in the case of password input, the gesture processing unit 1212 determines from which side (upper, lower, left, or right) the hand entered the detection area SA (FIGS. 7A, 7B, and 8). Here, since the output levels of the pyroelectric elements RB and RD exceeded the threshold th earlier than those of the pyroelectric elements RA and RC, the left side is determined. The storage control unit 1215 stores information indicating “left side” in the storage unit 125 (step S2 in FIG. 15).

 パスワード入力の場合と同様に、ジェスチャー処理部1212は、上側、下側、左側、右側のうち、どの側から、手が検出領域SAを出たかを判定する。ここでは、焦電素子RA,RCの出力のレベルが焦電素子RB,RDの出力のレベルよりも後に、しきい値th以下になったので、右側と判定する。記憶制御部1215は、「右側」を示す情報を記憶部125に記憶させる(図15のステップS3)。 As in the case of password input, the gesture processing unit 1212 determines from which side (upper, lower, left, or right) the hand exited the detection area SA. Here, since the output levels of the pyroelectric elements RA and RC fell to or below the threshold th later than those of the pyroelectric elements RB and RD, the right side is determined. The storage control unit 1215 stores information indicating “right side” in the storage unit 125 (step S3 in FIG. 15).

 パスワード入力の場合と同様に、ジェスチャー処理部1212は、ステップS2及びステップS3の結果を用いて、ジェスチャーの種類を判定する(図15のステップS4)。詳しく説明すると、ジェスチャー処理部1212は、記憶部125に記憶されている、ステップS2及びステップS3の判定結果を読み出す。ここでは、ステップS2の判定結果は、「左側」であり、ステップS3の判定結果は、「右側」である。従って、手が、検出領域SAの左側から検出領域SAに入り、検出領域SAの右側から検出領域SAを出たことになるので、ジェスチャー処理部1212は、ジェスチャー11と判定する。 As in the case of password input, the gesture processing unit 1212 determines the type of gesture using the results of steps S2 and S3 (step S4 in FIG. 15). More specifically, the gesture processing unit 1212 reads the determination results of steps S2 and S3 stored in the storage unit 125. Here, the determination result of step S2 is “left side” and the determination result of step S3 is “right side”. Since the hand entered the detection area SA from its left side and exited from its right side, the gesture processing unit 1212 determines gesture 11.

 パスワード入力の場合と同様に、記憶制御部1215は、図10のテーブルを参照し、ステップS4で判定されたジェスチャーに割り当てられた入力情報を、記憶部125に記憶させる(図15のステップS5)。ここでは、ジェスチャー11に割り当てられた入力情報(「次の画面に切り替える命令」)が、記憶部125に記憶される。 As in the case of password input, the storage control unit 1215 refers to the table of FIG. 10 and stores the input information assigned to the gesture determined in step S4 in the storage unit 125 (step S5 in FIG. 15). Here, the input information (“command to switch to the next screen”) assigned to gesture 11 is stored in the storage unit 125.

 パスワード入力の場合と同様に、ジェスチャー処理部1212は、受付モードの開始(図15のステップS1)から所定期間が経過したか否かを判断する(図15のステップS6)。所定期間が経過していない場合(ステップS6でNo)、ジェスチャー処理部1212は、焦電素子RA~RDのうち、いずれかの出力のレベルがしきい値thを超えたか否かを判断する(図15のステップS7)。ジェスチャー処理部1212は、焦電素子RA~RDのいずれの出力のレベルもしきい値th以下と判断したとき(ステップS7でNo)、ジェスチャー処理部1212は、ステップS6の処理をする。ここでは、ジェスチャー11だけがされるので、ジェスチャー処理部1212は、ステップS6の処理をする。 As in the case of password input, the gesture processing unit 1212 determines whether the predetermined period has elapsed from the start of the reception mode (step S1 in FIG. 15) (step S6 in FIG. 15). If the predetermined period has not elapsed (No in step S6), the gesture processing unit 1212 determines whether the output level of any of the pyroelectric elements RA to RD has exceeded the threshold th (step S7 in FIG. 15). When the gesture processing unit 1212 determines that none of the output levels of the pyroelectric elements RA to RD exceeds the threshold th (No in step S7), it performs the process of step S6. Here, since only gesture 11 is made, the gesture processing unit 1212 performs the process of step S6.

 受付モードの開始から所定期間が経過したとき(ステップS6でYes)、処理部1213は、所定の処理をする(図15のステップS8)。処理部1213は、所定の処理として、次の画面に切り替えるコマンドを生成する。このコマンドにより、表示制御部104DRは、画像表示部104Bに表示させる画面を、初期画面から次の画面に切り替える。以下、詳しい説明は省略するが、ジェスチャー11がされる毎に、画面が、次の画面に切り替えられ、ジェスチャー12がされる毎に、画面が、前の画面に切り替えられる。 When the predetermined period has elapsed from the start of the reception mode (Yes in step S6), the processing unit 1213 performs a predetermined process (step S8 in FIG. 15). As the predetermined process, the processing unit 1213 generates a command for switching to the next screen. With this command, the display control unit 104DR switches the screen displayed on the image display unit 104B from the initial screen to the next screen. Although a detailed description is omitted below, each time gesture 11 is performed, the screen is switched to the next screen, and each time gesture 12 is performed, the screen is switched to the previous screen.
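As a sketch of how the commands assigned to gestures 11 and 12 could drive screen switching (hypothetical names and screen list; the clamping behavior at the ends is an assumption, not stated in the specification):

```python
# Hypothetical sketch: gesture 11 switches to the next screen and
# gesture 12 to the previous screen, clamped at the ends of the list.

def switch_screen(screens, current_index, gesture):
    if gesture == 11:                                   # next screen
        return min(current_index + 1, len(screens) - 1)
    if gesture == 12:                                   # previous screen
        return max(current_index - 1, 0)
    return current_index

screens = ["initial", "second", "third"]
assert switch_screen(screens, 0, 11) == 1   # initial -> second
assert switch_screen(screens, 1, 12) == 0   # second -> initial
```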

 本実施形態の主な効果を説明する。図5、図15及び図16を参照して、検出部128は、画像表示部104Bに表示された画面を用いてジェスチャーを検出するのではなく、画像表示部104Bと異なる位置に検出領域SA(図7A、図7B、図8)を有し、予め定められた二以上のジェスチャーを区別して検出する。処理部1213は、一連のジェスチャーを構成する三つのジェスチャーのそれぞれの入力情報(つまり、三つの入力情報)を用いて、パスワード認証(所定の処理の一例)をする(ステップS8)。このため、画面を用いないジェスチャー入力を用いて、パスワード認証等のような複数の入力を必要とする処理をすることができる。従って、本実施形態によれば、画面を用いないジェスチャー入力を改善することができる。 The main effects of this embodiment will be described. Referring to FIGS. 5, 15, and 16, the detection unit 128 does not detect gestures using the screen displayed on the image display unit 104B; it has the detection area SA (FIGS. 7A, 7B, and 8) at a position different from the image display unit 104B and detects two or more predetermined gestures distinguishably. The processing unit 1213 performs password authentication (an example of the predetermined process) using the input information of each of the three gestures constituting the series of gestures (that is, three pieces of input information) (step S8). Therefore, a process requiring a plurality of inputs, such as password authentication, can be performed using gesture input that does not use a screen. Thus, according to this embodiment, gesture input without using a screen can be improved.

 図5を参照して、本実施形態では、近接センサ105とジェスチャー処理部1212とを備える検出部128によって、予め定められた二以上のジェスチャーを区別して検出している。検出部128は、この構成に限定されない。例えば、カメラ106(二次元撮像素子)と、カメラ106が撮像した画像に対して、所定の画像処理をして、ジェスチャーを認識する画像処理部と、を備える検出部128でもよい。 Referring to FIG. 5, in the present embodiment, two or more predetermined gestures are distinguished and detected by a detection unit 128 including a proximity sensor 105 and a gesture processing unit 1212. The detection unit 128 is not limited to this configuration. For example, the detection unit 128 may include a camera 106 (two-dimensional imaging device) and an image processing unit that performs predetermined image processing on an image captured by the camera 106 and recognizes a gesture.

 本実施形態の変形例について、本実施形態と相違する点を主に説明する。変形例として、変形例1から変形例5がある。変形例1は、直感的な操作により、入力することができる。例えば、一連のジェスチャーが、「バイバイ」を示す手の動きの場合、「初期画面に切り替える命令」の入力とする。命令は、これに限らず、例えば、「一つ前の入力を取り消す命令」でもよい。 Modifications of this embodiment will now be described, focusing mainly on the differences from the embodiment. There are five modifications, Modification 1 to Modification 5. Modification 1 allows input by an intuitive operation. For example, when a series of gestures is a hand movement indicating “bye-bye”, it is taken as the input of a “command to switch to the initial screen”. The command is not limited to this and may be, for example, a “command to cancel the previous input”.

 「バイバイ」を示す手の動きは、図10に示すジェスチャー11がされ、次に、ジェスチャー12がされ、次に、ジェスチャー11がされる場合と、これと逆、すなわち、ジェスチャー12がされ、次に、ジェスチャー11がされ、次に、ジェスチャー12がされる場合とがある。前者を例にして説明する。 The hand movement indicating “bye-bye” is either gesture 11 shown in FIG. 10, then gesture 12, then gesture 11, or the reverse, that is, gesture 12, then gesture 11, then gesture 12. The former will be described as an example.

 図18は、「バイバイ」を示す一連のジェスチャー(ジェスチャー11、ジェスチャー12、ジェスチャー11)がされた場合において、焦電素子RA~RDの出力のレベルの変化を示す波形図である。図5、図15及び図18を参照して、変形例1は、パスワード入力の場合と同様に、ステップS1~ステップS7の処理をする。受付モードの開始から所定期間が経過したとき(ステップS6でYes)、処理部1213は、所定の処理をする(ステップS8)。処理部1213は、所定の処理として、初期画面に切り替えるコマンドを生成する。このコマンドにより、表示制御部104DRは、画像表示部104Bに表示させる画面を、現在の画面から初期画面に切り替える。 FIG. 18 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when a series of gestures indicating “bye-bye” (gesture 11, gesture 12, gesture 11) is performed. Referring to FIGS. 5, 15, and 18, Modification 1 performs the processes of steps S1 to S7 as in the case of password input. When the predetermined period has elapsed from the start of the reception mode (Yes in step S6), the processing unit 1213 performs a predetermined process (step S8). As the predetermined process, the processing unit 1213 generates a command for switching to the initial screen. With this command, the display control unit 104DR switches the screen displayed on the image display unit 104B from the current screen to the initial screen.

 変形例2を説明する。図17に示すように、本実施形態では、所定期間(すなわち、受付モードの有効期間)が固定されている。パスワード入力のために、一連の三つのジェスチャーに必要となる時間が所定期間として設定されている。このため、一つのジェスチャー入力又は二つのジェスチャー入力による入力情報を用いて、処理部1213が所定の処理をする場合(例えば、次の画面に切り替えるコマンドの生成)、ジェスチャー終了後、所定の処理の開始までに待ち時間が発生する。そこで、変形例2は、次のジェスチャーの開始が検出されない状態が続くとき、所定期間の経過を待たずに、受付モードを終了し、所定の処理をする。 Modification 2 will be described. As shown in FIG. 17, in this embodiment, the predetermined period (that is, the valid period of the reception mode) is fixed. The time required for a series of three gestures for password input is set as the predetermined period. Therefore, when the processing unit 1213 performs a predetermined process using the input information of one or two gesture inputs (for example, generating a command to switch to the next screen), a waiting time occurs between the end of the gesture and the start of the predetermined process. In Modification 2, therefore, when the start of the next gesture remains undetected, the reception mode is ended and the predetermined process is performed without waiting for the predetermined period to elapse.

 図19は、本実施形態の変形例2において、ジェスチャー入力がされた場合の動作について説明するフローチャートである。図20は、次の画面に切り替えるために、ジェスチャー11がされた場合において、焦電素子RA~RDの出力のレベルの変化を示す波形図である。図19に示すフローチャートが、図15に示すフローチャートと異なる点は、ステップS6とステップS7との間に、ステップS9が追加されていることである。 FIG. 19 is a flowchart for explaining the operation when a gesture input is made in Modification 2 of this embodiment. FIG. 20 is a waveform diagram showing changes in the output levels of the pyroelectric elements RA to RD when gesture 11 is made to switch to the next screen. The flowchart shown in FIG. 19 differs from the flowchart shown in FIG. 15 in that step S9 is added between steps S6 and S7.

 図5、図19及び図20を参照して、検出部128が、所定期間内にジェスチャー11を検出した後、ジェスチャー処理部1212は、受付モードの開始(ステップS1)から所定期間が経過したか否かを判断する(ステップS6)。所定期間が経過していない場合(ステップS6でNo)、ジェスチャー処理部1212は、次のジェスチャーの開始を検出していない無検出期間が予め定められた値に到達したか否かを判断する(ステップS9)。無検出期間は、焦電素子RA~RDの全ての出力のレベルが、例えば、しきい値th以下の期間である。焦電素子RA~RDの全ての出力のレベルが、0の期間を無検出期間としてもよい。予め定められた値は、所定期間(例えば、3秒)より小さい値であり(例えば、0.5秒)、一つのジェスチャーが終了した後、次のジェスチャーが開始するまでに要する時間等を考慮して設定される。 Referring to FIGS. 5, 19, and 20, after the detection unit 128 detects gesture 11 within the predetermined period, the gesture processing unit 1212 determines whether the predetermined period has elapsed from the start of the reception mode (step S1) (step S6). If the predetermined period has not elapsed (No in step S6), the gesture processing unit 1212 determines whether the no-detection period, during which the start of the next gesture has not been detected, has reached a predetermined value (step S9). The no-detection period is a period in which all the output levels of the pyroelectric elements RA to RD are, for example, at or below the threshold th. A period in which all the output levels of the pyroelectric elements RA to RD are 0 may also be used as the no-detection period. The predetermined value is smaller than the predetermined period (for example, 0.5 seconds versus 3 seconds) and is set in consideration of, for example, the time required from the end of one gesture to the start of the next gesture.

 ジェスチャー処理部1212は、無検出期間が予め定められた値に到達していないと判断したとき(ステップS9でNo)、ジェスチャー処理部1212は、ステップS7の処理をする。これに対して、ジェスチャー処理部1212は、無検出期間が予め定められた値に到達したと判断したとき(ステップS9でYes)、モード制御部1214は、受付モードを終了させる。これにより、処理部1213は、所定期間の経過を待たずに、所定の処理をする(ステップS8)。ここでは、ジェスチャー11がされているので、記憶部125には、図10に示す入力情報(「次の画面に切り替える命令」)が記憶されている。処理部1213は、所定の処理として、次の画面に切り替えるコマンドを生成する。これにより、表示制御部104DRは、画像表示部104Bに表示する画面を、次の画面に切り替える。 When the gesture processing unit 1212 determines that the non-detection period has not reached a predetermined value (No in step S9), the gesture processing unit 1212 performs the process of step S7. On the other hand, when the gesture processing unit 1212 determines that the non-detection period has reached a predetermined value (Yes in step S9), the mode control unit 1214 ends the reception mode. Thereby, the processing unit 1213 performs a predetermined process without waiting for the elapse of the predetermined period (step S8). Here, since the gesture 11 is performed, the storage unit 125 stores the input information (“command to switch to the next screen”) illustrated in FIG. 10. The processing unit 1213 generates a command for switching to the next screen as a predetermined process. Thereby, the display control unit 104DR switches the screen displayed on the image display unit 104B to the next screen.
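The early-termination check of step S9 in Modification 2 can be sketched as follows (hypothetical names; the time values are the examples given in the text):

```python
# Hypothetical sketch of step S9: the reception mode ends early when the
# no-detection period (no element output above th) reaches a
# predetermined value (for example, 0.5 s), which is smaller than the
# predetermined period of the reception mode.

def should_end_early(last_above_th_time, now, no_detect_limit=0.5):
    """last_above_th_time: when any element output last exceeded th."""
    return (now - last_above_th_time) >= no_detect_limit

assert should_end_early(last_above_th_time=1.0, now=1.6)       # 0.6 s idle
assert not should_end_early(last_above_th_time=1.0, now=1.2)   # 0.2 s idle
```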

 以上説明したように、変形例2によれば、ジェスチャー終了後、所定の処理の開始までの待ち時間を短くすることができる。 As described above, according to the second modification, the waiting time until the start of the predetermined process after the end of the gesture can be shortened.

 変形例3を説明する。図16に示すように、本実施形態は、所定期間の長さが固定されているが、変形例3は、所定期間を延長することができる。 Modification 3 will be described. As shown in FIG. 16, in the present embodiment, the length of the predetermined period is fixed, but the third modification can extend the predetermined period.

 一連のジェスチャーとして、第1の個数のジェスチャーにより構成される第1の一連ジェスチャーと、第1の個数より多い第2の個数のジェスチャーにより構成される第2の一連ジェスチャーとがある。パスワード認証において、ユーザ用のパスワード数が第1の個数(例えば、3個)であり、特殊なパスワード(例えば、サービスマン専用の画面にアクセスするためのパスワード)の数が第2の個数(例えば、4個)とする。ユーザは、第1の一連ジェスチャー(すなわち、三つのジェスチャーにより構成される一連のジェスチャー)をしてパスワードを入力する。サービスマンは、第2の一連ジェスチャー(すなわち、四つのジェスチャーにより構成される一連のジェスチャー)をしてパスワードを入力する。 As the series of gestures, there are a first series of gestures composed of a first number of gestures and a second series of gestures composed of a second number of gestures greater than the first number. In password authentication, the number of digits of a user's password is the first number (for example, three), and the number of digits of a special password (for example, a password for accessing a serviceman-only screen) is the second number (for example, four). The user inputs a password by making the first series of gestures (that is, a series of three gestures). The serviceman inputs a password by making the second series of gestures (that is, a series of four gestures).

 変形例3は、当初の所定期間(すなわち、所定期間の初期値)が、第1の一連ジェスチャーに要する時間より長いが、第2の一連ジェスチャーに要する時間より短いことを前提する。これにより、ユーザがサービスマン専用の画面にアクセスすることを困難にしている。 Modification 3 assumes that the initial predetermined period (that is, the initial value of the predetermined period) is longer than the time required for the first series of gestures, but shorter than the time required for the second series of gestures. This makes it difficult for the user to access the screen dedicated to the service person.

 第2の一連ジェスチャーの最初の部分は、第1の個数以下の第3の個数のジェスチャーにより構成されている。変形例3は、最初の部分の入力情報に対して、所定期間を延長するコマンドを予め割り当てている。第2の一連のジェスチャーの最初の部分は、例えば、1番目及び2番目のジェスチャーにより構成される2個のジェスチャーとする。 The first part of the second series of gestures is composed of a third number of gestures equal to or less than the first number. In Modification 3, a command for extending the predetermined period is assigned in advance to the input information of this first part. The first part of the second series of gestures consists of, for example, the two gestures made first and second.

 サービスマン専用の画面にアクセスするためのパスワードが、例えば、最初に「00」を含む4桁のパスワードとする(例えば、0012)。入力情報(「00」)に対して、所定期間を延長するコマンドが割り当てられている。図10に示すジェスチャー10が二回繰り返されることにより、「00」が入力されると、所定期間が延長され、残りの数字「12」を入力するための時間が確保される。 Suppose that the password for accessing the serviceman-only screen is, for example, a four-digit password beginning with “00” (for example, “0012”). The command for extending the predetermined period is assigned to the input information (“00”). When “00” is input by repeating gesture 10 shown in FIG. 10 twice, the predetermined period is extended, securing time to input the remaining digits “12”.

 00を含む4桁のパスワードの入力に対して、所定期間の延長がされる。このため、サービスマン専用の画面が複数ある場合、それぞれの画面に対して、00を含む4桁の異なるパスワードを割り当てることができる(すなわち、それぞれの画面に専用のパスワードを与えることができる)。 The predetermined period is extended for the input of a four-digit password beginning with “00”. Therefore, when there are a plurality of serviceman-only screens, a different four-digit password beginning with “00” can be assigned to each screen (that is, each screen can be given a dedicated password).

 所定期間を延長するコマンドが割り当てられた入力情報は、一つに限らず、複数でもよい。例えば、入力情報(「00」)、入力情報「99」のそれぞれに対して、所定期間を延長するコマンドを割り当ててもよい。 The input information to which a command for extending a predetermined period is assigned is not limited to one, and may be plural. For example, a command for extending a predetermined period may be assigned to each of the input information (“00”) and the input information “99”.

 図21は、本実施形態の変形例3において、ジェスチャー入力がされた場合の動作について説明するフローチャートである。図21に示すフローチャートが、図19に示すフローチャートと異なる点は、ステップS5とステップS6との間に、ステップS10及びステップS11が追加されていることである。 FIG. 21 is a flowchart for explaining the operation when a gesture is input in the third modification of the present embodiment. The flowchart shown in FIG. 21 is different from the flowchart shown in FIG. 19 in that steps S10 and S11 are added between steps S5 and S6.

 図5及び図21を参照して、ステップS5後、処理部1213は、記憶部125に記憶されている入力情報を基にしてする所定の処理が、所定期間の延長コマンドを生成する処理か否かを判断する(ステップS10)。記憶部125に「00」以外の入力情報が記憶されている場合、処理部1213は、所定の処理が所定期間の延長コマンドを生成する処理でないと判断し(ステップS10でNo)、ジェスチャー処理部1212は、ステップS6の処理をする。 Referring to FIGS. 5 and 21, after step S5, the processing unit 1213 determines whether the predetermined process based on the input information stored in the storage unit 125 is a process of generating a command to extend the predetermined period (step S10). When input information other than “00” is stored in the storage unit 125, the processing unit 1213 determines that the predetermined process is not a process of generating the extension command (No in step S10), and the gesture processing unit 1212 performs the process of step S6.

 記憶部125に「00」の入力情報が記憶されている場合、処理部1213は、所定の処理が所定期間の延長コマンドを生成する処理と判断し(ステップS10でYes)、処理部1213は、所定期間の延長コマンドを生成する。これにより、モード制御部1214は、所定期間を延長する(ステップS11)。そして、ジェスチャー処理部1212は、ステップS7の処理をする。 When the input information “00” is stored in the storage unit 125, the processing unit 1213 determines that the predetermined process is a process of generating a command to extend the predetermined period (Yes in step S10), and generates the extension command. The mode control unit 1214 thereby extends the predetermined period (step S11). Then, the gesture processing unit 1212 performs the process of step S7.
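Steps S10 and S11 of Modification 3 can be sketched as follows (hypothetical; the extension amount of 5 seconds and the deadline representation are assumptions not fixed by the specification):

```python
# Hypothetical sketch of steps S10-S11: when the accumulated input
# information is the extension sequence "00", the deadline of the
# reception mode is pushed back from the current time; otherwise it is
# left unchanged.

def maybe_extend_deadline(stored_inputs, deadline, now, extension=5.0):
    if "".join(stored_inputs) == "00":   # extension command entered
        return now + extension
    return deadline

assert maybe_extend_deadline(["0", "0"], deadline=3.0, now=2.5) == 7.5
assert maybe_extend_deadline(["2", "1"], deadline=3.0, now=2.5) == 3.0
```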

 変形例4を説明する。変形例3は、第2の一連ジェスチャーの最初の部分に割り当てられた入力情報を、所定期間を延長するコマンドにしている。これに対して、変形例4は、予め定められた一以上のジェスチャーに割り当てられた入力情報を、所定期間を延長するコマンドにしている。 Modification 4 will be described. In the third modification, the input information assigned to the first part of the second series of gestures is a command for extending the predetermined period. On the other hand, in the fourth modification, input information assigned to one or more predetermined gestures is used as a command for extending a predetermined period.

 一連のジェスチャーの数が多い場合、又は、ユーザがジェスチャーをゆっくりした場合、所定期間が比較的長くても(例えば、5秒)、所定期間内に一連のジェスチャーを完了することができない。そこで、一連のジェスチャーの途中で、ユーザが、所定期間を延長するコマンドとなる上記一以上のジェスチャーをしたとき、処理部1213は、所定期間を延長する。 When a series consists of many gestures, or when the user performs the gestures slowly, the series of gestures cannot be completed within the predetermined period even if that period is relatively long (for example, 5 seconds). Therefore, when, in the middle of a series of gestures, the user makes the one or more gestures serving as the command to extend the predetermined period, the processing unit 1213 extends the predetermined period.

As a concrete example, suppose the password is "924845" and "00" is the command to extend the predetermined period. The extension command is not limited to multiple digits; it may be a single digit, in which case a single gesture extends the predetermined period.

The user performs gesture 9, to which the input information "9" is assigned, gesture 2 ("2"), and gesture 4 ("4"), and then repeats gesture 10 ("0") twice. The processing unit 1213 thereby extends the predetermined period (for example, by 5 seconds from the current time). The user then performs gesture 8 ("8"), gesture 4 ("4"), and gesture 5 ("5").

The processing unit 1213 removes "0" and "0" from the input information stored in the storage unit 125 ("9", "2", "4", "0", "0", "8", "4", "5") and performs password authentication using the remaining input information ("9", "2", "4", "8", "4", "5") as the password.
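This remove-then-authenticate step can be sketched as follows. Everything here (the function name, the single-digit extension token "0", the password) is an illustrative assumption based on the example above, not the patent's implementation; the filtering only works because the registered password itself contains no "0".

```python
# Sketch of Modification 4: strip the extension-command token from the
# recorded gesture inputs before password authentication.
# Illustrative names; assumes the password contains no extension token.

EXTEND_TOKEN = "0"               # digit assigned to the extending gesture
REGISTERED_PASSWORD = "924845"   # password from the example

def authenticate(recorded_inputs):
    """recorded_inputs: one-digit strings in the order the gestures
    were detected (the contents of the storage unit)."""
    entered = "".join(d for d in recorded_inputs if d != EXTEND_TOKEN)
    return entered == REGISTERED_PASSWORD

# 9, 2, 4, then "0" twice to extend the period, then 8, 4, 5:
print(authenticate(list("92400845")))  # True
```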

Modification 5 will now be described. The user must complete the series of gestures within the predetermined period (for example, 3 seconds). Modification 5 shows the remainder of the predetermined period on the image display unit 104B (FIG. 5), so the user can adjust the speed of the gestures so that the time required for the series fits within the predetermined period.

Modification 5 displays information indicating the remainder of the predetermined period on the image display unit 104B shown in FIG. 5 during that period. FIG. 22 is a screen diagram showing a first example of a screen displayed on the image display unit 104B in Modification 5 of the present embodiment, and FIG. 23 is a screen diagram showing a second example.

Referring to FIGS. 5 and 22, when the mode control unit 1214 starts the reception mode for accepting gesture input, the display control unit 104DR causes the image display unit 104B to display screen SC1. Screen SC1 includes a number indicating the remaining time (the predetermined period minus the time elapsed since the reception mode started) and text indicating that gesture input is being accepted. The remaining time is the information indicating the remainder of the predetermined period.

Referring to FIGS. 5 and 23, when the predetermined period is extended as in Modification 3, the display control unit 104DR causes the image display unit 104B to display screen SC2. Screen SC2 includes the remaining time and text indicating that the predetermined period has been extended.

According to Modification 5, the user of the HMD 100 can adjust the speed of the gestures so that the time required for the series fits within the predetermined period.

Instead of the information indicating the remainder of the predetermined period, the display control unit 104DR may cause the image display unit 104B to display information indicating the time elapsed since the start of the reception mode, or it may display both. The time elapsed since the start of the reception mode is the time elapsed since the start of the predetermined period.

Although the information indicating the remainder of the predetermined period is expressed as a number above, it is not limited to this. For example, as shown in FIG. 24, the display control unit 104DR causes the image display unit 104B to display a screen SC3 that includes a white region and a gray region. The white region indicates the time elapsed since the start of the reception mode, and the display control unit 104DR enlarges the white region as this time lengthens. The gray region indicates the remaining time of the predetermined period, and the display control unit 104DR shrinks the gray region as this time shortens. Screen SC3 in FIG. 24 thus includes both information indicating the remainder of the predetermined period (the gray region) and information indicating the time elapsed since the start of the reception mode (the white region).
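As a rough sketch of how the indications on screens SC1 and SC3 might be computed: the function name, the text bar standing in for the white and gray regions, and the 10-slot width are all assumptions for illustration, not part of the patent.

```python
def remaining_display(elapsed_s, period_s, width=10):
    """Return (remaining_s, bar) for a reception mode that has run for
    elapsed_s seconds out of period_s. In the bar, '#' plays the role
    of the white region (grows with elapsed time) and '.' the gray
    region (shrinks with the remaining time)."""
    elapsed_s = min(elapsed_s, period_s)
    remaining = period_s - elapsed_s
    filled = int(width * elapsed_s / period_s)
    return remaining, "#" * filled + "." * (width - filled)

# For example, 1.4 s into a 3 s reception mode:
r, bar = remaining_display(1.4, 3.0)
print(round(r, 1), bar)  # 1.6 ####......
```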

(Summary of the Embodiments)
 The display device according to the embodiments comprises: a display unit; a detection unit that has a detection region at a position different from the display unit and can distinguish and detect two or more predetermined gestures; a mode control unit that starts a reception mode for accepting gesture input and ends the reception mode after a predetermined period has elapsed; a storage unit; a storage control unit that, for a series of gestures performed a plurality of times after the reception mode starts, controls the storage unit to store, in order from the gesture first detected by the detection unit, input information indicating the input pre-assigned to each gesture; and a processing unit that, after the reception mode ends, performs a predetermined process using the input information of each of the plurality of gestures constituting the series of gestures stored in the storage unit.

The detection unit does not detect gestures using a screen displayed on the display unit; it has a detection region at a position different from the display unit and can distinguish and detect two or more predetermined gestures. The display device according to the embodiments performs the predetermined process using the input information of each of the plurality of gestures constituting a series (that is, plural pieces of input information). Consequently, processing that requires multiple inputs, such as password authentication, can be performed by gesture input without using a screen. The display device according to the embodiments therefore improves gesture input that does not rely on a screen.
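The flow just described — open a reception window, record the input assigned to each detected gesture in order, and run the predetermined process on the accumulated inputs once the window closes — can be sketched as a minimal model. All names are illustrative assumptions, and gesture detection is abstracted into pre-timestamped events rather than real sensor output.

```python
def run_reception_mode(events, period_s, process):
    """events: (t, input_info) pairs, t in seconds from the start of
    the reception mode, in chronological order; process(inputs) is the
    predetermined process applied after the mode ends."""
    stored = []                      # plays the role of the storage unit
    for t, info in events:
        if t >= period_s:            # reception mode has already ended
            break
        stored.append(info)          # first-detected order is preserved
    return process(stored)

# Three gestures inside a 3-second window; a fourth arrives too late:
result = run_reception_mode(
    [(0.4, "9"), (1.1, "2"), (1.9, "4"), (3.5, "8")],
    period_s=3.0,
    process="".join,                 # stand-in for password assembly
)
print(result)  # 924
```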

The detection unit comprises, for example, a plurality of pyroelectric elements arranged in a two-dimensional matrix and a gesture processing unit that determines a gesture based on the outputs of the plurality of pyroelectric elements. The display device is, for example, a wearable terminal, that is, a terminal device that can be worn on a part of the body (for example, the head or an arm).

In the above configuration, the input information of each of the two or more gestures indicates an element constituting a password, and the processing unit performs password authentication as the predetermined process.

The elements constituting a password are, for example, digits or letters. With this configuration, password authentication can be performed by gesture input without using a screen.

In the above configuration, when the detection unit determines, after detecting a gesture within the predetermined period, that a non-detection period during which the start of the next gesture has not been detected has reached a predetermined value, the processing unit performs the predetermined process using the input information stored in the storage unit without waiting for the predetermined period to elapse.

The predetermined value (for example, 0.5 seconds) is smaller than the predetermined period (for example, 3 seconds). The predetermined period is set to the time required for a series of gestures. Consequently, when the processing unit performs a predetermined process based on a single gesture input (for example, generating a command to switch to the next screen), a waiting time would otherwise occur between the end of the gesture and the start of the process. With this configuration, when the non-detection period reaches the predetermined value, the reception mode ends and the predetermined process is performed without waiting for the predetermined period to elapse, which shortens the waiting time.
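The early-termination rule can be sketched as a function that picks the moment the reception mode actually ends. The function name is an illustrative assumption; the 0.5-second gap and 3-second period are the example values from the text.

```python
def mode_end_time(gesture_times, period_s=3.0, gap_s=0.5):
    """gesture_times: sorted times (seconds from mode start) at which
    gestures were detected. The mode ends at period_s, or earlier once
    gap_s passes after a gesture with no next gesture starting (the
    non-detection period reaching its predetermined value)."""
    for i, t in enumerate(gesture_times):
        nxt = gesture_times[i + 1] if i + 1 < len(gesture_times) else None
        if nxt is None or nxt - t > gap_s:
            return min(period_s, t + gap_s)
    return period_s

# A single quick gesture at 0.25 s: the mode ends at 0.75 s instead of
# waiting out the full 3-second period.
print(mode_end_time([0.25]))  # 0.75
```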

In the above configuration, a command to extend the predetermined period is pre-assigned to the input information assigned to one or more predetermined gestures, and when the input information of the one or more gestures is stored in the storage unit during the predetermined period, the mode control unit extends the predetermined period.

With this configuration, the predetermined period can be extended while it is running. Therefore, even when the series consists of many gestures, or when the user performs the gestures slowly, the series can be completed within the predetermined period.

In the above configuration, the series of gestures includes a first series of gestures consisting of a first number of gestures and a second series of gestures consisting of a second number of gestures greater than the first number. The first part of the second series consists of a third number of gestures equal to or less than the first number, and the command is pre-assigned to the input information of that first part. When the input information of the first part is stored in the storage unit, the mode control unit extends the predetermined period.

For example, in password authentication, suppose the user's password has the first number of elements and a special password (for example, a password for accessing a screen reserved for service technicians) has the second number. The user enters a password by performing the first series of gestures; the service technician enters a password by performing the second series. This configuration presupposes that the initial predetermined period (that is, its initial value) is longer than the time required for the first series but shorter than the time required for the second series, which makes it difficult for an ordinary user to access the service screen. The command to extend the predetermined period is pre-assigned to the input information of the first part of the second series. When the second series begins and the input information of the first part is stored in the storage unit, the mode control unit extends the predetermined period, securing the time needed to perform the remainder of the second series. The processing unit can therefore perform the predetermined process (for example, accessing the service screen) using the input information of each of the plurality of gestures constituting the second series stored in the storage unit.
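The prefix-triggered extension can be sketched as follows. The prefix "00", the 5-second extension, and the function names are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the prefix-triggered extension: the deadline is pushed back
# exactly when the inputs stored so far equal the extension prefix
# assigned to the first part of the second series of gestures.

EXTEND_PREFIX = ("0", "0")   # input information of the first part

def updated_deadline(stored, deadline, extra_s=5.0):
    """Called after each gesture's input information is stored.
    stored: inputs recorded so far, in order; deadline: seconds from
    mode start at which the reception mode currently ends."""
    if tuple(stored) == EXTEND_PREFIX:
        return deadline + extra_s    # mode control unit extends the period
    return deadline

# A service technician starts the second series with "0", "0":
d = 3.0
for stored_so_far in (["0"], ["0", "0"]):
    d = updated_deadline(stored_so_far, d)
print(d)  # 8.0
```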

The above configuration further comprises a display control unit that causes the display unit to display, during the predetermined period, at least one of information indicating the remainder of the predetermined period and information indicating the time elapsed since the start of the reception mode.

For example, suppose the predetermined period is 3 seconds and 1.4 seconds have elapsed since the reception mode started. The remainder of the predetermined period is 1.6 seconds, and the time elapsed since the start of the reception mode (that is, since the start of the predetermined period) is 1.4 seconds. With this configuration, the user of the display device can adjust the speed of the gestures so that the time required for the series fits within the predetermined period.

The gesture input method according to the embodiments is a method of gesture input for a display device comprising a display unit and a detection unit that has a detection region at a position different from the display unit and can distinguish and detect two or more predetermined gestures. The method comprises: a first step of starting a reception mode for accepting gesture input and ending the reception mode after a predetermined period has elapsed; a second step of, for a series of gestures performed a plurality of times after the reception mode starts, controlling a storage unit to store, in order from the gesture first detected by the detection unit, input information indicating the input pre-assigned to each gesture; and a third step of, after the reception mode ends, performing a predetermined process using the input information of each of the plurality of gestures constituting the series of gestures stored in the storage unit.

The gesture input method according to the embodiments defines the display device according to the embodiments from the viewpoint of a method and has the same operational effects as that display device.

This application is based on Japanese Patent Application No. 2016-123744 filed on June 22, 2016, the contents of which are incorporated herein.

To express the present invention, it has been described appropriately and sufficiently above through embodiments with reference to the drawings. It should be recognized, however, that those skilled in the art can easily modify and/or improve the above-described embodiments. Therefore, unless a modification or improvement implemented by a person skilled in the art departs from the scope of the claims recited below, that modification or improvement is construed as being encompassed by the scope of those claims.

According to the present invention, a display device and a gesture input method can be provided.

Claims (9)

1. A display device comprising:
a display unit;
a detection unit having a detection region at a position different from the display unit and capable of distinguishing and detecting two or more predetermined gestures;
a mode control unit that starts a reception mode for accepting gesture input and ends the reception mode after a predetermined period has elapsed;
a storage unit;
a storage control unit that, for a series of gestures performed a plurality of times after the reception mode starts, controls the storage unit to store, in order from the gesture first detected by the detection unit, input information indicating the input pre-assigned to each gesture; and
a processing unit that, after the reception mode ends, performs a predetermined process using the input information of each of a plurality of gestures constituting the series of gestures stored in the storage unit.

2. The display device according to claim 1, wherein the input information of each of the two or more gestures indicates an element constituting a password, and the processing unit performs password authentication as the predetermined process.

3. The display device according to claim 1 or 2, wherein, when the detection unit determines, after detecting a gesture within the predetermined period, that a non-detection period during which the start of a next gesture has not been detected has reached a predetermined value, the processing unit performs the predetermined process using the input information stored in the storage unit without waiting for the predetermined period to elapse.

4. The display device according to any one of claims 1 to 3, wherein a command to extend the predetermined period is pre-assigned to the input information assigned to one or more predetermined gestures, and when the input information of the one or more gestures is stored in the storage unit during the predetermined period, the mode control unit extends the predetermined period.

5. The display device according to claim 4, wherein the series of gestures includes a first series of gestures consisting of a first number of gestures and a second series of gestures consisting of a second number of gestures greater than the first number; the first part of the second series of gestures consists of a third number of gestures equal to or less than the first number, and the command is pre-assigned to the input information of the first part; and when the input information of the first part is stored in the storage unit, the mode control unit extends the predetermined period.

6. The display device according to any one of claims 1 to 5, further comprising a display control unit that causes the display unit to display, during the predetermined period, at least one of information indicating the remainder of the predetermined period and information indicating the time elapsed since the start of the reception mode.

7. The display device according to any one of claims 1 to 6, wherein the detection unit comprises a plurality of pyroelectric elements arranged in a two-dimensional matrix and a gesture processing unit that determines a gesture based on the outputs of the plurality of pyroelectric elements.

8. The display device according to any one of claims 1 to 7, wherein the display device is a wearable terminal.

9. A gesture input method for a display device comprising a display unit and a detection unit having a detection region at a position different from the display unit and capable of distinguishing and detecting two or more predetermined gestures, the method comprising:
a first step of starting a reception mode for accepting gesture input and ending the reception mode after a predetermined period has elapsed;
a second step of, for a series of gestures performed a plurality of times after the reception mode starts, controlling a storage unit to store, in order from the gesture first detected by the detection unit, input information indicating the input pre-assigned to each gesture; and
a third step of, after the reception mode ends, performing a predetermined process using the input information of each of a plurality of gestures constituting the series of gestures stored in the storage unit.
PCT/JP2017/022165 2016-06-22 2017-06-15 Display device and gesture input method Ceased WO2017221809A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016123744 2016-06-22
JP2016-123744 2016-06-22

Publications (1)

Publication Number Publication Date
WO2017221809A1 true WO2017221809A1 (en) 2017-12-28

Family

ID=60784720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/022165 Ceased WO2017221809A1 (en) 2016-06-22 2017-06-15 Display device and gesture input method

Country Status (1)

Country Link
WO (1) WO2017221809A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000105728A (en) * 1998-09-29 2000-04-11 Fujitsu Ltd Information storage system
JP2009020691A (en) * 2007-07-11 2009-01-29 Kyocera Mita Corp User authentication method, user authentication apparatus and image forming apparatus
JP2009217465A (en) * 2008-03-10 2009-09-24 Sharp Corp Input device, input operation reception method, and program thereof
JP2015175090A (en) * 2014-03-12 2015-10-05 オムロンオートモーティブエレクトロニクス株式会社 Portable machine and control system
US20160210452A1 (en) * 2015-01-19 2016-07-21 Microsoft Technology Licensing, Llc Multi-gesture security code entry



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17815277

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17815277

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP