
WO2025204110A1 - Eyeglasses-type device, information processing method, and program - Google Patents

Eyeglasses-type device, information processing method, and program

Info

Publication number
WO2025204110A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
eyeglass
type device
contact portion
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2025/003067
Other languages
French (fr)
Japanese (ja)
Inventor
亜旗 米田
未佳 砂川
隆雅 吉田
弘毅 高橋
邦博 今村
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Intellectual Property Management Co Ltd
Original Assignee
Panasonic Intellectual Property Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Management Co Ltd
Publication of WO2025204110A1

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/02Viewing or reading apparatus
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C3/00Special supporting arrangements for lens assemblies or monocles
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C5/00Constructions of non-optical parts
    • GPHYSICS
    • G02OPTICS
    • G02CSPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
    • G02C5/00Constructions of non-optical parts
    • G02C5/14Side-members
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • HELECTRICITY
    • H10SEMICONDUCTOR DEVICES; ELECTRIC SOLID-STATE DEVICES NOT OTHERWISE PROVIDED FOR
    • H10KORGANIC ELECTRIC SOLID-STATE DEVICES
    • H10K59/00Integrated devices, or assemblies of multiple devices, comprising at least one organic light-emitting element covered by group H10K50/00
    • H10K59/10OLED displays

Definitions

  • This disclosure relates to an eyeglass-type device, an information processing method, and a program.
  • Patent Document 1 discloses electronic eyeglasses according to background art.
  • The frame of the electronic eyeglasses is equipped with sensors that perform sensing operations.
  • The left and right temples of the electronic eyeglasses are connected to each other by a connection means that crosses the back of the user's head.
  • The electronic eyeglasses disclosed in Patent Document 1 therefore give the user the feeling that their head is being squeezed, which places a significant mental strain on the user.
  • The present disclosure aims to provide an eyeglass-type device, an information processing method, and a program that can reduce misalignment of the eyeglass-type device without using a connection means that crosses the back of the user's head.
  • A glasses-type device according to one aspect of the present disclosure comprises a front section, temple sections connected to the front section, end pieces connected to the temple sections, a first contact section that contacts the bridge of the user's nose or the area between the eyebrows to support the front section, a second contact section that has a connection section connected to the end pieces and contacts the area under the user's chin to support the end pieces, and at least one of a first sensor that is disposed in the first contact section and acquires a first biosignal of the user from the bridge of the user's nose or the area between the eyebrows, and a second sensor that is disposed in the second contact section and acquires a second biosignal of the user from the area under the user's chin.
  • FIG. 1 is a front view showing a simplified configuration example of AR glasses according to an embodiment.
  • FIG. 2 is a side view showing a simplified configuration example of AR glasses according to an embodiment.
  • FIG. 3 is a simplified diagram illustrating a first configuration example of a connection portion.
  • FIG. 4 is a simplified diagram illustrating a second configuration example of the connection portion.
  • FIG. 5 is a diagram showing a simplified functional configuration of the AR glasses.
  • FIG. 6 is a simplified diagram showing the functional configuration of a processing unit.
  • FIG. 7 is a flowchart showing a process executed by the processing unit.
  • FIG. 8 is a diagram showing a simplified functional configuration of AR glasses according to a first modified example.
  • FIG. 9 is a diagram showing a simplified functional configuration of a processing unit according to the first modified example.
  • FIG. 10 is a diagram showing an image of another user's face captured by an external camera.
  • FIG. 11 is a side view showing an example of how a mask is worn.
  • FIG. 12 is a diagram showing an example of displaying the face of another user on a display unit.
  • FIG. 13 is a diagram showing a simplified functional configuration of a processing unit according to a second modified example.
  • FIG. 14 is a diagram showing a simplified configuration example of a connection portion according to a third modified example.
  • FIG. 15 is a simplified diagram showing a first configuration example of AR glasses according to a fourth modified example.
  • FIG. 16 is a simplified diagram showing a second configuration example of AR glasses according to the fourth modified example.
  • FIG. 17 is a simplified diagram showing a third configuration example of AR glasses according to the fourth modified example.
  • FIG. 18 is a simplified diagram showing a fourth configuration example of AR glasses according to the fourth modified example.
  • FIG. 19 is a simplified diagram showing a fifth configuration example of AR glasses according to the fourth modified example.
  • General eyeglass-type devices support the glasses only with the end pieces and nose pads, so the glasses are prone to slipping out of position.
  • The inventor discovered that, by supporting the front section through contact with the bridge of the nose or the area between the eyebrows and supporting the end pieces through contact with the area under the chin, misalignment of the eyeglass-type device can be reduced without using a connection means that crosses the back of the head, and this led to the present disclosure.
  • A glasses-type device according to one aspect of the present disclosure comprises a front section, temple sections connected to the front section, end pieces connected to the temple sections, a first contact section that contacts the bridge of the user's nose or the area between the eyebrows to support the front section, a second contact section that has a connection section connected to the end pieces and contacts the area under the user's chin to support the end pieces, and at least one of a first sensor that is disposed in the first contact section and acquires a first biosignal of the user from the bridge of the user's nose or the area between the eyebrows, and a second sensor that is disposed in the second contact section and acquires a second biosignal of the user from the area under the user's chin.
  • With this configuration, misalignment of the eyeglass-type device can be reduced without using a connection means that crosses the back of the user's head.
  • The user's intention to operate the object to be operated can be estimated with high accuracy.
  • The connection portion may bias the end piece in a direction that pushes up the temple portion, using the end piece as a fulcrum.
  • The first contact portion preferably has a glabella pad that contacts the area between the user's eyebrows from below.
  • This improves the effect of reducing misalignment of the eyeglass-type device.
  • The transmission unit 57 transmits movement information (data D7) indicating the movement of the user's mouth, tongue, or throat estimated by the movement estimation unit 54 to the other AR glasses 1A.
  • The other AR glasses 1A are, for example, AR glasses worn by another user who is in face-to-face conversation with the user wearing the AR glasses 1.
  • The receiving unit 58 receives other movement information (data D7A) indicating the movement of the mouth, tongue, or throat of the other user wearing the other AR glasses 1A from another transmitting unit 57A provided in the other AR glasses 1A.
  • The receiving unit 58 inputs the data D7A to the image creation unit 59.
  • The image creation unit 59 creates image data D11 of a facial expression image of the other user based on image data D10 acquired from the external camera 37 and the data D7A input from the receiving unit 58.
  • The image data D10 includes an image of the other user's face captured by the external camera 37.
  • When the image creation unit 59 detects, based on the image data D10, that the other user is wearing a mask 60, it creates an image of the facial expression around the other user's mouth based on the other movement information (data D7A) received by the receiving unit 58.
  • The image creation unit 59 inputs the image data D11 of the created facial expression image to the display control unit 52.
  • FIG. 10 shows an image of the face of the other user captured by the external camera 37.
  • In this image, the other user is wearing a mask 60, and the other user's nose and mouth are hidden by the mask 60.
  • The display control unit 52 superimposes the image of the facial expression around the other user's mouth, created by the image creation unit 59 (image data D11), on the display surface of the display unit 31 in accordance with the position of the other user's face (a simplified sketch of this overlay flow is given after this list).
  • The AR glasses 1 may further include an image projection unit, which may use the mask 60 worn by the other user as a screen and project an image of the other user's facial expression (image data D11) onto that screen.
  • The material of the mask 60 may be a retroreflective material.
  • The movement information (data D7) indicating the movement of the user's mouth, tongue, or throat transmitted by the transmission unit 57 can be used by the other AR glasses 1A or by any other information processing device.
  • The information processing device may be, for example, a management device that manages the user's health.
  • The management device receives images (image data D10) captured by the external camera 37 from the AR glasses 1 and, based on the captured images, determines whether the user is currently eating and what they are chewing. Based on the data D7 received from the AR glasses 1, the management device measures the user's jaw movement while chewing, the number of times they swallow, or the timing of swallowing. The data D7 may also include measurements from a sensor that can measure the degree to which the user's jaw opens and closes. Because users who chew less frequently are at higher health risk, the management device issues a warning to such users and encourages them to improve their chewing behavior.
  • Health insurance premiums may be increased or decreased in real time depending on the number of chews. Furthermore, users who show abnormalities in their chewing behavior, such as asymmetry, may have dental or oral diseases.
  • The management device issues a warning to such users and encourages them to visit a hospital. Furthermore, in a care facility or the like, if the management device detects aspiration by a user based on the data D7, it may output an alarm or send an emergency call to the staff in charge.
  • The information processing device may also be an information terminal carried by a language-learning instructor, such as an English conversation instructor.
  • The instructor may be a real instructor who is face-to-face with the user, a remote instructor who can communicate with the user, or a virtual instructor.
  • A user wearing the AR glasses 1 can converse with the instructor displayed on the display unit 31.
  • Data D7 indicating the user's jaw or tongue movements is sent from the AR glasses 1 to the information terminal carried by the instructor.
  • The instructor determines whether the user's jaw or tongue movements are correct based on the received data D7, and provides feedback on the correct pronunciation method to the AR glasses 1 worn by the user. This makes it possible to provide instruction based on jaw and tongue movements in addition to instruction based on differences in pronunciation perceived by ear, which is expected to promote language learning.
  • (Second Modification) FIG. 13 is a simplified diagram showing the functional configuration of the processing unit 32 according to the second modified example.
  • In the second modified example, the processing unit 32 further includes a mis-wearing detection unit 71 in addition to the configuration described above.
  • The mis-wearing detection unit 71 detects that the second contact portion 22 has been worn incorrectly by the user, for example when the detection value of the electrode 222 is abnormal.
  • An example of mis-wearing is when the second contact portion 22 is worn turned toward the back of the user's head.
  • When the mis-wearing detection unit 71 detects that the second contact portion 22 has been worn incorrectly, it inputs mis-wearing detection information D20 to the display control unit 52.
  • When the display control unit 52 receives the detection information D20 from the mis-wearing detection unit 71, it displays notification information, such as an image or text message, on the display unit 31 to notify the user of the mis-wearing (a simplified sketch of this check is given after this list).
  • The manner in which the mis-wearing is notified is not limited to a display, and may also be the output of a warning sound, a voice message, or the like.
  • This modified example makes it possible to prevent malfunction of the AR glasses 1, or a decrease in estimation accuracy in gaze operation, due to incorrect attachment of the second contact portion 22.
  • To prevent mis-wearing, a message such as "Please attach under the chin" may be written on the surface of the subchin pad 221.
  • The connection portion 224 may be configured to be stretchable or to have a controllable tension.
  • For example, ease of wearing may be ensured by weakening the tension before the device is put on, and the adhesion of the subchin pad 221 may be increased by strengthening the tension after sensing that the pad has been put on.
  • A shape-memory alloy that reacts to the user's body temperature, or an actuator such as a motor, may be used to variably control the tension.
  • If the detection values of the electrodes 212, 222 change significantly due to misalignment of the AR glasses 1, notification information may be output to prompt the user to reattach the AR glasses 1.
  • Detection information on the user's eye position obtained by the internal camera 36 may also be used for this purpose.
  • FIG. 14 is a simplified diagram showing an example configuration of the connection portion 224 according to the third modified example.
  • In the third modified example, the first contact portion 21 has a nasal root pad 213 and an electrode 214.
  • The nasal root pad 213 is a pad that contacts the skin surface of the user's nasal root from above, following the slope of the nasal root.
  • The nasal root pad 213 is connected to the bridge 112 or the rim 111 via a pad arm (not shown).
  • The electrode 214 is disposed on the nasal root pad 213. Multiple electrodes 214 may be disposed on one nasal root pad 213.
  • The electrode 214 is included in the first sensor 33.
  • The first sensor 33 acquires the first biological signal D5 from the user's nasal root via the electrode 214.
  • The connection portion 224 has a biasing member 224C that uses a spiral spring or the like.
  • The biasing member 224C biases the end piece 13 in a direction that rotates the lower end of the end piece 13 forward relative to the second contact portion 22 (the direction indicated by arrow Y4).
  • As a result, the connection portion 224 biases the end piece 13 in a direction that presses down the temple portion 12 (the direction indicated by arrow Y5) with the end piece 13 as a fulcrum, so that the nasal root pad 213 comes into close contact with the bridge of the user's nose.
  • In this modified example, the first contact portion 21 contacts the bridge of the user's nose to support the front portion 11, and the second contact portion 22 contacts the area under the user's chin to support the end piece 13. This reduces misalignment of the AR glasses 1 without using a connection means that crosses the back of the user's head.
  • The connection portion 224 biases the end piece 13 in a direction that presses down the temple portion 12, with the end piece 13 as a fulcrum, so that the nasal root pad 213 adheres closely to the bridge of the user's nose, thereby improving the effect of reducing misalignment of the AR glasses 1.
  • The biasing member 224C shown in FIG. 14 can appropriately apply a biasing force in a direction that presses down the temple portion 12 with the end piece 13 as a fulcrum.
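
The following minimal Python sketch illustrates the masked-face overlay flow of the first modified example described above (receive the other user's movement information D7A, detect a mask in the external-camera image D10, synthesize an expression image around the mouth as image data D11, and overlay it at the face position). It is an illustration only: the function names, the crude mask check, and the drawing logic are assumptions of this sketch, not the implementation in the disclosure, which would rely on trained detection and generation models.

    # Illustrative sketch only: placeholder logic, not the disclosed implementation.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class MovementInfo:
        """Stands in for data D7A: estimated mouth/tongue/throat movement of the other user."""
        mouth_open: float  # hypothetical feature in the range 0.0 (closed) to 1.0 (fully open)

    def detect_mask(face_image: np.ndarray) -> bool:
        """Placeholder mask detector for the external-camera image (data D10).
        A real device would use a trained detector; here the lower half of the face
        region is simply checked for unusually low texture as a crude stand-in."""
        lower_half = face_image[face_image.shape[0] // 2:]
        return float(lower_half.std()) < 10.0

    def synthesize_mouth_image(movement: MovementInfo, size: int = 64) -> np.ndarray:
        """Placeholder for the expression image around the mouth (data D11):
        draws a bright band whose height follows the estimated mouth opening."""
        img = np.zeros((size, size), dtype=np.uint8)
        half_height = max(1, int(movement.mouth_open * size / 2) // 2)
        img[size // 2 - half_height:size // 2 + half_height, size // 4:3 * size // 4] = 255
        return img

    def make_overlay(face_image: np.ndarray, movement: MovementInfo):
        """Image creation unit 59 / display control unit 52 in one step: return an
        overlay image only when the other user is wearing a mask, otherwise None."""
        if detect_mask(face_image):
            return synthesize_mouth_image(movement)
        return None

    # Minimal usage example with synthetic data (a uniform lower half stands in for a mask).
    frame = np.full((128, 128), 120, dtype=np.uint8)
    overlay = make_overlay(frame, MovementInfo(mouth_open=0.6))
    print("overlay generated:", overlay is not None)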

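The second modified example above (mis-wearing detection unit 71 and the resulting notification) can likewise be pictured with a minimal sketch. The plausibility bounds and message text below are assumptions made for illustration; the disclosure only states that an abnormal detection value of the electrode 222 indicates mis-wearing and that the user is then notified.

    # Illustrative sketch only: thresholds and wording are assumptions, not disclosed values.
    from dataclasses import dataclass

    @dataclass
    class MisWearingDetector:
        """Rough model of mis-wearing detection unit 71: the second contact portion 22
        is considered mis-worn when the reading from electrode 222 is implausible."""
        min_uv: float = 1.0    # assumed lower bound of a plausible EMG amplitude (microvolts)
        max_uv: float = 500.0  # assumed upper bound

        def is_mis_worn(self, electrode_value_uv: float) -> bool:
            return not (self.min_uv <= electrode_value_uv <= self.max_uv)

    def notification_for(detector: MisWearingDetector, electrode_value_uv: float):
        """Display control unit 52 side: return notification text to show on display unit 31
        (detection information D20); a real device could also emit a warning sound or voice."""
        if detector.is_mis_worn(electrode_value_uv):
            return "The under-chin pad appears to be attached incorrectly. Please reattach it."
        return None

    # Usage example: a nearly flat reading suggests the pad is not touching the skin.
    print(notification_for(MisWearingDetector(), electrode_value_uv=0.02))
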
Landscapes

  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Optics & Photonics (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Pathology (AREA)
  • Physiology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Eyeglasses (AREA)

Abstract

This eyeglasses-type device comprises: a front section; a temple section that is connected to the front section; an end tip section that is connected to the temple section; a first contact section that comes into contact with the nasal root or glabella of a user and supports the front section; a second contact section that has a connection section connected to the end tip section, comes into contact with the chin of the user, and supports the end tip section; and at least one of a first sensor and a second sensor, wherein the first sensor is disposed in the first contact section and acquires a first biological signal of the user from the nasal root or glabella of the user, and the second sensor is disposed in the second contact section and acquires a second biological signal of the user from the chin of the user.

Description

眼鏡型デバイス、情報処理方法、及びプログラム Glasses-type device, information processing method, and program

 本開示は、眼鏡型デバイス、情報処理方法、及びプログラムに関する。 This disclosure relates to an eyeglass-type device, an information processing method, and a program.

 特許文献1には、背景技術に係る電子眼鏡が開示されている。当該電子眼鏡のフレームには、センシング動作を行うセンサが設けられている。当該電子眼鏡の左右のテンプル部は、ユーザの後頭部側を横切る接続手段によって互いに接続されている。 Patent Document 1 discloses electronic eyeglasses according to background art. The frames of the electronic eyeglasses are equipped with sensors that perform sensing operations. The left and right temples of the electronic eyeglasses are connected to each other by a connection means that crosses the back of the user's head.

 特許文献1に開示された電子眼鏡によると、ユーザの後頭部側を横切る接続手段が設けられているため、ユーザは頭部を締め付けられている感覚があり、精神的な負担が大きい。 The electronic eyeglasses disclosed in Patent Document 1 have a connection means that crosses the back of the user's head, which gives the user the feeling that their head is being squeezed, causing a great mental strain.

特許第6046005号公報 Japanese Patent No. 6046005

 本開示は、ユーザの後頭部側を横切る接続手段を用いることなく眼鏡型デバイスのずれを低減することが可能な、眼鏡型デバイス、情報処理方法、及びプログラムを得ることを目的とする。 The present disclosure aims to provide an eyeglass-type device, information processing method, and program that can reduce misalignment of the eyeglass-type device without using a connection means that crosses the back of the user's head.

 本開示の一態様に係る眼鏡型デバイスは、フロント部と、前記フロント部に接続されたテンプル部と、前記テンプル部に接続されたモダン部と、ユーザの鼻根又は眉間に接触して前記フロント部を支持する第1接触部と、前記モダン部に接続された接続部を有し、前記ユーザの顎下に接触して前記モダン部を支持する第2接触部と、前記第1接触部に配置され前記ユーザの鼻根又は眉間から前記ユーザの第1生体信号を取得する第1センサ、及び、前記第2接触部に配置され前記ユーザの顎下から前記ユーザの第2生体信号を取得する第2センサ、の少なくとも一方と、を備える。 A glasses-type device according to one aspect of the present disclosure comprises a front section, temple sections connected to the front section, end pieces connected to the temple sections, a first contact section that contacts the bridge of the user's nose or between the eyebrows to support the front section, a second contact section that has a connection section connected to the end pieces and contacts the area under the user's chin to support the end pieces, and at least one of a first sensor that is disposed in the first contact section and acquires a first biosignal of the user from the bridge of the user's nose or between the eyebrows, and a second sensor that is disposed in the second contact section and acquires a second biosignal of the user from the area under the user's chin.

実施形態に係るARグラスの構成例を簡略化して示す正面図である。 FIG. 1 is a front view showing a simplified configuration example of AR glasses according to an embodiment.
実施形態に係るARグラスの構成例を簡略化して示す側面図である。 FIG. 2 is a side view showing a simplified configuration example of AR glasses according to an embodiment.
接続部の第1の構成例を簡略化して示す図である。 FIG. 3 is a simplified diagram illustrating a first configuration example of a connection portion.
接続部の第2の構成例を簡略化して示す図である。 FIG. 4 is a simplified diagram illustrating a second configuration example of the connection portion.
ARグラスの機能構成を簡略化して示す図である。 FIG. 5 is a diagram showing a simplified functional configuration of the AR glasses.
処理部の機能構成を簡略化して示す図である。 FIG. 6 is a simplified diagram showing the functional configuration of a processing unit.
処理部が実行する処理を示すフローチャートである。 FIG. 7 is a flowchart showing a process executed by the processing unit.
第1変形例に係るARグラスの機能構成を簡略化して示す図である。 FIG. 8 is a diagram showing a simplified functional configuration of AR glasses according to a first modified example.
第1変形例に係る処理部の機能構成を簡略化して示す図である。 FIG. 9 is a diagram showing a simplified functional configuration of a processing unit according to the first modified example.
外部カメラによって撮影された他のユーザの顔の画像を示す図である。 FIG. 10 is a diagram showing an image of another user's face captured by an external camera.
マスクの装着例を示す側面図である。 FIG. 11 is a side view showing an example of how a mask is worn.
表示部における他のユーザの顔の表示例を示す図である。 FIG. 12 is a diagram showing an example of displaying the face of another user on the display unit.
第2変形例に係る処理部の機能構成を簡略化して示す図である。 FIG. 13 is a diagram showing a simplified functional configuration of a processing unit according to a second modified example.
第3変形例に係る接続部の構成例を簡略化して示す図である。 FIG. 14 is a diagram showing a simplified configuration example of a connection portion according to a third modified example.
第4変形例に係るARグラスの第1の構成例を簡略化して示す図である。 FIG. 15 is a simplified diagram showing a first configuration example of AR glasses according to a fourth modified example.
第4変形例に係るARグラスの第2の構成例を簡略化して示す図である。 FIG. 16 is a simplified diagram showing a second configuration example of AR glasses according to the fourth modified example.
第4変形例に係るARグラスの第3の構成例を簡略化して示す図である。 FIG. 17 is a simplified diagram showing a third configuration example of AR glasses according to the fourth modified example.
第4変形例に係るARグラスの第4の構成例を簡略化して示す図である。 FIG. 18 is a simplified diagram showing a fourth configuration example of AR glasses according to the fourth modified example.
第4変形例に係るARグラスの第5の構成例を簡略化して示す図である。 FIG. 19 is a simplified diagram showing a fifth configuration example of AR glasses according to the fourth modified example.

 (本開示の基礎となった知見)
 AR(Augmented Reality)グラス又はVR(Virtual Reality)グラス等の眼鏡型デバイスが実用化されつつある。
(Findings that form the basis of this disclosure)
Glasses-type devices such as AR (Augmented Reality) glasses or VR (Virtual Reality) glasses are becoming commercially available.

 一般的な眼鏡型デバイスは、眼鏡のモダン部と鼻パッドとで眼鏡を支持する構造であるため、眼鏡がずれやすい。 General eyeglass-type devices are designed to support the glasses with the temples and nose pads, which can cause the glasses to slip off easily.

 背景技術に係る電子眼鏡のように、ユーザの後頭部側を横切る接続手段によって左右のテンプル部を接続することにより、眼鏡のずれを低減できる。 As with the electronic eyeglasses in the background art, misalignment of the eyeglasses can be reduced by connecting the left and right temples with a connection means that crosses the back of the user's head.

 しかしながら、ユーザの後頭部側を横切る接続手段を用いた構造によると、ユーザは頭部を締め付けられている感覚があり、精神的な負担が大きい。 However, a structure that uses a connection means that crosses the back of the user's head can give the user the feeling that their head is being squeezed, which places a significant mental strain on the user.

 かかる課題を解決するために、本発明者は、鼻根又は眉間との接触によってフロント部を支持するとともに、顎下との接触によってモダン部を支持することによって、後頭部側を横切る接続手段を用いることなく眼鏡型デバイスのずれを低減できるとの知見を得て、本開示を想到するに至った。 In order to solve this problem, the inventor discovered that by supporting the front section by contacting the bridge of the nose or between the eyebrows and supporting the temple sections by contacting the area under the chin, it is possible to reduce misalignment of the eyeglass-type device without using a connection means that crosses the back of the head, and this led to the present disclosure.

 次に、本開示の各態様について説明する。 Next, each aspect of this disclosure will be described.

 本開示の第1態様に係る眼鏡型デバイスは、フロント部と、前記フロント部に接続されたテンプル部と、前記テンプル部に接続されたモダン部と、ユーザの鼻根又は眉間に接触して前記フロント部を支持する第1接触部と、前記モダン部に接続された接続部を有し、前記ユーザの顎下に接触して前記モダン部を支持する第2接触部と、前記第1接触部に配置され前記ユーザの鼻根又は眉間から前記ユーザの第1生体信号を取得する第1センサ、及び、前記第2接触部に配置され前記ユーザの顎下から前記ユーザの第2生体信号を取得する第2センサ、の少なくとも一方と、を備える。 A glasses-type device according to a first aspect of the present disclosure comprises a front section, temple sections connected to the front section, end pieces connected to the temple sections, a first contact section that contacts the bridge of the user's nose or between the eyebrows to support the front section, a second contact section that has a connection section connected to the end pieces and contacts the area under the user's chin to support the end pieces, and at least one of a first sensor that is disposed in the first contact section and acquires a first biosignal of the user from the bridge of the user's nose or between the eyebrows, and a second sensor that is disposed in the second contact section and acquires a second biosignal of the user from the area under the user's chin.

 第1態様によれば、ユーザの後頭部側を横切る接続手段を用いることなく眼鏡型デバイスのずれを低減できる。 According to the first aspect, misalignment of the eyeglass-type device can be reduced without using a connection means that crosses the back of the user's head.

 本開示の第2態様に係る眼鏡型デバイスは、第1態様において、前記第1センサ及び前記第2センサの双方を備えると良い。 The eyeglass-type device according to the second aspect of the present disclosure, in the first aspect, preferably includes both the first sensor and the second sensor.

 第2態様によれば、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 According to the second aspect, the user's intention to operate the object to be operated can be estimated with high accuracy.

 本開示の第3態様に係る眼鏡型デバイスは、第1又は第2態様において、前記接続部は、前記モダン部を支点として前記テンプル部を押し上げる方向に前記モダン部を付勢し、前記第1接触部は、前記ユーザの眉間に下方から接触する眉間パッドを有すると良い。 In the eyeglass-type device according to the third aspect of the present disclosure, in the first or second aspect, the connection portion preferably biases the end piece in a direction that pushes up the temple portion using the end piece as a fulcrum, and the first contact portion preferably has a glabella pad that contacts the area between the user's eyebrows from below.

 第3態様によれば、眼鏡型デバイスのずれの低減効果を向上できる。 According to the third aspect, the effect of reducing misalignment of the eyeglass-type device can be improved.

 本開示の第4態様に係る眼鏡型デバイスは、第3態様において、前記接続部は、前記第2接触部に対して前記モダン部を直線的に引き付ける方向に前記モダン部を付勢する付勢部材を有すると良い。 In the eyeglass-type device according to the fourth aspect of the present disclosure, in the third aspect, the connection portion may have a biasing member that biases the end piece in a direction that linearly attracts the end piece toward the second contact portion.

 第4態様によれば、モダン部を支点としてテンプル部を押し上げる方向の付勢力を付与できる。 According to the fourth aspect, a biasing force can be applied in a direction that pushes up the temple portion, with the end piece serving as a fulcrum.

 本開示の第5態様に係る眼鏡型デバイスは、第3態様において、前記接続部は、前記第2接触部に対して前記モダン部を後傾回転させる方向に前記モダン部を付勢する付勢部材を有すると良い。 In the eyeglass-type device according to the fifth aspect of the present disclosure, in the third aspect, the connection portion may have a biasing member that biases the end piece in a direction that rotates the end piece backward relative to the second contact portion.

 第5態様によれば、モダン部を支点としてテンプル部を押し上げる方向の付勢力を付与できる。 According to the fifth aspect, a biasing force can be applied in a direction that pushes up the temple portion, with the end piece acting as a fulcrum.

 本開示の第6態様に係る眼鏡型デバイスは、第1又は第2態様において、前記接続部は、前記モダン部を支点として前記テンプル部を押し下げる方向に前記モダン部を付勢し、前記第1接触部は、前記ユーザの鼻根に上方から接触する鼻根パッドを有すると良い。 In the eyeglass-type device according to the sixth aspect of the present disclosure, in the first or second aspect, the connection portion may bias the end piece in a direction that presses down the temple portion using the end piece as a fulcrum, and the first contact portion may have a nose bridge pad that contacts the bridge of the user's nose from above.

 第6態様によれば、眼鏡型デバイスのずれの低減効果を向上できる。 According to the sixth aspect, the effect of reducing misalignment of the eyeglass-type device can be improved.

 本開示の第7態様に係る眼鏡型デバイスは、第6態様において、前記接続部は、前記第2接触部に対して前記モダン部を前傾回転させる方向に前記モダン部を付勢する付勢部材を有すると良い。 A glasses-type device according to a seventh aspect of the present disclosure is the sixth aspect, wherein the connection portion has a biasing member that biases the end piece in a direction that rotates the end piece forward relative to the second contact portion.

 第7態様によれば、モダン部を支点としてテンプル部を押し下げる方向の付勢力を付与できる。 According to the seventh aspect, a biasing force can be applied in a direction that pushes down the temple portion, with the end piece serving as a fulcrum.

 本開示の第8態様に係る眼鏡型デバイスは、第2態様において、前記第2生体信号は、前記ユーザの広頸筋の筋電を含むと良い。 In the eyeglass-type device according to the eighth aspect of the present disclosure, in the second aspect, the second biological signal preferably includes the myoelectric potential of the user's platysma muscle.

 第8態様によれば、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 According to the eighth aspect, the user's intention to operate the object to be operated can be estimated with high accuracy.

 本開示の第9態様に係る眼鏡型デバイスは、第8態様において、前記第2生体信号に基づいて、前記ユーザの口、舌、又は喉の動作を推定する動作推定部をさらに備えると良い。 The eyeglass-type device according to the ninth aspect of the present disclosure may be the eighth aspect, further comprising a movement estimation unit that estimates the movement of the user's mouth, tongue, or throat based on the second biological signal.

 第9態様によれば、第2生体信号に基づいて、ユーザの口、舌、又は喉の動作を高精度に推定できる。 According to the ninth aspect, the movement of the user's mouth, tongue, or throat can be estimated with high accuracy based on the second biological signal.

 本開示の第10態様に係る眼鏡型デバイスは、第9態様において、前記動作推定部によって推定された前記ユーザの口、舌、又は喉の動作を示す動作情報に基づいて、操作対象のオブジェクトに対する前記ユーザの操作意図を推定する意図推定部をさらに備えると良い。 The glasses-type device according to a tenth aspect of the present disclosure may be the ninth aspect, further comprising an intention estimation unit that estimates the user's intention to operate an object to be operated, based on movement information indicating the movement of the user's mouth, tongue, or throat estimated by the movement estimation unit.

 第10態様によれば、ユーザの口、舌、又は喉の動作を示す動作情報に基づいて、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 According to the tenth aspect, the user's intention to operate an object to be operated can be estimated with high accuracy based on movement information indicating the movement of the user's mouth, tongue, or throat.

 本開示の第11態様に係る眼鏡型デバイスは、第10態様において、前記ユーザの視線の方向を検出する視線検出部と、前記視線検出部によって検出された前記ユーザの視線の方向を示す視線情報に基づいて、前記オブジェクトを特定するオブジェクト特定部と、をさらに備えると良い。 The glasses-type device according to the eleventh aspect of the present disclosure may be the tenth aspect, further comprising a gaze detection unit that detects the direction of the user's gaze, and an object identification unit that identifies the object based on gaze information indicating the direction of the user's gaze detected by the gaze detection unit.

 第11態様によれば、ユーザの視線の方向を示す視線情報に基づいて、操作対象のオブジェクトを高精度に特定できる。 According to the eleventh aspect, the object to be operated can be identified with high accuracy based on gaze information indicating the direction of the user's gaze.

 本開示の第12態様に係る眼鏡型デバイスは、第9~第11態様のいずれか一つにおいて、前記動作推定部によって推定された前記ユーザの口、舌、又は喉の動作を示す動作情報を送信する送信部をさらに備えると良い。 In any one of the ninth to eleventh aspects, the eyeglass-type device according to the twelfth aspect of the present disclosure may further include a transmitter that transmits movement information indicating the movement of the user's mouth, tongue, or throat estimated by the movement estimation unit.

 第12態様によれば、送信部が送信したユーザの口、舌、又は喉の動作を示す動作情報を、他の眼鏡型デバイス又は任意の情報処理装置等によって活用できる。 According to the twelfth aspect, the movement information indicating the movement of the user's mouth, tongue, or throat transmitted by the transmission unit can be utilized by other eyeglass-type devices or any information processing device, etc.

 本開示の第13態様に係る眼鏡型デバイスは、第12態様において、他の眼鏡型デバイスを装着する他のユーザの口、舌、又は喉の動作を示す他の動作情報を、前記他の眼鏡型デバイスが備える他の送信部から受信する受信部と、前記他のユーザがマスクを装着していることを検出した場合に、前記受信部によって受信された前記他の動作情報に基づいて、前記他のユーザの口部周辺の表情画像を作成する画像作成部と、前記画像作成部によって作成された前記表情画像を、前記他のユーザの顔の位置に合わせて表示する表示制御部と、をさらに備えると良い。 The eyeglass-type device according to the thirteenth aspect of the present disclosure may further include a receiving unit that receives, from another transmitting unit provided in the other eyeglass-type device, other movement information indicating the movement of the mouth, tongue, or throat of another user wearing the other eyeglass-type device; an image creating unit that, when it is detected that the other user is wearing a mask, creates a facial expression image of the area around the mouth of the other user based on the other movement information received by the receiving unit; and a display control unit that displays the facial expression image created by the image creating unit in accordance with the position of the other user's face.

 第13態様によれば、マスクを装着している他のユーザの口部周辺の表情画像を作成して表示でき、その結果、ユーザ同士のコミュニケーションを円滑化できる。 According to the thirteenth aspect, an image of the facial expression around the mouth of another user wearing a mask can be created and displayed, thereby facilitating communication between users.

 本開示の第14態様に係る眼鏡型デバイスは、第1~第13態様のいずれか一つにおいて、前記フロント部は、情報を表示する表示部を有し、前記第2接触部の誤装着を検出する誤装着検出部と、前記誤装着検出部によって前記第2接触部の誤装着が検出された場合に、誤装着を報知する報知情報を前記表示部に表示する表示制御部と、をさらに備えると良い。 In the eyeglass-type device according to a fourteenth aspect of the present disclosure, in any one of the first to thirteenth aspects, the front section preferably has a display section that displays information, and further includes a misplacement detection section that detects misplacement of the second contact section, and a display control section that, when the misplacement detection section detects misplacement of the second contact section, displays notification information on the display section that notifies the user of the misplacement.

 第14態様によれば、第2接触部の誤装着に起因する誤動作又は精度低下等を防止できる。 According to the fourteenth aspect, malfunctions or reduced accuracy caused by incorrect attachment of the second contact portion can be prevented.

 本開示の第15態様に係る眼鏡型デバイスは、第1~第14態様のいずれか一つにおいて、前記第1生体信号は、前記ユーザの筋電、眼電、脳波、筋肉の形状変化、筋肉の硬度変化、生体が発生する音響、皮膚温度、皮膚導電性、呼吸、発汗、心拍、及び血圧の少なくとも一つを含むと良い。 In the eyeglass-type device according to a fifteenth aspect of the present disclosure, in any one of the first to fourteenth aspects, the first biological signal may include at least one of the user's electromyography, electrooculography, electroencephalography, changes in muscle shape, changes in muscle hardness, sounds generated by the living body, skin temperature, skin conductivity, respiration, sweating, heart rate, and blood pressure.

 第15態様によれば、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 According to the fifteenth aspect, the user's intention to operate the object to be operated can be estimated with high accuracy.

 本開示の第16態様に係る情報処理方法は、フロント部と、前記フロント部に接続されたテンプル部と、前記テンプル部に接続されたモダン部と、ユーザの鼻根又は眉間に接触して前記フロント部を支持する第1接触部と、前記モダン部に接続された接続部を有し、前記ユーザの顎下に接触して前記モダン部を支持する第2接触部と、前記第1接触部に配置され前記ユーザの鼻根又は眉間から前記ユーザの第1生体信号を取得する第1センサと、前記第2接触部に配置され前記ユーザの顎下から前記ユーザの第2生体信号を取得する第2センサと、を備える眼鏡型デバイスにおいて、前記眼鏡型デバイスに搭載された情報処理装置が実行する情報処理方法であって、前記第1センサから前記第1生体信号を取得し、前記第2センサから前記第2生体信号を取得し、取得した前記第1生体信号及び前記第2生体信号に基づいて、操作対象のオブジェクトに対するユーザの操作意図を推定し、前記オブジェクトに対して、推定した前記操作意図に対応する操作を実行する。 An information processing method according to a sixteenth aspect of the present disclosure is an information processing method for an eyeglass-type device comprising a front section, temple sections connected to the front section, end pieces connected to the temple sections, a first contact section that contacts the bridge of the user's nose or between the eyebrows to support the front section, a second contact section having a connection section connected to the end pieces and that contacts under the user's chin to support the end pieces, a first sensor that is disposed in the first contact section and acquires a first biometric signal of the user from the bridge of the user's nose or between the eyebrows, and a second sensor that is disposed in the second contact section and acquires a second biometric signal of the user from under the user's chin. The information processing method is executed by an information processing device mounted on the eyeglass-type device, and acquires the first biometric signal from the first sensor and the second biometric signal from the second sensor, and estimates the user's operational intention with respect to an object to be operated based on the acquired first biometric signal and second biometric signal, and performs an operation on the object corresponding to the estimated operational intention.

 第16態様によれば、眼鏡型デバイスのずれを低減できるため、第1生体信号及び第2生体信号の取得エラーを抑制できる。その結果、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 According to the sixteenth aspect, misalignment of the eyeglass-type device can be reduced, thereby suppressing errors in acquiring the first biological signal and the second biological signal. As a result, the user's intention to operate the object to be operated can be estimated with high accuracy.

 本開示の第17態様に係るプログラムは、フロント部と、前記フロント部に接続されたテンプル部と、前記テンプル部に接続されたモダン部と、ユーザの鼻根又は眉間に接触して前記フロント部を支持する第1接触部と、前記モダン部に接続された接続部を有し、前記ユーザの顎下に接触して前記モダン部を支持する第2接触部と、前記第1接触部に配置され前記ユーザの鼻根又は眉間から前記ユーザの第1生体信号を取得する第1センサと、前記第2接触部に配置され前記ユーザの顎下から前記ユーザの第2生体信号を取得する第2センサと、を備える眼鏡型デバイスにおいて、前記眼鏡型デバイスに搭載された情報処理装置に処理を実行させるためのプログラムであって、前記処理は、前記第1センサから前記第1生体信号を取得し、前記第2センサから前記第2生体信号を取得し、取得した前記第1生体信号及び前記第2生体信号に基づいて、操作対象のオブジェクトに対するユーザの操作意図を推定し、前記オブジェクトに対して、推定した前記操作意図に対応する操作を実行する。 A seventeenth aspect of the present disclosure provides a program for causing an information processing device mounted on an eyeglass-type device to execute processing in the eyeglass-type device, the program comprising: a front section; temple sections connected to the front section; end sections connected to the temple sections; a first contact section that contacts the bridge of the user's nose or between the eyebrows to support the front section; a second contact section having a connection section connected to the end sections and that contacts the user's under-chin to support the end sections; a first sensor disposed in the first contact section to acquire a first biometric signal of the user from the bridge of the user's nose or between the eyebrows; and a second sensor disposed in the second contact section to acquire a second biometric signal of the user from under the user's chin, the program acquiring the first biometric signal from the first sensor, acquiring the second biometric signal from the second sensor, estimating the user's operational intention with respect to an object to be operated based on the acquired first and second biometric signals, and performing an operation on the object corresponding to the estimated operational intention.

 第17態様によれば、眼鏡型デバイスのずれを低減できるため、第1生体信号及び第2生体信号の取得エラーを抑制できる。その結果、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 According to the seventeenth aspect, misalignment of the eyeglass-type device can be reduced, thereby suppressing errors in acquiring the first biological signal and the second biological signal. As a result, the user's intention to operate the object to be operated can be estimated with high accuracy.

 本開示は、このような方法又は装置に含まれる特徴的な各構成をコンピュータに実行させるプログラム、或いはこのプログラムによって動作するシステムとして実現することもできる。また、このようなコンピュータプログラムを、CD-ROM等のコンピュータ読取可能な非一時的な記録媒体あるいはインターネット等の通信ネットワークを介して流通させることができるのは、言うまでもない。 The present disclosure can also be realized as a program that causes a computer to execute each of the characteristic components included in such a method or apparatus, or as a system operated by this program. It goes without saying that such a computer program can also be distributed on a computer-readable, non-transitory recording medium such as a CD-ROM, or via a communications network such as the Internet.

 (本開示の実施形態)
 以下、本開示の実施形態について、図面を用いて詳細に説明する。異なる図面において同一の符号を付した要素は、同一又は相応する要素を示すものとする。また、以下の実施形態で示される構成要素、構成要素の配置位置、接続形態、及び動作の順序等は、一例であり、本開示を限定する趣旨ではない。本開示は、特許請求の範囲だけによって限定される。よって、以下の実施形態における構成要素のうち、本開示の最上位概念を示す独立請求項に記載されていない構成要素については、本開示の課題を達成するのに必ずしも必要ではないが、より好ましい形態を構成するものとして説明される。
(Embodiments of the present disclosure)
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. Elements with the same reference numerals in different drawings indicate the same or corresponding elements. Furthermore, the components, the layout positions of the components, the connection forms, the order of operations, etc. shown in the following embodiments are merely examples and are not intended to limit the present disclosure. The present disclosure is limited only by the claims. Therefore, among the components in the following embodiments, components that are not described in the independent claims that represent the highest concept of the present disclosure are not necessarily required to achieve the objectives of the present disclosure, but are described as constituting more preferred forms.

 本実施形態では、本開示の適用対象としてARグラスを用いる例について説明するが、本開示の適用対象はこれに限らない。本開示は、AR、VR、又はMR等のクロスリアリティ技術を用いた、ユーザが装着する眼鏡型デバイスに広く適用可能である。また、本開示は、クロスリアリティ技術を用いなくても、生体信号を取得する眼鏡型デバイス全般に広く適用可能である。 In this embodiment, an example will be described in which AR glasses are used as the target of application of this disclosure, but the target of application of this disclosure is not limited to this. This disclosure is widely applicable to eyeglass-type devices worn by a user that use cross-reality technology such as AR, VR, or MR. Furthermore, this disclosure is widely applicable to eyeglass-type devices in general that acquire biosignals, even if they do not use cross-reality technology.

 図1及び図2はそれぞれ、本開示の実施形態に係るARグラス1の構成例を、ユーザによって装着された状態で簡略化して示す正面図及び側面図である。 FIGS. 1 and 2 are respectively a front view and a side view showing a simplified example of the configuration of AR glasses 1 according to an embodiment of the present disclosure, worn by a user.

 ARグラス1は、フロント部11、テンプル部12、及びモダン部13を備える。フロント部11は、レンズ113が固定される左右のリム111と、左右のリム111の内端間を接続するブリッジ112とを有する。テンプル部12の前端は、リム111の外端に接続される。テンプル部12の後端は、モダン部13の上端に接続される。モダン部13は、ユーザの耳に掛けやすい湾曲した構造を有する。 The AR glasses 1 comprise a front portion 11, temple portions 12, and end pieces 13. The front portion 11 has left and right rims 111 to which lenses 113 are fixed, and a bridge 112 connecting the inner ends of the left and right rims 111. The front ends of the temple portions 12 are connected to the outer ends of the rims 111. The rear ends of the temple portions 12 are connected to the upper ends of the end pieces 13. The end pieces 13 have a curved structure that makes them easy to wear over the user's ears.

 また、ARグラス1は、第1接触部21及び第2接触部22を備える。第1接触部21は、ユーザの眉間に接触してフロント部11を支持する。第2接触部22は、ユーザの顎下に接触してモダン部13を支持する。 The AR glasses 1 also include a first contact portion 21 and a second contact portion 22. The first contact portion 21 contacts the area between the user's eyebrows and supports the front portion 11. The second contact portion 22 contacts the area under the user's chin and supports the end piece 13.

 第1接触部21は、眉間パッド211及び電極212を有する。眉間パッド211は、ユーザの眉間の傾斜に沿って下方から眉間の皮膚表面に接触するパッドである。眉間パッド211は、図略のパッドアームを介してブリッジ112又はリム111に接続される。電極212は、眉間パッド211に配置される。一つの眉間パッド211に複数の電極212が配置されても良い。電極212は、後述する第1センサ33が有する。第1センサ33は、ユーザの眉間から電極212を介して第1生体信号D5を取得する。第1生体信号D5は、ユーザの鼻根筋の筋電、眼電、脳波、筋肉の形状変化、筋肉の硬度変化、生体が発生する音響、皮膚温度、皮膚導電性、呼吸、発汗、心拍、及び血圧の少なくとも一つを含む。 The first contact portion 21 has a glabella pad 211 and an electrode 212. The glabella pad 211 is a pad that contacts the skin surface between the user's eyebrows from below, following the slope of the area between the eyebrows. The glabella pad 211 is connected to the bridge 112 or the rim 111 via a pad arm (not shown). The electrode 212 is disposed on the glabella pad 211. Multiple electrodes 212 may be disposed on one glabella pad 211. The electrode 212 is included in the first sensor 33, which will be described later. The first sensor 33 acquires a first biological signal D5 from between the user's eyebrows via the electrode 212. The first biological signal D5 includes at least one of the following: electromyography (EMG) of the user's procerus muscle, electrooculography (EOG), electroencephalogram (EEG), changes in muscle shape, changes in muscle hardness, sounds generated by the living body, skin temperature, skin conductivity, respiration, sweating, heart rate, and blood pressure.

 第2接触部22は、顎下パッド221、電極222、アーム223、及び接続部224を有する。顎下パッド221は、ユーザの顎下の傾斜に沿って下方から顎下の皮膚表面に接触するパッドである。顎下パッド221は、アーム223の下端に接続される。アーム223の素材は、例えば樹脂である。アーム223の上端は、接続部224の下端に接続される。接続部224の上端は、モダン部13の下端に接続される。電極222は、顎下パッド221に配置される。一つの顎下パッド221に複数の電極222が配置されても良い。電極222は、後述する第2センサ34が有する。第2センサ34は、ユーザの顎下から電極222を介して第2生体信号D6を取得する。第2生体信号D6は、ユーザの広頸筋の筋電を含む。 The second contact portion 22 has a subchin pad 221, an electrode 222, an arm 223, and a connection portion 224. The subchin pad 221 is a pad that contacts the skin surface under the user's chin from below, following the slope of the area under the chin. The subchin pad 221 is connected to the lower end of the arm 223. The arm 223 is made of, for example, resin. The upper end of the arm 223 is connected to the lower end of the connection portion 224. The upper end of the connection portion 224 is connected to the lower end of the end piece 13. The electrode 222 is disposed on the subchin pad 221. Multiple electrodes 222 may be disposed on one subchin pad 221. The electrode 222 is included in the second sensor 34, which will be described later. The second sensor 34 acquires a second biosignal D6 from under the user's chin via the electrode 222. The second biosignal D6 includes the myoelectric potential of the user's platysma muscle.

 図3は、接続部224の第1の構成例を簡略化して示す図である。接続部224は、コイルばね等を用いた付勢部材224Aを有する。付勢部材224Aは、第2接触部22に対してモダン部13の下端を直線的に引き付ける方向(矢印Y1が示す方向)にモダン部13を付勢する。これにより、接続部224は、モダン部13を支点としてテンプル部12を押し上げる方向(矢印Y2が示す方向)にモダン部13を付勢し、その結果、眉間パッド211がユーザの眉間に密着する。 Figure 3 is a simplified diagram showing a first example configuration of the connection portion 224. The connection portion 224 has a biasing member 224A that uses a coil spring or the like. The biasing member 224A biases the end piece 13 in a direction that linearly attracts the lower end of the end piece 13 toward the second contact portion 22 (the direction indicated by arrow Y1). As a result, the connection portion 224 biases the end piece 13 in a direction that pushes up the temple portion 12 (the direction indicated by arrow Y2) with the end piece 13 as a fulcrum, resulting in the glabella pad 211 coming into close contact with the user's eyebrows.

 図4は、接続部224の第2の構成例を簡略化して示す図である。接続部224は、渦巻きばね等を用いた付勢部材224Bを有する。付勢部材224Bは、第2接触部22に対してモダン部13の下端を後傾回転させる方向(矢印Y3が示す方向)にモダン部13を付勢する。これにより、接続部224は、モダン部13を支点としてテンプル部12を押し上げる方向(矢印Y2が示す方向)にモダン部13を付勢し、その結果、眉間パッド211がユーザの眉間に密着する。 Figure 4 is a simplified diagram showing a second example configuration of the connection part 224. The connection part 224 has a biasing member 224B that uses a spiral spring or the like. The biasing member 224B biases the end piece 13 in a direction that rotates the lower end of the end piece 13 backward relative to the second contact part 22 (the direction indicated by arrow Y3). As a result, the connection part 224 biases the end piece 13 in a direction that pushes up the temple part 12 (the direction indicated by arrow Y2) with the end piece 13 as a fulcrum, resulting in the glabella pad 211 coming into close contact with the user's eyebrows.
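
As a rough illustration only (this lever model and the moment arms d_s and d_f are assumptions of this note, not part of the disclosure), the effect of both biasing configurations can be read from a moment balance about the end piece 13, which acts as the fulcrum:

    % Illustrative lever model; quantities are assumed for illustration, not disclosed values.
    \begin{aligned}
      M_{\mathrm{pivot}} &= F_s \, d_s
        && \text{(FIG. 3: pull } F_s \text{ of biasing member 224A at moment arm } d_s\text{)} \\
      M_{\mathrm{pivot}} &= M_s
        && \text{(FIG. 4: torque } M_s \text{ of spiral-spring biasing member 224B)} \\
      F_{\mathrm{front}} &\approx \frac{M_{\mathrm{pivot}}}{d_f}
        && \text{(upward force at the front, a distance } d_f \text{ from the fulcrum)}
    \end{aligned}

In either case the moment about the end piece translates into an upward force that presses the glabella pad 211 against the area between the eyebrows, which is why both configurations have the same qualitative effect.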

 図5は、ARグラス1の機能構成を簡略化して示す図である。ARグラス1は、表示部31、処理部32、第1センサ33、第2センサ34、記憶部35、及び内部カメラ36を備える。表示部31は、液晶ディスプレイ又は有機ELディスプレイ等を用いて構成され、レンズ113の内面が表示部31の表示面となる。処理部32は、CPU等のプロセッサ(情報処理装置)を用いて構成される。第1センサ33は、ユーザの眉間から電極212を介して第1生体信号D5を取得する。第2センサ34は、ユーザの顎下から電極222を介して第2生体信号D6を取得する。記憶部35は、半導体メモリ等を用いて構成される。内部カメラ36は、光学系及びCMOSイメージセンサ等を用いて構成される。なお、ARグラス1は、第1センサ33及び第2センサ34の一方を備えても良いし、双方を備えても良い。 Figure 5 is a simplified diagram showing the functional configuration of the AR glasses 1. The AR glasses 1 include a display unit 31, a processing unit 32, a first sensor 33, a second sensor 34, a memory unit 35, and an internal camera 36. The display unit 31 is configured using an LCD display or an organic EL display, etc., and the inner surface of the lens 113 serves as the display surface of the display unit 31. The processing unit 32 is configured using a processor (information processing device) such as a CPU. The first sensor 33 acquires a first biosignal D5 from between the user's eyebrows via an electrode 212. The second sensor 34 acquires a second biosignal D6 from under the user's chin via an electrode 222. The memory unit 35 is configured using a semiconductor memory, etc. The internal camera 36 is configured using an optical system and a CMOS image sensor, etc. The AR glasses 1 may include either the first sensor 33 or the second sensor 34, or both.

 記憶部35には、プログラム41及び図形データ42が記憶される。図形データ42は、表示部31に表示されるアイコン等の視線操作用のオブジェクトの画像データを含む。記憶部35は、コンピュータ読み取り可能な不揮発性の記憶媒体を含む。プログラム41は、当該記憶媒体に記憶されている。 The memory unit 35 stores a program 41 and graphic data 42. The graphic data 42 includes image data of objects for eye-gaze operation, such as icons, displayed on the display unit 31. The memory unit 35 includes a computer-readable non-volatile storage medium. The program 41 is stored in this storage medium.

 内部カメラ36は、リム111の内部側(ユーザの顔部側)に配置される。内部カメラ36は、ARグラス1を装着しているユーザの眼を撮影し、その画像データD1を出力する。 The internal camera 36 is positioned on the inside side of the rim 111 (toward the user's face). The internal camera 36 captures an image of the eyes of the user wearing the AR glasses 1 and outputs the image data D1.

 図6は、処理部32の機能構成を簡略化して示す図である。記憶部35から読み出したプログラム41をプロセッサが実行することによって実現される機能として、処理部32は、視線検出部51、表示制御部52、オブジェクト特定部53、動作推定部54、意図推定部55、及び視線操作実行部56を有する。 FIG. 6 is a simplified diagram showing the functional configuration of the processing unit 32. The processing unit 32 has a gaze detection unit 51, a display control unit 52, an object identification unit 53, a movement estimation unit 54, an intention estimation unit 55, and a gaze operation execution unit 56, which are functions realized by the processor executing the program 41 read from the storage unit 35.

 図7は、処理部32が実行する処理を示すフローチャートである。 Figure 7 is a flowchart showing the processing performed by the processing unit 32.

 まずステップS01において視線検出部51は、内部カメラ36から取得した画像データD1に基づいて、ユーザの視線の方向を検出する。視線検出部51は、ユーザの視線の方向を含む視線情報を示すデータD2を、オブジェクト特定部53に入力する。 First, in step S01, the gaze detection unit 51 detects the direction of the user's gaze based on image data D1 acquired from the internal camera 36. The gaze detection unit 51 inputs data D2 indicating gaze information including the direction of the user's gaze to the object identification unit 53.

 次にステップS02においてオブジェクト特定部53は、視線検出部51から入力されたデータD2と、表示制御部52から入力されたデータD3とに基づいて、ユーザによる視線操作の操作対象のオブジェクトを特定する。データD3は、表示部31に表示されているオブジェクトの位置座標を含む。表示部31には、同時に複数のオブジェクトが表示されても良い。オブジェクト特定部53は、特定したオブジェクトを示すデータD4を、視線操作実行部56に入力する。 Next, in step S02, the object identification unit 53 identifies the object that is the target of the user's gaze operation based on the data D2 input from the gaze detection unit 51 and the data D3 input from the display control unit 52. The data D3 includes the position coordinates of the object displayed on the display unit 31. Multiple objects may be displayed simultaneously on the display unit 31. The object identification unit 53 inputs data D4 indicating the identified object to the gaze operation execution unit 56.
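
A minimal sketch of step S02, assuming a simple two-dimensional hit test between the gaze point and the rectangles of the displayed objects; the class and function names are hypothetical, and the disclosure does not specify how the identification is actually performed.

    # Illustrative sketch only: a simple hit test between the gaze point and displayed objects.
    from dataclasses import dataclass

    @dataclass
    class DisplayedObject:
        """Stands in for one entry of data D3: an operable object and its on-screen rectangle."""
        name: str
        x: float
        y: float
        width: float
        height: float

    def identify_object(gaze_xy, objects):
        """Step S02: return the object (data D4) whose rectangle contains the gaze point (data D2),
        or None when the gaze does not rest on any displayed object."""
        gx, gy = gaze_xy
        for obj in objects:
            if obj.x <= gx <= obj.x + obj.width and obj.y <= gy <= obj.y + obj.height:
                return obj
        return None

    # Usage example with two icons shown on the display unit 31.
    icons = [DisplayedObject("mail", 100, 50, 80, 80), DisplayedObject("camera", 300, 50, 80, 80)]
    hit = identify_object((330.0, 90.0), icons)
    print(hit.name if hit else "no object under gaze")  # -> camera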

 次にステップS03において動作推定部54は、第2センサ34から取得した第2生体信号D6を、機械学習済みのニューラルネットワーク等の推定モデルに入力することにより、第2生体信号D6に基づいてユーザの口、舌、又は喉の動作を推定する。動作推定部54は、推定したユーザの口、舌、又は喉の動作を含む動作情報を示すデータD7を、意図推定部55に入力する。 Next, in step S03, the movement estimation unit 54 inputs the second bio-signal D6 acquired from the second sensor 34 into an estimation model such as a machine-learned neural network, thereby estimating the movement of the user's mouth, tongue, or throat based on the second bio-signal D6. The movement estimation unit 54 inputs data D7 indicating movement information including the estimated movement of the user's mouth, tongue, or throat to the intention estimation unit 55.

 次にステップS04において意図推定部55は、第1センサ33から取得した第1生体信号D5と、動作推定部54から入力されたデータD7とに基づいて、視線操作におけるユーザの意図を推定する。ユーザの意図は、例えば、操作対象のオブジェクトを選択する意図を含む。意図推定部55は、第1生体信号D5及びデータD7を、機械学習済みのニューラルネットワーク等の推定モデルに入力することにより、第1生体信号D5及びデータD7に基づいてユーザの意図を推定する。意図推定部55は、推定したユーザの意図を示すデータD8を、視線操作実行部56に入力する。 Next, in step S04, the intention estimation unit 55 estimates the user's intention in the gaze operation based on the first biometric signal D5 acquired from the first sensor 33 and the data D7 input from the movement estimation unit 54. The user's intention includes, for example, the intention to select an object to be operated. The intention estimation unit 55 estimates the user's intention based on the first biometric signal D5 and the data D7 by inputting the first biometric signal D5 and the data D7 into an estimation model such as a machine-learned neural network. The intention estimation unit 55 inputs data D8 indicating the estimated user's intention to the gaze operation execution unit 56.
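
The two estimation steps S03 and S04 can be pictured with the minimal sketch below. The RMS feature and the fixed threshold are stand-ins chosen for illustration; the disclosure states that machine-learned estimation models (e.g. neural networks) are used, and those models, their inputs, and any threshold values are not specified here.

    # Illustrative sketch only: simple stand-ins replace the trained estimation models.
    import numpy as np

    def estimate_movement(d6_platysma_emg: np.ndarray) -> dict:
        """Step S03 (movement estimation unit 54): derive movement information (data D7)
        from the second biosignal D6; an RMS amplitude stands in for the learned model."""
        rms = float(np.sqrt(np.mean(np.square(d6_platysma_emg))))
        return {"mouth_activity": rms}

    def estimate_intention(d5_first_biosignal: np.ndarray, d7_movement: dict,
                           threshold: float = 0.5) -> dict:
        """Step S04 (intention estimation unit 55): combine the first biosignal D5 with the
        movement information D7 into an operation-intention estimate (data D8)."""
        score = 0.5 * float(np.mean(np.abs(d5_first_biosignal))) + 0.5 * d7_movement["mouth_activity"]
        return {"select_object": score > threshold, "score": score}

    # Usage example with synthetic one-second signal windows.
    rng = np.random.default_rng(0)
    d6 = 0.8 * rng.standard_normal(1000)  # platysma EMG window (arbitrary units)
    d5 = 0.6 * rng.standard_normal(1000)  # glabella-region biosignal window
    d7 = estimate_movement(d6)
    print(estimate_intention(d5, d7))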

 次にステップS05において視線操作実行部56は、意図推定部55から入力されたデータD8に基づいて、ユーザに視線操作を実行する意図があるか否かを判定する。 Next, in step S05, the gaze operation execution unit 56 determines whether the user intends to perform a gaze operation based on the data D8 input from the intention estimation unit 55.

 ユーザに視線操作の意図がない場合(ステップS05:NO)は、ステップS01の処理に戻る。 If the user does not intend to perform gaze control (step S05: NO), the process returns to step S01.

 ユーザに視線操作の意図がある場合(ステップS05:YES)は、次にステップS06において視線操作実行部56は、オブジェクト特定部53から入力されたデータD4で示されるオブジェクトに対して、意図推定部55から入力されたデータD8で示される意図に対応する視線操作を実行する。 If the user intends to perform a gaze operation (step S05: YES), then in step S06, the gaze operation execution unit 56 performs a gaze operation corresponding to the intention indicated by the data D8 input from the intention estimation unit 55 on the object indicated by the data D4 input from the object identification unit 53.
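
 Steps S05 and S06 together reduce to a guard followed by a dispatch. The handler mapping in this sketch is an assumed structure; the disclosure only states that an operation matching the estimated intention is performed on the identified object.

    def execute_gaze_operation(d4_object, d8_intent, operation_handlers):
        # operation_handlers: mapping from an intention name to a callback acting on an object.
        if d4_object is None or d8_intent["intent"] == "no_operation":
            return False  # step S05: NO -> the flow returns to step S01
        handler = operation_handlers.get(d8_intent["intent"])
        if handler is None:
            return False
        handler(d4_object)  # step S06: perform the gaze operation on the identified object
        return True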

 本実施形態によれば、第1接触部21がユーザの眉間に接触してフロント部11を支持するとともに、第2接触部22がユーザの顎下に接触してモダン部13を支持する。これにより、ユーザの後頭部側を横切る接続手段を用いることなく、ARグラス1のずれを低減できる。 According to this embodiment, the first contact portion 21 contacts the user's glabella (between the eyebrows) and supports the front portion 11, while the second contact portion 22 contacts the area under the user's chin and supports the end piece 13. This reduces misalignment of the AR glasses 1 without using a connection means that crosses the back of the user's head.

 また、本実施形態によれば、第1生体信号D5及び第2生体信号D6に基づいて、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 Furthermore, according to this embodiment, the user's operational intention with respect to the object to be operated can be estimated with high accuracy based on the first biological signal D5 and the second biological signal D6.

 また、本実施形態によれば、接続部224がモダン部13を支点としてテンプル部12を押し上げる方向にモダン部13を付勢することにより、眉間パッド211がユーザの眉間に密着するため、ARグラス1のずれの低減効果を向上できる。 Furthermore, according to this embodiment, the connection portion 224 biases the end piece 13 in a direction that pushes up the temple portion 12, with the end piece 13 as a fulcrum, so that the glabella pad 211 comes into close contact with the area between the user's eyebrows, thereby improving the effect of reducing misalignment of the AR glasses 1.

 また、図3に示した付勢部材224Aによれば、モダン部13を支点としてテンプル部12を押し上げる方向の付勢力を適切に付与できる。 Furthermore, the biasing member 224A shown in Figure 3 can appropriately apply a biasing force in the direction of pushing up the temple portion 12 with the end piece 13 as a fulcrum.

 また、図4に示した付勢部材224Bによれば、モダン部13を支点としてテンプル部12を押し上げる方向の付勢力を適切に付与できる。 Furthermore, the biasing member 224B shown in Figure 4 can appropriately apply a biasing force in the direction of pushing up the temple portion 12 with the end piece 13 as a fulcrum.

 また、本実施形態によれば、第1生体信号D5がユーザの鼻根筋の筋電、眼電、脳波、筋肉の形状変化、筋肉の硬度変化、生体が発生する音響、皮膚温度、皮膚導電性、呼吸、発汗、心拍、及び血圧の少なくとも一つを含むことにより、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 Furthermore, according to this embodiment, the first biological signal D5 includes at least one of the electromyography of the user's procerus (nasal root) muscle, electrooculography, electroencephalogram, changes in muscle shape, changes in muscle hardness, sounds generated by the living body, skin temperature, skin conductivity, respiration, sweating, heart rate, and blood pressure, making it possible to estimate the user's operational intention with respect to the object being operated with high accuracy.

 また、本実施形態によれば、第2生体信号D6がユーザの広頸筋の筋電を含むことにより、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 Furthermore, according to this embodiment, the second biological signal D6 includes myoelectric potential of the user's platysma muscle, making it possible to estimate the user's operational intention with respect to the object to be operated with high accuracy.

 また、本実施形態に係る動作推定部54によれば、第2生体信号D6に基づいて、ユーザの口、舌、又は喉の動作を高精度に推定できる。 Furthermore, the movement estimation unit 54 according to this embodiment can estimate the movement of the user's mouth, tongue, or throat with high accuracy based on the second biological signal D6.

 また、本実施形態に係る意図推定部55によれば、ユーザの口、舌、又は喉の動作を示す動作情報(データD7)に基づいて、操作対象のオブジェクトに対するユーザの操作意図を高精度に推定できる。 Furthermore, the intention estimation unit 55 according to this embodiment can estimate with high accuracy the user's intention to operate the object to be operated, based on movement information (data D7) indicating the movement of the user's mouth, tongue, or throat.

 また、本実施形態に係るオブジェクト特定部53によれば、ユーザの視線の方向を示す視線情報(データD2)に基づいて、操作対象のオブジェクトを高精度に特定できる。 Furthermore, the object identification unit 53 according to this embodiment can identify the object to be operated with high accuracy based on line-of-sight information (data D2) indicating the direction of the user's line of sight.

 以下、本開示の様々な変形例について説明する。以下で述べる変形例は任意に組み合わせて適用可能である。 Various variations of this disclosure are described below. The variations described below can be applied in any combination.

 (第1変形例)
 図8は、第1変形例に係るARグラス1の機能構成を簡略化して示す図である。ARグラス1は、図5に示した構成に対して、外部カメラ37をさらに備える。
(First Modification)
FIG. 8 is a simplified diagram showing the functional configuration of the AR glasses 1 according to the first modified example. The AR glasses 1 further include an external camera 37 in addition to the configuration shown in FIG. 5.

 外部カメラ37は、リム111の外部側(ユーザの顔部と反対側)に配置される。外部カメラ37は、ARグラス1を装着しているユーザの視界に相当する画像を撮影し、その画像データD10を出力する。 The external camera 37 is positioned on the outer side of the rim 111 (the side facing away from the user's face). The external camera 37 captures an image corresponding to the field of view of the user wearing the AR glasses 1 and outputs the image data D10.

 図9は、第1変形例に係る処理部32の機能構成を簡略化して示す図である。処理部32は、図6に示した構成に対して、送信部57、受信部58、及び画像作成部59をさらに有する。 Figure 9 is a simplified diagram showing the functional configuration of the processing unit 32 according to the first modified example. The processing unit 32 further includes a transmitting unit 57, a receiving unit 58, and an image creating unit 59 in addition to the configuration shown in Figure 6.

 送信部57は、動作推定部54によって推定されたユーザの口、舌、又は喉の動作を示す動作情報(データD7)を、他のARグラス1Aに送信する。他のARグラス1Aは、例えば、ARグラス1を装着しているユーザと対面で会話をしている会話相手である他のユーザが装着しているARグラスである。 The transmission unit 57 transmits movement information (data D7) indicating the movement of the user's mouth, tongue, or throat estimated by the movement estimation unit 54 to the other AR glasses 1A. The other AR glasses 1A are, for example, AR glasses worn by another user who is a face-to-face conversation partner with the user wearing the AR glasses 1.

 受信部58は、他のARグラス1Aを装着している他のユーザの口、舌、又は喉の動作を示す他の動作情報(データD7A)を、他のARグラス1Aが備える他の送信部57Aから受信する。受信部58は、データD7Aを画像作成部59に入力する。 The receiving unit 58 receives other movement information (data D7A) indicating the movement of the mouth, tongue, or throat of another user wearing another pair of AR glasses 1A from another transmitting unit 57A provided in the other pair of AR glasses 1A. The receiving unit 58 inputs the data D7A to the image creating unit 59.

 画像作成部59は、外部カメラ37から取得した画像データD10と、受信部58から入力されたデータD7Aとに基づいて、他のユーザの表情画像の画像データD11を作成する。画像データD10は、外部カメラ37によって撮影された他のユーザの顔の画像を含む。画像作成部59は、画像データD10に基づき、他のユーザがマスク60を装着していることを検出した場合に、受信部58によって受信された他の動作情報(データD7A)に基づいて、他のユーザの口部周辺の表情画像を作成する。画像作成部59は、作成した表情画像の画像データD11を、表示制御部52に入力する。 The image creation unit 59 creates image data D11 of the facial expression image of the other user based on image data D10 acquired from the external camera 37 and data D7A input from the receiving unit 58. The image data D10 includes an image of the other user's face captured by the external camera 37. When the image creation unit 59 detects that the other user is wearing a mask 60 based on the image data D10, it creates an image of the facial expression around the mouth of the other user based on other movement information (data D7A) received by the receiving unit 58. The image creation unit 59 inputs the image data D11 of the created facial expression image to the display control unit 52.
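
 The behaviour of the image creation unit 59 can be pictured as a detection step followed by a synthesis step. The mask detector and the expression renderer below are placeholder components whose internals are not specified in the disclosure.

    def create_expression_image(image_d10, d7a, mask_detector, expression_renderer):
        # image_d10: frame from the external camera 37 containing the other user's face.
        # d7a: movement information received from the other AR glasses 1A.
        face_region = mask_detector.find_face(image_d10)
        if face_region is None or not mask_detector.is_mask_worn(image_d10, face_region):
            return None  # no mask detected: no expression image needs to be synthesised
        # Synthesise the region around the mouth from the received movement information
        # and fit it to the detected face region.
        return expression_renderer.render(d7a, face_region)  # image data D11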

 図10は、外部カメラ37によって撮影された他のユーザの顔の画像を示す図である。他のユーザはマスク60を装着しており、他のユーザの鼻及び口はマスク60によって隠されている。 Figure 10 shows an image of the face of another user captured by the external camera 37. The other user is wearing a mask 60, and the other user's nose and mouth are hidden by the mask 60.

 図11は、マスク60の装着例を示す側面図である。磁石、ボタン、又は紐等の任意の固定具によって、第2接触部22のアーム223等にマスク60の左右両端部を固定する。これにより、マスクの紐を耳に引っ掛ける態様よりも高い保持力でマスク60を装着できる。さらに、上記任意の固定具によってマスク60の上端部をリム111の下端部に固定しても良い。 Figure 11 is a side view showing an example of how the mask 60 is worn. Both left and right ends of the mask 60 are fixed to the arms 223 of the second contact portion 22 using any fastener such as a magnet, button, or string. This allows the mask 60 to be worn with a stronger holding force than when the mask strings are hooked onto the ears. Furthermore, the upper end of the mask 60 may be fixed to the lower end of the rim 111 using any of the above fasteners.

 図9を参照して、表示制御部52は、画像データD11に基づき、画像作成部59によって作成された他のユーザの口部周辺の表情画像を、表示部31の表示面上で他のユーザの顔の位置に合わせて重畳表示する。 Referring to FIG. 9, the display control unit 52 superimposes and displays an image of the facial expression around the mouth of another user, created by the image creation unit 59 based on the image data D11, on the display surface of the display unit 31, in accordance with the position of the face of the other user.

 図12は、表示部31における他のユーザの顔の表示例を示す図である。あたかもマスク60を透視したかのように、他のユーザの口部周辺の表情画像が表示部31の表示面上に表示されている。 Figure 12 is a diagram showing an example of the display of another user's face on the display unit 31. An image of the facial expression around the mouth of the other user is displayed on the display surface of the display unit 31, as if seen through a mask 60.

 なお、ARグラス1が画像投影部をさらに備え、当該画像投影部が、他のユーザが装着しているマスク60をスクリーンとして、他のユーザの表情画像(画像データD11)を当該スクリーン上に投影しても良い。この場合、マスク60の素材は再帰性反射材であっても良い。 The AR glasses 1 may further include an image projection unit, which may use the mask 60 worn by the other user as a screen and project an image of the other user's facial expression (image data D11) onto the screen. In this case, the material of the mask 60 may be a retroreflective material.

 本変形例によれば、送信部57が送信したユーザの口、舌、又は喉の動作を示す動作情報(データD7)を、他のARグラス1A又は任意の情報処理装置等によって活用できる。 According to this modified example, the movement information (data D7) indicating the movement of the user's mouth, tongue, or throat transmitted by the transmission unit 57 can be used by other AR glasses 1A or any information processing device, etc.

 また、本変形例によれば、マスク60を装着している他のユーザの口部周辺の表情画像を作成して表示でき、その結果、ユーザ同士のコミュニケーションを円滑化できる。 Furthermore, according to this modified example, it is possible to create and display an image of the facial expression around the mouth of another user wearing the mask 60, thereby facilitating communication between users.

 なお、上記任意の情報処理装置は、ユーザの健康を管理する管理装置であっても良い。管理装置は、外部カメラ37の撮影画像(画像データD10)をARグラス1から受信し、当該撮影画像に基づいて、ユーザが現在食事中であり、何を咀嚼中であるかを判定する。管理装置は、ARグラス1から受信したデータD7に基づいて、咀嚼中のユーザの顎の動き、嚥下の回数、又は嚥下のタイミング等を測定する。データD7には、ユーザの顎の開閉度を計測可能なセンサによる計測値を含めても良い。咀嚼回数が少ないユーザは健康リスクが高くなるため、管理装置は、そのようなユーザに対しては警告を出して咀嚼動作の改善を促す。咀嚼回数に応じて健康保険の掛け金をリアルタイムで増減させても良い。また、咀嚼動作に左右非対称等の異常があるユーザは、歯又は口腔内に疾患がある可能性がある。管理装置は、そのようなユーザに対しては警告を出して病院の受診を促す。また、介護施設等において、管理装置がデータD7に基づいてユーザの誤嚥を検出した場合に、警報を出力し、又は担当スタッフに緊急通報を送信しても良い。 The above-mentioned optional information processing device may also be a management device that manages the user's health. The management device receives images (image data D10) captured by the external camera 37 from the AR glasses 1 and, based on the captured images, determines whether the user is currently eating and what they are chewing. Based on data D7 received from the AR glasses 1, the management device measures the user's jaw movement while chewing, the number of times they swallow, or the timing of swallowing. Data D7 may also include measurements taken by a sensor that can measure the degree to which the user's jaw is opened and closed. Because users who chew less frequently are at higher health risk, the management device issues a warning to such users and encourages them to improve their chewing behavior. Health insurance premiums may be increased or decreased in real time depending on the number of chews. Furthermore, users who have abnormalities in their chewing behavior, such as asymmetry, may have dental or oral diseases. The management device issues a warning to such users and encourages them to visit a hospital. Furthermore, in a care facility or the like, if the management device detects a user's aspiration based on data D7, it may output an alarm or send an emergency call to the staff in charge.
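
 For example, the chewing and swallowing measurement described above could be reduced to simple counting over the received movement information; the movement labels, field names, and threshold in this sketch are illustrative assumptions.

    def summarize_mastication(d7_records, min_chews_per_swallow=20):
        # d7_records: time-ordered movement information received from the AR glasses 1;
        # each record is assumed to carry a "movement" label and a "timestamp".
        chew_count = sum(1 for r in d7_records if r["movement"] == "chew")
        swallow_times = [r["timestamp"] for r in d7_records if r["movement"] == "swallow"]
        warnings = []
        if swallow_times and chew_count / len(swallow_times) < min_chews_per_swallow:
            warnings.append("Low chewing count: consider chewing more before swallowing.")
        return {"chew_count": chew_count,
                "swallow_times": swallow_times,
                "warnings": warnings}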

 また、任意の情報処理装置は、英会話等の語学学習の講師が所持する情報端末であっても良い。講師は、ユーザと対面している現実の講師であっても良いし、ユーザと通信可能な遠隔地の講師であっても良いし、バーチャルな講師であっても良い。ARグラス1を装着しているユーザは、表示部31に表示された講師とリアルタイムでの会話が可能である。その際、ユーザの顎又は舌の動きを示すデータD7を、ARグラス1から講師が所持する情報端末に送信する。講師は、受信したデータD7に基づいてユーザの顎又は舌の動きが正しいか否かを判断し、正しい発音の仕方をユーザが装着しているARグラス1にフィードバックする。これにより、耳による発音の違いによる指導に加え、顎や舌の動きに基づいた指導を実現でき、語学学習の促進が期待できる。 Furthermore, the arbitrary information processing device may be an information terminal carried by a language learning instructor, such as an English conversation instructor. The instructor may be a real instructor who is face-to-face with the user, a remote instructor who can communicate with the user, or a virtual instructor. A user wearing the AR glasses 1 can converse in real time with the instructor displayed on the display unit 31. At that time, data D7 indicating the user's jaw or tongue movements is sent from the AR glasses 1 to the information terminal carried by the instructor. The instructor determines whether the user's jaw or tongue movements are correct based on the received data D7, and feeds back the correct way of pronouncing to the AR glasses 1 worn by the user. This makes it possible to provide instruction based on jaw and tongue movements in addition to instruction based on pronunciation differences heard by ear, which is expected to promote language learning.

 (第2変形例)
 図13は、第2変形例に係る処理部32の機能構成を簡略化して示す図である。処理部32は、図6に示した構成に対して、誤装着検出部71をさらに有する。
(Second Modification)
FIG. 13 is a simplified diagram showing the functional configuration of the processing unit 32 according to the second modified example. The processing unit 32 further includes a mis-wearing detection unit 71 in addition to the configuration shown in FIG. 6.

 誤装着検出部71は、電極222の検出値が異常値であること等によって、ユーザによって第2接触部22が誤装着されていることを検出する。誤装着の例としては、第2接触部22がユーザの後頭部側に回されて装着される態様が考えられる。誤装着検出部71は、第2接触部22の誤装着を検出した場合には、誤装着の検出情報D20を表示制御部52に入力する。表示制御部52は、誤装着検出部71から検出情報D20が入力された場合には、誤装着をユーザに報知する画像又はテキストメッセージ等の報知情報を、表示部31に表示する。誤装着の報知の態様は、表示に限らず、警告音又は音声メッセージの出力等であっても良い。 The mis-wearing detection unit 71 detects that the second contact unit 22 has been worn incorrectly by the user, for example, when the detection value of the electrode 222 is an abnormal value. An example of mis-wearing is when the second contact unit 22 is worn turned toward the back of the user's head. When the mis-wearing detection unit 71 detects that the second contact unit 22 has been worn incorrectly, it inputs mis-wearing detection information D20 to the display control unit 52. When the display control unit 52 receives detection information D20 from the mis-wearing detection unit 71, it displays notification information such as an image or text message on the display unit 31 to notify the user of the mis-wearing. The manner in which the mis-wearing is notified is not limited to a display, and may also be the output of a warning sound or voice message, etc.
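
 A minimal sketch of the mis-wearing check, under the assumption that an out-of-range or nearly constant electrode signal is treated as abnormal; the numeric ranges are illustrative.

    import numpy as np

    def detect_mis_wearing(electrode_samples, valid_range=(-2.0, 2.0), min_std=1e-3):
        # electrode_samples: recent detection values of the electrode 222.
        x = np.asarray(electrode_samples, dtype=np.float32)
        out_of_range = bool((x < valid_range[0]).any() or (x > valid_range[1]).any())
        flat_signal = bool(x.std() < min_std)  # e.g. the electrode is not touching the skin
        return out_of_range or flat_signal     # True -> output the detection information D20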

 本変形例によれば、第2接触部22の誤装着に起因するARグラス1の誤動作又は視線操作における推定精度の低下等を防止できる。 This modified example makes it possible to prevent malfunction of the AR glasses 1 or a decrease in estimation accuracy in gaze control due to incorrect attachment of the second contact portion 22.

 なお、第2接触部22の誤装着を防止する簡易策として、「顎の下へ装着して下さい」等のメッセージを顎下パッド221の表面に表記しても良い。 As a simple measure to prevent incorrect attachment of the second contact portion 22, a message such as "Please attach under the chin" may be written on the surface of the subchin pad 221.

 また、ユーザが第2接触部22を容易に装着できるように、接続部224の構成を、伸縮可能な構成又は引っ張り強度を制御可能な構成としても良い。例えば、装着前は引っ張り強度を弱めることによって装着の容易性を確保し、装着の感知後に引っ張り強度を強めることによって顎下パッド221の密着度を高めても良い。引っ張り強度の可変制御には、ユーザの体温に反応する形状記憶合金を用いても良いし、モータ等のアクチュエータを用いても良い。 Furthermore, to allow the user to easily wear the second contact portion 22, the connection portion 224 may be configured to be stretchable or have a controllable tensile strength. For example, ease of wearing may be ensured by weakening the tensile strength before wearing, and the degree of adhesion of the subchin pad 221 may be increased by strengthening the tensile strength after sensing that the pad has been worn. A shape memory alloy that reacts to the user's body temperature may be used to variably control the tensile strength, or an actuator such as a motor may be used.

 また、第2接触部22が正しく装着された場合であっても、ARグラス1のずれによって電極212,222の検出値が大きく変化した場合には、ARグラス1の再装着を促す報知情報を出力しても良い。ARグラス1のずれの検出には、電極212,222の検出値のみならず、内部カメラ36によるユーザの眼の位置の検出情報を用いても良い。 Furthermore, even if the second contact portion 22 is correctly attached, if the detection values of the electrodes 212, 222 change significantly due to misalignment of the AR glasses 1, notification information may be output to prompt the user to reattach the AR glasses 1. To detect misalignment of the AR glasses 1, not only the detection values of the electrodes 212, 222 but also detection information of the user's eye position by the internal camera 36 may be used.
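
 Slippage could likewise be flagged by comparing the current electrode values and the eye position observed by the internal camera 36 against values recorded at wearing time; the tolerances and field names in this sketch are assumptions.

    def detect_slippage(baseline, current, signal_tolerance=0.5, eye_shift_tolerance_px=15):
        # baseline: electrode values and eye position recorded when the glasses were put on.
        # current: the same quantities measured now.
        signal_drift = abs(current["electrode_mean"] - baseline["electrode_mean"])
        eye_shift = abs(current["eye_y_px"] - baseline["eye_y_px"])
        return signal_drift > signal_tolerance or eye_shift > eye_shift_tolerance_px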

 (第3変形例)
 図14は、第3変形例に係る接続部224の構成例を簡略化して示す図である。第1接触部21は、鼻根パッド213及び電極214を有する。鼻根パッド213は、ユーザの鼻根の傾斜に沿って上方から鼻根の皮膚表面に接触するパッドである。鼻根パッド213は、図略のパッドアームを介してブリッジ112又はリム111に接続される。電極214は、鼻根パッド213に配置される。一つの鼻根パッド213に複数の電極214が配置されても良い。電極214は、第1センサ33が有する。第1センサ33は、ユーザの鼻根から電極214を介して第1生体信号D5を取得する。
(Third Modification)
FIG. 14 is a simplified diagram showing an example configuration of the connection unit 224 according to the third modified example. The first contact unit 21 has a nasal root pad 213 and an electrode 214. The nasal root pad 213 is a pad that contacts the skin surface of the user's nasal root from above, following the slope of the nasal root. The nasal root pad 213 is connected to the bridge 112 or the rim 111 via a pad arm (not shown). The electrode 214 is disposed on the nasal root pad 213. Multiple electrodes 214 may be disposed on one nasal root pad 213. The electrode 214 is included in the first sensor 33. The first sensor 33 acquires a first biological signal D5 from the user's nasal root via the electrode 214.

 接続部224は、渦巻きばね等を用いた付勢部材224Cを有する。付勢部材224Cは、第2接触部22に対してモダン部13の下端を前傾回転させる方向(矢印Y4が示す方向)にモダン部13を付勢する。これにより、接続部224は、モダン部13を支点としてテンプル部12を押し下げる方向(矢印Y5が示す方向)にモダン部13を付勢し、その結果、鼻根パッド213がユーザの鼻根に密着する。 The connection portion 224 has a biasing member 224C that uses a spiral spring or the like. The biasing member 224C biases the end piece 13 in a direction that rotates the lower end of the end piece 13 forward relative to the second contact portion 22 (the direction indicated by arrow Y4). As a result, the connection portion 224 biases the end piece 13 in a direction that presses down on the temple portion 12 (the direction indicated by arrow Y5) with the end piece 13 as a fulcrum, resulting in the nose bridge pad 213 coming into close contact with the bridge of the user's nose.

 本変形例によれば、第1接触部21がユーザの鼻根に接触してフロント部11を支持するとともに、第2接触部22がユーザの顎下に接触してモダン部13を支持する。これにより、ユーザの後頭部側を横切る接続手段を用いることなく、ARグラス1のずれを低減できる。 In this modified example, the first contact portion 21 contacts the root of the user's nose to support the front portion 11, and the second contact portion 22 contacts the area under the user's chin to support the end piece 13. This reduces misalignment of the AR glasses 1 without using a connection means that crosses the back of the user's head.

 また、本変形例によれば、接続部224がモダン部13を支点としてテンプル部12を押し下げる方向にモダン部13を付勢することにより、鼻根パッド213がユーザの鼻根に密着するため、ARグラス1のずれの低減効果を向上できる。 Furthermore, according to this modified example, the connection portion 224 biases the end piece 13 in a direction that presses down the temple portion 12, with the end piece 13 as a fulcrum, so that the nose bridge pad 213 adheres closely to the bridge of the user's nose, thereby improving the effect of reducing misalignment of the AR glasses 1.

 また、図14に示した付勢部材224Cによれば、モダン部13を支点としてテンプル部12を押し下げる方向の付勢力を適切に付与できる。 Furthermore, the biasing member 224C shown in Figure 14 can appropriately apply a biasing force in a direction that presses down the temple portion 12 with the end piece 13 as a fulcrum.

 (第4変形例)
 図15は、第4変形例に係るARグラス1の第1の構成例を簡略化して示す図である。ARグラス1は、左右いずれか一方の眼のみに装着されるモノクルスタイルであっても良い。
(Fourth Modification)
FIG. 15 is a simplified diagram showing a first configuration example of the AR glasses 1 according to the fourth modification. The AR glasses 1 may be monocle style glasses worn on only one of the left or right eye.

 図16は、第4変形例に係るARグラス1の第2の構成例を簡略化して示す図である。ARグラス1は、ユーザの耳の位置よりも前方でテンプル部12とアーム223とを繋ぐ接続部225を有しても良い。接続部225は、単なる紐であっても良い。接続部225が矢印Y6で示す下方向にテンプル部12を引っ張ることにより、接続部224は、矢印Y7で示す水平後方にフロント部11及びテンプル部12を付勢する。その結果、鼻根パッド213がユーザの鼻根に密着する。なお、鼻根パッド213に代えて眉間パッド211を使用しても良い。 Figure 16 is a simplified diagram showing a second configuration example of AR glasses 1 relating to the fourth modified example. The AR glasses 1 may have a connection part 225 that connects the temple part 12 and the arm 223 forward of the position of the user's ear. The connection part 225 may simply be a string. When the connection part 225 pulls the temple part 12 downward as indicated by arrow Y6, the connection part 224 urges the front part 11 and temple part 12 horizontally backward as indicated by arrow Y7. As a result, the nasal root pad 213 comes into close contact with the root of the user's nose. Note that the nasal root pad 213 may be replaced with an inter-brow pad 211.

 図17は、第4変形例に係るARグラス1の第3の構成例を簡略化して示す図である。接続部225は、コイルばね等を用いた付勢部材226を有する。付勢部材226は、接続部225に対してテンプル部12を直線的に引き付ける方向(矢印Y6が示す方向)にテンプル部12を付勢する。 Figure 17 is a simplified diagram showing a third configuration example of AR glasses 1 relating to the fourth modified example. The connection portion 225 has a biasing member 226 that uses a coil spring or the like. The biasing member 226 biases the temple portion 12 in a direction that linearly attracts the temple portion 12 toward the connection portion 225 (the direction indicated by arrow Y6).

 図18は、第4変形例に係るARグラス1の第4の構成例を簡略化して示す図である。第2接触部22は、アーム223及び接続部224に代えて、ゴム等の伸縮性素材から成る紐227を有する。紐227自体が伸縮性を有することにより、第2接触部22はユーザの顎のラインによりフィットする。これにより、顎下パッド221はユーザの顎下に安定して密着し、かつ、圧力が分散されるためユーザの負担も軽減する。 Figure 18 is a simplified diagram showing a fourth configuration example of AR glasses 1 relating to the fourth modified example. The second contact portion 22 has a string 227 made of a stretchable material such as rubber, instead of the arm 223 and connection portion 224. Because the string 227 itself is stretchable, the second contact portion 22 fits better to the user's jawline. This allows the subchin pad 221 to fit securely and closely under the user's chin, and the pressure is distributed, reducing the burden on the user.

 図19は、第4変形例に係るARグラス1の第5の構成例を簡略化して示す図である。図14に示した構成から第2接触部22が省略されている。また、電極222に代えてテンプル部12が電極121を有し、電極121からユーザの顎下にかけて導電ペースト122が塗られる。ユーザの顎下で発生した筋電信号は、導電ペースト122を通じて電極121で計測される。 Figure 19 is a simplified diagram showing a fifth configuration example of AR glasses 1 relating to the fourth modified example. The second contact portion 22 has been omitted from the configuration shown in Figure 14. Furthermore, instead of electrode 222, the temple portion 12 has electrode 121, and conductive paste 122 is applied from electrode 121 to under the user's chin. The myoelectric signal generated under the user's chin is measured by electrode 121 via conductive paste 122.

 本開示は、ARグラス、VRグラス、スマートグラス、又はヘッドマウントディスプレイ等の、ユーザが装着する眼鏡型デバイスに広く適用可能である。

 
The present disclosure is widely applicable to eyeglass-type devices worn by a user, such as AR glasses, VR glasses, smart glasses, or head-mounted displays.

Claims (17)

 フロント部と、
 前記フロント部に接続されたテンプル部と、
 前記テンプル部に接続されたモダン部と、
 ユーザの鼻根又は眉間に接触して前記フロント部を支持する第1接触部と、
 前記モダン部に接続された接続部を有し、前記ユーザの顎下に接触して前記モダン部を支持する第2接触部と、
 前記第1接触部に配置され前記ユーザの鼻根又は眉間から前記ユーザの第1生体信号を取得する第1センサ、及び、前記第2接触部に配置され前記ユーザの顎下から前記ユーザの第2生体信号を取得する第2センサ、の少なくとも一方と、
を備える、眼鏡型デバイス。
The front part and
a temple portion connected to the front portion;
an end piece connected to the temple portion;
a first contact portion that contacts the base of the nose or the space between the eyebrows of the user to support the front portion;
a second contact portion having a connection portion connected to the end piece and contacting a portion under the user's chin to support the end piece;
At least one of a first sensor that is disposed at the first contact portion and acquires a first biosignal of the user from the root of the nose or between the eyebrows of the user, and a second sensor that is disposed at the second contact portion and acquires a second biosignal of the user from under the chin of the user;
An eyeglass-type device comprising:
 前記第1センサ及び前記第2センサの双方を備える、
請求項1に記載の眼鏡型デバイス。
Comprising both the first sensor and the second sensor,
The eyeglass-type device according to claim 1 .
 前記接続部は、前記モダン部を支点として前記テンプル部を押し上げる方向に前記モダン部を付勢し、
 前記第1接触部は、前記ユーザの眉間に下方から接触する眉間パッドを有する、
請求項1に記載の眼鏡型デバイス。
the connection portion biases the end piece in a direction pushing up the temple portion with the end piece as a fulcrum,
The first contact portion has an inter-eyebrow pad that contacts the area between the eyebrows of the user from below.
The eyeglass-type device according to claim 1 .
 前記接続部は、前記第2接触部に対して前記モダン部を直線的に引き付ける方向に前記モダン部を付勢する付勢部材を有する、
請求項3に記載の眼鏡型デバイス。
The connection portion has a biasing member that biases the end piece in a direction that linearly attracts the end piece to the second contact portion.
The eyeglass-type device according to claim 3 .
 前記接続部は、前記第2接触部に対して前記モダン部を後傾回転させる方向に前記モダン部を付勢する付勢部材を有する、
請求項3に記載の眼鏡型デバイス。
The connection portion has a biasing member that biases the end piece in a direction that rotates the end piece rearward relative to the second contact portion.
The eyeglass-type device according to claim 3 .
 前記接続部は、前記モダン部を支点として前記テンプル部を押し下げる方向に前記モダン部を付勢し、
 前記第1接触部は、前記ユーザの鼻根に上方から接触する鼻根パッドを有する、
請求項1に記載の眼鏡型デバイス。
the connection portion biases the end piece in a direction that presses down the temple portion with the end piece as a fulcrum,
The first contact portion has a nose bridge pad that contacts the user's nose bridge from above.
The eyeglass-type device according to claim 1 .
 前記接続部は、前記第2接触部に対して前記モダン部を前傾回転させる方向に前記モダン部を付勢する付勢部材を有する、
請求項6に記載の眼鏡型デバイス。
The connection portion has a biasing member that biases the end piece in a direction that rotates the end piece forward relative to the second contact portion.
The eyeglass-type device according to claim 6 .
 前記第2生体信号は、前記ユーザの広頸筋の筋電を含む、
請求項2に記載の眼鏡型デバイス。
The second biological signal includes a myoelectric potential of the user's platysma muscle.
The eyeglass-type device according to claim 2 .
 前記第2生体信号に基づいて、前記ユーザの口、舌、又は喉の動作を推定する動作推定部をさらに備える、
請求項8に記載の眼鏡型デバイス。
Further comprising a movement estimation unit that estimates a movement of the mouth, tongue, or throat of the user based on the second biological signal.
The eyeglass-type device according to claim 8 .
 前記動作推定部によって推定された前記ユーザの口、舌、又は喉の動作を示す動作情報に基づいて、操作対象のオブジェクトに対する前記ユーザの操作意図を推定する意図推定部をさらに備える、
請求項9に記載の眼鏡型デバイス。
an intention estimation unit that estimates an operation intention of the user with respect to an object to be operated, based on movement information indicating a movement of the user's mouth, tongue, or throat estimated by the movement estimation unit;
The eyeglass-type device according to claim 9 .
 前記ユーザの視線の方向を検出する視線検出部と、
 前記視線検出部によって検出された前記ユーザの視線の方向を示す視線情報に基づいて、前記オブジェクトを特定するオブジェクト特定部と、
をさらに備える、
請求項10に記載の眼鏡型デバイス。
a gaze detection unit that detects the direction of the user's gaze;
an object identification unit that identifies the object based on gaze information indicating a direction of the user's gaze detected by the gaze detection unit;
Further provided with
The eyeglass-type device according to claim 10.
 前記動作推定部によって推定された前記ユーザの口、舌、又は喉の動作を示す動作情報を送信する送信部をさらに備える、
請求項9に記載の眼鏡型デバイス。
a transmitter for transmitting movement information indicating a movement of the user's mouth, tongue, or throat estimated by the movement estimation unit;
The eyeglass-type device according to claim 9 .
他の眼鏡型デバイスを装着する他のユーザの口、舌、又は喉の動作を示す他の動作情報を、前記他の眼鏡型デバイスが備える他の送信部から受信する受信部と、
 前記他のユーザがマスクを装着していることを検出した場合に、前記受信部によって受信された前記他の動作情報に基づいて、前記他のユーザの口部周辺の表情画像を作成する画像作成部と、
 前記画像作成部によって作成された前記表情画像を、前記他のユーザの顔の位置に合わせて表示する表示制御部と、
をさらに備える、
請求項12に記載の眼鏡型デバイス。
a receiving unit that receives other motion information indicating a motion of a mouth, tongue, or throat of another user wearing another eyeglass-type device from another transmitting unit included in the other eyeglass-type device;
an image creation unit that, when it is detected that the other user is wearing a mask, creates an image of a facial expression around the mouth of the other user based on the other action information received by the receiving unit;
a display control unit that displays the facial expression image created by the image creation unit in accordance with the position of the face of the other user;
Further provided with
The eyeglass-type device according to claim 12.
 前記フロント部は、情報を表示する表示部を有し、
 前記第2接触部の誤装着を検出する誤装着検出部と、
 前記誤装着検出部によって前記第2接触部の誤装着が検出された場合に、誤装着を報知する報知情報を前記表示部に表示する表示制御部と、
をさらに備える、
請求項1に記載の眼鏡型デバイス。
the front portion has a display portion for displaying information,
a mis-attachment detection unit that detects mis-attachment of the second contact portion;
a display control unit that, when the mis-attachment detection unit detects that the second contact portion is mis-attached, displays, on the display unit, notification information that notifies the mis-attachment;
Further provided with
The eyeglass-type device according to claim 1 .
 前記第1生体信号は、前記ユーザの筋電、眼電、脳波、筋肉の形状変化、筋肉の硬度変化、生体が発生する音響、皮膚温度、皮膚導電性、呼吸、発汗、心拍、及び血圧の少なくとも一つを含む、
請求項1に記載の眼鏡型デバイス。
The first biological signal includes at least one of electromyography, electrooculography, electroencephalography, changes in muscle shape, changes in muscle hardness, sounds generated by a living body, skin temperature, skin conductivity, respiration, sweating, heart rate, and blood pressure of the user.
The eyeglass-type device according to claim 1 .
 フロント部と、前記フロント部に接続されたテンプル部と、前記テンプル部に接続されたモダン部と、ユーザの鼻根又は眉間に接触して前記フロント部を支持する第1接触部と、前記モダン部に接続された接続部を有し、前記ユーザの顎下に接触して前記モダン部を支持する第2接触部と、前記第1接触部に配置され前記ユーザの鼻根又は眉間から前記ユーザの第1生体信号を取得する第1センサと、前記第2接触部に配置され前記ユーザの顎下から前記ユーザの第2生体信号を取得する第2センサと、を備える眼鏡型デバイスにおいて、前記眼鏡型デバイスに搭載された情報処理装置が実行する情報処理方法であって、
 前記第1センサから前記第1生体信号を取得し、
 前記第2センサから前記第2生体信号を取得し、
 取得した前記第1生体信号及び前記第2生体信号に基づいて、操作対象のオブジェクトに対するユーザの操作意図を推定し、
 前記オブジェクトに対して、推定した前記操作意図に対応する操作を実行する、
情報処理方法。
In an eyeglass-type device comprising a front portion, temple portions connected to the front portion, end pieces connected to the temple portions, a first contact portion that contacts the bridge of the user's nose or between the eyebrows to support the front portion, a second contact portion having a connection portion connected to the end pieces and that contacts under the user's chin to support the end pieces, a first sensor that is disposed in the first contact portion and acquires a first biosignal of the user from the bridge of the user's nose or between the eyebrows, and a second sensor that is disposed in the second contact portion and acquires a second biosignal of the user from under the user's chin, an information processing method executed by an information processing device mounted on the eyeglass-type device,
acquiring the first biological signal from the first sensor;
acquiring the second biological signal from the second sensor;
estimating a user's intention to operate an object to be operated based on the acquired first biological signal and the acquired second biological signal;
performing an operation corresponding to the estimated operation intention on the object;
Information processing methods.
 フロント部と、前記フロント部に接続されたテンプル部と、前記テンプル部に接続されたモダン部と、ユーザの鼻根又は眉間に接触して前記フロント部を支持する第1接触部と、前記モダン部に接続された接続部を有し、前記ユーザの顎下に接触して前記モダン部を支持する第2接触部と、前記第1接触部に配置され前記ユーザの鼻根又は眉間から前記ユーザの第1生体信号を取得する第1センサと、前記第2接触部に配置され前記ユーザの顎下から前記ユーザの第2生体信号を取得する第2センサと、を備える眼鏡型デバイスにおいて、前記眼鏡型デバイスに搭載された情報処理装置に処理を実行させるためのプログラムであって、
 前記処理は、
 前記第1センサから前記第1生体信号を取得し、
 前記第2センサから前記第2生体信号を取得し、
 取得した前記第1生体信号及び前記第2生体信号に基づいて、操作対象のオブジェクトに対するユーザの操作意図を推定し、
 前記オブジェクトに対して、推定した前記操作意図に対応する操作を実行する、
プログラム。
In an eyeglass-type device comprising a front portion, temple portions connected to the front portion, end pieces connected to the temple portions, a first contact portion that contacts the bridge of the user's nose or between the eyebrows to support the front portion, a second contact portion that has a connection portion connected to the end pieces and contacts the area under the user's chin to support the end pieces, a first sensor that is disposed in the first contact portion and acquires a first biosignal of the user from the bridge of the user's nose or between the eyebrows, and a second sensor that is disposed in the second contact portion and acquires a second biosignal of the user from the area under the user's chin, a program for causing an information processing device mounted on the eyeglass-type device to execute processing,
The process comprises:
acquiring the first biological signal from the first sensor;
acquiring the second biological signal from the second sensor;
estimating a user's intention to operate an object to be operated based on the acquired first biological signal and the acquired second biological signal;
performing an operation corresponding to the estimated operation intention on the object;
program.
PCT/JP2025/003067 2024-03-29 2025-01-30 Eyeglasses-type device, information processing method, and program Pending WO2025204110A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2024-056548 2024-03-29
JP2024056548 2024-03-29

Publications (1)

Publication Number Publication Date
WO2025204110A1 true WO2025204110A1 (en) 2025-10-02

Family

ID=97215436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2025/003067 Pending WO2025204110A1 (en) 2024-03-29 2025-01-30 Eyeglasses-type device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2025204110A1 (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997033190A1 (en) * 1996-03-07 1997-09-12 Newline Surf Pty. Ltd. Sports sunglasses
JP2002156612A (en) * 2000-11-17 2002-05-31 Poochie Pompreece:Kk Spectacles for pet
CN203786386U (en) * 2014-04-17 2014-08-20 华北石油管理局总医院 Medical magnifier
WO2016194849A1 (en) * 2015-06-01 2016-12-08 アルプス電気株式会社 Glasses-type electronic device
JP2020036027A (en) * 2015-12-25 2020-03-05 三井化学株式会社 Piezoelectric substrate, piezoelectric fabric, piezoelectric knit, piezoelectric device, force sensor, actuator, and biological information acquisition device
JP2019512713A (en) * 2016-01-05 2019-05-16 サフィーロ・ソシエタ・アツィオナリア・ファブリカ・イタリアナ・ラボラツィオーネ・オッチアリ・エス・ピー・エー Eyeglasses with biometric signal sensor
JP2017206067A (en) * 2016-05-16 2017-11-24 株式会社東芝 Cap with electrooculogram detection electrode, headwear with electrooculogram detection electrode, and alert method using electrooculogram detection
US20170367423A1 (en) * 2016-06-23 2017-12-28 Six Flags Theme Parks, Inc. Headband for virtual reality goggles
JP2020502589A (en) * 2016-12-13 2020-01-23 サフィーロ・ソシエタ・アツィオナリア・ファブリカ・イタリアナ・ラボラツィオーネ・オッチアリ・エス・ピー・エー Glasses with biosensor
CN213338207U (en) * 2020-11-23 2021-06-01 梁少菊 Local magnifying glass for gastroenterology

Similar Documents

Publication Publication Date Title
CA3095287C (en) Augmented reality systems for time critical biomedical applications
EP3064130A1 (en) Brain activity measurement and feedback system
US10172552B2 (en) Method for determining and analyzing movement patterns during dental treatment
Kwon et al. Emotion recognition using a glasses-type wearable device via multi-channel facial responses
Bulling et al. What's in the Eyes for Context-Awareness?
CN105578954A (en) Physiological parameter measurement and feedback system
KR20190005219A (en) Augmented Reality Systems and Methods for User Health Analysis
JP2022548473A (en) System and method for patient monitoring
WO2019071166A1 (en) Multi-disciplinary clinical evaluation in virtual or augmented reality
JP7320261B2 (en) Information processing system, method, and program
Gao et al. Wearable technology for signal acquisition and interactive feedback in autism spectrum disorder intervention: A review
CN111933277A (en) Method, device, equipment and storage medium for detecting 3D vertigo
Gjoreski et al. OCOsense glasses–monitoring facial gestures and expressions for augmented human-computer interaction: OCOsense glasses for monitoring facial gestures and expressions
JP4730621B2 (en) Input device
WO2025204110A1 (en) Eyeglasses-type device, information processing method, and program
US20220240802A1 (en) In-ear device for blood pressure monitoring
CN113995416A (en) Apparatus and method for displaying user interface in glasses
Gemicioglu et al. TongueTap: Multimodal Tongue Gesture Recognition with Head-Worn Devices
WO2023129390A1 (en) Monitoring cardiac activity using an in-ear device
WO2022237954A1 (en) Eye tracking module wearable by a human being
Matthies et al. Wearable Sensing of Facial Expressions and Head Gestures
Peña-Cortés et al. Warning and rehabilitation system using brain computer interface (BCI) in cases of bruxism
KR102877504B1 (en) BCI-based Neuro Robotics System
CN222804286U (en) A wearable mask for monitoring critically ill patients
US20240103285A1 (en) Integrated health sensors

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25778578

Country of ref document: EP

Kind code of ref document: A1