
US20180139567A1 - Method, system and apparatus for measuring head size using a magnetic sensor mounted on a personal audio delivery device - Google Patents


Info

Publication number
US20180139567A1
US20180139567A1 (application US15/811,386)
Authority
US
United States
Prior art keywords
delivery device
personal audio
head
audio delivery
head size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/811,386
Other versions
US9992603B1 (en)
Inventor
Kapil Jain
Abhilash Mathew
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EmbodyVR Inc
Original Assignee
EmbodyVR Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EmbodyVR Inc
Priority to US15/811,386 (granted as US9992603B1)
Assigned to EmbodyVR, Inc. Assignors: JAIN, KAPIL; MATHEW, ABHILASH
Publication of US20180139567A1
Application granted
Publication of US9992603B1
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802Sensor mounted on worn items
    • A61B5/6803Head-worn items, e.g. helmets, masks, headphones or goggles
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/12Audiometering
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/12Audiometering
    • A61B5/121Audiometering evaluating hearing capacity
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6813Specially adapted to be attached to a specific body part
    • A61B5/6814Head
    • A61B5/6815Ear
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B7/00Measuring arrangements characterised by the use of electric or magnetic techniques
    • G01B7/14Measuring arrangements characterised by the use of electric or magnetic techniques for measuring distance or clearance between spaced objects or spaced apertures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/005Details of transducers, loudspeakers or microphones using digitally weighted transducing elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1008Earpieces of the supra-aural or circum-aural type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1058Manufacture or assembly
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/22Arrangements for obtaining desired frequency or directional characteristics for obtaining desired frequency characteristic only 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/323Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only for loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/04Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/04Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S3/00Systems employing more than two channels, e.g. quadraphonic
    • H04S3/008Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/301Automatic calibration of stereophonic sound system, e.g. with test microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/302Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303Tracking of listener position or orientation
    • H04S7/304For headphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/305Electronic adaptation of stereophonic audio signals to reverberation of the listening space
    • H04S7/306For headphones
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/40Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4005Detecting, measuring or recording for evaluating the nervous system for evaluating the sensory system
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient; User input means
    • A61B5/7405Details of notification to user or communication with user or patient; User input means using sound
    • A61B5/7415Sound rendering of measured values, e.g. by pitch or volume variation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1016Earpieces of the intra-aural type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R11/00Transducers of moving-armature or moving-core type
    • H04R11/02Loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/02Details casings, cabinets or mounting therein for transducers covered by H04R1/02 but not provided for in any of its subgroups
    • H04R2201/029Manufacturing aspects of enclosures transducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/77Design aspects, e.g. CAD, of hearing aid tips, moulds or housings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/033Headphones for stereophonic communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2400/00Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2420/00Techniques used stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field

Definitions

  • the disclosure is related to consumer goods and, more particularly, to a personal audio delivery device such as a headphone arranged to facilitate determining head size of a person wearing the personal audio delivery device based on a magnetic sensor mounted on the personal audio delivery device.
  • the head size may be used to facilitate spatial localization of sound heard by the person while wearing the personal audio delivery device.
  • a human auditory system includes an outer ear, middle ear, and inner ear. With the outer ear, middle ear, and inner ear, the human auditory system is able to hear sound.
  • a sound source such as a loudspeaker in a room may output sound.
  • a pinna of the outer ear receives the sound, directs the sound to an ear canal of the outer ear, which in turn directs the sound to the middle ear.
  • the middle ear of the human auditory system transfers the sound into fluids of an inner ear for conversion into nerve impulses.
  • a brain interprets the nerve impulses to hear the sound.
  • the human auditory system is able to perceive the direction where the sound is coming from.
  • the perception of direction of the sound source is based on interactions with human anatomy.
  • the interaction includes the sound reflecting and/or reverberating and diffracting off a head, shoulder and pinna.
  • the interaction generates audio cues which are decoded by the brain to perceive the direction where the sound is coming from.
  • the personal audio delivery devices output sound, e.g., music, into the ear canal of the outer ear.
  • a user wears an earcup seated on the pinna which outputs the sound into the ear canal.
  • a bone conduction headset vibrates middle ear bones to conduct the sound to the human auditory system.
  • the personal audio delivery devices accurately reproduce sound. But unlike sound from a sound source in a room, the sound from the personal audio delivery devices does not interact with the human anatomy in a way that makes the direction the sound is coming from accurately perceptible.
  • the seating of the earcup on the pinna prevents the sound from the personal audio delivery device from interacting with the pinna, and bone conduction may bypass the pinna altogether. Audio cues indicative of direction are not generated, and as a result the person is not able to perceive the direction where the sound is coming from.
  • FIG. 1 is an example visualization of various parameters used for spatial localization of sound
  • FIG. 2 shows aspects of a human anatomy in spatial localization of sound
  • FIG. 3 shows an example of an effect of human anatomy on interaural audio cues
  • FIG. 4 shows an example system for measuring head size
  • FIGS. 5A and 5B show example arrangements of a processing engine in the example system for measuring head size.
  • FIG. 6 shows variables associated with measuring the head size
  • FIG. 7 is an example flow chart of functions associated with using head size to personalize audio reproduction
  • FIG. 8 shows how a magnetic field interacts with an AMR sensor
  • FIG. 9 shows an example of the non-linear transfer function
  • FIGS. 10A-C illustrate example arrangements associated with determining the non-linear transfer function.
  • a sound source may output sound.
  • a direction where the sound comes from may be identified by the human auditory system using one or more audio cues.
  • the audio cues may be sound (e.g., reflections and reverberations) indicative of a spatial location of the sound, e.g., where the sound is coming from.
  • the audio cues may be generated from interactions between the sound, objects in an environment, and human anatomy before reaching the human auditory system. For example, reverberation and reflection from the objects may generate audio cues.
  • aspects of the human anatomy such as head shape, head size, shoulder shape, shoulder size, and outer ear (pinna) structure may generate audio cues.
  • Each person may have different human anatomy. In this regard, the audio cues used by one person to spatially localize the sound may be different for another person.
  • FIG. 1 is an example visualization 100 of parameters which facilitates spatially localizing sound output by a sound source 102 .
  • One or more parameters may describe a relationship between a position of a listener 104 and the sound source 102 .
  • the parameters may include an azimuth 106 , elevation 108 , and a distance and/or velocity 110 / 112 .
  • the azimuth 106 may be an angle in a horizontal plane between the listener 104 and the sound source 102 .
  • the elevation 108 may be an angle in a vertical plane between the listener 104 and the sound source 102 .
  • the distance 110 may be a separation between the listener 104 and the sound source 102 .
  • the velocity 112 may describe a rate of movement of the sound source 102 .
  • Other parameters indicative of location may also be used.
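As an illustration (not part of the patent text), the azimuth, elevation, and distance parameters of FIG. 1 can be computed from the positions of the listener and sound source. The coordinate convention here is an assumption: x forward, y to the listener's left, z up, in meters.

```python
import math

def localization_parameters(listener, source):
    """Compute azimuth, elevation, and distance from a listener
    to a sound source, each given as an (x, y, z) point.

    Assumed convention (not from the patent): x is forward,
    y is to the listener's left, z is up.
    """
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    dz = source[2] - listener[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dy, dx))          # angle in the horizontal plane
    elevation = math.degrees(math.asin(dz / distance))  # angle in the vertical plane
    return azimuth, elevation, distance
```

For example, a source one meter ahead and one meter to the left of the listener lies at a 45-degree azimuth and zero elevation.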
  • FIG. 2 shows aspects of a human anatomy 202 - 208 used in sound localization. Audio cues may be generated based on the interaction of sound with the human anatomy. The audio cues may be indicative of a spatial location from where sound comes from.
  • the human anatomy which is illustrated includes a torso 202 , head 204 with ears 206 , and a pinna 208 .
  • Reflections of sound from the torso 202 may generate an audio cue indicative of elevation and distance from where the sound is coming from, e.g., the sound source. These reflections are modeled as torso effect.
  • Overall shape of the head 204 including ear symmetry and distance D between the ears 206 may generate an audio cue regarding azimuth and elevation from where the sound is coming from. This is modeled as head effect.
  • how sound interacts with the shape, size, and structure of the pinna 208 may generate an audio cue regarding elevation, distance and velocity from where the sound comes from.
  • FIG. 3 shows how the audio cue indicative of azimuth is generated.
  • a person 302 may be located a certain distance away from a sound source 304 .
  • the sound source 304 may output sound 306 which is then perceived by the person at a left ear 308 and a right ear 310 .
  • An interaural time difference represents a difference in time arrival between the two ears 308 , 310 .
  • Sound generated by the sound source 304 , x(t), takes T_L amount of time to reach the left ear 308 and T_R amount of time to reach the right ear 310 .
  • The ITD represents the difference between T_L and T_R.
  • the sound pressure level at the left ear 308 , x_L(t), is different from the one experienced at the right ear 310 , x_R(t).
  • This difference in intensity is represented by an interaural level difference (ILD) audio cue.
  • These audio cues (ITD and ILD) may be different for a different shape and size of head. A bigger head, i.e., a larger distance between the left and right ears 308 , 310 , will generate larger time and intensity differences than a smaller head.
  • the ITD and ILD audio cues may be directly proportional to the azimuth between the listener and the sound source. In this regard, azimuth of the sound source may be perceived. ITD and ILD, however, may be insufficient to further localize the sound source in terms of elevation, distance and velocity of the sound source.
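The dependence of ITD on head size can be sketched with the classic spherical-head (Woodworth) formula, offered here as an illustration rather than the patent's own model:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def itd_woodworth(head_radius_m, azimuth_deg):
    """Interaural time difference for a spherical-head model
    (Woodworth's formula): ITD = (r / c) * (theta + sin(theta)),
    where theta is the azimuth in radians and r the head radius.
    A larger head radius yields a larger ITD at the same azimuth,
    matching the description above.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A bigger head produces a larger time difference at the same azimuth:
itd_small = itd_woodworth(0.07, 45)  # ~7 cm head radius
itd_large = itd_woodworth(0.09, 45)  # ~9 cm head radius
```

For an average head radius (~8.75 cm) and a source at 90 degrees azimuth, this gives an ITD on the order of 0.65 ms.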
  • Personal audio delivery devices such as headphones, hearables, earbuds, speakers, and hearing aids may output sound directly into the human auditory system.
  • an earcup of a headphone may be placed on the pinna and a transducer in the earcup may output sound into the ear canal.
  • the earcup and headphone may cover or partially cover the pinna and head.
  • spatial localization such as elevation, distance and velocity of the sound source may be impaired.
  • the head and pinna might not interact with such sounds so as to generate certain audio cues to perceive the location of the sound, e.g., which direction it is coming from.
  • the audio cues may be artificially generated to facilitate spatial localization in terms of elevation, azimuth, distance and/or velocity.
  • a non-linear transfer function e.g., also referred to as a head related transfer function (HRTF) or simply transfer function, may facilitate generating the audio cues.
  • the non-linear transfer function may characterize how sound is received by a human auditory system based on interaction with the head, torso, shoulder, pinna and other parts of the human anatomy influencing human auditory localization.
  • the non-linear transfer function may be used to artificially generate the audio cues for determining elevation, distance and/or velocity of a sound source, among other cues.
  • Each person may have differences in head shape and size along with differences in features of the pinna and torso. As a result, the non-linear transfer function for one user cannot be used for another user. Such a use would result in audio cues being generated such that a sound source is perceived at a different spatial location from where it is intended to be perceived.
  • Embodiments described herein are directed to a personal audio delivery device arranged to determine head size.
  • the determination of the head size by the personal audio delivery device may facilitate personalization of the non-linear transfer function for generating one or more audio cues for spatial localization of sound.
  • the person may be able to spatialize the location of sound based on the personalized non-linear transfer function.
  • FIG. 4 illustrates an example system 400 for spatial localization.
  • the system 400 may include the personal audio delivery device 402 and a processing engine 404 .
  • the personal audio delivery device 402 may be a headset, hearable, or hearing aid which outputs sound such as voice and music.
  • the personal audio delivery device 402 may have an earcup 406 which is worn on a pinna 408 .
  • the pinna 408 may not be visible externally when the earcup 406 is worn, but the pinna 408 is shown as visible for purposes of illustration.
  • the earcup 406 may have one or more transducers 410 and one or more sensors 412 .
  • the one or more transducers 410 may be a speaker which outputs sound based on conversion of an electrical signal representative of the sound.
  • the one or more sensors 412 may include a magnetic sensor on a headband 414 of the personal audio delivery device 402 .
  • the headband may connect two earcups.
  • the magnetic sensor may take the form of an anisotropic magnetoresistance (AMR) sensor which changes resistance in an externally applied magnetic field or a Hall effect transducer which outputs a varying voltage in response to an externally applied magnetic field.
  • the magnetic sensor may take other forms as well.
  • the magnetic sensor may be positioned at a center of the headband 414 of the personal audio delivery device 402 such that it is equidistant from both earcups.
  • FIGS. 5A and 5B show example arrangements of the processing engine in the example system for spatial localization.
  • the processing engine may process the signals output by the magnetic sensor.
  • the processing engine may take the form of a processor or a server, among other arrangements.
  • FIG. 5A shows an arrangement of a personal audio delivery device 500 with a processing engine in the form of the processor 502 .
  • the processor 502 may be a central processing unit (CPU) local to the personal audio delivery device 500 which executes computer instructions stored in storage, such as memory, to process the signals associated with the one or more magnetic sensors 504 and one or more transducers 506 .
  • the processor 502 may be local when the processor 502 is integrated with the personal audio delivery device 500 .
  • FIG. 5B shows an arrangement of a personal audio delivery device 510 and a processing engine in the form of a server 512 coupled via a network 514 .
  • the server 512 may be a network based computing system.
  • the server 512 may process the signals associated with the one or more magnetic sensors 504 and one or more transducers 506 .
  • the server 512 may be accessible to the personal audio delivery device via the network 514 .
  • the network 514 may take the form of a wired or wireless network.
  • the personal audio delivery device 510 may have communication circuitry 516 for communicating signals 518 with the server 512 , e.g., via WiFi or Ethernet, to facilitate processing of signals associated with the transducers and/or magnetic sensors.
  • Latency associated with processing the signals associated with the magnetic sensor may be less with a local processor as compared to the server.
  • the latency may be less because there is no delay associated with communication to the server.
  • the personal audio delivery device may be powered by a battery. Processing the signals associated with the magnetic sensor on the local processor consumes power from the battery that would otherwise be used by the personal audio delivery device to output sound. However, this power consumption may be minimal if the processing is performed once or a few times to determine a head size of a user of the personal audio delivery device, as described in further detail below. After that, the head size of the user may not need to be determined again until some indication is received (e.g., that the user of the personal audio delivery device is different). For example, a new user may provide an indication to recalculate his head size, which will result in the determination of the head size for the new user. Other variations are also possible.
  • the processing engine may take other forms as well.
  • the processing engine may take the form of the CPU local to the personal audio delivery device and the server.
  • the processing of the signals may be performed locally by the processor at the personal audio delivery device as well as remotely at the server. Yet other variations are also possible.
  • FIG. 6 shows a head 602 on which a personal audio delivery device 604 is worn and variables associated with determining head size.
  • Theta may be an angle between a center 606 of a headband 608 of the personal audio delivery device 604 and an earcup 610 when the personal audio delivery device is worn.
  • T may be equivalent to a physical height of the personal audio delivery device 604 at the center of the headband 608 . T may be known by design of the personal audio delivery device 604 .
  • R may be a distance between the center 606 of the headband 608 and an earcup 610 .
  • One or more of these variables may be used to determine the head size which is represented by a variable 2H, where H is distance between a center of the head 602 and the earcup 610 .
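One plausible reading of the FIG. 6 geometry (the excerpt does not spell out the trigonometry, so the relations below are an interpretation) treats the line of length R from the headband center to the earcup as making angle theta with the vertical, giving a horizontal half-width H = R * sin(theta); when only theta and the known height T are available, R can be recovered as T / cos(theta):

```python
import math

def head_size(theta_deg, R=None, T=None):
    """Estimate head size 2H from the FIG. 6 variables.

    Assumed geometry (an interpretation, not stated in the excerpt):
    the headband-center-to-earcup line of length R makes angle theta
    with the vertical, so H = R * sin(theta). If R is not measured
    directly, it is recovered from the known height T as
    R = T / cos(theta).
    """
    if R is None:
        R = T / math.cos(math.radians(theta_deg))
    H = R * math.sin(math.radians(theta_deg))
    return 2.0 * H
```

Either a measured R (e.g., from a Hall sensor) or the design-known T combined with a measured theta (e.g., from an AMR sensor) is then sufficient to compute 2H.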
  • FIG. 7 is an example flow chart 700 of functions associated with using head size to personalize a non-linear transfer function for a person. These functions may be performed by the example system which includes the personal audio delivery device and processing engine.
  • a sensor signal may be received from a magnetic sensor indicative of an interaction between a magnetic field of a personal audio delivery device and the magnetic sensor.
  • a head size of a head on which the personal audio delivery device is worn may be calculated based on the received sensor signal.
  • a non-linear transfer function may be identified based on the calculated head size. The identified non-linear transfer function may characterize how sound is transformed via the head with the calculated head size.
  • an output signal is generated indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function.
  • the sound associated with the output signal is output by the personal audio delivery device.
  • An individual may wear a personal audio delivery device.
  • the personal audio delivery device may have an earcup which the individual wears on a pinna.
  • a sensor signal may be received from the magnetic sensor on the headband of the personal audio delivery device.
  • the transducer in the earcup may have a magnet which produces the magnetic field. This magnet may be used by the transducer to output sound. This magnetic field may interact with the magnetic sensor which in turn causes the magnetic sensor to output the sensor signal indicative of the interaction.
  • the magnetic sensor may take the form of a Hall sensor or AMR sensor, among other forms.
  • the sensor signal output may be associated with a distance between the Hall sensor and the earcup.
  • the earcup may have a transducer with a magnet.
  • the magnet produces a magnetic field.
  • a strength of the magnetic field at the Hall sensor may be proportional to a distance to the magnet.
  • the Hall sensor may output the sensor signal proportional to the strength of the magnetic field of the magnet.
  • the sensor signal may have a higher voltage if the magnetic field at the Hall sensor is stronger.
  • the sensor signal may have a lower voltage if the magnetic field at the Hall sensor is weaker.
  • the sensor signal may be an indication of R shown in FIG. 6 .
  • the sensor signal provided by the Hall sensor indicative of R may be received by the processing engine.
  • the sensor signal output may be indicative of an angle by which the magnetic field passes through the AMR sensor.
  • the sensor signal may take the form of theta shown in FIG. 6 .
  • theta may be indicative of how much the headband is stretched to fit around the head when worn. A higher theta may be indicative of the headband being stretched more to fit around the head while a lower theta may be indicative of the headband being stretched less to fit around the head.
  • FIG. 8 shows an arrangement 800 with a personal audio delivery device 802 and how a magnetic field 804 interacts with the AMR sensor 806 .
  • the earcup 808 may have a magnet 810 .
  • the transducer may use the magnet 810 to convert electrical signals into audible sound.
  • a magnetic field 804 from the magnet 810 may interact with the AMR sensor 806 .
  • Lines of the magnetic field 804 associated with the magnet 810 may cross the AMR sensor 806 at different angles depending on how much a head band 812 of the personal audio delivery device 802 is stretched to fit around the head when worn.
  • the AMR sensor 806 may output a signal indicative of an angle at which the lines of the magnetic field 804 cross the AMR sensor 806 . This angle may be representative of theta.
  • the signal provided by the AMR sensor 806 indicative of theta may be received by the processing engine.
  • the processing engine may receive the signal from the Hall and/or AMR sensor before any sound is output by a transducer in the earcup. This way, minimal extraneous magnetic field is produced by current flow through the transducer; such extraneous magnetic fields would otherwise impact measurement of the magnetic field of the magnet by the magnetic sensor.
  • a head size of a head on which the personal audio delivery device is worn may be calculated based on the received signal.
  • the processing engine may calculate H based on the following equation: H=T*tan(theta)
  • H is the distance from a center of the head to the earcup
  • T is a height of the head band of the personal audio delivery device and theta is the angle at which the magnetic field crosses the AMR sensor, which is equivalent to how far the headset is stretched around the head.
  • the processing engine may calculate H based on the following equation: H=sqrt(R^2−T^2)
  • H is the distance from a center of the head to the earcup
  • T is a height of the head band which is known by design of the personal audio delivery device
  • R is a distance between the Hall sensor and earcup which is equivalent to how far the headset is stretched around the head.
  • the head size may be calculated as: head size=2*H
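The two calculations above can be sketched in code. This is a minimal illustration, not part of the patent text; the function names and the numeric values for T, R, and theta are hypothetical, and the formulas follow the geometry of FIG. 6 as described above.

```python
import math

# Hypothetical sketch of the two head-size calculations. T, R, and theta
# follow the variables of FIG. 6; the numeric values are made-up examples.

def head_size_from_angle(T, theta):
    # AMR-sensor path: H = T * tan(theta), head size = 2 * H
    return 2.0 * T * math.tan(theta)

def head_size_from_distance(T, R):
    # Hall-sensor path: H = sqrt(R^2 - T^2), head size = 2 * H
    return 2.0 * math.sqrt(R * R - T * T)

# Example with T = 9 cm and R = 12 cm: H = sqrt(144 - 81) ~ 7.94 cm
print(round(head_size_from_distance(9.0, 12.0), 2))  # 15.87 (cm)
```

Either sensor path yields the same quantity 2*H, so a device could use whichever sensor it carries.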
  • a non-linear transfer function may be identified based on the calculated head size.
  • the non-linear transfer function may characterize how sound is transformed by the individual whose head size was calculated at 704 .
  • FIG. 9 shows an example of the non-linear transfer function 900 for generating the missing audio cues.
  • a horizontal axis 902 may represent a frequency, e.g., in Hz, while a vertical axis 904 may represent a frequency response, e.g., in dB.
  • the non-linear transfer function may characterize how the head transforms sound.
  • the non-linear transfer function may define waveforms indicative of frequency responses of the head at different azimuths of the sound source and a particular elevation of the sound source.
  • waveforms for a given elevation and azimuth may define the frequency response of the head when sound comes from the given elevation and azimuth.
  • regions 906 may represent notches and regions 908 may represent peaks in the frequency response of the head.
  • the non-linear transfer functions may take other forms as well.
  • the non-linear transfer function may describe one or more of a frequency response of the head versus distance for a given azimuth and elevation and/or a frequency response of the head versus velocity for a given azimuth and elevation, among others.
  • the non-linear transfer function may describe a frequency response with respect to a plurality of dimensions including distance, velocity, elevation, and/or azimuth.
  • FIGS. 10A-C illustrate example arrangements associated with determining the non-linear transfer function.
  • the non-linear transfer function may be determined in a variety of ways.
  • FIG. 10A illustrates an example arrangement 1000 for determining a non-linear transfer function via a direct measurement.
  • the direct measurement may be performed during a learning process.
  • a microphone 1002 may be placed at or near the ear canal 1004 of an individual 1006 different from the individual whose head size was calculated at 704 .
  • a sound source 1008 may be moved around the individual 1006 .
  • the sound source 1008 may be moved to a plurality of spatial locations in azimuth, elevation, distance, and/or velocity around the individual, examples which are shown as A, B, and C.
  • a frequency response of the head of the individual 1006 measured by the microphone 1002 for the plurality of spatial locations may be indicative of the non-linear transfer function of the head.
  • the non-linear transfer function may be a plurality of non-linear transfer functions describing a frequency response of the head, e.g., one or more of a frequency response of the head versus azimuth for a given elevation, a frequency response of the head versus azimuth for a given distance, and/or a frequency response of the head versus azimuth for a given velocity.
  • the non-linear transfer function may be associated with a head size of the individual under test in the learning process. The head size may be measured based on a magnetic sensor as described above or via a physical measurement such as a tape measure, among other methods.
  • the direct measurement process may be repeated during a learning process for a plurality of individuals different from the individual whose head size was calculated at 704 .
  • the direct measurements may result in determining a plurality of non-linear transfer functions where each non-linear transfer function is associated with a head size.
  • FIG. 10B illustrates an example arrangement 1050 for determining the non-linear transfer function for the individual whose head size was calculated at 704 .
  • the non-linear transfer function may be based on the plurality of non-linear transfer functions and associated head sizes determined during the learning process.
  • the example arrangement 1050 may include a database 1052 and comparator 1054 .
  • the database 1052 and comparator 1054 may reside on the personal audio delivery device, server, or some other device.
  • the database 1052 may store the plurality of non-linear transfer functions and associated listener characteristics which correspond to the head sizes determined during the learning process.
  • An entry 1056 in the database 1052 may define a respective non-linear transfer function 1058 and associated head size 1060 of the plurality of non-linear transfer functions and associated head sizes determined during the learning process.
  • the database may have a plurality of entries 1:N.
  • the comparator 1054 may be arranged to compare each head size 1060 associated with a respective non-linear transfer function 1058 to a reference listener characteristic 1062 to identify a head size 1060 in the entries 1:N which is closest to the reference head size indicated by the reference listener characteristic 1062 .
  • the reference listener characteristic 1062 may be the head size calculated at step 704 .
  • the comparator 1054 may output a non-linear transfer function 1064 .
  • the non-linear transfer function 1064 may be a non-linear transfer function 1058 associated with a head size 1060 which is closest to the head size indicated by the reference listener characteristic 1062 . Mathematically, this decision may be based on the following equation (where HRTF refers to the non-linear transfer function):
  • Personalized HRTF=HRTF(X_i), where i is chosen to minimize abs(X_i−2*H)
  • N is a number of HRTFs in the plurality of HRTFs
  • X_i is a head size associated with a respective HRTF from the plurality of HRTFs, for i=1 to N
  • 2*H is the calculated head size
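The comparator logic above can be sketched as a nearest-neighbor lookup. The database entries and names here are hypothetical placeholders, not from the patent; real entries would hold measured impulse responses rather than strings.

```python
# Hypothetical sketch of the comparator in FIG. 10B: choose the learned
# HRTF whose associated head size X_i is closest to the calculated 2*H.

def select_hrtf(database, calculated_head_size):
    """database: list of (head_size, hrtf) pairs from the learning process."""
    head_size, hrtf = min(
        database, key=lambda entry: abs(entry[0] - calculated_head_size)
    )
    return hrtf

# Made-up database entries keyed by head size in cm.
db = [(14.0, "HRTF_small"), (16.0, "HRTF_medium"), (18.0, "HRTF_large")]
print(select_hrtf(db, 15.87))  # HRTF_medium
```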
  • the non-linear transfer function 1064 may be the identified non-linear transfer function at step 706 .
  • the direct measurement may not need to be performed on the head of the individual for whom the head size is calculated at step 704 to determine the non-linear transfer function.
  • the non-linear transfer function 1064 is based on the plurality of non-linear transfer functions and head sizes determined during the learning process and stored in the database 1052 , which may be used in real time to determine the non-linear transfer function 1064 .
  • the non-linear transfer function for the individual whose head size was calculated at 704 may be based on a combination of one or more of the plurality of non-linear transfer functions determined during the learning process. For instance, one or more of the plurality of non-linear transfer functions may be weighted to determine the non-linear transfer function for the individual whose head size was calculated at 704 . The weighting may be based on a closeness of match between the calculated head size and a head size associated with a non-linear transfer function of the plurality of non-linear transfer functions. For instance, a closer match may result in a stronger weighting of the non-linear transfer function while a farther match may result in a weaker weighting of the non-linear transfer function. Then, the weighted non-linear transfer functions may be combined, e.g., summed, to form the non-linear transfer function for the individual whose head size was calculated at 704 .
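The weighting scheme described above might be sketched as follows. The inverse-distance weights are one plausible choice only; the text specifies just that closer matches weigh more strongly, so the exact formula is an assumption.

```python
import numpy as np

# Hypothetical sketch: weight each learned HRTF by how close its head
# size is to the calculated head size, then sum the weighted HRTFs.

def blend_hrtfs(head_sizes, hrtfs, calculated_head_size, eps=1e-6):
    """head_sizes: (N,) sizes; hrtfs: (N, L) impulse responses."""
    d = np.abs(np.asarray(head_sizes, dtype=float) - calculated_head_size)
    w = 1.0 / (d + eps)   # closer match -> stronger weight
    w /= w.sum()          # normalize so the weights sum to 1
    return w @ np.asarray(hrtfs, dtype=float)  # weighted sum
```

For a head size exactly midway between two learned sizes, the result is simply the average of the two learned responses.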
  • FIG. 10C illustrates another example arrangement 1080 for determining the non-linear transfer function for the individual whose head size was calculated at 704 without having to perform a direct measurement for the individual.
  • the plurality of non-linear transfer functions and respective head sizes determined during the learning process may be parameterized via numerical analysis methods to define a function 1082 with an input 1084 and output 1086 .
  • the head size calculated at step 704 may be provided as the input 1084 to the function 1082 and the function 1082 may provide as the output 1086 the non-linear transfer function for the individual whose head size was calculated at 704 .
  • the function may take a variety of forms.
  • the function 1082 may take the form of a model fit to each of the non-linear transfer functions associated with head sizes determined during the learning phase using well known data fitting techniques such as neural networks. Then, the head size calculated at 704 may be input into the model and the model may output the non-linear transfer function for the individual whose head size was calculated at 704 .
  • the function may be expressed as: Personalized HRTF=f(x)
  • x is the calculated head size and f is a function parameterized by the plurality of HRTFs determined during the learning process.
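A simple stand-in for the function f is interpolation of the learned responses over head size. The patent mentions neural-network fits, so this linear interpolation is only an illustrative placeholder under that assumption.

```python
import numpy as np

# Hypothetical sketch of the parameterized function f in FIG. 10C:
# linearly interpolate each frequency bin of the learned HRTFs over
# head size. A neural-network fit could replace np.interp here.

def fit_f(head_sizes, hrtfs):
    order = np.argsort(head_sizes)
    xs = np.asarray(head_sizes, dtype=float)[order]
    ys = np.asarray(hrtfs, dtype=float)[order]

    def f(x):
        # interpolate each bin independently over head size
        return np.array([np.interp(x, xs, ys[:, k]) for k in range(ys.shape[1])])

    return f
```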
  • an output signal is generated indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function. Because the sound associated with the output signal cannot properly interact with the head when the personal audio delivery device is worn, audio cues to spatially locate the sound may be missing.
  • the non-linear transfer function may facilitate generating the audio cues to spatially locate the sound for the individual via the calculated head size at 704 .
  • the identified non-linear transfer function may be modulated with a sound signal associated with the sound to form the output signal indicative of one or more audio cues.
  • the one or more audio cues may spatialize the sound at a given spatial location.
  • the sound signal may represent sound such as music or voice which is to be spatialized.
  • the non-linear transfer function may be an impulse response which is convolved with the sound signal in a time domain or multiplied with the sound signal in a frequency domain.
  • the modulation of the sound signal with the non-linear transfer function may result in artificially generating these missing audio cues.
  • audio cues for perceiving elevation, azimuth, distance and/or velocity associated with the sound may be generated.
  • a direction may be associated with given sound to be spatialized.
  • metadata associated with the given sound may define a given azimuth and elevation for which the given sound is to be perceived.
  • a frequency response of the non-linear transfer function associated with the direction may be modulated with a sound signal associated with the given sound to generate one or more audio cues that facilitate spatialization of the given sound.
  • non-linear transfer function may define one or more waveforms indicative of a frequency response of the head when sound comes from the given azimuth and elevation.
  • the one or more waveforms may be modulated with the sound signal associated with the given sound to generate the output signal indicative of the one or more audio cues.
  • the audio cues may enable a user to perceive the given sound coming from the given azimuth and elevation.
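The modulation described above (convolution in the time domain, equivalently multiplication in the frequency domain) might be sketched like this. The impulse responses are toy placeholders, not measured HRTFs, and the function names are hypothetical.

```python
import numpy as np

# Hypothetical sketch: convolve HRTF impulse responses with the sound
# signal to produce an output signal carrying the spatial audio cues.

def spatialize(sound, hrir_left, hrir_right):
    """Return a stereo pair carrying the HRTF-generated audio cues."""
    return np.convolve(sound, hrir_left), np.convolve(sound, hrir_right)

# Check the time/frequency-domain equivalence for one ear with toy data.
sound = np.array([1.0, 0.5, 0.25])
hrir = np.array([0.9, 0.1])
time_domain = np.convolve(sound, hrir)
n = len(time_domain)
freq_domain = np.fft.irfft(np.fft.rfft(sound, n) * np.fft.rfft(hrir, n), n)
print(np.allclose(time_domain, freq_domain))  # True
```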
  • sound associated with the output signal may be output by the personal audio delivery device to facilitate spatial localization of the sound for the person having the head with the calculated head size.
  • the modulated signal may be input into the transducer of the earcup.
  • the transducer may convert the output signal to sound.
  • the audio cues may facilitate spatialization of the sound associated with the output signal for the calculated head size.
  • the transducer may output sound associated with multiple signals where sound associated with each signal is spatialized. For instance, a first signal may be modulated with a first non-linear transfer function and a second signal may be modulated with a second transfer function to generate audio cues for the first and second signal. The modulated first signal and modulated second signal may be input into the transducer. The transducer may output sound such that the sound associated with the first and second signal are each spatialized. Other variations are also possible.
  • references herein to “example” and/or “embodiment” means that a particular feature, structure, or characteristic described in connection with the example and/or embodiment can be included in at least one example and/or embodiment of an invention.
  • the appearances of this phrase in various places in the specification are not necessarily all referring to the same example and/or embodiment, nor are separate or alternative examples and/or embodiments mutually exclusive of other examples and/or embodiments.
  • the example and/or embodiment described herein, explicitly and implicitly understood by one skilled in the art, can be combined with other examples and/or embodiments.
  • At least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.
  • a method comprising: receiving, from a magnetic sensor, a sensor signal indicative of an interaction between a magnetic field of a transducer of a personal audio delivery device and a magnetic sensor mounted on a headband of the personal audio delivery device; calculating a head size of a head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor; based on the head size, identifying a non-linear transfer function which characterizes how sound is transformed via the head with the calculated head size; generating an output signal indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function; and outputting, by the transducer of the personal audio delivery device, the sound associated with the output signal.
  • calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating an angle by which a magnetic field passes through the magnetic sensor.
  • calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating a strength of a magnetic field, wherein the strength is proportional to a distance between the magnetic sensor and the transducer.
  • identifying the non-linear transfer function comprises identifying the non-linear transfer function from a plurality of non-linear transfer functions associated with a respective head size closest to the calculated head size.
  • identifying the non-linear transfer function comprises inputting the calculated head size into a function which outputs the non-linear transfer function based on the calculated head size.
  • One or more non-transitory computer readable media comprising program code stored in memory and executable by a processor, the program code to: receive, from a magnetic sensor, a sensor signal indicative of an interaction between a magnetic field of a transducer of a personal audio delivery device and the magnetic sensor mounted on a headband of the personal audio delivery device; calculate a head size of a head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor; based on the head size, identify a non-linear transfer function which characterizes how sound is transformed via the head with the calculated head size; generate an output signal indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function; and output, by the transducer of the personal audio delivery device, the sound associated with the output signal.
  • the one or more non-transitory machine-readable media of Embodiment 9 or 10, wherein the program code to calculate the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating a strength of a magnetic field, wherein the strength is proportional to a distance between the magnetic sensor and the transducer.
  • a system comprising: a personal audio delivery device comprising a headband, a magnetic sensor mounted on the headband, and a transducer; and computer instructions stored in memory and executable by a processor to perform the functions of: receiving, from the magnetic sensor, a sensor signal indicative of an interaction between a magnetic field of the transducer of the personal audio delivery device and the magnetic sensor mounted on the headband of the personal audio delivery device; calculating a head size of a head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor; based on the head size, identifying a non-linear transfer function which characterizes how sound is transformed via the head with the calculated head size; generating an output signal indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function; and outputting, by the transducer of the personal audio delivery device, the sound associated with the output signal.
  • Embodiment 17 wherein the computer instructions stored in memory and executable by the processor for calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating an angle by which a magnetic field passes through the magnetic sensor.
  • Embodiment 17 or 18, wherein the computer instructions stored in memory and executable by the processor for calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating a strength of a magnetic field, wherein the strength is proportional to a distance between the magnetic sensor and the transducer.


Abstract

A magnetic sensor mounted on a headband of a personal audio delivery device may output a sensor signal indicative of an interaction between a magnetic field of a transducer of the personal audio delivery device and the magnetic sensor. A head size of a head on which the personal audio delivery device is worn is calculated based on the sensor signal from the magnetic sensor. Based on the head size, a non-linear transfer function is identified which characterizes how sound is transformed via the head with the calculated head size. An output signal is generated indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function. The sound associated with the output signal is output by the transducer of the personal audio delivery device.

Description

    RELATED APPLICATIONS
  • This disclosure claims the benefit of priority under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 62/421,380 filed Nov. 14, 2016 entitled “Spatially Ambient Aware Audio Headset”, U.S. Provisional Application No. 62/424,512 filed Nov. 20, 2016 entitled “Head Anatomy Measurement and HRTF Personalization”, U.S. Provisional Application No. 62/468,933 filed Mar. 8, 2017 entitled “System and Method to Capture and Characterize Human Auditory Anatomy Using Mobile Device”, U.S. Provisional Application No. 62/421,285 filed Nov. 13, 2016 entitled “Personalized Audio Reproduction System and Method”, and U.S. Provisional Application No. 62/466,268 filed Mar. 2, 2017 entitled “Method and Protocol for Human Auditory Anatomy Characterization in Real Time”, the contents each of which are herein incorporated by reference in their entireties.
  • This disclosure is also related to U.S. Application No. ______, attorney docket no. 154.2016 0003 ORG US1 filed ______, entitled “Spatially Ambient Aware Personal Audio Delivery Device”, U.S. Application No. ______, attorney docket no. 154.2016 0002 ORG US1 filed ______, entitled “Image and Audio Based Characterization of a Human Auditory System for Personalized Audio Reproduction”, U.S. Application No. ______, attorney docket no. 154.2016 0007 ORG US1 filed ______, entitled “Audio Based Characterization of a Human Auditory System for Personalized Audio Reproduction”, and U.S. Application No. ______, attorney docket no. 154.2016 0008 ORG US1 filed ______, entitled “System and Method to Capture Image of Pinna and Characterize Human Auditory Anatomy using Image of Pinna”, the contents each of which are herein incorporated by reference in their entireties.
  • FIELD OF THE DISCLOSURE
  • The disclosure is related to consumer goods and, more particularly, to a personal audio delivery device such as a headphone arranged to facilitate determining head size of a person wearing the personal audio delivery device based on a magnetic sensor mounted on the personal audio delivery device. The head size may be used to facilitate spatial localization of sound heard by the person while wearing the personal audio delivery device.
  • BACKGROUND
  • A human auditory system includes an outer ear, middle ear, and inner ear. With the outer ear, middle ear, and inner ear, the human auditory system is able to hear sound. For example, a sound source such as a loudspeaker in a room may output sound. A pinna of the outer ear receives the sound, directs the sound to an ear canal of the outer ear, which in turn directs the sound to the middle ear. The middle ear of the human auditory system transfers the sound into fluids of an inner ear for conversion into nerve impulses. A brain then interprets the nerve impulses to hear the sound. Further, the human auditory system is able to perceive the direction where the sound is coming from. The perception of direction of the sound source is based on interactions with human anatomy. The interaction includes the sound reflecting and/or reverberating and diffracting off a head, shoulder and pinna. The interaction generates audio cues which are decoded by the brain to perceive the direction where the sound is coming from.
  • It is now becoming more common to listen to sounds wearing personal audio delivery devices such as headphones, hearables, earbuds, speakers, or hearing aids. The personal audio delivery devices output sound, e.g., music, into the ear canal of the outer ear. For example, a user wears an earcup seated on the pinna which outputs the sound into the ear canal. Alternatively, a bone conduction headset vibrates middle ear bones to conduct the sound to the human auditory system. The personal audio delivery devices accurately reproduce sound. But unlike sound from a sound source, the sound from the personal audio delivery devices does not interact with the human anatomy such that the direction the sound is coming from is accurately perceptible. The seating of the earcup on the pinna prevents the sound from the personal audio delivery device from interacting with the pinna, and bone conduction may bypass the pinna altogether. Audio cues indicative of direction are not generated, and as a result the person is not able to perceive the direction the sound is coming from.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Features, aspects, and advantages of the presently disclosed technology may be better understood with regard to the following description, appended claims, and accompanying drawings where:
  • FIG. 1 is an example visualization of various parameters used for spatial localization of sound;
  • FIG. 2 shows aspects of a human anatomy in spatial localization of sound;
  • FIG. 3 shows an example of an effect of human anatomy on interaural audio cues;
  • FIG. 4 shows an example system for measuring head size;
  • FIGS. 5A and 5B show example arrangements of a processing engine in the example system for measuring head size.
  • FIG. 6 shows variables associated with measuring the head size;
  • FIG. 7 is an example flow chart of functions associated with using head size to personalize audio reproduction;
  • FIG. 8 shows how a magnetic field interacts with an AMR sensor;
  • FIG. 9 shows an example of the non-linear transfer function; and
  • FIGS. 10A-C illustrate example arrangements associated with determining the non-linear transfer function.
  • The drawings are for the purpose of illustrating example embodiments, but it is understood that the embodiments are not limited to the arrangements and instrumentality shown in the drawings.
  • DETAILED DESCRIPTION
  • A sound source may output sound. A direction where the sound comes from may be identified by the human auditory system using one or more audio cues. The audio cues may be sound (e.g., reflections and reverberations) indicative of a spatial location of the sound, e.g., where the sound is coming from. The audio cues may be generated from interactions between the sound, objects in an environment, and human anatomy before reaching the human auditory system. For example, reverberation and reflection from the objects may generate audio cues. Additionally, or alternatively, aspects of the human anatomy such as head shape, head size, shoulder shape, shoulder size, and outer ear (pinna) structure may generate audio cues. Each person may have different human anatomy. In this regard, the audio cues used by one person to spatially localize the sound may be different for another person.
  • FIG. 1 is an example visualization 100 of parameters which facilitate spatially localizing sound output by a sound source 102. One or more parameters may describe a relationship between a position of a listener 104 and the sound source 102. The parameters may include an azimuth 106, elevation 108, and a distance and/or velocity 110/112. The azimuth 106 may be an angle in a horizontal plane between the listener 104 and the sound source 102. The elevation 108 may be an angle in a vertical plane between the listener 104 and the sound source 102. The distance 110 may be a separation between the listener 104 and the sound source 102. The velocity 112 may describe a rate of movement of the sound source 102. Other parameters indicative of location may also be used.
  • FIG. 2 shows aspects of a human anatomy 202-208 used in sound localization. Audio cues may be generated based on the interaction of sound with the human anatomy. The audio cues may be indicative of a spatial location from where sound comes from. The human anatomy which is illustrated includes a torso 202, head 204 with ears 206, and a pinna 208.
  • Reflections of sound from the torso 202 may generate an audio cue indicative of the elevation and distance from where the sound is coming, e.g., the sound source. These reflections are modeled as a torso effect. The overall shape of the head 204, including ear symmetry and the distance D between the ears 206, may generate an audio cue regarding the azimuth and elevation from where the sound is coming. This is modeled as a head effect. Finally, how sound interacts with the shape, size, and structure of the pinna 208 may generate an audio cue regarding the elevation, distance, and velocity from where the sound comes.
  • FIG. 3 shows how the audio cue indicative of azimuth is generated. A person 302 may be located a certain distance away from a sound source 304. The sound source 304 may output sound 306 which is then perceived by the person at a left ear 308 and a right ear 310.
  • An interaural time difference (ITD) represents a difference in arrival time between the two ears 308, 310. Sound generated by the sound source 304, x(t), takes TL amount of time to reach the left ear 308 and TR amount of time to reach the right ear 310. The ITD represents the difference between TL and TR. Similarly, at any time t, the sound pressure level at the left ear 308, XL(t), differs from that experienced at the right ear 310, XR(t). This difference in intensity is represented by an interaural level difference (ILD) audio cue. These audio cues (ITD and ILD) may differ for different shapes and sizes of head. A bigger head, i.e., a larger distance between the left and right ears 308, 310, will generate a larger time and intensity difference than a smaller head.
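  • The dependence of the ITD on head size can be illustrated with Woodworth's spherical-head approximation, ITD ≈ (r/c)(θ + sin θ), where r is the head radius, c is the speed of sound, and θ is the azimuth. This formula is not part of the disclosure above; it is a minimal sketch showing that a larger head yields a larger time difference:

```python
import math

def itd_woodworth(head_radius_m, azimuth_rad, speed_of_sound=343.0):
    # Woodworth's spherical-head approximation of the interaural time
    # difference (in seconds) for a sound source at the given azimuth.
    return (head_radius_m / speed_of_sound) * (azimuth_rad + math.sin(azimuth_rad))

# A bigger head produces a larger time difference for the same azimuth.
small_head = itd_woodworth(0.07, math.pi / 4)  # ~7 cm radius
large_head = itd_woodworth(0.09, math.pi / 4)  # ~9 cm radius
```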
  • The ITD and ILD audio cues may be directly proportional to the azimuth between the listener and the sound source. In this regard, azimuth of the sound source may be perceived. ITD and ILD, however, may be insufficient to further localize the sound source in terms of elevation, distance and velocity of the sound source.
  • Personal audio delivery devices such as headphones, hearables, earbuds, speakers, and hearing aids may output sound directly into the human auditory system. For example, an earcup of a headphone may be placed on the pinna and a transducer in the earcup may output sound into the ear canal. However, the earcup and headphone may cover or partially cover the pinna and head. As a result, spatial localization such as elevation, distance and velocity of the sound source may be impaired. The head and pinna might not interact with such sounds so as to generate certain audio cues to perceive the location of the sound, e.g., which direction it is coming from.
  • In this case, the audio cues may be artificially generated to facilitate spatial localization in terms of elevation, azimuth, distance and/or velocity. A non-linear transfer function, e.g., also referred to as a head related transfer function (HRTF) or simply transfer function, may facilitate generating the audio cues. The non-linear transfer function may characterize how sound is received by a human auditory system based on interaction with the head, torso, shoulder, pinna and other parts of the human anatomy influencing human auditory localization. The non-linear transfer function may be used to artificially generate the audio cues for determining elevation, distance and/or velocity of a sound source, among other cues.
  • Each person may have differences in head shape and size along with differences in features of the pinna and torso. As a result, the non-linear transfer function for one user cannot be used for another user. Such a use would result in audio cues being generated such that a sound source is perceived at a different spatial location from where it is intended to be perceived.
  • Embodiments described herein are directed to a personal audio delivery device arranged to determine head size. The determination of the head size by the personal audio delivery device may facilitate personalization of the non-linear transfer function for generating one or more audio cues for spatial localization of sound. The person may be able to spatially localize sound based on the personalized non-linear transfer function.
  • FIG. 4 illustrates an example system 400 for spatial localization. The system 400 may include the personal audio delivery device 402 and a processing engine 404.
  • The personal audio delivery device 402 may be a headset, hearable, or hearing aid which outputs sound such as voice and music. The personal audio delivery device 402 may have an earcup 406 which is worn on a pinna 408. The pinna 408 may not be visible externally when the earcup 406 is worn, but the pinna 408 is shown as visible for purposes of illustration.
  • The earcup 406 may have one or more transducers 410 and one or more sensors 412. The one or more transducers 410 may be a speaker which outputs sound based on conversion of an electrical signal representative of the sound. The one or more sensors 412 may include a magnetic sensor on a headband 414 of the personal audio delivery device 402. The headband may connect two earcups. The magnetic sensor may take the form of an anisotropic magnetoresistance (AMR) sensor, which changes resistance in an externally applied magnetic field, or a Hall effect transducer, which outputs a varying voltage in response to an externally applied magnetic field. The magnetic sensor may take other forms as well. The magnetic sensor may be positioned at a center of the headband 414 of the personal audio delivery device 402 such that it is equidistant from both earcups.
  • FIGS. 5A and 5B show example arrangements of the processing engine in the example system for spatial localization. The processing engine may process the signals output by the magnetic sensor. The processing engine may take the form of a processor or a server, among other arrangements.
  • FIG. 5A shows an arrangement of a personal audio delivery device 500 with a processing engine in the form of the processor 502. The processor 502 may be a central processing unit (CPU) local to the personal audio delivery device 500 which executes computer instructions stored in storage such as memory to process the signals associated with the one or more magnetic sensors 504 and one or more transducers 506. The processor 502 may be local when the processor 502 is integrated with the personal audio delivery device 500.
  • FIG. 5B shows an arrangement of a personal audio delivery device 510 and a processing engine in the form of a server 512 coupled via a network 514. The server 512 may be a network-based computing system. The server 512 may process the signals associated with the one or more magnetic sensors 504 and one or more transducers 506. The server 512 may be accessible to the personal audio delivery device via the network 514. The network 514 may take the form of a wired or wireless network. The personal audio delivery device 510 may have communication circuitry 516 for communicating signals 518 with the server 512, e.g., via WiFi or Ethernet, to facilitate processing of signals associated with the transducers and/or magnetic sensors.
  • Latency associated with processing the signals from the magnetic sensor may be lower with a local processor than with the server because no delay is incurred communicating with the server. The personal audio delivery device may be powered by a battery. Processing the signals on the local processor also consumes battery power that would otherwise be used by the personal audio delivery device to output sound. However, this power consumption may be minimal if the processing is performed only once or a few times to determine a head size of a user of the personal audio delivery device, as described in further detail below. After that, the head size of the user may not need to be determined again until some indication is received (e.g., the user of the personal audio delivery device has changed). For example, a new user may provide an indication to recalculate head size, which will result in the determination of the head size for the new user. Other variations are also possible.
  • The processing engine may take other forms as well. For example, the processing engine may take the form of the CPU local to the personal audio delivery device and the server. In other words, the processing of the signals may be performed locally by the processor at the personal audio delivery device as well as remotely at the server. Yet other variations are also possible.
  • FIG. 6 shows a head 602 on which a personal audio delivery device 604 is worn and variables associated with determining head size. Theta may be an angle between a center 606 of a headband 608 of the personal audio delivery device 604 and an earcup 610 when the personal audio delivery device is worn. T may be a physical height of the personal audio delivery device 604 at the center of the headband 608, which is known by design of the personal audio delivery device 604. R may be a distance between the center 606 of the headband 608 and the earcup 610. One or more of these variables may be used to determine the head size, which is represented by a variable 2H, where H is a distance between a center of the head 602 and the earcup 610.
  • FIG. 7 is an example flow chart 700 of functions associated with using head size to personalize a non-linear transfer function for a person. These functions may be performed by the example system which includes the personal audio delivery device and processing engine.
  • Briefly, at 702, a sensor signal may be received from a magnetic sensor indicative of an interaction between a magnetic field of a personal audio delivery device and the magnetic sensor. At 704, a head size of a head on which the personal audio delivery device is worn may be calculated based on the received sensor signal. At 706, a non-linear transfer function may be identified based on the calculated head size. The identified non-linear transfer function may characterize how sound is transformed via the head with the calculated head size. At 708, an output signal is generated indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function. At 710, the sound associated with the output signal is output by the personal audio delivery device.
  • An individual may wear a personal audio delivery device. The personal audio delivery device may have an earcup which the individual wears on a pinna.
  • Referring back, at 702, a sensor signal may be received from the magnetic sensor on the headband of the personal audio delivery device. The transducer in the earcup may have a magnet which produces the magnetic field. This magnet may be used by the transducer to output sound. This magnetic field may interact with the magnetic sensor which in turn causes the magnetic sensor to output the sensor signal indicative of the interaction. The magnetic sensor may take the form of a Hall sensor or AMR sensor, among other forms.
  • In the case of the Hall sensor, the sensor signal output may be associated with a distance between the Hall sensor and the earcup. The earcup may have a transducer with a magnet. The magnet produces a magnetic field. A strength of the magnetic field at the Hall sensor may be proportional to a distance to the magnet. In turn, the Hall sensor may output the sensor signal proportional to the strength of the magnetic field of the magnet. The sensor signal may have a higher voltage if the magnetic field at the Hall sensor is stronger. Conversely, the sensor signal may have a lower voltage if the magnetic field at the Hall sensor is weaker. In this regard, the sensor signal may be an indication of R shown in FIG. 6. The sensor signal provided by the Hall sensor indicative of R may be received by the processing engine.
  • In the case of the AMR sensor, the sensor signal output may be indicative of an angle by which the magnetic field passes through the AMR sensor. The sensor signal may take the form of theta shown in FIG. 6. In turn, theta may be indicative of how much the headband is stretched to fit around the head when worn. A higher theta may be indicative of the headband being stretched more to fit around the head while a lower theta may be indicative of the headband being stretched less to fit around the head.
  • FIG. 8 shows an arrangement 800 with a personal audio delivery device 802 and how a magnetic field 804 interacts with the AMR sensor 806. The earcup 808 may have a magnet 810. Typically, the transducer may use the magnet 810 to convert electrical signals into audible sound. A magnetic field 804 from the magnet 810 may interact with the AMR sensor 806. Lines of the magnetic field 804 associated with the magnet 810 may cross the AMR sensor 806 at different angles depending on how much a headband 812 of the personal audio delivery device 802 is stretched to fit around the head when worn. The AMR sensor 806 may output a signal indicative of an angle at which the lines of the magnetic field 804 cross the AMR sensor 806. This angle may be representative of theta. The signal provided by the AMR sensor 806 indicative of theta may be received by the processing engine.
  • The processing engine may receive the signal from the Hall and/or AMR sensor before any sound is output by a transducer in the earcup. This way, minimal current flows through the transducer, so extraneous magnetic fields are not generated. Such extraneous magnetic fields would otherwise impact measurement of the magnetic field by the magnetic sensor.
  • At 704, a head size of a head on which the personal audio delivery device is worn may be calculated based on the received signal.
  • If theta is determined at 702, the processing engine may calculate H based on the following equation:

  • H = T*tan(θ)
  • where H is the distance from a center of the head to the earcup, T is the height of the headband of the personal audio delivery device, and theta is the angle at which the magnetic field crosses the AMR sensor, which is indicative of how far the headband is stretched around the head.
  • If R is determined at 702, the processing engine may calculate H based on the following equation:

  • H = √(R² − T²)
  • where H is the distance from a center of the head to the earcup, T is the height of the headband, which is known by design of the personal audio delivery device, and R is the distance between the Hall sensor and the earcup, which is indicative of how far the headband is stretched around the head.
  • Based on H calculated using the AMR sensor and/or Hall sensor, the head size may be calculated as:

  • Head Size=2*H
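  • The two formulas above can be combined into a short sketch. The variable names follow FIG. 6; the numeric values below are hypothetical and are chosen only so that both sensor paths describe the same geometry:

```python
import math

def head_size_from_theta(T, theta):
    # AMR-sensor path: H = T * tan(theta), head size = 2 * H.
    # T is the headband height, known by design of the device.
    return 2.0 * T * math.tan(theta)

def head_size_from_r(T, R):
    # Hall-sensor path: H = sqrt(R^2 - T^2), head size = 2 * H.
    # R is the distance between the headband center and the earcup.
    return 2.0 * math.sqrt(R * R - T * T)

# Hypothetical geometry: T = 4 cm, H = 7.5 cm, so head size = 15 cm.
T = 0.04
theta = math.atan2(0.075, T)   # angle implied by that geometry
R = math.hypot(T, 0.075)       # headband-center-to-earcup distance
```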
  • At 706, a non-linear transfer function may be identified based on the calculated head size. The non-linear transfer function may characterize how sound is transformed by the individual whose head size was calculated at 704.
  • FIG. 9 shows an example of the non-linear transfer function 900 for generating the missing audio cues. A horizontal axis 902 may represent a frequency, e.g., in Hz, while a vertical axis 904 may represent a frequency response, e.g., in dB. The non-linear transfer function may characterize how the head transforms sound. For example, the non-linear transfer function may define waveforms indicative of frequency responses of the head at different azimuths of the sound source and a particular elevation of the sound source. In this regard, waveforms for a given elevation and azimuth may define the frequency response of the head when sound comes from the given elevation and azimuth. Further, regions 906 may represent notches and regions 908 may represent peaks in the frequency response of the head.
  • The non-linear transfer functions may take other forms as well. For example, the non-linear transfer function may describe one or more of a frequency response of the head versus distance for a given azimuth and elevation and/or a frequency response of the head versus velocity for a given azimuth and elevation, among others. In other cases, the non-linear transfer function may describe a frequency response with respect to a plurality of dimensions including distance, velocity, elevation, and/or azimuth.
  • FIGS. 10A-C illustrate example arrangements associated with determining the non-linear transfer function. The non-linear transfer function may be determined in a variety of ways.
  • FIG. 10A illustrates an example arrangement 1000 for determining a non-linear transfer function via a direct measurement. The direct measurement may be performed during a learning process. A microphone 1002 may be placed at or near the ear canal 1004 of an individual 1006 different from the individual whose head size was calculated at 704. Then, a sound source 1008 may be moved around the individual 1006. The sound source 1008 may be moved to a plurality of spatial locations in azimuth, elevation, distance, and/or velocity around the individual, examples of which are shown as A, B, and C. A frequency response measured by the microphone 1002 for the plurality of spatial locations may be indicative of the non-linear transfer function of the head. In some cases, the non-linear transfer function may be a plurality of non-linear transfer functions describing a frequency response of the head, e.g., one or more of a frequency response of the head versus azimuth for a given elevation, a frequency response of the head versus azimuth for a given distance, and/or a frequency response of the head versus azimuth for a given velocity. The non-linear transfer function may be associated with a head size of the individual under test in the learning process. The head size may be measured based on a magnetic sensor as described above or via a physical measurement such as a tape measure, among other methods.
  • The direct measurement process may be repeated during the learning process for a plurality of individuals different from the individual whose head size was calculated at 704. The direct measurements may result in determining a plurality of non-linear transfer functions, where each non-linear transfer function is associated with a head size.
  • FIG. 10B illustrates an example arrangement 1050 for determining the non-linear transfer function for the individual whose head size was calculated at 704. The non-linear transfer function may be based on the plurality of non-linear transfer functions and associated head sizes determined during the learning process.
  • The example arrangement 1050 may include a database 1052 and comparator 1054. The database 1052 and comparator 1054 may reside on the personal audio delivery device, server, or some other device. The database 1052 may store the plurality of non-linear transfer functions and associated listener characteristics which correspond to the head sizes determined during the learning process. An entry 1056 in the database 1052 may define a respective non-linear transfer function 1058 and associated head size 1060 of the plurality of non-linear transfer functions and associated head sizes determined during the learning process. The database may have a plurality of entries 1:N.
  • The comparator 1054 may be arranged to compare each head size 1060 associated with a respective non-linear transfer function 1058 to a reference listener characteristic 1062 to identify a head size 1060 in the entries 1:N which is closest to the reference head size 1062. The reference listener characteristic 1062 may be the head size calculated at step 704. The comparator 1054 may output a non-linear transfer function 1064. The non-linear transfer function 1064 may be a non-linear transfer function 1058 associated with a head size 1060 which is closest to the head size indicated by the reference listener characteristic 1062. Mathematically, this decision may be based on the following equation (where HRTF refers to the non-linear transfer function):

  • Personalized HRTF = HRTF(Xi), where i is chosen to minimize abs(Xi − 2*H)
  • where i = 1:N, N is the number of HRTFs in the plurality of HRTFs, Xi is the head size associated with a respective HRTF from the plurality of HRTFs, and 2*H is the calculated head size.
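  • The selection performed by the comparator 1054 can be sketched as a nearest-neighbor lookup, minimizing abs(Xi − 2*H) over the stored entries. The database contents below are hypothetical placeholders, not measured HRTFs:

```python
def select_hrtf(entries, calculated_head_size):
    # Pick the stored HRTF whose associated head size X_i minimizes
    # abs(X_i - 2*H). `entries` is a list of (head_size, hrtf) pairs,
    # standing in for the database 1052 of FIG. 10B.
    head_size, hrtf = min(entries, key=lambda e: abs(e[0] - calculated_head_size))
    return hrtf

# Hypothetical database of three measured individuals (head sizes in meters).
database = [(0.14, "hrtf_small"), (0.155, "hrtf_medium"), (0.17, "hrtf_large")]
chosen = select_hrtf(database, 0.15)  # the 0.155 entry is closest
```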
  • The non-linear transfer function 1064 may be the identified non-linear transfer function at step 706. In this regard, the direct measurement may not need to be performed on the head of the individual for whom the head size is calculated at step 704 to determine the non-linear transfer function. Instead, the non-linear transfer function 1064 is based on the plurality of non-linear transfer functions and head sizes determined during the learning process and stored in the database 1052, and is determined in real time.
  • In some examples, the non-linear transfer function for the individual whose head size was calculated at 704 may be based on a combination of one or more of the plurality of non-linear transfer functions determined during the learning process. For instance, one or more of the plurality of non-linear transfer functions may be weighted to determine the non-linear transfer function for the individual whose head size was calculated at 704. The weighting may be based on a closeness of match between the calculated head size and a head size associated with a non-linear transfer function of the plurality of non-linear transfer functions. For instance, a closer match may result in a stronger weighting of the non-linear transfer function while a farther match may result in a weaker weighting of the non-linear transfer function. Then, the weighted non-linear transfer functions may be combined, e.g., summed, to form the non-linear transfer function for the individual whose head size was calculated at 704.
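  • One possible weighting scheme is inverse-distance weighting, sketched below; the description above leaves the exact weighting open, so this particular choice is an assumption for illustration. Each entry pairs a head size with a hypothetical magnitude response:

```python
def blend_hrtfs(entries, calculated_head_size, eps=1e-6):
    # Combine stored HRTF responses with weights that grow as the
    # associated head size approaches the calculated head size
    # (inverse-distance weighting, normalized to sum to 1).
    weights = [1.0 / (abs(size - calculated_head_size) + eps)
               for size, _ in entries]
    total = sum(weights)
    n_bins = len(entries[0][1])
    blended = [0.0] * n_bins
    for w, (_, response) in zip(weights, entries):
        for k in range(n_bins):
            blended[k] += (w / total) * response[k]
    return blended

# Two hypothetical 3-bin magnitude responses.
db = [(0.14, [1.0, 2.0, 3.0]), (0.16, [3.0, 2.0, 1.0])]
mid = blend_hrtfs(db, 0.15)  # equidistant entries blend to a near-average
```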
  • FIG. 10C illustrates another example arrangement 1080 for determining the non-linear transfer function for the individual whose head size was calculated at 704 without having to perform a direct measurement for the individual. The plurality of non-linear transfer functions and respective head sizes determined during the learning process may be parameterized via numerical analysis methods to define a function 1082 with an input 1084 and output 1086. Then, the head size calculated at step 704 may be provided as the input 1084 to the function 1082 and the function 1082 may provide as the output 1086 the non-linear transfer function for the individual whose head size was calculated at 704. The function may take a variety of forms.
  • For instance, the function 1082 may take the form of a model fit to each of the non-linear transfer functions associated with head sizes determined during the learning phase using well-known data fitting techniques such as neural networks. Then, the head size calculated at 704 may be input into the model and the model may output the non-linear transfer function for the individual whose head size was calculated at 704. Mathematically, the function may be expressed as:

  • HRTFP = f(X)
  • where X is the calculated head size and f is a function derived from the plurality of HRTFs.
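  • A simple stand-in for the fitted function f is a per-frequency-bin linear fit, sketched below with hypothetical learning-phase data; the description above contemplates richer models such as neural networks, so this is only an illustrative parameterization:

```python
import numpy as np

# Hypothetical learning-phase data: head sizes and 4-bin magnitude responses.
head_sizes = np.array([0.13, 0.15, 0.17])
responses = np.array([[0.0, 1.0, 2.0, 3.0],
                      [0.5, 1.5, 2.5, 3.5],
                      [1.0, 2.0, 3.0, 4.0]])

# Fit a line per frequency bin: response_k ≈ a_k * X + b_k.
coeffs = [np.polyfit(head_sizes, responses[:, k], 1)
          for k in range(responses.shape[1])]

def hrtf_p(x):
    # HRTF_P = f(X): predict a response for calculated head size x.
    return np.array([np.polyval(c, x) for c in coeffs])

predicted = hrtf_p(0.16)  # interpolates between the 0.15 and 0.17 entries
```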
  • At 708, an output signal is generated indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function. Because the sound associated with the output signal cannot properly interact with the head when the personal audio delivery device is worn, audio cues to spatially locate the sound may be missing. The non-linear transfer function may facilitate generating the audio cues to spatially locate the sound for the individual via the head size calculated at 704. For example, the identified non-linear transfer function may be modulated with a sound signal associated with the sound to form the output signal indicative of one or more audio cues. The one or more audio cues may spatialize the sound at a given spatial location. The sound signal may represent sound such as music or voice which is to be spatialized. The non-linear transfer function may be an impulse response which is convolved with the sound signal in a time domain or multiplied with the sound signal in a frequency domain. The modulation of the sound signal with the non-linear transfer function may result in artificially generating these missing audio cues. In particular, audio cues for perceiving the elevation, azimuth, distance, and/or velocity associated with the sound may be generated.
  • The modulation process may now be described in more detail for spatializing sound. A direction may be associated with a given sound to be spatialized. For example, metadata associated with the given sound may define a given azimuth and elevation at which the given sound is to be perceived. A frequency response of the non-linear transfer function associated with the direction may be modulated with a sound signal associated with the given sound to generate one or more audio cues that facilitate spatialization of the given sound. For example, the non-linear transfer function may define one or more waveforms indicative of a frequency response of the head when sound comes from the given azimuth and elevation. The one or more waveforms may be modulated with the sound signal associated with the given sound to generate the output signal indicative of the one or more audio cues. The audio cues may enable a user to perceive the given sound coming from the given azimuth and elevation.
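  • The time-domain modulation described above can be sketched as a convolution of the sound signal with an impulse response (HRIR) for one direction. The three-tap impulse response below is a hypothetical placeholder, not a measured response:

```python
import numpy as np

def spatialize(sound, hrir):
    # Modulate a sound signal with the non-linear transfer function by
    # convolving it with the impulse response in the time domain.
    return np.convolve(sound, hrir)

# Hypothetical 3-tap impulse response for one (azimuth, elevation) direction.
hrir = np.array([0.6, 0.3, 0.1])
sound = np.array([1.0, 0.0, 0.0, 0.0])  # unit impulse as a test input
out = spatialize(sound, hrir)           # output reproduces the HRIR taps
```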
  • At 710, sound associated with the output signal may be output by the personal audio delivery device to facilitate spatial localization of the sound for the person having the head with the calculated head size. For instance, the modulated signal may be input into the transducer of the earcup. The transducer may convert the output signal to sound. The audio cues may facilitate spatialization of the sound associated with the output signal for the calculated head size.
  • In some examples, the transducer may output sound associated with multiple signals, where the sound associated with each signal is spatialized. For instance, a first signal may be modulated with a first non-linear transfer function and a second signal may be modulated with a second non-linear transfer function to generate audio cues for the first and second signals. The modulated first signal and modulated second signal may be input into the transducer. The transducer may output sound such that the sound associated with each of the first and second signals is spatialized. Other variations are also possible.
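  • Spatializing multiple signals, as described above, amounts to convolving each signal with its own impulse response and summing the results before the transducer. The signals and impulse responses below are hypothetical placeholders:

```python
import numpy as np

def mix_spatialized(signals_and_hrirs):
    # Spatialize each signal with its own impulse response, then sum
    # the results into one output for the transducer.
    outs = [np.convolve(sig, hrir) for sig, hrir in signals_and_hrirs]
    n = max(len(o) for o in outs)
    mixed = np.zeros(n)
    for o in outs:
        mixed[:len(o)] += o
    return mixed

# Two hypothetical sources with different direction-specific HRIRs.
s1 = np.array([1.0, 0.0])
s2 = np.array([0.0, 1.0])
mixed = mix_spatialized([(s1, np.array([0.5, 0.25])),
                         (s2, np.array([0.8, 0.1]))])
```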
  • The description above discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of the firmware, hardware, and/or software aspects or components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only way(s) to implement such systems, methods, apparatus, and/or articles of manufacture.
  • Additionally, references herein to “example” and/or “embodiment” means that a particular feature, structure, or characteristic described in connection with the example and/or embodiment can be included in at least one example and/or embodiment of an invention. The appearances of this phrase in various places in the specification are not necessarily all referring to the same example and/or embodiment, nor are separate or alternative examples and/or embodiments mutually exclusive of other examples and/or embodiments. As such, the example and/or embodiment described herein, explicitly and implicitly understood by one skilled in the art, can be combined with other examples and/or embodiments.
  • The specification is presented largely in terms of illustrative environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it is understood by those skilled in the art that certain embodiments of the present disclosure can be practiced without certain, specific details. In other instances, well known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the embodiments. Accordingly, the scope of the present disclosure is defined by the appended claims rather than the foregoing description of embodiments.
  • When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.
  • Example Embodiments
  • Example embodiments include:
  • Embodiment 1
  • A method comprising: receiving, from a magnetic sensor, a sensor signal indicative of an interaction between a magnetic field of a transducer of a personal audio delivery device and a magnetic sensor mounted on a headband of the personal audio delivery device; calculating a head size of a head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor; based on the head size, identifying a non-linear transfer function which characterizes how sound is transformed via the head with the calculated head size; generating an output signal indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function; and outputting, by the transducer of the personal audio delivery device, the sound associated with the output signal.
  • Embodiment 2
  • The method of Embodiment 1, wherein calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating an angle by which a magnetic field passes through the magnetic sensor.
  • Embodiment 3
  • The method of Embodiment 1 or 2 wherein calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating a strength of a magnetic field, wherein the strength is proportional to a distance between the magnetic sensor and the transducer.
  • Embodiment 4
  • The method of any of Embodiments 1-3, wherein identifying the non-linear transfer function comprises identifying the non-linear transfer function from a plurality of non-linear transfer functions associated with a respective head size closest to the calculated head size.
  • Embodiment 5
  • The method of any of Embodiments 1-4 wherein identifying the non-linear transfer function comprises inputting the calculated head size into a function which outputs the non-linear transfer function based on the calculated head size.
  • Embodiment 6
  • The method of any of Embodiments 1-5, wherein calculating the head size of the head on which the personal audio delivery device is worn is further based on a height of the personal audio delivery device.
  • Embodiment 7
  • The method of any of Embodiments 1-6, wherein the received sensor signal indicative of an interaction between the magnetic field of the transducer of the personal audio delivery device and the magnetic sensor mounted on the headband of the personal audio delivery device is based on the magnetic field of a magnet in the transducer.
  • Embodiment 8
  • The method of any of Embodiments 1-7, wherein the received sensor signal indicative of the interaction between the magnetic field of the transducer of the personal audio delivery device and the magnetic sensor is received before the sound associated with the output signal is output by the personal audio delivery device.
  • Embodiment 9
  • One or more non-transitory computer readable media comprising program code stored in memory and executable by a processor, the program code to: receive, from a magnetic sensor, a sensor signal indicative of an interaction between a magnetic field of a transducer of a personal audio delivery device and a magnetic sensor mounted on a headband of a personal audio delivery device; calculate a head size of a head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor; based on the head size, identify a non-linear transfer function which characterizes how sound is transformed via the head with the calculated head size; generate an output signal indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function; and output, by the transducer of the personal audio delivery device, the sound associated with the output signal.
  • Embodiment 10
  • The one or more non-transitory machine-readable media of Embodiment 9, wherein the program code to calculate the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating an angle by which a magnetic field passes through the magnetic sensor.
  • Embodiment 11
  • The one or more non-transitory machine-readable media of Embodiment 9 or 10, wherein the program code to calculate the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating a strength of a magnetic field, wherein the strength is proportional to a distance between the magnetic sensor and the transducer.
  • Embodiment 12
  • The one or more non-transitory machine-readable media of any of Embodiments 9-11, wherein the program code to identify the non-linear transfer function comprises identifying the non-linear transfer function from a plurality of non-linear transfer functions associated with a respective head size closest to the calculated head size.
  • Embodiment 13
  • The one or more non-transitory machine-readable media of any of Embodiments 9-12, wherein the program code to identify the non-linear transfer function comprises inputting the calculated head size into a function which outputs the non-linear transfer function based on the calculated head size.
  • Embodiment 14
  • The one or more non-transitory machine-readable media of any of Embodiments 9-13, wherein the program code to calculate the head size of the head on which the personal audio delivery device is worn is further based on a height of the personal audio delivery device.
  • Embodiment 15
  • The one or more non-transitory machine-readable media of any of Embodiments 9-14, wherein the received sensor signal indicative of an interaction between the magnetic field of the transducer of the personal audio delivery device and the magnetic sensor mounted on the headband of the personal audio delivery device is based on the magnetic field of a magnet in the transducer.
  • Embodiment 16
  • The one or more non-transitory machine-readable media of any of Embodiments 9-15, wherein the received sensor signal indicative of the interaction between the magnetic field of the transducer of the personal audio delivery device and the magnetic sensor is received before the sound associated with the output signal is output by the personal audio delivery device.
  • Embodiment 17
  • A system comprising: a personal audio delivery device comprising a headband, a magnetic sensor mounted on the headband, and a transducer; and computer instructions stored in memory and executable by a processor to perform the functions of: receiving, from the magnetic sensor, a sensor signal indicative of an interaction between a magnetic field of the transducer of the personal audio delivery device and the magnetic sensor mounted on the headband of the personal audio delivery device; calculating a head size of a head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor; based on the head size, identifying a non-linear transfer function which characterizes how sound is transformed via the head with the calculated head size; generating an output signal indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function; and outputting, by the transducer of the personal audio delivery device, the sound associated with the output signal.
  • Embodiment 18
  • The system of Embodiment 17, wherein the computer instructions stored in memory and executable by the processor for calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating an angle by which a magnetic field passes through the magnetic sensor.
  • Embodiment 19
  • The system of Embodiment 17 or 18, wherein the computer instructions stored in memory and executable by the processor for calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating a strength of a magnetic field, wherein the strength is proportional to a distance between the magnetic sensor and the transducer.
  • Embodiment 20
  • The system of any of Embodiments 17-19, wherein calculating the head size of the head on which the personal audio delivery device is worn is further based on a height of the personal audio delivery device.
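The head-size calculation described in the embodiments above can be illustrated with a short sketch. The inverse-cube falloff model, the calibration constant `k`, and the right-triangle geometry relating sensor-to-transducer distance and device height are illustrative assumptions, not details given in the patent.

```python
# Illustrative sketch of Embodiments 3 and 6: recover the sensor-to-
# transducer distance from a sensed magnetic field strength, then
# combine it with the device height to approximate head size.
# The dipole model B = k / r**3 and the geometry below are assumptions.

def estimate_distance(field_strength: float, k: float) -> float:
    """Invert an assumed point-dipole falloff B = k / r**3 to get r."""
    if field_strength <= 0:
        raise ValueError("field strength must be positive")
    return (k / field_strength) ** (1.0 / 3.0)

def estimate_head_size(field_strength: float, k: float,
                       device_height: float) -> float:
    """Treat the sensed distance as the hypotenuse from the headband
    sensor down to the ear-cup transducer (hypothetical geometry)."""
    r = estimate_distance(field_strength, k)
    half_width_sq = max(r ** 2 - device_height ** 2, 0.0)
    return 2.0 * half_width_sq ** 0.5
```

With `k = 8.0` and a sensed strength of `1.0`, the inferred distance is 2.0 in the same units, showing how a stronger field maps to a shorter distance under this model.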

Claims (20)

We claim:
1. A method comprising:
receiving, from a magnetic sensor, a sensor signal indicative of an interaction between a magnetic field of a transducer of a personal audio delivery device and a magnetic sensor mounted on a headband of the personal audio delivery device;
calculating a head size of a head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor;
based on the head size, identifying a non-linear transfer function which characterizes how sound is transformed via the head with the calculated head size;
generating an output signal indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function; and
outputting, by the transducer of the personal audio delivery device, the sound associated with the output signal.
2. The method of claim 1, wherein calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating an angle by which a magnetic field passes through the magnetic sensor.
3. The method of claim 1, wherein calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating a strength of a magnetic field, wherein the strength is proportional to a distance between the magnetic sensor and the transducer.
4. The method of claim 1, wherein identifying the non-linear transfer function comprises identifying the non-linear transfer function from a plurality of non-linear transfer functions associated with a respective head size closest to the calculated head size.
5. The method of claim 1, wherein identifying the non-linear transfer function comprises inputting the calculated head size into a function which outputs the non-linear transfer function based on the calculated head size.
6. The method of claim 1, wherein calculating the head size of the head on which the personal audio delivery device is worn is further based on a height of the personal audio delivery device.
7. The method of claim 1, wherein the received sensor signal indicative of an interaction between the magnetic field of the transducer of the personal audio delivery device and the magnetic sensor mounted on the headband of the personal audio delivery device is based on the magnetic field of a magnet in the transducer.
8. The method of claim 1, wherein the received sensor signal indicative of the interaction between the magnetic field of the transducer of the personal audio delivery device and the magnetic sensor is received before the sound associated with the output signal is output by the personal audio delivery device.
9. One or more non-transitory computer readable media comprising program code stored in memory and executable by a processor, the program code to:
receive, from a magnetic sensor, a sensor signal indicative of an interaction between a magnetic field of a transducer of a personal audio delivery device and a magnetic sensor mounted on a headband of a personal audio delivery device;
calculate a head size of a head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor;
based on the head size, identify a non-linear transfer function which characterizes how sound is transformed via the head with the calculated head size;
generate an output signal indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function; and
output, by the transducer of the personal audio delivery device, the sound associated with the output signal.
10. The one or more non-transitory machine-readable media of claim 9, wherein the program code to calculate the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating an angle by which a magnetic field passes through the magnetic sensor.
11. The one or more non-transitory machine-readable media of claim 9, wherein the program code to calculate the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based the received sensor signal indicating a strength of a magnetic field, wherein the strength is proportional to a distance between the magnetic sensor and the transducer.
12. The one or more non-transitory machine-readable media of claim 9, wherein the program code to identify the non-linear transfer function comprises identifying the non-linear transfer function from a plurality of non-linear transfer functions associated with a respective head size closest to the calculated head size.
13. The one or more non-transitory machine-readable media of claim 9, wherein the program code to identify the non-linear transfer function comprises inputting the calculated head size into a function which outputs the non-linear transfer function based on the calculated head size.
14. The one or more non-transitory machine-readable media of claim 9, wherein the program code to calculate the head size of the head on which the personal audio delivery device is worn is further based on a height of the personal audio delivery device.
15. The one or more non-transitory machine-readable media of claim 9, wherein the received sensor signal indicative of an interaction between the magnetic field of the transducer of the personal audio delivery device and the magnetic sensor mounted on the headband of the personal audio delivery device is based on the magnetic field of a magnet in the transducer.
16. The one or more non-transitory machine-readable media of claim 9, wherein the received sensor signal indicative of the interaction between the magnetic field of the transducer of the personal audio delivery device and the magnetic sensor is received before the sound associated with the output signal is output by the personal audio delivery device.
17. A system comprising:
a personal audio delivery device comprising a headband, a magnetic sensor mounted on the headband, and a transducer; and
computer instructions stored in memory and executable by a processor to perform the functions of:
receiving, from the magnetic sensor, a sensor signal indicative of an interaction between a magnetic field of the transducer of the personal audio delivery device and the magnetic sensor mounted on the headband of the personal audio delivery device;
calculating a head size of a head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor;
based on the head size, identifying a non-linear transfer function which characterizes how sound is transformed via the head with the calculated head size;
generating an output signal indicative of one or more audio cues to facilitate spatialization of sound associated with the output signal based on the identified non-linear transfer function; and
outputting, by the transducer of the personal audio delivery device, the sound associated with the output signal.
18. The system of claim 17, wherein the computer instructions stored in memory and executable by the processor for calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating an angle by which a magnetic field passes through the magnetic sensor.
19. The system of claim 17, wherein the computer instructions stored in memory and executable by the processor for calculating the head size of the head on which the personal audio delivery device is worn based on the received sensor signal from the magnetic sensor comprises calculating the head size based on the received sensor signal indicating a strength of a magnetic field, wherein the strength is proportional to a distance between the magnetic sensor and the transducer.
20. The system of claim 17, wherein calculating the head size of the head on which the personal audio delivery device is worn is further based on a height of the personal audio delivery device.
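Claims 4, 12, and the corresponding embodiments describe picking the non-linear transfer function associated with the stored head size closest to the calculated one. A minimal sketch of that nearest-entry lookup follows; the table contents, head-size units (cm), and function labels are hypothetical placeholders.

```python
# Illustrative sketch of nearest-head-size selection (claims 4 and 12):
# choose the transfer function whose stored head size is closest to the
# measured value. Table keys and labels are made-up examples.

HRTF_TABLE = {
    14.0: "hrtf_small",   # head sizes in cm (assumed units)
    15.5: "hrtf_medium",
    17.0: "hrtf_large",
}

def select_hrtf(calculated_head_size: float, table=HRTF_TABLE) -> str:
    """Return the transfer function for the closest stored head size."""
    closest = min(table, key=lambda size: abs(size - calculated_head_size))
    return table[closest]
```

For example, a calculated head size of 15.0 cm falls between the 14.0 and 15.5 cm entries and resolves to the 15.5 cm entry, since 0.5 cm is the smaller gap. Claims 5 and 13 describe the alternative of a continuous function mapping head size directly to a transfer function instead of a table lookup.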
US15/811,386 2016-11-13 2017-11-13 Method, system and apparatus for measuring head size using a magnetic sensor mounted on a personal audio delivery device Active US9992603B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/811,386 US9992603B1 (en) 2016-11-13 2017-11-13 Method, system and apparatus for measuring head size using a magnetic sensor mounted on a personal audio delivery device

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201662421285P 2016-11-13 2016-11-13
US201662421380P 2016-11-14 2016-11-14
US201662424512P 2016-11-20 2016-11-20
US201762466268P 2017-03-02 2017-03-02
US201762468933P 2017-03-08 2017-03-08
US15/811,386 US9992603B1 (en) 2016-11-13 2017-11-13 Method, system and apparatus for measuring head size using a magnetic sensor mounted on a personal audio delivery device

Publications (2)

Publication Number Publication Date
US20180139567A1 true US20180139567A1 (en) 2018-05-17
US9992603B1 US9992603B1 (en) 2018-06-05

Family

ID=62106984

Family Applications (6)

Application Number Title Priority Date Filing Date
US15/811,642 Active US10104491B2 (en) 2016-11-13 2017-11-13 Audio based characterization of a human auditory system for personalized audio reproduction
US15/811,386 Active US9992603B1 (en) 2016-11-13 2017-11-13 Method, system and apparatus for measuring head size using a magnetic sensor mounted on a personal audio delivery device
US15/811,295 Active US10313822B2 (en) 2016-11-13 2017-11-13 Image and audio based characterization of a human auditory system for personalized audio reproduction
US15/811,441 Active 2037-12-09 US10433095B2 (en) 2016-11-13 2017-11-13 System and method to capture image of pinna and characterize human auditory anatomy using image of pinna
US15/811,392 Active 2037-12-14 US10362432B2 (en) 2016-11-13 2017-11-13 Spatially ambient aware personal audio delivery device
US16/542,930 Active US10659908B2 (en) 2016-11-13 2019-08-16 System and method to capture image of pinna and characterize human auditory anatomy using image of pinna

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/811,642 Active US10104491B2 (en) 2016-11-13 2017-11-13 Audio based characterization of a human auditory system for personalized audio reproduction

Family Applications After (4)

Application Number Title Priority Date Filing Date
US15/811,295 Active US10313822B2 (en) 2016-11-13 2017-11-13 Image and audio based characterization of a human auditory system for personalized audio reproduction
US15/811,441 Active 2037-12-09 US10433095B2 (en) 2016-11-13 2017-11-13 System and method to capture image of pinna and characterize human auditory anatomy using image of pinna
US15/811,392 Active 2037-12-14 US10362432B2 (en) 2016-11-13 2017-11-13 Spatially ambient aware personal audio delivery device
US16/542,930 Active US10659908B2 (en) 2016-11-13 2019-08-16 System and method to capture image of pinna and characterize human auditory anatomy using image of pinna

Country Status (4)

Country Link
US (6) US10104491B2 (en)
EP (2) EP3539304A4 (en)
JP (2) JP2019536395A (en)
WO (2) WO2018089952A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230157585A1 (en) * 2021-11-22 2023-05-25 Sensimetrics Corporation Spatial Hearing Measurement System
WO2023147172A3 (en) * 2022-01-31 2023-08-31 Bose Corporation Audio device with hall effect sensor proximity detection and independent coupling

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG10201800147XA (en) 2018-01-05 2019-08-27 Creative Tech Ltd A system and a processing method for customizing audio experience
WO2018041359A1 (en) * 2016-09-01 2018-03-08 Universiteit Antwerpen Method of determining a personalized head-related transfer function and interaural time difference function, and computer program product for performing same
US10507137B2 (en) * 2017-01-17 2019-12-17 Karl Allen Dierenbach Tactile interface system
US12010494B1 (en) * 2018-09-27 2024-06-11 Apple Inc. Audio system to determine spatial audio filter based on user-specific acoustic transfer function
US10880669B2 (en) * 2018-09-28 2020-12-29 EmbodyVR, Inc. Binaural sound source localization
CN116801179A (en) 2018-10-10 2023-09-22 索尼集团公司 Information processing apparatus, information processing method and computer-accessible medium
US11166115B2 (en) 2018-10-18 2021-11-02 Gn Hearing A/S Device and method for hearing device customization
US11158154B2 (en) * 2018-10-24 2021-10-26 Igt Gaming system and method providing optimized audio output
JP7206027B2 (en) * 2019-04-03 2023-01-17 アルパイン株式会社 Head-related transfer function learning device and head-related transfer function reasoning device
US11863959B2 (en) 2019-04-08 2024-01-02 Harman International Industries, Incorporated Personalized three-dimensional audio
CN109905831B (en) * 2019-04-15 2021-01-15 南京影风智能科技有限公司 Stereo auxiliary audio equipment
US10743128B1 (en) * 2019-06-10 2020-08-11 Genelec Oy System and method for generating head-related transfer function
AU2020203290B2 (en) * 2019-06-10 2022-03-03 Genelec Oy System and method for generating head-related transfer function
US20220264242A1 (en) * 2019-08-02 2022-08-18 Sony Group Corporation Audio output apparatus and audio output system using same
US10812929B1 (en) * 2019-08-28 2020-10-20 Facebook Technologies, Llc Inferring pinnae information via beam forming to produce individualized spatial audio
US10823960B1 (en) * 2019-09-04 2020-11-03 Facebook Technologies, Llc Personalized equalization of audio output using machine learning
EP4027633A4 (en) * 2019-09-06 2022-10-26 Sony Group Corporation INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
JP7276472B2 (en) * 2019-09-09 2023-05-18 日本電信電話株式会社 Sound collection method
CN110767028B (en) * 2019-11-11 2022-02-01 中国人民解放军第四军医大学 Pilot space hearing ability training system
CN111050244A (en) * 2019-12-11 2020-04-21 佳禾智能科技股份有限公司 Ambient sound monitoring method for headphones, electronic device, and computer-readable storage medium
US10966043B1 (en) * 2020-04-01 2021-03-30 Facebook Technologies, Llc Head-related transfer function determination using cartilage conduction
CN111818441B (en) * 2020-07-07 2022-01-11 Oppo(重庆)智能科技有限公司 Sound effect realization method and device, storage medium and electronic equipment
US11778408B2 (en) 2021-01-26 2023-10-03 EmbodyVR, Inc. System and method to virtually mix and audition audio content for vehicles
KR102654283B1 (en) * 2021-11-26 2024-04-04 대전보건대학교 산학협력단 Ear scanner
JP2024171502A (en) * 2023-05-30 2024-12-12 株式会社Jvcケンウッド Spatial audio processing device and spatial audio processing method
WO2025052579A1 (en) * 2023-09-06 2025-03-13 日本電信電話株式会社 Setting device, evaluation device, methods therefor, and program

Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3045051B2 (en) * 1995-08-17 2000-05-22 ソニー株式会社 Headphone equipment
US6996244B1 (en) 1998-08-06 2006-02-07 Vulcan Patents Llc Estimation of head-related transfer functions for spatial sound representative
JP4226142B2 (en) * 1999-05-13 2009-02-18 三菱電機株式会社 Sound playback device
IL141822A (en) 2001-03-05 2007-02-11 Haim Levy Method and system for simulating a 3d sound environment
US20030035551A1 (en) 2001-08-20 2003-02-20 Light John J. Ambient-aware headset
JP3521900B2 (en) * 2002-02-04 2004-04-26 ヤマハ株式会社 Virtual speaker amplifier
WO2004040502A1 (en) 2002-10-31 2004-05-13 Korea Institute Of Science And Technology Image processing method for removing glasses from color facial images
US7430300B2 (en) 2002-11-18 2008-09-30 Digisenz Llc Sound production systems and methods for providing sound inside a headgear unit
KR20060059866A (en) * 2003-09-08 2006-06-02 마쯔시다덴기산교 가부시키가이샤 Sound control device design tool and sound control device
US20060013409A1 (en) * 2004-07-16 2006-01-19 Sensimetrics Corporation Microphone-array processing to generate directional cues in an audio signal
US8401212B2 (en) * 2007-10-12 2013-03-19 Earlens Corporation Multifunction system and method for integrated hearing and communication with noise cancellation and feedback management
JP2006203850A (en) * 2004-12-24 2006-08-03 Matsushita Electric Ind Co Ltd Sound image localization device
US8027481B2 (en) 2006-11-06 2011-09-27 Terry Beard Personal hearing control system and method
US8050444B2 (en) 2007-01-19 2011-11-01 Dale Trenton Smith Adjustable mechanism for improving headset comfort
JP2009105559A (en) * 2007-10-22 2009-05-14 Nec Saitama Ltd Method of detecting and processing object to be recognized from taken image, and portable electronic device with camera
US8489371B2 (en) 2008-02-29 2013-07-16 France Telecom Method and device for determining transfer functions of the HRTF type
US8155340B2 (en) * 2008-07-24 2012-04-10 Qualcomm Incorporated Method and apparatus for rendering ambient signals
US20100215198A1 (en) 2009-02-23 2010-08-26 Ngia Lester S H Headset assembly with ambient sound control
EP2362678B1 (en) 2010-02-24 2017-07-26 GN Audio A/S A headset system with microphone for ambient sounds
US20120183161A1 (en) 2010-09-03 2012-07-19 Sony Ericsson Mobile Communications Ab Determining individualized head-related transfer functions
WO2012164346A1 (en) * 2011-05-27 2012-12-06 Sony Ericsson Mobile Communications Ab Head-related transfer function (hrtf) selection or adaptation based on head size
US8787584B2 (en) 2011-06-24 2014-07-22 Sony Corporation Audio metrics for head-related transfer function (HRTF) selection or adaptation
US9030545B2 (en) 2011-12-30 2015-05-12 GNR Resound A/S Systems and methods for determining head related transfer functions
JP2013150190A (en) * 2012-01-20 2013-08-01 Nec Casio Mobile Communications Ltd Information terminal device, method for securing security and computer program
US20130279724A1 (en) 2012-04-19 2013-10-24 Sony Computer Entertainment Inc. Auto detection of headphone orientation
JP2014075753A (en) * 2012-10-05 2014-04-24 Nippon Hoso Kyokai <Nhk> Acoustic quality estimation device, acoustic quality estimation method and acoustic quality estimation program
US20140147099A1 (en) 2012-11-29 2014-05-29 Stephen Chase Video headphones platform methods, apparatuses and media
US9270244B2 (en) * 2013-03-13 2016-02-23 Personics Holdings, Llc System and method to detect close voice sources and automatically enhance situation awareness
US9426589B2 (en) 2013-07-04 2016-08-23 Gn Resound A/S Determination of individual HRTFs
KR101496659B1 (en) 2013-07-16 2015-02-27 유한회사 청텍 earphone having subminiature camera module
US9271077B2 (en) 2013-12-17 2016-02-23 Personics Holdings, Llc Method and system for directional enhancement of sound using small microphone arrays
US9900722B2 (en) 2014-04-29 2018-02-20 Microsoft Technology Licensing, Llc HRTF personalization based on anthropometric features
EP3522569A1 (en) 2014-05-20 2019-08-07 Oticon A/s Hearing device
US10181328B2 (en) 2014-10-21 2019-01-15 Oticon A/S Hearing system
CN107996028A (en) 2015-03-10 2018-05-04 Ossic公司 Calibrate listening devices
US9544706B1 (en) * 2015-03-23 2017-01-10 Amazon Technologies, Inc. Customized head-related transfer functions
US10182710B2 (en) 2015-07-23 2019-01-22 Qualcomm Incorporated Wearable dual-ear mobile otoscope
JP6687032B2 (en) * 2015-09-14 2020-04-22 ヤマハ株式会社 Ear shape analysis method, head-related transfer function generation method, ear shape analysis device, and head-related transfer function generation device
SG10201510822YA (en) * 2015-12-31 2017-07-28 Creative Tech Ltd A method for generating a customized/personalized head related transfer function
US9955279B2 (en) * 2016-05-11 2018-04-24 Ossic Corporation Systems and methods of calibrating earphones
TWI744341B (en) 2016-06-17 2021-11-01 美商Dts股份有限公司 Distance panning using near / far-field rendering
US10154365B2 (en) 2016-09-27 2018-12-11 Intel Corporation Head-related transfer function measurement and application

Also Published As

Publication number Publication date
US20180139532A1 (en) 2018-05-17
US20180139533A1 (en) 2018-05-17
US10104491B2 (en) 2018-10-16
WO2018089952A1 (en) 2018-05-17
EP3539304A4 (en) 2020-07-01
WO2018089956A1 (en) 2018-05-17
US10362432B2 (en) 2019-07-23
JP2020500492A (en) 2020-01-09
WO2018089956A9 (en) 2019-08-15
EP3539304A1 (en) 2019-09-18
US10313822B2 (en) 2019-06-04
US20180132764A1 (en) 2018-05-17
US20180139561A1 (en) 2018-05-17
EP3539305A4 (en) 2020-04-22
JP2019536395A (en) 2019-12-12
US10659908B2 (en) 2020-05-19
US9992603B1 (en) 2018-06-05
EP3539305A1 (en) 2019-09-18
US10433095B2 (en) 2019-10-01
US20190379993A1 (en) 2019-12-12

Similar Documents

Publication Publication Date Title
US9992603B1 (en) Method, system and apparatus for measuring head size using a magnetic sensor mounted on a personal audio delivery device
US11706582B2 (en) Calibrating listening devices
US11528577B2 (en) Method and system for generating an HRTF for a user
JP5894634B2 (en) Determination of HRTF for each individual
JP6824155B2 (en) Audio playback system and method
US10341799B2 (en) Impedance matching filters and equalization for headphone surround rendering
US10880669B2 (en) Binaural sound source localization
EP3837863B1 (en) Methods for obtaining and reproducing a binaural recording
Spagnol et al. Distance rendering and perception of nearby virtual sound sources with a near-field filter model
JP4226142B2 (en) Sound playback device
US11190896B1 (en) System and method of determining head-related transfer function parameter based on in-situ binaural recordings
US10728684B1 (en) Head related transfer function (HRTF) interpolation tool
JP2018152834A (en) Method and apparatus for controlling audio signal output in virtual auditory environment
US20190394583A1 (en) Method of audio reproduction in a hearing device and hearing device
KR20230139847A (en) Earphone with sound correction function and recording method using it
US20250097625A1 (en) Personalized sound virtualization
CN116648932A (en) Method and system for generating personalized free-field audio signal transfer function based on free-field audio signal transfer function data

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.)

AS Assignment

Owner name: EMBODYVR, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAIN, KAPIL;MATHEW, ABHILASH;REEL/FRAME:044126/0213

Effective date: 20171113

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL)

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4