US20190182592A1 - Method for adjusting audio for listener location and head orientation within a physical or virtual space - Google Patents
- Publication number: US 2019/0182592 A1
- Application number: US 15/838,333
- Authority
- US
- United States
- Prior art keywords
- performer
- location
- monitor
- mixer
- orientation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04R27/00—Public address systems
- H04R3/12—Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
- H03G3/3005—Automatic gain control in amplifiers having semiconductor devices, suitable for low frequencies, e.g. audio amplifiers
- H03G5/165—Equalizers; Volume or gain control in limited frequency bands
- H04H60/04—Studio equipment; Interconnection of studios
- H04R2227/007—Electronic adaptation of audio signals to reverberation of the listening space for PA
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
- H04S7/304—For headphones
- The house mixer 112 also receives the output signal from each of the performers' microphones 103-109.
- The house mixer 112 is typically located towards the rear of the audience, not on the stage.
- The output of the house mixer 112 is amplified and sent to the speakers 120 and 122 that the audience hears.
- The number and location of the speakers 120 and 122 are selected to optimize the volume and quality of what the audience hears.
- The performers 102, 104, 106 and 108 do not hear very much of the output of the house speakers.
- The operator of the house mixer 112 modifies the output of the house mixer to maximize the aesthetic quality of the sound heard by the audience. For example, the house mixer may increase the volume of the main performer 108 while reducing the volume of the backup vocalists 106.
- The house mixer can also adjust the overall volume and balance of the house speakers.
- Performers 1 through 7 are all equipped with active radio frequency identification tags and stereo in-ear monitors and/or associated speakers (not shown).
- Anchors 10 through 14 receive signals from the tags. The anchors send the received signals to the computer 30, which analyzes the relative time delay between the signals and locates each of the tags on the stage. Fixed locations, such as ambience microphones 20 and 21, are set in the computer.
- The computer 30 sends localization information to the monitor mixer 40.
- The monitor mix for each performer is automatically adjusted based on the localization information.
- The localization system of the present invention is shown where Performer 1 moves to stage left. As Performer 1 moves, the localization system sends the signals to the computer 30, which computes the new relative locations between each performer and the fixed locations.
- The computer 30 sends the localization information to the monitor mixer 40.
- The monitor mixer 40 adjusts the mix for each performer as follows:
- The panning of the audio signal from Performer 1 would shift to the left in the monitor mixes for Performers 2 through 7.
- The level of the audio signal from Performer 1 would increase in the monitor mixes for Performers 2 and 3, and decrease for Performers 4, 5, 6 and 7.
- The delay time for the audio signal from Performer 1 would decrease for Performers 2 and 3, and increase for Performers 4, 5, 6 and 7.
- These adjustments, in combination with other adjustments, would be made in the monitor mixes for Performers 2 through 7 so that their aural perception of where Performer 1 is at any particular moment matches their visual perception.
- The panning of the audio signals from Performers 2 through 7 and from Ambience Microphones 20 and 21 would shift to the right in the monitor mix for Performer 1.
- The audio signals from Performers 2 and 3 and from Ambience Microphone 20 would increase, and the audio signals from Performers 4, 5, 6 and 7 and from Ambience Microphone 21 would decrease in the monitor mix for Performer 1.
- The delay times of the audio signals from Performers 2 and 3 and Ambience Microphone 20 would decrease, and the delay times of the audio signals from Performers 4, 5, 6 and 7 and Ambience Microphone 21 would increase.
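The listed adjustments follow directly from stage geometry. The sketch below is a minimal illustration of that idea, not code from the patent: it assumes a simple sine pan law, an inverse-distance level law, and the speed of sound for propagation delay; the function and parameter names are invented for the example.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed for delay calculation


def mix_params(listener_pos, listener_yaw, source_pos, ref_dist=1.0):
    """Derive pan, gain and delay for one source in one performer's
    monitor mix from stage geometry (illustrative only).

    listener_yaw: heading in radians; 0 means facing +y (the audience
    in this example's convention), positive turns to the right.
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = max(math.hypot(dx, dy), 1e-6)
    # Azimuth of the source relative to where the listener is facing:
    azimuth = math.atan2(dx, dy) - listener_yaw
    pan = math.sin(azimuth)             # -1 = hard left, +1 = hard right
    gain = min(ref_dist / dist, 1.0)    # inverse-distance level law
    delay = dist / SPEED_OF_SOUND       # seconds of propagation delay
    return pan, gain, delay
```

For a listener at the origin facing the audience, a source 5 m straight ahead pans to center with a gain of 0.2 and about 15 ms of delay, while a source 5 m to the right pans hard right, matching the behavior described above.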
- The head orientation system of the present invention is shown as Performer 1 turns his/her head to the right.
- An electronic compass located in or near one of the two in-ear monitors detects the change of orientation.
- The orientation information is sent from Performer 1 to the computer 30 through a Bluetooth connection.
- The computer sends the information to the monitor mixer.
- The monitor mixer automatically adjusts the monitor mix for Performer 1, shifting the stereo image so that it continues to match what Performer 1 sees.
- The Monitor Mixer 40 has the standard audio inputs and outputs. In the event that the location and/or orientation systems fail, the audio mixes would still be sent to the performers. In addition to the standard features found on monitor mixers, the Monitor Mixer 40 would have the added features of adjusting time delay, stereo balance, equalization, reverberation and transient response for every input to each mix. The Monitor Mixer 40 would also have the feature of adjusting HRTF and stereo balance on each output.
- The Anchors 10-14 send their data to the Computer 30, which analyzes the data to determine the location of each performer.
- The Computer 30 sends control signals to the Monitor Mixer 40, which is equipped to receive these signals.
- The Monitor Mixer 40 makes the appropriate audio adjustments to conform the audio image to the location of the audio source relative to the location of the audio output.
- The orientation data generated by the compass in each performer's in-ear monitors are sent to the Computer 30.
- The Computer 30 generates control signals and sends them to the Monitor Mixer 40.
- The Monitor Mixer 40 makes the appropriate adjustments to the individual audio outputs to conform to the head orientation of each performer.
Abstract
The present invention is a system that detects the location and head orientation of a live performer on stage in front of an audience and adjusts the individual elements of the mix, either by changing them at the listener's position, by feeding the location and orientation information back to the monitor mixer, or by a combination of both. The adjustments would include left/right panning, relative levels, equalization, transient response, reverberation levels, direction, and time delay, as well as other possible modifications to the signal, so that the performer senses that he/she is actually listening to the various instruments, vocals, acoustic space and audience in their real or virtual locations.
Description
- Many performers on stage use "in ear monitors" (IEMs) rather than monitor speakers. There are a number of advantages to the use of IEMs. One advantage is that the IEMs allow the performers to move around the stage and still hear their own monitor mix. Another advantage is that without speakers on stage, the chance of feedback loops forming between the monitor speakers and the microphones is eliminated. Another advantage is that without the high levels from monitor speakers leaking into open microphones on stage, the front of house mix is cleaner.
- A common complaint about IEMs is that because the performer is presented with the same mix regardless of the performer's location on stage or the orientation of the performer's head relative to the other performers on stage and the audience in front, the performer feels "separated" or "isolated" from the live performance, diminishing the realism and immediacy that performers rely upon. Also, the sonic image is entirely inside the listener's head. This adds to the performer's cognitive load, speeding the onset, and increasing the amount, of listener fatigue. Listener fatigue is a temporary threshold shift (lower-level signals are not heard) and a spreading of the critical bands (hearing in noise is diminished). The typical response to listener fatigue is to increase the sound level, thus increasing the fatigue and its ill effects. A vocalist or a musician relies on his/her ability to hear in order to perform; as that ability is diminished, the performance suffers.
- One effort to mitigate this problem has been to add ambience microphones to the mix. But that "ambience" is still the same regardless of where the performer is on stage and which way the performer is facing. Another effort has been to vent the in-ear piece itself so that there is reduced isolation. This gives a greater sense of ambience, but at the possible cost of clarity of the actual output of the in-ear monitor, as well as possible phase cancellation between the ambient and electronic signals in the monitor. In addition, there may be instruments that have no acoustic output on stage (e.g., electronic keyboards or electronic drums) and therefore would not be part of the ambience mix. These solutions would not give the performer any accurate sense of location on stage or proximity to other performers.
- There are locating systems that track a performer's location on stage but use that information to adjust the panning of that performer's signal only for the front of house mix in the PA system, not for the monitor system.
- Real-time locating systems (RTLS) are used to automatically identify and track the location of objects or people in real time, usually within a building or other contained area. Wireless RTLS tags are attached to objects or worn by people, and in most RTLS, fixed reference points receive wireless signals from tags to determine their location. Examples of real-time locating systems include tracking automobiles through an assembly line, locating pallets of merchandise in a warehouse, or finding medical equipment in a hospital. The physical layer of RTLS technology is usually some form of radio frequency (RF) communication, but some systems use optical (usually infrared) or acoustic (usually ultrasound) technology instead of or in addition to RF. Tags and fixed reference points can be transmitters, receivers, or both, resulting in numerous possible technology combinations. RTLS are a form of local positioning system, and do not usually refer to GPS or to mobile phone tracking. Location information usually does not include speed, direction, or spatial orientation.
- A number of disparate system designs are all referred to as “real-time locating systems”, but there are two primary system design elements:
- ID signals from a tag are received by a multiplicity of readers in a sensory network, and a position is estimated using one or more locating algorithms, such as trilateration, multilateration, or triangulation. Equivalently, ID signals from several RTLS reference points can be received by a tag, and relayed back to a location processor. Localization with multiple reference points requires that distances between reference points in the sensory network be known in order to precisely locate a tag, and the determination of distances is called ranging. Another way to calculate relative location is if mobile tags communicate directly with each other, then relay this information to a location processor.
- RF trilateration uses estimated ranges from multiple receivers to estimate the location of a tag. RF triangulation uses the angles at which the RF signals arrive at multiple receivers to estimate the location of a tag. Many obstructions, such as walls or furniture, can distort the estimated range and angle readings, leading to varying quality of location estimates. Estimation-based locating is often measured in accuracy at a given distance, such as 90% accuracy at a 10-meter range. Systems that use locating technologies that do not go through walls, such as infrared or ultrasound, tend to be more accurate in an indoor environment because only tags and receivers that have line of sight (or near line of sight) can communicate.
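As a concrete illustration of trilateration, the sketch below estimates a 2-D tag position from ranges to three or more fixed anchors. Subtracting the first range equation from the others linearizes the problem, which can then be solved by least squares. This is a generic textbook method, not code from any particular RTLS product.

```python
import numpy as np


def trilaterate(anchors, ranges):
    """Estimate a 2-D tag position from ranges to 3+ fixed anchors.

    anchors: (n, 2) array-like of known anchor positions
    ranges:  (n,) array-like of measured distances to the tag
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    r0 = ranges[0]
    # Each row encodes: 2(xi-x0)x + 2(yi-y0)y = r0^2 - ri^2 + xi^2+yi^2 - x0^2-y0^2
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With anchors at (0,0), (10,0) and (0,10) and ranges measured from a tag at (3,4), the solver recovers (3,4); with noisy ranges, least squares returns the best-fit position.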
- There is a wide variety of systems concepts and designs to provide real-time locating.
- Active radio frequency identification (Active RFID)
- Active radio frequency identification—infrared hybrid (Active RFID-IR)
- Infrared (IR)
- Optical locating
- Low-frequency signpost identification
- Semi-active radio frequency identification (semi-active RFID)
- Passive RFID RTLS locating via Steerable Phased Array Antennae
- Radio beacon
- Ultrasound Identification (US-ID)
- Ultrasonic ranging (US-RTLS)
- Ultra-wideband (UWB)
- Wide-over-narrow band
- Wireless Local Area Network (WLAN, Wi-Fi)
- Bluetooth
- Clustering in noisy ambience
- Bivalent systems
- Depending on the physical technology used, at least one and often some combination of ranging and/or angulating methods are used to determine location:
- Angle of arrival (AoA)
- Line-of-sight (LoS)
- Time of arrival (ToA)
- Multilateration (Time difference of arrival) (TDoA)
- Time-of-flight (ToF)
- Two-way ranging (TWR)
- Symmetrical Double Sided—Two-Way Ranging (SDS-TWR)
- Near-field electromagnetic ranging (NFER)
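As an example of the ranging methods above, two-way ranging (TWR) derives distance from the measured round-trip time minus the responder's known reply delay, so no clock synchronization between tag and anchor is needed. A minimal sketch, with illustrative names:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s


def twr_distance(t_round, t_reply):
    """Two-way ranging: the initiator measures the round-trip time
    t_round and subtracts the responder's known reply delay t_reply;
    half the remaining time of flight, times c, is the distance."""
    time_of_flight = (t_round - t_reply) / 2.0
    return SPEED_OF_LIGHT * time_of_flight
```

For a tag 10 m away, the one-way time of flight is only about 33 ns, which is why UWB systems with very precise timestamps are a common physical layer for this method.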
- There are commercial products that use various means to detect head orientation such as Waves NX or Klang, but they do not use any means to detect location, diminishing their usefulness in any application where the listener and/or the signal sources are moving.
- Other systems have used tracking devices worn by the performers to localize the performers on the stage (see https://ubisense.net/en/news-events/news/ubisense-and-outboard-deliver-vocal-localisation-solutions-tokyos-new-national-theatre-and-finlands-national-opera). However, this system does not assist the performers themselves and does not track the orientation of the performers' heads.
- It is an object of the present invention to better correlate the sonic image with the performer's visual image;
- It is another object of the present invention to reduce listener fatigue;
- It is another object of the present invention to vary the mix based on the location of the performer;
- It is another object of the present invention to vary the mix based on the head orientation of the performer;
- It is another object of the present invention to have the sonic image in front of the performer;
- It is another object of the present invention to improve virtual reality systems.
- These and other objects will be evident from a review of the following specification and drawings.
- The preferred embodiment of the invention is a system that detects the location and head orientation of a live performer on stage in front of an audience and adjusts the individual elements of the mix either by changing them at the listener's position or by feeding the location and orientation information back to the monitor mixer or a combination of both. The adjustments would include left/right, front/back and up/down panning; relative levels; equalization; transient response; reverberation levels, direction, time delay; as well as other possible modifications to the signal so that the performer senses that he/she is actually listening to the various instruments, vocals, acoustic space and audience, in their real or virtual locations.
- The location detection system has a detection system on or near the stage. Each performer would wear one or more remotely readable devices (such as an RFID tag) with unique digital identities. The detectors could use relative signal strength, time delay difference, triangulation, zoning algorithms, or other methods or combinations thereof to locate the performer in the performance area.
- Orientation information would be generated by a means such as a magnetometer or gyroscope that detects the orientation of the performer's head relative to known horizontal and vertical references. The detection device could be placed on or near one of the ear pieces in order to follow the performer's head movements. That information would be sent to a belt pack receiver, which would make adjustments in the pack, or sent back to the monitor mixing desk for it to make the adjustments, or a combination of both.
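As a sketch of how a magnetometer yields head orientation, the function below computes a yaw heading from the horizontal field components. It assumes the sensor is held level; a production system would tilt-compensate with gyroscope/accelerometer data and correct for magnetic declination. The axis convention is an assumption for the example, not specified by the patent.

```python
import math


def heading_degrees(mx, my):
    """Yaw heading from a level magnetometer's horizontal components.

    Convention assumed here: mx points toward magnetic north when the
    heading is 0, and the result increases from 0 to 360 degrees as
    the head turns. Real devices differ in axis orientation and must
    also be calibrated for hard/soft-iron distortion.
    """
    heading = math.degrees(math.atan2(my, mx))
    return heading % 360.0
```

The mixer (or belt pack) would poll this heading at the audio control rate and use the difference from the reference orientation to re-steer the mix.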
- A "reference" mix would be created based on where the performer would most often be standing while facing the audience. As the performer moved around the stage, the components of the mix would be modified in level, delay, equalization, transient response, reverberation, etc. For example, as the performer moved closer to an instrument, that instrument would increase in level, have less delay time, have greater high-frequency equalization, and have greater transient response. As the performer moved away from the instrument, the level of the instrument would be reduced, the delay time increased, and the high-frequency equalization and transient response decreased. The ratio of direct signal to reverberation, as well as the timing of the reflections, could also be modified based on the proximity of the performer to an instrument.
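The distance-dependent cues described above can be sketched as follows. The constants (an inverse-distance direct gain, a fixed diffuse reverberation level, and a lowpass cutoff that shrinks with distance as a crude stand-in for air absorption) are illustrative assumptions, not values from the patent:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed


def distance_cues(dist, ref_dist=1.0):
    """Illustrative per-source cues for a monitor mix as a function of
    listener-to-instrument distance (all constants are assumptions)."""
    # Direct sound falls off with distance; reverberant field is
    # roughly constant in a room, so the direct/reverb ratio drops.
    direct_gain = min(ref_dist / max(dist, ref_dist), 1.0)
    reverb_gain = 0.2
    direct_to_reverb = direct_gain / reverb_gain
    # High frequencies roll off with distance (crude air-absorption
    # stand-in): shrink a lowpass cutoff as distance grows.
    lowpass_hz = 20_000.0 / (1.0 + 0.05 * dist)
    delay_s = dist / SPEED_OF_SOUND
    return direct_gain, direct_to_reverb, lowpass_hz, delay_s
```

Moving from 1 m to 10 m drops the direct gain from 1.0 to 0.1, lowers the direct-to-reverb ratio, pulls the lowpass cutoff down, and lengthens the delay, which is exactly the qualitative behavior the reference-mix scheme calls for.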
- As the performer turned his/her head, that is, changed orientation, the stereo mix would be modified. For example, as the performer turns his/her head from facing the audience to the right, the stereo image would shift so that the audience was more in the left ear and the instruments were more in the right ear. The mix could also be modified to account for the head-related transfer function (HRTF). In addition to modifying the stereo mix to indicate right and left, it is also envisioned that the up/down and front/back orientations would be modified as well, providing the performer with a complete orientation of the sound in space.
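The head-turn behavior described above can be sketched with a constant-power stereo pan applied after subtracting the head yaw from each source's azimuth. This is a simplified two-channel illustration, not the patent's full HRTF processing:

```python
import math


def rotated_pan_gains(source_azimuth, head_yaw):
    """Constant-power stereo gains for a source after accounting for
    head yaw. Angles in radians; 0 = straight ahead, positive = right.
    As the head turns right, a frontal source moves toward the left ear.
    """
    rel = source_azimuth - head_yaw
    pan = math.sin(rel)                      # -1 hard left .. +1 hard right
    # (sin() folds rear sources toward the front; a full system would
    # add HRTF cues to resolve front/back, as envisioned above.)
    theta = (pan + 1.0) * math.pi / 4.0      # map pan onto 0 .. pi/2
    left, right = math.cos(theta), math.sin(theta)
    return left, right                       # left**2 + right**2 == 1
```

A source straight ahead gets equal gains; if the performer turns 90 degrees to the right, that same source lands entirely in the left ear, matching the audience-in-the-left-ear example above. Constant power keeps perceived loudness steady as the image moves.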
- The apparatus that receives the location and orientation information would be capable of making all the adjustments within preset limits and also be able to revert to a reference mix if the location or the orientation system fails.
- The sonic image when listening to headphones is almost always inside the listener's head. There are techniques which bring the image out of the listener's head, but only to the sides and behind the head, not in front. One of the aims of the invention is to better correlate the sonic image with the performer's visual image.
- Each individual has “learned” to locate a sonic image based on his/her own unique combination of pinna shape and head size, necessitating customized measurement for each individual. One method for making individualized headsets is to use miniature microphones in each ear and measure the response for different input locations (Smyth Research). Another method would be to have a general setting and through successive approximation adjust the setting so that the individual correlates the sonic image to a real or virtual visual image.
- Other applications for this technology include, but are not limited to, education and training systems, virtual and augmented reality displays, games, museums, and amusement park rides.
-
FIG. 1 is a view of the prior art stage and mixers. -
FIG. 2 is a view of the stage showing the performers and their orientation facing the audience. -
FIG. 3 is a view of the stage showing the performers with performer 1 moving to stage left and facing the audience. -
FIG. 4 is a view of the stage showing performers and their orientation with performer 1 at the stage left, facing the back right of the stage. -
FIG. 5 is a block diagram of the Monitor Mixer and computer controlling the input for the performers' microphones and the output to the various performers' headphones and/or speakers. -
FIG. 1 shows the prior art use of the monitor and house mixers and their relationship to the performers and the audience. - On the stage 200 are located the performers 102-108, including, for example only, the drummer 102, keyboard player 104, backup vocalists 106 and the main performer 108. Each of the above performers has associated microphones 103, 105, 107 and 109 to pick up the performers' voices and/or instruments. Wired microphone signals are split through a transformer (not shown) and wireless microphone signals are split after the receiver. One leg of the split is sent to the monitor mixer 110 and one is sent to the house mixer 112. The monitor mixer 110 is typically located at the side of the stage and is operated by a monitor mixing engineer, not shown. The monitor mixer 110 receives an input from each of the performers and/or the performers' instruments, and the person operating the monitor mixer 110 can control the volume and tone of each of the inputs from the performers, as well as otherwise alter the input signal. For example, the monitor mixing person can increase the sound of the drums, the keyboard, or the vocal, or change the tone of one or any combination of them. The monitor mixer 110 sends its outputs to the headphones 111, 113, 115, and 117 worn by the performers 102, 104, 106 and 108 and/or to the speakers 111a, 113a, 115a, and 117a associated with each performer 102-108, either by hard wire or by wireless connection. The outputs from the monitor mixer 110 are thus heard by each of the performers. - The monitor engineer will create an individual mix for each performer based on that performer's preferences. Each mix is then sent either to the monitor speakers located near each performer or, via RF, to headphones worn by that performer. Performers in a fixed position, such as a drummer, may use just speakers. Performers who move around the stage would use just the wireless earphones. Some performers demand to have both.
- The
house mixer 112 also receives the output signal from each of the performers' microphones 103-109. The house mixer 112 is typically located towards the rear of the audience, not on the stage. The output of the house mixer 112 is amplified and sent to the speakers 120 and 122 that the audience hears. The number and location of the speakers 120 and 122 are selected to optimize the volume and quality of what is heard by the audience. The performers 102, 104, 106 and 108 do not hear very much of the output of the house speakers. The operator of the house mixer 112 modifies the output of the house mixer to maximize the aesthetic quality of the sound heard by the audience. For example, the house mixer may increase the volume of the vocal performer while reducing the volume of the background vocalists 106. The house mixer can also adjust the overall volume and balance of the house speakers. - Referring to
FIG. 2 , the localization system of the present invention is shown. Performers 1 through 7 are all equipped with active radio frequency identification tags and stereo in-ear monitors and/or associated speakers (not shown). Anchors 10 through 14 receive signals from the tags. The anchors send the received signals to the computer 30, which analyzes the relative time delays between the signals and locates each of the tags on the stage. Fixed locations, such as the ambience microphones 20 and 21, are set in the computer. -
Computer 30 sends localization information to the monitor mixer 40. The monitor mix for each performer is automatically adjusted based on the localization information. - Referring to
FIG. 3 , the localization system of the present invention is shown where Performer 1 moves to stage left. As Performer 1 moves, the localization system sends the signals to the computer 30, which computes the new relative locations between each performer and the fixed locations. - The
computer 30 sends the localization information to the monitor mixer 40. The monitor mixer 40 adjusts the mix for each performer as follows: - In comparison to
FIG. 2 , the panning of the audio signal from Performer 1 would shift to the left in the monitor mixes for Performers 2 through 7. The level of the audio signal from Performer 1 would increase in the monitor mixes for Performers 2 and 3 and decrease for Performers 4, 5, 6 and 7. The delay time for the audio signal from Performer 1 would decrease for Performers 2 and 3 and increase for Performers 4, 5, 6 and 7. These adjustments, in combination with other adjustments (reverberation, equalization, head related transfer function, etc.), would be made in the monitor mixes for Performers 2 through 7 so that their aural perception of where Performer 1 is at any particular moment will match their visual perception. - The panning of the audio signals from Performers 2 through 7 and from
Ambience Microphones 20 and 21 would shift to the right in the monitor mix for Performer 1. The audio signals from Performers 2 and 3 and from Ambience Microphone 20 would increase, and the audio signals from Performers 4, 5, 6 and 7 and from Ambience Microphone 21 would decrease, in the monitor mix for Performer 1. The delay times of the audio signals from Performers 2 and 3 and Ambience Microphone 20 would decrease, and the delay times of the audio signals from Performers 4, 5, 6 and 7 and Ambience Microphone 21 would increase. These adjustments, along with other adjustments (reverberation, equalization, head related transfer function, etc.), would be made in the monitor mix for Performer 1 so that that performer's aural perception of where Performers 2 through 7 are at any particular moment will match his/her visual perception. - Referring to
FIG. 4 , the head orientation system of the present invention is shown as Performer 1 turns his/her head to the right. An electronic compass located in or near one of the two in-ear monitors detects the change of orientation. The orientation information is sent from Performer 1 to the computer 30 through a Bluetooth connection. The computer sends the information to the monitor mixer. The monitor mixer automatically adjusts the monitor mix for Performer 1 as follows: - The panning of the audio signals from
Ambience Microphones 20 and 21 and Performers 6 and 7 would shift to the left. Audio signals from Performers 4 and 5 would be centered. Audio signals from Performers 2 and 3 would shift to the right. Head related transfer function adjustments would be applied to the audio signals based on the new orientation of Performer 1. These adjustments will aid in making Performer 1's aural perception match his/her visual perception. - Referring to
FIG. 5 , the Monitor Mixer 40 has the standard audio inputs and outputs. In the event that the location and/or orientation systems fail, the audio mixes would still be sent to the performers. In addition to the standard features found on monitor mixers, the Monitor Mixer 40 would have the added features of adjusting time delay, stereo balance, equalization, reverberation and transient response for every input to each mix. The Monitor Mixer 40 would also have the feature of adjusting HRTF and stereo balance on each output. - The Anchors 10-14 send their data to the
Computer 30, which analyzes the data to determine the location of each performer. The Computer 30 sends control signals to the Monitor Mixer 40, which is equipped to receive these signals. The Monitor Mixer 40 makes the appropriate audio adjustments to conform the audio image to the location of the audio source relative to the location of the audio output. - The orientation data generated by the compass in the performers' in-ear monitors are sent to the
Computer 30. TheComputer 30 generates control signals and sends them to theMonitor Mixer 40. TheMonitor Mixer 40 makes the appropriate adjustments to the individual audio outputs to conform to the head orientation of each performer.
Claims (3)
1-2. (canceled)
3. A method of detecting, for at least one live performer who is moving, the location of the performer moving and adjusting the sound mix sent to the moving performer's in ear monitor (IEM) based on the performer's location comprising the steps of:
providing a location identification tag for each performer moving so as to locate the performer as the performer moves;
providing a microphone to send a signal from the performer to a monitor mixer;
each mixer configured to detect changes in the signal sent to the monitor mixer dependent on the location of each performer; and
providing a computer in communication with the monitor mixer, the computer configured to detect the location of each performer, and the computer configured to vary the output from the monitor mixer to the in ear monitor (IEM) of the performer wearing the location identification tag dependent on the performer's location.
4. A method of detecting, for at least one live performer who is moving, the location of the performer moving and the orientation of the moving performer's head, and adjusting the sound mix sent to the moving performer's in ear monitor (IEM) based on the performer's location and head orientation, comprising the steps of:
providing a location identification tag for each performer moving so as to locate the performer and in ear monitors (IEM) that are capable of transmitting a signal showing the orientation of the performer's head as the performer moves;
providing a microphone to send a signal from the performers to a monitor mixer;
configuring each mixer to detect changes in the signal sent to the monitor mixer dependent on the location of each performer and the orientation of the performer's head; and
providing a computer in communication with the monitor mixer, the computer configured to detect the location and orientation of the head of each performer, and the computer to vary the output from the monitor mixer to the in ear monitor (IEM) of the performer, dependent on the performer's location and the orientation of the performer's head.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/838,333 US20190182592A1 (en) | 2017-12-11 | 2017-12-11 | Method for adjusting audio for listener location and head orientation within a physical or virtual space |
| US16/432,182 US11159883B2 (en) | 2017-12-11 | 2019-06-05 | Method for adjusting listener location and head orientation within a physical or virtual space |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/838,333 US20190182592A1 (en) | 2017-12-11 | 2017-12-11 | Method for adjusting audio for listener location and head orientation within a physical or virtual space |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/432,182 Continuation US11159883B2 (en) | 2017-12-11 | 2019-06-05 | Method for adjusting listener location and head orientation within a physical or virtual space |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20190182592A1 true US20190182592A1 (en) | 2019-06-13 |
Family
ID=66696600
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/838,333 Abandoned US20190182592A1 (en) | 2017-12-11 | 2017-12-11 | Method for adjusting audio for listener location and head orientation within a physical or virtual space |
| US16/432,182 Active US11159883B2 (en) | 2017-12-11 | 2019-06-05 | Method for adjusting listener location and head orientation within a physical or virtual space |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/432,182 Active US11159883B2 (en) | 2017-12-11 | 2019-06-05 | Method for adjusting listener location and head orientation within a physical or virtual space |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US20190182592A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050130717A1 (en) * | 2003-11-25 | 2005-06-16 | Gosieski George J.Jr. | System and method for managing audio and visual data in a wireless communication system |
| US20100045462A1 (en) * | 2008-08-25 | 2010-02-25 | James Edward Gibson | Devices for identifying and tracking wireless microphones |
| US20130010984A1 (en) * | 2011-07-09 | 2013-01-10 | Thomas Hejnicki | Method for controlling entertainment equipment based on performer position |
| US20170359646A1 (en) * | 2014-12-22 | 2017-12-14 | Klang:Technologies Gmbh | Cable Set |
| US10003901B1 (en) * | 2016-03-20 | 2018-06-19 | Audio Fusion Systems, LLC | Graphical monitor mixing system that uses a stage plot to create spatially accurate sound |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9197755B2 (en) * | 2013-08-30 | 2015-11-24 | Gleim Conferencing, Llc | Multidimensional virtual learning audio programming system and method |
| AU2017210021B2 (en) * | 2016-01-19 | 2019-07-11 | Sphereo Sound Ltd. | Synthesis of signals for immersive audio playback |
| US9986363B2 (en) * | 2016-03-03 | 2018-05-29 | Mach 1, Corp. | Applications and format for immersive spatial sound |
| US10754608B2 (en) * | 2016-11-29 | 2020-08-25 | Nokia Technologies Oy | Augmented reality mixing for distributed audio capture |
- 2017-12-11: US application 15/838,333 (published as US20190182592A1), status Abandoned
- 2019-06-05: US application 16/432,182 (granted as US11159883B2), status Active
Also Published As
| Publication number | Publication date |
|---|---|
| US11159883B2 (en) | 2021-10-26 |
| US20190289394A1 (en) | 2019-09-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12262193B2 (en) | Audio source spatialization relative to orientation sensor and output | |
| US10972835B2 (en) | Conference system with a microphone array system and a method of speech acquisition in a conference system | |
| US11750997B2 (en) | System and method for providing a spatialized soundfield | |
| KR101011543B1 (en) | Method and apparatus for creating a multi-dimensional communication space for use in a binaural audio system | |
| US8831761B2 (en) | Method for determining a processed audio signal and a handheld device | |
| US9924291B2 (en) | Distributed wireless speaker system | |
| US9712940B2 (en) | Automatic audio adjustment balance | |
| CN110972033A (en) | System and method for modifying audio data information based on one or more Radio Frequency (RF) signal reception and/or transmission characteristics | |
| US20150326963A1 (en) | Real-time Control Of An Acoustic Environment | |
| US11812235B2 (en) | Distributed audio capture and mixing controlling | |
| EP2410769B1 (en) | Method for determining an acoustic property of an environment | |
| CN110383374A (en) | Audio communication system and method | |
| CN108028976A (en) | Distributed audio microphone array and locator configuration | |
| JP2005513935A (en) | Peer base positioning | |
| US9832587B1 (en) | Assisted near-distance communication using binaural cues | |
| US10616684B2 (en) | Environmental sensing for a unique portable speaker listening experience | |
| WO2013132393A1 (en) | System and method for indoor positioning using sound masking signals | |
| US11490201B2 (en) | Distributed microphones signal server and mobile terminal | |
| US9826332B2 (en) | Centralized wireless speaker system | |
| US12328558B2 (en) | Sound output unit and a method of operating it | |
| US11159883B2 (en) | Method for adjusting listener location and head orientation within a physical or virtual space | |
| JP4450764B2 (en) | Speaker device | |
| JP2011155500A (en) | Monitor control apparatus and acoustic system | |
| US12483852B1 (en) | System and method for adjusting loudspeaker performance based on listener location | |
| Vibhute et al. | Sound Source Localization in 3D using Asymmetrical Positioned and Skew Aligned Two-Array Microphone-Experimentation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |