
US20180359582A1 - Method and Device for Estimating Acoustic Reverberation - Google Patents


Publication number: US20180359582A1 (United States)
Granted as: US10313809B2
Application number: US 15/778,146
Inventors: Arthur Belhomme, Yves Grenier, Roland Badeau, Eric Humbert
Original and current assignee: Invoxia SAS
Legal status: Active (granted)

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04R: Loudspeakers, microphones, gramophone pick-ups or like acoustic electromechanical transducers; deaf-aid sets; public address systems
          • H04R 29/00: Monitoring arrangements; testing arrangements
            • H04R 29/001: Monitoring arrangements; testing arrangements for loudspeakers
    • G: PHYSICS
      • G10: MUSICAL INSTRUMENTS; ACOUSTICS
        • G10L: Speech analysis techniques or speech synthesis; speech recognition; speech or voice processing techniques; speech or audio coding or decoding
          • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
            • G10L 25/03: characterised by the type of extracted parameters
              • G10L 25/21: the extracted parameters being power information
    • H04S: STEREOPHONIC SYSTEMS
      • H04S 7/00: Indicating arrangements; control arrangements, e.g. balance control



Abstract

A method for estimating the acoustic reverberation in an environment, comprising the following steps: (a) a measurement step in which an acoustic signal emitted in the environment is captured; (b) an observation step during which an acoustic energy decay rate distribution is determined from the acoustic signal captured in step (a); (c) an estimation step during which a reverberation time and a reverberation level of sound in the environment are estimated by regression from the characteristic function of the acoustic energy decay rate distribution determined in step (b).

Description

    FIELD OF THE INVENTION
  • This invention relates to methods and devices for estimating acoustic reverberation.
  • BACKGROUND OF THE INVENTION
  • Estimating the acoustic reverberation of an environment is essential for capturing acoustic signals such as speech in a reverberating environment such as for example a room in a building.
  • When a sound is emitted and then captured by a microphone in a reverberating environment, the microphone captures not only the signal received directly, but also signals reverberating in the environment.
  • This reverberation is reflected by the impulse response of the environment, from which various known parameters emerge, in particular the reverberation time. The impulse response can be measured directly by emitting an acoustic impulse in the environment, but this method is burdensome and impractical for repeated measurements while one or more speakers talk in the room.
  • The reverberation time can be estimated blindly, for example while one or more speakers talk. The most commonly used parameter for representing the reverberation time is the reverberation time at 60 dB, RT60.
  • As an example, the document US 2014/169,575 describes a method for blind estimation of reverberation time in a room.
  • However, the reverberation time is not representative of the distance between the emitter and the microphone, which has a significant impact on the reverberation level. The captured acoustic signals therefore cannot be satisfactorily processed with the known methods of the aforementioned type.
  • PURPOSE AND SUMMARY OF THE INVENTION
  • The purpose of the present invention is therefore to propose a method for estimating the acoustic reverberation that avoids this disadvantage.
  • For this purpose, the invention proposes a method for estimating the acoustic reverberation in an environment, comprising the following steps:
  • (a) a measurement step in which at least one acoustic signal in the environment is captured;
    (b) an observation step during which an acoustic energy decay rate distribution is determined from the acoustic signal captured in step (a) and the characteristic function of the acoustic energy decay rate distribution is determined;
    (c) an estimation step during which a characteristic reverberation time and a characteristic reverberation level of the sound in the environment are estimated from data representative of the acoustic energy decay rate distribution determined in step (b), where the regression is done with reference to:
      • reference characteristic functions representative respectively of several acoustic energy decay rate distributions;
      • reference characteristic reverberation times corresponding to said reference characteristic functions; and
      • reference characteristic reverberation levels corresponding to said reference characteristic functions.
  • Because of these arrangements, and in particular because of the fact that the estimation method is applied to the acoustic energy decay rate distribution, both a characteristic reverberation time and a characteristic reverberation level can be reliably determined for the sound in the environment. The captured sound signals can be processed satisfactorily with these two parameters.
  • In various embodiments of the method according to the invention, one and/or another of the following dispositions can possibly be used:
      • during the estimation step (c), a kernel function estimator is used and the characteristic reverberation time and the characteristic reverberation level are determined simultaneously;
      • during the estimation step (c), a Nadaraya-Watson estimator is used;
      • during the estimation step (c), the characteristic reverberation level of the sound in the environment (7) is chosen from among the clarity index Cτ and the definition index Dτ;
      • during the observation step (b), the energy decay rates are determined by calculating the energy Em of the acoustic signal on successive signal frames m, and then calculating a logarithmic ratio between the energy of two successive frames:
  • ρ(m) = log(E_m / E_{m−1});  (5)
      • the method further comprises a preliminary calibration phase comprising the following steps:
        (a′) at least one initial reference signal determination step in which a plurality of reference acoustic signals corresponding to said reference characteristic reverberation times and said reference characteristic reverberation levels are determined;
        (b′) at least one initial observation step during which, an acoustic energy decay rate distribution and the reference characteristic function are determined for each reference acoustic signal;
      • during said reference signal determination step, at least one part of the reference acoustic signals and the reference characteristic reverberation times and characteristic reverberation levels corresponding to said reference acoustic signals are determined by calculation from a predetermined set of impulse responses;
      • during said reference signal determination step, at least one part of the reference acoustic signals, the characteristic reverberation times and the reference characteristic reverberation levels corresponding to said reference acoustic signals are determined by measurement.
  • Further, an object of the invention is also a device for estimating the acoustic reverberation in an environment, comprising:
  • (a) means of measurement for capturing at least one acoustic signal emitted in the environment;
    (b) means of determination of an acoustic energy decay rate distribution from the acoustic signal captured by the means of measurement, and for determining the characteristic function of the acoustic energy decay rate distribution;
    (c) means of estimation of a characteristic reverberation time and a characteristic reverberation level of the sound in the environment from data representative of the acoustic energy decay rate distribution, where the regression is done with reference to:
      • reference characteristic functions representative respectively of several acoustic energy decay rate distributions;
      • reference characteristic reverberation times corresponding to said reference characteristic functions; and
      • reference characteristic reverberation levels corresponding to said reference characteristic functions.
    BRIEF DESCRIPTION OF THE DRAWINGS
  • Other features and advantages of the invention will become apparent during the following description of one of the embodiments thereof, given as a nonlimiting example, with reference to the attached drawings.
  • In the drawings:
  • FIG. 1 is a schematic view showing the reverberation of sound in a room when a subject speaks so that their speech is captured by a device according to an embodiment of the invention;
  • FIG. 2 is a conceptual drawing of the device from FIG. 1.
  • MORE DETAILED DESCRIPTION
  • In the various figures, the same references designate identical or similar items.
  • The purpose of the invention is to estimate the acoustic reverberation of an environment 7, for example a room in a building as shown schematically in FIG. 1, so as to process the acoustic signals captured by an electronic device 1 provided with a microphone 2. The electronic device 1 can for example be a telephone, as in the example shown, or another device such as a computer.
  • When the sound is emitted in the environment 7, for example by the person 3, this sound propagates to the microphone 2 along various paths 4, either directly, or after reflection from one or more walls 5, 6 of the environment 7.
  • As shown in FIG. 2, the electronic device 1 can comprise for example a central electronic unit 8 such as a processor or other, connected to the microphone 2 and various other elements, including for example a speaker 9, keyboard 10 and screen 11. The central electronic unit 8 can communicate with an external network 12, for example a telephone network.
  • With the invention, the electronic device 1 is able to measure blind two characteristic parameters of the reverberation of the environment 7:
      • a characteristic reverberation time, for example the reverberation time at 60 dB RT60; and
      • a characteristic reverberation level (for example clarity or definition index, or direct signal over reverberated signal index).
  • These parameters can be used for eliminating the effects of echoes or, more generally, for optimizing sound signals captured by the microphone 2. The parameters in question are estimated repeatedly, so that the device 1 adapts, for example, to changes of speaker 3, movements of speakers 3, and movements of the device 1 or other objects in the environment 7.
  • The reverberation time at 60 dB, RT60, can be defined by Manfred R. Schroeder's inverse integration method (New Method of Measuring Reverberation Time, The Journal of the Acoustical Society of America, 37(3):409, 1965), via the Energy Decay Curve (EDC):

  • EDC(n) = Σ_{k=n}^{N_h} h(k)²  (1)
  • where:
      • h is the impulse response of the environment, of length N_h,
      • n is a temporal index, for example a number of samples obtained with constant-time-step sampling; n lies between 1 and N_h.
  • RT60 is the time, at temporal index n, required for EDC(n) to decrease by 60 dB.
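As an illustrative sketch (not part of the patent text), formula (1) and the RT60 reading can be implemented as follows; the linear fit over the -5 dB to -35 dB portion of the decay curve is an assumed convention (a T30-style reading), since the text does not specify how the 60 dB decrease is measured:

```python
import numpy as np

def rt60_from_impulse_response(h, fs):
    """RT60 via Schroeder backward integration of the squared impulse response."""
    # EDC(n) = sum_{k=n}^{Nh} h(k)^2, computed as a reversed cumulative sum.
    edc = np.cumsum(h[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0])
    # Assumed convention: fit a line on the -5 dB..-35 dB portion of the curve
    # and extrapolate to a 60 dB decay (the usual "T30" reading).
    idx = np.where((edc_db <= -5.0) & (edc_db >= -35.0))[0]
    slope, _ = np.polyfit(idx.astype(float), edc_db[idx], 1)  # dB per sample
    return (-60.0 / slope) / fs  # seconds

# Synthetic impulse-response envelope with a known RT60 of 0.5 s.
fs = 8000
t60_true = 0.5
n = np.arange(int(2 * t60_true * fs))
h = np.exp(-3.0 * np.log(10) * n / (fs * t60_true))  # 60 dB energy decay at t60
print(rt60_from_impulse_response(h, fs))  # close to 0.5
```

On this pure exponential decay the fit recovers the designed RT60 almost exactly; on measured responses the usable fit range depends on the noise floor.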
  • Although the reverberation time RT60 is the most commonly used, another reverberation time characteristic of the environment 7 could be estimated.
  • The reverberation level is most commonly represented by the clarity index:
  • C_τ = 10 log10( Σ_{n=0}^{N_τ} h²(n) / Σ_{n=N_τ+1}^{N_h} h²(n) ) dB,  (2)
  • or by the definition index:
  • D_τ = 10 log10( Σ_{n=0}^{N_τ} h²(n) / Σ_{n=0}^{N_h} h²(n) ) dB,  (3)
  • where:
      • N_τ is the number of samples at constant time step corresponding to the time τ, generally between 0.1 ms and 1 s;
      • n is a temporal index between 1 and N_τ, representative of the number of samples of constant time step;
      • h(n) is the impulse response of the environment 7.
  • These indexes were described in particular by P. A. Naylor and N. D. Gaubitch (Speech Dereverberation, Springer, 2010).
  • The two most commonly used values of τ are 50 ms and 80 ms, in particular 50 ms (the C50 and D50 indexes), but other lengths are possible; more generally, other indexes reflecting the ratio of direct to reverberated sound could be estimated in the method according to the invention, implemented for example by the aforementioned central electronic unit 8.
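Formulas (2) and (3) translate directly into code. The following sketch, with NumPy and a synthetic exponential impulse-response envelope (an assumption for illustration), computes C50 and D50:

```python
import numpy as np

def clarity_and_definition(h, fs, tau=0.050):
    """Clarity C_tau (formula (2)) and definition D_tau (formula (3))."""
    n_tau = int(round(tau * fs))      # N_tau: last sample of the "early" part
    e = h ** 2
    early = e[: n_tau + 1].sum()      # direct + early energy, n = 0..N_tau
    late = e[n_tau + 1 :].sum()       # reverberant energy, n = N_tau+1..Nh
    c_tau = 10.0 * np.log10(early / late)
    d_tau = 10.0 * np.log10(early / e.sum())
    return c_tau, d_tau

# Exponentially decaying envelope with RT60 = 0.5 s: the energy has dropped
# about 6 dB after 50 ms, so C50 and D50 take moderate values.
fs = 8000
n = np.arange(fs)  # 1 s of impulse response
h = np.exp(-3.0 * np.log(10) * n / (fs * 0.5))
c50, d50 = clarity_and_definition(h, fs)
```

Note that C_τ and D_τ carry the same information in different forms: C_τ compares early to late energy, D_τ compares early to total energy.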
  • This method comprises the following steps:
  • (a) an acoustic signal measurement step;
  • (b) an observation step during which an acoustic energy decay rate distribution is determined from acoustic signals measured in step (a);
  • (c) an estimation step during which a characteristic reverberation time and a characteristic reverberation level of sound in the environment 7 are estimated by regression from the acoustic energy decay rate distribution determined in step (b).
  • (a) Measurement Step:
  • During this step, the microphone 2 captures “blind” (meaning without prior knowledge of the emitted signals) an acoustic signal broadcast in the environment 7, for example while the speaker 3 talks. The signal is sampled and stored in the processor 8 or an attached memory (not shown).
  • (b) Observation Step:
  • During this step, an acoustic energy decay rate distribution is determined from the acoustic signal measured in step (a).
  • To do that, the reverberated signal energy envelope d_x(n) is determined as described in particular by Wen et al. (J. Y. C. Wen, E. A. P. Habets, and P. A. Naylor, Blind estimation of reverberation time based on the distribution of signal decay rates, in Proc. ICASSP 2008, IEEE International Conference on Acoustics, Speech and Signal Processing, pages 329-332, March 2008).
  • By working on frames of N_ω signal samples, separated by hops of R samples, the total energy of frame m can be calculated with the formula:

  • E_m = Σ_{i=0}^{N_ω−1} d_x(mR + i)  (4)
  • The energy decay rate can then be estimated by calculating the logarithmic ratio of two successive frame energies:
  • ρ(m) = log(E_m / E_{m−1}).  (5)
  • In fact, the energy envelope dx(n) can be expressed by the formula:
  • d_x(n) = (e^{λ_h n} − e^{λ_s n}) / (λ_h − λ_s) if λ_h ≠ λ_s, and d_x(n) = n e^{λ_h n} if λ_h = λ_s  (6)
  • where λ_s and λ_h are respectively the energy decay rates of the emitted anechoic signal and of the environment 7 (the captured signal is the convolution of the emitted anechoic signal (speech) with the impulse response of the environment between the speaker 3 and the microphone 2; n is the previously defined temporal index).
  • Since the sum is dominated by the exponential term corresponding to the largest value of λ, the energy decay rate of the reverberated signal λx can be approximated by:

  • λ_x = max[λ_h, λ_s],  (7)
  • which justifies the formula (5) above.
  • The calculation of ρ(m) can typically be done on a number of frames M of at least 2000, corresponding to at least 1 min of signal depending on the selected analysis parameters. The frames can have an individual length of 10 to 100 ms, in particular of order 32 ms. The frames can mutually overlap, for example with an overlap rate of order 50% between successive frames.
  • The result is thus a set of values of the energy decay rate ρ(m), which follow some statistical distribution (number of occurrences, or probability of occurrence, as a function of the energy decay rate ρ(m), as discussed for example in the article by Wen et al. cited above).
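The frame-energy and decay-rate computation of formulas (4) and (5) can be sketched as follows; the 16 kHz sampling rate and the toy envelope are assumptions for illustration, while the 32 ms frames with 50% overlap follow the text:

```python
import numpy as np

def decay_rates(d_x, n_w, hop):
    """rho(m) = log(E_m / E_{m-1}) with E_m the frame energy (formulas (4)-(5))."""
    n_frames = 1 + (len(d_x) - n_w) // hop
    # E_m = sum_{i=0}^{N_w - 1} d_x(m*R + i), with R the hop size.
    energies = np.array([d_x[m * hop : m * hop + n_w].sum() for m in range(n_frames)])
    return np.log(energies[1:] / energies[:-1])

fs = 16000                # assumed sampling rate
n_w = int(0.032 * fs)     # 32 ms frames, as suggested in the text
hop = n_w // 2            # 50% overlap between successive frames
d_x = np.exp(-5.0 * np.arange(fs) / fs)  # toy, purely decaying energy envelope
rho = decay_rates(d_x, n_w, hop)
# For a pure exponential envelope every rho(m) equals -5 * hop / fs.
```

On real speech the values of ρ(m) spread out into the distribution discussed above, rather than collapsing to a single constant.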
  • The characteristic function of the energy decay rate distribution is next determined by the following formula (see Audrey Feuerverger and Roman A. Mureika [The empirical characteristic function and its applications, Ann. Statist., 5(1):88-97, 01 1977]):

  • ϕ_X(f) = ∫ e^{ifx} dF_X(x) = E[e^{ifX}]  (8)
  • where X here represents the aforementioned energy decay rate ρ(m) estimated for various values of m (formula (5)), F_X represents the cumulative distribution function of X, and f is a dimensionless variable generally called the angular frequency.
  • The characteristic function can be calculated for angular frequencies f ranging for example from 0 to 0.4, by increments of 0.001.
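The characteristic function of formula (8) can be estimated empirically by replacing the expectation with an average over the observed decay rates; in the sketch below the frequency grid follows the text, while the stand-in Gaussian samples are an assumption for illustration:

```python
import numpy as np

def empirical_charfunc(x, freqs):
    """phi_X(f) = E[exp(i f X)] (formula (8)), estimated as a sample average."""
    # exp(i f x) for every (frequency, sample) pair, averaged over the samples.
    return np.exp(1j * np.outer(freqs, x)).mean(axis=1)

freqs = np.arange(0.0, 0.4001, 0.001)   # f from 0 to 0.4 in steps of 0.001
rng = np.random.default_rng(0)
rho = rng.normal(-0.1, 2.0, size=5000)  # stand-in decay-rate samples
phi = empirical_charfunc(rho, freqs)    # complex vector; phi[0] = 1 by construction
```

By definition |ϕ_X(f)| ≤ 1 and ϕ_X(0) = 1, which is a useful sanity check on any implementation.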
  • (c) Estimation step:
  • Start with the characteristic function Φ_ρ(m)(f), calculated for p/2 frequencies f (where p is an even integer), where the frequency range of f and its sampling are chosen such that |Φ_ρ(m)(f)| preferably lies between 0.1 and 1.
  • Typically, p can lie between 256 and 2048.
  • Because the characteristic function is complex-valued, it can be represented by a vector of ℝ^p, constituting the random input vector x of the estimator used. The random output vector y of the estimator, belonging to ℝ², has the two estimated parameters as its components, for example (RT60, C50) or (RT60, D50).
  • The estimator used can advantageously be a kernel function estimator, for example a Nadaraya-Watson estimator. Such an estimator has the advantage of simultaneously determining the characteristic reverberation time and the characteristic reverberation level.
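A Nadaraya-Watson estimator is a kernel-weighted average of the reference outputs. The sketch below assumes a Gaussian kernel and entirely illustrative calibration data (the vector dimensions, kernel width and reference values are not taken from the patent):

```python
import numpy as np

def nadaraya_watson(x, x_ref, y_ref, sigma):
    """Kernel-weighted average of reference outputs y_i, e.g. (RT60, C50) pairs."""
    d2 = ((x_ref - x) ** 2).sum(axis=1)      # ||x - x_i||^2 per reference vector
    w = np.exp(-d2 / (2.0 * sigma))          # Gaussian kernel weights (assumed)
    return (w[:, None] * y_ref).sum(axis=0) / w.sum()

# Toy calibration set: N = 200 reference input vectors in R^8, each with an
# attached (RT60, C50)-like output pair.
rng = np.random.default_rng(1)
x_ref = rng.normal(size=(200, 8))
y_ref = np.column_stack([0.3 + 0.1 * x_ref[:, 0] ** 2, 5.0 - x_ref[:, 1]])
# Querying at a calibration point returns (approximately) its own labels.
y_hat = nadaraya_watson(x_ref[0], x_ref, y_ref, sigma=0.05)
```

Because the output is a weighted average, the estimate always lies within the range of the reference values, and both parameters come out of a single evaluation, which is the simultaneity advantage mentioned above.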
  • The estimator in question can be determined in advance in an initial calibration phase, where at least one initial step of reference signal determination (a′) and at least one initial step of observation (b′) is implemented.
  • During the initial step of reference signal determination a plurality of reference acoustic signals, and corresponding reference characteristic reverberation times and reference characteristic reverberation levels are determined.
  • During the initial observation step, the acoustic energy decay rate distribution and the reference characteristic function are determined for each reference acoustic signal in a way identical or similar to the aforementioned observation step (b).
  • The N reference acoustic signals are generally voice signals and correspond to N different scenarios (e.g. different speakers, different positions, different environments 7). N can be several hundred or even several thousand.
  • The initial reference signal determination step can be done:
      • with new real measurements done for example with an electronic device 1 of a fixed model (in this case, the characteristic reverberation time and the characteristic reverberation level can also be measured);
      • and/or with synthetic acoustic signals.
  • In the case of real measurements, these will not generally be done in the specific environment 7 where the electronic device 1 will be used, even though this scenario can be considered.
  • The aforementioned synthetic acoustic signals can be calculated by convolution of prerecorded impulse responses with anechoic speech signals, also prerecorded, coming from different speakers. The prerecorded impulse responses can come, for example, from freely accessible impulse response databases such as: Aachen Impulse Response (http://www.openairlib.net/auralizationdb), MARDY (Wen et al., Evaluation of speech dereverberation algorithms using the Mardy database, September IWAENC 2006, Paris), QueenMary (R. Stewart and M. Sandler, Database of omnidirectional and b-format room impulse responses, In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pages 165-168, March 2010), for example with reverberation times RT60 ranging from 0.3 s to 8 s and clarity indexes C50 from −10 dB to 25 dB. The anechoic speech signals can be recorded from various speakers, for example of various ages and genders, with recording lengths of a few minutes, for example of the order of five minutes.
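Building a synthetic reverberant reference signal from an impulse response and an anechoic recording amounts to a convolution. The sketch below illustrates that step only, under stated assumptions: the impulse response is a synthetic exponentially decaying noise tail (not one of the database responses cited above), the "speech" is stand-in noise, and the 16 kHz sampling rate is an assumption.

```python
import numpy as np

fs = 16000                         # assumed sampling rate
rng = np.random.default_rng(2)

# Toy room impulse response: exponentially decaying noise tail whose
# energy envelope falls by 60 dB over rt60 seconds.
rt60 = 0.5
t = np.arange(int(fs * rt60)) / fs
rir = rng.standard_normal(len(t)) * 10.0 ** (-3.0 * t / rt60)

# Stand-in for a prerecorded anechoic speech excerpt (1 s of noise here).
dry = rng.standard_normal(fs)

# The synthetic reverberant signal is the convolution of the two.
reverberant = np.convolve(dry, rir)
print(len(reverberant))  # len(dry) + len(rir) - 1 = 23999
```

In practice the same convolution would be applied to each database impulse response and each anechoic recording to build the N calibration scenarios.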
  • The energy decay rate distributions can, for example, be calculated on 10 to 100 ms frames, in particular of the order of 32 ms. The frames can mutually overlap, for example with an overlap rate of the order of 50% between successive frames. The characteristic functions can be calculated for angular frequencies f ranging, for example, from 0 to 0.4, in increments of 0.001.
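The framing and decay-rate computation just described can be sketched as follows, assuming a 16 kHz sampling rate (any rate works), 32 ms frames and 50% overlap as in the text; `decay_rates` is a hypothetical helper name.

```python
import numpy as np

def decay_rates(signal, fs=16000, frame_ms=32.0, overlap=0.5):
    """Energy decay rates rho(m) = log(E_m / E_{m-1}) over overlapping frames."""
    frame_len = int(fs * frame_ms / 1000.0)   # 512 samples at 16 kHz
    hop = int(frame_len * (1.0 - overlap))    # 50% overlap -> hop of 256
    n_frames = 1 + (len(signal) - frame_len) // hop
    energies = np.array([
        np.sum(signal[m * hop : m * hop + frame_len] ** 2)
        for m in range(n_frames)
    ])
    eps = np.finfo(float).tiny                # guard against log(0)
    return np.log((energies[1:] + eps) / (energies[:-1] + eps))

# Toy reverberant tail: 1 s of exponentially decaying noise at 16 kHz
rng = np.random.default_rng(1)
t = np.arange(16000) / 16000.0
sig = rng.standard_normal(16000) * np.exp(-3.0 * t)
rho = decay_rates(sig)
print(rho.shape)  # (60,)
```

For a decaying signal the resulting rates are negative on average; their empirical distribution is what feeds the characteristic-function step above.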
  • In that way, N realizations of the aforementioned vectors x and y are obtained, and the Nadaraya-Watson estimator can then be determined by the formula:

$$\hat{f}(x)=\frac{\sum_{i=1}^{N} y_i\,K_\lambda(x,x_i)}{\sum_{i=1}^{N} K_\lambda(x,x_i)}\qquad(9)$$

  • where:
      • xi, yi, i = 1 to N, are the N realizations of the vectors x, y used for the calibration step;
      • Kλ(x, xi) is a kernel function with window λ (where λ is a constant, also called the smoothing parameter);
      • x is the unknown input vector (the measurement made at measurement step (a)), used to estimate the vector y with the formula y = f̂(x).
  • The kernel function Kλ(x, xi) is a function of x and xi such as defined in particular by Scholkopf et al. (B. Scholkopf and A. J. Smola, Learning with Kernels, MIT Press, Cambridge, Mass., 2001).
  • The Gaussian kernel can in particular be used, for example with a window of λ=5·10−4 (nonlimiting example):
$$K_\lambda(x,x_i)=\frac{1}{\lambda}\,e^{-\frac{\lVert x-x_i\rVert^2}{2\lambda}}$$
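Formula (9) with the Gaussian kernel can be sketched as below. This is a toy illustration, not the calibrated estimator of the invention: the training inputs are one-dimensional rather than the ℝ^p characteristic-function vectors, the outputs merely stand in for (RT60, C50) pairs, and the window λ is chosen to suit the toy scale rather than the 5·10⁻⁴ quoted above.

```python
import numpy as np

def nadaraya_watson(x, X_train, Y_train, lam):
    """Kernel regression estimate
    f_hat(x) = sum_i y_i K(x, x_i) / sum_i K(x, x_i)
    with the Gaussian kernel K(x, x_i) = (1/lam) exp(-||x - x_i||^2 / (2 lam))."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # ||x - x_i||^2 for every i
    k = np.exp(-d2 / (2.0 * lam)) / lam       # kernel weights (the 1/lam cancels)
    return (Y_train * k[:, None]).sum(axis=0) / k.sum()

# Toy calibration set: N = 500 scalar inputs, outputs in R^2
# standing in for (RT60, C50) pairs.
rng = np.random.default_rng(3)
X_train = rng.uniform(0.0, 1.0, size=(500, 1))
Y_train = np.column_stack([2.0 * X_train[:, 0], 10.0 * X_train[:, 0] - 5.0])

y_hat = nadaraya_watson(np.array([0.5]), X_train, Y_train, lam=0.01)
print(y_hat)  # close to [1.0, 0.0] since the outputs are linear in the input
```

Note that both output components are estimated by the same weighted average, which is why the kernel estimator recovers the reverberation time and the reverberation level simultaneously.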
  • The tests performed show that the method of the invention is more precise than prior-art methods for determining the reverberation time, and that it further makes it possible to determine the reverberation level at the same time as the reverberation time, which is a significant improvement.

Claims (9)

1. A method for estimating the acoustic reverberation in an environment, comprising the following steps:
(a) a measurement step in which at least one acoustic signal emitted in the environment is captured;
(b) an observation step during which an acoustic energy decay rate distribution is determined from the acoustic signal measured in step (a) and the characteristic function of the acoustic energy decay rate distribution is determined;
(c) an estimation step during which a characteristic reverberation time and a characteristic reverberation level of the sound in the environment thereof are estimated by regression from said characteristic function determined in step (b), where the regression is done with reference to:
reference characteristic functions representative respectively of several acoustic energy decay rate distributions;
reference characteristic reverberation times corresponding to said reference characteristic functions; and
reference characteristic reverberation levels corresponding to said reference characteristic functions.
2. The method according to claim 1 wherein during the estimation step (c), a kernel function estimator is used and the characteristic reverberation time and the characteristic reverberation level are determined simultaneously.
3. The method according to claim 2 wherein during the estimation step (c), a Nadaraya-Watson estimator is used.
4. The method according to claim 1 wherein during the estimation step (c), the characteristic reverberation level of the sound in the environment is chosen among the clarity index Cτ and the definition index Dτ.
5. The method according to claim 1 wherein during the observation step (b), the energy decay rates are determined by calculating the energy Em of the acoustic signal on successive signal frames m, and then calculating a logarithmic ratio between the energy of two successive frames:
$$\rho(m)=\log\!\left(\frac{E_m}{E_{m-1}}\right)$$
6. The method according to claim 1 further comprising a preliminary calibration phase comprising the following steps:
(a′) at least one initial reference signal determination step in which a plurality of reference acoustic signals corresponding to said reference characteristic reverberation times and said reference characteristic reverberation levels are determined;
(b′) at least one initial observation step during which an acoustic energy decay rate distribution and the reference characteristic function are determined for each reference acoustic signal.
7. The method according to claim 6 wherein during said reference signal determination step, at least one part of the reference acoustic signals and the reference characteristic reverberation times and characteristic reverberation levels corresponding to said reference acoustic signals are determined by calculation from a predetermined set of impulse responses.
8. The method according to claim 6 wherein during said reference signal determination step, at least one part of the reference acoustic signals, the characteristic reverberation times and the reference characteristic reverberation levels corresponding to said reference acoustic signals are determined by measurement.
9. A device for estimating the acoustic reverberations in an environment comprising:
at least one measurement device for capturing at least one acoustic signal emitted in the environment;
at least one processor adapted to:
determine an acoustic energy decay rate distribution from the acoustic signal captured by the measurement device, and determine the characteristic function of the acoustic energy decay rate distribution;
estimate, by regression, a characteristic reverberation time and a characteristic reverberation level of the sound in the environment from data representative of the acoustic energy decay rate distribution, where the regression is done with reference to:
reference characteristic functions representative respectively of several acoustic energy decay rate distributions;
reference characteristic reverberation times corresponding to said reference characteristic functions; and
reference characteristic reverberation levels corresponding to said reference characteristic functions.
US15/778,146 2015-11-26 2016-11-21 Method and device for estimating acoustic reverberation Active US10313809B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR1561404A FR3044509B1 (en) 2015-11-26 2015-11-26 METHOD AND DEVICE FOR ESTIMATING ACOUSTIC REVERBERATION
FR1561404 2015-11-26
PCT/FR2016/053034 WO2017089688A1 (en) 2015-11-26 2016-11-21 Method and device for estimating sound reverberation

Publications (2)

Publication Number Publication Date
US20180359582A1 true US20180359582A1 (en) 2018-12-13
US10313809B2 US10313809B2 (en) 2019-06-04

Family

ID=55236682

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/778,146 Active US10313809B2 (en) 2015-11-26 2016-11-21 Method and device for estimating acoustic reverberation

Country Status (4)

Country Link
US (1) US10313809B2 (en)
EP (1) EP3381204B1 (en)
FR (1) FR3044509B1 (en)
WO (1) WO2017089688A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11450309B2 (en) * 2017-11-07 2022-09-20 Beijing Jingdong Shangke Information Technology Co., Ltd. Information processing method and system, computer system and computer readable medium
US11540072B2 (en) * 2019-10-25 2022-12-27 Magic Leap, Inc. Reverberation fingerprint estimation
US11800174B2 (en) 2018-02-15 2023-10-24 Magic Leap, Inc. Mixed reality virtual reverberation
US11895483B2 (en) 2017-10-17 2024-02-06 Magic Leap, Inc. Mixed reality spatial audio
US12308011B2 (en) 2018-06-14 2025-05-20 Magic Leap, Inc. Reverberation gain normalization

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040213415A1 (en) * 2003-04-28 2004-10-28 Ratnam Rama Determining reverberation time
US7813499B2 (en) 2005-03-31 2010-10-12 Microsoft Corporation System and process for regression-based residual acoustic echo suppression
EP1885154B1 (en) 2006-08-01 2013-07-03 Nuance Communications, Inc. Dereverberation of microphone signals
DE102007031677B4 (en) * 2007-07-06 2010-05-20 Sda Software Design Ahnert Gmbh Method and apparatus for determining a room acoustic impulse response in the time domain
EP2058804B1 (en) 2007-10-31 2016-12-14 Nuance Communications, Inc. Method for dereverberation of an acoustic signal and system thereof
US9407992B2 (en) 2012-12-14 2016-08-02 Conexant Systems, Inc. Estimation of reverberation decay related applications
US10869140B2 (en) * 2014-09-26 2020-12-15 Med-El Elektromedizinische Geraete Gmbh Determination of room reverberation for signal enhancement
US9607627B2 (en) * 2015-02-05 2017-03-28 Adobe Systems Incorporated Sound enhancement through deverberation


Also Published As

Publication number Publication date
FR3044509B1 (en) 2017-12-15
US10313809B2 (en) 2019-06-04
EP3381204A1 (en) 2018-10-03
EP3381204B1 (en) 2019-11-13
WO2017089688A1 (en) 2017-06-01
FR3044509A1 (en) 2017-06-02


Legal Events

Date Code Title Description
AS Assignment

Owner name: INVOXIA, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BELHOMME, ARTHUR;GRENIER, YVES;BADEAU, ROLAND;AND OTHERS;SIGNING DATES FROM 20161219 TO 20161222;REEL/FRAME:045875/0719
