EP1247425B1 - Method for operating a hearing-aid and a hearing aid - Google Patents

Method for operating a hearing-aid and a hearing aid

Info

Publication number
EP1247425B1
EP1247425B1 (application number EP01900013A)
Authority
EP
European Patent Office
Prior art keywords
unit
features
hearing device
signal
auditory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP01900013A
Other languages
German (de)
French (fr)
Other versions
EP1247425A2 (en)
Inventor
Silvia Allegro
Michael BÜCHLER
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonova Holding AG
Original Assignee
Phonak AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Phonak AG filed Critical Phonak AG
Publication of EP1247425A2 publication Critical patent/EP1247425A2/en
Application granted granted Critical
Publication of EP1247425B1 publication Critical patent/EP1247425B1/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40Arrangements for obtaining a desired directivity characteristic
    • H04R25/402Arrangements for obtaining a desired directivity characteristic using constructional means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/41Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest

Definitions

  • The present invention relates to a method for operating a hearing aid, and to a hearing aid.
  • The choice of hearing program can be made either via a remote control or via a switch on the hearing aid itself.
  • Switching between different programs, however, is annoying or difficult, if not impossible, for many users.
  • Which program offers the optimum comfort and the best speech intelligibility at which point in time is not always easy to determine, even for experienced hearing aid wearers.
  • An automatic recognition of the acoustic environment situation, and an associated automatic switching of the hearing program in the hearing aid, is therefore desirable.
  • The known methods for noise classification, consisting of feature extraction and pattern recognition, have the disadvantage that, although a clear and robust identification of speech signals is basically possible, several different acoustic environment situations cannot be classified, or only insufficiently. With the known methods it is thus possible to distinguish pure speech signals from "non-speech", i.e. from all other acoustic environment situations. However, this is not sufficient for selecting the optimal hearing program for an instantaneous acoustic environment situation. As a consequence, either the number of possible hearing programs is limited to the two automatically recognizable acoustic environment situations, or the hearing device wearer must recognize the uncovered acoustic environment situations himself and activate the associated hearing program by hand.
  • In principle, pattern identification methods known for noise classification can be used.
  • In particular, distance estimators, Bayes classifiers, fuzzy logic systems or neural networks are suitable as pattern recognizers.
  • Regarding neural networks, see the standard work of Christopher M. Bishop entitled "Neural Networks for Pattern Recognition" (Oxford University Press, 1995).
  • See also Ostendorf et al., "Classification of Acoustic Signals Based on the Analysis of Modulation Spectra for Use in Digital Hearing Aids" (Journal of Audiology, 1998, pages 148 to 150), and F. Feldbusch, "Noise Detection Using Neural Networks" (Journal of Audiology, 1998, pages 30 to 36).
  • the present invention is therefore based on the object of initially providing a method for operating a hearing aid, which is much more robust and accurate compared to the known methods.
  • The invention is based on an extraction of signal features and a subsequent separation of different noise sources as well as an identification of different sounds, with Hidden Markov models being used in the identification phase to detect a current environment situation or noises and/or a speaker or his words. For the first time, dynamic properties of the classes of interest are thereby taken into account, which achieves a significant improvement in the accuracy of the inventive method in all applications, i.e. in the detection of the current environment situation or of noises as well as in the detection of a speaker and of single words.
  • In the extraction phase, auditory-based features are taken into account instead of, or in addition to, technically-based features.
  • These auditory-based features are preferably determined using methods of Auditory Scene Analysis (ASA).
  • The invention is explained in more detail below, by way of example, with reference to a drawing.
  • The single figure shows a block diagram of a hearing aid in which the inventive method is realized.
  • In the following, the term "hearing aid" covers both hearing aids proper, which are used to correct a person's impaired hearing, and all other acoustic communication systems, such as radio devices.
  • The hearing aid 1 consists, in a known manner, of two kinds of electro-acoustic transducers 2a, 2b and 6, namely one or more microphones 2a, 2b and a loudspeaker 6, also referred to as the receiver.
  • An actual main component of the hearing aid 1 is the transmission unit designated 4, in which the signal modifications - in the case of a hearing aid proper, adapted to the user of the hearing aid 1 - are carried out.
  • The operations carried out in the transmission unit 4 are not only dependent on the type of a predetermined target function of the hearing aid 1, but are chosen in particular depending on the current acoustic environment situation.
  • For this reason, a signal analysis unit 7 and a signal identification unit 8 are provided in the hearing aid 1.
  • If the hearing device 1 is implemented in digital technology, one or more analog/digital converters 3a, 3b are provided between the microphones 2a, 2b and the transmission unit 4, and a digital/analog converter 5 between the transmission unit 4 and the receiver 6.
  • Although the realization in digital technology is the preferred embodiment of the present invention, it is basically also conceivable that all components are realized in analog technology. In that case, the converters 3a, 3b and 5 are of course omitted.
  • The signal analysis unit 7 is supplied with the same input signal as the transmission unit 4.
  • The signal identification unit 8, which is connected to the output of the signal analysis unit 7, is connected to the transmission unit 4 and to a control unit 9.
  • Denoted by 10 is a training unit, with the aid of which the parameters required for the classification in the signal identification unit 8 are determined in an "off-line" procedure.
  • The settings of the transmission unit 4 and the control unit 9, determined by the signal analysis unit 7 and the signal identification unit 8, can be overridden by the user by means of a user input unit 11.
  • A preferred embodiment of the inventive method is based on extracting characteristic features from an acoustic signal in an extraction phase, wherein auditory-based features are used instead of, or in addition to, technically-based features such as the previously mentioned zero-crossing rates, temporal level fluctuations, different modulation frequencies, the level itself, the spectral centroid, the amplitude distribution, etc.
  • These auditory-based features are determined using Auditory Scene Analysis (ASA) and include in particular the loudness, the spectral form (timbre), the harmonic structure (pitch), common onsets and offsets, coherent amplitude modulations, coherent frequency modulations, coherent frequency transitions, binaural effects, etc.
  • As an example, the tonality of the acoustic signal can be characterized by analyzing its harmonic structure, which is particularly suitable for identifying tonal signals such as speech and music.
  • The principles of Gestalt theory, which examines qualitative properties such as continuity, proximity, similarity, common fate, closure, good continuation and others, are applied to the auditory-based and possibly the technically-based features for the formation of auditory objects.
  • The grouping - like the feature extraction in the extraction phase - can be carried out either context-independently, i.e. without additional knowledge (so-called "primitive" grouping), or context-dependently in the sense of human auditory perception, using additional information or hypotheses about the signal content (so-called "schema-based" grouping).
  • The context-dependent grouping is thus adapted to the respective acoustic situation.
  • The advantage of using these grouping methods is that the characteristics of the input signal can be further differentiated. In particular, signal components originating from different sound sources become identifiable. This allows the extracted features to be assigned to individual noise sources, thus providing additional knowledge about the existing noise sources - and therefore about the current environment situation.
  • The second aspect of the method according to the invention described here relates to the pattern recognition or signal identification carried out in the identification phase.
  • The method of Hidden Markov models is used in the signal identification unit 8 for the automatic classification of the acoustic environment situation.
  • In this way, temporal changes of the calculated features can be used for classification. Consequently, dynamic and not only static properties of the environment situations or noise classes to be recognized are taken into account.
  • The mentioned second aspect of the method, i.e. the use of Hidden Markov models, is excellently suited for determining a current acoustic environment situation or noises.
  • The recognition of a speaker as well as the detection of single words or phrases can likewise be accomplished with extremely good results, even on its own, i.e. without the consideration of auditory-based features in the extraction phase or without the use of the ASA (Auditory Scene Analysis) methods that are used in a further embodiment for determining characteristic features.
  • The output signal of the signal identification unit 8 thus contains information about the type of the acoustic environment (acoustic environment situation). This information is applied to the transmission unit 4, in which the program most suitable for the detected environment situation, or the most suitable parameter set for the transmission, is selected. At the same time, the information ascertained in the signal identification unit 8 is applied to the control unit 9 for further functions, where, depending on the situation, any function - e.g. an acoustic signal - can be triggered.
  • If Hidden Markov models are used in the identification phase, a relatively complex procedure for determining the parameters necessary for the classification becomes necessary. This parameter determination is therefore preferably carried out in an "off-line" procedure, separately for each class.
  • The actual identification of different acoustic environment situations, by contrast, requires only little storage space and little computing capacity. It is therefore proposed to provide a training unit 10, which has sufficient computing power for the parameter determination and which can be connected to the hearing aid 1 by suitable means for the purpose of data transfer. Such means may, for example, be a simple wired connection with appropriate plugs.
  • With the method according to the invention it is thus possible to select the most suitable of a multiplicity of different setting options and automatically retrievable actions without the user of the device having to take action himself.
  • The comfort for the user is thus significantly improved, because the right program or the corresponding function is selected automatically in the hearing aid 1 immediately after a new acoustic environment situation has been detected.
  • An input unit 11 is provided with which automatic responses or the automatic program selection can be overridden.
  • Such an input unit 11 may, for example, be a switch on the hearing aid 1 or a remote control operated by the user.
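As a rough illustration of the control flow described in the points above (feature extraction in the signal analysis unit, classification in the signal identification unit, automatic program selection in the transmission unit, and a user override via the input unit), the following Python sketch may help. All function names, the program table and the toy decision rule are assumptions for illustration, not the patent's implementation:

```python
def process_block(samples, extract, identify, programs, user_override=None):
    """One processing cycle of the scheme described above.
    `extract` and `identify` stand in for units 7 and 8; `programs` maps
    environment classes to hearing-program names (all names assumed here)."""
    if user_override is not None:
        return user_override          # input unit 11 wins over the automation
    features = extract(samples)       # signal analysis unit 7
    environment = identify(features)  # signal identification unit 8 (HMM-based)
    return programs[environment]      # selection applied in transmission unit 4

programs = {"speech": "speech-in-quiet", "noise": "comfort"}
picked = process_block(
    samples=[0.0] * 256,
    extract=lambda s: {"zcr": 0.1},   # toy feature vector
    identify=lambda f: "speech" if f["zcr"] < 0.3 else "noise",
    programs=programs,
)
print(picked)  # -> speech-in-quiet
```

In a real device, `extract` and `identify` would of course run continuously on successive audio blocks rather than once on a dummy buffer.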

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention relates to a method for operating a hearing aid (1). The method is characterised in that characteristic features are extracted from an acoustic signal which has been recorded using at least one microphone (2a, 2b), and that said characteristic features are processed in an identification phase using Hidden Markov models, in particular for determining a current acoustic environment situation or noises and/or for the recognition of a speaker and words. The invention also relates to such a hearing aid.

Description

The present invention relates to a method for operating a hearing aid, and to a hearing aid.

Today, modern hearing aids can be adapted to different acoustic environment situations with the aid of various hearing programs - typically two to at most three. The hearing aid is thus intended to offer the user an optimal benefit in every situation.

The choice of hearing program can be made either via a remote control or via a switch on the hearing aid itself. However, switching between different hearing programs is annoying or difficult, if not impossible, for many users. Which program offers the optimum comfort and the best speech intelligibility at which point in time is not always easy to determine, even for experienced hearing aid wearers. An automatic recognition of the acoustic environment situation, and an associated automatic switching of the hearing program in the hearing aid, is therefore desirable.

Various methods for the automatic classification of acoustic environment situations are currently known. In all of these methods, various features are extracted from the input signal, which in a hearing aid may originate from one or more microphones. Based on these features, a pattern recognizer applies an algorithm to decide to which acoustic environment situation the analyzed input signal belongs. The various known methods differ, on the one hand, in the features used to describe the acoustic environment situation (signal analysis) and, on the other hand, in the pattern recognizer used to classify those features (signal identification).

For feature extraction in audio signals, the article by J. M. Kates entitled "Classification of Background Noises for Hearing-Aid Applications" (1995, Journal of the Acoustical Society of America 97(1), pages 461 to 469) proposed analyzing the temporal level fluctuations and the spectrum. Furthermore, the European patent specification EP-B1-0 732 036 proposed an analysis of the amplitude histogram to achieve the same goal. Finally, feature extraction by analysis of different modulation frequencies has also been investigated and applied. In this regard, reference is made to the two articles by Ostendorf et al. entitled "Empirical Classification of Various Acoustic Signals and Speech by Means of a Modulation Frequency Analysis" (1997, DAGA 97, pages 608 to 609) and "Classification of Acoustic Signals Based on the Analysis of Modulation Spectra for Use in Digital Hearing Aids" (1998, DAGA 98, pages 402 to 403). A similar approach is also disclosed in an article by Edwards et al. entitled "Signal-processing algorithms for a new software-based, digital hearing device" (1998, The Hearing Journal 51, pages 44 to 52). Other possible features are the level itself or the zero-crossing rate, as described, for example, in H. L. Hirsch, "Statistical Signal Characterization" (Artech House, 1992). The features used so far for audio signal analysis are thus purely technically-based.
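A minimal sketch of how a few of the technically-based features named above (level, temporal level fluctuations, zero-crossing rate) might be computed per audio frame. The exact definitions used in the cited works are not reproduced here; these are common textbook formulas, assumed only for illustration:

```python
import math

def extract_features(frame, sample_rate=16000):
    """Compute illustrative technically-based features for one frame of
    audio samples (floats in [-1, 1])."""
    n = len(frame)
    # Level: RMS energy of the frame, expressed in dB relative to full scale.
    rms = math.sqrt(sum(x * x for x in frame) / n)
    level_db = 20 * math.log10(rms + 1e-12)
    # Zero-crossing rate: fraction of adjacent sample pairs with a sign change.
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)) / (n - 1)
    # Temporal level fluctuation: spread of sub-frame RMS values over the frame.
    k = max(1, n // 8)
    sub_rms = [math.sqrt(sum(x * x for x in frame[i:i + k]) / k)
               for i in range(0, n - k + 1, k)]
    mean = sum(sub_rms) / len(sub_rms)
    fluctuation = math.sqrt(sum((r - mean) ** 2 for r in sub_rms) / len(sub_rms))
    return {"level_db": level_db, "zcr": zcr, "fluctuation": fluctuation}

# A 1 kHz tone at 16 kHz crosses zero about every 8 samples (zcr near 0.125).
tone = [math.sin(2 * math.pi * 1000 * t / 16000) for t in range(512)]
print(extract_features(tone))
```

Such per-frame values would then be fed to the pattern recognizer discussed next.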

The known methods for noise classification, consisting of feature extraction and pattern recognition, have the disadvantage that, although a clear and robust identification of speech signals is basically possible, several different acoustic environment situations cannot be classified, or only insufficiently. With the known methods it is thus possible to distinguish pure speech signals from "non-speech", i.e. from all other acoustic environment situations. However, this is not sufficient for selecting the optimal hearing program for an instantaneous acoustic environment situation. As a consequence, either the number of possible hearing programs is limited to the two automatically recognizable acoustic environment situations, or the hearing device wearer must recognize the uncovered acoustic environment situations himself and activate the associated hearing program by hand.

In principle, pattern identification methods known for noise classification can be used. In particular, so-called distance estimators, Bayes classifiers, fuzzy logic systems or neural networks are suitable as pattern recognizers. Further information on the first two methods can be found in the publication "Pattern Classification and Scene Analysis" by Richard O. Duda and Peter E. Hart (John Wiley & Sons, 1973). Regarding neural networks, reference is made to the standard work of Christopher M. Bishop entitled "Neural Networks for Pattern Recognition" (Oxford University Press, 1995). Further reference is made to the following publications: Ostendorf et al., "Classification of Acoustic Signals Based on the Analysis of Modulation Spectra for Use in Digital Hearing Aids" (Journal of Audiology, 1998, pages 148 to 150); F. Feldbusch, "Noise Detection Using Neural Networks" (Journal of Audiology, 1998, pages 30 to 36); European patent application with the publication number EP-A1-0 814 636; PCT application with the publication number WO 01 76321 A; and US patent with the publication number US-5 604 812. However, all the mentioned pattern recognition methods have the disadvantage that they merely model static properties of the noise classes of interest.
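To make the criticized limitation concrete, the following is a minimal sketch of a purely static pattern recognizer of the distance-estimator kind: each class is reduced to the mean of its training feature vectors, and a frame is assigned to the class with the nearest mean. The feature values and class names are invented toy data, not taken from the cited works:

```python
def train_class_means(training_data):
    """training_data: {class_name: [feature_vector, ...]}.
    Each class is reduced to the mean of its feature vectors -
    a purely static description, as criticized in the text."""
    means = {}
    for name, vectors in training_data.items():
        dim = len(vectors[0])
        means[name] = [sum(v[i] for v in vectors) / len(vectors)
                       for i in range(dim)]
    return means

def classify(vector, means):
    """Assign the vector to the class with the nearest mean (Euclidean).
    The temporal order of successive frames plays no role at all here."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda name: dist2(vector, means[name]))

means = train_class_means({
    "speech": [[0.12, -20.0], [0.10, -22.0]],   # (zcr, level_db) - toy values
    "noise":  [[0.45, -35.0], [0.50, -33.0]],
})
print(classify([0.11, -21.0], means))  # -> speech
```

Because each frame is judged in isolation, two signals with identical long-term feature statistics but different temporal behavior are indistinguishable to such a recognizer.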

Furthermore, a speech recognition system in which spoken words are converted into written text is known from EP 0 881 625. There, the spoken words (utterances) are recorded under a wide variety of reproduction conditions and stored as patterns, which serve as the basis for recognition in speech recognition by means of Hidden Markov models. The reproduction conditions are limited to the reproduction of the same word by different speakers of different age, dialect, gender, pronunciation, etc.

The present invention is therefore based on the object of providing, first of all, a method for operating a hearing aid that is substantially more robust and accurate than the known methods.

This object is achieved by the measures specified in claim 1. Advantageous embodiments of the invention, as well as a hearing aid, are specified in further claims.

The invention is based on an extraction of signal features and a subsequent separation of different noise sources as well as an identification of different sounds, with Hidden Markov models being used in the identification phase to detect a current environment situation or noises and/or a speaker or his words. For the first time, dynamic properties of the classes of interest are thereby taken into account, which achieves a significant improvement in the accuracy of the method according to the invention in all fields of application, i.e. in the detection of the current environment situation or of noises as well as in the detection of a speaker and of single words.
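A rough sketch of how per-class Hidden Markov models can exploit such dynamic properties: one discrete HMM is assumed per noise class (all parameters below are invented toy values, not the patent's), and a sequence of quantized feature symbols is assigned to the class whose model yields the highest forward-algorithm likelihood. Note that both toy models emit the two symbols equally often on average, so a purely static classifier could not separate them - only their transition dynamics differ:

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM.
    Returns log P(obs | model); obs is a list of symbol indices."""
    n = len(pi)
    log_lik = 0.0
    alpha = []
    for t, o in enumerate(obs):
        if t == 0:
            alpha = [pi[i] * B[i][o] for i in range(n)]
        else:
            alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                     for j in range(n)]
        s = sum(alpha)
        log_lik += math.log(s)          # accumulate scaling factors
        alpha = [a / s for a in alpha]  # rescale to avoid underflow
    return log_lik

models = {
    # Toy two-state models: identical emissions, different dynamics.
    "modulated":  {"pi": [0.5, 0.5],
                   "A": [[0.1, 0.9], [0.9, 0.1]],
                   "B": [[0.9, 0.1], [0.1, 0.9]]},
    "stationary": {"pi": [0.5, 0.5],
                   "A": [[0.9, 0.1], [0.1, 0.9]],
                   "B": [[0.9, 0.1], [0.1, 0.9]]},
}

def classify_sequence(obs):
    """Pick the class whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(
        obs, models[name]["pi"], models[name]["A"], models[name]["B"]))

print(classify_sequence([0, 1, 0, 1, 0, 1, 0, 1]))  # -> modulated
print(classify_sequence([0, 0, 0, 0, 1, 1, 1, 1]))  # -> stationary
```

In practice the model parameters would be estimated off-line per class (e.g. with the Baum-Welch procedure), consistent with the off-line training unit described later in the text.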

In a further embodiment of the method according to the invention, auditory-based features are taken into account in the extraction phase instead of, or in addition to, technically-based features. These auditory-based features are preferably determined using methods of Auditory Scene Analysis (ASA).

In yet another embodiment of the method according to the invention, the features are grouped in the extraction phase, either context-independently or context-dependently, with the aid of the Gestalt principles.
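A sketch of what a context-independent ("primitive") grouping by the Gestalt principle of common fate might look like: frequency channels whose energy onsets coincide in time are collected into one tentative auditory object. The data representation (per-channel onset frame lists) and the tolerance value are assumptions for illustration only:

```python
def group_by_common_onset(channel_onsets, tolerance=2):
    """channel_onsets: {channel_id: [frame indices where energy rises]}.
    Channels whose onsets coincide within `tolerance` frames are grouped
    into one tentative auditory object (common fate)."""
    # Collect all (onset_frame, channel) events and sort them by time.
    events = sorted((t, ch) for ch, ts in channel_onsets.items() for t in ts)
    groups = []
    for t, ch in events:
        # Attach to an existing group if its reference onset is close enough.
        for group in groups:
            if abs(group["onset"] - t) <= tolerance:
                group["channels"].add(ch)
                break
        else:
            groups.append({"onset": t, "channels": {ch}})
    return groups

# Two sources: channels 0-1 start around frame 10, channels 2-3 around frame 40.
onsets = {0: [10], 1: [11], 2: [40], 3: [41]}
for g in group_by_common_onset(onsets):
    print(g["onset"], sorted(g["channels"]))  # -> 10 [0, 1] and 40 [2, 3]
```

A schema-based (context-dependent) grouping would additionally consult hypotheses about the signal content, e.g. expected harmonic patterns of speech, before merging channels.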

The invention is explained in more detail below, by way of example, with reference to a drawing. The single figure shows a block diagram of a hearing aid in which the method according to the invention is realized.

In the single figure, 1 denotes a hearing aid. In the following, the term "hearing aid" covers both hearing aids proper, which are used to correct a person's impaired hearing, and all other acoustic communication systems, such as radio devices.

The hearing aid 1 consists, in a known manner, of two kinds of electro-acoustic transducers 2a, 2b and 6, namely one or more microphones 2a, 2b and a loudspeaker 6, also referred to as the receiver. An actual main component of the hearing aid 1 is the transmission unit designated 4, in which the signal modifications - in the case of a hearing aid proper, adapted to the user of the hearing aid 1 - are carried out. The operations carried out in the transmission unit 4, however, are not only dependent on the type of a predetermined target function of the hearing aid 1, but are chosen in particular depending on the current acoustic environment situation. For this reason, hearing aids have already been offered in which the wearer can manually switch between different hearing programs that are adapted to particular acoustic environment situations. Likewise, hearing aids are known in which the detection of the acoustic environment situation is performed automatically. In this regard, reference is again made to the European patent specifications with the publication numbers EP-B1-0 732 036 and EP-A1-0 814 636, to the US patent with the publication number US-5 604 812, and to the brochure "Claro Autoselect" of the company Phonak hearing systems (28148(GB)/0300, 1999).

Besides the components mentioned so far (microphones 2a, 2b, transfer unit 4 and receiver 6), the hearing device 1 comprises a signal analyzing unit 7 and a signal identification unit 8. If the hearing device 1 is implemented in digital technology, one or several analog-to-digital converters 3a, 3b are provided between the microphones 2a, 2b and the transfer unit 4, and a digital-to-analog converter 5 between the transfer unit 4 and the receiver 6. Although an implementation in digital technology is the preferred embodiment of the present invention, it is in principle also conceivable that all components are implemented in analog technology, in which case the converters 3a, 3b and 5 are of course omitted.

The signal analyzing unit 7 receives the same input signal as the transfer unit 4. Finally, the signal identification unit 8, which is connected to the output of the signal analyzing unit 7, is connected to the transfer unit 4 and to a control unit 9.

Reference numeral 10 denotes a training unit, with the aid of which the parameters needed for the classification in the signal identification unit 8 are determined in an "off-line" operation.

The settings of the transfer unit 4 and of the control unit 9 determined by the signal analyzing unit 7 and the signal identification unit 8 can be overwritten by the user by means of a user input unit 11.

In the following, the method according to the invention is explained:

A preferred embodiment of the method according to the invention is based on extracting characteristic features from an acoustic signal during an extraction phase, whereby, instead of or in addition to technically based features (such as the zero-crossing rates, temporal level fluctuations, different modulation frequencies, the level, the spectral centroid or the amplitude distribution mentioned earlier), auditory-based features are also used. These auditory-based features are determined with the aid of Auditory Scene Analysis (ASA) and comprise in particular loudness, spectral shape (timbre), harmonic structure (pitch), common onsets and offsets, coherent amplitude modulations, coherent frequency modulations, coherent frequency transitions, binaural effects, etc. Explanations of Auditory Scene Analysis can be found, e.g., in A. Bregman, "Auditory Scene Analysis" (MIT Press, 1990) and W. A. Yost, "Fundamentals of Hearing - An Introduction" (Academic Press, 1977). Details on the individual auditory-based features can be found, among others, in W. A. Yost and S. Sheft, "Auditory Perception" (in "Human Psychophysics", edited by W. A. Yost, A. N. Popper and R. R. Fay, Springer 1993), W. M. Hartmann, "Pitch, periodicity, and auditory organization" (Journal of the Acoustical Society of America, 100 (6), pages 3491 to 3502, 1996), and D. K. Mellinger and B. M. Mont-Reynaud, "Scene Analysis" (in "Auditory Computation", edited by H. L. Hawkins, T. A. McMullen, A. N. Popper and R. R. Fay, Springer 1996).
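The technically based features mentioned above are typically computed frame by frame on the digitized microphone signal. The following sketch computes three of them; the frame length, the feature set and the naive DFT are illustrative choices for this sketch, not taken from the patent:

```python
import math

def frame_features(frame, fs):
    """Compute simple technical features of one signal frame:
    zero-crossing rate, RMS level and spectral centroid (in Hz)."""
    n = len(frame)
    # Zero-crossing rate: fraction of sign changes between adjacent samples.
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (n - 1)
    # RMS level as a simple measure of signal power.
    rms = math.sqrt(sum(x * x for x in frame) / n)
    # Magnitude spectrum via a naive DFT (adequate for short frames).
    mags = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(frame))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(frame))
        mags.append(math.hypot(re, im))
    # Spectral centroid: magnitude-weighted mean frequency.
    total = sum(mags)
    centroid = sum(k * fs / n * m for k, m in enumerate(mags)) / total if total else 0.0
    return zcr, rms, centroid

# A 1 kHz sine sampled at 8 kHz: the centroid should sit near 1000 Hz.
fs = 8000
tone = [math.sin(2 * math.pi * 1000 * i / fs) for i in range(64)]
zcr, rms, centroid = frame_features(tone, fs)
```

In a real device these features would of course be computed with an FFT on a running sequence of frames; the point here is only what kind of quantities the extraction phase produces.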

As an example of the use of auditory-based features in signal analysis, the tonality of the acoustic signal can be characterized by analyzing its harmonic structure, which is particularly suitable for the identification of tonal signals such as speech and music.
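One common way to quantify such tonality is the peak of the normalized autocorrelation within a plausible pitch range; this specific algorithm is an assumption for illustration, the patent does not prescribe one. The measure is close to 1 for strongly harmonic signals and small for noise-like ones:

```python
import math
import random

def pitch_strength(frame, fs, fmin=80.0, fmax=400.0):
    """Estimate harmonicity: peak of the normalized autocorrelation
    over lags corresponding to fundamentals between fmin and fmax.
    Returns (strength, estimated_f0); f0 is None if no peak is found."""
    n = len(frame)
    energy = sum(x * x for x in frame)
    if energy == 0:
        return 0.0, None
    best, best_lag = 0.0, None
    for lag in range(int(fs / fmax), int(fs / fmin) + 1):
        r = sum(frame[i] * frame[i + lag] for i in range(n - lag)) / energy
        if r > best:
            best, best_lag = r, lag
    f0 = fs / best_lag if best_lag else None
    return best, f0

fs = 8000
# A voiced-like 200 Hz tone versus uniform noise of the same length.
voiced = [math.sin(2 * math.pi * 200 * i / fs) for i in range(400)]
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(400)]
s_voiced, f0 = pitch_strength(voiced, fs)
s_noise, _ = pitch_strength(noise, fs)
```

A classifier can then use the strength value directly as a "tonality" feature, e.g. to separate speech and music from stationary noise.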

In a further embodiment of the method according to the invention, the features are additionally grouped in the signal analyzing unit 7 by means of Gestalt principles. Here, the principles of Gestalt theory, in which qualitative properties such as continuity, proximity, similarity, common fate, closure, good continuation and others are examined, are applied to the auditory-based and, if applicable, the technical features in order to form auditory objects. The grouping (as, incidentally, also the feature extraction in the extraction phase) can be performed either context-independently, i.e. without recourse to additional knowledge ("primitive" grouping), or context-dependently in the sense of human auditory perception, using additional information or hypotheses about the signal content ("schema-based" grouping). The context-dependent grouping is thus adapted to the respective acoustic situation. For detailed explanations of the principles of Gestalt theory and of grouping by means of Gestalt principles, reference is made to the following publications: "Wahrnehmungspsychologie" by E. B. Goldstein (Spektrum Akademischer Verlag, 1997), "Neuronale Grundlagen der Gestaltwahrnehmung" by A. K. Engel and W. Singer (Spektrum der Wissenschaft, 1998, pages 66-73), and "Auditory Scene Analysis" by A. Bregman (MIT Press, 1990).

The advantage of applying these grouping methods is that the features of the input signal can be differentiated further. In particular, signal parts originating from different sound sources become identifiable. This makes it possible to assign the extracted features to individual sound sources, whereby additional knowledge about the sound sources present, and thus about the momentary surrounding situation, is obtained.
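A primitive, context-independent grouping of this kind can be sketched as follows. The component representation, the restriction to the single cue of common onsets ("common fate") and the 20 ms tolerance are illustrative assumptions:

```python
def group_by_common_onset(components, tolerance=0.02):
    """Primitive grouping: spectral components whose onsets fall within
    `tolerance` seconds of each other are assigned to the same auditory
    object, following the Gestalt principle of common fate.

    `components` is a list of (frequency_hz, onset_s) tuples."""
    groups = []
    for freq, onset in sorted(components, key=lambda c: c[1]):
        for g in groups:
            if abs(g["onset"] - onset) <= tolerance:
                g["freqs"].append(freq)
                break
        else:
            # No existing object starts close enough: open a new one.
            groups.append({"onset": onset, "freqs": [freq]})
    return groups

# Two sources: a harmonic complex starting near t = 0.10 s and a second
# one starting near t = 0.50 s; partials sharing an onset are grouped.
comps = [(200, 0.10), (400, 0.11), (600, 0.10), (330, 0.50), (660, 0.51)]
groups = group_by_common_onset(comps)
```

A schema-based grouping would additionally bring in hypotheses about the signal content (e.g. expected harmonic spacing of a voice), which is exactly the context dependence described above.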

The second aspect of the method according to the invention described here concerns the pattern recognition or signal identification carried out in the identification phase. In the preferred embodiment of the method according to the invention, the method of Hidden Markov Models (HMM) is applied in the signal identification unit 8 for the automatic classification of the acoustic surrounding situation. In this way, temporal changes of the computed features can also be used for the classification. Consequently, dynamic and not only static properties of the surrounding situations or noise classes to be recognized can be taken into account. Also possible is the combination of HMMs with other classifiers, e.g. in a multi-stage recognition procedure, for the identification of the acoustic surroundings.
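At identification time, one HMM per sound class scores the observed feature sequence, and the class whose model explains the sequence best is selected. A minimal sketch with discrete observations follows; the two-state models, the binary "tonal/non-tonal" feature and all parameter values are invented for illustration, since in practice the parameters come from the off-line training phase:

```python
import math

class DiscreteHMM:
    """Minimal discrete-observation HMM. Only the forward algorithm is
    implemented, i.e. what is needed at identification time; training
    (e.g. Baum-Welch) would happen off-line."""

    def __init__(self, start, trans, emit):
        self.start, self.trans, self.emit = start, trans, emit

    def log_likelihood(self, obs):
        # Scaled forward algorithm to avoid numerical underflow.
        alpha = [p * self.emit[s][obs[0]] for s, p in enumerate(self.start)]
        norm = sum(alpha)
        ll = math.log(norm)
        alpha = [a / norm for a in alpha]
        for o in obs[1:]:
            alpha = [self.emit[j][o] *
                     sum(a * self.trans[i][j] for i, a in enumerate(alpha))
                     for j in range(len(alpha))]
            norm = sum(alpha)
            ll += math.log(norm)
            alpha = [a / norm for a in alpha]
        return ll

def classify(models, obs):
    """Pick the class whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: models[name].log_likelihood(obs))

# Two illustrative classes over one binary feature per frame
# (0 = tonal frame, 1 = non-tonal frame).
models = {
    "speech": DiscreteHMM([0.5, 0.5],
                          [[0.7, 0.3], [0.3, 0.7]],
                          [[0.9, 0.1], [0.2, 0.8]]),  # state 0 mostly tonal
    "noise":  DiscreteHMM([0.5, 0.5],
                          [[0.9, 0.1], [0.1, 0.9]],
                          [[0.1, 0.9], [0.2, 0.8]]),  # mostly non-tonal
}
obs = [0, 0, 1, 0, 0, 0, 1, 0]  # a mostly tonal observation sequence
detected = classify(models, obs)
```

Because the transition matrix models how features evolve over time, such a classifier captures the dynamic properties mentioned above, which a frame-by-frame classifier would miss.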

According to the invention, the mentioned second aspect of the method, i.e. the use of Hidden Markov Models, is excellently suited for determining a momentary acoustic surrounding situation or noises. Likewise, the recognition of a speaker as well as the detection of single words or phrases can be accomplished with extremely good results, even on their own, i.e. also without taking auditory-based features into account in the extraction phase, or without the use of ASA (Auditory Scene Analysis) methods, which are employed in a further embodiment for the determination of characteristic features.

The output signal of the signal identification unit 8 thus contains information about the kind of the acoustic surroundings (acoustic surrounding situation). This information is fed to the transfer unit 4, in which the program or parameter set most suitable for the detected surrounding situation is selected. At the same time, the information determined in the signal identification unit 8 is fed to the control unit 9 for further functions, where, depending on the situation, an arbitrary function, e.g. an acoustic signal, can be triggered.
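The resulting control flow can be sketched as a simple lookup, with the manual override of the user input unit 11 layered on top; the class names and program names are assumptions for illustration:

```python
# Illustrative mapping from detected situation to hearing program;
# the names are invented for this sketch, not taken from the patent.
PROGRAMS = {
    "speech":          "speech-in-quiet",
    "speech_in_noise": "speech-in-noise",
    "music":           "music",
    "noise":           "comfort-in-noise",
}

def apply_classification(detected_class, user_override=None):
    """Select the transfer program for the detected situation; a manual
    user input (unit 11) always overrides the automatic choice."""
    if user_override is not None:
        return user_override
    return PROGRAMS.get(detected_class, "default")
```

For example, `apply_classification("music")` yields the automatic choice, while `apply_classification("music", "speech-in-noise")` reflects a user who has overridden it.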

If Hidden Markov Models are used in the identification phase, an elaborate procedure becomes necessary for determining the parameters needed for the classification. This parameter determination is therefore preferably carried out in an "off-line" procedure, separately for each class. The actual identification of different acoustic surrounding situations, in contrast, requires only little memory and computing capacity. It is therefore proposed to provide a training unit 10 that has sufficient computing power for the parameter determination and that can be connected to the hearing device 1 by suitable means for the purpose of data transfer. Such means can be, for example, a simple wire connection with corresponding plugs.

With the method according to the invention it is thus possible to select the most suitable of a multitude of different settings and automatically retrievable actions without the user of the device having to act himself. The comfort for the user is thereby substantially improved, since immediately after a new acoustic surrounding situation has been detected, the right program or the corresponding function is selected automatically in the hearing device 1.

Users of hearing devices often also wish to switch off the automatic detection of the surrounding situation described above and the automatic selection of the corresponding program associated with it. For this reason, an input unit 11 is provided with which automatic reactions or the automatic program selection can be overwritten. Such an input unit 11 can be, e.g., a switch on the hearing device 1 or a remote control operated by the user.

Other possibilities, such as a voice-controlled user input, are also conceivable.

Claims (14)

  1. Method for operating a hearing device (1), the method consisting in
    - that, during an extraction phase, characteristic features are extracted from an acoustic signal recorded by at least one microphone (2a, 2b),
    - that, during an identification phase, the characteristic features are processed using Hidden Markov Models for the determination of a momentary acoustic surrounding situation,
    characterized in
    - that, due to a determined momentary acoustic surrounding situation, a program or a transfer function, respectively, between at least one microphone (2a, 2b) and a receiver (6) is adjusted in the hearing device (1), such an adjustment being overwritable with the aid of a user input unit (11) by the hearing device user.
  2. Method according to claim 1, characterized in that, for the determination of the characteristic features extracted during the extraction phase, ASA (Auditory Scene Analysis) methods are used.
  3. Method according to claim 1 or 2, characterized in that at least one or several of the following auditory-based features are determined during the extraction of the features: loudness, spectral pattern, harmonic structure, common transient oscillation and decay events, coherent amplitude modulations, coherent frequency modulations, coherent frequency transitions and binaural effects.
  4. Method according to one of the preceding claims, characterized in that additionally to auditory-based features also any other features are determined.
  5. Method according to one of the preceding claims, characterized in that, for forming auditory objects, the auditory-based and, if need be, the other features are grouped along the principles of the Gestalt theory.
  6. Method according to claim 5, characterized in that the extraction of the features and/or the grouping of the features is carried out either context-independently or context-dependently in terms of human auditory perception, in consideration of additional information or hypotheses on the signal content and thus adapted to the respective acoustic situation.
  7. Method according to one of the preceding claims, characterized in that, during the identification phase, data is accessed that has been acquired during an "off-line" training phase.
  8. Method according to one of the preceding claims, characterized in that the extraction phase and the identification phase take place continuously or at regular or irregular time intervals, respectively.
  9. Method according to one of the preceding claims, characterized in that a certain function is triggered in the hearing device (1) in response to a detected momentary acoustic surrounding situation, a detected sound, a detected speaker or a detected word.
  10. Hearing device (1) comprising a transfer unit (4), which is, on its input side, operatively connected to at least one microphone (2a, 2b), and which is, on its output side, operatively connected to a receiver (6),
    - the input signal of the transfer unit (4) being simultaneously fed to a signal analyzing unit (7) for the extraction of characteristic features,
    - the signal analyzing unit (7) being operatively connected to a signal identification unit (8), in which a momentary acoustic situation is determined using Hidden Markov Models,
    characterized in
    - that the signal identification unit (8) is operatively connected to the transfer unit (4) for adjusting a program or a transfer function, such an adjustment being overwritable by the hearing device user with the aid of a user input unit (11).
  11. Hearing device (1) according to claim 10, characterized in that an input unit (11) is provided which is operatively connected to the transfer unit (4).
  12. Hearing device (1) according to claim 10 or 11, characterized in that a control unit (9) is provided, the signal identification unit (8) being operatively connected to the control unit (9).
  13. Hearing device (1) according to claim 12, characterized in that the input unit (11) is operatively connected to the control unit (9).
  14. Hearing device (1) according to one of the claims 10 to 13, characterized in that any means for transferring parameters from a training unit (10) to the signal identification unit (8) are provided.
EP01900013A 2001-01-05 2001-01-05 Method for operating a hearing-aid and a hearing aid Expired - Lifetime EP1247425B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CH2001/000007 WO2001022790A2 (en) 2001-01-05 2001-01-05 Method for operating a hearing-aid and a hearing aid

Publications (2)

Publication Number Publication Date
EP1247425A2 EP1247425A2 (en) 2002-10-09
EP1247425B1 true EP1247425B1 (en) 2008-07-02

Family

ID=4358166

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01900013A Expired - Lifetime EP1247425B1 (en) 2001-01-05 2001-01-05 Method for operating a hearing-aid and a hearing aid

Country Status (6)

Country Link
EP (1) EP1247425B1 (en)
JP (1) JP2004500750A (en)
CA (1) CA2400089A1 (en)
DE (1) DE50114066D1 (en)
DK (1) DK1247425T3 (en)
WO (1) WO2001022790A2 (en)


Also Published As

Publication number Publication date
WO2001022790A3 (en) 2002-04-18
CA2400089A1 (en) 2001-04-05
WO2001022790A2 (en) 2001-04-05
DK1247425T3 (en) 2008-10-27
JP2004500750A (en) 2004-01-08
EP1247425A2 (en) 2002-10-09
DE50114066D1 (en) 2008-08-14


Legal Events

- PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
- 17P: Request for examination filed (effective 20020807)
- AK: Designated contracting states (kind code A2): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR
- AX: Request for extension of the European patent: AL; LT; LV; MK; RO; SI
- 17Q: First examination report despatched (effective 20030407)
- RBV: Designated contracting states (corrected): CH DE DK FR GB IT LI
- GRAP: Despatch of communication of intention to grant a patent (original code: EPIDOSNIGR1)
- GRAS: Grant fee paid (original code: EPIDOSNIGR3)
- GRAA: (Expected) grant (original code: 0009210)
- AK: Designated contracting states (kind code B1): CH DE DK FR GB IT LI
- REG (GB): FG4D, not English
- REG (CH): NV, representative: TROESCH SCHEIDEGGER WERNER AG; EP
- REF: Corresponds to DE 50114066 (date of ref document: 20080814, kind code P)
- PLBE / 26N: No opposition filed within time limit (effective 20090403)
- PG25 (IT): Lapsed because of failure to submit a translation of the description or to pay the fee within the prescribed time limit (effective 20080702)
- PGFP: Annual fees paid to national office: FR (payment date 20130204, year 13), DK (payment date 20130110, year 13)
- REG (DK): EBP (effective 20140131)
- REG (FR): ST (effective 20140930)
- PG25 (FR): Lapsed because of non-payment of due fees (effective 20140131)
- PG25 (DK): Lapsed because of non-payment of due fees (effective 20140131)
- PGFP: Annual fees paid to national office: CH (payment date 20160127, year 16), GB (payment date 20160127, year 16), DE (payment date 20170125, year 17)
- REG (CH): PL
- GBPC: GB: European patent ceased through non-payment of renewal fee (effective 20170105)
- PG25 (CH, LI): Lapsed because of non-payment of due fees (effective 20170131)
- PG25 (GB): Lapsed because of non-payment of due fees (effective 20170105)
- REG (DE): R119, ref document 50114066
- PG25: Lapsed in a contracting state

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180801