EP1247425B1 - Method for operating a hearing-aid and a hearing aid - Google Patents
Method for operating a hearing-aid and a hearing aid
- Publication number
- EP1247425B1 (application EP01900013A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- unit
- features
- hearing device
- signal
- auditory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Links
- 238000000034 method Methods 0.000 title claims abstract description 41
- 238000004458 analytical method Methods 0.000 claims description 22
- 238000000605 extraction Methods 0.000 claims description 16
- 230000006870 function Effects 0.000 claims description 7
- 238000012546 transfer Methods 0.000 claims description 7
- 230000001427 coherent effect Effects 0.000 claims description 6
- 230000008447 perception Effects 0.000 claims description 5
- 238000012549 training Methods 0.000 claims description 4
- 230000003595 spectral effect Effects 0.000 claims description 3
- 230000000694 effects Effects 0.000 claims description 2
- 230000007704 transition Effects 0.000 claims description 2
- 230000001960 triggered effect Effects 0.000 claims description 2
- 230000001788 irregular Effects 0.000 claims 1
- 230000010355 oscillation Effects 0.000 claims 1
- 230000001052 transient effect Effects 0.000 claims 1
- 230000007613 environmental effect Effects 0.000 abstract description 7
- 230000005540 biological transmission Effects 0.000 description 10
- 238000001514 detection method Methods 0.000 description 7
- 238000013528 artificial neural network Methods 0.000 description 4
- 230000001419 dependent effect Effects 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 3
- 238000003909 pattern recognition Methods 0.000 description 3
- 238000001228 spectrum Methods 0.000 description 3
- 230000002123 temporal effect Effects 0.000 description 3
- 230000009471 action Effects 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000012512 characterization method Methods 0.000 description 2
- 230000005236 sound signal Effects 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 238000012567 pattern recognition method Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/40—Arrangements for obtaining a desired directivity characteristic
- H04R25/402—Arrangements for obtaining a desired directivity characteristic using constructional means
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
Landscapes
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Neurosurgery (AREA)
- Otolaryngology (AREA)
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Description
The present invention relates to a method for operating a hearing aid, and to a hearing aid.
Modern hearing aids can today be adapted to different acoustic environments by means of several hearing programs, typically two to at most three. In this way the hearing aid is intended to offer the user optimum benefit in every situation.
The hearing program can be selected either via a remote control or via a switch on the hearing aid itself. For many users, however, switching between hearing programs is tedious or difficult, if not impossible. Even experienced hearing aid wearers cannot always easily determine which program offers optimum comfort and the best speech intelligibility at a given moment. Automatic recognition of the acoustic environment and the associated automatic switching of the hearing program in the hearing aid are therefore desirable.
Various methods for the automatic classification of acoustic environments are currently known. In all of these methods, various features are extracted from the input signal, which in a hearing aid may originate from one or more microphones. Based on these features, a pattern recognizer applies an algorithm to decide which acoustic environment the analyzed input signal belongs to. The known methods differ, on the one hand, in the features used to describe the acoustic environment (signal analysis) and, on the other hand, in the pattern recognizer used to classify these features (signal identification).
For feature extraction in audio signals, reference is made, for example, to the paper by Ostendorf et al., "Classification of Acoustic Signals Based on the Analysis of Modulation Spectra for Use in Digital Hearing Aids" (Journal of Audiology, 1998, pages 148 to 150).
The known noise-classification methods, consisting of feature extraction and pattern recognition, have the disadvantage that, although a clear and robust identification of speech signals is possible in principle, several different acoustic environments cannot be classified, or can be classified only inadequately. Thus, while the known methods make it possible to distinguish pure speech signals from "non-speech", i.e. from all other acoustic environments, this is not sufficient for selecting the optimum hearing program for the momentary acoustic environment. As a consequence, either the number of possible hearing programs is limited to the two automatically recognizable acoustic environments, or the hearing aid wearer must recognize the environments not covered himself and activate the associated hearing program by hand.
In principle, pattern identification methods known from noise classification can be used. In particular, so-called distance estimators, Bayes classifiers, fuzzy logic systems or neural networks are suitable as pattern recognizers. Further information on the two first-named methods can be found in the publication … ; on neural networks, the standard work by Christopher M. Bishop entitled "Neural Networks for Pattern Recognition" (1995, Oxford University Press) provides detailed information.
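To make the pattern-recognizer step concrete, the following is a minimal sketch of the simplest of the recognizers named above, a distance estimator that assigns a feature vector to the class with the nearest mean. The class names, the feature dimensionality and the training data are invented for illustration; the patent does not prescribe this particular implementation.

```python
import numpy as np

class DistanceEstimator:
    """Nearest-class-mean classifier: each environment class is represented
    by the mean of its training feature vectors."""

    def fit(self, features_per_class):
        # features_per_class: dict of class name -> (n_frames, n_features) array
        self.means = {name: x.mean(axis=0) for name, x in features_per_class.items()}
        return self

    def classify(self, feature_vector):
        # Assign to the class whose mean is closest (Euclidean distance).
        return min(self.means,
                   key=lambda name: np.linalg.norm(feature_vector - self.means[name]))

rng = np.random.default_rng(0)
training = {
    "speech": rng.normal(0.0, 1.0, (100, 3)),   # made-up feature clusters
    "noise":  rng.normal(3.0, 1.0, (100, 3)),
}
estimator = DistanceEstimator().fit(training)
print(estimator.classify(np.array([2.8, 3.1, 2.9])))   # -> "noise"
```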
Furthermore, it is known from …
The present invention is therefore based on the object of providing, first of all, a method for operating a hearing aid that is substantially more robust and more accurate than the known methods.
This object is achieved by the measures specified in claim 1. Advantageous embodiments of the invention, as well as a hearing aid, are specified in further claims.
The invention is based on an extraction of signal features, a subsequent separation of different noise sources and an identification of different sounds, with Hidden Markov models being used in the identification phase to detect a momentary environment or noises and/or a speaker or his words. In this way, dynamic properties of the classes of interest are taken into account for the first time, as a result of which a considerable improvement in the accuracy of the method according to the invention has been achieved in all fields of application, i.e. in the detection of the momentary environment or of noises as well as in the detection of a speaker and of individual words.
In a further embodiment of the method according to the invention, auditory-based features are taken into account instead of, or in addition to, technically based features in the extraction phase. These auditory-based features are preferably determined using methods of Auditory Scene Analysis (ASA).
In yet a further embodiment of the method according to the invention, the features are grouped in the extraction phase with the aid of Gestalt principles, either context-independently or context-dependently.
The invention is explained in more detail below, by way of example, with reference to a drawing. The single figure shows a block diagram of a hearing aid in which the method according to the invention is implemented.
In the single figure, 1 denotes a hearing aid. In the following, the term "hearing aid" is to be understood as covering both hearing aids in the narrower sense, which are used to correct a person's impaired hearing, and all other acoustic communication systems, such as radio devices.
The hearing aid 1 consists, in a known manner, of two electro-acoustic transducers 2a, 2b and 6, namely one or more microphones 2a, 2b and a loudspeaker 6, also referred to as the receiver. The actual main component of a hearing aid 1 is a transmission unit, designated 4, in which the signal modifications, adapted in the case of a hearing aid to the user of the hearing aid 1, are carried out. The operations performed in the transmission unit 4 depend, however, not only on the type of a predetermined target function of the hearing aid 1 but are in particular also chosen as a function of the momentary acoustic environment. For this reason, hearing aids have already been offered in which the wearer can switch manually between different hearing programs adapted to specific acoustic environments. Hearing aids in which the acoustic environment is recognized automatically are likewise known. In this regard, reference is again made to the European patent specifications with the publication numbers …
In addition to the components mentioned, i.e. the microphones 2a, 2b, the transmission unit 4 and the receiver 6, a signal analysis unit 7 and a signal identification unit 8 are provided in the hearing aid 1. If the hearing aid 1 is implemented in digital technology, one or more analog-to-digital converters 3a, 3b are provided between the microphones 2a, 2b and the transmission unit 4, and a digital-to-analog converter 5 is provided between the transmission unit 4 and the receiver 6. Although implementation in digital technology is the preferred embodiment of the present invention, it is in principle also conceivable for all components to be implemented in analog technology, in which case the converters 3a, 3b and 5 are of course omitted.
The signal analysis unit 7 receives the same input signal as the transmission unit 4. Finally, the signal identification unit 8, which is connected to the output of the signal analysis unit 7, is connected to the transmission unit 4 and to a control unit 9.
Reference numeral 10 denotes a training unit, with the aid of which the parameters required for the classification in the signal identification unit 8 are determined in an "off-line" operation.
The settings of the transmission unit 4 and of the control unit 9 determined by the signal analysis unit 7 and the signal identification unit 8 can be overridden by the user by means of a user input unit 11.
The method according to the invention is explained below.
A preferred embodiment of the method according to the invention is based on extracting characteristic features from an acoustic signal in an extraction phase, wherein, instead of or in addition to technically based features, such as the previously mentioned zero-crossing rates, temporal level fluctuations, different modulation frequencies, or the level, the spectral centroid and the amplitude distribution, auditory-based features are also used. These auditory-based features are determined with the aid of Auditory Scene Analysis (ASA) and include in particular loudness, spectral shape (timbre), harmonic structure (pitch), common onsets and offsets, coherent amplitude modulations, coherent frequency modulations, coherent frequency transitions, binaural effects, etc. Explanations of Auditory Scene Analysis can be found, for example, in the works of …
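For illustration, the sketch below computes three of the technically based features named above (zero-crossing rate, level and spectral centroid) for a single signal frame. The frame length, sampling rate and exact feature definitions are assumptions for the example, not specifications taken from the patent.

```python
import numpy as np

def technical_features(frame, sample_rate):
    """Compute a small technical feature vector for one 1-D signal frame."""
    # Zero-crossing rate: sign changes per second.
    signs = np.signbit(frame).astype(np.int8)
    zcr = float(np.mean(np.abs(np.diff(signs)))) * sample_rate
    # Level: RMS expressed in dB relative to full scale (floored to avoid log(0)).
    rms = np.sqrt(np.mean(frame ** 2))
    level_db = 20.0 * np.log10(max(rms, 1e-10))
    # Spectral centroid: magnitude-weighted mean frequency of the frame.
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return np.array([zcr, level_db, centroid])

# Example: a 20 ms frame of a 1 kHz tone sampled at 16 kHz.
sr = 16000
t = np.arange(int(0.02 * sr)) / sr
print(technical_features(np.sin(2 * np.pi * 1000.0 * t), sr))
```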
As an example of the use of auditory-based features in signal analysis, mention may be made of the characterization of the tonality of the acoustic signal by analyzing its harmonic structure, which is particularly suitable for the identification of tonal signals such as speech and music.
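As a rough illustration of such a tonality characterization, the sketch below estimates harmonicity from the peak of the normalized autocorrelation within a plausible pitch range; the pitch range and the interpretation of the resulting value are assumptions for the example.

```python
import numpy as np

def harmonicity(frame, sample_rate, f_min=80.0, f_max=500.0):
    """Peak of the normalized autocorrelation within a plausible pitch range.
    Values near 1 indicate a strongly harmonic (tonal) signal such as voiced
    speech or music; values near 0 indicate noise.
    Returns (tonality, pitch_estimate_hz)."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0.0:
        return 0.0, None               # silent frame
    ac = ac / ac[0]                    # lag 0 normalized to 1
    lo = int(sample_rate / f_max)      # smallest pitch lag considered
    hi = int(sample_rate / f_min)      # largest pitch lag considered
    lag = lo + int(np.argmax(ac[lo:hi]))
    return float(ac[lag]), sample_rate / lag

sr = 16000
t = np.arange(2048) / sr
tone, pitch = harmonicity(np.sin(2 * np.pi * 220.0 * t), sr)
noise, _ = harmonicity(np.random.default_rng(1).standard_normal(2048), sr)
print(f"tone: {tone:.2f} at {pitch:.0f} Hz, noise: {noise:.2f}")
```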
In a further embodiment of the method according to the invention, the features are additionally grouped in the signal analysis unit 7 by means of Gestalt principles. The principles of Gestalt theory, which examines qualitative properties such as continuity, proximity, similarity, common fate, closure, good continuation and others, are applied to the auditory-based and, where applicable, technical features in order to form auditory objects. The grouping can, like the feature extraction in the extraction phase, be carried out either context-independently, i.e. without drawing on additional knowledge (so-called "primitive" grouping), or context-dependently in the sense of human auditory perception, using additional information or hypotheses about the signal content (so-called "schema-based" grouping). The context-dependent grouping is thus adapted to the respective acoustic situation. For detailed explanations of the principles of Gestalt theory and of grouping by means of Gestalt principles, reference is made, by way of example, to the following publications: …
The advantage of applying these grouping methods is that the features of the input signal can be differentiated further. In particular, signal components originating from different sound sources become identifiable. This makes it possible to assign the extracted features to individual noise sources, thereby providing additional knowledge about the noise sources present, and thus about the momentary environment.
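The following sketch illustrates the idea of "primitive" (context-independent) grouping using a single Gestalt cue, common onset: frequency channels whose energy onsets nearly coincide are tentatively assigned to the same source. The channels, onset times and tolerance are invented for illustration; a real ASA grouping stage combines several cues.

```python
def group_by_common_onset(onset_times, tolerance=0.02):
    """Group frequency channels whose onsets fall within `tolerance` seconds
    of a group's first onset; each group is taken as one candidate source.
    onset_times: dict of channel label -> onset time in seconds.
    The 20 ms tolerance is an assumption."""
    groups = []
    for channel, t in sorted(onset_times.items(), key=lambda kv: kv[1]):
        if groups and t - groups[-1]["onset"] <= tolerance:
            groups[-1]["channels"].append(channel)
        else:
            groups.append({"onset": t, "channels": [channel]})
    return groups

# Two sources: channels starting near 0.10 s and channels starting near 0.50 s.
onsets = {"200Hz": 0.100, "400Hz": 0.105, "600Hz": 0.112,
          "1kHz": 0.500, "2kHz": 0.508}
for g in group_by_common_onset(onsets):
    print(round(g["onset"], 3), g["channels"])
```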
The second aspect of the method according to the invention described here concerns the pattern recognition, or signal identification, carried out in the identification phase. In the preferred embodiment of the method according to the invention, the Hidden Markov Model (HMM) technique is applied in the signal identification unit 8 for the automatic classification of the acoustic environment. In this way, temporal changes of the computed features can also be used for the classification. Consequently, dynamic, and not merely static, properties of the environments or noise classes to be recognized are taken into account. Combining HMMs with other classifiers, e.g. in a multi-stage recognition procedure, for identifying the acoustic environment is likewise possible.
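A minimal sketch of such an HMM-based classification, assuming the third-party Python package hmmlearn is available: one Gaussian HMM is trained per noise class, and a new feature sequence is assigned to the class whose model yields the highest log-likelihood. The class names, feature data and model sizes are invented for illustration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party package, assumed available

rng = np.random.default_rng(0)
train = {                                   # made-up feature sequences per class
    "speech":  rng.normal(0.0, 1.0, (500, 3)),
    "traffic": rng.normal(4.0, 1.0, (500, 3)),
}
# One HMM per noise class, trained on that class's data alone.
models = {name: GaussianHMM(n_components=3, covariance_type="diag",
                            n_iter=25, random_state=0).fit(X)
          for name, X in train.items()}

def classify(feature_sequence):
    # The momentary environment is the class whose model assigns the
    # highest log-likelihood to the observed feature sequence.
    return max(models, key=lambda name: models[name].score(feature_sequence))

print(classify(rng.normal(4.0, 1.0, (50, 3))))   # -> "traffic"
```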
According to the invention, the above-mentioned second aspect of the method, i.e. the use of Hidden Markov models, is eminently suitable for determining a momentary acoustic environment or noises. Likewise, the recognition of a speaker and the detection of individual words or phrases can be accomplished with extremely good results, even on its own, i.e. without taking auditory-based features into account in the extraction phase and without using the ASA (Auditory Scene Analysis) methods that are employed in a further embodiment for determining characteristic features.
The output signal of the signal identification unit 8 thus contains information about the type of acoustic environment (the acoustic environment situation). This information is fed to the transmission unit 4, in which the program most suitable for the recognized environment, or the most suitable parameter set for the transmission, is selected. At the same time, the information determined in the signal identification unit 8 is fed to the control unit 9 for further functions, where, depending on the situation, any desired function, e.g. an acoustic signal, can be triggered.
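Schematically, this selection step can be thought of as a lookup from the identified class to a parameter set for the transmission unit and, where one is defined, an action for the control unit. The program names, parameters and notification action below are purely illustrative assumptions, not values taken from the patent.

```python
# Hypothetical program table: detected class -> transmission-unit parameters.
PROGRAMS = {
    "speech":  {"directivity": "adaptive", "noise_reduction": "low"},
    "traffic": {"directivity": "fixed",    "noise_reduction": "high"},
}
# Hypothetical control-unit actions triggered for particular classes.
ACTIONS = {"traffic": "play_notification_tone"}

def on_environment_detected(detected, transmission_unit, control_unit,
                            user_override=None):
    # A user input (switch or remote control) overrides the automatic choice.
    chosen = user_override or detected
    transmission_unit.update(PROGRAMS.get(chosen, PROGRAMS["speech"]))
    if chosen in ACTIONS:
        control_unit.append(ACTIONS[chosen])

transmission_unit, control_unit = {}, []
on_environment_detected("traffic", transmission_unit, control_unit)
print(transmission_unit, control_unit)
```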
If Hidden Markov models are used in the identification phase, an elaborate procedure for determining the parameters required for the classification becomes necessary. This parameter determination is therefore preferably carried out in an "off-line" procedure, separately for each class. The actual identification of different acoustic environments, on the other hand, requires only little memory and computing capacity. It is therefore proposed to provide a training unit 10 which has sufficient computing power for the parameter determination and which can be connected to the hearing aid 1 by suitable means for the purpose of data transfer. Such means may, for example, be a simple wire connection with appropriate plugs.
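A sketch of this off-line division of labour, again assuming hmmlearn on the training side: the parameters estimated there are exported in a simple serial format so that the hearing aid only needs to store and evaluate them. The JSON format and file name are assumptions for the example.

```python
import json
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed available on the training side

# Off-line: estimate the HMM parameters for one class on the training unit.
X = np.random.default_rng(0).normal(size=(500, 3))   # placeholder feature data
model = GaussianHMM(n_components=3, covariance_type="diag",
                    n_iter=25, random_state=0).fit(X)

# Export the estimated parameters; only these need to be transferred to the
# hearing aid (e.g. over a simple wire link), where they are merely evaluated.
params = {
    "startprob": model.startprob_.tolist(),
    "transmat":  model.transmat_.tolist(),
    "means":     model.means_.tolist(),
    "covars":    model.covars_.tolist(),   # full matrices built from the diagonals
}
with open("speech_hmm.json", "w") as f:    # hypothetical file name/format
    json.dump(params, f)
```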
With the method according to the invention it is thus possible to select the most suitable of a multiplicity of different settings and automatically retrievable actions without the user of the device having to act himself. Comfort for the user is thereby substantially improved, because immediately after a new acoustic environment is recognized, the appropriate program or the corresponding function is selected automatically in the hearing aid 1.
Users of hearing aids often also wish to be able to switch off the automatic recognition of the environment described above and the associated automatic selection of the corresponding program. For this reason, an input unit 11 is provided, with which automatic responses or the automatic program selection can be overridden. Such an input unit 11 may, for example, be a switch on the hearing aid 1 or a remote control operated by the user.
Other possibilities, such as voice-controlled user input, are likewise conceivable.
Claims (14)
- 1. Method for operating a hearing device (1), the method consisting in that, during an extraction phase, characteristic features are extracted from an acoustic signal recorded by at least one microphone (2a, 2b), and that, during an identification phase, the characteristic features are processed using Hidden Markov Models for the determination of a momentary acoustic surrounding situation, characterized in that, due to a determined momentary acoustic surrounding situation, a program or a transfer function, respectively, between the at least one microphone (2a, 2b) and a receiver (6) is adjusted in the hearing device (1), such an adjustment being overwritable by the hearing device user with the aid of a user input unit (11).
- 2. Method according to claim 1, characterized in that ASA (Auditory Scene Analysis) methods are used for the determination of the characteristic features extracted during the extraction phase.
- 3. Method according to claim 1 or 2, characterized in that at least one or several of the following auditory-based features are determined during the extraction of the features: loudness, spectral pattern, harmonic structure, common transient oscillation and decay events, coherent amplitude modulations, coherent frequency modulations, coherent frequency transitions and binaural effects.
- 4. Method according to one of the preceding claims, characterized in that, in addition to auditory-based features, other features are also determined.
- 5. Method according to one of the preceding claims, characterized in that, for forming auditory objects, the auditory-based and, if need be, the other features are grouped along the principles of Gestalt theory.
- 6. Method according to claim 5, characterized in that the extraction of the features and/or the grouping of the features is carried out either context-independently, or context-dependently in terms of human auditory perception, in consideration of additional information or hypotheses on the signal content and thus adapted to the respective acoustic situation.
- 7. Method according to one of the preceding claims, characterized in that, during the identification phase, data is accessed that has been acquired during an "off-line" training phase.
- 8. Method according to one of the preceding claims, characterized in that the extraction phase and the identification phase take place continuously, or at regular or irregular time intervals, respectively.
- 9. Method according to one of the preceding claims, characterized in that a certain function is triggered in the hearing device (1) in response to a detected momentary acoustic surrounding situation, a detected sound, a detected speaker or a detected word.
- 10. Hearing device (1) comprising a transfer unit (4) which is, on its input side, operatively connected to at least one microphone (2a, 2b) and, on its output side, operatively connected to a receiver (6), the input signal of the transfer unit (4) being simultaneously fed to a signal analyzing unit (7) for the extraction of characteristic features, and the signal analyzing unit (7) being operatively connected to a signal identification unit (8), in which a momentary acoustic situation is determined using Hidden Markov Models, characterized in that the signal identification unit (8) is operatively connected to the transfer unit (4) for adjusting a program or a transfer function, such an adjustment being overwritable by the hearing device user with the aid of a user input unit (11).
- 11. Hearing device (1) according to claim 10, characterized in that an input unit (11) is provided which is operatively connected to the transfer unit (4).
- 12. Hearing device (1) according to claim 10 or 11, characterized in that a control unit (9) is provided, the signal identification unit (8) being operatively connected to the control unit (9).
- 13. Hearing device (1) according to claim 12, characterized in that the input unit (11) is operatively connected to the control unit (9).
- 14. Hearing device (1) according to one of the claims 10 to 13, characterized in that means for transferring parameters from a training unit (10) to the signal identification unit (8) are provided.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CH2001/000007 WO2001022790A2 (en) | 2001-01-05 | 2001-01-05 | Method for operating a hearing-aid and a hearing aid |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| EP1247425A2 EP1247425A2 (en) | 2002-10-09 |
| EP1247425B1 true EP1247425B1 (en) | 2008-07-02 |
Family
ID=4358166
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP01900013A Expired - Lifetime EP1247425B1 (en) | 2001-01-05 | 2001-01-05 | Method for operating a hearing-aid and a hearing aid |
Country Status (6)
| Country | Link |
|---|---|
| EP (1) | EP1247425B1 (en) |
| JP (1) | JP2004500750A (en) |
| CA (1) | CA2400089A1 (en) |
| DE (1) | DE50114066D1 (en) |
| DK (1) | DK1247425T3 (en) |
| WO (1) | WO2001022790A2 (en) |
Families Citing this family (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CA2433390A1 (en) | 2001-11-09 | 2002-01-17 | Phonak Ag | Method for operating a hearing device and hearing device |
| US20040175008A1 (en) | 2003-03-07 | 2004-09-09 | Hans-Ueli Roeck | Method for producing control signals, method of controlling signal and a hearing device |
| US8027495B2 (en) | 2003-03-07 | 2011-09-27 | Phonak Ag | Binaural hearing device and method for controlling a hearing device system |
| EP1320281B1 (en) | 2003-03-07 | 2013-08-07 | Phonak Ag | Binaural hearing device and method for controlling such a hearing device |
| DK1326478T3 (en) | 2003-03-07 | 2014-12-08 | Phonak Ag | Method for producing control signals and binaural hearing device system |
| US7773763B2 (en) * | 2003-06-24 | 2010-08-10 | Gn Resound A/S | Binaural hearing aid system with coordinated sound processing |
| US6912289B2 (en) | 2003-10-09 | 2005-06-28 | Unitron Hearing Ltd. | Hearing aid and processes for adaptively processing signals therein |
| EP1513371B1 (en) | 2004-10-19 | 2012-08-15 | Phonak Ag | Method for operating a hearing device as well as a hearing device |
| US7319769B2 (en) * | 2004-12-09 | 2008-01-15 | Phonak Ag | Method to adjust parameters of a transfer function of a hearing device as well as hearing device |
| US7680291B2 (en) | 2005-08-23 | 2010-03-16 | Phonak Ag | Method for operating a hearing device and a hearing device |
| DE502005009721D1 (en) | 2005-08-23 | 2010-07-22 | Phonak Ag | Method for operating a hearing aid and a hearing aid |
| JP5069696B2 (en) | 2006-03-03 | 2012-11-07 | ジーエヌ リザウンド エー/エス | Automatic switching between omnidirectional and directional microphone modes of hearing aids |
| DK1858292T4 (en) | 2006-05-16 | 2022-04-11 | Phonak Ag | Hearing device and method of operating a hearing device |
| US8249284B2 (en) | 2006-05-16 | 2012-08-21 | Phonak Ag | Hearing system and method for deriving information on an acoustic scene |
| US7957548B2 (en) | 2006-05-16 | 2011-06-07 | Phonak Ag | Hearing device with transfer function adjusted according to predetermined acoustic environments |
| US8594337B2 (en) | 2006-12-13 | 2013-11-26 | Phonak Ag | Method for operating a hearing device and a hearing device |
| DE102007056221B4 (en) * | 2007-11-27 | 2009-07-09 | Siemens Ag Österreich | Method for speech recognition |
| WO2009094709A1 (en) | 2008-02-01 | 2009-08-06 | Cochlear Limited | An apparatus and method for optimising power consumption of a digital circuit |
| EP2569955B1 (en) | 2010-05-12 | 2014-12-03 | Phonak AG | Hearing system and method for operating the same |
| US9363612B2 (en) | 2010-12-20 | 2016-06-07 | Sonova Ag | Method for operating a hearing device and a hearing device |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP0681411B1 (en) * | 1994-05-06 | 2003-01-29 | Siemens Audiologische Technik GmbH | Programmable hearing aid |
| US6002776A (en) * | 1995-09-18 | 1999-12-14 | Interval Research Corporation | Directional acoustic signal processor and method therefor |
| US5960397A (en) * | 1997-05-27 | 1999-09-28 | At&T Corp | System and method of recognizing an acoustic environment to adapt a set of based recognition models to the current acoustic environment for subsequent speech recognition |
| AU2001246395A1 (en) * | 2000-04-04 | 2001-10-15 | Gn Resound A/S | A hearing prosthesis with automatic classification of the listening environment |
- 2001
- 2001-01-05 CA CA002400089A patent/CA2400089A1/en not_active Abandoned
- 2001-01-05 WO PCT/CH2001/000007 patent/WO2001022790A2/en not_active Ceased
- 2001-01-05 JP JP2001526020A patent/JP2004500750A/en active Pending
- 2001-01-05 EP EP01900013A patent/EP1247425B1/en not_active Expired - Lifetime
- 2001-01-05 DK DK01900013T patent/DK1247425T3/en active
- 2001-01-05 DE DE50114066T patent/DE50114066D1/en not_active Expired - Lifetime
Also Published As
| Publication number | Publication date |
|---|---|
| WO2001022790A3 (en) | 2002-04-18 |
| CA2400089A1 (en) | 2001-04-05 |
| WO2001022790A2 (en) | 2001-04-05 |
| DK1247425T3 (en) | 2008-10-27 |
| JP2004500750A (en) | 2004-01-08 |
| EP1247425A2 (en) | 2002-10-09 |
| DE50114066D1 (en) | 2008-08-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP1247425B1 (en) | Method for operating a hearing-aid and a hearing aid | |
| WO2001020965A2 (en) | Method for determining a current acoustic environment, use of said method and a hearing-aid | |
| DE60120949T2 (en) | A HEARING PROSTHESIS WITH AUTOMATIC HEARING CLASSIFICATION | |
| DE3871711T2 (en) | METHOD AND DEVICE FOR IMPROVING THE UNDERSTANDING OF VOICES IN HIGH NOISE ENVIRONMENT. | |
| DE69816221T2 (en) | LANGUAGE SPEED CHANGE METHOD AND DEVICE | |
| EP1470735B1 (en) | Method for determining an acoustic environment situation, application of the method and hearing aid | |
| DE60027438T2 (en) | IMPROVING A HARMFUL AUDIBLE SIGNAL | |
| DE69433254T2 (en) | Method and device for speech detection | |
| EP2081406B1 (en) | Method and device for configuring variables on a hearing aid | |
| EP3445067B1 (en) | Hearing aid and method for operating a hearing aid | |
| DE60204902T2 (en) | Method for programming a communication device and programmable communication device | |
| EP1404152B1 (en) | Device and method for fitting a hearing-aid | |
| EP3074974B1 (en) | Hearing assistance device with fundamental frequency modification | |
| EP2405673B1 (en) | Method for localising an audio source and multi-channel audio system | |
| EP3386215B1 (en) | Hearing aid and method for operating a hearing aid | |
| DE69616724T2 (en) | Method and system for speech recognition | |
| DE2020753A1 (en) | Device for recognizing given speech sounds | |
| EP3693960B1 (en) | Method for individualized signal processing of an audio signal of a hearing aid | |
| WO2008043731A1 (en) | Method for operating a hearing aid, and hearing aid | |
| EP1471770B1 (en) | Method for generating an approximated partial transfer function | |
| DE60033039T2 | DEVICE AND METHOD FOR THE SUPPRESSION OF SIBILANTS USING ADAPTIVE FILTER ALGORITHMS | |
| WO2001047335A2 (en) | Method for the elimination of noise signal components in an input signal for an auditory system, use of said method and a hearing aid | |
| DE10114101A1 (en) | Processing input signal in signal processing unit for hearing aid, involves analyzing input signal and adapting signal processing unit setting parameters depending on signal analysis results | |
| DE60004403T2 (en) | DEVICE AND METHOD FOR DETECTING SIGNAL QUALITY | |
| EP2200341A1 (en) | Method for operating a hearing aid and hearing aid with a source separation device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| 17P | Request for examination filed |
Effective date: 20020807 |
|
| AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
| AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
| 17Q | First examination report despatched |
Effective date: 20030407 |
|
| RBV | Designated contracting states (corrected) |
Designated state(s): CH DE DK FR GB IT LI |
|
| GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
| GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
| GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
| AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): CH DE DK FR GB IT LI |
|
| REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
| REG | Reference to a national code |
Ref country code: CH Ref legal event code: NV Representative's name: TROESCH SCHEIDEGGER WERNER AG Ref country code: CH Ref legal event code: EP |
|
| REF | Corresponds to: |
Ref document number: 50114066 Country of ref document: DE Date of ref document: 20080814 Kind code of ref document: P |
|
| PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
| 26N | No opposition filed |
Effective date: 20090403 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20080702 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20130204 Year of fee payment: 13 Ref country code: DK Payment date: 20130110 Year of fee payment: 13 |
|
| REG | Reference to a national code |
Ref country code: DK Ref legal event code: EBP Effective date: 20140131 |
|
| REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST Effective date: 20140930 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20140131 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20140131 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: CH Payment date: 20160127 Year of fee payment: 16 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20160127 Year of fee payment: 16 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20170125 Year of fee payment: 17 |
|
| REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
| GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20170105 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170131 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170131 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170105 |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 50114066 Country of ref document: DE |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20180801 |