EP3386215B1 - Method for operating a hearing device, and hearing device - Google Patents
Method for operating a hearing device, and hearing device
- Publication number
- EP3386215B1 (application EP18157220.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- assigned
- signal
- hearing
- acoustic
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/43—Electronic input selection or mixing based on input signal analysis, e.g. mixing or selection between microphone and telecoil or between microphones with different directivity characteristics
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
- H04R25/507—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing implemented by neural network or fuzzy logic
- H04R25/70—Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R2225/39—Aspects relating to automatic logging of sound environment parameters and the performance of the hearing aid during use, e.g. histogram logging, or of user selected programs or settings in the hearing aid, e.g. usage logging
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L25/81—Detection of presence or absence of voice signals for discriminating voice from music
- G10L25/84—Detection of presence or absence of voice signals for discriminating voice from noise
Definitions
- The invention relates to a method for operating a hearing device, and to a hearing device which is set up in particular to carry out the method.
- Hearing devices are usually used to output a sound signal to the hearing of the wearer of this hearing device.
- The output takes place by means of an output transducer, mostly acoustically via airborne sound by means of a loudspeaker (also referred to as a "listener" or "receiver").
- Such hearing devices are often used as so-called hearing aid devices (or hearing aids for short).
- Such hearing devices normally include an acoustic input transducer (in particular a microphone) and a signal processor which is set up to process the input signal (also: microphone signal) generated by the input transducer from the ambient sound, using at least one signal processing algorithm that is usually stored in a user-specific manner, in such a way that a hearing loss of the wearer of the hearing device is at least partially compensated.
- Besides a loudspeaker, the output transducer can also be a so-called bone conduction receiver or a cochlear implant, which are set up to couple the audio signal mechanically or electrically into the wearer's hearing.
- The term "hearing devices" also covers, in particular, devices such as so-called tinnitus maskers, headsets, headphones and the like.
- Modern hearing devices, in particular hearing aids, often include a so-called classifier, which is usually designed as part of the signal processor that executes the respective signal processing algorithm or algorithms.
- A classifier is in turn usually an algorithm which infers the current hearing situation from the ambient sound recorded by means of the microphone.
- The respective signal processing algorithm or algorithms are then adapted to the characteristic properties of the present hearing situation.
- The hearing device is intended to pass on the information relevant to the user in accordance with the hearing situation. For example, the clearest possible output of music requires different settings (parameter values of different parameters) of the or one of the signal processing algorithms than the intelligible output of speech against loud ambient noise.
- The correspondingly assigned parameters are then changed as a function of the recognized hearing situation.
- Typical hearing situations are, for example, speech at rest, speech with background noise, listening to music, and (driving in) a vehicle.
- A classifier is often "trained" for the respective hearing situations by means of databases in which a large number of different representative audio samples are stored for each hearing situation.
- The disadvantage of this is that in most cases not all combinations of noises that may occur in everyday life can be mapped in such a database, which can lead to misclassification of some hearing situations.
- EP 1858291 A1 describes a method for operating a hearing system which comprises a transmission unit and input/output units linked to it.
- A transfer function of the transmission unit describes how audio signals generated by the input unit are processed in order to derive the audio signals fed to the output unit; this transfer function can be set by one or more transfer parameters.
- US 2003/0144838 A1 describes a method and a device for identifying an acoustic scene, wherein an acoustic input signal is processed in at least two processing stages: in at least one of the processing stages an extraction phase is provided, in which characteristic features are extracted from the input signal, and in each processing stage an identification phase is provided, in which the extracted characteristic features are classified.
- Class information that characterizes or identifies the acoustic scene is generated in at least one of the processing stages.
- WO 2008/084116 A2 describes a method for operating a hearing apparatus comprising an input transducer, an output transducer and a signal processing unit for processing an output signal of the input transducer in order to obtain an input signal for the output transducer by applying a transfer function to the output signal of the input transducer.
- The method comprises the steps of: extracting features of the output signal of the input transducer; classifying the extracted features by at least two classification experts; weighting the outputs of the at least two classification experts by a weight vector in order to obtain a classification output; setting at least some parameters of the transfer function according to the classification output; monitoring user feedback received by the hearing device; and updating the weight vector and/or one of the at least two classification experts in accordance with the user feedback.
- A further prior art document describes a method of operating a hearing aid for a wearer in which acoustic inputs are received and a plurality of acoustic environments is determined by parallel signal processing based on the received acoustic inputs. According to various embodiments, an audiological parameter of the hearing aid is adapted based on the determined plurality of acoustic environments.
- The invention is based on the object of enabling an improved hearing device.
- The method according to the invention serves to operate a hearing device which comprises at least one microphone for converting ambient sound into a microphone signal.
- According to the method, a number of characteristics (also referred to as "features") is derived from the microphone signal or from an input signal formed from it.
- At least three classifiers, which are implemented independently of one another for the analysis of a respectively (preferably permanently) assigned acoustic dimension, are each supplied with a specifically assigned selection from these features. By means of the respective classifier, information about an expression of the acoustic dimension assigned to this classifier is then generated in each case.
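To make this structure concrete, the following is a minimal sketch in Python. It is not part of the patent: the feature names, weights and per-dimension analysis functions are invented placeholders that only illustrate the principle of several independent classifiers, each of which sees only its specifically assigned feature selection.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Features = Dict[str, float]

@dataclass
class DimensionClassifier:
    """One independent classifier for a single acoustic dimension."""
    name: str
    feature_keys: List[str]               # the specifically assigned feature selection
    analyze: Callable[[Features], float]  # returns the expression as a probability 0..1

    def expression(self, all_features: Features) -> float:
        # Each classifier only sees its own assigned subset of the features.
        subset = {k: all_features[k] for k in self.feature_keys}
        return self.analyze(subset)

# Three classifiers implemented independently of one another, one per dimension.
classifiers = [
    DimensionClassifier("speech", ["envelope_mod_4hz", "onset_content"],
                        lambda f: min(1.0, 0.7 * f["envelope_mod_4hz"] + 0.3 * f["onset_content"])),
    DimensionClassifier("music", ["onset_content", "tonality", "noise_floor_level"],
                        lambda f: min(1.0, f["tonality"])),
    DimensionClassifier("vehicle", ["noise_floor_level", "spectral_centroid", "stationarity"],
                        lambda f: f["stationarity"] * f["noise_floor_level"]),
]

features = {"envelope_mod_4hz": 0.8, "onset_content": 0.4, "tonality": 0.2,
            "noise_floor_level": 0.6, "spectral_centroid": 0.3, "stationarity": 0.5}
info = {c.name: c.expression(features) for c in classifiers}
# e.g. {'speech': 0.68, 'music': 0.2, 'vehicle': 0.3}
```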
- At least one signal processing algorithm, which is used (i.e. executed) to process the microphone signal or the input signal into an output signal, is then changed as a function of at least one of these items of information.
- Changing the signal processing algorithm is understood here and below to mean, in particular, that at least one parameter contained in the signal processing algorithm is set to a different parameter value as a function of the expression of the acoustic dimension or of at least one of the acoustic dimensions. In other words, another setting of the signal processing algorithm is "approached" (i.e. effected or made).
- An "acoustic dimension" is understood here and below to mean a group of hearing situations that are related on account of their specific properties.
- The hearing situations mapped in such an acoustic dimension are preferably each described by the same features and differ, in particular, in the current values of the respective features.
- The term "expression" of the respective acoustic dimension is understood here and in the following to mean, in particular, whether (in the sense of a binary distinction) or, in a preferred variant, to what degree (for example, to what percentage) the hearing situation mapped in the respective acoustic dimension is present.
- Such a degree or percentage preferably represents a probability value for the presence of the respective hearing situation.
- For example, the acoustic dimension speech contains the hearing situations "speech at rest", "speech in background noise" or (in particular only) "background noise" (i.e. no speech is present).
- The information about the expression then preferably again contains percentages (for example a 30% probability for speech in background noise and a 70% probability for background noise only).
- The hearing device according to the invention comprises at least one microphone for converting the ambient sound into the microphone signal, and a signal processor in which at least the three classifiers described above are implemented independently of one another for analyzing the respectively (preferably permanently) assigned acoustic dimension.
- The signal processor is set up to carry out the method according to the invention, preferably automatically.
- Specifically, the signal processor is set up to derive the number of features from the microphone signal or from the input signal formed therefrom, to supply each of the three classifiers with a specifically assigned selection from the features, to generate, with the aid of the respective classifier, information about the expression of the respectively assigned acoustic dimension and, as a function of at least one of the three items of information, to change at least one signal processing algorithm (preferably one correspondingly assigned to the acoustic dimension) and preferably to apply it to the microphone signal or the input signal.
- The signal processor (also referred to as a signal processing unit) is formed, at least in its core, by a microcontroller with a processor and a data memory in which the functionality for carrying out the method according to the invention is implemented in the form of operating software ("firmware"), so that the method, possibly in interaction with a user of the hearing device, is carried out automatically when the operating software is executed in the microcontroller.
- Alternatively, the signal processor is formed by a non-programmable electronic component, e.g. an ASIC, in which the functionality for carrying out the method according to the invention is implemented by means of circuitry.
- The fact that at least three classifiers are set up and provided for analyzing a respectively assigned acoustic dimension, and thus in particular for recognizing one hearing situation each, advantageously enables at least three hearing situations to be recognized independently of one another.
- This advantageously increases the flexibility of the hearing device in recognizing hearing situations.
- The invention is based on the insight that at least some hearing situations can be completely independent of one another (i.e., in particular, not influencing one another, or doing so only to an insignificant extent) and can occur in parallel.
- The risk of mutually exclusive and, in particular, contradictory classifications (i.e. assessments of the currently existing acoustic situation) can thus be reduced, at least with regard to the at least three acoustic dimensions analyzed by means of the respectively assigned classifiers.
- In addition, (completely) parallel hearing situations can be recognized in a simple manner and taken into account when changing the signal processing algorithm.
- The hearing device according to the invention has the same advantages as the method according to the invention for operating the hearing device.
- The signal processing algorithms are used, in particular in parallel, for processing the microphone signal or the input signal.
- The signal processing algorithms preferably each "work" on (at least) one assigned acoustic dimension, i.e. they serve to process (e.g. filter, amplify, attenuate) signal components that are relevant for the hearing situations contained or mapped in the assigned acoustic dimension.
- The signal processing algorithms comprise at least one parameter, preferably several parameters, whose parameter values can be changed.
- The parameter values can preferably also be changed in several steps (gradually or continuously) depending on the respective probability of the expression. This enables signal processing that is particularly flexible and can advantageously be adapted to a large number of gradual differences between hearing situations.
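As a hedged illustration of such a gradual parameter change (not taken from the patent; the parameter values are assumptions), a parameter can simply be interpolated between an "off" and an "on" value with the probability of the expression:

```python
def blend_parameter(p_off: float, p_on: float, probability: float) -> float:
    """Continuously interpolate a parameter value with the expression probability."""
    probability = max(0.0, min(1.0, probability))
    return (1.0 - probability) * p_off + probability * p_on

# A 70% probability of "speech in background noise" sets an assumed noise
# reduction mostly, but not fully, towards its maximum value:
noise_reduction_db = blend_parameter(p_off=0.0, p_on=12.0, probability=0.7)  # 8.4 dB
```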
- A different selection from the features is supplied to at least two of the at least three classifiers. This is understood here and in the following to mean, in particular, that a different number of features and/or different features are selected for and supplied to the respective classifier.
- With the correspondingly assigned selection, each of the classifiers is supplied, in particular, only with features that are relevant for the analysis of the assigned acoustic dimension. In other words, for each classifier only those features are selected and supplied that are actually required to determine the hearing situations mapped in the respective acoustic dimension.
- Computational effort, as well as effort in implementing the respective classifier, can thus advantageously be saved, since features that are insignificant for the respective acoustic dimension are not taken into account from the outset. The risk of a misclassification due to the erroneous consideration of non-relevant features can thereby advantageously be further reduced.
- Each of the classifiers is thus "tailored" (i.e. adapted or designed) to a specific "problem", i.e. its own analysis algorithm is matched to the acoustic dimension specifically assigned to this classifier.
- The dimensions "vehicle", "music" and "speech" are preferably used as the at least three acoustic dimensions.
- These three acoustic dimensions are, in particular, the ones that usually occur particularly frequently in the everyday life of a user of the hearing device and that are also independent of one another.
- A fourth classifier is used to analyze a fourth acoustic dimension, in particular the loudness (also: "volume") of ambient noise (also referred to as "interference noise").
- The expressions of this acoustic dimension extend gradually or continuously, over several intermediate stages, from very quiet to very loud.
- The information on the expression, in particular of the acoustic dimensions vehicle and music, can optionally be "binary", i.e. it is only recognized whether or not a vehicle is being driven in, or whether or not music is being listened to.
- Preferably, however, the information of the other three acoustic dimensions is continuously available as a kind of probability value. This is particularly advantageous because errors in the analysis of the respective acoustic dimension cannot be ruled out, and because, in contrast to binary information, "smoother" transitions between different settings can be achieved in a simple way.
- Further classifiers are used, in each case, for wind and/or reverberation estimation and for the detection of the wearer's own voice.
- Features are derived from the microphone signal or the input signal that are selected from an (in particular non-exhaustive) group which includes, in particular, the features signal level, 4 Hz envelope modulation, onset content, level of the background noise (also referred to as the "noise floor level", optionally at a predetermined frequency), spectral centroid of the background noise, stationarity (in particular at a predetermined frequency), tonality and wind activity.
- At least the features level of the background noise, spectral centroid of the background noise and stationarity (and optionally also the feature wind activity) are assigned to the acoustic dimension vehicle.
- The features onset content, tonality and level of the background noise are preferably assigned to the acoustic dimension music.
- In particular, the features onset content and 4 Hz envelope modulation are assigned to the acoustic dimension speech.
- The optionally present acoustic dimension loudness of the ambient noise is assigned, in particular, the features level of the background noise, signal level and spectral centroid of the background noise.
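The feature-to-dimension assignment described above can be summarized as a simple routing table. The sketch below is illustrative only: the short identifiers (MP, MZ, ...) are taken from the claims, while "signal_level" is an assumed key for the signal level feature.

```python
# MP = noise floor level, MZ = spectral centroid of the background noise,
# MM = stationarity, MW = wind activity, MO = onset content, MT = tonality,
# ME = 4 Hz envelope modulation.
FEATURE_SELECTION = {
    "vehicle":  ["MP", "MZ", "MM", "MW"],      # wind activity is optional here
    "music":    ["MO", "MT", "MP"],
    "speech":   ["MO", "ME"],
    "loudness": ["MP", "signal_level", "MZ"],  # optional fourth dimension
}

def route_features(all_features: dict) -> dict:
    """Give each classifier only its specifically assigned feature subset."""
    return {dim: {k: all_features[k] for k in keys}
            for dim, keys in FEATURE_SELECTION.items()}

features = {"MP": 0.6, "MZ": 0.3, "MM": 0.5, "MW": 0.1,
            "MO": 0.4, "MT": 0.2, "ME": 0.8, "signal_level": 0.7}
per_classifier = route_features(features)  # {'vehicle': {...}, 'music': {...}, ...}
```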
- A specifically assigned temporal stabilization is preferably taken into account for each classifier.
- In this case, it is assumed that a state that was present in the past (for example in a preceding time segment of predetermined duration), i.e., in particular, a certain expression of the acoustic dimension, is then still present with a high degree of probability at the current point in time.
- For example, a sliding mean value is formed over (in particular a predetermined number of) preceding time segments.
- A type of "dead time element" can also be provided, by means of which, in a subsequent time segment, the probability is increased that the expression present in the preceding time segment is still present.
- A further optional variant for stabilization uses a counting principle, in which a counter is incremented in a comparatively fast detection cycle (e.g. every 100 milliseconds to every few seconds) and the "detection" of the respective hearing situation is only triggered when a limit value for this counter is exceeded.
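A minimal sketch of this counting principle, with assumed threshold and decay values (the patent does not specify them):

```python
class CounterStabilizer:
    """Counter-based temporal stabilization of a hearing situation detection.

    The counter is incremented on every fast detection cycle (e.g. every
    100 ms) in which the raw detector fires; the hearing situation is only
    reported as detected once the counter exceeds a limit value.
    """

    def __init__(self, limit: int = 10, decay: int = 1):
        self.limit = limit    # assumed limit value
        self.decay = decay    # assumed fall-off when the raw detector is silent
        self.counter = 0

    def update(self, raw_detection: bool) -> bool:
        if raw_detection:
            self.counter += 1
        else:
            self.counter = max(0, self.counter - self.decay)
        return self.counter > self.limit

stabilizer = CounterStabilizer(limit=10)
# With a 100 ms cycle, roughly 1.1 s of consistent raw detections is needed
# before the hearing situation "vehicle" (for example) is actually triggered.
```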
- The respective signal processing algorithm or algorithms is or are changed as a function of at least two of the at least three items of information about the expression of the respectively assigned acoustic dimension.
- The information from several classifiers is therefore taken into account in at least one signal processing algorithm.
- To this end, the respective items of information from the individual classifiers are, in particular, first fed to a fusion element for a common evaluation ("merged").
- On the basis of this common evaluation of all the information, overall information about the present hearing situations is created, in particular.
- A dominant hearing situation is preferably determined in this case, in particular on the basis of the degree of expression reflecting the probability.
- The respective signal processing algorithm or algorithms are then adapted to this dominant hearing situation.
- Optionally, one hearing situation (namely the dominant one) is prioritized by changing the respective signal processing algorithm only as a function of the dominant hearing situation, while other signal processing algorithms and/or the parameters dependent on other hearing situations remain unchanged or are set to a parameter value that has no impact on the signal processing.
- In addition, a hearing situation referred to as a sub-situation is determined, which has a lower dominance compared to the dominant hearing situation.
- This or the respective sub-situation is additionally taken into account in the aforementioned adaptation of the respective signal processing algorithm or algorithms to the dominant hearing situation, and/or is used to adapt a signal processing algorithm specifically assigned to the acoustic dimension of this sub-situation.
- Expediently, such a sub-situation leads to a smaller change in the respectively assigned parameter or parameters than the dominant hearing situation does.
- For example, one or more parameters of a signal processing algorithm which serves to ensure the clearest possible speech intelligibility in the presence of background noise are changed comparatively strongly in order to achieve the highest possible speech intelligibility. If music is also present, however, parameters that serve to attenuate ambient noise are set less strongly (than if only background noise were present), so that the tones of the music are not completely attenuated.
- Conversely, a signal processing algorithm (in particular an additional one) serving for the clear sound reproduction of music is likewise set less strongly than when music is the dominant hearing situation (but more strongly than when no music is present at all), so as not to mask the speech components.
- The parallel presence of several hearing situations is thus preferably taken into account in at least one of the possibly several signal processing algorithms.
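The following sketch illustrates such a fusion step under stated assumptions: the dominant situation is simply the one with the highest probability, and a sub-situation contributes with an assumed weight of 0.5 (the patent only requires that it causes a smaller parameter change than the dominant situation).

```python
def fuse(expressions: dict) -> tuple:
    """Determine the dominant hearing situation and weight the sub-situations."""
    dominant = max(expressions, key=expressions.get)
    weights = {dim: (p if dim == dominant else 0.5 * p)
               for dim, p in expressions.items()}
    return dominant, weights

dominant, weights = fuse({"speech": 0.7, "music": 0.4, "vehicle": 0.05})
# dominant == 'speech'; music still contributes with reduced weight, so noise
# attenuation would be set less strongly than for pure speech in noise.
```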
- Alternatively, each signal processing algorithm is assigned to at least one of the classifiers.
- In this case, at least one parameter of each signal processing algorithm is changed (in particular directly) as a function of the information, output by the respective classifier, about the expression of the assigned acoustic dimension.
- This parameter, or its parameter value, is preferably defined as a function of the respective information.
- Each classifier thus "controls" at least one parameter of at least one signal processing algorithm; a common evaluation of all the information can be omitted here.
- At least one of the classifiers is supplied with status information that is generated independently of the microphone signal or the input signal.
- This status information is then also taken into account, in particular, for evaluating the respective acoustic dimension. For example, it comprises movement and/or location information that is used, for instance, to evaluate the acoustic dimension vehicle.
- This movement and/or location information is generated, for example, by an acceleration sensor or a (global) position sensor arranged in the hearing device itself or in a system connected to it for signal transmission (e.g. a smartphone).
- In this way, in addition to the acoustic evaluation, the probability of the presence of the hearing situation "driving in a vehicle" can be increased in a simple manner.
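A hedged sketch of how such status information could be combined with the acoustic evaluation; the speed threshold and the size of the boost are pure assumptions, since the patent only states that the status information is additionally taken into account:

```python
def vehicle_probability(p_acoustic: float, speed_mps: float) -> float:
    """Raise the acoustic vehicle probability when motion data supports it."""
    moving_fast = speed_mps > 8.0          # ~29 km/h, an assumed threshold
    boost = 0.2 if moving_fast else 0.0    # assumed bounded boost
    return min(1.0, p_acoustic + boost)

# Acoustically the vehicle classifier is unsure (0.6); a speed estimate from a
# connected smartphone pushes the probability up to 0.8.
p = vehicle_probability(p_acoustic=0.6, speed_mps=13.0)
```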
- In the figures, a hearing aid device, referred to below as "hearing aid 1" for short, is shown as a hearing device.
- The hearing aid 1 comprises, as electrical components housed in a housing 2, two microphones 3, a signal processor 4, a loudspeaker 5 and a battery, which can be designed as a primary cell or as a secondary cell (i.e. as a rechargeable battery).
- During operation, the microphones 3 pick up ambient sound, and a microphone signal SM is generated therefrom in each case.
- These two microphone signals SM are fed to the signal processor 4 which, executing four signal processing algorithms A1, A2, A3 and A4, generates an output signal SA from these microphone signals SM and outputs it to the loudspeaker 5, which forms an output transducer.
- The loudspeaker 5 converts the output signal SA into airborne sound, which (when the hearing aid 1 is worn properly) is output to the hearing of a user or wearer (in short: hearing aid wearer) of the hearing aid 1 via a sound tube 7 connected to the housing 2 and an earpiece 8 attached to the end of that tube.
- The hearing aid 1, specifically its signal processor 4, is set up to automatically carry out the method described in more detail below with reference to Figure 2 and Figure 3.
- For this purpose the hearing aid 1, specifically its signal processor 4, comprises at least three classifiers KS, KM and KF. These three classifiers KS, KM and KF are each set up and designed to analyze a specifically assigned acoustic dimension.
- The classifier KS is specifically designed to evaluate the acoustic dimension "speech", i.e. to determine whether speech at rest, speech in background noise or only background noise is present.
- The classifier KM is specifically designed to evaluate the acoustic dimension "music", i.e. to determine whether the ambient sound is dominated by music.
- The classifier KF is specifically designed to evaluate the acoustic dimension "vehicle", i.e. to determine whether the hearing aid wearer is driving in a vehicle.
- The signal processor 4 further comprises a feature analysis module 10 (also referred to as a "feature extraction module"), which is set up to derive a number of (signal) features from the microphone signals SM, specifically from an input signal SE formed from these microphone signals SM.
- The classifiers KS, KM and KF are each supplied with a different, specifically assigned selection from these features.
- The respective classifier KS, KM or KF determines an expression of the respectively assigned acoustic dimension, i.e. the degree to which a hearing situation specifically assigned to that acoustic dimension is present, and outputs this expression as the respective item of information.
- The microphone signals SM are generated from the detected ambient sound and are combined by the signal processor 4 to form the input signal SE (specifically, mixed to form a directional microphone signal).
- The input signal SE formed from the microphone signals SM is fed to the feature analysis module 10, and the number of features is derived therefrom, namely:
- a level of the background noise (feature "MP"),
- a spectral centroid of the background noise (feature "MZ"),
- a stationarity of the signal (feature "MM"),
- a wind activity (feature "MW"),
- an onset content of the signal (feature "MO"),
- a tonality (feature "MT"), and
- a 4 Hz envelope modulation (feature "ME").
- The features ME and MO are fed to the classifier KS for analyzing the acoustic dimension speech.
- The features MO, MT and MP are fed to the classifier KM for analyzing the acoustic dimension music.
- The features MP, MW, MZ and MM are fed to the classifier KF for analyzing the acoustic dimension vehicle (driving in a vehicle).
- The classifiers KS, KM and KF then use specifically adapted analysis algorithms to determine, on the basis of the features supplied in each case, the extent to which, i.e. to what degree, the respective acoustic dimension is expressed.
- The classifier KS is used to determine the probability with which speech at rest, speech in background noise or only background noise is present.
- The classifier KM is used to determine the probability with which music is present.
- The classifier KF is used to determine the probability with which the hearing aid wearer is, or is not, driving in a vehicle.
- In a method step 50 (see Figure 2), the respective expressions of the acoustic dimensions are output to a fusion module 60, which brings the respective items of information together and compares them with one another.
- In the fusion module 60, a decision is made as to which dimension, specifically which hearing situation mapped therein, is currently to be regarded as dominant, and which hearing situations are currently of subordinate importance or can be ruled out entirely.
- The fusion module 60 then changes a number of the parameters of the stored signal processing algorithms A1 to A4 relating to the dominant and the less relevant hearing situations, so that the signal processing is adapted primarily to the dominant hearing situation and, to a lesser extent, to the less relevant hearing situations.
- Each of the signal processing algorithms A1 to A4 is in each case adapted to the presence of one hearing situation, possibly also in parallel with other hearing situations.
- The classifier KF includes a temporal stabilization in a manner not shown in detail. This takes account, in particular, of the fact that a journey in a vehicle usually lasts a relatively long time: if driving in a vehicle has already been recognized in preceding time segments of, for example, 30 seconds to five minutes each, the probability that this hearing situation is present is increased in advance, on the assumption that the driving situation is still ongoing. The same is set up and provided in the classifier KM.
- In the alternative signal flow diagram shown, the fusion module 60 is omitted.
- Instead, each of the classifiers KS, KM and KF is assigned at least one of the signal processing algorithms A1, A2, A3 and A4 in such a way that several parameters contained in the respective signal processing algorithm A1, A2, A3 or A4 can be changed as a function of the expression of the respective acoustic dimension. This means that, on the basis of the respective information about the respective expression, at least one parameter is changed directly, that is, without intermediate fusion.
- For example, the signal processing algorithm A1 depends only on the information from the classifier KS.
- The information from all the classifiers KS, KM and KF flows into the signal processing algorithm A3 and there leads to a change in several parameters.
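A minimal sketch of this direct, fusion-free coupling (illustrative names; the actual parameter mapping is not specified in the patent):

```python
class Algorithm:
    """Stand-in for a signal processing algorithm with changeable parameters."""

    def __init__(self, name: str):
        self.name = name
        self.params: dict = {}

    def set_parameter(self, key: str, value: float) -> None:
        self.params[key] = value

algorithms = {"A1": Algorithm("A1"), "A3": Algorithm("A3")}

# A1 depends only on the speech classifier KS; A3 takes all three expressions.
bindings = {"A1": ["speech"], "A3": ["speech", "music", "vehicle"]}

def update_parameters(expressions: dict) -> None:
    # Each bound expression directly changes at least one parameter of the
    # assigned algorithm, without any intermediate fusion of the information.
    for algo_name, dims in bindings.items():
        for dim in dims:
            algorithms[algo_name].set_parameter(dim, expressions[dim])

update_parameters({"speech": 0.7, "music": 0.4, "vehicle": 0.05})
```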
- The subject matter of the invention is not limited to the exemplary embodiments described above. Rather, further embodiments of the invention can be derived from the above description by a person skilled in the art. In particular, the individual features of the invention described on the basis of the various exemplary embodiments, and their design variants, can also be combined with one another in other ways.
- For example, the hearing aid 1 can also be designed as an in-the-ear hearing aid instead of the behind-the-ear hearing aid shown.
- The subject matter of the invention is defined in the following claims.
Claims (13)
- A method for operating a hearing device (1), which comprises at least one microphone (3) for converting ambient sound into a microphone signal (SM), wherein, according to the method,
  - a number of features (ME, MO, MT, MP, MW, MZ, MM) is derived from the microphone signal (SM) or from an input signal (SE) formed therefrom,
  - at least three classifiers (KS, KM, KF), which are implemented independently of one another for the respective analysis of an assigned acoustic dimension, that is, a group of hearing situations that are related on the basis of their specific properties, are each supplied with a specifically assigned selection from these features (ME, MO, MT, MP, MW, MZ, MM), wherein at least two of the at least three classifiers (KS, KM, KF) are each supplied with a different selection from the features (ME, MO, MT, MP, MW, MZ, MM),
  - by means of the respective classifier (KS, KM, KF), information about an expression of the acoustic dimension assigned to this classifier (KS, KM, KF) is generated in each case, and
  - at least one signal processing algorithm (A1, A2, A3, A4), which is executed in order to process the microphone signal (SM) or the input signal (SE) into an output signal (SA), is changed as a function of at least one of the at least three items of information about the respective expression of the assigned acoustic dimension.
- The method according to claim 1, wherein only those features (ME, MO, MT, MP, MW, MZ, MM) that are relevant for the analysis of the respectively assigned acoustic dimension are supplied to each of the classifiers (KS, KM, KF) with the correspondingly assigned selection.
- The method according to one of claims 1 to 2, wherein, for each of the classifiers (KS, KM, KF), a specific analysis algorithm is used for evaluating the respectively supplied features (ME, MO, MT, MP, MW, MZ, MM).
- The method according to one of claims 1 to 3, wherein vehicle, music and speech are used as the at least three acoustic dimensions.
- The method according to one of claims 1 to 4, wherein features selected from among signal level, 4 Hz envelope modulation (ME), onset content (MO), level of a background noise (MP), spectral centroid of the background noise (MZ), stationarity (MM), tonality (MT) and wind activity (MW) are derived from the microphone signal (SM) or from the input signal (SE), respectively.
- The method according to one of claims 4 and 5, wherein at least the features level of the background noise (MP), spectral centroid of the background noise (MZ) and stationarity (MM) are assigned to the acoustic dimension vehicle, wherein the features onset content (MO), tonality (MT) and level of the background noise (MP) are assigned to the acoustic dimension music, and wherein the features onset content (MO) and 4 Hz envelope modulation (ME) are assigned to the acoustic dimension speech.
- The method according to one of claims 1 to 6, wherein, for each classifier (KS, KM, KF), a specifically assigned temporal stabilization is taken into account.
- The method according to one of claims 1 to 7, wherein the signal processing algorithm or the respective signal processing algorithms (A1, A2, A3, A4) is or are changed as a function of at least two of the at least three items of information concerning the expression of the respectively assigned acoustic dimension.
- The method according to one of claims 1 to 8, wherein the information from the respective classifiers (KS, KM, KF) is supplied to a common evaluation, wherein a dominant hearing situation is determined on the basis of this common evaluation, and wherein the signal processing algorithm or algorithms (A1, A2, A3, A4) is or are adapted to this dominant hearing situation.
- The method according to claim 9, wherein at least one sub-situation with a lower dominance compared to the dominant hearing situation is determined, and wherein this or the respective sub-situation is additionally taken into account in the modification of the signal processing algorithm (A1, A2, A3, A4) or of at least one of the signal processing algorithms (A1, A2, A3, A4).
- The method according to one of claims 1 to 7, wherein each signal processing algorithm (A1, A2, A3, A4) is assigned to at least one of the classifiers (KS, KM, KF), and wherein at least one parameter of each signal processing algorithm (A1, A2, A3, A4) is changed as a function of the information about the expression of the corresponding acoustic dimension supplied by the assigned classifier (KS, KM, KF).
- The method according to one of claims 1 to 11, wherein status information generated independently of the microphone signal (SM) or of the input signal (SE) is supplied to at least one of the classifiers (KS, KM, KF), which status information is additionally taken into account for the evaluation of the respective acoustic dimension.
- A hearing device (1), with at least one microphone (3) for converting ambient sound into a microphone signal (SM), and with a signal processor (4) in which at least three classifiers (KS, KM, KF) are implemented independently of one another for the respective analysis of an assigned acoustic dimension, wherein the signal processor (4) is configured to carry out the method according to one of claims 1 to 12.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102017205652.5A DE102017205652B3 (de) | 2017-04-03 | 2017-04-03 | Verfahren zum Betrieb einer Hörvorrichtung und Hörvorrichtung |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| EP3386215A1 (fr) | 2018-10-10 |
| EP3386215B1 (fr) | 2021-11-17 |
Family
ID=61231167
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP18157220.7A Active EP3386215B1 (fr) | 2017-04-03 | 2018-02-16 | Procédé de fonctionnement d'un dispositif d'aide auditive et dispositif d'aide auditive |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US10462584B2 (fr) |
| EP (1) | EP3386215B1 (fr) |
| CN (1) | CN108696813B (fr) |
| DE (1) | DE102017205652B3 (fr) |
| DK (1) | DK3386215T3 (fr) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102019203786A1 (de) * | 2019-03-20 | 2020-02-13 | Sivantos Pte. Ltd. | Hörgerätesystem |
| DE102019218808B3 (de) * | 2019-12-03 | 2021-03-11 | Sivantos Pte. Ltd. | Verfahren zum Trainieren eines Hörsituationen-Klassifikators für ein Hörgerät |
| DE102020208720B4 (de) * | 2019-12-06 | 2023-10-05 | Sivantos Pte. Ltd. | Verfahren zum umgebungsabhängigen Betrieb eines Hörsystems |
| DE102019220408A1 (de) * | 2019-12-20 | 2021-06-24 | Sivantos Pte. Ltd. | Verfahren zur Anpassung eines Hörinstruments und zugehöriges Hörsystem |
| CN117545420A (zh) * | 2021-06-18 | 2024-02-09 | 索尼集团公司 | 信息处理方法、信息处理系统、数据收集方法和数据收集系统 |
| DE102022212035A1 (de) * | 2022-11-14 | 2024-05-16 | Sivantos Pte. Ltd. | Verfahren zum Betrieb eines Hörgeräts sowie Hörgerät |
Family Cites Families (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| BR9508898A (pt) * | 1994-09-07 | 1997-11-25 | Motorola Inc | Sistema para reconhecer sons falados |
| AU2001246395A1 (en) * | 2000-04-04 | 2001-10-15 | Gn Resound A/S | A hearing prosthesis with automatic classification of the listening environment |
| US7158931B2 (en) * | 2002-01-28 | 2007-01-02 | Phonak Ag | Method for identifying a momentary acoustic scene, use of the method and hearing device |
| EP1513371B1 (fr) * | 2004-10-19 | 2012-08-15 | Phonak Ag | Procédé pour actionner une prothèse auditive et prothèse auditive |
| US8249284B2 (en) * | 2006-05-16 | 2012-08-21 | Phonak Ag | Hearing system and method for deriving information on an acoustic scene |
| EP1858291B1 (fr) * | 2006-05-16 | 2011-10-05 | Phonak AG | Système auditif et méthode de déterminer information sur une scène sonore |
| CN101529929B (zh) * | 2006-09-05 | 2012-11-07 | Gn瑞声达A/S | 具有基于直方图的声环境分类的助听器 |
| WO2008028484A1 (fr) * | 2006-09-05 | 2008-03-13 | Gn Resound A/S | Appareil auditif à classification d'environnement acoustique basée sur un histogramme |
| WO2008084116A2 (fr) * | 2008-03-27 | 2008-07-17 | Phonak Ag | Procédé pour faire fonctionner une prothèse auditive |
| US20100002782A1 (en) * | 2008-07-02 | 2010-01-07 | Yutaka Asanuma | Radio communication system and radio communication method |
| DK2792165T3 (da) | 2012-01-27 | 2019-01-21 | Sivantos Pte Ltd | Tilpasning af en klassifikation af et lydsignal i et høreapparat |
| EP2670168A1 (fr) * | 2012-06-01 | 2013-12-04 | Starkey Laboratories, Inc. | Dispositif d'assistance auditive adaptatif utilisant la détection et la classification d'environnement multiple |
| EP3036915B1 (fr) * | 2013-08-20 | 2018-10-10 | Widex A/S | Prothèse auditive avec un classeur adaptif |
| DE102014207311A1 (de) * | 2014-04-16 | 2015-03-05 | Siemens Medical Instruments Pte. Ltd. | Automatisches Auswählen von Hörsituationen |
| DK3360136T3 (da) | 2015-10-05 | 2021-01-18 | Widex As | Høreapparatsystem og en fremgangsmåde til at drive et høreapparatsystem |
| JP6402810B1 (ja) * | 2016-07-22 | 2018-10-10 | 株式会社リコー | 立体造形用樹脂粉末、立体造形物の製造装置、及び立体造形物の製造方法 |
- 2017-04-03 DE DE102017205652.5A patent/DE102017205652B3/de not_active Expired - Fee Related
- 2018-02-16 DK DK18157220.7T patent/DK3386215T3/da active
- 2018-02-16 EP EP18157220.7A patent/EP3386215B1/fr active Active
- 2018-03-30 US US15/941,106 patent/US10462584B2/en active Active
- 2018-04-03 CN CN201810287586.2A patent/CN108696813B/zh active Active
Also Published As
| Publication number | Publication date |
|---|---|
| US20180288534A1 (en) | 2018-10-04 |
| US10462584B2 (en) | 2019-10-29 |
| CN108696813A (zh) | 2018-10-23 |
| CN108696813B (zh) | 2021-02-19 |
| DE102017205652B3 (de) | 2018-06-14 |
| DK3386215T3 (da) | 2022-02-07 |
| EP3386215A1 (fr) | 2018-10-10 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
| AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
| AX | Request for extension of the european patent |
Extension state: BA ME |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
| 17P | Request for examination filed |
Effective date: 20190409 |
|
| RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
| 17Q | First examination report despatched |
Effective date: 20191014 |
|
| GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
| INTG | Intention to grant announced |
Effective date: 20200225 |
|
| RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: AUBREVILLE, MARC Inventor name: LUGGER, MARKO |
|
| GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
| INTC | Intention to grant announced (deleted) | ||
| GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
| INTG | Intention to grant announced |
Effective date: 20210407 |
|
| GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 502018007846 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: H04R0025000000 Ipc: G10L0025810000 |
|
| GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
| INTC | Intention to grant announced (deleted) | ||
| RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: AUBREVILLE, MARC Inventor name: LUGGER, MARKO |
|
| RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 25/81 20130101AFI20210610BHEP Ipc: H04R 25/00 20060101ALI20210610BHEP Ipc: G10L 25/84 20130101ALI20210610BHEP |
|
| INTG | Intention to grant announced |
Effective date: 20210629 |
|
| GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
| GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
| AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
| REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 502018007846 Country of ref document: DE |
|
| REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: GERMAN |
|
| REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1448738 Country of ref document: AT Kind code of ref document: T Effective date: 20211215 |
|
| REG | Reference to a national code |
Ref country code: DK Ref legal event code: T3 Effective date: 20220201 |
|
| REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
| REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20211117 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220217 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220317 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220317 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220217 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220218 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 |
|
| REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 502018007846 Country of ref document: DE |
|
| PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 |
|
| REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20220228 |
|
| 26N | No opposition filed |
Effective date: 20220818 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220216 Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220216 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220228 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20180216 |
|
| REG | Reference to a national code |
Ref country code: AT Ref legal event code: MM01 Ref document number: 1448738 Country of ref document: AT Kind code of ref document: T Effective date: 20230216 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230216 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 Ref country code: AT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230216 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 |
|
| PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211117 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20250122 Year of fee payment: 8 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DK Payment date: 20250121 Year of fee payment: 8 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: CH Payment date: 20250301 Year of fee payment: 8 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20250122 Year of fee payment: 8 |
|
| PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20250124 Year of fee payment: 8 |