
CN112686295B - Personalized hearing loss modeling method - Google Patents


Info

Publication number
CN112686295B
CN112686295B (application CN202011587016.9A)
Authority
CN
China
Prior art keywords
hearing
hearing loss
samples
audiogram
severe
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011587016.9A
Other languages
Chinese (zh)
Other versions
CN112686295A (en)
Inventor
唐闺臣
梁瑞宇
吴亮
王青云
谢跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN202011587016.9A priority Critical patent/CN112686295B/en
Publication of CN112686295A publication Critical patent/CN112686295A/en
Application granted granted Critical
Publication of CN112686295B publication Critical patent/CN112686295B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a personalized hearing loss modeling method. Step (A): obtain audiograms and the corresponding hearing aid insertion gains for a large number of hearing-impaired patient samples. Step (B): divide the hearing-impaired patient samples into three classes by degree of hearing loss: moderate, severe, and very severe. Step (C): for each of the classified moderate, severe, and very severe classes, cluster the hearing aid insertion gains of its samples. Step (D): for each resulting category, compute the average of the audiogram curves corresponding to its insertion gains, which represents that class of individual hearing loss. Step (E): for an audiogram to be classified, compute its distance to each class of individual hearing loss and assign it to the class at minimum distance. The method lets hearing aid fitting rely as little as possible on hearing experts and remedies the shortcoming of existing hearing aid technology that classifies hearing loss from the audiogram alone.

Description

Personalized hearing loss modeling method
Technical Field
The invention relates to a personalized hearing loss modeling method.
Background
Currently, about 350 million people worldwide suffer from some form of hearing loss, and this number is expected to rise to about 630 million over the next decade or so. The annual burden of hearing loss on the global economy is estimated at 750 billion US dollars; it is therefore necessary and urgent to quickly identify and address hearing impairment in affected individuals.
An audiogram is the output of standard audiometry: a plot of discrete hearing thresholds against frequency, conventionally drawn on an inverted axis. Correct interpretation of the audiogram is key to making the best clinical diagnosis and recommending the best treatment. While mobile and automated audiometry can partially relieve the shortage of audiology specialists, many users (e.g., primary care physicians, nurses, and technicians) lack the training needed to adequately interpret or optimally use the large amount of clinical information contained in audiograms.
Audiogram classification is not only the basis on which effective personalized hearing loss modeling methods are developed; it can also be used to study the prevalence of different types of hearing loss. Raymond Carhart, the father of modern audiology, proposed the first standardized classification system for audiograms as early as 1945, and researchers have proposed a variety of classification systems over the years, most of which rely on a set of rules defined by experts. For example, Margolis and Saly, dissatisfied with the complexity and rigidity of the Carhart system, designed AMCLASS™, a rule-based system dedicated to classifying audiograms generated by automated audiometers. AMCLASS™ represents the current state of the art and consists of 161 manually established rules that maximize agreement between the system and expert panels on annotating audiogram configuration, severity, symmetry, and lesion site.
Machine learning is a family of data-driven techniques that learn directly from data; supervised learning, a branch of machine learning that trains models from annotated data, is becoming increasingly popular in medical applications. For example, some researchers have audiologists classify and label audiogram data and then perform personalized hearing loss modeling with decision trees. However, supervised learning requires manual labeling of large amounts of audiogram data, which depends on the expertise of the labeling experts and is susceptible to the subjective preferences of the audiologist.
How to establish an objective and effective personalized hearing loss modeling method that enhances the interpretability of audiograms is key to improving the performance and adoption of hearing aids, and is the problem to be solved at present.
Disclosure of Invention
Aiming at these problems, the invention provides a personalized hearing loss modeling method that enables hearing aid fitting to rely on hearing experts as little as possible and remedies the shortcoming of existing hearing aid technology that classifies hearing loss from the audiogram alone.
To achieve the above technical purpose and effect, the invention is realized by the following technical scheme:
a personalized hearing loss modeling method, comprising:
step (A), acquiring audiograms of a large number of hearing-impaired patient samples and the corresponding hearing aid insertion gains;
step (B), dividing the hearing-impaired patient samples into three classes, namely moderate hearing loss, severe hearing loss, and very severe hearing loss, according to degree of hearing loss;
step (C), for the classified moderate, severe, and very severe hearing loss samples, classifying the hearing aid insertion gains of each class of samples using an unsupervised clustering method;
step (D), calculating the average of the audiogram curves corresponding to the hearing aid insertion gains in each resulting category, to represent that class of individual hearing loss;
step (E), for an audiogram to be classified, calculating its distance to each class of individual hearing loss and assigning it to the class at minimum distance.
Preferably, the hearing aid insertion gain G in step (A) has 133 dimensions: the insertion gains at the 19 frequency points 125 Hz, 160 Hz, 200 Hz, 250 Hz, 315 Hz, 400 Hz, 500 Hz, 630 Hz, 800 Hz, 1000 Hz, 1250 Hz, 1600 Hz, 2000 Hz, 2500 Hz, 3150 Hz, 4000 Hz, 5000 Hz, 6300 Hz, and 8000 Hz at each of the 7 input sound pressure levels 50 dB, 55 dB, 60 dB, 65 dB, 70 dB, 80 dB, and 90 dB SPL (19 × 7 = 133).
Preferably, the step (B) of classifying the hearing-impaired patient samples according to degree of hearing loss comprises the following steps:
B1) for each audiogram HL_i, calculate the average hearing loss μHL_i of hearing-impaired patient sample g_i:
μHL_i = (1/|Φ|) · Σ_{φ∈Φ} HL_i(φ)
where HL_i(φ) denotes the hearing loss of sample g_i at frequency point φ and Φ is the set of measured frequency points;
B2) classify according to the average hearing loss μHL_i of sample g_i into moderate, severe, and very severe hearing loss, where G1 is the moderate hearing loss set, G2 the severe hearing loss set, and G3 the very severe hearing loss set:
G1 = {g_i | μHL_i ∈ (40 dB, 60 dB]}
G2 = {g_i | μHL_i ∈ (60 dB, 80 dB]}
G3 = {g_i | μHL_i > 80 dB}.
Preferably, the step (C) specifically includes the following steps:
C1) set the maximum number of iterations to N and the number of clusters to k, and construct the insertion-gain sample set G = {g_1, g_2, g_3, …, g_n} according to the classification of step (B), where n denotes the number of samples;
C2) randomly select a point from the input sample set as the first cluster center μ_1;
C3) for each point g_i, calculate its minimum distance to the cluster centers already selected:
D_i = min_{j=1,…,r} d(g_i, μ_j)
where r denotes the number of cluster centers already selected and d(g_i, μ_j) = ||g_i − μ_j||_2 denotes the distance between g_i and cluster center μ_j;
C4) select the point with the largest D_i as the new cluster center μ_{r+1};
C5) repeat steps C3) and C4) until k cluster centers {μ_1, μ_2, μ_3, …, μ_k} have been selected;
C6) calculate the distance d_{ij} from each sample g_i to each cluster center vector μ_j (j = 1, 2, …, k):
d_{ij} = ||g_i − μ_j||_2
and assign g_i to the class c_j for which d_{ij} is smallest;
C7) for each class c_j, calculate the mean of the audiograms of all its samples as the new cluster center μ_j;
C8) if none of the k cluster center vectors μ_j changes, or the number of iterations reaches N, execute step C9); otherwise repeat steps C6) to C8);
C9) for each classified subset G_m with n_m samples, calculate the silhouette coefficient of G_m:
S_m = (1/n_m) · Σ_{i=1}^{n_m} s_i
where s_i is the silhouette coefficient of sample g_i;
C10) calculate the silhouette coefficients for all candidate values of k, and keep the number of classes k and the corresponding cluster centers {μ_1, μ_2, μ_3, …, μ_k} that maximize the silhouette coefficient.
Preferably, 2 ≤ k ≤ 6.
Preferably, the silhouette coefficient of a sample is
s_i = (b_i − a_i) / max(a_i, b_i)
where a_i denotes the average distance from g_i to the samples of its own class, and b_i denotes the average distance from g_i to all samples in the class nearest to it.
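As a quick numeric check of the silhouette formula above (the values below are invented for illustration), a sample that sits much closer to its own cluster than to the nearest other cluster gets a coefficient close to 1:

```python
# s_i = (b_i - a_i) / max(a_i, b_i); a_i and b_i here are made-up values
a_i = 2.0  # average distance from g_i to samples in its own class
b_i = 5.0  # average distance from g_i to samples in the nearest other class
s_i = (b_i - a_i) / max(a_i, b_i)
print(s_i)  # 0.6
```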
The invention has the beneficial effects that:
compared with the prior art, the invention constructs an individual hearing loss modeling method based on hearing aid gain, classifies hearing-impaired patients with different hearing losses, and selects a representative audiogram to characterize the differences between the categories, overcoming the current heavy dependence of hearing aid fitting on hearing experts. Specifically:
1) the gain-based classification better matches the actual compensation received by hearing-impaired patients and better reflects individual differences between them;
2) classifying patients with different hearing losses allows hearing loss compensation to be realized while relying on hearing experts as little as possible;
3) the invention enhances the interpretability of the audiogram, helps to fully explain and optimally use the large amount of clinical information it contains, and enables non-experts to decide whether to refer a patient to an audiologist, hearing instrument specialist, or physician as needed.
Drawings
FIG. 1 is an exemplary graph of the moderate hearing loss modeling results of the present invention, with frequency on the x-axis and hearing loss on the y-axis;
FIG. 2 is an exemplary graph of the severe hearing loss modeling results of the present invention, with frequency on the x-axis and hearing loss on the y-axis;
FIG. 3 is an exemplary graph of the very severe hearing loss modeling results of the present invention, with frequency on the x-axis and hearing loss on the y-axis.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments, so that those skilled in the art can better understand and implement the invention; the embodiments are not intended to limit the invention.
A personalized hearing loss modeling method, comprising:
Step (A): acquire audiograms of a large number of hearing-impaired patient samples and the corresponding hearing aid insertion gains.
This step acquires the sample data sources. The more data sources acquired, the higher the classification accuracy of the method; the suggested number of samples is not less than 500 to 1000.
Preferably, the hearing aid insertion gain G in step (A) has 133 dimensions, i.e., the insertion gains at the 19 frequency points 125 Hz, 160 Hz, 200 Hz, 250 Hz, 315 Hz, 400 Hz, 500 Hz, 630 Hz, 800 Hz, 1000 Hz, 1250 Hz, 1600 Hz, 2000 Hz, 2500 Hz, 3150 Hz, 4000 Hz, 5000 Hz, 6300 Hz, and 8000 Hz at each of the input sound pressure levels 50 dB, 55 dB, 60 dB, 65 dB, 70 dB, 80 dB, and 90 dB SPL. That is, the 133 dimensions comprise:
1) the insertion gains at the 19 frequency points for a 50 dB SPL input;
2) the insertion gains at the 19 frequency points for a 55 dB SPL input;
3) the insertion gains at the 19 frequency points for a 60 dB SPL input;
4) the insertion gains at the 19 frequency points for a 65 dB SPL input;
5) the insertion gains at the 19 frequency points for a 70 dB SPL input;
6) the insertion gains at the 19 frequency points for an 80 dB SPL input;
7) the insertion gains at the 19 frequency points for a 90 dB SPL input.
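The layout of the 133-dimensional gain vector described above can be sketched as follows. This is a minimal illustration; the function name and the dict-based input layout are my own, not from the patent:

```python
import numpy as np

# 19 frequency points and 7 input sound pressure levels from the patent text
FREQS_HZ = [125, 160, 200, 250, 315, 400, 500, 630, 800, 1000,
            1250, 1600, 2000, 2500, 3150, 4000, 5000, 6300, 8000]
LEVELS_DB_SPL = [50, 55, 60, 65, 70, 80, 90]

def flatten_gain(gain_by_level):
    """gain_by_level: {input level in dB SPL: list of 19 gains} -> 133-dim vector.

    Concatenation order (level-major) is an assumption for illustration.
    """
    vec = []
    for level in LEVELS_DB_SPL:
        row = gain_by_level[level]
        assert len(row) == len(FREQS_HZ)
        vec.extend(row)
    return np.asarray(vec, dtype=float)

# sanity check: 7 levels x 19 frequency points = 133 dimensions
demo = {lv: [0.0] * 19 for lv in LEVELS_DB_SPL}
print(flatten_gain(demo).shape)  # (133,)
```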
And (B) dividing the samples of the hearing-impaired patients into three types of moderate hearing loss, severe hearing loss and extremely severe hearing loss according to the hearing loss degree.
Preferably, the step (B) of classifying the hearing-impaired patient samples according to degree of hearing loss comprises the following steps:
B1) for each audiogram HL_i, calculate the average hearing loss μHL_i of hearing-impaired patient sample g_i:
μHL_i = (1/|Φ|) · Σ_{φ∈Φ} HL_i(φ)
where HL_i(φ) denotes the hearing loss of sample g_i at frequency point φ and Φ is the set of measured frequency points;
B2) classify according to the average hearing loss μHL_i of sample g_i into moderate, severe, and very severe hearing loss, where G1 is the moderate hearing loss set, G2 the severe hearing loss set, and G3 the very severe hearing loss set:
G1 = {g_i | μHL_i ∈ (40 dB, 60 dB]}
G2 = {g_i | μHL_i ∈ (60 dB, 80 dB]}
G3 = {g_i | μHL_i > 80 dB}.
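Step (B) can be sketched in a few lines under the formulas above (function names and the example audiogram are my own, for illustration only):

```python
def mean_hearing_loss(audiogram):
    """Average of the thresholds HL_i(phi) over the measured frequency points.

    audiogram: {frequency in Hz: hearing loss in dB HL}
    """
    return sum(audiogram.values()) / len(audiogram)

def severity_class(mu_hl):
    """Map average hearing loss to the sets G1/G2/G3 of step B2).

    Samples with mu_hl <= 40 dB fall outside the modeled range (the patent
    discards data below moderate hearing loss).
    """
    if 40 < mu_hl <= 60:
        return "G1"  # moderate hearing loss
    if 60 < mu_hl <= 80:
        return "G2"  # severe hearing loss
    if mu_hl > 80:
        return "G3"  # very severe hearing loss
    return None

audiogram = {500: 45, 1000: 50, 2000: 60, 4000: 70}  # dB HL at 4 frequencies
mu = mean_hearing_loss(audiogram)
print(mu, severity_class(mu))  # 56.25 G1
```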
Step (C): classify the hearing aid insertion gains of the moderate, severe, and very severe hearing loss sample classes separately, using an unsupervised clustering method (such as K-means, mean shift, or DBSCAN).
Preferably, step (C) specifically includes the following steps:
C1) set the maximum number of iterations to N and the number of clusters to k, and construct the insertion-gain sample set G = {g_1, g_2, g_3, …, g_n} according to the classification of step (B), where n denotes the number of samples. Preferably 2 ≤ k ≤ 6, i.e., within each of the moderate, severe, and very severe hearing loss classes, the number of insertion-gain clusters lies between 2 and 6 inclusive.
C2) randomly select a point from the input sample set as the first cluster center μ_1;
C3) for each point g_i, calculate its minimum distance to the cluster centers already selected:
D_i = min_{j=1,…,r} d(g_i, μ_j)
where r denotes the number of cluster centers already selected and d(g_i, μ_j) = ||g_i − μ_j||_2 denotes the distance between g_i and cluster center μ_j;
C4) select the point with the largest D_i as the new cluster center μ_{r+1};
C5) repeat steps C3) and C4) until k cluster centers {μ_1, μ_2, μ_3, …, μ_k} have been selected;
C6) calculate the distance d_{ij} from each sample g_i to each cluster center vector μ_j (j = 1, 2, …, k):
d_{ij} = ||g_i − μ_j||_2
and assign g_i to the class c_j for which d_{ij} is smallest;
C7) for each class c_j, calculate the mean of the audiograms of all its samples as the new cluster center μ_j;
C8) if none of the k cluster center vectors μ_j changes, or the number of iterations reaches N, execute step C9); otherwise repeat steps C6) to C8);
C9) for each classified subset G_m with n_m samples, calculate the silhouette coefficient of G_m:
S_m = (1/n_m) · Σ_{i=1}^{n_m} s_i
where s_i is the silhouette coefficient of sample g_i:
s_i = (b_i − a_i) / max(a_i, b_i)
with a_i the average distance from g_i to the samples of its own class and b_i the average distance from g_i to all samples in the class nearest to it;
C10) calculate the silhouette coefficients for all candidate values of k, and keep the number of classes k and the corresponding cluster centers {μ_1, μ_2, μ_3, …, μ_k} that maximize the silhouette coefficient.
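The patent names K-means among the usable clustering methods, so steps C1) to C10) can be sketched in NumPy as farthest-point initialization, K-means iterations, and a silhouette-based choice of k. All function names and the synthetic toy data below are my own; this is a minimal sketch, not the patent's implementation:

```python
import numpy as np

def farthest_point_init(X, k, rng):
    # C2)-C5): first center chosen at random; each next center is the point
    # with the largest minimum distance D_i to the centers chosen so far
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - np.array(centers)[None], axis=2), axis=1)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

def kmeans(X, k, n_iter=100, rng=None):
    # C6)-C8): assign each sample to its nearest center, recompute centers as
    # class means, stop when centers no longer change or n_iter (N) is reached
    rng = rng if rng is not None else np.random.default_rng(0)
    centers = farthest_point_init(X, k, rng)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = np.argmin(d, axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def silhouette(X, labels):
    # C9): mean over samples of s_i = (b_i - a_i) / max(a_i, b_i)
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        a = D[i, same & (np.arange(n) != i)].mean() if same.sum() > 1 else 0.0
        b = min(D[i, labels == j].mean() for j in set(labels) if j != labels[i])
        s[i] = (b - a) / max(a, b)
    return s.mean()

# C10): keep the k in 2..6 with the largest silhouette coefficient
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(8, 1, (30, 5))])
best = max(range(2, 7),
           key=lambda k: silhouette(X, kmeans(X, k, rng=np.random.default_rng(0))[0]))
print(best)  # two well-separated blobs, so the best k is 2
```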
Step (D): for each cluster, calculate the average of the audiogram curves corresponding to its hearing aid insertion gains, to represent that class of individual hearing loss.
Step (E): for an audiogram to be classified, calculate its distance to each class of individual hearing loss and assign it to the class at minimum distance.
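Steps (D) and (E) can be sketched as follows, with a toy example using made-up audiograms; all names and values are my own, for illustration only:

```python
import numpy as np

def class_representatives(audiograms, labels):
    """(D): average the audiograms of each cluster's members.

    audiograms: (n, f) array of thresholds; labels: cluster index per sample.
    Returns {cluster index: representative audiogram}.
    """
    return {j: audiograms[labels == j].mean(axis=0) for j in set(labels)}

def classify(audiogram, representatives):
    """(E): assign to the class whose representative is at minimum distance."""
    return min(representatives,
               key=lambda j: np.linalg.norm(audiogram - representatives[j]))

audiograms = np.array([[45., 50, 60], [47, 52, 58], [85, 90, 95], [82, 88, 93]])
labels = np.array([0, 0, 1, 1])
reps = class_representatives(audiograms, labels)
print(classify(np.array([44., 49, 61]), reps))  # nearest to the first class: 0
```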
The following description is given with reference to a specific example. The experimental data come from the National Health and Nutrition Examination Survey (NHANES), part of which assesses subjects' hearing by pure tone audiometry, so the NHANES data set contains a large number of pure tone audiograms. The experiments used audiograms collected from 1999 to 2016 with a conventional audiometer and supra-aural or insert earphones under a standard pure tone audiometric protocol, yielding a data set of 21436 audiograms from participants aged 12 to 85 years (mean: 39 ± 21 years). The data contain air conduction thresholds at 7 test frequencies: 500 Hz, 1000 Hz, 2000 Hz, 3000 Hz, 4000 Hz, 6000 Hz, and 8000 Hz.
For efficient evaluation, we preprocessed the data:
1) deleting the incomplete audiogram with at least one missing threshold;
2) discard data below moderate hearing loss, i.e., audiogram with average thresholds below 40dB HL at 500Hz, 1000Hz, 2000Hz, and 4000 Hz.
The screened data comprise 1578 moderate, 356 severe, and 63 very severe hearing loss samples. The moderate samples fall into 3 classes of 494, 393, and 691 samples; the severe samples into 3 classes of 135, 122, and 99 samples; and the very severe samples into 2 classes of 22 and 41 samples, giving 8 classes in total. The silhouette coefficients of the classification results for the moderate, severe, and very severe samples reach 0.345, 0.380, and 0.466, respectively. The classification results are shown in figs. 1 to 3:
as can be seen from FIG. 1, the discrimination between categories is higher in the high-frequency part, consistent with the greater high-frequency hearing loss typical of hearing-impaired patients. The whole hearing-impaired population is divided into only 8 classes, fewer than in traditional classification methods, which makes class switching feasible for fitting-free hearing aids. Moreover, the time required for expert review far exceeds that of the proposed method, whose processing time is almost zero.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (4)

1. A method of personalized hearing loss modeling, comprising:
step (A), acquiring audiograms of a large number of hearing-impaired patient samples and the corresponding hearing aid insertion gains;
step (B), dividing the hearing-impaired patient samples into three classes, namely moderate hearing loss, severe hearing loss, and very severe hearing loss, according to degree of hearing loss;
step (C), for the classified moderate, severe, and very severe hearing loss samples, classifying the hearing aid insertion gains of each class of samples using an unsupervised clustering method;
step (D), calculating the average of the audiogram curves corresponding to the hearing aid insertion gains in each resulting category, to represent that class of individual hearing loss;
step (E), for an audiogram to be classified, calculating its distance to each class of individual hearing loss and assigning it to the class at minimum distance;
wherein the step (B) of classifying the hearing-impaired patient samples according to degree of hearing loss comprises the following steps:
B1) for each audiogram HL_i, calculating the average hearing loss μHL_i of hearing-impaired patient sample g_i:
μHL_i = (1/|Φ|) · Σ_{φ∈Φ} HL_i(φ)
where HL_i(φ) denotes the hearing loss of sample g_i at frequency point φ and Φ is the set of measured frequency points;
B2) classifying according to the average hearing loss μHL_i of sample g_i into moderate, severe, and very severe hearing loss, where G1 is the moderate hearing loss set, G2 the severe hearing loss set, and G3 the very severe hearing loss set:
G1 = {g_i | μHL_i ∈ (40 dB, 60 dB]}
G2 = {g_i | μHL_i ∈ (60 dB, 80 dB]}
G3 = {g_i | μHL_i > 80 dB};
and wherein the step (C) specifically comprises the following steps:
C1) setting the maximum number of iterations to N and the number of clusters to k, and constructing the insertion-gain sample set G = {g_1, g_2, g_3, …, g_n} according to the classification of step (B), where n denotes the number of samples;
C2) randomly selecting a point from the input sample set as the first cluster center μ_1;
C3) for each point g_i, calculating its minimum distance to the cluster centers already selected:
D_i = min_{j=1,…,r} d(g_i, μ_j)
where r denotes the number of cluster centers already selected and d(g_i, μ_j) = ||g_i − μ_j||_2 denotes the distance between g_i and cluster center μ_j;
C4) selecting the point with the largest D_i as the new cluster center μ_{r+1};
C5) repeating steps C3) and C4) until k cluster centers {μ_1, μ_2, μ_3, …, μ_k} have been selected;
C6) calculating the distance d_{ij} from each sample g_i to each cluster center vector μ_j (j = 1, 2, …, k):
d_{ij} = ||g_i − μ_j||_2
and assigning g_i to the class c_j for which d_{ij} is smallest;
C7) for each class c_j, calculating the mean of the audiograms of all its samples as the new cluster center μ_j;
C8) if none of the k cluster center vectors μ_j changes, or the number of iterations reaches N, executing step C9); otherwise repeating steps C6) to C8);
C9) for each classified subset G_m with n_m samples, calculating the silhouette coefficient of G_m:
S_m = (1/n_m) · Σ_{i=1}^{n_m} s_i
where s_i is the silhouette coefficient of sample g_i;
C10) calculating the silhouette coefficients for all candidate values of k, and keeping the number of classes k and the corresponding cluster centers {μ_1, μ_2, μ_3, …, μ_k} that maximize the silhouette coefficient.
2. The personalized hearing loss modeling method of claim 1, wherein the hearing aid insertion gain G in step (A) has 133 dimensions, namely the insertion gains at 125 Hz, 160 Hz, 200 Hz, 250 Hz, 315 Hz, 400 Hz, 500 Hz, 630 Hz, 800 Hz, 1000 Hz, 1250 Hz, 1600 Hz, 2000 Hz, 2500 Hz, 3150 Hz, 4000 Hz, 5000 Hz, 6300 Hz, and 8000 Hz at each of the input sound pressure levels 50 dB, 55 dB, 60 dB, 65 dB, 70 dB, 80 dB, and 90 dB SPL.
3. The personalized hearing loss modeling method of claim 1, wherein 2 ≤ k ≤ 6.
4. The personalized hearing loss modeling method of claim 1, wherein the silhouette coefficient of a sample is
s_i = (b_i − a_i) / max(a_i, b_i)
where a_i denotes the average distance from g_i to the samples of its own class, and b_i denotes the average distance from g_i to all samples in the class nearest to it.
CN202011587016.9A 2020-12-28 2020-12-28 Personalized hearing loss modeling method Active CN112686295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011587016.9A CN112686295B (en) 2020-12-28 2020-12-28 Personalized hearing loss modeling method


Publications (2)

Publication Number Publication Date
CN112686295A (en) 2021-04-20
CN112686295B (en) 2021-08-24

Family

ID=75454582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011587016.9A Active CN112686295B (en) 2020-12-28 2020-12-28 Personalized hearing loss modeling method

Country Status (1)

Country Link
CN (1) CN112686295B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113411733B (en) * 2021-06-18 2023-04-07 南京工程学院 Parameter self-adjusting method for non-fitting hearing aid
CN114786105B (en) * 2022-03-02 2024-10-11 左点实业(湖北)有限公司 Hearing compensation integral control method and device for hearing aid
CN114786107B (en) * 2022-05-10 2023-08-22 东南大学 Hearing aid fitting method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107221339A (en) * 2017-05-22 2017-09-29 华北电力大学 Based on gain compensation audiphone voice quality W PESQ method for objectively evaluating
CN109729485A (en) * 2017-09-15 2019-05-07 奥迪康有限公司 For adjusting the method, equipment and computer program of hearing aid device
CN111553399A (en) * 2020-04-21 2020-08-18 佳都新太科技股份有限公司 Feature model training method, device, equipment and storage medium
CN111968677A (en) * 2020-08-21 2020-11-20 南京工程学院 Voice quality self-evaluation method for fitting-free hearing aid

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7223245B2 (en) * 2002-01-30 2007-05-29 Natus Medical, Inc. Method and apparatus for automatic non-cooperative frequency specific assessment of hearing impairment and fitting of hearing aids
US8075494B2 (en) * 2007-03-05 2011-12-13 Audiology Incorporated Audiogram classification system
NL2004294C2 (en) * 2010-02-24 2011-08-25 Ru Jacob Alexander De Hearing instrument.
EP2635046A1 (en) * 2012-02-29 2013-09-04 Bernafon AG A method of fitting a binaural hearing aid system
EP2931114A1 (en) * 2012-12-12 2015-10-21 Phonak AG Audiometric self-testing
US9131321B2 (en) * 2013-05-28 2015-09-08 Northwestern University Hearing assistance device control
EP2858381A1 (en) * 2013-10-03 2015-04-08 Oticon A/s Hearing aid specialised as a supplement to lip reading
US9363614B2 (en) * 2014-02-27 2016-06-07 Widex A/S Method of fitting a hearing aid system and a hearing aid fitting system
US10097937B2 (en) * 2015-09-15 2018-10-09 Starkey Laboratories, Inc. Methods and systems for loading hearing instrument parameters
DE102016216054A1 (en) * 2016-08-25 2018-03-01 Sivantos Pte. Ltd. Method and device for setting a hearing aid device
US20200129094A1 (en) * 2017-03-15 2020-04-30 Steven Brian Levine Diagnostic hearing health assessment system and method
CN108652639B (en) * 2018-05-17 2020-11-24 佛山博智医疗科技有限公司 Hearing test result graph automatic identification method
EP3627857A1 (en) * 2018-09-20 2020-03-25 Oticon A/s Hearing loss class calculation
CN109246567A (en) * 2018-09-29 2019-01-18 湖南可孚医疗科技发展有限公司 A kind of hearing evaluation detection system
CN209629660U (en) * 2018-12-20 2019-11-15 江苏贝泰福医疗科技有限公司 A kind of mobile quick universal newborn hearing screening instrument
US10905337B2 (en) * 2019-02-26 2021-02-02 Bao Tran Hearing and monitoring system
CN111768802B (en) * 2020-09-03 2020-12-08 江苏爱谛科技研究院有限公司 Cochlear implant speech processing method and system


Also Published As

Publication number Publication date
CN112686295A (en) 2021-04-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant