
CN207518802U - Neck-worn voice interactive earphone - Google Patents

Neck-worn voice interactive earphone Download PDF

Info

Publication number
CN207518802U
Authority
CN
China
Prior art keywords
module
microphone
neck
control device
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201721408744.2U
Other languages
Chinese (zh)
Inventor
朱华明
武巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jinruidelu Technology Co Ltd
Original Assignee
Beijing Jinruidelu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jinruidelu Technology Co Ltd filed Critical Beijing Jinruidelu Technology Co Ltd
Priority to CN201721408744.2U priority Critical patent/CN207518802U/en
Application granted granted Critical
Publication of CN207518802U publication Critical patent/CN207518802U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Abstract

The utility model provides a neck-worn voice interactive earphone, including: an acoustic microphone sound-receiving device, a control device and an execution device. The utility model removes ambient sound, noise and the loudspeaker's own output during sound pickup, improves the speech recognition accuracy of the neck-worn voice interactive earphone, further improves the accuracy of the service instructions generated in response, and improves user satisfaction.

Description

Neck-worn voice interactive earphone
Technical Field
The utility model relates to the technical field of smart wearable devices, and in particular to a neck-worn voice interactive earphone.
Background
With the development of smart wearable devices and the continuous improvement of living standards, devices such as smart watches have become increasingly popular, and smart wearable devices are now indispensable communication tools in daily life.
A person hears sound because vibrations in the air are transmitted through the external ear canal to the eardrum, and the vibrations of the eardrum in turn stimulate the auditory nerve. When the outer or middle ear is damaged, or the ear canal is blocked (for example by a hand), sound can still reach the auditory nerve as vibration conducted through the skin and bones.
Bone conduction is a sound conduction mode in which sound waves are transmitted through the skull, temporal bone, bony labyrinth, inner-ear lymph fluid, cochlea, auditory nerve and auditory center. In bone conduction, the vibration of the skull or temporal bone is transmitted directly to the inner ear without passing through the outer and middle ear. Compared with the traditional air conduction mode, in which sound waves are generated by a loudspeaker diaphragm, bone conduction omits several sound-wave transmission steps, can deliver clear sound even in noisy environments, and does not disturb other people because the sound waves are not diffused into the air.
However, bone conduction technology also has several disadvantages. (1) The amount of sound conducted through bone depends on the location of contact with the bone and on the characteristics of the human tissue. Differences between users, such as age, gender and weight, mean that different users have different experiences with the same bone conduction headset, and the differences usually show up as degraded performance. (2) When a user receives or makes a call using bone conduction, the bone conduction device must be pressed tightly against the bone so that the sound wave is transmitted directly to the auditory nerve through the bone. If the device becomes loose, the quality of the audio transmission suffers, and wearing a device pressed firmly against the bone affects the comfort and skin health of the user to varying degrees during use. (3) Bone and human tissue apply frequency-selective amplitude attenuation and delay to the vibration signal, so high-fidelity or broadband audio signals are difficult to conduct to the auditory nerve through the bone; most users of the existing technology therefore complain about the poor "tone quality" and "timbre" of bone conduction headphones. (4) Bone conduction leaks sound. Most existing bone conduction technologies cannot really solve this problem: they compensate for the frequency-dependent attenuation of the vibration signal by bone and tissue with a large volume and a large vibration signal, which users complain causes serious sound leakage, or they require more power, which greatly increases the volume and weight of the device and makes the whole device too heavy. (5) A bone conduction headset is an open binaural system; when the user is in a noisy environment, this openness means the user may not be able to hear the sound from the headset at all.
Patent application No. 102084668A discloses a method and system for processing signals, the system comprising: (a) a processor arranged to process a first input signal detected by a first microphone at a detection time, a second input signal detected by a second microphone at the detection time, and a third input signal detected by the first microphone at the detection time, to produce a corrected signal responsive to the first, second and third input signals; and (b) a communication interface configured to provide the corrected signal to an external system. The method performs noise reduction on the sound through a convolution function and obtains a more accurate sound signal. However, since several sounds are mixed together, some sounds are easily misjudged as correct sounds and recorded in the track, so the output sound is not completely accurate and clear.
The patent application with application number 105721973A discloses a bone conduction headset and an audio processing method thereof. The bone conduction headset comprises a human bone and tissue modeling module, a mathematical pre-corrector, a delay estimation unit, a digital-to-analog converter, an analog-to-digital converter, a first low-pass filter, a second low-pass filter, an audio amplifier, an audio driving amplifier, at least one first microphone and at least one bone conduction oscillator. It monitors in real time the attenuation caused by the bones and tissues of different users, generates a compensation transfer function based on this attenuation information, digitally pre-corrects the input audio signal with the compensation transfer function, and then conducts the signal through the bones and tissue. The method mainly addresses the attenuation of audio signals and cannot distinguish correct audio data from noise.
SUMMARY OF THE UTILITY MODEL
To solve the above technical problem, the utility model combines conventional acoustic microphones (a first microphone and a second microphone), performs a single-frame judgment separately on the two audio inputs, determines the frame with the higher speech probability to be the speech frame, and combines the final speech frames into the output audio data.
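By way of illustration only (not part of the specification, with an assumed 16 kHz sampling rate and hypothetical helper names), the two-path single-frame judgment could be sketched in Python as follows: each 20 ms frame from the two microphone channels is scored for speech probability and the higher-scoring frame is kept in the output audio.

```python
import numpy as np

# Hypothetical speech score for one 20 ms frame: energy times zero-crossing
# activity, squashed into (0, 1).  This is a placeholder, not the patent's formula.
def speech_probability(frame: np.ndarray) -> float:
    energy = float(np.mean(frame ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0)
    return 1.0 - float(np.exp(-1e3 * energy * (zcr + 1e-6)))

# Keep, frame by frame, whichever of the two microphone channels looks more
# like speech, and concatenate the winning frames into the output audio.
def select_speech_frames(ch1: np.ndarray, ch2: np.ndarray,
                         frame_len: int = 320) -> np.ndarray:  # 20 ms at 16 kHz
    n_frames = min(len(ch1), len(ch2)) // frame_len
    out = []
    for i in range(n_frames):
        f1 = ch1[i * frame_len:(i + 1) * frame_len]
        f2 = ch2[i * frame_len:(i + 1) * frame_len]
        out.append(f1 if speech_probability(f1) >= speech_probability(f2) else f2)
    return np.concatenate(out) if out else np.zeros(0)
```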
The utility model provides a neck-worn voice interactive earphone, comprising: an acoustic microphone sound-receiving device, a control device and an execution device;
the acoustic microphone sound-receiving device is arranged on an earplug of the neck-worn voice interactive earphone and/or on a host;
the control device is fixed on the host through a buckle or a screw;
the execution device is arranged on the earplug and/or the host;
the acoustic microphone sound-receiving device is connected with the control device in a wired or wireless manner;
the control device is connected with the execution device through a flexible cable;
the acoustic microphone sound-receiving device collects the user's voice, converts it into a digital signal and sends the digital signal to the control device;
the control device receives the digital signal, converts it into a control signal through computation and sends the control signal to the execution device;
the acoustic microphone sound-receiving device comprises at least one microphone, arranged on the outside of the earplug and/or the outside of the host.
For example, there are three microphones: two are arranged on the outer end faces of the two earplugs and are integrated with the earplugs, and the third microphone is disposed at one end of the host.
Preferably, the control device comprises a coordination analysis module, a filtering module, a compensation module and a mixing module.
Preferably, the coordination analysis module is connected to the microphone and integrated with the control device.
Preferably, the filtering module is connected to the coordination analysis module and the mixing module, and the filtering module is integrated with the control device.
Preferably, the compensation module is connected to the coordination analysis module and the mixing module, respectively, and the compensation module is integrated with the control device.
Preferably, the frequency mixing module is connected to the filtering module, the compensation module and the executing device respectively.
Any one of the above embodiments preferably further includes a first power amplifying module disposed between the coordination analysis module and the microphone.
Any one of the above embodiments preferably further includes a second power amplification module between the compensation module and the coordination analysis module.
Any one of the above embodiments preferably further includes a third power amplifying module between the mixing module and the executing device.
Preferably, the acoustic microphone sound-receiving device is one or more of an electrodynamic microphone, a condenser microphone, a piezoelectric microphone, an electromagnetic microphone, a carbon particle microphone, and a semiconductor microphone.
Preferably, in any of the above embodiments, a processing unit is integrated with the control device.
Preferably, in any of the above embodiments, the processing unit is MTK 6580.
Preferably, the execution device comprises a loudspeaker.
Preferably, any of the above embodiments further comprises at least one battery pack, connected to the control device through a flexible cable.
Preferably, in any of the above embodiments, the control device further integrates an operating memory (RAM).
Preferably, in any of the above embodiments, the control device further integrates a built-in storage memory (ROM) or a ROM slot.
By arranging microphones on the left and right earplugs, the utility model collects the ambient noise, the user's voice and the loudspeaker output, processes these three audio signals, and finally obtains the clearest and most accurate user speech signal. This removes ambient sound, noise and the loudspeaker's own output during sound pickup, improves the speech recognition accuracy of the neck-worn voice interactive earphone, further improves the accuracy of the service instructions generated in response, and improves user satisfaction.
Drawings
Fig. 1 is a schematic structural view of embodiment 1 according to the present invention;
fig. 2 is a schematic structural diagram of an ANC noise reduction circuit according to embodiment 1 of the present invention;
fig. 3 is a schematic structural diagram of an ANC noise reduction circuit according to embodiment 2 of the present invention.
Description of the reference symbols in the drawings:
100 a neck-worn voice interactive earphone;
1, an earplug;
2, a host;
10 an acoustic microphone sound-receiving device;
20 a control device;
30 an execution device;
101 a first microphone; 102 a second microphone; 103 a third microphone;
201 a coordination analysis module; 202 a filtering module; 203 a compensation module; 204 a frequency mixing module;
301 a speaker;
231 an audio characteristic detection sub-module; 232 a main sound source judgment submodule; 233 main sound source compensation submodule; 234 noise reduction submodule.
Detailed Description
The invention is further illustrated with reference to the accompanying drawings and specific examples.
Example 1
As shown in fig. 1, a neck-worn voice interactive earphone 100 comprises: an acoustic microphone sound-receiving device 10, a control device 20 and an execution device 30;
the acoustic microphone sound-receiving device 10 is arranged on the earplug 1 of the neck-worn voice interactive earphone and/or on the host 2;
the control device 20 is arranged on the host 2;
the execution device 30 is arranged on the earplug 1 and/or on the host 2;
the acoustic microphone sound-receiving device 10 is connected with the control device 20 by wire or wirelessly;
the control device 20 is connected with the execution device 30 through a flexible cable;
the acoustic microphone sound-receiving device 10 collects the user's voice, converts it into a digital signal and sends the digital signal to the control device 20;
the control device 20 receives the digital signal, converts it into a control signal through computation and sends the control signal to the execution device 30;
the acoustic microphone sound-receiving device 10 comprises at least one microphone, arranged on the outside of the earplug 1 and/or the outside of the host 2.
As shown in fig. 2, there are three microphones: a first microphone 101 and a second microphone 102 are arranged on the outer end faces of the two earplugs 1 and integrated with them, and a third microphone 103 is provided at one end of the host 2.
The control device 20 includes a coordination analysis module 201, a filtering module 202, a compensation module 203, and a mixing module 204.
The coordination analysis module 201 is connected to the microphone and integrated with the control device 20.
The filtering module 202 is respectively connected to the coordination analysis module 201 and the mixing module 204, and the filtering module 202 is integrated with the control device 20.
The compensation module 203 is connected to the coordination analysis module 201 and the mixing module 204, respectively, and the compensation module 203 is integrated with the control device 20.
The mixing module 204 is connected to the filtering module 202, the compensating module 203 and the executing device 30 respectively.
A first power amplification module is further arranged between the coordination analysis module 201 and the microphone.
A second power amplification module is further disposed between the compensation module 203 and the coordination analysis module 201.
A third power amplifying module is further disposed between the frequency mixing module 204 and the executing device 30.
The acoustic microphone device 10 is any one or more of an electrodynamic microphone, a condenser microphone, a piezoelectric microphone, an electromagnetic microphone, a carbon particle microphone, and a semiconductor microphone.
The control device 20 integrates a processing unit.
The processing unit is MTK 6580.
The execution device 30 comprises a loudspeaker 301.
At least one battery pack is connected to the control device 20 through a flexible cable.
The control device 20 also integrates an operating memory (RAM).
The control device 20 further integrates a built-in storage memory (ROM) or a ROM slot.
Example 2
As shown in fig. 3, the digital signal from the acoustic microphone sound-receiving device 10 includes a first audio signal and a second audio signal.
The first audio signal is the user's voice information collected by the first microphone 101.
The second audio signal is the ambient sound collected by the second microphone 102 within the time range of the first audio signal.
The control device 20 further comprises the following sub-modules: an audio characteristic detection submodule 231, configured to perform audio characteristic detection on the acquired audio signal; a dominant sound source decision submodule 232 configured to perform dominant sound source decision; a dominant sound source compensation submodule 233 for dominant sound source compensation; and a noise reduction sub-module 234 for removing noise.
The steps of audio characteristic detection are as follows: 1) extract a frame of audio data x_i(n) 20 ms long and calculate its average energy E_i, zero-crossing rate ZCR_i, short-time correlation R_i(k) and short-time cross-correlation C_ij(k). 2) From the average energy E_i, the zero-crossing rate ZCR_i, the short-time correlation R_i(k) and the short-time cross-correlation C_ij(k), calculate the non-silence probability and the speech probability of the current frame, where the non-silence probability is referenced to an empirical value of max(E_i*ZCR_i) for channel i and the speech probability is referenced to an empirical value of max{max[R_i(k)]*max[C_ij(k)]} for channel i. 3) From the non-silence probability and the speech probability of channel i and the corresponding empirical decision values, judge the type of the current frame, i.e. whether it is a noise frame, a speech frame or a noise-free ambient sound frame; a noise-free ambient sound frame is marked Ambient, a noise frame is marked Noise, and a speech frame is marked Speech.
The dominant sound source determination submodule 232 determines from which path the current frame is taken as the dominant sound source for the current frame position. The determination method is as follows: 1) when one path carries a Speech frame and the other path carries an Ambient frame or a Noise frame, the path carrying the Speech frame is taken as the main data path for the current frame position; 2) when one path carries an Ambient frame and the other carries a Noise frame, the path carrying the Ambient frame is taken as the main data path for the current frame position; 3) when both paths carry the same kind of frame, the path with the larger probability value is taken as the main data path for the current frame position.
For dominant sound source compensation, once the dominant sound source of the current frame position has been determined, the compensation submodule 233 extracts valid data from the other path and compensates the speech components of the dominant sound source. The speech component compensation method is: 1) full-spectrum sub-band weighted superposition compensation in the frequency domain using the valid audio data of the different channels; 2) spectrum copying based on the correlation of the low-frequency sub-band data to compensate the high-frequency sub-band data.
For noise elimination, the compensated audio data still contains a small amount of noise. The noise reduction submodule 234 obtains the noise spectrum characteristics from the noise frames adjacent to the speech frames of the main data path and suppresses the noise spectrum components of the speech frames in the frequency domain, obtaining purer valid speech data.
For signal output, the finally generated valid speech data is pushed to the terminal device.
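The following Python sketch is a minimal illustration of the audio characteristic detection and the dominant-source decision described above. The feature formulas use standard textbook definitions, and the reference values and thresholds are illustrative placeholders; the specification's exact empirical values are not reproduced here.

```python
import numpy as np

# Per-frame features for a 20 ms frame x from channel i, paired with the
# simultaneous frame y from channel j.  Textbook definitions are assumed;
# the specification's exact formulas are not reproduced.
def frame_features(x: np.ndarray, y: np.ndarray):
    energy = float(np.mean(x ** 2))                           # average energy E_i
    zcr = float(np.mean(np.abs(np.diff(np.sign(x)))) / 2.0)   # zero-crossing rate ZCR_i
    ac = np.correlate(x, x, mode="full")[len(x):]             # short-time correlation R_i(k), k >= 1
    cc = np.correlate(x, y, mode="full")                      # short-time cross-correlation C_ij(k)
    return energy, zcr, float(np.max(np.abs(ac))), float(np.max(np.abs(cc)))

# Classify the frame as 'Ambient', 'Noise' or 'Speech'.  The reference values
# and thresholds stand in for the empirical values mentioned in the text.
def classify_frame(energy, zcr, ac_peak, cc_peak,
                   non_silence_ref=1e-4, speech_ref=1e-3,
                   non_silence_thr=0.5, speech_thr=0.5):
    p_non_silence = min(1.0, energy * zcr / non_silence_ref)
    p_speech = min(1.0, ac_peak * cc_peak / speech_ref)
    if p_non_silence < non_silence_thr:
        return "Ambient", p_non_silence          # quiet, noise-free ambient frame
    if p_speech >= speech_thr:
        return "Speech", p_speech
    return "Noise", p_non_silence

# Dominant-source decision for one frame position: Speech beats Ambient and
# Noise, Ambient beats Noise, and ties go to the larger probability value.
def pick_main_path(label1: str, p1: float, label2: str, p2: float) -> int:
    rank = {"Noise": 0, "Ambient": 1, "Speech": 2}
    if rank[label1] != rank[label2]:
        return 0 if rank[label1] > rank[label2] else 1
    return 0 if p1 >= p2 else 1
```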
The main audio data is imported and the environment judgment data stored in the memory is called; the two are compared to determine the ambient noise environment at the time the main audio was recorded. The ambient noise data is then called from the memory and compared with the main audio data frame by frame. Audio data within a single frame of the main audio that matches the ambient noise data is removed, and valid, noise-free audio data is generated.
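As a hedged sketch of the frame-by-frame removal of stored ambient noise described above, the following Python code uses a spectral-subtraction style approach; the stored environment judgment data is represented here by a short reference recording of the ambient noise, and all names and parameters are illustrative assumptions.

```python
import numpy as np

# Remove stored ambient-noise components from the main audio, frame by frame,
# via magnitude spectral subtraction.  The stored "environment judgment data"
# is modelled here as a short reference recording of the ambient noise.
def remove_ambient_noise(main_audio: np.ndarray, ambient_ref: np.ndarray,
                         frame_len: int = 320) -> np.ndarray:
    noise_mag = np.abs(np.fft.rfft(ambient_ref[:frame_len]))  # stored noise spectrum
    n_frames = len(main_audio) // frame_len
    cleaned = np.zeros(n_frames * frame_len)
    for i in range(n_frames):
        frame = main_audio[i * frame_len:(i + 1) * frame_len]
        spec = np.fft.rfft(frame)
        # Drop the parts of the frame spectrum that match the stored noise.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        phase = np.angle(spec)
        cleaned[i * frame_len:(i + 1) * frame_len] = np.fft.irfft(mag * np.exp(1j * phase), n=frame_len)
    return cleaned
```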
For a better understanding of the present invention, the detailed description above is given in connection with specific embodiments, but it does not limit the present invention. Any simple modification of the above embodiments that remains within the technical essence of the present invention still belongs to its technical solution. In the present specification, each embodiment is described with emphasis on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively brief, and for relevant details reference may be made to the description of the method embodiment.
The method and apparatus of the present invention may be implemented in a number of ways. For example, the methods and systems of the present invention may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustrative purposes only, and the steps of the method of the present invention are not limited to the order specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present invention may also be embodied as programs recorded in a recording medium, the programs including machine readable instructions for implementing a method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A neck-worn voice interactive earphone, comprising: an acoustic microphone sound-receiving device, a control device and an execution device; characterized in that:
the acoustic microphone sound-receiving device is arranged on an earplug of the neck-worn voice interactive earphone and/or on a host;
the control device is fixed on the host through a buckle or a screw;
the execution device is arranged on the earplug and/or the host;
the acoustic microphone sound-receiving device is connected with the control device in a wired or wireless manner;
the control device is connected with the execution device through a flexible cable;
the acoustic microphone sound-receiving device collects the user's voice, converts it into a digital signal and sends the digital signal to the control device;
the control device receives the digital signal, converts it into a control signal through computation and sends the control signal to the execution device;
the acoustic microphone sound-receiving device comprises at least one microphone, arranged on the outside of the earplug and/or the outside of the host.
2. The neck-worn voice interactive headset of claim 1, wherein the control device comprises a coordination analysis module, a filtering module, a compensation module, and a mixing module.
3. The neck-worn voice-interactive headset of claim 2, wherein the coordination analysis module is connected to the microphone and integrated with the control device.
4. The neck-worn voice interactive earphone according to claim 3, wherein the filtering module is connected to the coordination analysis module and the mixing module, respectively, and the filtering module is integrated with the control device.
5. The neck-worn voice interactive earphone according to claim 4, wherein the compensation module is connected to the coordination analysis module and the mixing module respectively, and the compensation module is integrated with the control device.
6. The neck-worn voice interactive earphone according to claim 5, wherein the mixing module is connected to the filtering module, the compensation module and the execution device respectively.
7. The neck-worn voice interactive headset of claim 6, wherein a first power amplification module is further disposed between the coordination analysis module and the microphone.
8. The neck-worn voice interactive headset according to claim 7, wherein a second power amplification module is further disposed between the compensation module and the coordination analysis module.
9. The neck-worn voice interactive earphone according to claim 8, wherein a third power amplification module is further disposed between the mixing module and the execution device.
10. The neck-worn voice interactive headset of claim 9, wherein the acoustic microphone sound-receiving device is any one or more of an electrodynamic microphone, a condenser microphone, a piezoelectric microphone, an electromagnetic microphone, a carbon particle microphone, and a semiconductor microphone.
CN201721408744.2U 2017-10-27 2017-10-27 Neck-worn voice interactive earphone Active CN207518802U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201721408744.2U CN207518802U (en) 2017-10-27 2017-10-27 Neck-worn voice interactive earphone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201721408744.2U CN207518802U (en) 2017-10-27 2017-10-27 Neck-worn voice interactive earphone

Publications (1)

Publication Number Publication Date
CN207518802U true CN207518802U (en) 2018-06-19

Family

ID=62536445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201721408744.2U Active CN207518802U (en) Neck-worn voice interactive earphone

Country Status (1)

Country Link
CN (1) CN207518802U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109729471A (en) * 2017-10-27 2019-05-07 北京金锐德路科技有限公司 ANC noise reduction device for neck-worn voice interactive headset

Similar Documents

Publication Publication Date Title
CN107071647B (en) A kind of sound collection method, system and device
US9380374B2 (en) Hearing assistance systems configured to detect and provide protection to the user from harmful conditions
US9301057B2 (en) Hearing assistance system
CN104144374B (en) Assisting hearing method and system based on mobile device
JP6069830B2 (en) Ear hole mounting type sound collecting device, signal processing device, and sound collecting method
CN118450307A (en) Hearing aid and method for operating the same
US10176821B2 (en) Monaural intrusive speech intelligibility predictor unit, a hearing aid and a binaural hearing aid system
US10034087B2 (en) Audio signal processing for listening devices
WO2016167877A1 (en) Hearing assistance systems configured to detect and provide protection to the user harmful conditions
JP2017028718A (en) Auricle mounted sound collecting device, signal processing device, and sound collecting method
US20250287135A1 (en) Noise cancellation method, headset, apparatus, storage medium, and computer program product
CN112367599B (en) Hearing aid system with cloud background support
CN207995324U (en) Neck wears formula interactive voice earphone
CN109729471A (en) ANC noise reduction device for neck-worn voice interactive headset
CN217064005U (en) Hearing device
CN207518802U (en) Neck wears formula interactive voice earphone
WO2025231439A1 (en) User specific active sound-reduction, hearing protection system, and related computer products and methods
CN207518800U (en) Neck wears formula interactive voice earphone
CN207518792U (en) Neck wears formula interactive voice earphone
CN109729463A (en) The compound audio signal reception device of sound wheat bone wheat of formula interactive voice earphone is worn for neck
CN109729454A (en) Acoustic microphone processing device for neck-worn voice interactive headset
CN207518791U (en) Neck wears formula interactive voice earphone
CN109729470A (en) The sound wheat harvest sound processor of formula interactive voice earphone is worn for neck
EP4598059A1 (en) Prescribing hearing aid features from diagnostic measures
CN109729457A (en) Bone microphone radio processing device for neck-worn voice interactive headset

Legal Events

Date Code Title Description
GR01 Patent grant