CN113542963B - Sound mode control method, device, electronic equipment and storage medium - Google Patents
Sound mode control method, device, electronic equipment and storage medium
- Publication number
- CN113542963B CN113542963B CN202110825740.9A CN202110825740A CN113542963B CN 113542963 B CN113542963 B CN 113542963B CN 202110825740 A CN202110825740 A CN 202110825740A CN 113542963 B CN113542963 B CN 113542963B
- Authority
- CN
- China
- Prior art keywords
- mode
- sound
- sound mode
- information
- target sound
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor ; Earphones; Monophonic headphones
- H04R1/1041—Mechanical or electronic switches, or control elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72457—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Environmental & Geological Engineering (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Telephone Function (AREA)
Abstract
The embodiment of the application discloses a sound mode control method and device, electronic equipment and a storage medium. The method comprises: detecting the current sound mode of the electronic equipment; if the current sound mode is a first target sound mode, determining current call scene information and motion information of the electronic equipment; determining a corresponding second target sound mode according to the call scene information and the motion information; and switching the first target sound mode to the second target sound mode. In this way, the electronic equipment can determine the appropriate sound mode according to the user's actual call scene information and motion information, and the user is prevented from frequently and manually operating the electronic equipment to enter different sound modes.
Description
Technical Field
The present application relates to the field of communications technologies, and in particular, to a sound mode control method and apparatus, an electronic device, and a storage medium.
Background
With the development of acoustic technology, people often wear noise reduction earphones in noisy environments such as buses and subways, so that external noise can be actively cancelled to a certain extent.
However, existing noise reduction earphones are set to a fixed earphone mode and cannot determine the appropriate earphone usage mode according to the user's actual usage scene.
Disclosure of Invention
The embodiment of the application provides a sound mode control method and device, electronic equipment and a storage medium. The method can actively adjust the sound mode of the electronic equipment according to the actual use scene of the user.
In a first aspect, an embodiment of the present application provides a sound mode control method, applied to an electronic device, including:
detecting a current sound mode of the electronic equipment;
if the current sound mode is a first target sound mode, determining current call scene information and motion information of the electronic equipment;
determining a corresponding second target sound mode according to the call scene information and the motion information;
switching the first target sound mode to the second target sound mode.
In a second aspect, an embodiment of the present application provides a sound mode control apparatus, applied to an electronic device, including:
the detection module is used for detecting the current sound mode of the electronic equipment;
the first determining module is used for determining the current call scene information and the motion information of the electronic equipment if the current sound mode is a first target sound mode;
the second determining module is used for determining a corresponding second target sound mode according to the call scene information and the motion information;
and the control module is used for switching the first target sound mode into the second target sound mode.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a memory storing executable program codes, and a processor coupled to the memory, where the processor calls the executable program codes stored in the memory to perform the steps in the sound mode control method provided in the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the steps in the sound mode control method provided by the present application.
In the embodiment of the application, by detecting the current sound mode of the electronic equipment, if the current sound mode is the first target sound mode, the current call scene information and the motion information of the electronic equipment are determined. And then determining a corresponding second target sound mode according to the call scene information and the motion information, and finally switching the first target sound mode into the second target sound mode. Therefore, the electronic equipment can determine the corresponding sound mode according to the actual call scene information and the motion information of the user, and the user is prevented from frequently and manually operating the electronic equipment to enter different sound modes.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a first flowchart of a sound mode control method provided in an embodiment of the present application.
FIG. 2 is a second flowchart of the sound mode control method provided in an embodiment of the present application;
FIG. 3 is a third flowchart of the sound mode control method provided in an embodiment of the present application;
FIG. 4 is a first structural schematic diagram of a sound mode control device provided in an embodiment of the present application;
FIG. 5 is a second structural schematic diagram of a sound mode control device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, the sound mode of an electronic device such as a noise reduction earphone is usually a sound mode selected manually by the user. However, as the user enters different usage scenes, keeping the same sound mode all the time may prevent the user from accurately perceiving external sound, so the user needs to switch the sound mode of the electronic device manually and frequently.
In order to solve the foregoing technical problem, embodiments of the present application provide a sound mode control method and apparatus, an electronic device, and a storage medium. The sound mode control method can be applied to various electronic devices such as mobile phones, computers, earphones, and VR devices.
Referring to fig. 1, fig. 1 is a first flowchart of a sound mode control method according to an embodiment of the present application. The sound mode control method can adaptively switch the sound mode according to the user's actual usage environment. The sound mode control method may include the following steps:
110. the current sound mode of the electronic equipment is detected.
The electronic device includes a plurality of sound modes such as a noise reduction mode, a normal mode, a transparent mode, and the like. The noise reduction mode comprises an active noise reduction function, in which a noise reduction system generates sound waves that are inverted (anti-phase) relative to the external noise so as to cancel the noise and achieve the noise reduction effect. The normal mode is a mode without any additional processing of the sound, so the audio played in the earphone retains its original effect. The transparent mode plays the ambient sound through the earphone speaker, so that the user can hear the external ambient sound.
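For illustration only, the following minimal sketch models the difference between these three base modes as a toy per-sample mixer; the mode names, the signal model, and the simple anti-phase cancellation are assumptions made for clarity, not the patented implementation.

```python
from enum import Enum

class SoundMode(Enum):
    NOISE_REDUCTION = "noise_reduction"   # active noise cancellation
    NORMAL = "normal"                     # no additional processing
    TRANSPARENT = "transparent"           # ambient sound replayed electronically

def eardrum_signal(mode, playback, ambient_leak, ambient_mic):
    """Toy per-sample model of what reaches the eardrum.

    playback      -- audio the earphone is playing
    ambient_leak  -- ambient sound that passes the earphone shell passively
    ambient_mic   -- ambient sound picked up by the outer microphone
    """
    out = []
    for p, leak, mic in zip(playback, ambient_leak, ambient_mic):
        if mode is SoundMode.NOISE_REDUCTION:
            # the speaker adds an anti-phase copy of the ambient sound,
            # which cancels the passive leak at the eardrum
            out.append(p + (-mic) + leak)
        elif mode is SoundMode.NORMAL:
            # playback only, plus whatever leaks in passively
            out.append(p + leak)
        else:  # TRANSPARENT
            # ambient sound is electronically replayed and superposed on the
            # passive leak, so the user hears the external environment
            out.append(p + mic + leak)
    return out
```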
The electronic device may detect a current sound mode, for example, the current sound mode is one of multiple sound modes preset by the electronic device, or the current sound mode is a sound mode set by the user.
In some embodiments, before detecting the current sound mode of the electronic device, the user may set the sound mode according to his or her own habits. For example, the user may adjust parameters in a sound parameter adjustment interface to obtain adjusted sound parameters, and the electronic device then generates the first target sound mode according to the adjusted sound parameters.
120. And if the current sound mode is the first target sound mode, determining the current call scene information and the motion information of the electronic equipment.
In some embodiments, the electronic device may determine whether the current sound mode is a first target sound mode. The first target sound mode may be a transparent mode, may be another sound mode, or may be a sound mode specified by the user.
If the current sound mode is determined to be the first target sound mode, the current call scene information and the motion information of the electronic device are determined. The call scene information indicates whether the current scene is a call scene or a non-call scene, and the motion information comprises position information, movement speed information and the like of the user.
In some embodiments, prior to determining the current call scenario information and motion information of the electronic device, the electronic device may also determine a current control state, the control state including an automatic control sound mode state and a manual control sound mode state. And if the electronic equipment is in the state of automatically controlling the sound mode, acquiring scene information under the current scene. And if the electronic equipment is in the state of manually controlling the sound mode, stopping acquiring the scene information in the current scene, and simultaneously keeping the sound mode of the electronic equipment as the first target sound mode, namely keeping the sound mode of the electronic equipment as the current sound mode.
130. And determining a corresponding second target sound mode according to the call scene information and the motion information.
In some embodiments, if the current scene is determined to be a call scene according to the call scene information, the motion information of the electronic device is detected, and the corresponding second target sound mode is determined according to the motion information. For example, if it is determined from the motion information that the user is running and is talking while running, airflow around the electronic device may generate wind noise; in order to prevent the wind noise from affecting the call, the second target sound mode may be determined as an anti-wind-noise mode. For another example, if it is determined from the motion information that the user is not moving, which indicates that the user is talking in place and the noise level of the call environment is not changing, the electronic device may keep the second target sound mode as the current sound mode.
In some embodiments, if the current scene is determined to be a non-call scene according to the call scene information, the user may be listening to music, watching a video, or the like. In this case, the motion information of the electronic device may be detected, and the voice information of the user may also be detected, where the voice information indicates whether the user is speaking.
Finally, the corresponding second target sound mode is determined according to the user's voice information and the motion information of the electronic device. For example, when the user is moving and speaking, the user may be walking and talking with someone; so that the user can hear both the conversation partner and the external environmental sound, the second target sound mode may be determined as the full transparent enhanced human voice mode. For another example, if the user is neither moving nor speaking, the user may be working; in order to prevent the user from missing external speech, the second target sound mode may be determined as the human voice mode.
140. The first target sound mode is switched to the second target sound mode.
After the corresponding second target sound mode is determined according to the call scene information and the motion information, the electronic device may switch the first target sound mode to the second target sound mode. Therefore, the self-adaptive sound mode switching according to the use scene of the user is completed, and the user is prevented from manually and frequently replacing the sound mode.
In some embodiments, after the electronic device switches the first target sound mode to the second target sound mode, the electronic device may detect whether the user has changed the control state of the electronic device, and if the user changes the automatic control sound mode state of the electronic device to the manual control sound mode state, the switching of the sound mode according to the actual usage scenario of the user is stopped.
In some embodiments, after the electronic device switches the first target sound mode to the second target sound mode, the electronic device may continue to detect current call scene information and motion information of the electronic device, and if the current call scene information and motion information of the electronic device are changed, continue to determine a third target sound mode according to the changed call scene information and motion information. And switches the second target sound mode to the third target sound mode.
In the embodiment of the application, by detecting the current sound mode of the electronic equipment, if the current sound mode is the first target sound mode, the current call scene information and the motion information of the electronic equipment are determined. And then determining a corresponding second target sound mode according to the call scene information and the motion information, and finally switching the first target sound mode into the second target sound mode. Therefore, the electronic equipment can determine the corresponding sound mode according to the actual call scene information and the motion information of the user, and the user is prevented from frequently and manually operating the electronic equipment to enter different sound modes.
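As a rough illustration, steps 110 to 140 can be summarized as a simple control loop. The sketch below is an assumption for clarity: the `device` object and its method names are hypothetical, and the detailed mode decision is left as a placeholder that is filled in by the decision-table sketch after the Fig. 2 and Fig. 3 walkthrough below.

```python
def decide_second_mode(in_call, device_moving, user_speaking):
    """Placeholder; the full decision table is sketched after the
    walkthrough of Fig. 2 and Fig. 3 below."""
    raise NotImplementedError

def sound_mode_control_loop(device):
    """Minimal sketch of steps 110-140. 'device' is a hypothetical object
    exposing the detection interfaces described in the embodiments."""
    current_mode = device.detect_current_sound_mode()            # step 110
    if current_mode != device.first_target_sound_mode:           # step 120
        return  # only act when the device is in the first target sound mode
    if not device.auto_control_enabled:
        return  # manual control state: keep the current sound mode
    in_call = device.is_in_call()                # call scene information
    moving = device.has_moved()                  # motion information
    speaking = device.is_user_speaking() if not in_call else False
    second_mode = decide_second_mode(in_call, moving, speaking)  # step 130
    device.switch_sound_mode(second_mode)                        # step 140
```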
For a more detailed understanding of the sound mode control method provided in the present embodiment, please refer to fig. 2; fig. 2 is a second flowchart of the sound mode control method provided in the present embodiment. The sound mode control method may include the following steps:
201. the current sound mode of the electronic equipment is detected.
In some embodiments, the electronic device may detect the current sound mode, for example, the current sound mode is one of a plurality of sound modes preset by the electronic device, or the current sound mode is a sound mode set by the user.
Before detecting the current sound mode of the electronic device, the user can set the sound mode according to his or her own habits. The user can adjust parameters in the sound parameter adjustment interface to obtain adjusted sound parameters, and the electronic device then generates the first target sound mode according to the adjusted sound parameters.
For example, the electronic device may display a sound parameter adjustment interface having a corresponding frequency and a sound adjustment intensity range corresponding to the frequency. For example, in the sound parameter adjustment interface, there are 6 adjustment frequencies of 300Hz, 500Hz, 800Hz, 1KHz, 1.5KHz, and 2KHz, and the sound adjustment intensity range corresponding to each frequency is-6 dB to +6dB, and the user can adjust the sound intensity for each frequency, for example, adjust the sound adjustment intensity corresponding to the 300Hz frequency to +3dB.
After the user adjusts the sound adjustment parameters on the sound parameter adjustment interface, the electronic device may determine the adjustment frequency corresponding to the sound adjustment parameters and the sound adjustment intensity corresponding to that adjustment frequency, and then generate the first target sound mode according to the adjustment frequency and the sound adjustment intensity.
In some embodiments, EQ (equalizer) filter parameters and the like can also be provided in the sound parameter adjustment interface for the user to adjust.
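As a concrete illustration of how a first target sound mode could be built from the adjusted parameters, the sketch below stores a per-frequency gain table clamped to the -6 dB to +6 dB range from the example above; the data structure, the helper name, and the clamping behavior are assumptions, not the claimed implementation.

```python
# Adjustment frequencies and intensity range taken from the example above.
ADJUST_FREQS_HZ = [300, 500, 800, 1000, 1500, 2000]
MIN_GAIN_DB, MAX_GAIN_DB = -6.0, 6.0

def build_first_target_sound_mode(user_adjustments):
    """user_adjustments: dict mapping frequency (Hz) -> requested gain (dB),
    e.g. {300: 3.0}. Returns a per-frequency EQ table for the target mode."""
    eq = {}
    for freq in ADJUST_FREQS_HZ:
        gain = user_adjustments.get(freq, 0.0)
        # keep the adjustment inside the interface's allowed intensity range
        eq[freq] = max(MIN_GAIN_DB, min(MAX_GAIN_DB, gain))
    return {"name": "first_target", "eq_gains_db": eq}

# Example: boost 300 Hz by 3 dB, as in the description above.
mode = build_first_target_sound_mode({300: 3.0})
```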
202. And if the current sound mode is the first target sound mode, determining the current call scene information and the motion information of the electronic equipment.
When the electronic device detects that the current sound mode is the first target sound mode, the current call scene information and the motion information of the electronic device are determined. The first target sound mode may be a transparent mode.
203. And if the current scene is determined to be the calling scene according to the calling scene information, determining whether the electronic equipment moves according to the motion information.
If the electronic device determines that the current scene is the scene in the call according to the call scene information, whether the electronic device moves can be determined according to the motion information of the electronic device.
For example, the real-time position of the electronic device may be determined according to the motion information, and the position may be sampled at intervals of a preset time period. If the electronic device moves from position A to position B within a preset time period, the moving distance from position A to position B can be determined. Whether the moving distance is greater than a preset moving distance is then judged; for example, the preset moving distance is 1 m. If the moving distance is greater than the preset moving distance, it is determined that the electronic device has moved; otherwise, it is determined that the electronic device has not moved.
In some embodiments, the electronic device may determine motion information of the electronic device through a GPS sensor and may also determine motion information of the electronic device through a gyroscope. The motion information of the electronic device can also be determined by other electronic devices, for example, when the user wears the electronic device, the electronic device determines the movement information of the user through a mobile phone of the user.
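A minimal sketch of the displacement check described above: positions are sampled one preset time period apart, and the device is considered to have moved if the displacement exceeds the preset moving distance (1 m in the example). Representing positions as local-frame coordinates in meters is an assumption for illustration.

```python
import math

PRESET_DISTANCE_M = 1.0   # preset moving distance from the example above

def has_moved(pos_a, pos_b, preset_distance_m=PRESET_DISTANCE_M):
    """pos_a, pos_b: (x, y) coordinates in meters, sampled one preset time
    period apart (e.g. GPS fixes converted to a local frame)."""
    dx = pos_b[0] - pos_a[0]
    dy = pos_b[1] - pos_a[1]
    distance = math.hypot(dx, dy)
    # the device is considered to have moved only if the displacement within
    # the preset period exceeds the preset moving distance
    return distance > preset_distance_m
```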
If it is determined from the motion information that the electronic device is not moving, step 204 is entered.
If it is determined that the electronic device is moving according to the motion information, step 205 is entered.
204. And if the electronic equipment is determined not to move according to the motion information, switching the first target sound mode into a shallow human voice mode.
The second target sound mode may include a shallow human voice mode, which is set on the basis of the human voice mode. For example, the human voice mode retains sound in the frequency range of 500 Hz to 3 kHz, and the shallow human voice mode is obtained by attenuating that frequency range by 3 dB.
In this way, when the electronic device is in the first target sound mode, the user is talking, and the electronic device is not moving, the first target sound mode is switched to the shallow human voice mode.
That is, at this time the user is in a call, and in order to enable the user to hear the sound of the external environment while talking, the electronic device is controlled to switch from the first target sound mode to the shallow human voice mode.
205. And if the electronic equipment is determined to move according to the motion information, switching the first target sound mode into a shallow transparent mode.
The second target sound mode may include a shallow transparent mode, whose intensity is 3 dB lower than that of the full transparent mode. In the full transparent mode, the ambient sound is picked up by a microphone, processed by a Digital Signal Processing (DSP) chip, and played back through the speaker; this electronically reproduced sound is superposed on the ambient sound that passes the passive isolation of the earphone shell, so that a reproduced ambient sound is finally formed at the user's eardrum.
In this way, when the electronic device is in the first target sound mode, the user is talking, and the electronic device is moving, the first target sound mode is switched to the shallow transparent mode.
That is, the user is making a call while walking, and in order to enable the user to sense sounds of the external environment, such as automobile engine sounds and whistling sounds, and meanwhile, to ensure that the user can clearly hear the conversation sound, the electronic device is controlled to switch from the first target sound mode to the shallow transparent mode.
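For illustration, the two in-call modes above can be expressed as gain profiles relative to their base modes, following the 3 dB figures given in the description; the data representation below is an assumption, not the claimed implementation.

```python
# Frequency band carrying the human voice, per the description above.
VOICE_BAND_HZ = (500, 3000)

# Gain profiles relative to the corresponding base mode, in dB.
SHALLOW_HUMAN_VOICE = {
    "base": "human_voice",                      # keeps only 500 Hz - 3 kHz
    "band_gain_db": {VOICE_BAND_HZ: -3.0},      # voice band attenuated by 3 dB
}
SHALLOW_TRANSPARENT = {
    "base": "full_transparent",   # mic -> DSP -> speaker playback of ambient sound
    "overall_gain_db": -3.0,      # whole transparent path reduced by 3 dB
}
```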
Referring to fig. 3, fig. 3 is a third flowchart of the sound mode control method according to the embodiment of the present application. Steps 201 and 202 have already been described above and are not repeated here. The sound mode control method may further include the following steps:
206. and if the current scene is determined to be the non-call scene according to the call scene information, determining whether the electronic equipment moves according to the motion information.
After the current call scene information and the motion information of the electronic equipment are determined, if the current scene is determined to be a non-call scene, whether the electronic equipment moves is determined according to the motion information.
If the electronic device has moved, step 207 is entered, and if the electronic device has not moved, step 210 is entered.
207. And if the electronic equipment is determined to move according to the motion information, judging whether the user speaks according to the voice information.
In some embodiments, whether the user is speaking may be determined by a bone conduction sensor on the electronic device; the bone conduction sensor picks up vibration data while the user is speaking, so it can be determined that the user is speaking.
Whether the user is speaking can also be determined through an Environmental Noise Cancellation (ENC) algorithm and the associated microphone. For example, after the signal received by the microphone is processed by the environmental noise cancellation algorithm, a voice signal containing only the user's own speech can be obtained, from which it can be determined whether the user is speaking.
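A sketch of the "is the user speaking" check, combining the bone-conduction cue with the microphone signal after noise suppression; the energy-based decision and the threshold values are assumptions, since the embodiment only states that a bone conduction sensor or an ENC-processed microphone signal may be used.

```python
def is_user_speaking(bone_conduction_samples, enc_voice_samples,
                     bone_threshold=0.02, voice_threshold=0.01):
    """Return True if either cue indicates the wearer's own speech.

    bone_conduction_samples -- samples from the bone conduction sensor
    enc_voice_samples       -- microphone signal after environmental noise
                               cancellation, ideally containing only the
                               wearer's own voice
    """
    def rms(samples):
        return (sum(s * s for s in samples) / max(len(samples), 1)) ** 0.5

    # vibration reaches the bone conduction sensor only while the wearer speaks
    if rms(bone_conduction_samples) > bone_threshold:
        return True
    # residual voice energy after ENC processing also indicates own speech
    return rms(enc_voice_samples) > voice_threshold
```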
If the electronic device is moving and the user is speaking, step 208 is entered. If the electronic device is moving but the user is not speaking, step 209 is entered.
208. And if the user speaks, switching the first target sound mode into a full transparent enhanced human voice mode.
The second target sound mode comprises a full transparent enhanced human voice mode, which enhances the human voice portion in the frequency range of 500 Hz to 3 kHz on the basis of the full transparent mode, for example by 3 dB. In this case, the human voice is 3 dB louder than the ambient sound, while sounds in other frequency bands remain equal to the ambient sound.
That is, when the user is not in a call but the electronic device is moving, which indicates that the user is moving, and the user is speaking while moving, the scene may be that the user is walking with the electronic device and talking with others. In order to let the user hear environmental sounds such as car horns and also hear the voices of companions, the electronic device is controlled to switch from the first target sound mode to the full transparent enhanced human voice mode.
209. And if the user does not speak, switching the first target sound mode into a full-transparent mode.
The second target sound mode includes a full transparent mode, which preserves sounds in the external environment, such as human voices, vehicle sounds, wind, and the like.
That is to say, at this time the user is not in a call and is not speaking, but the electronic device is moving, which indicates that the user is moving, for example crossing a road. In order to let the user perceive the external environmental sound, the electronic device is controlled to switch the first target sound mode to the full transparent mode.
210. And if the electronic equipment is determined not to move according to the motion information, judging whether the user speaks according to the voice information.
If the electronic device is not moving and the user is speaking, step 211 is entered. If the electronic device is not moving and the user is not speaking, step 212 is entered.
211. If the user is speaking, the first target sound mode is switched to the enhanced human voice mode.
The second target sound mode includes an enhanced human voice mode, which is set on the basis of the human voice mode. For example, the human voice mode retains sound in the frequency range of 500 Hz to 3 kHz, and the enhanced human voice mode boosts that frequency range by 3 dB.
That is, when the user is not in a call and the electronic device is not moving, but the user is speaking, this indicates that the user is talking with another person in one place. In order to let the user hear the other party clearly, the first target sound mode is switched to the enhanced human voice mode, so that the human voice is enhanced and the user can hear the other party's voice.
212. And if the user does not speak, switching the first target sound mode into the human voice mode.
The second target sound mode comprises a human voice mode. In the human voice mode, after the electronic playback and the passively conducted sound are superposed, only the human voice portion in the frequency range of 500 Hz to 3 kHz is retained, while low-frequency sound from 20 Hz to 500 Hz and high-frequency sound from 3 kHz to 20 kHz are filtered out. In this mode only the human voice passes through and other low-frequency and high-frequency noise is still filtered, which suits scenes where the user only needs to hear external voices.
That is, when the user is not in a call, not moving, and not speaking, the user may be busy with other things. However, so that the user can still perceive voices in the external environment, such as a reminder from an attendant, the electronic device is controlled to switch the first target sound mode to the human voice mode.
In summary, in the embodiment of the present application, the electronic device detects its current sound mode, and determines the current call scene information and motion information of the electronic device if the current sound mode is the first target sound mode. If the current scene is determined to be a call scene according to the call scene information and the electronic device is determined not to move according to the motion information, the first target sound mode is switched to the shallow human voice mode. If the current scene is determined to be a call scene and the electronic device is determined to move according to the motion information, the first target sound mode is switched to the shallow transparent mode.
If the current scene is determined to be a non-call scene according to the call scene information, the electronic device is determined to be moving according to the motion information, and the user is speaking, the first target sound mode is switched to the full transparent enhanced human voice mode. If the current scene is a non-call scene, the electronic device is moving, and the user is not speaking, the first target sound mode is switched to the full transparent mode.
If the current scene is determined to be a non-call scene according to the call scene information, the electronic device is determined not to move according to the motion information, and the user is speaking, the first target sound mode is switched to the enhanced human voice mode. If the current scene is a non-call scene, the electronic device is not moving, and the user is not speaking, the first target sound mode is switched to the human voice mode.
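Putting the branches of Fig. 2 and Fig. 3 together, the mode selection can be written as a small decision function. It fills in the `decide_second_mode` placeholder from the earlier control-loop sketch; the string identifiers are illustrative names for the modes described above, not terms defined by the claims.

```python
def decide_second_mode(in_call, device_moving, user_speaking):
    """Decision table summarizing steps 203-212.

    in_call       -- True if the call scene information indicates a call scene
    device_moving -- True if the motion information indicates the device moved
    user_speaking -- True if the voice information indicates the user speaks
                     (only consulted in the non-call branch)
    """
    if in_call:
        # steps 203-205: choose between the two shallow modes
        return "shallow_transparent" if device_moving else "shallow_human_voice"
    # non-call branch
    if device_moving:
        # steps 207-209
        return ("full_transparent_enhanced_human_voice" if user_speaking
                else "full_transparent")
    # steps 210-212
    return "enhanced_human_voice" if user_speaking else "human_voice"
```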
In the embodiment of the application, the electronic device analyzes the user's actual usage scene, so that it can adaptively switch its sound mode to match that scene, which avoids the user manually and frequently switching the sound mode and gives the user a better experience when using the electronic device.
Correspondingly, the embodiment of the present application further provides a sound mode control device, and the sound mode control device can execute the sound mode control method provided by the embodiment of the present application. As shown in fig. 4, fig. 4 is a schematic view of a first structure of a sound mode control device according to an embodiment of the present application. The sound mode control device 300 includes: a detection module 310, a first determination module 320, a second determination module 330, and a control module 340.
The detecting module 310 is configured to detect a current sound mode of the electronic device.
The detecting module 310 may detect a current sound mode, for example, the current sound mode is one of a plurality of sound modes preset by the electronic device, or the current sound mode is a sound mode set by the user.
The first determining module 320 is configured to determine current call scene information and motion information of the electronic device if the current sound mode is the first target sound mode.
In some embodiments, the first determining module 320 may determine whether the current sound mode is a first target sound mode, wherein the first target sound mode may be a transparent mode, the first target sound mode may also be another sound mode, and the first target sound mode may be a sound mode specified by the user.
If the first determining module 320 determines that the current sound mode is the first target sound mode, it determines the current call scene information and the motion information of the electronic device. The call scene information indicates whether the current scene is a call scene or a non-call scene, and the motion information comprises position information, movement speed information and the like of the user.
And the second determining module 330 is configured to determine a corresponding second target sound mode according to the call scene information and the motion information.
A second determining module 330, configured to determine, according to the motion information, a corresponding second target sound mode if it is determined that the current scene is a call scene according to the call scene information;
and if the current scene is determined to be a non-call scene according to the call scene information, acquiring voice information of the user, and determining a corresponding second target sound mode according to the voice information and the motion information.
The second determining module 330 is further configured to: if the current scene is determined to be a call scene according to the call scene information and the electronic device is determined not to move according to the motion information, switch the first target sound mode to the shallow human voice mode;
and if the current scene is determined to be a call scene according to the call scene information and the electronic device is determined to move according to the motion information, switch the first target sound mode to the shallow transparent mode.
The second determining module 330 is further configured to: if the current scene is determined to be a non-call scene according to the call scene information and the electronic device is determined to be moving according to the motion information, switch the first target sound mode to the full transparent enhanced human voice mode if the user is speaking, and switch the first target sound mode to the full transparent mode if the user is not speaking.
The second determining module 330 is further configured to: if the current scene is determined to be a non-call scene according to the call scene information and the electronic device is determined not to move according to the motion information, switch the first target sound mode to the enhanced human voice mode if the user is speaking, and switch the first target sound mode to the human voice mode if the user is not speaking.
A control module 340, configured to switch the first target sound mode to the second target sound mode.
After the second target sound mode determined according to the call scene information and the motion information is acquired, the control module 340 may switch the first target sound mode to the second target sound mode. Therefore, the self-adaptive sound mode switching according to the use scene of the user is completed, and the user is prevented from manually and frequently replacing the sound mode.
Referring to fig. 5, fig. 5 is a second structural schematic diagram of the sound mode control device according to the embodiment of the present application. Wherein the sound mode control device 300 further comprises: an obtaining module 350 and a generating module 360.
An obtaining module 350, configured to display a sound parameter adjustment interface, and obtain a sound adjustment parameter adjusted by a user in the sound parameter adjustment interface;
a generating module 360, configured to generate the first target sound mode according to the sound adjustment parameters.
The generating module 360 is further configured to determine a corresponding adjustment frequency according to the sound adjustment parameters; determine the sound adjustment intensity corresponding to the adjustment frequency; and generate the first target sound mode according to the adjustment frequency and the sound adjustment intensity.
In the embodiment of the application, by detecting the current sound mode of the electronic equipment, if the current sound mode is the first target sound mode, the current call scene information and the motion information of the electronic equipment are determined. And then determining a corresponding second target sound mode according to the call scene information and the motion information, and finally switching the first target sound mode into the second target sound mode. Therefore, the electronic equipment can determine the corresponding sound mode according to the actual call scene information and the motion information of the user, and the user is prevented from frequently and manually operating the electronic equipment to enter different sound modes.
Accordingly, embodiments of the present application also provide an electronic device, as shown in fig. 6, which may include an input unit 401, a display unit 402, a memory 403 including one or more computer-readable storage media, a sensor 404, a processor 405 including one or more processing cores, and a power supply 406. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. Wherein:
the memory 403 may be used for storing software programs and modules, and the processor 405 executes various functional applications and data processing by operating the software programs and modules stored in the memory 403. The memory 403 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the electronic device, and the like. Further, the memory 403 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 403 may also include a memory controller to provide the processor 405 and the input unit 401 access to the memory 403.
The input unit 401 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, in one particular embodiment, input unit 401 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 405, and can receive and execute commands sent by the processor 405. In addition, the touch sensitive surface can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 401 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 402 may be used to display information input by or provided to a user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 402 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 405 to determine the type of touch event, and then the processor 405 provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 6 the touch-sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The electronic device may also include at least one sensor 404, such as a light sensor, motion sensor, and other sensors. In particular, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or the backlight when the electronic device is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when the device is stationary, and can be used for applications (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration) for recognizing the attitude of an electronic device, and related functions (such as pedometer and tapping) for vibration recognition; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be further configured to the electronic device, detailed descriptions thereof are omitted.
The processor 405 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 403 and calling data stored in the memory 403, thereby performing overall monitoring of the electronic device. Optionally, processor 405 may include one or more processing cores; preferably, the processor 405 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 405.
The electronic device also includes a power source 406 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 405 via a power management system to manage charging, discharging, and power consumption management functions via the power management system. The power supply 406 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the electronic device may further include a camera, a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 405 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 403 according to the following instructions, and the processor 405 runs the application programs stored in the memory 403, thereby implementing various functions:
detecting a current sound mode of the electronic equipment;
if the current sound mode is a first target sound mode, determining current call scene information and motion information of the electronic equipment;
determining a corresponding second target sound mode according to the call scene information and the motion information;
switching the first target sound mode to the second target sound mode.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a storage medium in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any one of the sound mode control methods provided in the embodiments of the present application. For example, the instructions may perform the following steps:
detecting a current sound mode of the electronic equipment;
if the current sound mode is a first target sound mode, determining current call scene information and motion information of the electronic equipment;
determining a corresponding second target sound mode according to the call scene information and the motion information;
switching the first target sound mode to the second target sound mode.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: Read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any sound mode control method provided in the embodiments of the present application, the beneficial effects that can be achieved by any sound mode control method provided in the embodiments of the present application can be achieved; details are described in the foregoing embodiments and are not repeated here.
The foregoing describes in detail a method, an apparatus, an electronic device, and a storage medium for controlling a voice mode provided in an embodiment of the present application, and a specific example is applied to explain the principle and the implementation of the present application, and the description of the foregoing embodiment is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, the specific implementation manner and the application scope may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. A sound mode control method is applied to electronic equipment and is characterized by comprising the following steps:
detecting a current sound mode of the electronic equipment;
if the current sound mode is a first target sound mode, determining current call scene information and motion information of the electronic equipment;
determining a corresponding second target sound mode according to the call scene information and the motion information, which comprises: if the current scene is determined to be a call scene according to the call scene information, determining a corresponding second target sound mode according to the motion information; and if the current scene is determined to be a non-call scene according to the call scene information, acquiring voice information of a user, and determining a corresponding second target sound mode according to the voice information and the motion information;
switching the first target sound mode to the second target sound mode.
2. The sound mode control method of claim 1, wherein prior to determining current call scene information and motion information for the electronic device, the method further comprises:
determining a control state of the electronic device, the control state including an automatic control sound mode state and a manual control sound mode state;
if the electronic equipment is in an automatic control sound mode state, acquiring scene information in a current scene;
and if the electronic equipment is in the state of manually controlling the sound mode, stopping acquiring the scene information in the current scene.
3. The sound mode control method according to claim 1, wherein the second target sound mode includes a shallow human voice mode and a shallow transparent mode, and the determining the corresponding second target sound mode according to the motion information includes:
if the electronic equipment is determined not to move according to the motion information, switching the first target sound mode into the shallow human voice mode;
and if the electronic equipment is determined to move according to the motion information, switching the first target sound mode into the shallow transparent mode.
4. The sound mode control method of claim 1, wherein the second target sound mode comprises a full transparent mode and a full transparent enhanced human voice mode, and the determining the corresponding second target sound mode according to the voice information and the motion information comprises:
if the electronic equipment is determined to move according to the motion information, judging whether a user speaks according to the voice information;
if the user speaks, switching the first target sound mode into the full transparent enhanced human voice mode;
and if the user does not speak, switching the first target sound mode into the full transparent mode.
5. The sound mode control method according to claim 1, wherein the second target sound mode includes a human voice mode and an enhanced human voice mode, and the determining the corresponding second target sound mode according to the voice information and the motion information includes:
if the electronic equipment is determined not to move according to the motion information, judging whether a user speaks according to the voice information;
if the user speaks, switching the first target sound mode into the enhanced human voice mode;
and if the user does not speak, switching the first target sound mode into the human voice mode.
6. The sound mode control method of claim 1, wherein prior to detecting the current sound mode of the electronic device, the method further comprises:
displaying a sound parameter adjusting interface, and acquiring sound adjusting parameters adjusted in the sound parameter adjusting interface by a user;
generating the first target sound mode according to the sound adjustment parameter.
7. The sound mode control method according to claim 6, wherein the generating the first target sound mode according to the sound adjustment parameter includes:
determining a corresponding adjusting frequency according to the sound adjusting parameter;
determining the sound adjusting intensity corresponding to the adjusting frequency;
generating the first target sound pattern according to the adjusted frequency and the sound adjustment intensity.
8. A sound mode control device applied to an electronic device, comprising:
the detection module is used for detecting the current sound mode of the electronic equipment;
the first determining module is used for determining the current call scene information and the motion information of the electronic equipment if the current sound mode is a first target sound mode;
the second determining module is used for determining a corresponding second target sound mode according to the call scene information and the motion information; the second determining module is specifically used for determining a corresponding second target sound mode according to the motion information if the current scene is determined to be a calling scene according to the calling scene information; if the current scene is determined to be a non-call scene according to the call scene information, acquiring voice information of a user, and determining a corresponding second target sound mode according to the voice information and the motion information;
and the control module is used for switching the first target sound mode into the second target sound mode.
9. An electronic device, comprising:
a memory storing executable program code, a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform the steps in the sound mode control method according to any one of claims 1 to 7.
10. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the sound mode control method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110825740.9A CN113542963B (en) | 2021-07-21 | 2021-07-21 | Sound mode control method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113542963A (en) | 2021-10-22 |
CN113542963B (en) | 2022-12-20 |
Family
ID=78129211
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110825740.9A Active CN113542963B (en) | Sound mode control method, device, electronic equipment and storage medium | 2021-07-21 | 2021-07-21 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113542963B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116110361A (en) * | 2021-11-10 | 2023-05-12 | 万魔声学股份有限公司 | Noise reduction method, device and electronic equipment |
CN114095825B (en) * | 2021-11-23 | 2024-08-13 | 深圳市锐尔觅移动通信有限公司 | Mode switching method, device, audio playing equipment and computer readable medium |
CN115002598B (en) * | 2022-05-26 | 2024-02-13 | 歌尔股份有限公司 | Headset mode control method, headset device, head-mounted device and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101597512B1 (en) * | 2009-07-27 | 2016-02-26 | 삼성전자주식회사 | A method of operating a portable terminal and a portable terminal |
JP2016514856A (en) * | 2013-03-21 | 2016-05-23 | インテレクチュアル ディスカバリー カンパニー リミテッド | Audio signal size control method and apparatus |
CN107589861B (en) * | 2016-07-06 | 2021-02-09 | 北京小米移动软件有限公司 | Method and device for communication |
CN109391870B (en) * | 2018-11-10 | 2021-01-15 | 上海麦克风文化传媒有限公司 | Method for automatically adjusting earphone audio signal playing based on human motion state |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070002529A (en) * | 2005-06-30 | 2007-01-05 | 주식회사 팬택앤큐리텔 | Apparatus and method for controlling reception notification in mobile communication terminal |
CN105516511A (en) * | 2016-01-08 | 2016-04-20 | 深圳市普达尔科技有限公司 | Method and device for switching telephone answering modes |
CN106210344A (en) * | 2016-07-27 | 2016-12-07 | 维沃移动通信有限公司 | A kind of call mode method to set up and mobile terminal |
CN109120790A (en) * | 2018-08-30 | 2019-01-01 | Oppo广东移动通信有限公司 | Call control method and device, storage medium and wearable device |
CN110913062A (en) * | 2018-09-18 | 2020-03-24 | 西安中兴新软件有限责任公司 | Audio control method and device and terminal |
CN109451390A (en) * | 2018-12-25 | 2019-03-08 | 歌尔科技有限公司 | A kind of TWS earphone and its control method, device, equipment |
CN210405606U (en) * | 2019-09-27 | 2020-04-24 | 东莞市库珀电子有限公司 | TWS Bluetooth earphone capable of synchronously playing call voice and real-time environment sound |
CN111698600A (en) * | 2020-06-05 | 2020-09-22 | 北京搜狗科技发展有限公司 | Processing execution method and device and readable medium |
Non-Patent Citations (2)
Title |
---|
Jabra MOTION; Jabra; 《Jabra YOU'RE ON》; 2017-07-21; pp. 1-25 *
Soundcore (声阔) Liberty Air 2 Pro true wireless Bluetooth earphones perform like a "Pro" in every respect; Jiang Qian; Computer & Network (《计算机与网络》); 2021-01-26; pp. 20-21 *
Also Published As
Publication number | Publication date |
---|---|
CN113542963A (en) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11217240B2 (en) | Context-aware control for smart devices | |
CN113542963B (en) | Sound mode control method, device, electronic equipment and storage medium | |
CN108551636B (en) | A speaker control method and mobile terminal | |
US12197816B2 (en) | Prompting method and mobile terminal | |
CN110764730A (en) | Method and device for playing audio data | |
KR20140103003A (en) | Mobile terminal for controlling a hearing aid and method therefor | |
CN109616135B (en) | Audio processing method, device and storage medium | |
CN110708630B (en) | Method, device and equipment for controlling earphone and storage medium | |
CN108683761A (en) | Sound production control method and device, electronic device and computer readable medium | |
CN111010608B (en) | Video playing method and electronic equipment | |
CN114071315A (en) | Audio processing method, device, electronic device and storage medium | |
CN108540638A (en) | Screen intensity adjusts control method, terminal and computer storage media | |
CN108259659A (en) | A kind of pick-up control method, flexible screen terminal and computer readable storage medium | |
CN110012143B (en) | A receiver control method and terminal | |
CN108769327A (en) | Method and device for sounding display screen, electronic device and storage medium | |
CN110602696A (en) | Conversation privacy protection method and electronic equipment | |
CN107749306B (en) | Vibration optimization method and mobile terminal | |
CN107147767B (en) | Call volume control method and device, storage medium and terminal | |
CN111049972A (en) | A kind of audio playback method and terminal device | |
CN111314551A (en) | Vibration adjusting method and device, storage medium and mobile terminal | |
CN111984222A (en) | Method and device for adjusting volume, electronic equipment and readable storage medium | |
CN112291672B (en) | Speaker control method, control device and electronic device | |
CN111078186A (en) | A playback method and electronic device | |
CN111681654A (en) | Voice control method and device, electronic equipment and storage medium | |
CN108769364B (en) | Call control method, device, mobile terminal and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |