
CN119906808A - Projection device and audio noise reduction method thereof - Google Patents


Info

Publication number
CN119906808A
CN119906808A
Authority
CN
China
Prior art keywords
microphone
frequency response
audio data
response curve
collect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411884438.0A
Other languages
Chinese (zh)
Other versions
CN119906808B (English)
Inventor
于彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd
Priority to CN202411884438.0A
Publication of CN119906808A
Application granted
Publication of CN119906808B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise
    • G10L21/0232Processing in the frequency domain
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0272Voice signal separating
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/24Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191Testing thereof
    • H04N9/3194Testing thereof including sensor feedback
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02166Microphone arrays; Beamforming

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Quality & Reliability (AREA)
  • Transforming Electric Information Into Light Information (AREA)

Abstract


The present application relates to a projection device and an audio noise reduction method thereof, and relates to the field of audio processing. The projection device includes a fan, which is configured to cool the projection device; a microphone array, including at least two microphones, configured to collect audio data; a controller, connected to the microphone array, is configured to: control the first microphone to collect the first audio data, and the second microphone to synchronously collect the second audio data; wherein the first microphone is a microphone in the microphone array close to the standard voice initiation position; the second microphone is a microphone in the microphone array set close to the fan; according to the second audio data, the first audio data is subjected to noise reduction processing to obtain the target audio data. The adoption of the above technical solution improves the clarity of the target audio data and the noise reduction efficiency. In addition, since there is no need to introduce other hardware devices, the hardware cost investment is also reduced.

Description

Projection device and audio noise reduction method thereof
Technical Field
The present application relates to the field of audio processing, and in particular, to a projection device and an audio noise reduction method thereof.
Background
During use of a projection device such as a laser television, the interior of the device heats up considerably during operation due to factors such as light-source irradiation. A fan is therefore required for cooling, to reduce the impact of heat on the service life of the projection device and to ensure stable long-term operation.
However, while the fan is running, the rotation of its blades produces additional noise. When a voice recognition module is built into the projection device, this fan noise affects the accuracy of the voice recognition function.
In the conventional technology, noise reduction of fan noise in audio requires a high-precision microphone and a low-precision microphone to collect the fan noise; based on the collected data, an amplitude-frequency response correction curve and a speaker correction curve for the low-precision microphone are drawn, and noise-reduction cycle marking points are determined. In the subsequent use stage of the microphone, noise reduction is performed at the marked points according to the amplitude-frequency response correction curve and the speaker correction curve. This approach is operationally complex and computationally inefficient.
Disclosure of Invention
The present application provides a projection device and an audio noise reduction method thereof, to address the technical problem in the prior art that fan noise affects the voice recognition function, while also maintaining operational efficiency.
In a first aspect, some embodiments provide a projection device comprising:
a fan configured to cool the projection device;
a microphone array comprising at least two microphones configured to collect audio data;
a controller, coupled to the microphone array, configured to:
control the first microphone to collect first audio data and the second microphone to synchronously collect second audio data, wherein the first microphone is the microphone in the microphone array close to the standard voice initiation position, and the second microphone is the microphone in the microphone array arranged close to the fan; and
perform noise reduction processing on the first audio data according to the second audio data to obtain target audio data.
In some embodiments, a first microphone close to the standard voice initiation position and a second microphone close to the fan are provided in the microphone array, and the first audio data synchronously collected by the first microphone is denoised according to the second audio data collected by the second microphone to obtain the target audio data. Because the second microphone is arranged close to the fan, the fan-noise characteristics carried in the second audio data are more pronounced; and because the first microphone is arranged close to the standard voice initiation position, the voice content carried in the first audio data is, in theory, more pronounced. Using the second audio data to denoise the first audio data therefore avoids the poor noise reduction effect that results when the fan-noise characteristics are weak, and ensures the clarity of the target audio data. In addition, this solution filters out fan noise without drawing an amplitude-frequency response correction curve for a low-precision microphone or a speaker correction curve, which reduces the computation involved in noise filtering and improves noise reduction efficiency. Moreover, the noise reduction is implemented purely in software using existing hardware, so no additional hardware needs to be introduced, which reduces hardware cost.
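The patent does not disclose a specific noise reduction algorithm, but the two-microphone arrangement above corresponds to the classic reference-microphone (adaptive noise cancellation) setup. The sketch below is purely illustrative: a normalized LMS (NLMS) filter estimates the fan noise leaking into the voice microphone from the fan-facing reference microphone and subtracts it. The function name, filter length and step size are assumptions, not part of the patent.

```python
# Illustrative sketch only: the patent does not specify an algorithm.
# primary   ~ first audio data (voice microphone, voice + leaked fan noise)
# reference ~ second audio data (fan-facing microphone, mostly fan noise)

def nlms_denoise(primary, reference, taps=16, mu=0.5, eps=1e-8):
    """Return e[n] = primary[n] - y[n], where y[n] is the adaptive
    filter's estimate of the fan noise present in the primary signal."""
    w = [0.0] * taps          # adaptive filter weights
    buf = [0.0] * taps        # recent reference samples, newest first
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))   # noise estimate
        e = d - y                                    # denoised sample
        norm = sum(xi * xi for xi in buf) + eps      # input power
        # NLMS update: step normalized by instantaneous input power
        w = [wi + (mu * e / norm) * xi for wi, xi in zip(w, buf)]
        out.append(e)
    return out
```

Once the filter has converged, the fan-noise component common to both microphones is largely removed from the output while the voice component, which is absent from the reference signal, is preserved.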
In a second aspect, some embodiments further provide an audio noise reduction method, including:
controlling the first microphone to collect first audio data and the second microphone to synchronously collect second audio data, wherein the first microphone is the microphone in the microphone array close to the standard voice initiation position, and the second microphone is the microphone in the microphone array arranged close to the fan; and
performing noise reduction processing on the first audio data according to the second audio data to obtain target audio data.
In some embodiments, a first microphone close to the standard voice initiation position and a second microphone close to the fan are provided in the microphone array; the first microphone is controlled to collect first audio data, the second microphone is controlled to synchronously collect second audio data, and noise reduction processing is performed on the first audio data according to the second audio data to obtain the target audio data. Because the second microphone is arranged close to the fan, the fan-noise characteristics carried in the second audio data are more pronounced; and because the first microphone is arranged close to the standard voice initiation position, the voice content carried in the first audio data is, in theory, more pronounced. Using the second audio data to denoise the first audio data therefore avoids the poor noise reduction effect that results when the fan-noise characteristics are weak, and ensures the clarity of the target audio data. In addition, this solution filters out fan noise without drawing an amplitude-frequency response correction curve for a low-precision microphone or a speaker correction curve, which reduces the computation involved in noise filtering and improves noise reduction efficiency. Moreover, the noise reduction is implemented purely in software using existing hardware, so no additional hardware needs to be introduced, which reduces hardware cost.
In a third aspect, some embodiments further provide an audio noise reduction device, including:
an acquisition control module, configured to control the first microphone to collect first audio data and the second microphone to synchronously collect second audio data, wherein the first microphone is the microphone in the microphone array close to the standard voice initiation position, and the second microphone is the microphone in the microphone array arranged close to the fan; and
a noise reduction processing module, configured to perform noise reduction processing on the first audio data according to the second audio data to obtain target audio data.
In the audio noise reduction device provided by the above embodiments, a first microphone close to the standard voice initiation position and a second microphone close to the fan are provided in the microphone array; the acquisition control module controls the first microphone to collect first audio data and the second microphone to synchronously collect second audio data, and the noise reduction processing module performs noise reduction processing on the first audio data according to the second audio data to obtain the target audio data. Because the second microphone is arranged close to the fan, the fan-noise characteristics carried in the second audio data are more pronounced; and because the first microphone is arranged close to the standard voice initiation position, the voice content carried in the first audio data is, in theory, more pronounced. Using the second audio data to denoise the first audio data therefore avoids the poor noise reduction effect that results when the fan-noise characteristics are weak, and ensures the clarity of the target audio data. In addition, this solution filters out fan noise without drawing an amplitude-frequency response correction curve for a low-precision microphone or a speaker correction curve, which reduces the computation involved in noise filtering and improves noise reduction efficiency. Moreover, the noise reduction is implemented purely in software using existing hardware, so no additional hardware needs to be introduced, which reduces hardware cost.
In a fourth aspect, some embodiments further provide a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
controlling the first microphone to collect first audio data and the second microphone to synchronously collect second audio data, wherein the first microphone is the microphone in the microphone array close to the standard voice initiation position, and the second microphone is the microphone in the microphone array arranged close to the fan; and
performing noise reduction processing on the first audio data according to the second audio data to obtain target audio data.
In the readable storage medium provided by the above embodiments, when the stored computer program is executed, a first microphone close to the standard voice initiation position and a second microphone close to the fan are provided in the microphone array; the first microphone is controlled to collect first audio data, the second microphone is controlled to synchronously collect second audio data, and noise reduction processing is performed on the first audio data according to the second audio data to obtain the target audio data. Because the second microphone is arranged close to the fan, the fan-noise characteristics carried in the second audio data are more pronounced; and because the first microphone is arranged close to the standard voice initiation position, the voice content carried in the first audio data is, in theory, more pronounced. Using the second audio data to denoise the first audio data therefore avoids the poor noise reduction effect that results when the fan-noise characteristics are weak, and ensures the clarity of the target audio data. In addition, this solution filters out fan noise without drawing an amplitude-frequency response correction curve for a low-precision microphone or a speaker correction curve, which reduces the computation involved in noise filtering and improves noise reduction efficiency. Moreover, the noise reduction is implemented purely in software using existing hardware, so no additional hardware needs to be introduced, which reduces hardware cost.
In a fifth aspect, some embodiments also provide a computer program product comprising a computer program which when executed by a processor performs the steps of:
controlling the first microphone to collect first audio data and the second microphone to synchronously collect second audio data, wherein the first microphone is the microphone in the microphone array close to the standard voice initiation position, and the second microphone is the microphone in the microphone array arranged close to the fan; and
performing noise reduction processing on the first audio data according to the second audio data to obtain target audio data.
When the computer program product provided by the above embodiments is executed, a first microphone close to the standard voice initiation position and a second microphone close to the fan are provided in the microphone array; the first microphone is controlled to collect first audio data, the second microphone is controlled to synchronously collect second audio data, and noise reduction processing is performed on the first audio data according to the second audio data to obtain the target audio data. Because the second microphone is arranged close to the fan, the fan-noise characteristics carried in the second audio data are more pronounced; and because the first microphone is arranged close to the standard voice initiation position, the voice content carried in the first audio data is, in theory, more pronounced. Using the second audio data to denoise the first audio data therefore avoids the poor noise reduction effect that results when the fan-noise characteristics are weak, and ensures the clarity of the target audio data. In addition, this solution filters out fan noise without drawing an amplitude-frequency response correction curve for a low-precision microphone or a speaker correction curve, which reduces the computation involved in noise filtering and improves noise reduction efficiency. Moreover, the noise reduction is implemented purely in software using existing hardware, so no additional hardware needs to be introduced, which reduces hardware cost.
Drawings
To illustrate the embodiments of the application or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the application, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic view of a projection scene of a projection device according to some embodiments of the present application;
FIG. 2 is a schematic view of an optical path of a projection device according to some embodiments of the present application;
FIG. 3 is a schematic diagram of a circuit architecture of a projection device according to some embodiments of the present application;
FIG. 4 is a schematic view of an optical path of a projection device according to some embodiments of the present application;
FIG. 5 is a schematic diagram of a system frame for implementing display control of a projection device according to some embodiments of the present application;
FIG. 6 is a schematic diagram showing the relative positions of a microphone array and a fan in a projection device according to some embodiments of the present application;
FIG. 7 is a flowchart of an audio noise reduction method according to some embodiments of the present application;
FIG. 8 is a schematic flow chart of a first microphone extraction step according to some embodiments of the present application;
FIG. 9A is a flowchart illustrating a selection procedure of a first microphone according to some embodiments of the present application;
FIG. 9B is a schematic diagram of a first frequency response curve according to some embodiments of the present application;
FIG. 9C is a schematic diagram of a response bias curve provided by some embodiments of the application;
FIG. 10 is a flowchart illustrating a selection procedure of a second microphone according to some embodiments of the present application;
FIG. 11 is a flowchart illustrating an audio noise reduction method according to some embodiments of the present application;
FIG. 12 is a block diagram illustrating an audio noise reduction device according to some embodiments of the present application;
FIG. 13 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the application; they are merely examples of systems and methods consistent with aspects of the application as set forth in the claims.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first", "second", "third" and the like in the description, the claims and the above drawings are used to distinguish similar objects or entities, and do not necessarily describe a particular order or sequence unless otherwise indicated. It should be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
A projection device is a device capable of projecting media data onto a projection medium. The projection device can be connected through different interfaces to a computer, a broadcast and television network, the Internet, a VCD (Video Compact Disc) player, a DVD (Digital Versatile Disc) player, a game console, a DV (digital video camera) and other devices to receive the media data to be projected. The media data includes, but is not limited to, images, video and text, and the projection medium includes, but is not limited to, physical surfaces such as walls, curtains and screens.
Fig. 1 is a schematic view of a projection scene of a projection device according to some embodiments of the present application.
In some embodiments, the projection device may be a laser television. Taking a laser television as an example, referring to fig. 1, it may include a projection screen 1 and a projection host 2. The projection screen 1 is fixed at a first position (such as a television background wall), and the projection host 2 is placed at a second position. By adjusting the relationship between the first position and the second position, the projection picture of the projection host 2 is matched to the projection screen 1; that is, the second position is the optimal placement position of the projection host 2.
Fig. 2 is a schematic view of an optical path of a projection apparatus according to some embodiments of the present application.
The projection host 2 includes a projection assembly including a laser light source 210, a light engine 220, and a lens 230. The laser light source 210 provides illumination for the optical machine 220, the optical machine 220 modulates the light beam of the light source and outputs the modulated light beam to the lens 230, the lens 230 performs imaging and projects the imaged light beam onto the projection screen 1, and the projection screen 1 presents a projection picture.
In some embodiments, the laser light source 210 includes a laser assembly and an optical lens assembly, and the light beam emitted by the laser assembly is transmitted through the optical lens assembly to provide illumination for the optical machine 220. The optical lens assembly requires a higher level of environmental cleanliness and hermetic sealing, whereas the chamber in which the laser assembly is installed may be sealed to a lower, dust-proof level to reduce sealing costs.
In some embodiments, the light engine 220 may include a blue light engine, a green light engine, a red light engine, a heat dissipation system, a circuit control system, and the like. The blue, green and red light engines form a three-color light engine that modulates the laser light to generate the pixels of the user interface; reducing the working current of the integrated red light engine in the three-color light engine can prevent the projection device from overheating.
In some embodiments, the light emitting components of the laser television may also be implemented by LED light sources.
Fig. 3 is a schematic circuit architecture of a projection device according to some embodiments of the present application.
In some embodiments, referring to fig. 3, the projection host 2 may include a display control circuit 240, a laser light source 210, at least one laser driving assembly 250, and at least one brightness sensor 260. The laser light source 210 may include at least one laser in one-to-one correspondence with the at least one laser driving assembly. Here, "at least one" means one or more, and "a plurality" means two or more.
In some embodiments, the laser light source 210 includes three lasers, which may be a blue laser 211, a red laser 212, and a green laser 213, respectively, in a one-to-one correspondence with the laser driving assembly 250. The blue laser 211 is used for emitting blue laser light, the red laser 212 is used for emitting red laser light, and the green laser 213 is used for emitting green laser light. The laser driving assembly 250 may be implemented to include a plurality of sub-laser driving assemblies, each corresponding to a different color laser.
In some embodiments, the display control circuit 240 is configured to output light control signals corresponding to different primary colors to the laser driving assembly 250 to drive the corresponding lasers to emit light, e.g., the light control signals include a blue light control signal, a red light control signal, and a green light control signal. Referring to fig. 3, the display control circuit 240 is connected to the laser driving assembly 250, and is configured to output at least one light control signal corresponding to three primary colors of each of the multi-frame display images, and transmit the at least one light control signal to the corresponding laser driving assembly 250, respectively. For example, the display control circuit 240 may be a micro control unit (micro controller unit, MCU), also referred to as a single chip microcomputer.
In some embodiments, the laser television may implement adaptive tuning. For example, a brightness sensor 260 may be provided in the light-emitting path of the laser light source 210 to detect a first brightness value of the laser light source 210 and send it to the display control circuit 240. The display control circuit 240 may obtain a second brightness value corresponding to the driving current of each laser, and determine that a laser has a COD (catastrophic optical damage) fault when the difference between the second brightness value and the first brightness value of that laser is greater than a difference threshold. The display control circuit 240 may then adjust the current control signal of the laser driving component corresponding to the laser until the difference is less than or equal to the difference threshold, thereby eliminating the COD fault, reducing the damage rate of the laser, and improving the image display effect of the projection device.
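As a rough illustration of the brightness-comparison logic above, the check and the current adjustment might be sketched as follows. All names, the proportionality assumption between current and brightness, and the numeric values are hypothetical, not taken from the patent.

```python
def has_cod_fault(expected_brightness, measured_brightness, diff_threshold):
    """A laser is suspected of COD when its expected brightness (derived from
    the drive current) exceeds the sensor-measured brightness by more than
    the difference threshold."""
    return expected_brightness - measured_brightness > diff_threshold

def adjust_drive_current(current, measured_ratio, diff_threshold, step=0.05):
    """Step the current control signal down until the brightness gap closes.
    Brightness is assumed proportional to drive current (hypothetical map:
    1.0 unit of current -> 100 units of brightness); measured_ratio is the
    measured/expected brightness ratio observed for this laser."""
    expected = current * 100.0
    while has_cod_fault(expected, expected * measured_ratio, diff_threshold):
        current -= step * current        # reduce the current control signal
        expected = current * 100.0
    return current
```

In a real device the current-to-brightness relationship would come from calibration data rather than a fixed constant.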
Fig. 4 is a schematic view of an optical path of a projection apparatus according to some embodiments of the present application.
In some embodiments, referring to fig. 4, the optical path structure includes a laser light source 210 and an optical assembly 214. The laser light source 210 may include a blue laser 211, a red laser 212 and a green laser 213 which are independently arranged, and the projection device may also be referred to as a three-color projection device. The blue laser 211, the red laser 212 and the green laser 213 are all lightweight MCL-packaged lasers, which are small in size and facilitate a compact arrangement of the optical paths.
In some embodiments, the projection host 2 may include a controller including at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), a RAM (Random Access Memory), a ROM (Read-Only Memory), first to nth interfaces for input/output, a communication bus (Bus), and the like. The controller is connected with related hardware of the projection device, such as the display control circuit, the brightness sensor, the distance sensor and the image collector, and is used for controlling functions of the projection device such as projection, focusing, correction, calibration, and screen-switching state adjustment.
In some embodiments, several types of interfaces may be provided on the body of a projection device (e.g., a laser television), such as a power interface, a USB interface, an HDMI (High-Definition Multimedia Interface) interface, a network cable interface, a VGA (Video Graphics Array) interface, a DVI (Digital Visual Interface) interface, etc., to connect to a signal source for transmitting media.
In some embodiments, the projection device may directly enter the display interface of the signal source selected last time after being started, or the signal source selection interface, where the signal source is, for example, a preset video on demand program, and may also be one of an HDMI interface, a USB interface, a live television interface, and the like. After the user selects the target signal source, the projection host 2 may acquire media data from the target signal source, and project the media data on the projection screen 1 for display.
In some embodiments, projection host 2 may be configured with an image collector that cooperates with the projection host to enable associated regulatory control of the projection process. For example, the projection device may be configured with a 3D camera, a monocular camera, or a binocular camera.
In some embodiments, when the projection host 2 performs correction on the projected picture, the correction chart card can be projected onto the projection screen 1, and the image collector is controlled to collect a correction image including the correction chart card and the screen edge, so as to calculate correction parameters from the correction image. The optical engine is then controlled to automatically correct the projected picture according to the correction parameters, so that the picture projected by the projection host 2 coincides with the projection screen and the projection deviation is eliminated.
Fig. 5 is a schematic diagram of a system frame for implementing display control of a projection device according to some embodiments of the present application.
In some embodiments, referring to FIG. 5, the system framework includes an application service layer, a process communication framework, an operation layer, a framework layer, a correction service, a camera service, a time-of-flight service, and hardware and its drivers, among others. The controller of the projection host 2 controls the overall system architecture, and based on the bottom program logic, projection control of the projection device is realized, including but not limited to functions of automatic curtain entering, automatic obstacle avoidance, automatic focusing, eye protection, screen switching control, automatic correction and fine adjustment correction of a projection picture, and the like.
In some embodiments, the position of the projection screen 1 is fixed while the position of the projection host 2 may change. For example, where the projection host 2 is placed on a television cabinet, it may be moved when a user cleans or wipes the cabinet, or it may shift due to sliding. The projection host 2 may be configured with a sensor, such as a gyroscope, for detecting a change in its position. While the projection host 2 is moving, the gyroscope can sense the displacement of the projection host 2 and actively collect position data; the collected position data is then sent to the application service layer through the framework layer to support the data required during user-interface interaction and application-program interaction, and the position data can also be called by the controller when implementing algorithm services.
In some embodiments, the controller, when executing the correction service, may invoke the gyroscope-detected position data to determine whether a change in position of the projection host 2 has occurred. If the projection host 2 is shifted, it can be detected whether the offset of the projection host 2 exceeds the local correction range, if the offset exceeds the local adjustable range, the user is prompted to move the projection host 2, and if the offset does not exceed the local adjustable range, automatic correction and/or fine adjustment correction can be triggered. The automatic correction flow is to collect correction images containing correction chart cards and screen edges through an image collector to calculate correction parameters. The fine adjustment correction procedure is to project a correction interface that displays fine adjustment points containing a plurality of adjustable positions, so that the user manually corrects to a satisfactory projection effect by adjusting the position of any one or more target fine adjustment points.
In some embodiments, the projection host 2 is further configured with a distance sensor for detecting distance, which may be a time-of-flight (Time of Flight, TOF) sensor. A time-of-flight sensor measures the distance between nodes using the flight time of a signal travelling back and forth between a transmitting end and a reflecting end. After the time-of-flight sensor collects distance data, the distance data is sent to the time-of-flight service, which forwards the collected distance data to the application service layer through the process communication framework, where the distance data is used for data calls by the controller, the user interface, program applications, and the like.
In some embodiments, the projection host 2 may further configure an image collector, which may employ a monocular camera, a binocular camera, a depth camera, a 3D camera, or the like, that sends the collected image data to a camera service, which then sends the image data to a process communication framework and/or a correction service, which sends the image data to an application service layer, which is used for data invocation by the controller, user interface, program application, or the like, for interactive use.
In some embodiments, data interaction is performed between the process communication framework and the application service, and the projection correction parameters are then fed back to the correction service through the process communication framework. The correction service sends the projection correction parameters to the operation layer of the projection device, the operating system generates correction instructions according to the projection correction parameters, and the correction instructions are sent to the optical machine control driving module, so that the optical machine driving module adjusts the working condition of the optical machine according to the projection correction parameters and automatic correction of the projection picture is completed.
In some embodiments, the projection device may correct the projected picture when a correction instruction is detected. The association relation among the distance, the horizontal included angle and the offset angle can be created in advance, and then the controller of the projection host 2 determines the target included angle between the light machine and the projection screen 1 at the current moment by acquiring the current distance between the light machine and the projection screen 1 and combining the association relation, so as to realize the correction of the projection picture. The target included angle is embodied as an included angle between the central axis of the optical machine and the projection screen 1.
In some embodiments, the projection device may refocus after auto-correction is completed. The controller detects whether the auto-focus function is on; if it is not on, the controller ends the auto-focus service, and if it is on, the controller performs focus calculation according to the distance detection value of the time-of-flight sensor.
In some embodiments, the controller queries a preset mapping table according to the distance detection value of the time-of-flight sensor; the preset mapping table records the mapping relation between distance and focal length, so the focal length of the projection device corresponding to the distance detection value can be obtained. The middleware sends the obtained focal length to the optical machine of the projection device. After the optical machine performs laser emission according to the focal length, at least one image collector shoots a projection content image, and the controller performs definition detection on the projection content image to determine whether the current focal length of the lens is appropriate; if the focal length is not appropriate, focusing is needed. The projection device locates the focusing position with the highest definition by adjusting the lens position and comparing the definition changes of the projection content images captured before and after each adjustment.
If the evaluation result does not meet the preset completion condition, the middleware finely adjusts the focal length parameter of the optical machine of the projection device, for example, gradually fine-tuning the focal length according to a preset step length, and sets the adjusted focal length parameter on the optical machine again. Finally, through repeated steps of photographing and definition evaluation, the optimal focal length is locked, completing the auto-focus.
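The mapping-table lookup and fine-adjustment loop described above might be sketched as follows. The table values, step size, iteration limit, and sharpness metric are illustrative assumptions, not values from the patent.

```python
# Hypothetical distance (metres) -> focal length (arbitrary units) mapping table.
DISTANCE_TO_FOCUS = [(1.0, 10.0), (2.0, 14.0), (3.0, 17.0)]

def lookup_focus(distance):
    """Linearly interpolate the preset distance/focal-length mapping table."""
    pts = DISTANCE_TO_FOCUS
    if distance <= pts[0][0]:
        return pts[0][1]
    for (d0, f0), (d1, f1) in zip(pts, pts[1:]):
        if d0 <= distance <= d1:
            return f0 + (distance - d0) / (d1 - d0) * (f1 - f0)
    return pts[-1][1]

def fine_tune(focus, sharpness_of, step=0.1, max_iters=20):
    """Hill-climb the focal length by comparing sharpness scores of images
    captured before and after each small adjustment."""
    best, best_score = focus, sharpness_of(focus)
    for _ in range(max_iters):
        improved = False
        for cand in (best - step, best + step):
            score = sharpness_of(cand)
            if score > best_score:
                best, best_score, improved = cand, score, True
        if not improved:
            break  # completion condition: no neighbouring focal length is sharper
    return best
```

In a real device `sharpness_of` would capture a projection content image at the candidate focal length and compute a definition metric such as gradient energy.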
In some embodiments, at least a lens, a distance sensor, and an image collector, which may include one or more cameras, are disposed on the first plane of the projection host 2. The first plane is a plane parallel to and opposite to the projection screen 1 on the projection host 2 when projected. It should be noted that, the hardware and software configuration and the system architecture of the projection device are not limited to examples of the alternative embodiments of the present application.
During operation of the projection device, cooling by a fan is required because of heat generated by factors such as light-source irradiation. Referring to the schematic diagram of the relative positions of the microphone array and the fan in the projection device shown in fig. 6, additional noise is generated while the fan runs; with the voice function turned on, each microphone in the microphone array collects this noise, so the collected audio data is unclear and normal use of the voice function is affected.
To overcome the above problems, in some alternative embodiments, referring to fig. 7, an audio noise reduction method is provided, which is applied to a controller in a projection device, and may include the steps of:
S710, controlling the first microphone to collect the first audio data and controlling the second microphone to collect the second audio data synchronously.
The first microphone is the microphone in the microphone array that is close to the standard voice initiating position, i.e., the signal source during wind-noise optimization, and the second microphone is the microphone in the microphone array that is close to the fan, i.e., the noise source during wind-noise optimization.
The standard voice initiation position may be preset by a technician based on experience or a number of experiments, and may be, for example, a position point at a preset distance from the center of the projection device. Alternatively, the preset distance may be 25 cm.
The first microphone and the second microphone may be preset, and in this embodiment, the selection mechanism of the first microphone and the second microphone is not limited, and only the first microphone is required to be close to the standard voice initiation position, and the second microphone is required to be close to the fan setting position.
It can be understood that, because the first microphone is close to the standard voice initiating position, the first microphone can be understood as a microphone capable of collecting clearer user voice instructions (i.e. effective voice information), and accordingly, the effective voice information carried in the first audio data is richer. Because the second microphone is arranged close to the fan, the second microphone can be understood as a microphone capable of collecting clearer fan noise, and correspondingly, the fan noise carried in the second audio data is more comprehensive.
It is worth noting that the first microphone and the second microphone are controlled to synchronously collect audio data, so that the same collection environment of the collected audio data can be ensured, and the situation that effective voice information is filtered out by mistake due to different collection environments during subsequent noise reduction processing is avoided.
In an alternative implementation manner, the first microphone can be controlled to collect the first audio data only when the voice function is started, and the second microphone can be controlled to collect the second audio data synchronously, so that the waste of computational resources caused by unnecessary data operation of the controller when the voice function is not started is avoided.
Because the energy of the fan noise is positively correlated with the fan speed, that is, the higher the wind speed, the louder the wind noise, in another alternative implementation the first microphone may be controlled to collect the first audio data, and the second microphone controlled to synchronously collect the second audio data, only when the voice function is turned on and the fan rotation speed reaches a set rotation speed threshold. The rotation speed threshold can be set or adjusted by a technician as needed or based on experience, or determined through extensive experiments. The benefit is that when the fan rotation speed does not reach the set threshold, the fan noise does not affect the voice function, or affects it only slightly; in that case there is no need to spend computational resources on subsequent wind-noise filtering, which further saves unnecessary computational expenditure.
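The capture gating just described can be sketched as a simple predicate; the function name and the threshold value are illustrative assumptions.

```python
FAN_RPM_THRESHOLD = 2000  # hypothetical value for the set rotation speed threshold

def should_collect(voice_enabled, fan_rpm, require_fan_threshold=True):
    """Decide whether to start synchronous capture on the first microphone
    (near the standard voice initiating position) and the second microphone
    (near the fan)."""
    if not voice_enabled:
        return False  # avoid wasting compute while the voice function is off
    if require_fan_threshold and fan_rpm < FAN_RPM_THRESHOLD:
        return False  # below the threshold, fan noise is considered negligible
    return True
```

Setting `require_fan_threshold=False` corresponds to the first implementation, which gates only on the voice function.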
S720, according to the second audio data, noise reduction processing is carried out on the first audio data, and target audio data are obtained.
For example, the noise characteristic data in the second audio data may be extracted, and the noise characteristic data may be subtracted from the first audio data, thereby achieving the purpose of noise suppression and obtaining relatively clear target audio data.
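One common way to realize such a subtraction is spectral subtraction. The following minimal sketch (assuming NumPy, a single frame, and no windowing) is an illustration of the idea, not the patent's exact method.

```python
import numpy as np

def spectral_subtract(primary, noise_ref, floor=0.0):
    """Subtract the magnitude spectrum of the fan-side reference (second
    audio data) from the voice-side signal (first audio data) and
    resynthesise using the voice-side phase."""
    P = np.fft.rfft(primary)
    N = np.fft.rfft(noise_ref)
    mag = np.maximum(np.abs(P) - np.abs(N), floor)  # clamp negative magnitudes
    return np.fft.irfft(mag * np.exp(1j * np.angle(P)), n=len(primary))
```

In practice the signals would be processed frame by frame with an STFT, overlap-add reconstruction, and an over-subtraction factor to limit musical noise.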
In some embodiments, a feature extraction network may be invoked to perform feature extraction on the second audio data to obtain noise feature data. The feature extraction network may be implemented based on a traditional machine learning model or a deep learning model, and the network structure of the feature extraction network is not limited in this embodiment.
It is noted that the feature extraction network may be built in the projection device locally, or may be set in the cloud server to reduce the computational cost of the projection device, and only the interface call of the feature extraction network needs to be ensured when feature extraction is required.
The above-mentioned alternative embodiment obtains the target audio data by setting the first microphone near the standard voice initiating position in the microphone array and the second microphone near the fan, and performing noise reduction processing on the first audio data synchronously collected by the first microphone according to the second audio data collected by the second microphone. In the technical scheme, the second microphone is arranged close to the fan, so that the noise characteristic of the fan carried in the second audio data is more obvious, and the first microphone is arranged close to the standard voice initiating position, so that the voice data carried in the first audio data is more obvious theoretically. Therefore, the second audio data is adopted to carry out noise reduction processing on the first audio data, so that the situation that the noise reduction effect is poor due to the fact that the noise characteristics of the fan are not obvious can be avoided, and the definition of the target audio data is ensured. In addition, the technical scheme can realize the filtering of the fan noise without drawing the amplitude-frequency response correction curve of the low-precision microphone and the correction curve of the loudspeaker, reduces the operand of the noise filtering process and improves the noise reduction efficiency. Meanwhile, in the noise reduction processing process, a pure software processing mode is adopted, and only the existing hardware equipment is relied on, so that other hardware equipment is not required to be additionally introduced, and the hardware cost is reduced.
Based on the technical solutions of the above embodiments, the present application further provides some optional embodiments, where the selection mechanism of the first microphone is refined.
Referring to fig. 8, a first microphone selection step includes:
S810, with the fan not operating, controlling each microphone in the microphone array to collect a sound signal of a test frequency band emitted by a preset sound source arranged at the standard voice initiating position, so as to obtain a first test audio for each microphone.
The preset sound source may be a dedicated test speaker configured to emit a sound signal in the test frequency band. The test frequency band may be set by a technician according to needs or experience, or determined through extensive testing; for example, it may be 100 Hz to 8000 Hz.
It is noted that, with the fan not operating, the preset sound source set at the standard voice initiating position eliminates interference from fan operation and simulates a user initiating a voice signal. At this time, the first test audio collected by each microphone in the microphone array can be regarded as audio data carrying effective voice information.
S820, for each microphone, performing spectrum analysis on the first test audio of the microphone to obtain a first frequency response curve of the microphone.
For each microphone, the first test audio of the microphone is subjected to spectrum analysis, converting the time-domain signal into a frequency-domain signal, which effectively yields the sound intensity at different frequencies; the fluctuation of the sound intensity reflects the distribution of the effective information in the collected first test audio. Thus, by converting the first test audio in the time domain into a first frequency response curve in the frequency domain, the information hidden in the sound signal can be extracted.
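The time-to-frequency conversion can be illustrated with a basic FFT-based sketch (NumPy assumed; framing, averaging and smoothing are omitted for brevity, and the function name is not from the patent).

```python
import numpy as np

def frequency_response_db(audio, sample_rate):
    """Convert a time-domain test recording into a (frequency, sound
    intensity in dB) curve via the FFT, i.e. a frequency response curve."""
    windowed = audio * np.hanning(len(audio))        # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    return freqs, 20.0 * np.log10(spectrum + 1e-12)  # +eps avoids log(0)
```

Applied to a first test audio, the returned pair is exactly the data plotted in the first frequency response curve.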
It should be noted that, in the first test audio acquisition stage, in order to facilitate calculation, a microphone may be additionally disposed at the center of the microphone array, and the sound signal sent by the preset sound source is corrected by the microphone, so that the first frequency response curve obtained by spectrum analysis is changed from arched to straight, and the sound intensity presented by the first frequency response curve is more visual.
S830, selecting a first microphone from the microphones according to the first frequency response curve of each microphone.
For example, since the first test audio of the different microphones is collected synchronously, the difference of the sound intensities of the first frequency response curves of the different microphones can reflect the distance between the different microphones and the preset sound source in the same frequency band. Thus, a first microphone relatively close to the preset sound source, i.e. close to the standard speech originating location, can be selected from the microphones by the difference between the first frequency response curves of the microphones.
In some embodiments, a first key frequency band with relatively abundant sound information can be selected from the test frequency bands according to the peak and trough distribution condition of the first frequency response curves of the microphones, and a microphone corresponding to a curve segment with larger fluctuation amplitude (for example, the largest) is selected as the first microphone from curve segments corresponding to the first key frequency band according to the first frequency response curves.
It should be noted that the selection of the first microphone may be performed when the projection device leaves the factory, and subsequent processing during use of the projection device may then proceed directly according to the selection result. Of course, considering that the relative positional relationship between each microphone and the standard voice initiating position may later change due to factors such as equipment aging, equipment maintenance or component replacement, the selection of the first microphone can also be performed periodically, according to a preset period, during the use stage of the projection device.
According to the above-mentioned alternative embodiment, under the condition that the fan does not operate, each microphone in the microphone array is controlled to collect a sound signal of a test frequency band emitted by a preset sound source arranged at a standard voice initiating position, so as to obtain first test audios of each microphone, and spectrum analysis is performed on each first test audio, so that a first frequency response curve with richer carrying information is obtained. The first frequency response curves corresponding to different microphones can reflect the sound intensity of each microphone at different frequencies, so that the sound collecting capacity of the microphones is effectively displayed, and the distance situation of the microphones from the standard voice initiating position is further effectively reflected. Correspondingly, through the first frequency response curves of different microphones, the first microphone close to the standard voice initiating position can be effectively selected from the microphone array, so that the selected first microphone is more accurate, and the situation that the first microphone is selected by mistake due to factors such as systematic errors in the manufacturing process of projection equipment is avoided.
Based on the technical solutions of the foregoing alternative embodiments, the present application further provides alternative embodiments, where the selecting step of the first microphone in S830 is refined.
Referring to fig. 9A, the selecting step of the first microphone includes:
S910, determining a first reference frequency response curve according to the first frequency response curve of each microphone.
For example, one of the first frequency response curves with the representativeness can be selected as the first reference frequency response curve according to the change condition of the first frequency response curve of each microphone.
Alternatively, the first frequency response curve with relatively severe curve fluctuation can be used as the first reference frequency response curve, alternatively, a curve deviating from other first frequency response curves in each first frequency response curve can be used as the first reference frequency response curve, or the first reference frequency response curve can be determined according to the average value of the first frequency response curves of each microphone.
It is worth noting that the average value of the first frequency response curves of different microphones is taken to generate the first reference frequency response curve, a mode of selecting the first reference frequency response curve from the first frequency response curves is replaced, analysis of curve fluctuation conditions is not needed, comparison analysis of deviation conditions of different first frequency response curves is not needed, operation is more convenient, operation amount is small, and improvement of selection efficiency of the first microphones is facilitated. In addition, since there is only one microphone in the microphone array near the standard voice initiating position, the first reference frequency response curve determined by means of averaging is usually closer to the first frequency response curve of the first microphone, so that the accuracy of the subsequently selected first microphone is ensured.
S920, selecting a first microphone from the microphones according to the difference between the first frequency response curve of each microphone and the first reference frequency response curve.
For example, a microphone corresponding to a first frequency response curve having a small (e.g., the smallest) difference from the first reference frequency response curve may be selected from among the microphones as the first microphone.
In an alternative embodiment, a frequency response weight determination function may be introduced to numerically quantify the difference between the first frequency response curves of the different microphones and the first reference frequency response curve. Wherein the frequency response weight determination function is a monotonic function of the distance between the first frequency response curve and the first reference frequency response curve.
Alternatively, the frequency response weight determination function may be a monotonically increasing function, and correspondingly a microphone with a smaller (e.g., the smallest) frequency response weight is selected as the first microphone. Or the frequency response weight determination function may be a monotonically decreasing function, and correspondingly a microphone with a larger (e.g., the largest) frequency response weight is selected as the first microphone.
In an alternative implementation, for each microphone, an accumulated response bias of the microphone may be determined according to the distance between the first frequency response curve of the microphone and the first reference frequency response curve, and the frequency response weight of the microphone may be determined according to its accumulated response bias. The microphone with the largest frequency response weight is selected as the first microphone if the frequency response weight is inversely proportional to the corresponding accumulated response bias, or the microphone with the smallest frequency response weight is selected as the first microphone if the frequency response weight is directly proportional to the corresponding accumulated response bias.
Specifically, at least one frequency test point may be determined by uniformly sampling the test frequency band, where the sampling interval can be set or adjusted by a technician as needed or based on experience, or determined through extensive experiments. The offset distance between the first frequency response curve of the microphone and the first reference frequency response curve is determined at each frequency test point, and the accumulated response bias of the corresponding microphone is determined from the average or accumulated value of the offset distances at the different frequency test points; the accumulated response bias numerically quantifies the difference between the first frequency response curve of the microphone and the first reference frequency response curve.
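The accumulated response bias just described might be computed as follows (a sketch; uniform index sampling stands in for the uniformly sampled frequency test points, and the function name is illustrative).

```python
import numpy as np

def cumulative_response_bias(curve, reference, num_points=None):
    """Average absolute offset between a microphone's first frequency
    response curve and the reference curve, evaluated at uniformly
    sampled frequency test points (all samples when num_points is None)."""
    curve = np.asarray(curve, dtype=float)
    reference = np.asarray(reference, dtype=float)
    if num_points is not None:
        idx = np.linspace(0, len(curve) - 1, num_points).round().astype(int)
        curve, reference = curve[idx], reference[idx]
    return float(np.mean(np.abs(curve - reference)))
```

The accumulated value (a sum rather than a mean) would work equally well, since only the ranking between microphones matters for selection.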
For example, the frequency response weight of the microphone may be determined based on the cumulative response bias of the microphone. Alternatively, the microphone with the largest frequency response weight is selected as the first microphone in the case that the frequency response weight is inversely proportional to the corresponding accumulated response bias, or alternatively, the microphone with the smallest frequency response weight is selected as the first microphone in the case that the frequency response weight is directly proportional to the corresponding accumulated response bias.
Specifically, the following formula may be used to determine the frequency response weight of each microphone:

W_j = 1 / [ (1/N) * Σ_{i=1..N} ABS( FR_ij - (1/M) * Σ_{k=1..M} FR_ik ) ]
wherein W j is the frequency response weight of the jth microphone, N is the total number of test frequency points in the test frequency band, FR ij is the sound intensity of the jth microphone under the ith frequency test point, ABS () is an absolute value function, and M is the total number of microphones in the microphone array.
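Under the assumption that the weight is the reciprocal of the mean absolute deviation from the array-average curve (a reconstruction based on the symbol definitions above; the patent's exact expression is not reproduced here), the computation might look like:

```python
import numpy as np

def frequency_response_weights(curves):
    """Weight per microphone: the reciprocal of its mean absolute deviation
    from the array-average response curve, so that a smaller accumulated
    response bias yields a larger weight. `curves` has shape
    (M microphones, N frequency test points), sound intensity in dB."""
    curves = np.asarray(curves, dtype=float)
    reference = curves.mean(axis=0)                 # first reference curve
    bias = np.abs(curves - reference).mean(axis=1)  # accumulated response bias
    return 1.0 / np.maximum(bias, 1e-12)            # inverse-proportional weight
```

The microphone with the largest weight, i.e. the curve closest to the reference, would then be selected as the first microphone, consistent with the mic3 example.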
Fig. 9B illustrates a first frequency response curve corresponding to each microphone, taking 4 microphones (mic 1 to mic 4) included in the microphone array as an example. Wherein, the abscissa is frequency in Hz, and the ordinate is sound intensity in dB. Correspondingly, an average curve of the four first frequency response curves is determined by means of averaging, so that a first reference frequency response curve (corresponding to average) is obtained.
Referring to fig. 9C, the response bias curve (mic bias) of each microphone may be obtained by using the absolute value of the difference between the first frequency response curve and the first reference frequency response curve of each microphone, to characterize the deviation of each first frequency response curve and the first reference frequency response curve at different frequency test points.
For example, the frequency response weights of the microphones shown in the table below can be obtained through the above weight-determination formula, and mic3, which has the highest weight, is selected as the first microphone, that is, as the signal source for wind-noise optimization.
Microphone numbering    Frequency response weight
mic1                    1.892822
mic2                    0.667389
mic3                    2.208202
mic4                    0.775623
According to this alternative embodiment, a first reference frequency response curve is introduced, and the selection of the first microphone is assisted by the difference between each first frequency response curve and the first reference frequency response curve; the operation is convenient and quick, improving the selection efficiency of the first microphone.
Based on the technical solutions of the above alternative embodiments, the present application further provides some alternative embodiments, in which the selection mechanism of the second microphone is refined.
Referring to fig. 10, the selecting step of the second microphone includes:
S1010, controlling each microphone in the microphone array, and collecting sound signals emitted by the fan under the condition that the fan is running to obtain second test audio of each microphone.
Under the condition that the fan operates, each microphone is controlled to collect sound signals sent by the fan, so that the collected second test audios carry abundant fan noise information.
S1020, performing spectrum analysis on the second test audio of the microphone according to each microphone to obtain a second frequency response curve of the microphone.
For each microphone, spectrum analysis is performed on the second test audio of the microphone, converting the time-domain signal into a frequency-domain signal, which effectively yields the sound intensity at different frequencies. The fluctuation of the sound intensity reflects the distribution of the effective information in the collected second test audio, so converting the time-domain second test audio into a frequency-domain second frequency response curve extracts information hidden in the sound signal.
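As a sketch of this spectrum-analysis step (time domain to frequency domain), the snippet below computes a frequency response curve from a test audio clip. The Hann window, dB scaling, and sample rate are assumptions; the description does not fix the analysis parameters:

```python
import numpy as np

def frequency_response_curve(test_audio, sample_rate):
    """Turn a time-domain test audio clip into a frequency response curve
    (sound intensity in dB per frequency bin) via a real FFT."""
    windowed = test_audio * np.hanning(len(test_audio))  # taper to reduce leakage
    magnitude = np.abs(np.fft.rfft(windowed)) + 1e-12    # epsilon avoids log(0)
    intensity_db = 20.0 * np.log10(magnitude)
    freqs = np.fft.rfftfreq(len(test_audio), d=1.0 / sample_rate)
    return freqs, intensity_db

# One second of an invented 1 kHz test tone at an assumed 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 1000 * t)
freqs, curve = frequency_response_curve(audio, sr)
peak_hz = float(freqs[np.argmax(curve)])   # strongest bin sits at the tone frequency
```

A real implementation would typically average several windowed frames per microphone; a single frame is used here only to keep the sketch short.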
S1030, selecting a second microphone from the microphones according to the second frequency response curve of each microphone.
For example, since the second test audio of the different microphones is collected synchronously, the difference of the sound intensities of the second frequency response curves of the different microphones can reflect the distance between the different microphones and the fan in the same frequency band. Thus, a second microphone relatively close to the fan position can be selected from the microphones by the difference between the second frequency response curves of the microphones.
In some embodiments, a second key frequency band with relatively abundant sound information may be selected from the test frequency band according to the peak and trough distribution of the second frequency response curve of each microphone; then, among the curve segments corresponding to the second key frequency band, the microphone whose curve segment fluctuates with the larger (for example, the largest) amplitude is selected as the second microphone.
For example, for each microphone, the cumulative response intensity of the microphone may be determined according to the second frequency response curve of the microphone, the wind noise response weight of the microphone may be determined according to the cumulative response intensity of the microphone, the microphone with the smallest wind noise response weight may be selected as the second microphone in the case that the wind noise response weight is inversely proportional to the cumulative response intensity bias, or the microphone with the largest wind noise response weight may be selected as the second microphone in the case that the wind noise response weight is directly proportional to the cumulative response intensity bias.
Specifically, at least one frequency test point is determined by uniformly sampling the test frequency band, where the sampling interval may be set or adjusted by a technician according to needs or experience, or determined through repeated experiments. The average or accumulated value of the sound intensities at the frequency test points on the second frequency response curve of a microphone yields the cumulative response intensity of that microphone, which numerically quantifies the microphone's capacity for collecting fan noise.
For example, the wind noise response weight of the microphone may be determined based on the cumulative response strength of the microphone. Optionally, a microphone with the smallest wind noise response weight is selected as the second microphone under the condition that the wind noise response weight is inversely proportional to the accumulated response intensity bias, or alternatively, a microphone with the largest wind noise response weight is selected as the second microphone under the condition that the wind noise response weight is directly proportional to the accumulated response intensity bias.
Specifically, the following formula may be used to determine the wind noise response weight of each microphone:
$$Q_j = \sum_{i=1}^{N} \mathrm{ABS}\left( SPECTRUM_{ij} \right)$$
wherein Q_j is the wind noise response weight of the j-th microphone, N is the total number of frequency test points in the test frequency band, SPECTRUM_ij is the sound intensity of the j-th microphone at the i-th frequency test point, and ABS() is the absolute-value function.
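Assuming the wind noise response weight is simply the cumulative absolute sound intensity over the frequency test points (the directly proportional case, where the largest weight marks the microphone nearest the fan), a minimal sketch:

```python
import numpy as np

def wind_noise_weights(spectrum):
    """Sketch of the assumed wind-noise-weight formula: cumulative absolute
    sound intensity over the N frequency test points of each microphone's
    second frequency response curve.

    spectrum: (N, M) array, N frequency test points, M microphones.
    """
    return np.abs(spectrum).sum(axis=0)

# Invented fan-noise intensities for 3 microphones at 3 frequency test points;
# the middle microphone is imagined closest to the fan.
spec = np.array([[40.0, 52.0, 41.0],
                 [38.0, 55.0, 39.0],
                 [42.0, 57.0, 40.0]])
q = wind_noise_weights(spec)
second_mic = int(np.argmax(q))   # proportional case: largest weight wins
```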
The selection of the second microphone may be performed when the projection device leaves the factory, and during subsequent use of the projection device the processing may proceed directly from the stored selection result. Of course, considering that the relative positional relationship between the fan and the microphones may change due to factors such as equipment aging, maintenance, or component replacement, the selection of the second microphone may also be repeated periodically, according to a preset period, during the use stage of the projection device.
According to the above alternative embodiment, while the fan is running, each microphone in the microphone array is controlled to collect the sound signal emitted by the fan, so as to obtain the second test audio of each microphone, and spectrum analysis is performed on each second test audio to obtain second frequency response curves carrying richer information. The second frequency response curves of the different microphones reflect the sound intensity of each microphone at different frequencies, effectively exposing each microphone's capacity for collecting fan noise and, in turn, its distance from the fan. Accordingly, through the second frequency response curves of the different microphones, a second microphone close to the fan can be effectively selected from the microphone array, making the selection more accurate and avoiding mis-selection of the second microphone due to factors such as systematic errors in the manufacturing process of the projection device.
On the basis of the technical solutions of the above embodiments, some optional embodiments are provided, and in these optional embodiments, the audio noise reduction process of the projection device is described in detail.
Referring to the audio noise reduction method shown in fig. 11, it includes:
S1101, controlling each microphone in the microphone array to collect, in the case that the fan does not operate, a sound signal of a test frequency band emitted by a preset sound source arranged at a standard voice initiation position, so as to obtain a first test audio of each microphone;
S1102, for each microphone, performing spectrum analysis on the first test audio of the microphone to obtain a first frequency response curve of the microphone;
S1103, determining a first reference frequency response curve according to the average value of the first frequency response curves of the microphones.
S1104, for each microphone, determining the accumulated response bias of the microphone according to the distance between the first frequency response curve of the microphone and the first reference frequency response curve;
S1105, determining the frequency response weight of the microphone according to the accumulated response bias of the microphone;
S1106A, selecting the microphone with the largest frequency response weight as the first microphone under the condition that the frequency response weight is inversely proportional to the corresponding accumulated response bias, or
S1106B, in the case where the frequency response weight is proportional to the corresponding accumulated response offset, selecting the microphone with the smallest frequency response weight as the first microphone.
In this embodiment, S1106A and S1106B are alternatives; either may be implemented, and this is not limited here.
S1107, controlling each microphone in the microphone array, and under the condition that the fan is running, collecting sound signals emitted by the fan to obtain second test audio of each microphone;
S1108, for each microphone, performing spectrum analysis on the second test audio of the microphone to obtain a second frequency response curve of the microphone;
S1109, determining the accumulated response intensity of the microphone according to the second frequency response curve of the microphone;
S1110, determining wind noise response weight of the microphone according to the accumulated response intensity of the microphone;
S1111A, selecting the microphone with the smallest wind noise response weight as the second microphone under the condition that the wind noise response weight is inversely proportional to the accumulated response intensity bias, or
S1111B, selecting a microphone with the largest wind noise response weight as a second microphone under the condition that the wind noise response weight is in direct proportion to the accumulated response intensity bias.
In this embodiment, S1111A and S1111B may be alternatively implemented, which is not limited in this way.
It should be noted that S1101 to S1106A (or S1101 to S1106B) may be performed before or after S1107 to S1111A (or S1107 to S1111B), or in parallel with them; this embodiment is not limited in this regard.
S1112, under the condition that the voice function is started, controlling the first microphone to collect the first audio data and the second microphone to synchronously collect the second audio data;
S1113, performing noise reduction processing on the first audio data according to the second audio data to obtain target audio data.
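S1113 does not pin down a specific noise-reduction algorithm. One common choice that uses the fan-facing microphone as a noise reference is magnitude spectral subtraction; the sketch below is an illustration under that assumption, not the patent's prescribed method (the signal frequencies and the subtraction factor alpha are invented):

```python
import numpy as np

def spectral_subtraction(primary, reference, alpha=1.0):
    """Denoise `primary` (first audio data) using `reference` (second audio
    data) as a noise estimate: subtract the reference magnitude spectrum,
    clip at zero, and resynthesize with the primary signal's phase."""
    p = np.fft.rfft(primary)
    r = np.fft.rfft(reference)
    mag = np.maximum(np.abs(p) - alpha * np.abs(r), 0.0)  # subtract estimated noise
    phase = np.angle(p)                                   # keep primary phase
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(primary))

# Invented one-second signals at an assumed 8 kHz sample rate:
# a 300 Hz stand-in for voice plus a 120 Hz stand-in for fan hum.
sr = 8000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 300 * t)
noise = 0.5 * np.sin(2 * np.pi * 120 * t)
target = spectral_subtraction(speech + noise, noise)      # target audio data
```

Real systems process short overlapping frames and smooth the noise estimate over time; the single whole-signal frame here only keeps the idea visible.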
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an audio noise reduction device for realizing the above-mentioned audio noise reduction method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation of one or more embodiments of the audio noise reduction device provided below may be referred to the limitation of the audio noise reduction method hereinabove, and will not be repeated here.
In one exemplary embodiment, as shown in FIG. 12, an audio noise reduction device is provided that includes an acquisition control module 1210 and a noise reduction processing module 1220. Wherein,
The acquisition control module 1210 is used for controlling a first microphone to acquire first audio data and a second microphone to synchronously acquire second audio data, wherein the first microphone is a microphone which is close to a standard voice initiating position in a microphone array;
the noise reduction processing module 1220 is configured to perform noise reduction processing on the first audio data according to the second audio data, so as to obtain target audio data.
In some embodiments, the acquisition control module 1210 is further configured to control each microphone in the microphone array, and acquire a sound signal of a test frequency band emitted by a preset sound source disposed at a standard voice initiation position to obtain a first test audio of each microphone when the fan is not running, and the audio noise reduction device further includes a spectrum analysis module configured to perform spectrum analysis on the first test audio of each microphone to obtain a first frequency response curve of the microphone, and a microphone selection module configured to select the first microphone from the microphones according to the first frequency response curve of each microphone.
In some embodiments, the microphone selection module comprises a first determination unit, a first selection unit and a second selection unit, wherein the first determination unit is used for determining a first reference frequency response curve according to a first frequency response curve of each microphone, and the first selection unit is used for selecting the first microphone from the microphones according to the difference condition of the first frequency response curve of each microphone and the first reference frequency response curve.
In some embodiments, the first determining unit is specifically configured to determine the first reference frequency response curve according to an average value of the first frequency response curves of the microphones.
In some embodiments, the first selecting unit is specifically configured to select, from the microphones, a microphone corresponding to a first frequency response curve with a smallest difference from the first reference frequency response curve as the first microphone.
In some embodiments, the first selecting unit is specifically configured to determine, for each microphone, an accumulated response bias of the microphone according to a distance between a first frequency response curve of the microphone and a first reference frequency response curve, determine a frequency response weight of the microphone according to the accumulated response bias of the microphone, select, as the first microphone, a microphone with a largest frequency response weight in a case where the frequency response weight is inversely proportional to the corresponding accumulated response bias, or select, as the first microphone, a microphone with a smallest frequency response weight in a case where the frequency response weight is directly proportional to the corresponding accumulated response bias.
In some embodiments, the acquisition control module 1210 is further configured to control each microphone in the microphone array, and acquire a sound signal emitted by the fan to obtain a second test audio of each microphone when the fan is running, and the audio noise reduction device further includes a spectrum analysis module configured to perform spectrum analysis on the second test audio of each microphone to obtain a second frequency response curve of the microphone, and a microphone selection module configured to select the second microphone from the microphones according to the second frequency response curve of each microphone.
In some embodiments, the microphone selection module comprises a second determination unit, a third determination unit and a second selection unit, wherein the second determination unit is used for determining the accumulated response intensity of the microphone according to the second frequency response curve of the microphone, the third determination unit is used for determining the wind noise response weight of the microphone according to the accumulated response intensity of the microphone, the second selection unit is used for selecting the microphone with the smallest wind noise response weight as the second microphone under the condition that the wind noise response weight is inversely proportional to the accumulated response intensity bias, or the microphone with the largest wind noise response weight as the second microphone under the condition that the wind noise response weight is directly proportional to the accumulated response intensity bias.
In some embodiments, the acquisition control module 1210 is specifically configured to control the first microphone to acquire the first audio data and the second microphone to synchronously acquire the second audio data when the voice function is on, or control the first microphone to acquire the first audio data and the second microphone to synchronously acquire the second audio data when the voice function is on and the fan rotation speed reaches a set rotation speed threshold.
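The two capture-trigger policies handled by the acquisition control module 1210 can be summarized in a small predicate; the function name and the threshold handling are illustrative, not from the source:

```python
from typing import Optional

def should_capture(voice_on: bool, fan_rpm: float,
                   rpm_threshold: Optional[float] = None) -> bool:
    """Return True when audio acquisition should start.

    Policy 1 (rpm_threshold is None): capture whenever the voice function
    is on. Policy 2: additionally require the fan speed to have reached
    the set rotation-speed threshold.
    """
    if not voice_on:
        return False
    return rpm_threshold is None or fan_rpm >= rpm_threshold
```

Under policy 2 the noise-reference capture only runs when fan noise is actually significant, which avoids subtracting a noise estimate taken while the fan is quiet.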
The various modules in the audio noise reduction device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In an exemplary embodiment, a computer device, which may be a terminal, is provided, and its internal structure may be as shown in fig. 13. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and an external device. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through WIFI, a mobile cellular network, near-field communication (NFC), or other technologies. The computer program is executed by the processor to implement an audio noise reduction method. The display unit of the computer device is used to form a visual picture, and may be a display screen, a projection device, or a virtual reality imaging device. The display screen may be a liquid crystal display screen or an electronic ink display screen; the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 13 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an alternative embodiment, the computer device shown in FIG. 13 may be the aforementioned projection device, such as a laser television.
In an exemplary embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor performing the steps of the method embodiments described above when the computer program is executed. In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, displayed data, etc.) related to the present application are both information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to meet the related regulations.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a non-volatile computer-readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile memory and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take various forms such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, an artificial intelligence (AI) processor, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the present application.
The foregoing examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.

Claims (10)

1.一种投影设备,其特征在于,包括:1. A projection device, comprising: 风扇,被配置为所述投影设备进行降温;A fan, configured to cool the projection device; 麦克风阵列,包括至少两个麦克风,被配置为采集音频数据;a microphone array, comprising at least two microphones, configured to collect audio data; 控制器,与所述麦克风阵列连接,被配置为:A controller, connected to the microphone array, is configured to: 控制第一麦克风采集第一音频数据,以及第二麦克风同步采集第二音频数据;其中,所述第一麦克风为所述麦克风阵列中靠近标准语音发起位置的麦克风;所述第二麦克风为所述麦克风阵列中靠近所述风扇设置的麦克风;Controlling the first microphone to collect the first audio data, and the second microphone to synchronously collect the second audio data; wherein the first microphone is a microphone in the microphone array close to the standard voice initiation position; and the second microphone is a microphone in the microphone array close to the fan; 根据所述第二音频数据,对所述第一音频数据进行降噪处理,得到目标音频数据。According to the second audio data, the first audio data is subjected to noise reduction processing to obtain target audio data. 2.根据权利要求1所述的投影设备,其特征在于,所述控制器,还被配置为:2. The projection device according to claim 1, wherein the controller is further configured to: 控制所述麦克风阵列中的各麦克风,在所述风扇不运行的情况下,采集设置于标准语音发起位置的预设声源发出测试频段的声音信号,得到各所述麦克风的第一测试音频;Controlling each microphone in the microphone array to collect a sound signal of a test frequency band emitted by a preset sound source set at a standard voice initiation position when the fan is not running, and obtaining a first test audio of each microphone; 针对每一麦克风,对所述麦克风的第一测试音频进行频谱分析,得到所述麦克风的第一频率响应曲线;For each microphone, performing spectrum analysis on a first test audio of the microphone to obtain a first frequency response curve of the microphone; 根据各所述麦克风的第一频率响应曲线,从各所述麦克风中选取所述第一麦克风。The first microphone is selected from the microphones according to a first frequency response curve of each microphone. 3.根据权利要求2所述的投影设备,其特征在于,所述控制器在执行所述根据各所述麦克风的第一频率响应曲线,从各所述麦克风中选取所述第一麦克风时,被配置为:3. 
The projection device according to claim 2, wherein when the controller selects the first microphone from the microphones according to the first frequency response curves of the microphones, the controller is configured as follows: 根据各所述麦克风的第一频率响应曲线,确定第一参考频率响应曲线;Determining a first reference frequency response curve according to the first frequency response curve of each microphone; 根据各所述麦克风的第一频率响应曲线与所述第一参考频率响应曲线的差异情况,从各所述麦克风中选取所述第一麦克风。The first microphone is selected from the microphones according to a difference between the first frequency response curve of each microphone and the first reference frequency response curve. 4.根据权利要求3所述的投影设备,其特征在于,所述控制器在执行所述根据各所述麦克风的第一频率响应曲线,确定第一参考频率响应曲线时,被配置为:4. The projection device according to claim 3, wherein when the controller determines the first reference frequency response curve according to the first frequency response curve of each microphone, the controller is configured to: 根据各所述麦克风的第一频率响应曲线的平均值,确定所述第一参考频率响应曲线。The first reference frequency response curve is determined according to an average value of the first frequency response curves of the microphones. 5.根据权利要求3所述的投影设备,其特征在于,所述控制器在执行所述根据各所述麦克风的第一频率响应曲线与所述第一参考频率响应曲线的差异情况,从各所述麦克风中选取所述第一麦克风,被配置为:5. The projection device according to claim 3, wherein the controller, when executing the step of selecting the first microphone from the microphones according to the difference between the first frequency response curve of each microphone and the first reference frequency response curve, is configured as follows: 从各所述麦克风中,选取与所述第一参考频率响应曲线差异最小的第一频率响应曲线对应的麦克风,作为所述第一麦克风。A microphone corresponding to a first frequency response curve having a minimum difference from the first reference frequency response curve is selected from among the microphones as the first microphone. 6.根据权利要求5所述的投影设备,其特征在于,所述从各所述麦克风中,选取与所述第一参考频率响应曲线差异最小的第一频率响应曲线对应的麦克风,作为所述第一麦克风,包括:6. 
The projection device according to claim 5, characterized in that the step of selecting, from among the microphones, a microphone corresponding to a first frequency response curve having the smallest difference from the first reference frequency response curve as the first microphone comprises: 针对每一麦克风,根据所述麦克风的第一频率响应曲线与所述第一参考频率响应曲线之间的距离,确定所述麦克风的累计响应偏置;For each microphone, determining a cumulative response offset of the microphone according to a distance between a first frequency response curve of the microphone and the first reference frequency response curve; 根据所述麦克风的累计响应偏置,确定所述麦克风的频率响应权重;Determining a frequency response weight of the microphone according to a cumulative response bias of the microphone; 在所述频率响应权重与相应累计响应偏置呈反比的情况下,选取所述频率响应权重最大的麦克风,作为所述第一麦克风;或者,In the case where the frequency response weight is inversely proportional to the corresponding cumulative response bias, selecting the microphone with the largest frequency response weight as the first microphone; or, 在所述频率响应权重与相应累计响应偏置呈正比的情况下,选取所述频率响应权重最小的麦克风,作为所述第一麦克风。In a case where the frequency response weight is proportional to the corresponding cumulative response bias, the microphone with the smallest frequency response weight is selected as the first microphone. 7.根据权利要求1-6任一项所述的投影设备,其特征在于,所述控制器,还被配置为:7. 
The projection device according to any one of claims 1 to 6, characterized in that the controller is further configured to: 控制所述麦克风阵列中的各麦克风,在所述风扇运行的情况下,采集所述风扇发出的声音信号,得到各所述麦克风的第二测试音频;Controlling each microphone in the microphone array to collect a sound signal emitted by the fan when the fan is running, and obtaining a second test audio of each microphone; 针对每一麦克风,对所述麦克风的第二测试音频进行频谱分析,得到所述麦克风的第二频率响应曲线;For each microphone, performing spectrum analysis on a second test audio of the microphone to obtain a second frequency response curve of the microphone; 根据各所述麦克风的第二频率响应曲线,从各所述麦克风中选取所述第二麦克风。The second microphone is selected from the microphones according to the second frequency response curve of each microphone. 8.根据权利要求7所述的投影设备,其特征在于,所述控制器在执行所述根据各所述麦克风的第二频率响应曲线,从各所述麦克风中选取所述第二麦克风时,被配置为:8. The projection device according to claim 7, wherein when the controller selects the second microphone from the microphones according to the second frequency response curve of each microphone, the controller is configured as follows: 根据所述麦克风的第二频率响应曲线,确定所述麦克风的累计响应强度;determining a cumulative response strength of the microphone according to a second frequency response curve of the microphone; 根据所述麦克风的累计响应强度,确定所述麦克风的风噪响应权重;Determining a wind noise response weight of the microphone according to the cumulative response strength of the microphone; 在所述风噪响应权重与累计响应强度偏置呈反比的情况下,选取所述风噪响应权重最小的麦克风,作为所述第二麦克风;或者,In the case where the wind noise response weight is inversely proportional to the cumulative response strength bias, selecting the microphone with the smallest wind noise response weight as the second microphone; or, 在所述风噪响应权重与累计响应强度偏置呈正比的情况下,选取所述风噪响应权重最大的麦克风,作为所述第二麦克风。In a case where the wind noise response weight is proportional to the cumulative response strength bias, the microphone with the largest wind noise response weight is selected as the second microphone. 9.根据权利要求1-6任一项所述的投影设备,其特征在于,所述控制器在执行所述控制第一麦克风采集第一音频数据,以及第二麦克风同步采集第二音频数据时,被配置为:9. 
The projection device according to any one of claims 1 to 6, characterized in that, when controlling the first microphone to collect the first audio data and the second microphone to synchronously collect the second audio data, the controller is configured to:
in the case where the voice function is turned on, control the first microphone to collect the first audio data and the second microphone to synchronously collect the second audio data; or
in the case where the voice function is turned on and the fan speed reaches a set speed threshold, control the first microphone to collect the first audio data and the second microphone to synchronously collect the second audio data.

10. An audio noise reduction method, characterized by comprising:
controlling a first microphone to collect first audio data and a second microphone to synchronously collect second audio data, wherein the first microphone is a microphone in a microphone array that is close to a standard voice initiation position, and the second microphone is a microphone in the microphone array that is arranged close to the fan;
performing noise reduction processing on the first audio data according to the second audio data, to obtain target audio data.
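The selection rule of claim 6 can be sketched in Python. The claim does not fix the distance metric, so a per-bin absolute difference is assumed here, and `select_first_microphone`, `curves`, and `reference` are illustrative names, not terms from the patent:

```python
import numpy as np

def select_first_microphone(curves, reference):
    """Pick the microphone whose first frequency response curve is
    closest to the first reference frequency response curve
    (claim 6, inverse-proportional weight variant).

    curves    : dict mapping microphone id -> per-bin response values
    reference : np.ndarray, the first reference frequency response curve
    """
    weights = {}
    for mic_id, curve in curves.items():
        # Cumulative response offset: total distance to the reference curve.
        offset = np.sum(np.abs(np.asarray(curve) - reference))
        # Inverse-proportional weighting: smaller offset -> larger weight.
        weights[mic_id] = 1.0 / (offset + 1e-9)
    # The microphone with the largest weight is the first microphone.
    return max(weights, key=weights.get)
```

The proportional-weight variant in the claim is the same computation with `weights[mic_id] = offset` and `min(...)` instead of `max(...)`.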
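Claims 7 and 8 — spectrum analysis of each microphone's fan-noise test audio, then selection by cumulative response strength — might look like the following. The claims only require "spectrum analysis", so a plain FFT magnitude spectrum is assumed, and all function and variable names are illustrative:

```python
import numpy as np

def second_frequency_response(test_audio, sample_rate):
    """Magnitude spectrum of a microphone's second test audio (claim 7)."""
    spectrum = np.abs(np.fft.rfft(test_audio))
    freqs = np.fft.rfftfreq(len(test_audio), d=1.0 / sample_rate)
    return freqs, spectrum

def select_second_microphone(test_audios, sample_rate):
    """Pick the microphone that responds most strongly to the fan noise
    (claim 8, proportional-weight variant).

    test_audios : dict mapping microphone id -> recorded fan-noise samples
    """
    weights = {}
    for mic_id, audio in test_audios.items():
        _, spectrum = second_frequency_response(audio, sample_rate)
        # Cumulative response strength: total magnitude across all bins.
        weights[mic_id] = np.sum(spectrum)
    # Largest wind-noise weight -> the microphone nearest the fan.
    return max(weights, key=weights.get)
```

The idea is that the microphone physically closest to the fan picks up the most fan energy during the test, making it the best noise reference.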
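Claim 10's final step (noise reduction of the first audio data using the second audio data) does not name an algorithm. With a dedicated noise-reference microphone, a normalized LMS (NLMS) adaptive filter is one common choice; the sketch below is a minimal example under that assumption, with illustrative names and untuned parameters:

```python
import numpy as np

def nlms_denoise(primary, reference, taps=32, mu=0.5, eps=1e-8):
    """Subtract fan noise from the first microphone's signal, using the
    second (fan-side) microphone as a noise reference.

    primary   : samples from the first microphone (speech + fan noise)
    reference : synchronously captured samples from the second microphone
    """
    w = np.zeros(taps)               # adaptive filter coefficients
    out = np.zeros(len(primary))     # target audio data (denoised)
    for n in range(taps, len(primary)):
        # Current and recent reference samples, newest first.
        x = reference[n - taps + 1:n + 1][::-1]
        noise_est = w @ x                        # estimated fan noise
        e = primary[n] - noise_est               # error = cleaned sample
        w += mu * e * x / (x @ x + eps)          # NLMS coefficient update
        out[n] = e
    return out
```

Because the filter adapts continuously, it can track slow changes in the fan-noise path, which matches the claim 9 trigger of enabling collection only when the fan speed crosses a threshold.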
CN202411884438.0A 2024-12-19 2024-12-19 Projection equipment and audio noise reduction method thereof Active CN119906808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411884438.0A CN119906808B (en) 2024-12-19 2024-12-19 Projection equipment and audio noise reduction method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411884438.0A CN119906808B (en) 2024-12-19 2024-12-19 Projection equipment and audio noise reduction method thereof

Publications (2)

Publication Number Publication Date
CN119906808A true CN119906808A (en) 2025-04-29
CN119906808B CN119906808B (en) 2025-12-26

Family

ID=95472881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411884438.0A Active CN119906808B (en) 2024-12-19 2024-12-19 Projection equipment and audio noise reduction method thereof

Country Status (1)

Country Link
CN (1) CN119906808B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100316234A1 (en) * 2009-06-16 2010-12-16 Seiko Epson Corporation Projector and audio output method
CN104702787A (en) * 2015-03-12 2015-06-10 深圳市欧珀通信软件有限公司 Sound acquisition method applied to MT (Mobile Terminal) and MT
WO2017052550A1 (en) * 2015-09-24 2017-03-30 Intel Corporation Platform noise identification using platform integrated microphone
US20170238109A1 (en) * 2014-08-20 2017-08-17 Zte Corporation Method for selecting a microphone and apparatus and computer storage medium
CN111009255A (en) * 2019-11-29 2020-04-14 深圳市无限动力发展有限公司 Method, apparatus, computer device and storage medium for eliminating internal noise interference
CN116582781A (en) * 2023-05-16 2023-08-11 东莞市阿尔法电子科技有限公司 Wind noise prevention optimization method, device and storage medium based on reference microphone
WO2024099324A1 (en) * 2022-11-08 2024-05-16 深圳洛克创新科技有限公司 Noise reduction method, projection device, and storage medium
CN118588103A (en) * 2024-06-04 2024-09-03 天津大学 A method for analyzing the sound range of distributed microphones in vehicles based on channel weight perception

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100316234A1 (en) * 2009-06-16 2010-12-16 Seiko Epson Corporation Projector and audio output method
US20170238109A1 (en) * 2014-08-20 2017-08-17 Zte Corporation Method for selecting a microphone and apparatus and computer storage medium
CN104702787A (en) * 2015-03-12 2015-06-10 深圳市欧珀通信软件有限公司 Sound acquisition method applied to MT (Mobile Terminal) and MT
WO2017052550A1 (en) * 2015-09-24 2017-03-30 Intel Corporation Platform noise identification using platform integrated microphone
CN111009255A (en) * 2019-11-29 2020-04-14 深圳市无限动力发展有限公司 Method, apparatus, computer device and storage medium for eliminating internal noise interference
WO2024099324A1 (en) * 2022-11-08 2024-05-16 深圳洛克创新科技有限公司 Noise reduction method, projection device, and storage medium
CN116582781A (en) * 2023-05-16 2023-08-11 东莞市阿尔法电子科技有限公司 Wind noise prevention optimization method, device and storage medium based on reference microphone
CN118588103A (en) * 2024-06-04 2024-09-03 天津大学 A method for analyzing the sound range of distributed microphones in vehicles based on channel weight perception

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MADHU, Nilesh: "A Versatile Framework for Speaker Separation Using a Model-Based Speaker Localization Approach", IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, September 2011, 30 December 2010 (2010-12-30) *
ZENG, Qiming: "Indispensable functions when purchasing a smart projector", Dazhong Yongdian (《大众用电》), 30 November 2020 (2020-11-30) *

Also Published As

Publication number Publication date
CN119906808B (en) 2025-12-26

Similar Documents

Publication Publication Date Title
CN114885138B (en) Projection device and automatic focusing method
JP5108093B2 (en) Imaging apparatus and imaging method
US10721420B2 (en) Method and system of adaptable exposure control and light projection for cameras
US8629915B2 (en) Digital photographing apparatus, method of controlling the same, and computer readable storage medium
JP6055681B2 (en) Imaging device
US8373790B2 (en) Auto-focus apparatus, image-pickup apparatus, and auto-focus method
CN110166692A (en) A kind of method and device improving camera automatic focusing accuracy rate and speed
JPWO2019146226A1 (en) Image processing device, output information control method, and program
JP2018503325A (en) System and method for performing operations on pixel data
CN115002433B (en) Projection equipment and ROI feature area selection method
CN115604445A (en) Projection equipment and projection obstacle avoidance method
CN116055696B (en) Projection equipment and projection method
US20130335619A1 (en) Imaging device and imaging method
JP2016540440A (en) Picture processing method and apparatus
WO2023087948A1 (en) Projection device and display control method
US20250211717A1 (en) Projection apparatus and method for projection apparatus
CN116320335A (en) Projection device and method for adjusting projection screen size
CN115243021A (en) Projection device and obstacle avoidance projection method
CN104980647A (en) Image processing apparatus, imaging apparatus, determination method and driving method
KR20250016478A (en) Reducing a flicker effect of multiple light sources in an image
CN119906808B (en) Projection equipment and audio noise reduction method thereof
JP2019191767A (en) Image processing device, information display device, control method and program
US11009675B2 (en) Imaging apparatus, lens apparatus, and their control methods
US20190052803A1 (en) Image processing system, imaging apparatus, image processing apparatus, control method, and storage medium
JPWO2004057531A1 (en) Image composition apparatus and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant