
WO2008139435A2 - System and method for capturing voice interactions in walk-in environments - Google Patents

System and method for capturing voice interactions in walk-in environments

Info

Publication number
WO2008139435A2
Authority
WO
WIPO (PCT)
Prior art keywords
audio signals
unit
face
microphone array
agent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IL2007/000569
Other languages
English (en)
Other versions
WO2008139435A3 (fr)
Inventor
Reuven Knoll
Adrian Loffer
Gal Yechil
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nice Systems Ltd
Original Assignee
Nice Systems Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nice Systems Ltd filed Critical Nice Systems Ltd
Priority to PCT/IL2007/000569
Publication of WO2008139435A2
Publication of WO2008139435A3
Anticipated expiration
Current legal status: Ceased


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 3/00: Automatic or semi-automatic exchanges
    • H04M 3/42: Systems providing special services or facilities to subscribers
    • H04M 3/42221: Conversation recording systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 7/00: Arrangements for interconnection between switching centres
    • H04M 7/006: Networks other than PSTN/ISDN providing telephone service, e.g. Voice over Internet Protocol (VoIP), including next generation networks with a packet-switched transport layer

Definitions

  • the environmental noise in walk-in environments is mainly human speech.
  • the agent may be required to leave his or her regular location facing the client during the interaction. Accordingly, existing voice recording solutions are not suitable for noisy, crowded environments such as walk-in service centers.
  • Fig. 1 is a high-level block diagram of an exemplary walk-in environment according to embodiments of the present invention
  • Fig. 2 is a block diagram of an exemplary end-point of a walk-in environment according to embodiments of the present invention
  • Fig. 3 is a high-level block diagram of an exemplary input agent unit according to embodiments of the present invention.
  • Fig. 4 is a flowchart of a method for capturing agent-client voice interactions at walk-in environments according to embodiments of the present invention. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
  • the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”.
  • the terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
  • a plurality of stations may include two or more stations.
  • the terms “walk-in center” and “walk-in environment” as used herein may be used throughout the specification to describe any place in which a verbal interaction between two or more persons may occur, for example, service centers of service providers, branches of banks, stores and other private, commercial or government points of presence.
  • Walk-in environment 100 may include one or more end-points, for example, end-points 110, 120 and 130, all capable of communicating with a central capture device 140 via a wired or wireless communication network 160.
  • walk-in environment may include end-points 115, 125 and 135, all capable of communicating with a central capture device 145 via network 160.
  • walk-in environment 100 may include any suitable numbers of end-points.
  • an end-point may refer to any kind of frontal, face-to-face point of sale, point of service or any other space in which a verbal interaction between an agent and a client may take place.
  • Each end-point, for example, end-points 110, 120 and 130 may include one or more agent input devices 111, for example a portable microphone to receive audio signals from agents and an input client unit 113 to receive audio signals from one or more clients.
  • Each end-point 110, 120 and 130 may further include an interaction capture unit 112 to capture voice data from agent input device 111 and from input client unit 113.
  • the audio signals captured by interaction capture unit 112 may be created by at least one agent and at least one client during a face-to-face verbal interaction occurring at the location of the respective end-point 110, 120 or 130.
  • although input client unit 113 and capture unit 112 are shown as stand-alone units, it should be understood by a person skilled in the art that the invention is not limited in this respect; according to embodiments of the present invention, input client unit 113 and capture unit 112 may be embedded in the same housing.
  • Interaction capture unit 112 may process the captured audio signal, e.g., filter the non-relevant external acoustic sources, and may transmit the processed audio signals via a wired or wireless link to central capture device 140, as described in detail below with reference to Fig. 2.
  • Central capture device 140 may interface one or more end-points, for example, 120 and 130 in environment 100, and may transfer the processed audio signals of a verbal interaction to one or more storage units, e.g., storage unit 150.
  • central capture device 140 may receive the audio signals from interaction capture units 112 and may process the audio signals before transferring them to storage unit 150.
  • central capture device 140 may combine the audio signals captured by agent input device 111 and the signals captured by input client unit 113 to a synchronized audio signal of an entire face-to-face interaction. In some embodiments, such processing may be performed by interaction capture unit 112 and central capture device 140 may separate the audio signals before transferring them to storage unit 150.
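The combining step described above can be sketched as a simple mix of two mono sample streams. This is a minimal illustration, not the patent's method; `client_offset`, the client stream's start delay in samples, is a hypothetical parameter, since the text does not say how synchronization is achieved.

```python
def combine_streams(agent, client, client_offset=0):
    """Mix two mono sample streams into one synchronized stream.

    `client_offset` (hypothetical) shifts the client stream by a
    number of samples relative to the agent stream before mixing.
    """
    length = max(len(agent), client_offset + len(client))
    mixed = [0.0] * length
    for i, s in enumerate(agent):
        mixed[i] += s                   # agent channel
    for i, s in enumerate(client):
        mixed[client_offset + i] += s   # client channel, shifted
    return mixed
```

In a real recorder the offset would come from timestamps or a shared clock; here it is simply a parameter.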
  • central capture device 140 may be implemented using any suitable combination of software and/or hardware, and may be implemented as a stand-alone unit or as a part of storage unit 150.
  • Central capture device 140 may be coupled to communication network 160 to deliver the processed audio signals, for storage at storage unit 150 or live-monitoring at terminal 170.
  • Storage unit 150 and/or terminal 170 may be coupled to or may be a part of quality assurance or quality management system 180, which may be used for validating that the walk-in environment activities are being performed effectively and efficiently.
  • input client unit 113 may include a directional microphone or one or more closely positioned microphones to act like a highly directional microphone in order to detect the audio signals, e.g., voice created by client, as is further described in Fig. 2.
  • input client unit 113 may be implemented using a microphone array, which may include a plurality of microphones which may optimize the signal-to-noise ratio (SNR) of the detected audio signal created by client 220 (of Fig. 2).
  • Input client unit 113 may achieve high directionality by taking advantage of the fact that an incoming acoustic wave arrives at each of the microphones at a slightly different time or phase.
  • throughout the specification, input client unit 113 is referred to as a microphone array. It should be understood by a person skilled in the art that the invention is not limited in this respect; according to embodiments of the present invention, other devices having directional-microphone functionality are likewise applicable.
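The arrival-time observation above is the basis of delay-and-sum beamforming. The sketch below assumes a linear array along the x-axis, a far-field plane wave and integer-sample delays; none of these specifics come from the text.

```python
import math

def delay_and_sum(signals, mic_x, angle_deg, fs, c=343.0):
    """Delay-and-sum beamformer: align each microphone's samples for a
    plane wave arriving from `angle_deg` (0 = along the array axis),
    then average across microphones."""
    angle = math.radians(angle_deg)
    # Extra travel time to each microphone, in whole samples.
    delays = [round(fs * x * math.cos(angle) / c) for x in mic_x]
    base = min(delays)
    delays = [d - base for d in delays]
    n = min(len(s) - d for s, d in zip(signals, delays))
    return [sum(s[i + d] for s, d in zip(signals, delays)) / len(signals)
            for i in range(n)]
```

Sound arriving from the steering angle adds coherently across microphones, while off-axis sound is averaged down; that difference is what gives the array its directivity.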
  • Communication network 160 may be a Local Area Network (LAN), a Wireless LAN (WLAN), a Metropolitan Area Network (MAN), a Wireless MAN (WMAN), a Wide Area Network (WAN), a Wireless WAN (WWAN) and networks operating in accordance with existing IEEE 802.11, 802.11a, 802.11b, 802.11e, 802.11g, 802.11h, 802.11i, 802.11n, 802.16, 802.16d, 802.16e standards and/or future versions and/or derivatives and/or Long Term Evolution (LTE) of the above standards.
  • communication network 160 may facilitate an exchange of information packets in accordance with the Ethernet local area networks (LANs).
  • Such Ethernet LANs conform to the IEEE 802.3, 802.3u and 802.3x network standards, published by the Institute of Electrical and Electronics Engineers (IEEE).
  • proprietary interface protocols may be used and/or implemented.
  • Storage unit 150 may be used for voice interaction capturing, storing and retrieval.
  • An exemplary system is sold under the trade name NiceLog™ by NICE Systems Ltd., Ra'anana, Israel, the assignee of this patent application.
  • storage unit 150 may further comprise screen capture and storage components for capturing screen shots and screen-event interactions, and/or video capture and storage components for capturing, storing and retrieving the visual streaming video interaction coming from one or more video cameras, which may be located at one or more of end-points 110, 120 and/or 130.
  • Storage unit 150 may include or may be coupled to a database component in which information regarding the interaction is stored for later query and analysis (not shown).
  • capture elements, such as central capture device 140, and storage elements, such as storage unit 150, may be separated and interconnected over a LAN/WAN or any other IP-based local or wide network, e.g., communication network 160.
  • the storage component 150 which may include a database component (not shown), may either be located at the same location or be centralized at another location covering multiple walk-in environments or branches.
  • the transfer of content such as voice, screen or other media from interaction capture units 112 to central capture device 140 may either be based on proprietary protocols, such as a unique packaging of RTP packets for the voice, or based on standard protocols such as H.323 for VoIP and the like.
  • Fig. 2 is a block diagram of an exemplary end-point of a walk-in environment according to embodiments of the present invention.
  • a single session or an interaction at end-point 200 may include at least two participants: an agent 210 and a client 220.
  • End point 200 may include an input agent unit 230 to detect and capture the audio signals created by agent 210, a microphone array 250 to detect the audio signals created by client 220 and an interaction capture unit 240 to receive, capture and process the audio signals transmitted by microphone array 250 and input agent unit 230.
  • input agent unit 230 may be a portable unit having dimensions small enough to be easily attached to and detached from the agent's clothing or body.
  • agent unit 230 may be a fixed device, e.g., fixed to a desk, a computer or other equipment at the location of end-point 200.
  • agent unit 230 may detect and capture the voice stream created by agent 210 and may filter all external acoustic sources other than the voice of agent 210. Agent unit 230 may further transmit the captured voice stream to local interaction capture unit 240 via a communication connection 260.
  • the transmission may be done via a wireless connection, for example a radio frequency (RF) connection.
  • the transmission may be done via any wired connection, as known in the art.
  • filtering and further processing of the voice stream detected by agent unit 230 may be performed by interaction capture unit 240.
  • Input agent unit 230 may be implemented using hardware components or any suitable combination of software and hardware, as is described in detail below with reference to Fig. 3.
  • Communication connection 260 may be a power-efficient and inexpensive interface, implemented for example, by proprietary unidirectional Wireless Personal Area Network (WPAN) protocols for low power networks, standard Radio Frequency (RF) protocols or proprietary RF protocols.
  • client 220 may be a different person in each interaction and may take various positions within the limited space of end-point 200. Detecting and/or capturing the voice created by client 220 by microphone array 250 may therefore require coping with the various positions and different speakers, and may further require coping with a plurality of possible acoustic and non-acoustic noise sources.
  • Acoustic noise sources may include, for example, direct sound sources, such as other humans, machinery and the like, and ambient sound sources, such as sound waves reflected from all direct sound sources. Additional degradation in speech quality may arise from frequency-domain limitations, as is known in the art. Non-acoustic noise sources may result, for example, from the electronics noise figure (NF) and non-linear distortions of the amplification stages.
  • microphone array 250 may be based on microphone phase array technology and may include one or more microphones which may optimize the signal to noise ratio (SNR) of the detected audio signal created by client 220.
  • Microphone array 250 may include a set of closely positioned microphones to achieve better directionality than a single microphone by taking advantage of the fact that an incoming acoustic wave arrives at each of the microphones at a slightly different time or phase.
  • Non-limiting examples of microphone array designs include a two-element microphone array, a straight four-element microphone array and an L-shaped four-element microphone array.
  • Microphone array 250 may combine the signals detected by all microphones and may act like a highly directional microphone, forming what is also referred to herein as a "beam", a term known in the art. This microphone array beam may be electronically steered to point to the speaker, e.g., client 220. Using microphone array 250 may be mechanically equivalent to using two highly directional microphones: one for scanning the end-point space and measuring the sound level, and the other for pointing to the direction with the highest sound level, e.g., toward client 220.
  • microphone array 250 may detect and/or capture audio signals from client 220 and may transmit these audio signals to local interaction capture unit 240.
  • microphone array 250 may include a microphone array receiving unit 280 to amplify and sample the audio signal detected by microphone array 250.
  • interaction capture unit 240 may include an agent receiving unit 290 to receive the voice transferred from input agent unit 230, a processor 270 coupled to units 280 and 290 to process the received signals and a communication interface unit 275.
  • Processor 270 may further control input agent unit 230 and optionally microphone array 250.
  • processor 270 may sum the voice streams received from input agent unit 230 and microphone array 250 and may deliver a data stream of a complete verbal interaction between agent 210 and client 220.
  • processor 270 may include or may be coupled to a memory unit 278.
  • Memory unit 278 may be used as a buffer to store temporary data, for example, when the communication between capture unit 240 and central capture device 140 is down.
  • types of memory that may be used with embodiments of the present invention may include, for example, a shift register, a Flash memory, a random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM) and the like.
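The buffering role of memory unit 278 described above can be sketched as a bounded FIFO of audio chunks. The drop-oldest policy and the chunk granularity are assumptions for illustration; the text only says the memory buffers data while the link is down.

```python
from collections import deque

class LinkBuffer:
    """Bounded FIFO for audio chunks while the uplink is down.

    When capacity is reached, the oldest chunk is silently dropped
    (an assumed policy; the text does not specify one).
    """
    def __init__(self, capacity):
        self._chunks = deque(maxlen=capacity)

    def store(self, chunk):
        self._chunks.append(chunk)

    def flush(self):
        """Return and clear all buffered chunks, oldest first."""
        out = list(self._chunks)
        self._chunks.clear()
        return out
```

Once the link to the central capture device recovers, `flush()` would hand the backlog to the transmit path.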
  • unit 280 may include one or more amplifiers and one or more analog-to-digital (A/D) converters (not shown) to prepare the detected voice for further processing, such as but not limited to, filtering by processor 270.
  • unit 280 may include an amplifier and an A/D converter for each microphone of microphone array 250.
  • Unit 280 may further contain a control circuitry to transmit control signals from processor 270 to microphone array 250.
  • Microphone array receiving unit 280 may contain other blocks or circuitry.
  • Microphone array receiving unit 280 may be implemented using hardware components or any suitable combination of software and hardware.
  • Microphone array 250 may be positioned in front of client 220 to produce a high-directivity "beam"; the array may be considered an acoustical phased-array antenna with a narrow, controlled main beam and minimal side lobes, formed by processor 270 changing the weight of the signal received from each microphone of microphone array 250.
  • Processor 270 may create the "beam" by, for example, a weighted summation of all microphone array signals or other algorithms, and may control the "movement" of the beam in order to track client 220 by applying mathematical algorithms to the signals received from microphone array receiving unit 280.
  • processor 270 may search for the position of client 220 and may aim the beam in that direction by using, for example, dedicated software.
  • processor 270 may control microphone array 250 to follow the sound source by applying a software tracking algorithm.
  • the tracking algorithm used may be the GBD algorithm of Microsoft®, designed by Ivan Tashev and Henrique S. Malvar.
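The scan-then-point behavior described above (measure the sound level in each direction, then aim the beam at the loudest one) can be sketched as a steered-energy scan. This is a generic stand-in under the same linear-array and integer-delay assumptions as before, not the Microsoft tracking algorithm the text names.

```python
import math

def steered_energy(signals, mic_x, angle_deg, fs, c=343.0):
    """Delay-and-sum toward one angle, then return the output energy."""
    delays = [round(fs * x * math.cos(math.radians(angle_deg)) / c)
              for x in mic_x]
    base = min(delays)
    delays = [d - base for d in delays]
    n = min(len(s) - d for s, d in zip(signals, delays))
    out = [sum(s[i + d] for s, d in zip(signals, delays)) for i in range(n)]
    return sum(v * v for v in out)

def track_source(signals, mic_x, fs, candidates=(0, 45, 90, 135, 180)):
    """Return the candidate steering angle with the highest energy."""
    return max(candidates, key=lambda a: steered_energy(signals, mic_x, a, fs))
```

A real tracker would smooth the estimate over time and scan a finer angle grid; this sketch only shows the scan-and-pick principle.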
  • processor 270 may be a general-purpose processor. Additionally or alternatively, processor 270 may include a digital signal processor (DSP), a microprocessor, a host processor, a controller, a plurality of processors or controllers, a chip, a microchip, one or more circuits, circuitry, a logic unit, a field-programmable gate array (FPGA), an integrated circuit (IC), an application-specific IC (ASIC), or any other suitable multi-purpose or specific processor or controller. In some embodiments of the invention, processor 270 may be implemented as an integrated unit in microphone array 250.
  • agent receiving unit 290 may include an antenna, for example, a dipole antenna 292, to receive the audio signals transferred from input agent unit 230 via the wireless connection, amplifier circuitry, and RF demodulator circuitry (not shown) to demodulate the audio signals received from input agent unit 230.
  • the output of the demodulator circuitry or other circuitry may be further processed by processor 270.
  • Processor 270 may transfer the agent voice stream and the client voice stream as separate channels or in a combined stream via communication interface unit 275 to a higher level, for example, central capture unit 140 of Fig. 1.
  • Interaction capture unit 240 may be in operable communication with central unit 140 via a wired or wireless communication link.
  • Interface communication unit 275 may include circuitry and physical components for transferring the captured and processed voice streams or audio signals via a communication network, e.g., network 160 of Fig. 1, to peripheral units such as central capture unit 140 and/or a personal computer, e.g., the personal computer of agent 210.
  • interface unit 275 may include, for example, layer-2 switch circuitry, physical connectors such as RJ45 connectors and the like. Other circuits and/or physical connectors may be used.
  • the space architecture of endpoint 200 may follow the exemplary specification detailed herein.
  • the distance between client 220 and microphone array 250 may be no more than 1.5 meters
  • the angle between client 220 and interaction capture unit 240 may be no more than ±45 degrees in the horizontal plane
  • the angle between client 220 and endpoint 200 may be in the range of -30 to 45 degrees in the vertical plane.
  • the agent may carry agent unit 230 such that the distance between the agent unit and the agent's mouth may not exceed 0.3 meters, the distance between agent unit 230 and interaction capture unit 240 may not exceed 20 meters, and the distance between microphone array 250 and other direct sound sources at other end-points may be no less than 3 meters. Other distances may be used.
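The exemplary spacing constraints above can be collected into a single validity check. The function and parameter names are mine, for illustration only.

```python
def endpoint_geometry_ok(client_dist_m, h_angle_deg, v_angle_deg,
                         agent_mouth_dist_m, agent_capture_dist_m,
                         neighbor_source_dist_m):
    """True if an end-point layout satisfies the exemplary constraints:
    client within 1.5 m and +/-45 deg horizontally / -30..45 deg
    vertically; agent unit within 0.3 m of the mouth and 20 m of the
    capture unit; other end-points' sound sources at least 3 m away."""
    return (client_dist_m <= 1.5
            and abs(h_angle_deg) <= 45
            and -30 <= v_angle_deg <= 45
            and agent_mouth_dist_m <= 0.3
            and agent_capture_dist_m <= 20
            and neighbor_source_dist_m >= 3)
```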
  • input agent unit 300 may record the audio signal created by, for example, agent 210 in walk-in environment 100.
  • input agent unit 300 may be portable and may have small dimensions to allow simple attachment to an agent's clothing or body, allowing high recording quality without limiting the agent's movement.
  • Input agent unit 300 may include one or more microphones 310, for example, a wireless omnidirectional microphone, to receive and detect the voice of agent 210 of Fig. 2. Any other microphone or microphones may be used.
  • Input agent unit 300 may comprise a processing and control unit 320 to capture the analog voice signal received by microphone 310, to process the signal and to transfer the processed signal to interaction capture unit 240.
  • the received and/or processed signal may be transmitted via antenna 330 which may include or may be for example, a PCB printed folded dipole antenna or any other antenna as is known in the art.
  • processing and control unit 320 may include amplifying circuits and/or other components to amplify the analog audio signal received from and/or detected by microphone 310, an analog-to-digital (A/D) converter to convert the received analog audio signal to a digital signal for further processing and a transmitting circuitry to transmit the processed signal via a wireless connection, e.g., connection 260 of Fig. 2 to interaction capture unit 240.
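The amplify-then-digitize chain in processing and control unit 320 can be sketched with a gain stage and a uniform quantizer. The bit depth and full-scale range are illustrative assumptions, not values from the text.

```python
def amplify_and_digitize(samples, gain, bits=16, full_scale=1.0):
    """Amplify analog samples, clip to the converter's full-scale
    range, and quantize to signed integer codes."""
    q = (2 ** (bits - 1)) - 1  # e.g. 32767 for 16-bit conversion
    codes = []
    for s in samples:
        v = max(-full_scale, min(full_scale, gain * s))  # clipping
        codes.append(round(v / full_scale * q))          # quantization
    return codes
```

A hardware A/D converter does this per sample in silicon; the point of the sketch is only the clip-then-quantize order of operations.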
  • processing and control unit 320 may include circuitry for filtering the external acoustic sources other than the voice of agent 210, and for controlling the transmission of the processed signal according to the required communication protocol, for example, a proprietary RF protocol which may include a handshake, with an RF link band of 2400-2480 MHz. Any other license-free link band may likewise be used.
  • processing and control unit 320 may include a general-purpose processor.
  • processing and control unit 320 may include a digital signal processor (DSP), a microprocessor, a host processor, a controller, a plurality of processors or controllers, a chip, a microchip, one or more circuits, circuitry, a logic unit, an integrated circuit (IC), an application-specific IC (ASIC), or any other suitable multi-purpose or specific processor or controller.
  • agent device 300 may include a power supply 340, which may be, for example, a rechargeable battery such as a lithium-ion battery, a super-iron battery and the like. Power supply 340 may be recharged via charge pins 350, allowing easy maintenance of agent device 300. Although embodiments of the invention are not limited in this regard, power supply 340 may have dimensions small enough to be included in a personal portable device and may work for several hours, e.g., up to 9 hours, without the need to recharge it.
  • Fig. 4 is a flowchart of a method for capturing voice interactions in walk-in environments according to embodiments of the present invention. This procedure, as illustrated, may be performed for each end-point of a walk-in environment. Operations of the method may be implemented, for example, by system 100 of Fig. 1, by any or all of stations or end-points 110, 120 and 130 of Fig. 1, by end-point 200 of Fig. 2, and/or by other suitable units, devices, and/or systems.
  • the method may include receiving, by one or more microphones, audio stream signals of the voice created by a participant of a face-to-face interaction, for example, agent 210 (of Fig. 2).
  • the method may include further processing of the audio signals received at box 410, for example, amplifying the signals, converting them from analog to digital, and filtering external noises and reverberations other than the voice of agent 210.
  • the method may include transmitting the signals processed at box 420 via a communication link, for example, a wireless RF link, to a capture unit, for example, interaction capture unit 240 (of Fig. 2).
  • the method may include receiving, by a microphone array unit, audio stream signals of the voice created by another participant of the face-to-face interaction, for example, client 220 (of Fig. 2).
  • the method may include processing the audio signals received at boxes 440 and 420, for example, beam forming and filtering external noises and reverberations other than the voices of client 220 and agent 210.
  • the method may further include processing of the received signals or controlling of the receiving microphones, e.g., microphone array unit 250 in order to optimize the signal to noise ratio of the received signal, as is described with reference to Fig. 2.
  • processing of the audio signals received at box 410 may be performed in addition or as an alternative to the processing indicated at box 420.
  • the features of the method described at boxes 450 and 440 may be implemented in a single physical unit or, according to other embodiments, in separate physical units.
  • the method may include transmitting the processed signals of the face-to-face interaction to a higher level, for example, central capture unit 140 (of Fig. 1), via a communication network, for example, network 160 (of Fig. 1), for future analysis.
  • Other operations or sets of operations may be used in accordance with embodiments of the invention.
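The flowchart's boxes can be strung together in a toy end-to-end sketch: the agent channel is amplified (boxes 410-420), the client microphones are averaged as a trivial stand-in for the beamforming of boxes 440-450, and both are summed into one interaction stream for delivery (box 460). The gain value and the plain averaging are illustrative only.

```python
def capture_interaction(agent_samples, client_mics, gain=2.0):
    """Toy pipeline: amplify agent audio, average client microphone
    channels, and mix both into a single interaction stream."""
    agent = [gain * s for s in agent_samples]
    n = min(len(m) for m in client_mics)
    client = [sum(m[i] for m in client_mics) / len(client_mics)
              for i in range(n)]
    # Zero-pad the shorter channel so the mix covers the whole interaction.
    length = max(len(agent), len(client))
    agent += [0.0] * (length - len(agent))
    client += [0.0] * (length - len(client))
    return [a + c for a, c in zip(agent, client)]
```

The real system keeps the two channels separable for quality-management playback; summing them here simply makes the end-to-end data flow visible in a few lines.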

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

A device, system and method for capturing and storing a face-to-face voice interaction in a walk-in environment. For example, an end-point in a walk-in environment may include an agent unit to detect an audio signal created by a first participant of a face-to-face interaction and to transmit the audio signal over a wireless communication link; a microphone array to detect an audio signal created by a second participant of the interaction; and a capture unit to receive and process the audio signal from the agent unit and the audio signal from the microphone array and to transmit processed audio signals to a central capture device.
PCT/IL2007/000569 2007-05-10 2007-05-10 System and method for capturing voice interactions in walk-in environments Ceased WO2008139435A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IL2007/000569 WO2008139435A2 (fr) 2007-05-10 2007-05-10 System and method for capturing voice interactions in walk-in environments

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IL2007/000569 WO2008139435A2 (fr) 2007-05-10 2007-05-10 System and method for capturing voice interactions in walk-in environments

Publications (2)

Publication Number Publication Date
WO2008139435A2 2008-11-20
WO2008139435A3 WO2008139435A3 (fr) 2009-04-30

Family

ID=40002717

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2007/000569 Ceased WO2008139435A2 (fr) System and method for capturing voice interactions in walk-in environments

Country Status (1)

Country Link
WO (1) WO2008139435A2 (fr)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6844893B1 (en) * 1998-03-09 2005-01-18 Looking Glass, Inc. Restaurant video conferencing system and method
US20070043608A1 (en) * 2005-08-22 2007-02-22 Recordant, Inc. Recorded customer interactions and training system, method and computer program product

Also Published As

Publication number Publication date
WO2008139435A3 (fr) 2009-04-30

Similar Documents

Publication Publication Date Title
US20080279400A1 (en) System and method for capturing voice interactions in walk-in environments
US11019306B2 (en) Combining installed audio-visual sensors with ad-hoc mobile audio-visual sensors for smart meeting rooms
CN113203988B (zh) Sound source localization method and device
US9094496B2 (en) System and method for stereophonic acoustic echo cancellation
CN103004233B (zh) Electronic device generating a modified wideband audio signal based on two or more wideband microphone signals
US8606249B1 (en) Methods and systems for enhancing audio quality during teleconferencing
US9294839B2 (en) Augmentation of a beamforming microphone array with non-beamforming microphones
US7991167B2 (en) Forming beams with nulls directed at noise sources
US20160100156A1 (en) Smart Audio and Video Capture Systems for Data Processing Systems
EP1278395A2 (fr) Second-order adaptive differential microphone array
US20150078581A1 (en) Systems And Methods For Audio Conferencing
CN108520754B (zh) Noise-reduction conference device
CH702399A2 (fr) Apparatus and method for voice capture and processing
US20080273683A1 (en) Device method and system for teleconferencing
CN111048093A (zh) Conference speaker, and conference recording method, device, system and computer storage medium
CN113645546A (zh) Voice signal processing method and system, and audio/video communication device
WO2008139435A2 (fr) System and method for capturing voice interactions in walk-in environments
US10991392B2 (en) Apparatus, electronic device, system, method and computer program for capturing audio signals
CN109920442A (zh) Method and system for microphone array speech enhancement
CN115512712A (zh) Echo cancellation method, apparatus and device
CN101442696A (zh) Method for filtering out sound noise
CN211047148U (zh) Recording circuit control board and recording device
CN218958974U (zh) All-in-one conference system with one-to-six wireless audio and fisheye video capture
CN112204999A (zh) Audio processing method, device, movable platform and computer-readable storage medium
Arabaci et al. Direction of arrival estimation in reverberant rooms using a resource-constrained wireless sensor network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07736309

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07736309

Country of ref document: EP

Kind code of ref document: A2