US20130039496A1 - System and method for distributed audio recording and collaborative mixing - Google Patents
System and method for distributed audio recording and collaborative mixing
- Publication number
- US20130039496A1 (Application US13/652,461)
- Authority
- US
- United States
- Prior art keywords
- wireless devices
- recording
- audio
- mixer component
- audio recording
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/46—Volume control
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0033—Recording/reproducing or transmission of music for electrophonic musical instruments
- G10H1/0083—Recording/reproducing or transmission of music for electrophonic musical instruments using wireless transmission, e.g. radio, light, infrared
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/403—Arrangements for multi-party communication, e.g. for conferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/152—Multipoint control units therefor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2220/00—Input/output interfacing specifically adapted for electrophonic musical tools or instruments
- G10H2220/091—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
- G10H2220/101—Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith for graphical creation, edition or control of musical data or parameters
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/171—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
- G10H2240/175—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments for jam sessions or musical collaboration through a network, e.g. for composition, ensemble playing or repeating; Compensation of network or internet delays therefor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/171—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
- G10H2240/201—Physical layer or hardware aspects of transmission to or from an electrophonic musical instrument, e.g. voltage levels, bit streams, code words or symbols over a physical link connecting network nodes or instruments
- G10H2240/205—Synchronous transmission of an analog or digital signal, e.g. according to a specific intrinsic timing, or according to a separate clock
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/171—Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
- G10H2240/201—Physical layer or hardware aspects of transmission to or from an electrophonic musical instrument, e.g. voltage levels, bit streams, code words or symbols over a physical link connecting network nodes or instruments
- G10H2240/241—Telephone transmission, i.e. using twisted pair telephone lines or any type of telephone network
- G10H2240/251—Mobile telephone transmission, i.e. transmitting, accessing or controlling music data wirelessly via a wireless or mobile telephone receiver, analogue or digital, e.g. DECT, GSM, UMTS
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
- G10H2240/325—Synchronizing two or more audio tracks or files according to musical features or musical timings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/02—Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
- H04H60/04—Studio equipment; Interconnection of studios
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Circuit For Audible Band Transducer (AREA)
- Transmitters (AREA)
Abstract
Two or more wireless devices can be independently controlled by their respective users, a mixer component, or a leader wireless device to perform audio recording, convert the recorded audio into a standard or proprietary audio stream format, and transmit the audio stream to a server. The real-time clocks of two or more participating wireless devices can be synchronized. A wireless device can insert timestamps into the audio stream to facilitate the mixing operation. Mixing of the two or more audio streams recorded by wireless devices can be performed by a mixer component either in real time (contemporaneously with the recording) or asynchronously with respect to the recording. The mixing can be performed in a fully automated mode, and/or in an operator-assisted mode.
Description
- This application claims priority to U.S. Utility application Ser. No. 12/194,205, filed on Aug. 19, 2008, which claims priority under 35 U.S.C. § 119(e) to the following provisional application: U.S. Ser. No. 60/965,581, filed Aug. 21, 2007, entitled “SYSTEM AND METHOD FOR DISTRIBUTED AUDIO RECORDING AND COLLABORATIVE MIXING”, the content of which is incorporated herein by reference.
- This invention relates generally to wireless devices capable of audio recording, and more specifically to distributed audio recording and collaborative mixing by two or more wireless devices.
- Wireless devices, such as laptop computers, personal digital assistants (PDAs), cellular phones, etc., bring new resources to distributed computing. In addition to typical computational resources such as CPU, disk space, and applications, wireless devices increasingly employ cameras, microphones, GPS receivers, and other types of sensors. A wireless device by definition has at least one wireless communication interface (e.g., cell, radio frequency, Wi-Fi, or Bluetooth™). Users increasingly take wireless devices with them to new places, in both their personal and professional lives. The ability of wireless devices to form ad-hoc grids allows using the available resources in a collaborative manner, by aggregating information from the range of input/output interfaces found in wireless devices, by leveraging the locations and contexts in which wireless devices are located, and finally, by leveraging the mesh network capabilities of wireless devices. Wireless grids allow coordinated collaboration of heterogeneous inherently unreliable devices, across unreliable network connections.
- The inherent unreliability of wireless devices is primarily caused by the fact that those devices are, due to their mobile nature, battery-powered. Thus, reducing power consumption and mitigating the inherent unreliability are two goals of paramount importance.
- Accordingly, there is a need for distributed systems and applications which can assist in achieving both goals by off-loading processing and data management to non-mobile devices, or to wireless devices which can be reached with less transmitter power.
- There is provided a system for distributed audio recording and collaborative mixing by combining audio streams from two or more sources into a single stream that is composed of two or more channels. Leveraging the spatial location of the devices allows producing high-quality multi-channel sound (e.g., stereo sound or surround sound).
- Two or more wireless devices can be located near a sound source, e.g., at a business meeting, a symphony concert, or a live lecture. The wireless devices can be independently controlled by their respective users, by a mixer component, or by a leader wireless device. The wireless devices can convert the recorded audio into a standard or proprietary audio stream format, and transmit the audio stream to a mixer component, which can run on a remote computer.
- The real-time clocks of two or more participating wireless devices can be synchronized. A wireless device can insert timestamps into the audio stream to facilitate the mixing operation.
- Mixing of the two or more audio streams recorded by wireless devices can be performed by a mixer component either in real time (contemporaneously with the recording) or asynchronously with respect to the recording. The mixing can be performed in a fully automated mode, and/or in an operator-assisted mode.
- FIG. 1 illustrates a network level view of a sample embodiment of a system for distributed audio recording and collaborative mixing.
- FIG. 2 illustrates the operation of the mixer component in a fully automated mode.
- FIG. 3 illustrates a sample graphical user interface (GUI) for the operation of the mixer component in an operator-assisted mode.
- The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the drawings, like numerals are used to indicate like parts throughout the various views.
- There is provided a system for distributed audio recording and collaborative mixing by combining audio streams from multiple sources into a single stream that is composed of multiple channels. Leveraging the spatial location of the devices allows producing high-quality multi-channel sound (e.g., stereo sound or surround sound).
- FIG. 1 illustrates a network level view of a sample embodiment of a system 1000 for distributed audio recording and collaborative mixing. Two or more wireless devices 101a-101z can be located near the sound source 100, e.g., at a business meeting, a symphony concert, or a live lecture. The wireless device 101 can include a central processing unit (CPU), a memory, a wireless communications interface (e.g., cell, radio frequency, Wi-Fi, or Bluetooth™), a battery, and a microphone. The wireless device 101 can be provided, e.g., by a cellular phone, a personal digital assistant, a handheld computer, etc.
- The wireless devices 101a-101z can have a user interface and/or an application programming interface (API) allowing at least starting and stopping the audio recording and streaming operations. In one embodiment, the wireless devices 101a-101z can be independently controlled by their respective users via a user interface. In another embodiment, the wireless devices 101a-101z can register with and be controlled by a mixer component (not shown in FIG. 1).
- The mixer component can run on a remote computer 110. A “computer” herein shall refer to a programmable device for data processing, including a central processing unit (CPU), a memory, and at least one communication interface. A computer can be provided, e.g., by a personal computer (PC) running the Linux operating system.
- Computer 110 can be connected to network 180. While different networks are designated herein, it is recognized that a single network as seen from the network layer of the Open System Interconnection (OSI) model can comprise a plurality of lower layer networks (e.g., what can be regarded as a single IP network can include a plurality of different physical networks).
- In one aspect, wireless devices 101a-101c can be provided by PDAs and can connect to network 180 via a wireless access point 114a-114z. In another aspect, wireless devices 101d-101z can be provided by cellular phones and can connect to network 180 via a General Packet Radio Service (GPRS) gateway 150.
- The mixer component can transmit control messages to the wireless devices 101a-101z. The control messages can be encapsulated into, e.g., the Blocks Extensible Exchange Protocol (BEEP). The control messages can include a start recording command and a stop recording command.
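- The disclosure does not define a wire format for these control messages. Purely as an illustration, the sketch below shows how start and stop recording commands might be represented before being framed by a transport such as BEEP; the JSON encoding and field names are assumptions, not part of the patent.

```python
import json
import time

def make_control_message(command, device_id):
    """Build a hypothetical mixer-to-device control message.

    The command vocabulary, field names, and JSON encoding are illustrative
    assumptions; the patent only states that control messages (e.g., start
    and stop recording) can be encapsulated into BEEP.
    """
    if command not in ("start_recording", "stop_recording"):
        raise ValueError("unknown command: %s" % command)
    return json.dumps({
        "command": command,
        "device_id": device_id,          # e.g., "101a"
        "issued_at": time.time(),        # mixer wall-clock time, seconds
    })

# Example: the mixer addressing a hypothetical device "101a".
start_msg = make_control_message("start_recording", "101a")
stop_msg = make_control_message("stop_recording", "101a")
```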
- Upon receiving a start recording command, the wireless device 101 can activate its microphone to start recording. In one embodiment, the wireless device can start transmitting the recorded audio stream back to the mixer component in real time (synchronously with the recording). In another embodiment, the wireless device can buffer the audio stream being recorded and, asynchronously with respect to the recording, transmit the buffered stream back to the mixer component. In yet another embodiment, the wireless device can store the recorded audio stream in its memory for later transmission to a mixer component.
- Upon receiving a stop recording command, the wireless device 101 might stop recording the audio stream. In one embodiment, the wireless device might further stop any synchronous transmission of the audio stream to the mixer component. In another embodiment, the wireless device 101 can further complete any asynchronous transmission of a buffered audio stream to a mixer component.
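- As a rough sketch of this device-side behavior (capture into a buffer, then either stream each chunk as it is recorded or drain the buffer after the stop command), assuming generic `read_audio_chunk()` and `send_to_mixer()` callbacks that the patent does not define:

```python
import queue
import threading

class RecorderDevice:
    """Illustrative recorder: streams chunks synchronously with the
    recording, or buffers them and transmits asynchronously after the
    stop recording command."""

    def __init__(self, read_audio_chunk, send_to_mixer, synchronous=True):
        self.read_audio_chunk = read_audio_chunk   # assumed microphone capture callback
        self.send_to_mixer = send_to_mixer         # assumed network transmit callback
        self.synchronous = synchronous
        self.buffer = queue.Queue()
        self.recording = False

    def start_recording(self):
        self.recording = True
        threading.Thread(target=self._capture_loop, daemon=True).start()

    def stop_recording(self):
        self.recording = False
        if not self.synchronous:
            # Complete the asynchronous transmission of the buffered stream.
            while not self.buffer.empty():
                self.send_to_mixer(self.buffer.get())

    def _capture_loop(self):
        while self.recording:
            chunk = self.read_audio_chunk()
            if self.synchronous:
                self.send_to_mixer(chunk)   # real-time streaming
            else:
                self.buffer.put(chunk)      # hold for later transmission
```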
- In another embodiment, the wireless devices 101 a-101 z can elect a leader wireless device which will coordinate the recording by other participating wireless devices. The leader election can be performed, e.g., using an algorithm described in “A Leader Election Protocol For Fault Recovery In Asynchronous Fully-Connected Networks” by M. Franceschetti and J. Bruck, available at http://caltechparadise.library.caltech.edu/31/00/etr024.pdf.
- A skilled artisan would appreciate the fact that any other suitable leader election algorithm can be used without departing from the scope and spirit of the invention.
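- The cited protocol is not reproduced here. Purely to illustrate the idea of agreeing on a coordinator, a trivial alternative is for every participant to pick the device with the smallest identifier; the identifiers and the assumption that all participants learn the full roster are hypothetical.

```python
def elect_leader(device_ids):
    """Trivial, non-fault-tolerant leader election: each device that knows
    the full set of participant identifiers deterministically picks the
    smallest one. A stand-in only; not the protocol cited above."""
    if not device_ids:
        raise ValueError("no participating devices")
    return min(device_ids)

# Example: four hypothetical devices elect "101a" as the coordinating leader.
leader = elect_leader(["101c", "101a", "101d", "101b"])
```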
- The wireless devices 101a-101z can convert the recorded audio into a standard or proprietary audio stream format, e.g., MPEG-3, RealAudio, Windows Media Audio, etc. The resulting audio stream can be stored by the recording device locally, and/or transmitted to a remote computer 110 via a wireless access point 114 and network 180. Wireless devices with no direct connection to a wireless access point can leverage the mesh network capability of a group of wireless devices, e.g., by establishing a wireless mesh network as defined in IEEE 802.11s.
- In one embodiment, wireless devices 101a-101z can have their real-time clocks unsynchronized. In another embodiment, the real-time clocks of two or more participating wireless devices 101a-101z can be synchronized using, e.g., the Network Time Protocol (NTP) by the Network Working Group, available at ftp://ftp.rfc-editor.org/in-notes/rfc1305.pdf. A wireless device can insert timestamps into the audio stream to facilitate the mixing operation.
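- As an illustration of such timestamping, a device whose clock has been disciplined by NTP might tag each captured chunk with wall-clock time before streaming or buffering it; the chunk wrapper and field names below are assumptions, not a format defined by the disclosure.

```python
import time

def timestamp_chunk(pcm_bytes, sequence_number):
    """Wrap a raw audio chunk with the device's NTP-synchronized wall-clock
    time so the mixer can later align streams from different devices."""
    return {
        "seq": sequence_number,          # monotonically increasing per device
        "captured_at": time.time(),      # seconds since the epoch
        "pcm": pcm_bytes,                # raw samples for this chunk
    }
```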
- Mixing of the two or more audio streams recorded by wireless devices 101a-101z can be performed by a mixer component (not shown in FIG. 1) running on a remote computer 110. The mixing can be performed either in real time (synchronously with the recording) or asynchronously with respect to the recording. Wireless devices 101a-101z can also receive the mixed audio stream back from the mixer, thus allowing the users of wireless devices 101a-101z to listen to the mixed stream.
- Operation of the mixer component in a fully automated mode is now described with reference to
FIG. 2 . In one embodiment, the mixing can be performed based upon timestamps included into the audio streams recorded by the individual wireless devices. The wireless device 101 upon receiving a start recording command from a mixer component attime 210, can transmit to the mixer component amessage 210 containing the start time timestamp, followed by one ormore messages 204 containing the audio stream being recorded. Upon receiving a stop recording command from the mixer component attime 212, the wireless device 101 can stop recording and continue transmitting the buffered audio stream. Upon completing the transmission of the buffered audio stream attime 214, the wireless device 101 can transmit amessage 206 containing the timestamp corresponding totime 212 when it stopped the recording. Thus, the mixer component can use the start time and end time of the audio stream file received for synchronizing it with other audio stream files. The mixer component can also calculate a time stamp for any intermediate point of the data stream file by linearly interpolating the start time and end time timestamps. - In another embodiment, where the real-time clocks of the participating wireless devices can not be synchronized reliably, the individual recordings can be synchronized in time based upon one or more clearly distinguishable events present in all the recordings being synchronized. A clearly distinguishable event can be, e.g., a change in the signal amplitude at a given frequency range where the amplitude level changes by a value exceeding a pre-defined amplitude threshold within a time period not exceeding a pre-defined duration.
- The operation of the mixer component in an operator-assisted mode is now described. Graphical representations of the sound waves over two or more sound channels, e.g., graphs of the audio signal amplitude over time, can be presented to the user via a graphical user interface (GUI) as shown in
FIG. 3 . The GUI can include two or 302 a, 302 b. Each of themore graph windows 302 a, 302 b can show a waveform graph of an audio signal received from a wireless recording device. The GUI can further include two orgraph windows 304 a, 304 b using which a user can scroll themore scroll bars 302 a, 302 b along the time axis. The GUI can further have two or morerespective graphs 306 a, 306 b where the timestamp corresponding to the start of the audio stream fragment being displayed in thetext output fields 302 a, 302 b can be automatically displayed according to the position of therespective graph window 304 a, 304 b within the recorded audio stream file. The GUI can further have two or morerespective scroll bar 308 a, 308 b where the timestamp corresponding to the end of the audio stream fragment being displayed in thetext output fields 302 a, 302 b can be automatically displayed according to the position of therespective graph 304 a, 304 b within the recorded audio stream file.respective scroll bar - The user can choose a common point of visual distinction (e.g., a point of rapid
310 a, 310 b) and align the graphs using the view slide controls and then pressing thesignal amplitude change Sync button 320, so that two or more sound channels are synchronized at the 310 a, 310 b.common point -
- A small sample of systems methods and apparatus that are described herein is as follows:
A1. A system for distributed audio recording and collaborative mixing comprising:
- A small sample of systems methods and apparatus that are described herein is as follows:
- two or more wireless devices capable of audio recording, wherein said two or more wireless devices are located near a sound source to be recorded;
- wherein each wireless device of said two or more wireless devices having an interface allowing at least start and stop audio recording and streaming operations;
- wherein each wireless device of said two or more wireless devices being configured to transmit a recorded audio stream to a mixer component; and
- a mixer component configured to combine two or more audio streams received from said two or more wireless devices into a multi-channel audio stream by synchronizing in time said two or more audio streams, said synchronization being performed based upon one or more clearly distinguishable events present in all said two or more audio streams.
- A2 The system for distributed audio recording and collaborative mixing of A1, wherein said mixer component runs on a remote computer.
A3 The system for distributed audio recording and collaborative mixing of A1, wherein at least one wireless device of said two or more wireless devices is controlled by a user of said at least one wireless device via a user interface.
A4 The system for distributed audio recording and collaborative mixing of A1, wherein at least one wireless device of said two or more wireless devices registers with a mixer component and is controlled by said mixer component via an application program interface.
A5 The system for distributed audio recording and collaborative mixing of A1, wherein at least one wireless device of said two or more wireless devices transmits said recorded audio stream to said mixer component synchronously with said recording.
A6 The system for distributed audio recording and collaborative mixing of A1, wherein at least one wireless device of said two or more wireless devices buffers said recorded audio stream to produce a buffered audio stream, and transmits said buffered audio stream to said mixer component asynchronously with respect to said recording.
A7 The system for distributed audio recording and collaborative mixing of A1, wherein said two or more wireless devices elect a leader device, and wherein said leader device coordinates said audio recording by said one or more wireless devices.
A8 The system for distributed audio recording and collaborative mixing of A1, wherein said synchronizing said two or more recorded audio streams is performed by an operator via a graphical user interface (GUI), said GUI presenting to said operator two or more graphs of said first audio streams, and allowing said operator to align said graphs at said one or more clearly distinguishable events present in said two or more recorded audio streams.
A9 The system for distributed audio recording and collaborative mixing of A1, wherein said synchronizing said two or more recorded audio streams is performed by a mixer component, said mixer component being configured to synchronize one or more clearly distinguishable events present in said two or more recorded audio streams.
B1. A system for distributed audio recording and collaborative mixing comprising: - two or more wireless devices capable of audio recording,
- wherein said two or more wireless devices are located near a sound source to be recorded, each wireless device of said two or more wireless devices having an interface allowing at least start and stop audio recording and streaming operations, each wireless device of said two or more wireless devices being configured to transmit a recorded audio stream to a mixer component, each wireless device of said two or more wireless devices having a real-time clock, each wireless device of said two or more wireless devices being further configured to insert timestamps into said recorded audio stream; and
- a mixer component configured to combine two or more audio streams received from said two or more wireless devices into a multi-channel audio stream by synchronizing in time said two or more audio streams based upon said timestamps.
- B2 A system for distributed audio recording and collaborative mixing of B1, wherein at least one of said two or more wireless devices is configured to synchronize said real-time clock with an external clock source.
B3 A system for distributed audio recording and collaborative mixing of B1, wherein said mixer component runs on a remote computer.
B4 A system for distributed audio recording and collaborative mixing of B1, wherein at least one wireless device of said two or more wireless devices is controlled by a user of said at least one wireless device via a user interface.
B5 A system for distributed audio recording and collaborative mixing of B1, wherein at least one wireless device of said two or more wireless devices registers with a mixer component and is controlled by said mixer component via an application program interface.
B6 A system for distributed audio recording and collaborative mixing of B1, wherein at least one wireless device of said two or more wireless devices transmits said recorded audio stream to said mixer component synchronously with said recording.
B7 A system for distributed audio recording and collaborative mixing of B1, wherein at least one wireless device of said two or more wireless devices buffers said recorded audio stream to produce a buffered audio stream, and transmits said buffered audio stream to said mixer component asynchronously with respect to said recording.
B8. A system for distributed audio recording and collaborative mixing of B1, wherein said two or more wireless devices elect a leader device, wherein said leader device coordinates said audio recording by said one or more wireless devices.
- While the present invention has been particularly shown and described with reference to certain exemplary embodiments, it will be understood by one skilled in the art that various changes in detail may be effected therein without departing from the spirit and scope of the invention as defined by claims that can be supported by the written description and drawings. Further, where exemplary embodiments are described with reference to a certain number of elements it will be understood that the exemplary embodiments can be practiced utilizing less than the certain number of elements.
Claims (17)
1. A system for distributed audio recording and collaborative mixing comprising:
two or more wireless devices capable of audio recording, wherein said two or more wireless devices are located near a sound source to be recorded;
wherein each wireless device of said two or more wireless devices having an interface allowing at least start and stop audio recording and streaming operations;
wherein each wireless device of said two or more wireless devices being configured to transmit a recorded audio stream to a mixer component; and
a mixer component configured to combine two or more audio streams received from said two or more wireless devices into a multi-channel audio stream by synchronizing in time said two or more audio streams, said synchronization being performed based upon one or more clearly distinguishable events present in all said two or more audio streams.
2. The system for distributed audio recording and collaborative mixing of claim 1 , wherein said mixer component runs on a remote computer.
3. The system for distributed audio recording and collaborative mixing of claim 1 , wherein at least one wireless device of said two or more wireless devices is controlled by a user of said at least one wireless device via a user interface.
4. The system for distributed audio recording and collaborative mixing of claim 1 , wherein at least one wireless device of said two or more wireless devices registers with a mixer component and is controlled by said mixer component via an application program interface.
5. The system for distributed audio recording and collaborative mixing of claim 1 , wherein at least one wireless device of said two or more wireless devices transmits said recorded audio stream to said mixer component synchronously with said recording.
6. The system for distributed audio recording and collaborative mixing of claim 1 , wherein at least one wireless device of said two or more wireless devices buffers said recorded audio stream to produce a buffered audio stream, and transmits said buffered audio stream to said mixer component asynchronously with respect to said recording.
7. The system for distributed audio recording and collaborative mixing of claim 1 , wherein said two or more wireless devices elect a leader device, and wherein said leader device coordinates said audio recording by said one or more wireless devices.
8. The system for distributed audio recording and collaborative mixing of claim 1 , wherein said synchronizing said two or more recorded audio streams is performed by an operator via a graphical user interface (GUI), said GUI presenting to said operator two or more graphs of said first audio streams, and allowing said operator to align said graphs at said one or more clearly distinguishable events present in said two or more recorded audio streams.
9. The system for distributed audio recording and collaborative mixing of claim 1 , wherein said synchronizing said two or more recorded audio streams is performed by a mixer component, said mixer component being configured to synchronize one or more clearly distinguishable events present in said two or more recorded audio streams.
10. A system for distributed audio recording and collaborative mixing comprising:
two or more wireless devices capable of audio recording,
wherein said two or more wireless devices are located near a sound source to be recorded, each wireless device of said two or more wireless devices having an interface allowing at least start and stop audio recording and streaming operations, each wireless device of said two or more wireless devices being configured to transmit a recorded audio stream to a mixer component, each wireless device of said two or more wireless devices having a real-time clock, each wireless device of said two or more wireless devices being further configured to insert timestamps into said recorded audio stream; and
a mixer component configured to combine two or more audio streams received from said two or more wireless devices into a multi-channel audio stream by synchronizing in time said two or more audio streams based upon said timestamps.
11. A system for distributed audio recording and collaborative mixing of claim 10 , wherein at least one of said two or more wireless devices is configured to synchronize said real-time clock with an external clock source.
12. A system for distributed audio recording and collaborative mixing of claim 10 , wherein said mixer component runs on a remote computer.
13. A system for distributed audio recording and collaborative mixing of claim 10 , wherein at least one wireless device of said two or more wireless devices is controlled by a user of said at least one wireless device via a user interface.
14. A system for distributed audio recording and collaborative mixing of claim 10 , wherein at least one wireless device of said two or more wireless devices registers with a mixer component and is controlled by said mixer component via an application program interface.
15. A system for distributed audio recording and collaborative mixing of claim 10 , wherein at least one wireless device of said two or more wireless devices transmits said recorded audio stream to said mixer component synchronously with said recording.
16. A system for distributed audio recording and collaborative mixing of claim 10 , wherein at least one wireless device of said two or more wireless devices buffers said recorded audio stream to produce a buffered audio stream, and transmits said buffered audio stream to said mixer component asynchronously with respect to said recording.
17. A system for distributed audio recording and collaborative mixing of claim 10 , wherein said two or more wireless devices elect a leader device, wherein said leader device coordinates said audio recording by said one or more wireless devices.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/652,461 US20130039496A1 (en) | 2007-08-21 | 2012-10-15 | System and method for distributed audio recording and collaborative mixing |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US96558107P | 2007-08-21 | 2007-08-21 | |
| US12/194,205 US8301076B2 (en) | 2007-08-21 | 2008-08-19 | System and method for distributed audio recording and collaborative mixing |
| US13/652,461 US20130039496A1 (en) | 2007-08-21 | 2012-10-15 | System and method for distributed audio recording and collaborative mixing |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/194,205 Continuation US8301076B2 (en) | 2007-08-21 | 2008-08-19 | System and method for distributed audio recording and collaborative mixing |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20130039496A1 true US20130039496A1 (en) | 2013-02-14 |
Family
ID=40378611
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/194,205 Active - Reinstated 2030-11-08 US8301076B2 (en) | 2007-08-21 | 2008-08-19 | System and method for distributed audio recording and collaborative mixing |
| US13/652,461 Abandoned US20130039496A1 (en) | 2007-08-21 | 2012-10-15 | System and method for distributed audio recording and collaborative mixing |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/194,205 Active - Reinstated 2030-11-08 US8301076B2 (en) | 2007-08-21 | 2008-08-19 | System and method for distributed audio recording and collaborative mixing |
Country Status (5)
| Country | Link |
|---|---|
| US (2) | US8301076B2 (en) |
| EP (1) | EP2181507A4 (en) |
| AU (1) | AU2008288928A1 (en) |
| CA (1) | CA2697233A1 (en) |
| WO (1) | WO2009026347A1 (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160295321A1 (en) * | 2015-04-05 | 2016-10-06 | Nicholaus J. Bauer | Distributed audio system |
| WO2017051061A1 (en) * | 2015-09-22 | 2017-03-30 | Nokia Technologies Oy | Media feed synchronisation |
| US9646587B1 (en) * | 2016-03-09 | 2017-05-09 | Disney Enterprises, Inc. | Rhythm-based musical game for generative group composition |
| WO2018144367A1 (en) * | 2017-02-03 | 2018-08-09 | iZotope, Inc. | Audio control system and related methods |
Families Citing this family (26)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100064219A1 (en) * | 2008-08-06 | 2010-03-11 | Ron Gabrisko | Network Hosted Media Production Systems and Methods |
| US8099134B2 (en) | 2008-12-19 | 2012-01-17 | Verizon Patent And Licensing Inc. | Visual manipulation of audio |
| CN102630385B (en) * | 2009-11-30 | 2015-05-27 | 诺基亚公司 | Method, device and system for audio scaling processing in audio scene |
| EP2612324A4 (en) * | 2010-08-31 | 2014-08-13 | Nokia Corp | AUDIO SCENE APPARATUS |
| EP2666160A4 (en) * | 2011-01-17 | 2014-07-30 | Nokia Corp | AUDIO SCENE PROCESSING APPARATUS |
| WO2013147901A1 (en) * | 2012-03-31 | 2013-10-03 | Intel Corporation | System, device, and method for establishing a microphone array using computing devices |
| WO2014016645A1 (en) * | 2012-07-25 | 2014-01-30 | Nokia Corporation | A shared audio scene apparatus |
| US9479887B2 (en) * | 2012-09-19 | 2016-10-25 | Nokia Technologies Oy | Method and apparatus for pruning audio based on multi-sensor analysis |
| WO2014064325A1 (en) * | 2012-10-26 | 2014-05-01 | Nokia Corporation | Media remixing system |
| EP2936480B1 (en) * | 2012-12-21 | 2018-10-10 | JamHub Corporation | Multi tracks analog audio hub with digital vector output for collaborative music post processing. |
| EP2775694B1 (en) * | 2013-03-08 | 2019-05-08 | BlackBerry Limited | Methods and devices to generate multiple-channel audio recordings with location-based registration |
| US9438993B2 (en) | 2013-03-08 | 2016-09-06 | Blackberry Limited | Methods and devices to generate multiple-channel audio recordings |
| US10038957B2 (en) | 2013-03-19 | 2018-07-31 | Nokia Technologies Oy | Audio mixing based upon playing device location |
| US20140337420A1 (en) * | 2013-05-09 | 2014-11-13 | Brian Lee Wentzloff | System and Method for Recording Music Which Allows Asynchronous Collaboration over the Internet |
| US9705953B2 (en) | 2013-06-17 | 2017-07-11 | Adobe Systems Incorporated | Local control of digital signal processing |
| JP6191572B2 (en) * | 2013-10-16 | 2017-09-06 | ヤマハ株式会社 | Recording system, recording method and program |
| JP2016082422A (en) * | 2014-10-17 | 2016-05-16 | ヤマハ株式会社 | Acoustic signal processing device |
| JP6606825B2 (en) * | 2014-12-18 | 2019-11-20 | ティアック株式会社 | Recording / playback device with wireless LAN function |
| JP2016118649A (en) * | 2014-12-19 | 2016-06-30 | ティアック株式会社 | Multitrack recording system with radio lan function |
| EP3209033B1 (en) | 2016-02-19 | 2019-12-11 | Nokia Technologies Oy | Controlling audio rendering |
| US9959851B1 (en) | 2016-05-05 | 2018-05-01 | Jose Mario Fernandez | Collaborative synchronized audio interface |
| US10607586B2 (en) | 2016-05-05 | 2020-03-31 | Jose Mario Fernandez | Collaborative synchronized audio interface |
| EP3652950B1 (en) | 2017-07-13 | 2021-07-14 | Dolby Laboratories Licensing Corporation | Audio input and output device with streaming capabilities |
| GB2568288B (en) | 2017-11-10 | 2022-07-06 | Henry Cannings Nigel | An audio recording system and method |
| US10572534B2 (en) | 2018-07-06 | 2020-02-25 | Blaine Clifford Readler | Distributed coordinated recording |
| US20220225048A1 (en) * | 2021-01-14 | 2022-07-14 | Onanoff Limited Company (Ltd.) | System and method for managing a headphones users sound exposure |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050201301A1 (en) * | 2004-03-11 | 2005-09-15 | Raj Bridgelall | Self-associating wireless personal area network |
| US20050213947A1 (en) * | 2004-03-24 | 2005-09-29 | Funai Electric Co., Ltd. | Optical disc recorder |
| US20070111657A1 (en) * | 2002-09-13 | 2007-05-17 | Shohei Yamada | Broadcast program recording method, communication control device, and mobile communication device |
| US20070202806A1 (en) * | 2006-02-08 | 2007-08-30 | Samsung Electronics Co., Ltd. | Method and apparatus for secured communication between Bluetooth® devices |
| US20080146343A1 (en) * | 2006-12-14 | 2008-06-19 | Sullivan C Bart | Wireless video game system and method |
| US20090092115A1 (en) * | 2003-11-07 | 2009-04-09 | Interdigital Technology Corporation | Apparatus and methods for central control of mesh networks |
| US7711443B1 (en) * | 2005-07-14 | 2010-05-04 | Zaxcom, Inc. | Virtual wireless multitrack recording system |
| US20100217414A1 (en) * | 2005-07-14 | 2010-08-26 | Zaxcom, Inc. | Virtual Wireless Multitrack Recording System |
| US7966034B2 (en) * | 2003-09-30 | 2011-06-21 | Sony Ericsson Mobile Communications Ab | Method and apparatus of synchronizing complementary multi-media effects in a wireless communication device |
Family Cites Families (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR100342505B1 (en) | 1999-06-29 | 2002-06-28 | 윤종용 | Apparatus and method for simultaneously transmitting and recording voice signals of mobile station in mobile telecommunication system |
| JP3633459B2 (en) | 2000-08-04 | 2005-03-30 | ヤマハ株式会社 | Mixing recording / reproducing apparatus, method, and storage medium |
| US7191023B2 (en) | 2001-01-08 | 2007-03-13 | Cybermusicmix.Com, Inc. | Method and apparatus for sound and music mixing on a network |
| US6674459B2 (en) | 2001-10-24 | 2004-01-06 | Microsoft Corporation | Network conference recording system and method including post-conference processing |
| US7084898B1 (en) | 2003-11-18 | 2006-08-01 | Cisco Technology, Inc. | System and method for providing video conferencing synchronization |
| KR100469472B1 (en) | 2004-06-24 | 2005-01-31 | 주식회사 브리지텍 | Method of call recording service for mobile communication, and system thereof |
| IL165817A0 (en) | 2004-12-16 | 2006-01-15 | Samsung Electronics U K Ltd | Electronic music on hand portable and communication enabled devices |
| US20060221869A1 (en) | 2005-03-29 | 2006-10-05 | Teck-Kuen Chua | System and method for audio multicast |
| KR101081534B1 (en) | 2005-06-10 | 2011-11-08 | 엘지전자 주식회사 | Mobile phone capable of mixing voice and background music |
| US7518051B2 (en) | 2005-08-19 | 2009-04-14 | William Gibbens Redmann | Method and apparatus for remote real time collaborative music performance and recording thereof |
| US7853342B2 (en) | 2005-10-11 | 2010-12-14 | Ejamming, Inc. | Method and apparatus for remote real time collaborative acoustic performance and recording thereof |
| US7825322B1 (en) * | 2007-08-17 | 2010-11-02 | Adobe Systems Incorporated | Method and apparatus for audio mixing |
- 2008
  - 2008-08-19: US application US12/194,205, granted as US8301076B2 (Active - Reinstated)
  - 2008-08-20: EP application EP08798250A, published as EP2181507A4 (Withdrawn)
  - 2008-08-20: CA application CA2697233A, published as CA2697233A1 (Abandoned)
  - 2008-08-20: WO application PCT/US2008/073686, published as WO2009026347A1 (Ceased)
  - 2008-08-20: AU application AU2008288928A, published as AU2008288928A1 (Abandoned)
- 2012
  - 2012-10-15: US application US13/652,461, published as US20130039496A1 (Abandoned)
Patent Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070111657A1 (en) * | 2002-09-13 | 2007-05-17 | Shohei Yamada | Broadcast program recording method, communication control device, and mobile communication device |
| US7966034B2 (en) * | 2003-09-30 | 2011-06-21 | Sony Ericsson Mobile Communications Ab | Method and apparatus of synchronizing complementary multi-media effects in a wireless communication device |
| US20090092115A1 (en) * | 2003-11-07 | 2009-04-09 | Interdigital Technology Corporation | Apparatus and methods for central control of mesh networks |
| US20050201301A1 (en) * | 2004-03-11 | 2005-09-15 | Raj Bridgelall | Self-associating wireless personal area network |
| US20050213947A1 (en) * | 2004-03-24 | 2005-09-29 | Funai Electric Co., Ltd. | Optical disc recorder |
| US7711443B1 (en) * | 2005-07-14 | 2010-05-04 | Zaxcom, Inc. | Virtual wireless multitrack recording system |
| US20100217414A1 (en) * | 2005-07-14 | 2010-08-26 | Zaxcom, Inc. | Virtual Wireless Multitrack Recording System |
| US7929902B1 (en) * | 2005-07-14 | 2011-04-19 | Zaxcom, Inc. | Virtual wireless multitrack recording system |
| US20070202806A1 (en) * | 2006-02-08 | 2007-08-30 | Samsung Electronics Co., Ltd. | Method and apparatus for secured communication between Bluetooth® devices |
| US20080146343A1 (en) * | 2006-12-14 | 2008-06-19 | Sullivan C Bart | Wireless video game system and method |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160295321A1 (en) * | 2015-04-05 | 2016-10-06 | Nicholaus J. Bauer | Distributed audio system |
| US9800972B2 (en) * | 2015-04-05 | 2017-10-24 | Nicholaus J. Bauer | Distributed audio system |
| WO2017051061A1 (en) * | 2015-09-22 | 2017-03-30 | Nokia Technologies Oy | Media feed synchronisation |
| CN108028886A (en) * | 2015-09-22 | 2018-05-11 | 诺基亚技术有限公司 | Media feed-in is synchronous |
| US9646587B1 (en) * | 2016-03-09 | 2017-05-09 | Disney Enterprises, Inc. | Rhythm-based musical game for generative group composition |
| WO2018144367A1 (en) * | 2017-02-03 | 2018-08-09 | iZotope, Inc. | Audio control system and related methods |
| US10171055B2 (en) | 2017-02-03 | 2019-01-01 | iZotope, Inc. | Audio control system and related methods |
| US10185539B2 (en) | 2017-02-03 | 2019-01-22 | iZotope, Inc. | Audio control system and related methods |
| US10248381B2 (en) | 2017-02-03 | 2019-04-02 | iZotope, Inc. | Audio control system and related methods |
| US10248380B2 (en) | 2017-02-03 | 2019-04-02 | iZotope, Inc. | Audio control system and related methods |
Also Published As
| Publication number | Publication date |
|---|---|
| EP2181507A1 (en) | 2010-05-05 |
| EP2181507A4 (en) | 2011-08-10 |
| US20090068943A1 (en) | 2009-03-12 |
| WO2009026347A1 (en) | 2009-02-26 |
| US8301076B2 (en) | 2012-10-30 |
| CA2697233A1 (en) | 2009-02-26 |
| AU2008288928A1 (en) | 2009-02-26 |
Similar Documents
| Publication | Title |
|---|---|
| US8301076B2 (en) | System and method for distributed audio recording and collaborative mixing | |
| US11936921B2 (en) | Method for managing network live streaming data and related apparatus, and device and storage medium | |
| US6879997B1 (en) | Synchronously shared online documents | |
| US10375497B2 (en) | Dynamically changing master audio playback device | |
| CN103338204A (en) | Audio synchronization output method and system | |
| US20120059651A1 (en) | Mobile communication device for transcribing a multi-party conversation | |
| US10778742B2 (en) | System and method for sharing multimedia content with synched playback controls | |
| CN116529716A (en) | Virtual universal serial bus interface | |
| US9800972B2 (en) | Distributed audio system | |
| CN105912295A (en) | Method and device for processing audio data | |
| EP2207311A1 (en) | Voice communication device | |
| US11140480B2 (en) | Indirect sourced cognitive loudspeaker system | |
| CN111338490B (en) | Electronic pen, receiving equipment, multi-equipment control system and method | |
| WO2023216988A1 (en) | Call method and communication system | |
| US20210125594A1 (en) | Wireless midi headset | |
| TWI798890B (en) | Bluetooth voice communication system and related computer program product for generating stereo voice effect | |
| CN116320858A (en) | Method and device for transmitting audio | |
| Khotunov et al. | Real-time audio stream aggregation in Bluetooth personal networks |
| GR et al. | Interactive live audio streaming in wireless network by interfacing with i.MX53 hardware using Advanced Linux Sound Architecture (ALSA) programming |
| CN117956218A (en) | Multimedia data sharing method, device, equipment and computer readable storage medium | |
| KR101582472B1 (en) | Wireless audio palying method and audio source performing the same | |
| CN117729287A (en) | Audio sharing method and device, storage medium | |
| CN205039836U (en) | IP broadcast machine | |
| CN117221789A (en) | Audio processing method and device, computing equipment and storage medium | |
| CN114760000A (en) | Audio data processing system, method, device, electronic equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |