WO2011114310A2 - Digital sound mixing system with graphical controls
- Publication number
- WO2011114310A2
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio
- performance area
- signal
- data value
- stage
- Prior art date
- Legal status
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/02—Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
- H04H60/04—Studio equipment; Interconnection of studios
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/46—Volume control
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H3/00—Instruments in which the tones are generated by electromechanical means
- G10H3/12—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument
- G10H3/14—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means
- G10H3/18—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means using a string, e.g. electric guitar
- G10H3/186—Means for processing the signal picked up from the strings
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/10537—Audio or video recording
- G11B2020/10546—Audio or video recording specifically adapted for audio data
Definitions
- the present application relates to sound mixing.
- the application relates to distributed digital sound mixing.
- JP 2004147262 discloses a method for distributing video data.
- This method includes a video input part and an audio input part obtaining an analog video signal and an analog audio signal respectively.
- the received analog video signal and the received analog audio signal are then converted into digital video data by an analog to digital conversion part.
- the digital data are later compressed by an information compression part.
- a packet generation part forms data packets using the compressed digital data such that the packets are suitable for unicast, broadcast, or multicast.
- the packets are also formatted according to a protocol that is suitable for low latency delivery.
- a transmission control packet afterward transfers the data packets to a network interface part.
- the network interface part then distributes the data packets over a network.
- the application provides a digital audio processing device for a performance area.
- the audio processing device includes one or more touch screen devices and one or more audio processing blocks.
- the audio processing block connects to the touch screen device.
- the touch screen device comprises a touch screen for receiving one or more audio signal parameter data values from a user.
- the audio processing block comprises a memory unit, a communication module, and a processing device.
- the memory unit is provided for storing an audio signal-processing program and the audio signal parameter data value.
- the communication module is provided for receiving two or more audio input signals from two or more performance area devices.
- the processing device is provided for performing instructions of the audio signal-processing program to mix the audio input signals according to the audio signal parameter data value to produce one or more audio output signals.
- the audio output signal is provided for sending to one or more further performance area devices.
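The mixing operation described here — scaling each audio input signal by a user-entered parameter data value and summing the results into an output signal — can be sketched as follows. This is an illustrative sketch, not the patented implementation; the function and variable names are assumptions.

```python
def mix(inputs, gains):
    """Mix several audio input signals into one output signal.

    inputs: list of equal-length sample sequences, one per performance
            area device (e.g. a microphone or an instrument pickup).
    gains:  one gain factor per input, standing in for the "audio
            signal parameter data values" entered on the touch screen.
    """
    if len(inputs) != len(gains):
        raise ValueError("one gain per input signal is required")
    length = len(inputs[0])
    output = [0.0] * length
    for signal, gain in zip(inputs, gains):
        for i in range(length):
            output[i] += gain * signal[i]
    return output

# Two input signals mixed at different levels:
out = mix([[1.0, 0.5], [0.2, 0.4]], [0.5, 2.0])   # ≈ [0.9, 1.05]
```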
- the processing device is further provided for performing instructions of the audio signal-processing program to display a performance area image on the touch screen and to display at least two performance area device icons on the touch screen.
- the touch screen is further provided for receiving at least two performance area device icon image data values from the user.
- the performance area device icon image data values are provided for selecting performance area device icon images that correspond to the performance area device images.
- the touch screen is also provided for receiving at least two performance area device positional data values from the user.
- the performance area device positional data values are provided for positioning the performance area device icons on the touch screen.
- Fig. 1 illustrates a stage with stage objects, the stage is connected to an improved digital sound mixing system
- Fig. 2 illustrates the digital sound mixing system of Fig. 1,
- Fig. 3 illustrates a schematic of the stage block of Fig. 2,
- Fig. 4 illustrates a flow chart for a function of assigning stage object icons to the stage objects of the stage block of Fig. 2,
- Fig. 5 illustrates a screen view of the touch screen device of Fig. 2 when the stage block provides the stage object assignment function of Fig. 4,
- Fig. 6 illustrates a table of data that is used by stage object assignment function of Fig. 4,
- Fig. 7 illustrates a flow chart for a function of equalisation of the stage block of Fig. 2,
- Fig. 8 illustrates a screen view of the touch screen device of Fig. 2 when the stage block provides the equalisation function of Fig. 7,
- Fig. 9 illustrates a table of data that is used by the equalisation function of Fig. 7,
- Fig. 10 illustrates a flow chart for a fader function of the stage block of Fig. 2,
- Fig. 11 illustrates a screen view of the touch screen device of Fig. 2 when the stage block provides the fader function of Fig. 10
- Fig. 12 illustrates a table of data that is used by the fader function of Fig. 10,
- Fig. 13 illustrates a screen view of the touch screen device of Fig. 2 when the stage block provides a mixing function,
- Fig. 14 illustrates a table of data that is used by the mixing function of Fig. 13,
- Fig. 15 illustrates a screen view of the touch screen device of Fig. 2 when the stage block of Fig. 2 provides a sub-mixing function
- Fig. 16 illustrates a table of data that is used by the sub-mixing function of Fig. 15,
- Fig. 17 illustrates a flow chart for an auto-adaptive configuring function of the stage block of Fig. 2
- Fig. 18 illustrates a flow chart for an auto-feedback function of the stage block of Fig. 2,
- Fig. 19 illustrates a flow chart for an auto-muting function of the stage block of Fig. 2,
- Fig. 20 illustrates the stage blocks of Fig. 2 being connected in a simple configuration
- Fig. 21 illustrates several stage blocks of Fig. 3 being connected in a daisy chain configuration
- Fig. 22 illustrates several stage blocks of Fig. 3 being connected in a star configuration
- Fig. 23 illustrates the stage blocks of Fig. 3 being connected in a robust configuration
- Fig. 24 illustrates another method of using the stage block of Fig. 3.
- Fig. 1 depicts a physical performance stage 10 that is communicatively connected to an improved digital sound mixing system 12.
- the digital sound mixing system 12 is also called a stage workstation.
- the stage 10 includes a plurality of objects 14, 15, 16, and 17, which are placed in different positions on the stage 10.
- the stage objects 14 and 16 are connected to the sound mixing system 12 via wireless communication means 20 whilst the stage objects 15 and 17 are connected to the sound mixing system 12 via wired communication means 21.
- Fig. 2 shows the sound mixing system 12 that includes a touch screen device 23 that is communicatively connected to a stage block 24.
- the touch screen device 23 includes a touch screen 22 and multiple buttons 26.
- the buttons 26 are provided on the right side of the touch screen device 23.
- the touch screen device 23 is physically separated from the stage block 24 although they can also be placed next to each other.
- the touch screen device 23 is part of a control personal computer that is connected to the stage block 24 via wired local area network (LAN) although they can also be connected via a wireless means.
- LAN: local area network
- the orientation of the touch screen 22 of the touch screen device 23 with respect to the stage 10 is generally fixed, and an image of the stage 10 is displayed as a background image on the touch screen 22.
- the stage 10 provides a platform for an event that is used for performing, entertaining, or communicating to an audience.
- One example of the event is a musical performance.
- stage objects 14, 15, 16, and 17 are used for the event and they can refer to audio-equipment, to a musical instrument, and to stage equipment.
- the audio-equipment can refer to a microphone for converting sound signals to electrical signals.
- the audio-equipment may also refer to a loudspeaker, which receives electrical signals from the stage block 24 and which converts the received electrical signals to audio sounds.
- the musical instrument can refer to a musical device, such as guitar, which is equipped with an audio receiver.
- the audio receiver is intended for receiving audio or sound signals and for converting the received sound signals to electrical signals. Both the electrical signals of the audio-equipment and of the musical instrument are intended for sending to the stage block 24 of the sound mixing system 12 for processing.
- the stage equipment can include a prop actuator or a lighting device.
- the prop actuator can activate a curtain or other stage items.
- the lighting device can comprise a spotlight or a video projector.
- the communication means 20 and 21 are intended for conveying control signals and audio signals between the stage objects 14, 15, 16, and 17 and the stage block 24.
- the audio signals are also called cargo signals.
- the touch screen 22 is used for displaying a stage view.
- the stage view shows a picture or an image 25 of the stage 10. It also shows a plurality of object icons 27, 28, 29, and 30.
- the object icons 27, 28, 29, and 30 have pictures that are displayed on the touch screen 22. Positions of the object icons 27, 28, 29, and 30 on the stage image 25 correspond with positions of the stage objects 14, 15, 16, and 17 on the stage 10.
- the object icon 27, which is positioned on the left side of the stage image 25, corresponds to the stage object 14, which is positioned on the left side of the stage 10.
- shapes or images of the object icons 27, 28, 29, and 30 correspond with shapes or images of the stage objects 14, 15, 16, and 17 respectively for intuitive identification.
- the object icon 27, which includes a picture of a guitar, corresponds to the stage object 14, which includes a guitar. This is illustrated in Fig. 1.
- the stage view, which is generated using a computing technique with user input, provides an enhanced view of the stage 10 and a view of the stage objects.
- a user actuates the touch screen 22 and the buttons 26 to provide inputs to the stage block 24.
- the actuation includes a pushing action or a touching action.
- An example of the pushing action is the user pushing the buttons 26.
- An example of the touching action is the user touching an area or a spot of a surface of the touch screen.
- the touch can include a single tapping action or a double tapping action.
- the touch can also comprise a touch and drag action, wherein a finger of the user touches a desired spot on the touch screen 22 and continues touching the touch screen 22 while moving the finger to another spot on the touch screen 22.
- the stage block 24 is used for receiving audio or cargo signals from the stage objects 14, 15, 16, and 17.
- the stage block 24 is also used for treating or processing the received cargo signals, and for sending the processed cargo signals to the stage objects 14, 15, 16, and 17.
- One example of this is a stage block receiving audio signals from a stage object in the form of a guitar.
- the stage block then treats the audio signals to improve the sound quality. It later sends the treated signal to a stage object in the form of a loudspeaker.
- the stage block 24 is also used for sending control signals to the stage objects 14, 15, 16, and 17.
- One example of this is a stage block sending an activation signal to a stage object in the form of a stage prop or a lighting device.
- a user of the touch screen device 23 is able to identify the object icons 27, 28, 29, and 30 that correspond with the stage objects 14, 15, 16, and 17 in an easy and intuitive manner.
- the stage block 24 treats signals received from the stage objects according to inputs that are provided by the user via the touch screen 22 and via the buttons 26.
- the stage block 24 then sends the treated signals to the other stage objects.
- the treated signals can include an audio signal, a control signal, or both. This is unlike other implementations, where an operator of a sound mixer sees an array of fader-type controls, one control for one sound channel. The operator is required to keep track of which sound source is on which channel.
- if the operator needs to create multiple sound mixes, such as a sound mix for front speakers and a special sound mix for headphones of musicians, then the operator is required to set up each sound mix individually. Later, if the level of one sound source changes, the operator must go back to all of the individual sound mixes to make a compensating change.
- the touch screen 22 and the buttons 26 can also be provided at any area that is convenient for the user.
- a display monitor with a keyboard and a computer mouse can replace the touch screen 22 and buttons 26.
- the display monitor with the keyboard and the mouse can be part of a personal computer system, which is located in an area separated from the stage block 24 but is communicatively connected to the stage block 24.
- a big concert or other live sound event can have multiple sound mixing systems 12.
- One sound mixing system 12 is located among an audience of the concert to mix audio signals for front of house (FOH) speakers that are heard by the audience.
- Another sound mixing system 12 is located at a side of a stage to mix audio signals for monitor speakers that are positioned directly in front of performers so that they can hear one another.
- the concert can have a separate sound mixing system 12 for broadcasting or recording.
- the multiple touch screens 22 and the buttons 26 can be used with a single digital sound mixing system 12. This may be desired if a single digital sound mixing system 12 is capable of providing audio signals for all the required stage objects such as front of house speakers, monitor speakers, and broadcast feeds. In this case, multiple operators would control those audio signals from the different touch screens 22 with the buttons 26 at different locations.
- Fig. 3 shows a schematic for the stage block 24 of the sound mixing system 12 of Fig. 1.
- the stage block 24 includes a processing module 42.
- the processing module 42 is connected to a memory module 43, to a display module 44, to a network communication module 45, and to an input module 48.
- the memory module 43 is connected to a wireless transceiver 46.
- An Analog to Digital Converter (ADC) module 50 connects a wired transceiver 47 to the memory module 43 while a Digital to Analog Converter (DAC) module 51 also connects the wired transceiver 47 to the memory module 43.
- ADC: Analog to Digital Converter
- DAC: Digital to Analog Converter
- the wireless transceiver 46 and the wired transceiver 47 are used for transmitting data between the stage objects 14, 15, 16, and 17 and the memory module 43.
- the wireless transceiver 46 is used for transmitting digital data from the stage objects 14 or 16 to the memory module 43 via a wireless medium.
- the digital data is arranged in a data packet format for easy handling.
- the stage objects may receive analog or digital data from their sources. In the case of a source providing analog data, the stage object has an analog to digital converting means for converting the received analog data to its digital form.
- the wired transceiver 47 is used for receiving analog data from the stage objects 15 or 17 via a wired medium. These stage objects receive analog data from their sources. After this, the stage objects transmit the received analog data to the wired transceiver 47 through the wired medium. The wired transceiver 47 then sends the analog data to the ADC 50, which converts the analog data to its digital form. The ADC 50 later transmits the digitalised data to the memory module 43.
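The conversion performed by the ADC 50 can be illustrated with a minimal quantisation sketch. The 16-bit depth and the full-scale range are assumptions for illustration; the patent does not specify them.

```python
def adc_sample(sample, bits=16, full_scale=1.0):
    """Quantise one analog sample in [-full_scale, +full_scale] to a
    signed integer code, as an ADC such as module 50 would before the
    data are stored in the memory module (bit depth is assumed)."""
    levels = 2 ** (bits - 1) - 1                          # 32767 for 16 bits
    clipped = max(-full_scale, min(full_scale, sample))   # hard clip
    return round(clipped / full_scale * levels)

codes = [adc_sample(s) for s in (-1.0, 0.0, 1.0, 2.0)]
# → [-32767, 0, 32767, 32767] (the out-of-range sample is clipped)
```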
- the memory module 43 is used for receiving stage object data from the transceivers 46 and 47 and for storing these stage object data.
- the memory module 43 is also used to store a software signal-processing program with modules or functions for treating the stage object data according to pre-determined parameter sets or according to parameters defined by the user. These parameters specifying treatment of the stage object data are also stored in the memory module 43.
- the input module 48 is used for receiving data from the touch screen 22 and buttons 26 and for transmitting the received data to the processing module 42.
- the display module 44 is used for receiving data from the processing module 42 and for sending the received data to the touch screen 22 for display to the user.
- the processing module 42 is intended for performing instructions of the signal-processing program, which is stored in the memory module 43.
- the signal-processing program performs operations on the stage object data according to pre-determined parameter sets or according to parameters defined by a user. Such operations may comprise combining different stage object data streams from multiple stage objects and scaling the relative levels of the individual stage object data streams.
- the network communication module 45 acts to receive data from the processing module 42 and to transmit the received data to another stage block.
- Multiple processing modules 42 can be communicatively connected by their network communication modules 45, wherein some processing modules 42 can serve to treat the data whilst one processing module 42 can serve to supervise or to manage the treatment of these processing modules 42 through the network communication modules 45.
- the network communication module 45 also acts to enable two-way communication from other touch screens 22, other buttons 26, and other processing modules 42. This two-way communication allows users to control the processing module 42 through user inputs on their touch screens 22 and buttons 26. Additionally, the users may communicate new parameters specifying the treatment of stage object data, those parameters being communicated through the network communication module 45 to the processing module 42, which stores those parameters in the memory module 43 for later use.
- the wired transceiver 47 can receive digital data, and not just analog data, from the stage objects. It can transmit the digital data directly to the memory module 43. Likewise, the wired transceiver 47 can also receive digital data from the memory module 43 directly and then send the received digital data to the stage objects, which have digital to analog converters for converting these digital data to their analog form.
- the processing module 42 can receive multiple streams of data from different types of the stage objects.
- the memory module 43 can have a multi-stream signal-processing program capable of handling multiple streams of data and can have multiple predefined parameter sets or profiles for the different types of stage objects.
- a user can assign the stage objects to the parameter sets according to their stage object types. The user can later also adjust the assigned parameter sets to improve sound quality.
- the processing module 42 then treats the various data streams according to instructions from the signal-processing program and according to the assigned parameter sets.
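The assignment of predefined parameter sets ("profiles") to stage objects by type might be modelled as below. The profile names and parameter values are purely illustrative assumptions, not taken from the patent.

```python
# Predefined parameter sets per stage object type, as would be stored
# in the memory module; all values here are illustrative defaults.
DEFAULT_PROFILES = {
    "vocal_mic": {"gain": 1.0, "low": 0.8, "mid": 1.0, "high": 1.1},
    "guitar":    {"gain": 0.9, "low": 1.0, "mid": 1.1, "high": 0.9},
    "violin":    {"gain": 1.0, "low": 0.7, "mid": 1.0, "high": 1.2},
}

def assign_profile(assignments, port, object_type):
    """Assign the default parameter set for an object type to a port.

    A copy is stored so the user can later adjust the parameters for
    this port without changing the shared defaults."""
    assignments[port] = dict(DEFAULT_PROFILES[object_type])
    return assignments[port]

ports = {}
params = assign_profile(ports, 1, "guitar")
params["gain"] = 1.2          # user adjustment affects this port only
```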
- a computer motherboard with sound cards can be used for implementing or realizing the stage block 24.
- different types of Programmable Gate Array (PGA) components can also be used to realize the stage block 24.
- the different types of PGA can include a Field-Programmable Gate Array (FPGA), which can be described using a Hardware Description Language (HDL), such as Verilog HDL.
- the stage block 24 can also be implemented using Complex Programmable Logic Devices (CPLD), Field-Programmable Analog Arrays (FPAA), or Software Defined Silicon (SDS).
- the processing module 42 can also be implemented using a General Purpose Graphics Processing Unit (GPGPU).
- the GPGPU can have a Compute Unified Device Architecture (CUDA) .
- CUDA: Compute Unified Device Architecture
- Fig. 4 shows a flow chart 52 with steps for providing a stage object assignment function of the signal-processing program of the stage block 24 of Fig. 2.
- the stage block 24 includes several transceivers 46 and 47 of Fig. 3 that are connected to stage objects.
- the transceivers 46 and 47 are also called input or output ports.
- the stage block ports are used to receive audio or control signals from the stage objects and are also used to send audio or control signals to the stage objects.
- the signal-processing program provides the stage object assignment function, which is intended for producing a stage view of the stage 10 on the touch screen 22.
- the stage view shows a plurality of stage object icons on the touch screen 22.
- the stage object icons correspond to the stage objects on the stage 10.
- the flow chart 52 includes a step 53 of a user selecting a stage object icon.
- This selecting step 53 comprises an act of the signal- processing program receiving a picture of the stage 10 from the user.
- the signal-processing program then displays the stage picture or image 25 on the touch screen 22. This is illustrated in Fig. 5.
- the stage image 25 on the touch screen 22 serves to provide an orientation of the touch screen 22 relative to a stage.
- the signal-processing program also shows a group 57 of object icons 60, 61, 62, and 63 as well as a group 58 of stage block port icons 65, 66, 67, and 68 on one side of the stage picture 25, as illustrated in Fig. 5.
- the stage block port icons 65, 66, 67, and 68 act to represent the input or output ports of the stage block 24.
- the object icons 60, 61, 62, and 63 are intended for representing the stage objects on the stage.
- the selection step 53 also includes a step of the signal- processing program accepting a user actuation on the touch screen 22 and on the buttons 26 via the input module 48 of Fig. 3.
- the actuation acts to provide data to the signal- processing program regarding selection of one stage object icon from the stage object icon group 57.
- Fig. 5 shows the stage object icon 61 being selected, as an example.
- the picture of the selected stage object icon 61 corresponds to the picture of the stage object for easy recognition.
- a step 54 of associating the selected stage object with a stage block port follows the selection step 53.
- the association step 54 includes an act of the signal- processing program accepting the user actuation of the touch screen 22 and of the buttons 26 to select one stage block port icon from the stage block port icon group 58.
- Fig. 5 shows the stage block port icon 65 being selected.
- the signal-processing program then enables the user to move the selected object icon 61 towards the selected stage block port 65 such that the selected stage object icon 61 and the selected stage block port icon 65 are positioned on the same location, as shown in Fig. 5.
- the above same position serves for indicating to the signal-processing program to associate or to link the selected stage object icon 61 with the selected stage block port icon 65. Since the selected stage block port icon 65 is linked in the signal-processing software to a particular stage block port while the stage block port is connected to a stage object, the selected stage block port icon 65 is also linked or associated with the stage object.
- a step 55 of positioning the selected stage object icon 61 and the selected stage block port icon 65 follows the association step 54.
- the user then moves the selected object icon 61 with the selected stage block port icon 65 to a position on the touch screen 22 that corresponds to a position of stage object on the stage 10 for easy identification.
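Steps 53 to 55 amount to a drag-and-drop hit test: when the dragged stage object icon is dropped at (approximately) the same location as a port icon, the two become linked. A sketch, with pixel coordinates and a tolerance that are assumptions for illustration:

```python
def link_on_drop(icon_pos, port_positions, tolerance=20):
    """Return the port number whose icon lies at (about) the same
    location as the dropped stage object icon, or None if no port
    icon is close enough.

    icon_pos:       (x, y) where the object icon was dropped.
    port_positions: mapping of port number -> (x, y) of its icon.
    tolerance:      how close, in pixels, counts as "same location".
    """
    x, y = icon_pos
    for port, (px, py) in port_positions.items():
        if abs(px - x) <= tolerance and abs(py - y) <= tolerance:
            return port
    return None

port_icons = {65: (40, 100), 66: (40, 140)}
linked = link_on_drop((45, 108), port_icons)   # → 65
```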
- Fig. 6 shows a table 70 of data that is used by the stage object assignment function.
- the data is stored in the memory module 43 of the stage block 24.
- the memory module 43 is used for storing a plurality of information or data. These data are organized in data fields, which are arranged in rows and in columns of the data table 70 for easy illustration.
- the data table 70 includes a column 71 for stage object icon data, a column 72 for stage block port number data that is linked with the stage object icon data, a column 73 for x- position data of the stage object icon, and a column 74 for y-position data of the stage object icon.
- the data within each row relates to one stage object icon.
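Table 70 can be modelled as a list of records, one per stage object icon. The field names and example values below are illustrative assumptions.

```python
# One row of table 70 per stage object icon: the icon image (column
# 71), the linked stage block port number (column 72), and the icon's
# x/y position on the touch screen (columns 73 and 74).
table_70 = [
    {"icon": "guitar", "port": 65, "x": 120, "y": 80},
    {"icon": "vocal",  "port": 66, "x": 240, "y": 60},
]

def icon_for_port(table, port):
    """Look up the stage object icon linked to a given port number."""
    for row in table:
        if row["port"] == port:
            return row["icon"]
    return None
```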
- Fig. 7 shows a flow chart 76 with steps for providing an equalisation function of the signal-processing program of the stage block 24 of Fig. 2.
- the equalisation function alters the frequency characteristic of the stage objects. Audio signals of the stage objects comprise a plurality of sine waves with different frequencies that extend over a low frequency band, a midrange frequency band, and a high frequency band.
- the low frequency band ranges from 0 hertz (Hz) to 250 Hz.
- the midrange frequency band ranges from 250 Hz to 6,000 Hz while the high frequency band covers frequencies above 6,000 Hz.
- the equalisation function serves to alter signal strength or amplitude of the audio signals within each frequency band.
- the flow chart 76 includes a step 78 of selecting the desired stage object icon.
- the selection step 78 includes the signal-processing program accepting a user actuation on the touch screen 22 and on the buttons 26 for selecting the desired stage object icon on the touch screen 22.
- Fig. 8 shows a screen view of the touch screen 22 when the signal-processing program of the stage block 24 provides the function of equalisation.
- Each type of stage object icon is associated with an appropriate set of pre-determined or default parameters.
- the pre-determined or default parameters serve to provide a set of parameters that are most often suitable for the specific stage objects.
- the stage object icon which relates to a violin has a set of pre-determined parameters that specify the audio processing that most sound engineers consider as best practice when processing sound from a violin. The user can alter the parameters, when needed.
- the selection step 78 is followed by a step 79 of the user adjusting the equalisation parameters.
- the signal-processing program displays a window or an area 81 on the touch screen 22 for user input, as illustrated in Fig. 8.
- the signal- processing program then accepts user input for altering signal strength of the signal within each signal band.
- Fig. 9 shows a further table 83 of data that is used by the equalisation function.
- the data is stored in the memory module 43 of the stage block 24.
- the table 83 comprises a column 71 for stage object icon data, a column 72 for stage block port number data that is linked with the stage object icon data, a column 85 of data for low frequency band, a column 86 of data for midrange fre- quency band, and a column 87 of data for high frequency band.
- the data within each row relates to the same stage object icon .
- the low frequency band column 85, the midrange band column 86, and the high frequency band column 87 include a magnification factor or an attenuation factor. This factor is intended for applying to the signal within the respective low frequency band, the midrange frequency band, and the high frequency band.
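Applying one row of table 83 — a magnification or attenuation factor per band — can be sketched on a signal represented as its sine-wave components, following the description of the audio signals above. The factor values and band boundary handling are illustrative assumptions.

```python
def band_of(freq_hz):
    """Classify a frequency into the three bands from the description:
    low: 0-250 Hz, midrange: 250-6,000 Hz, high: above 6,000 Hz."""
    if freq_hz <= 250:
        return "low"
    if freq_hz <= 6000:
        return "mid"
    return "high"

def equalise(components, factors):
    """Apply per-band magnification/attenuation factors (one row of
    table 83) to a signal given as (frequency, amplitude) components."""
    return [(f, a * factors[band_of(f)]) for f, a in components]

# A signal with one component in each band, cut or boosted per band:
row = {"low": 0.5, "mid": 1.0, "high": 2.0}   # illustrative factors
out = equalise([(100, 1.0), (1000, 1.0), (8000, 1.0)], row)
# → [(100, 0.5), (1000, 1.0), (8000, 2.0)]
```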
- the equalisation function can provide more general and sophisticated means of altering frequency characteristic.
- the audio signals can be grouped in frequency bands with higher resolution, instead of just low, mid-range, and high frequency bands.
- Fig. 10 shows a flow chart 89 with steps for providing a fader function of the signal-processing program of the stage block 24 of Fig. 2.
- the fader function serves to alter signal strength of the audio signals that are received from the stage object.
- the flow chart 89 includes a step 90 of selecting the desired stage object icon 27.
- Fig. 11 shows a screen view of the touch screen 22 when the signal-processing program of the stage block 24 provides the function of fader.
- the screen view displays a slider icon 35 that is placed next to the selected stage object icon 27 that corresponds with the stage object 14.
- the selection step 90 is followed by a step 91 of the user adjusting the fader parameter for the selected stage object.
- the signal-processing program then accepts user actuation for moving the slider button 36.
- the position of the slider button 36 provides an indication of a signal amplitude amplification factor that is intended for applying to the audio signal received from the stage object.
- the signal amplitude amplification factor is stored in the memory module 43 of the stage block 24.
- Fig. 12 shows another table 92 of data that is used by the fader function. The data is stored in the memory module 43 of the stage block 24.
- the table 92 comprises a column 71 for stage object icon data, a column 72 for stage block port number data that is linked with the stage object icon data, and a column 93 of data for an amplitude amplification factor.
- the data within each row relates to the same stage object icon.
- This amplitude amplification factor is intended for applying to the audio signal received from the stage object.
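- As a hypothetical sketch, the fader function can be illustrated as a mapping from the slider button position to the amplitude amplification factor of column 93, applied to the audio samples; the slider range and maximum gain are illustrative assumptions:

```python
# A minimal sketch of the fader: slider position -> amplification factor
# (column 93 of table 92), applied to audio samples.
# Ranges and the maximum gain are illustrative assumptions.

def slider_to_gain(position, pos_min=0, pos_max=100, gain_max=2.0):
    """Map a slider button position to an amplitude amplification factor."""
    return gain_max * (position - pos_min) / (pos_max - pos_min)

def fade(samples, gain):
    """Apply the amplification factor to every audio sample."""
    return [s * gain for s in samples]
```

Under these assumptions the mid position of the slider yields unity gain and the top position doubles the amplitude.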
- Fig. 13 shows a screen view 183 of the touch screen 22 provided by the signal-processing program of the stage block 24 when the signal-processing program provides a sound mixing function.
- the sound mixing function is used for mixing or combining different audio input signals into one or more audio output signals.
- the different combined audio signals are designated for different channels or ports of a music system.
- the screen view 183 shows several vertical input channel icons 185, 186, 187, and 188.
- Each input channel icon 185, 186, 187, and 188 has a volume slider icon 190 with a slider button 191 and several output channel button icons 192, 193, and 194.
- the channel is also known as a port.
- the input channel icons 185, 186, 187, and 188 are used to represent input ports of the stage block 24. Each input port is connected to one stage object.
- position data of the slider button 191 of an input channel icon represents the value of an amplification factor that is applied to signals received by the input channel that is represented by the respective input channel icon 185, 186, 187, or 188.
- the signal-processing program can receive user input to change the position of the slider button 191.
- Each output channel button icon 192, 193, and 194 corresponds to one output port of the stage block 24.
- the output channel button icons 192, 193, and 194 are intended for activating by the user.
- the screen view 183 shows signals from input channels 1, 2, 3, and 4 being mixed and being combined into a composite signal that is directed to the output channel 1. Signals from input channels 1, 3, and 4 are mixed and combined into a composite signal that is directed to the output channel 2. Signals from input channels 2 and 3 are mixed and combined into a signal that is directed to the output channel 3.
- Fig. 14 shows a table 195 of data that is used by the mixing function. The data is stored in the memory module 43 of the stage block 24.
- the data table 195 comprises several columns and rows. Each column has data for the rows.
- the data table 195 includes a column 196 of data for the input channel 1, a column 197 of data for the input channel 2, a column 198 of data for the input channel 3, and a column 199 of data for the input channel 4.
- the data table 195 includes a row 201 of data for signal amplification factor, a row 202 of data for the output channel 1, a row 203 of data for the output channel 2, and a row 204 of data for the output channel 3.
- the data for the column 196 relates to the input channel icon 185 of Fig. 13.
- the data for the column 197 relates to the input channel icon 186.
- the data for the column 198 relates to the input channel icon 187.
- the data for the column 199 relates to the input channel icon 188.
- the data of the column 196 for the row 201 relates to positional information of the volume slider icon 190 of the input channel icon 185.
- the data of the column 196 for the row 202 relates to activation status of the channel button icon 192 of the input channel icon 185.
- the data of the column 196 for the row 203 relates to activation status of the channel button icon 193 of the input channel icon 185.
- the data of the column 196 for the row 204 relates to activation status of the channel button icon 194 of the input channel icon 185.
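- As a hypothetical sketch, table 195 can be modelled as one entry per input channel holding the amplification factor (row 201) and the activation status of each output channel (rows 202 to 204); the concrete gains and routings below are illustrative assumptions:

```python
# A minimal sketch of table 195: per input channel, one gain value and one
# activation flag per output channel. All values are illustrative.
mix_table = {
    1: {"gain": 1.0, "out": {1: True, 2: True,  3: False}},
    2: {"gain": 0.8, "out": {1: True, 2: False, 3: True}},
    3: {"gain": 1.2, "out": {1: True, 2: True,  3: True}},
    4: {"gain": 0.5, "out": {1: True, 2: True,  3: False}},
}

def mix(inputs, out_channel):
    """Sum the amplified samples of every input routed to out_channel."""
    return sum(mix_table[ch]["gain"] * sample
               for ch, sample in inputs.items()
               if mix_table[ch]["out"][out_channel])
```

With this routing, output channel 1 receives all four inputs while output channel 3 receives only inputs 2 and 3, mirroring the kind of routing shown in the screen view 183.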
- Fig. 15 shows a screen view 210 of the touch screen 22 when the stage block of Fig. 2 provides a sub-mixing function.
- the sub-mixing function is used for mixing or for combining different audio input signals into one or more auxiliary composite audio output signals.
- the different combined audio signals are designated for different auxiliary channels or ports of a music system.
- the screen view 210 includes parts of the screen view 183 of Fig. 13. Auxiliary output channels replace the output channels of Fig. 13.
- Fig. 16 shows a table 210 of data that is used by the sub-mixing function of Fig. 15.
- the data is stored in the memory module 43 of the stage block 24.
- the table 210 includes parts of the table 195 of Fig. 14.
- Data of the auxiliary output channels replace the data of the output channels of Fig. 14.
- the sub-mixing function is provided after the main mixing function of Fig. 13 is provided. In other words, the sub-mixing function is activated only after the main mixing function is activated. When the sub-mixing function is activated for the first time, it takes on or copies the values of the main mixing function.
- the copying of the values of the main mixing function provides a convenient start point for the sub-mixing function.
- the user can then alter the starting values of the sub-mixing function as needed to produce the desired values of the sub-mixing function. Often only certain values of the main mixing function need this alteration.
- output channels of the main mixing function are provided for a music band.
- the sound of each musician in the band arrives in the digital mixing system on a separate port.
- the output channels contain a mix or a composite of these sound signals combined together.
- the sub-mixing function is provided to create a unique mix, which is sent to the monitor headphones used by a drummer of the band.
- the drummer desires to hear the sound of the entire band but with the sound of the drummer a bit louder.
- a simple alteration of the setting of main channel for the band is then needed to produce a setting for the auxiliary channel for the drummer.
- the sub-mix parameters can be capable of automatically tracking or following the corresponding parameters of the main mixing function.
- the tracking can refer to a same value tracking, wherein the auxiliary port parameter data has the same values or the same port setting as the corresponding main port parameter data. Put differently, when the main port parameter data is adjusted, the corresponding auxiliary port parameter data is also adjusted such that both the main port data and the corresponding auxiliary port data have the same value.
- the tracking can also refer to an offset tracking, wherein the auxiliary port parameter data has an offset with reference to the corresponding main port parameter data. In other words, the auxiliary port parameter data and the corresponding main port parameter data have a constant difference.
- a user can set the auxiliary port parameters such that some auxiliary port parameters have the same value tracking, some auxiliary port parameters have the offset tracking, and some auxiliary port parameters do not track.
- the sub-mix may be programmed so that port settings are automatically adjusted in the sub-mix whenever the corresponding port settings are adjusted in the main mixing function.
- the port settings in the auxiliary mixing function are then automatically adjusted.
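- The three tracking behaviours described above can be sketched as a small update rule; the mode names and the encoding are illustrative assumptions, not the patent's actual data representation:

```python
# A minimal sketch of sub-mix tracking: same-value tracking, offset
# tracking, and no tracking. Mode names are illustrative assumptions.

def track(main_value, mode, offset=0.0, current=None):
    """Return the auxiliary port value after the main port value changes."""
    if mode == "same":       # auxiliary value equals the main value
        return main_value
    if mode == "offset":     # constant difference to the main value
        return main_value + offset
    return current           # "none": auxiliary value stays unchanged
```

A drummer's monitor mix could, for example, use offset tracking on the drum channel (a constant boost) and same-value tracking everywhere else.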
- Fig. 17 shows a flow chart 110 with steps for providing an auto-adaptive configuring function of the signal-processing program of the stage block 24 of Fig. 2.
- the auto-adaptive configuration function allows automatic treatment of audio data of the stage objects based on frequency characteristics of the audio data.
- the flow chart 110 includes a step 112 of monitoring audio signals. This is followed by a step 114 of detecting changes of predetermined audio characteristics. The step 114 is followed by a step 115 of changing an audio treatment parameter based on the detected changes.
- the processing module 42 is equipped with a program for monitoring audio signals, as shown in the step 112.
- a first person can provide first audio signals to the stage block 24 via a microphone.
- the stage block 24 treats the first audio signals according to a first predefined audio parameter set.
- a second person later uses the microphone to provide second audio signals to the stage block 24.
- the program then detects changes of audio characteristics, as shown in the step 114, since the first and second audio signals have different audio signal characteristics.
- the program may use Fast Fourier Transform (FFT) techniques to detect the audio characteristics changes.
- FFT: Fast Fourier Transform
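- The spectral change detection of step 114 might be sketched as below; for brevity the sketch computes a direct DFT rather than a true FFT, and the distance measure and threshold are illustrative assumptions:

```python
# A minimal sketch of spectral change detection (step 114).
# A direct DFT is used for brevity; a real system would use an FFT.
import cmath

def spectrum(samples):
    """Magnitude spectrum of a block of samples via a direct DFT."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def characteristics_changed(previous, current, threshold=1.0):
    """Flag a change when the two spectra differ by more than threshold."""
    a, b = spectrum(previous), spectrum(current)
    distance = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return distance > threshold
```

When the detected change exceeds the threshold, the stage block would switch from the first to the second predefined audio parameter set.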
- the processing module 42 then applies changes to the audio treatment parameters based on the detected changes.
- the second audio signals are later treated in accordance with a different, second predefined audio parameter set.
- the processing module 42 performs automatically the change of stage block configuration.
- the processing module 42 issues a control signal to change a prop, such as projecting a picture on a screen, to visually acknowledge the presence of the second person.
- Fig. 18 shows a flow chart 120 with steps for an auto- feedback eliminating function of the signal-processing program of the stage block 24 of Fig. 2.
- the auto-feedback eliminating or an automatic tweaking function serves to remove positive audio feedback.
- the flow chart 120 includes a step 121 of monitoring audio signals. This is followed by a decision step 123 of checking whether a predetermined signal threshold level is exceeded. If the level is exceeded, a step 125 of adjusting signal treatment parameter is performed.
- the stage block 24 monitors the amplitude of microphone electrical signals, as shown in the step 121, and it also checks whether a predetermined signal threshold level is exceeded, as shown in the decision step 123.
- the microphone often picks up the loudspeaker sound. When this occurs, the microphones and the loudspeakers form a feedback loop.
- This feedback loop can have a positive feedback, wherein the volume of loudspeaker sound picked up by the microphone is louder than the volume of the original sound. This would lead to auto-feedback, wherein the loudspeaker produces louder and louder sounds.
- the stage block 24 deems that a positive feedback has occurred and it reduces the amplitude of the microphone signal, as shown in the step 125, for eliminating the positive feedback.
- the monitoring of the amplitude of the electrical signals can be expanded or be replaced by a monitoring of a frequency spectrum of the electrical signals.
- a stage block can then consider that a positive feedback has occurred, and it reduces the amplitude of the microphone signal for eliminating the positive feedback.
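- The monitor-check-attenuate loop of flow chart 120 can be sketched as follows; the threshold level and the attenuation step are illustrative assumptions:

```python
# A minimal sketch of auto-feedback elimination (steps 121-125):
# monitor the amplitude and attenuate the microphone gain whenever the
# predetermined threshold is exceeded. Threshold and cut are assumptions.

def eliminate_feedback(amplitudes, threshold=0.9, cut=0.5):
    gain = 1.0
    treated = []
    for a in amplitudes:
        if abs(a * gain) > threshold:  # step 123: threshold exceeded?
            gain *= cut                # step 125: reduce signal amplitude
        treated.append(a * gain)
    return treated
```

Once the gain has been cut, the loop gain of the microphone-loudspeaker path drops below unity and the runaway feedback stops building.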
- Fig. 19 shows a flow chart 130 with steps for providing an auto-muting or squelching function of the signal-processing program of the stage block 24 of Fig. 2.
- the auto-mute function enables an audio input device, such as a microphone or recorder, to automatically ignore its input signal when its level is below a certain threshold.
- the stage block 24 considers an input signal as ambient or surrounding noise when its level is below a certain threshold. The stage block 24 then ignores the input signal. Put differently, it does not treat and does not process it.
- the flow chart 130 includes a step 132 of monitoring audio signals by the stage block 24. This is followed by a decision step 135 of checking by the stage block 24 whether an amplitude value of the signal is below a predetermined threshold level. If the amplitude value of the monitored signal is below the threshold level, the stage block 24 deems that the monitored signal is ambient noise, and it then performs a step 137 of ignoring or muting the audio signal.
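- The squelch decision of flow chart 130 reduces to a simple threshold test per sample; the threshold value below is an illustrative assumption:

```python
# A minimal sketch of the auto-mute (squelch) decision: samples below the
# threshold are deemed ambient noise and replaced by silence rather than
# being processed. The threshold is an illustrative assumption.

def squelch(samples, threshold=0.05):
    return [s if abs(s) >= threshold else 0.0 for s in samples]
```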
- a scaling function can be provided to magnify the data or to attenuate the data, to cut off low or high frequency components of the data, or to alter positional information of the audio data.
- certain audio data streams may contain data specifying where sound should appear in a sound field, such as the 5-speaker sound field of a home theatre system comprising left, centre, right, left-rear, and right-rear speakers; and the positional information data may be altered by the signal-processing system within the stage block 24.
- a limiting function can be provided to limit high parts of an audio signal while raising low parts of the audio signal. In other words, the lowest and the highest volume levels of the audio signal are brought closer.
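- One crude way to realise such a limiting function is to raise every sample by a fixed gain and clamp the result at a ceiling, so the quietest and loudest levels move closer together; the ceiling and gain values are illustrative assumptions:

```python
# A minimal sketch of the limiting function: quiet parts are raised by a
# fixed gain while peaks are clamped at a ceiling. Values are assumptions.

def limit(samples, ceiling=0.8, raise_gain=1.25):
    return [max(-ceiling, min(ceiling, s * raise_gain)) for s in samples]
```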
- a compressing function can also be provided to compress the data to reduce its size such that it is easier to transmit.
- Another function can also be provided to store the data.
- the stage block 24 can serve as a digital audio recorder.
- a revert function can also be provided to revert the parameter set associated with a stage object to its previous values.
- the signal-processing program can provide real time "undo" or "re-do" functions.
- a reverb or echo function can be provided to mix or to combine a first audio output signal data with a second audio output signal data to form a composite output signal data.
- the second audio output signal data includes the first audio output signal data with a short time delay.
- the time delay refers to a time shifting of the first audio output signal data.
- the delayed first audio output signal data serves to provide an echo effect.
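- The echo effect described above can be sketched as mixing the signal with a copy of itself shifted by a number of samples; the delay length and mix level are illustrative assumptions:

```python
# A minimal sketch of the reverb/echo function: the composite output is
# the first signal plus a time-shifted, scaled copy of itself.
# Delay and mix level are illustrative assumptions.

def echo(signal, delay, mix_level=0.5):
    delayed = [0.0] * delay + signal[:len(signal) - delay]
    return [s + mix_level * d for s, d in zip(signal, delayed)]
```

An impulse fed through this sketch reappears attenuated after the delay, which is exactly the audible echo.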
- the signal-processing program can also provide a control function to adjust or move certain stage objects.
- the control function adjusts the prop for a stage user, such as raising a stage curtain.
- the control function controls the lighting device, which can provide a dim light or can project a video clip.
- Multiple stage blocks can be connected together to provide a distributed means for treating audio signals.
- Fig. 20 shows the stage block 24 of Fig. 3 arranged in a simple configuration.
- Fig. 20 depicts a first stage block 255 that is connected to a second stage block 256.
- a guitar 259 and a guitar monitor speaker 260 are connected to the first stage block 255 whilst a stage left loudspeaker 263 and a stage right loudspeaker 264 are connected to the second stage block 256.
- a display monitor 266 is connected to the first stage block 255.
- the first stage block 255 is used for receiving audio signals from the guitar 259 and for treating the received audio signals such that the treated audio signals are suitable for the guitar monitor speaker 260.
- the guitar monitor speaker 260 is used for producing monitor audio sounds using the received audio signals.
- the monitor audio sounds allow a musician playing the guitar 259 to monitor or to hear sounds that are produced by the guitar 259. Without the guitar monitor speaker 260, sounds from other instruments might prevent the musician from hearing the guitar sounds.
- the first stage block 255 is also used for transmitting signal levels of the received audio signals or signal levels of the treated audio signals to the display monitor 266.
- the first stage block 255 is also used for sending the guitar audio signals to the second stage block 256.
- the display monitor 266 is used for displaying or showing the received signal level of the audio signals for a user, for example a sound engineer.
- the display helps the user to provide inputs to the first stage block 255 such that the first stage block 255 can treat the received audio signals in a manner that the user wants.
- the display monitor 266 displays a stationary guitar icon to represent the guitar 259.
- a position of the guitar 259 on its stage corresponds to a position of the guitar icon on a display screen of the display monitor 266.
- the second stage block 256 is used for treating the guitar audio signals such that the treated audio signals are suitable for the stage left loudspeaker 263 and for the stage right loudspeaker 264.
- Fig. 21 shows several stage blocks arranged in a daisy chain configuration.
- Fig. 21 depicts a first stage block 270 that is connected to a second stage block 271 whilst the second stage block 271 is connected to a third stage block 272.
- the third stage block 272 is connected to a fourth stage block 275 that is connected to a master stage block 276.
- the master stage block 276 is connected to a display unit 278.
- the first stage block 270 is connected to a right loudspeaker 280 whilst the third stage block 272 is connected to a left loudspeaker 282.
- the second stage block 271 is connected to a microphone 284 whilst the fourth stage block 275 is connected to a guitar 285.
- the first stage block 270, the second stage block 271, the third stage block 272, the fourth stage block 275, and the master stage block 276 can communicate with each other to send data signals or control signals.
- the first stage block 270 communicates with the master stage block 276 via the second stage block 271, via the third stage block 272, and via the fourth stage block 275.
- the second stage block 271 communicates with the master stage block 276 via the third stage block 272 and via the fourth stage block 275.
- the third stage block 272 communicates with the master stage block 276 via the fourth stage block 275.
- the fourth stage block 275 communicates directly with the master stage block 276.
- the first stage block 270, the second stage block 271, the third stage block 272, the fourth stage block 275, and the master stage block 276 provide a distributed means of treating or processing audio signals.
- the second stage block 271 is used for receiving audio elec ⁇ trical signals from the microphone 284 and for treating the received signals.
- the treatment is based on microphone signal parameters that the second stage block 271 receives from the master stage block 276.
- the treatment is suitable or is adapted for signals that are produced by microphones.
- the adapting magnifies the microphone signals such that they can be processed by other stage blocks.
- the second stage block 271 is also used for sending the treated signal to the third stage block 272 and to the first stage block 270, as designated by the master stage block 276.
- the fourth stage block 275 is used for receiving audio electrical signals from the guitar 285 and for treating the received signals.
- the treatment is based on guitar signal parameters that the fourth stage block 275 receives from the master stage block 276.
- the treatment is suitable or is adapted for signals that are produced by guitars.
- the adapting magnifies the guitar signals such that they can be processed by other stage blocks.
- the fourth stage block 275 is also used for sending the treated signal to the third stage block 272 and to the first stage block 270, as designated by the master stage block 276.
- the first stage block 270 is used for receiving signals from the second stage block 271 and from the fourth stage block 275, as instructed by the master stage block 276.
- the first stage block 270 is also used for treating the received signals such that the signals are suitable or adapted for loudspeakers.
- the loudspeakers can produce sounds for an audience using the treated signals.
- the third stage block 272 is used for receiving signals from the second stage block 271 and from the fourth stage block 275, as instructed by the master stage block 276.
- the third stage block 272 is also used for treating the received signals such that the signals are suitable or adapted for loudspeakers.
- the loudspeakers can produce sounds for an audience using the treated signals.
- the master stage block 276 is used for providing control instructions and signal parameters to the first stage block 270, to the second stage block 271, to the third stage block 272, and to the fourth stage block 275.
- the master stage block 276 also acts as a database for storing control instructions and signal parameters. In the event that new stage blocks are added into the connection, the new stage blocks can obtain the required control instructions and parameters from the master stage block 276.
- the master stage block 276 receives audio sig ⁇ nals from the second stage block 271 and from the fourth stage block 275.
- the received audio signals are displayed on the display unit 278 for a user.
- the user uses the displayed information to provide inputs or instructions to the master stage block 276, which later translates these instructions into individual stage block instructions for the respective stage blocks 270, 271, 272, and 275.
- This daisy chain connection has an advantage of providing easy installation and easy expansion of the stage blocks.
- the distribution of the treating of data among the different stage blocks 270, 271, 272, 275, and 276 allows the different stage objects, such as the right loudspeaker and the left loudspeaker, to share the audio processing.
- the distribution can be done with Ethernet and can be independent of topology.
- the type of chain connection used is usually selected based on distance.
- the third stage block 272 and the fourth stage block 275 can serve as a master stage block and can be provided with a display unit.
- Fig. 22 shows several stage blocks of Fig. 3 arranged in a star configuration.
- Fig. 22 shows a master stage block 370 that is connected directly to a first stage block 373, directly to a second stage block 374, directly to a third stage block 375, and directly to a fourth stage block 376.
- the master stage block 370 is connected to a display monitor 377.
- the first stage block 373 is connected to a left loudspeaker 380 via a wired means whilst the fourth stage block 376 is connected to a right loudspeaker 381 via a wired means.
- the second stage block 374 is connected to a guitar 383 via a wired means and the third stage block 375 is connected to a microphone 384 via a wireless means.
- the master stage block 370, the first stage block 373, the second stage block 374, the third stage block 375, and the fourth stage block 376 can transmit data signals or control signals with each other.
- the master stage block 370 communicates directly to the first stage block 373, di ⁇ rectly to the second stage block 374, directly to the third stage block 375, and directly to the fourth stage block 376.
- Fig. 23 shows the stage blocks of Fig. 3 arranged in a robust system.
- Fig. 23 includes all parts of Fig. 22. Such parts include the master stage block 370, which is connected to the first stage block 373, to the second stage block 374, to the third stage block 375, and to the fourth stage block 376.
- the master stage block 370 is connected to the display monitor 377.
- the stage objects include the left loudspeaker 380, the right loudspeaker 381, the guitar 383, and the microphone 384.
- the stage objects are characterised in that they are each connected to two stage blocks, instead of one stage block.
- the left loudspeaker 380 is connected to the first stage block 373 and to the second stage block 374.
- the right loudspeaker 381 is connected to the fourth stage block 376 and to the third stage block 375.
- the guitar 383 is connected to the second stage block 374 and to the third stage block 375.
- the microphone 384 is connected to the third stage block 375 and to the fourth stage block 376.
- connection of a stage object to two stage blocks provides a robust or redundant connection in that, should one connection be broken, the affected stage object can continue to function with the other connection.
- the guitar 383 sends data signals to both the second stage block 374 and the third stage block 375. Then both the second stage block 374 and the third stage block 375 send their respective guitar data signals to the master stage block 370.
- the master stage block 370 receives the same guitar data signals from two sources. When one source fails, the master stage block 370 still receives the guitar data signals. The master block 370 later sends the guitar data signals to the loudspeaker 380 via the first stage block 373.
- the master block 370 can send the guitar data signals to both the first stage block 373 and the second stage block 374 for outputting to the loudspeaker 380.
- the loudspeaker 380 receives the same guitar data signals from two sources and uses the guitar data signals from one source. When one source fails, the loudspeaker 380 can use the guitar data signals from the other source.
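- The dual-source selection described above can be sketched as a simple failover rule; the representation of a broken link as `None` is an illustrative assumption:

```python
# A minimal sketch of redundant dual-source selection: the same guitar
# data arrives over two stage block connections; one source is used, and
# the other takes over when the first fails. `None` models a broken link.

def pick_source(primary, secondary):
    return primary if primary is not None else secondary
```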
- Fig. 24 shows another method of using the stage blocks of Fig. 3, in which a path of control signal is different from a path of cargo signal.
- Fig. 24 includes a first stage block 390 that is connected to a second stage block 392.
- a microphone 395 is connected to an input port of the first stage block 390 while a speaker 396 is connected to an output port of the first stage block 390.
- a curtain actuator 400 is connected to an output port of the second stage block 392.
- a display unit 394 is also connected to another output port of the second stage block 392.
- the second stage block 392 sends control signals to the curtain actuator 400 and does not receive any signals from the curtain actuator 400.
- the curtain actuator 400 is used for raising or lowering a stage curtain.
- the display unit 394 sends a user input or a control signal to the first stage block 390 via the second stage block 392.
- the first stage block 390 receives the control signals, without audio signals or cargo signals, from the second stage block 392.
- the combination of the two stage blocks 390 and 392 provides a system for controlling both sound processing and curtain control from a single user interface on the display unit 394.
- the embodiments show several features of the application.
- heterogeneous multi-functional independent audio processing units can be equipped with peer-to-peer communication devices and with appropriate instructions or programs to enable distributed audio processing, especially in the area of live audio mixing.
- Audio processing units provide audio mixing to treat or process one or more audio signals.
- the treating changes a characteristic of the audio signals.
- the treating can amplify the amplitude of the audio signals such that they can drive a loudspeaker.
- the audio mixing can also combine multiple audio input signals to form one or more composite audio output signals.
- the processing units receive a plurality of audio signals from several musical instruments. The audio mixing treats the multiple audio signals to enhance their sound and then combines the treated audio signals to form a left channel audio track signal and a right channel audio track signal for the purpose of recording.
- the peer-to-peer communication devices enable the processing units to broadcast and receive one or more independent data streams from each other in a simultaneous manner. This enables sharing of the audio processing by several processing units.
- the processing units can be connected such that one or more processing units can also be added or be removed without affecting stability of the remaining processing units.
- a master controller can direct behaviour of the processing units. In other words, the heterogeneous environment may allow dif ⁇ ferent processing units to co-exist and to work with each other to provide a sound mixing system.
- the application provides a digital audio processing device for a performance area.
- the performance area is also known as a stage.
- the audio processing device is used for treating or processing audio signals produced by devices on the performance area using digital techniques, such as analog to digital conversion.
- the devices can relate to a musical instrument, like a piano, or to an audio device or equipment, like a microphone or a loudspeaker.
- the audio processing device includes one or more touch screen devices and one or more audio processing blocks, wherein the audio processing block is communicatively connected to the touch screen device.
- the touch screen device comprises a touch screen for receiving one or more audio signal parameter data values from a user, like a sound engineer.
- the touch screen comprises a screen or a display that identifies an occurrence of a touch as well as a position of the touch on the display.
- the touch screen then provides occurrence information of the touch as well as positional information of the touch to a computing device.
- the audio processing block includes a memory unit, a communication module, and a processing device.
- the audio processing block is used to provide signal treatment and to provide a stage view.
- the memory unit is used for storing an audio signal-processing program and the audio signal parameter data values.
- the communication module is used for receiving two or more audio input signals from two or more corresponding performance area devices.
- the processing device is used for performing instructions of the audio signal-processing program to mix the audio input signals according to the audio signal parameter data values to produce one or more audio output signals for sending to one or more other performance area devices.
- the mixing of the audio input signals can include altering parameters of the input signals.
- the signal parameters can relate to signal amplitude or to signal phase shift.
- the touch screen is further used for receiving two or more icon image data values of the performance area devices from the user.
- the icon image data values are used for selecting images of icons that correspond to the images of the performance area devices.
- the memory unit often stores a library or collection of icons for user selection.
- the icons of the performance area devices provide a visual representation of the performance area devices.
- a user is able to easily associate the icons of the performance area devices on the touch screen with the performance area devices on the performance area.
- An outline, colours, or a drawing of the icon can enable the easy association with the corresponding performance area device.
- the touch screen is also used for receiving two or more posi ⁇ tional data values of the performance area devices from the user.
- the positional data values are used for positioning the icons of the performance area devices on the touch screen. It is a feature of the application that the icon positions of the performance area devices on the touch screen correspond to the positions of the performance area devices on the performance area.
- the performance area device icons are displayed against a background of the performance area image.
- an icon positioned on a left side of the touch screen would correspond with a performance area device on a left side of the performance area. The user would then be able to quickly and intuitively associate the icons with the performance area devices.
- the processing device is further used for performing instructions of the audio signal-processing program to display a performance area image on the touch screen and to display the icons of the performance area devices on the performance area image on the touch screen. This is unlike other implementations, which require a user to memorise relations between the screen image and the stage equipment.
- the touch screen device can comprise one or more buttons, a computer mouse, or a computer keyboard for providing an input means for a user.
- the performance area device can comprise a musical instrument, like a drum, or an audio device or equipment, like a microphone or a loudspeaker.
- the audio processing device can include a network module for connecting the touch screen device to the audio processing block.
- the network module can include a wired or wireless module, an Ethernet cable, and an Ethernet communication module.
- the application provides a further audio processing device for a performance area.
- the audio processing device includes one or more display terminals and one or more processing blocks, wherein the processing block connects to the display terminal.
- the display terminal comprises an input device and a display device.
- the input device is used for receiving one or more signal parameter data values from a user.
- the processing block comprises a memory unit, a communication module, and a processing device.
- the audio processing device provides signal treatment and a stage view.
- the memory unit is used for storing a signal-processing program and the signal parameter data values.
- the communication module is used for receiving one or more input signals from one or more corresponding performance area devices.
- the processing device is used for performing instructions of the signal-processing program to treat or process the input signals according to the signal parameter data values to produce one or more output signals for sending to one or more further performance area devices.
- the treatment of the input signals can include a step of mixing of the input signals to produce one or more composite output signals.
- the touch screen is further used for receiving one or more icon image data values of the performance area devices from the user.
- the icon image data values of the performance area devices are provided for selecting images of the performance area device icons that correspond to the images of the performance area devices.
- the touch screen is also used for receiving one or more positional data values of the performance area devices from the user.
- the positional data values of the performance area devices are provided for positioning the icons of the performance area devices on the touch screen.
- the positions of the icons of the performance area devices on the touch screen correspond to the positions of the performance area devices on the performance area.
- the processing device is also used for performing instructions of the signal-processing program to display the icons of the performance area devices on the touch screen.
- the stage view allows the user to locate easily and quickly the icon corresponding to the performance area device.
- the processing device is often used for performing the instructions of the signal-processing program to display a performance area image on the display device. This allows for easier identification of the performance area icons.
- the input signal can comprise an audio signal or a control signal.
- the control signal can be used for a stage prop, like a stage actuator, or for other performance area devices.
- a touch screen can be used to provide the display device and the input device; other means are also possible.
- the application provides a method for operating a digital audio processing device.
- the method includes a method of providing a stage view and a method of treating signals.
- the stage view method includes a step of displaying an image of a performance area on a touch screen.
- At least two data values of images of performance area devices are then received from a user. After this, icons of the performance area devices are displayed on the performance area image on the touch screen according to the performance area device image data values.
- the performance area device image data values provide images of performance area device icons that correspond with images of the performance area devices.
- At least two data values of positions of the performance area devices are later received from the user.
- the performance area device icons on the performance area image on the touch screen are then displayed according to the performance area device positional data values.
- the performance area device positional data values provide at least two positions of the performance area device icons on the performance area image of the touch screen that correspond with at least two positions of the performance area devices on the performance area.
- the steps allow the user to locate easily the performance area device icons, unlike other implementations.
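The positional correspondence between stage and screen can be sketched as a simple coordinate mapping. This Python sketch is illustrative only; the application specifies no algorithm, and the function name and normalised-coordinate convention are assumptions:

```python
def stage_to_screen(stage_pos, stage_size, screen_size):
    """Map a device position on the physical performance area to the
    matching icon position on the performance area image, preserving
    left/right and front/back relationships."""
    sx, sy = stage_pos          # position from the stage's left-front corner
    width, depth = stage_size   # physical stage dimensions
    return (sx / width * screen_size[0],
            sy / depth * screen_size[1])
```

A device two metres from the left of a ten-metre stage then appears one fifth of the way across the screen image, which is what lets the user associate icons with devices at a glance.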
- the signal treatment method includes a step of receiving at least one audio signal parameter data value from the user. At least two audio input signals are received from at least two performance area devices. The at least two audio input signals are later mixed or combined according to the at least one audio signal parameter data value to produce at least one composite audio output signal for sending to at least one further performance area device.
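The mixing step above can be sketched as a gain-weighted sum of the input channels. A minimal Python sketch; the application does not specify an algorithm, and all names here are illustrative assumptions:

```python
def mix(channels, gains):
    """Combine equal-length lists of samples into one composite output,
    scaling each input channel by its user-supplied gain parameter."""
    if len(channels) != len(gains):
        raise ValueError("one gain per channel expected")
    length = len(channels[0])
    composite = [0.0] * length
    for samples, gain in zip(channels, gains):
        for i in range(length):
            composite[i] += gain * samples[i]
    return composite
```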
- the method can also include a step of associating a pre-determined set of parameter data values of the audio signal to the performance area device icon.
- the pre-determined set of parameter data values can be used for providing parameter data values that are deemed suitable by a sound engineer for a type of the performance area device. The sound engineer can then start with these data values and tweak them for local performance area conditions.
- the method can also include a step of providing an indication of the input signal from the performance area device on the touch screen for a user when the performance area device icon corresponding to the performance area device is actuated.
- the actuation can refer to touching the performance area device icon on a screen of the display device.
- the indication provides a graphical display of certain signal parameter data values for the user.
- the indication can be positioned next to the performance area device icon.
- the display can provide a relative level of parameter data values.
- the icon position of the performance area device can be adjusted by actuating the touch screen. It can also be adjusted using a computer mouse, a computer keyboard, or buttons.
- the method can include a step of audio muting the audio input signal. This step includes an act of measuring an amplitude value of the audio input signal and an act of treating the audio input signal to produce an output signal if a measurement data value of the amplitude value exceeds a pre-determined noise data value.
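The audio muting step amounts to a noise gate: the signal is passed only when its measured amplitude exceeds the pre-determined noise value. An illustrative sketch (function and parameter names assumed, not from the application):

```python
def auto_mute(samples, noise_threshold):
    """Pass the block of samples only when its peak amplitude exceeds
    the pre-determined noise data value; otherwise output silence."""
    peak = max(abs(s) for s in samples)
    if peak > noise_threshold:
        return list(samples)
    return [0.0] * len(samples)
```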
- the method can include a step of feedback eliminating the audio input signal. This step includes an act of measuring a frequency spectrum of the audio input signal and an act of treating the audio input signal to produce an output signal if a measurement data value of the frequency spectrum exceeds a pre-determined signal feedback data value.
- the treating reduces an amplitude value of the output signal to a pre-determined operating data value.
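The feedback-eliminating step can be sketched as a spectrum measurement followed by a level reduction. The discrete Fourier transform and the peak rescaling used here are one plausible reading of the text, not the application's stated method; all names are assumptions:

```python
import cmath

def detect_feedback(samples, feedback_threshold):
    """Return True if any DFT bin magnitude exceeds the pre-determined
    feedback data value, suggesting a dominant ringing frequency."""
    n = len(samples)
    for k in range(n):
        bin_sum = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                      for t in range(n))
        if abs(bin_sum) > feedback_threshold:
            return True
    return False

def limit(samples, operating_level):
    """Rescale the block so its peak equals the pre-determined
    operating data value."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)
    scale = operating_level / peak
    return [s * scale for s in samples]
```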
- the method can include a step of equalising the audio input signal. This step includes an act of selecting a frequency band of components of the audio input signal and an act of adjusting an amplitude data value, frequency data value, and bandwidth data value of the audio input signal components within the frequency band.
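The equalising step adjusts an amplitude, frequency, and bandwidth value within a selected band; a standard way to realise this is a peaking biquad filter. This sketch uses the well-known Robert Bristow-Johnson audio-EQ-cookbook formulas, which the application does not itself specify:

```python
import math

def peaking_eq_coeffs(sample_rate, freq, gain_db, bandwidth_oct):
    """Peaking-filter coefficients; freq, gain, and bandwidth map to the
    three adjustable data values named in the method."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * freq / sample_rate
    alpha = math.sin(w0) * math.sinh(
        math.log(2) / 2 * bandwidth_oct * w0 / math.sin(w0))
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def biquad(samples, b, a):
    """Direct-form I filtering of one channel."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

At 0 dB gain the filter is an identity, which makes a convenient sanity check; a positive gain boosts the selected band.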
- the method can include a step of undoing parameter data value changes of the audio input signal.
- the step includes an act of retrieving at least one previous parameter data value of the audio input signal and an act of applying the at least one previous parameter data value.
- the method can include a step of redoing parameter data value changes of the audio input signal. This step includes an act of storing at least one pre-determined parameter data value adjustment and an act of applying the at least one pre-determined parameter data value adjustment.
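The undoing and redoing steps can be sketched as two stacks over a channel's parameter set: undo retrieves the previous values, redo re-applies a stored adjustment. The class and method names are illustrative assumptions:

```python
class ParameterHistory:
    """Undo/redo history for one channel's signal parameter data values."""

    def __init__(self, initial):
        self.current = dict(initial)
        self._undo = []   # previous parameter sets
        self._redo = []   # stored adjustments to re-apply

    def apply(self, **changes):
        """Record the current set, then apply a user adjustment."""
        self._undo.append(dict(self.current))
        self._redo.clear()
        self.current.update(changes)

    def undo(self):
        """Retrieve and restore the previous parameter data values."""
        if self._undo:
            self._redo.append(dict(self.current))
            self.current = self._undo.pop()

    def redo(self):
        """Re-apply the stored adjustment that undo removed."""
        if self._redo:
            self._undo.append(dict(self.current))
            self.current = self._redo.pop()
```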
- the method can include a step of producing reverb of the audio input signal. This step includes an act of mixing a first audio output signal data value of the audio input signal and a second audio output signal data value to form a composite output signal data value.
- the second audio output signal data value comprises the first audio output signal data value with a time delay.
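The reverb step mixes the signal with a time-delayed copy of itself, as the text describes. A deliberately minimal single-tap sketch (a practical reverb would use many taps and feedback; the names and the wet-gain parameter are assumptions):

```python
def simple_reverb(samples, delay_samples, wet_gain):
    """Mix the dry signal with a delayed, attenuated copy of itself
    to form the composite output signal."""
    out = []
    for i, dry in enumerate(samples):
        delayed = samples[i - delay_samples] if i >= delay_samples else 0.0
        out.append(dry + wet_gain * delayed)
    return out
```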
- the application provides a method for operating an audio processing device.
- the method includes a method of providing a stage view and a method of treating signals.
- the stage view method comprises a step of receiving at least one performance area device image data value from a user. At least one performance area device icon is then displayed on the display device according to the at least one performance area device image data value.
- the at least one performance area device image data value provides at least one performance area device icon image that corresponds with at least one performance area device image.
- At least one performance area device positional data value is later received from the user.
- the at least one performance area device icon is afterward displayed on the touch screen according to the at least one performance area device positional data value.
- the at least one performance area device positional data value provides at least one performance area device icon position on the touch screen that corresponds with at least one performance area device position on the performance area.
- the signal treatment method includes a step of receiving at least one signal parameter data value.
- a signal-processing program and the at least one signal parameter data value are later stored.
- At least one input signal from at least one performance area device is then received.
- the at least one input signal is then treated according to the at least one signal parameter data value to produce at least one output signal for sending to at least one further performance area device.
- the signal parameter data can be used for signal compression, signal attenuation, or signal magnification.
- the method can include a step of displaying an image or a picture of a performance area or a stage on the display device.
- the performance area image enables a user to locate the icon of the performance area device easily.
- the act of treating the input signal can include an act of mixing or combining two or more audio signals from the two or more devices of the performance area to form the output signal. Other acts of treating the input are also possible.
- the act of treating includes a step of altering parameter values of the input signal.
- the application provides an audio module.
- the audio module provides a reliable processing of audio signals.
- the audio module includes a first digital audio processing unit, a second digital audio processing unit, a first audio device, and a second audio device.
- the first audio device is used for sending a first audio signal to the first digital audio processing unit and to the second digital audio processing unit.
- the first digital audio processing unit treats or processes the first audio signal to produce a first intermediate audio signal while the second digital audio processing unit treats the first audio signal to produce a second intermediate audio signal.
- the second audio device later receives one of the first intermediate audio signal from the first digital audio processing unit and the second intermediate audio signal from the second digital audio processing unit to produce an audio output signal.
- This structure allows the second audio device to function when one of the first digital audio processing unit and the second digital audio processing unit fails or malfunctions.
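The redundancy structure can be sketched as a receiver-side selection: the second audio device takes the signal from whichever processing unit is still delivering one. A deliberately minimal sketch, with failure modelled as `None` and all names assumed:

```python
def select_output(primary_signal, secondary_signal):
    """Choose the primary unit's processed signal when that unit is
    alive; otherwise fall back to the redundant secondary unit."""
    if primary_signal is not None:
        return primary_signal
    return secondary_signal
```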
- the first or the second digital audio processing unit can include one of the above audio processing devices.
- the application provides a method of using a digital audio processing device.
- the method provides a sub-mix function.
- a sound engineer is often required to produce a main mix of sound signals and an auxiliary mix of sound signals.
- the main mix of sound signals can be used for the main loudspeakers while the auxiliary mix of sound signals can be used for certain persons, like a guitarist of a band.
- the method includes a step of receiving two or more audio input signals from two or more corresponding main performance area devices.
- the performance area devices can refer to a musical instrument, like a piano, or to an audio device, like a microphone.
- a user provides one or more main mix parameter data values.
- the audio input signals are then mixed or combined to form one or more composite main audio output signals according to the main mix parameter data values.
- One or more auxiliary mix parameter data values are later derived from the main mix parameter data values. Later, the audio input signals are mixed to form one or more auxiliary audio output signals according to the auxiliary mix parameter data values.
- since the auxiliary mix parameter data values are derived from the main mix parameter data values, they can be produced quickly in an automated way using a computer, allowing the sound engineer to focus on other matters.
- the step of deriving the auxiliary parameter data values can include a step of duplicating or copying the auxiliary parameter data values from the main parameter data values.
- Most auxiliary parameter data values are the same as or similar to the main parameter data values.
- a sound engineer would just need to adjust some appropriate auxiliary parameter data values to make the auxiliary parameter data values suitable for use.
- the method can include a step of receiving an adjustment data value from a user.
- the main parameter data value is then adjusted according to the adjustment data value.
- the auxiliary parameter data value is also adjusted according to the ad ⁇ justment data value.
- adjustment of the main parameter data value also causes the auxiliary parameter data value to be adjusted.
- the adjustment of the auxiliary parameter data value can be done such that the main parameter data value and the auxiliary parameter data value have a pre-determined offset or difference.
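The sub-mix derivation and the linked adjustment described above can be sketched as follows. The channel names, the dB-style levels, and the offset dictionary are illustrative assumptions, not from the application:

```python
def derive_aux_mix(main_params, offsets=None):
    """Copy the main-mix parameter data values, then apply per-channel
    offsets so the auxiliary mix tracks the main mix at a fixed
    pre-determined difference."""
    offsets = offsets or {}
    return {ch: level + offsets.get(ch, 0.0)
            for ch, level in main_params.items()}

def adjust(main_params, aux_params, channel, delta):
    """A single user adjustment moves both mixes by the same amount,
    preserving the pre-determined offset between them."""
    main_params[channel] += delta
    aux_params[channel] += delta
```

Because the auxiliary values are derived rather than entered twice, a later change to one channel's main level propagates automatically, which is the time saving the text claims.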
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application provides a digital audio processing device for a performance area. The audio processing device includes one or more touch screen devices and one or more audio processing blocks. The audio processing block connects to the touch screen device. The audio processing block comprises a memory unit, a communication module, and a processing device. The touch screen device comprises a touch screen for receiving one or more audio signal parameter data values from a user. The touch screen is provided for receiving at least two performance area device icon image data values from the user and for receiving at least two performance area device positional data values from the user.
Description
DIGITAL SOUND MIXING SYSTEM WITH GRAPHICAL CONTROLS
The present application relates to sound mixing. In particular, the application relates to distributed digital sound mixing.
JP 2004147262 discloses a method for distributing video data. This method includes a video input part and an audio input part obtaining an analog video signal and an analog audio signal respectively. The received analog video signal and the received analog audio signal are then converted into digital video data by an analog to digital conversion part. The digital data are later compressed by an information compression part. A packet generation part forms data packets using the compressed digital data such that the packets are suitable for unicast, broadcast, or multicast. The packets are also formatted according to a protocol that is suitable for low latency delivery. A transmission control part afterward transfers the data packets to a network interface part. The network interface part then distributes the data packets over a network.
It is an object of the application to provide an improved device and a method for mixing sound.
The application provides a digital audio processing device for a performance area. The audio processing device includes one or more touch screen devices and one or more audio processing blocks. The audio processing block connects to the touch screen device.
The touch screen device comprises a touch screen for receiving one or more audio signal parameter data values from a user. The audio processing block comprises a memory unit, a communication module, and a processing device.
Specifically, the memory unit is provided for storing an audio signal-processing program and the audio signal parameter data value. The communication module is provided for receiving two or more audio input signals from two or more performance area devices. The processing device is provided for performing instructions of the audio signal-processing program to mix the audio input signals according to the audio signal parameter data value to produce one or more audio output signals. The audio output signal is provided for sending to one or more further performance area devices.
The processing device is further provided for performing instructions of the audio signal-processing program to display a performance area image on the touch screen and to display at least two performance area device icons on the touch screen.
The touch screen is further provided for receiving at least two performance area device icon image data values from the user. The performance area device icon image data values are provided for selecting performance area device icon images that correspond to the performance area device images.
The touch screen is also provided for receiving at least two performance area device positional data values from the user. The performance area device positional data values are provided for positioning the performance area device icons on the touch screen.
In the following description, details are provided to describe embodiments of the application. It shall be apparent to one skilled in the art, however, that the embodiments may be practised without such details.
Fig. 1 illustrates a stage with stage objects, the stage being connected to an improved digital sound mixing system,
Fig. 2 illustrates the digital sound mixing system of Fig. 1 with a touch screen device and with a stage block,
Fig. 3 illustrates a schematic of the stage block of Fig. 2,
Fig. 4 illustrates a flow chart for a function of assigning stage object icons to the stage objects of the stage block of Fig. 2,
Fig. 5 illustrates a screen view of the touch screen device of Fig. 2 when the stage block provides the stage object assignment function of Fig. 4,
Fig. 6 illustrates a table of data that is used by the stage object assignment function of Fig. 4,
Fig. 7 illustrates a flow chart for a function of equalisation of the stage block of Fig. 2,
Fig. 8 illustrates a screen view of the touch screen device of Fig. 2 when the stage block provides the equalisation function of Fig. 7,
Fig. 9 illustrates a table of data that is used by the equalisation function of Fig. 7,
Fig. 10 illustrates a flow chart for a fader function of the stage block of Fig. 2,
Fig. 11 illustrates a screen view of the touch screen device of Fig. 2 when the stage block provides the fader function of Fig. 10,
Fig. 12 illustrates a table of data that is used by the fader function of Fig. 10,
Fig. 13 illustrates a screen view of the touch screen device of Fig. 2 when the stage block provides a mixing function,
Fig. 14 illustrates a table of data that is used by the mixing function of Fig. 13,
Fig. 15 illustrates a screen view of the touch screen device of Fig. 2 when the stage block of Fig. 2 provides a sub-mixing function,
Fig. 16 illustrates a table of data that is used by the sub-mixing function of Fig. 15,
Fig. 17 illustrates a flow chart for an auto-adaptive configuring function of the stage block of Fig. 2,
Fig. 18 illustrates a flow chart for an auto-feedback eliminating function of the stage block of Fig. 2,
Fig. 19 illustrates a flow chart for an auto-muting function of the stage block of Fig. 2,
Fig. 20 illustrates the stage blocks of Fig. 2 being connected in a simple configuration,
Fig. 21 illustrates several stage blocks of Fig. 3 being connected in a daisy chain configuration,
Fig. 22 illustrates several stage blocks of Fig. 3 being connected in a star configuration,
Fig. 23 illustrates the stage blocks of Fig. 3 being connected in a robust configuration, and
Fig. 24 illustrates another method of using the stage block of Fig. 3.
The Figs. below have similar parts. The similar parts have the same names or similar part numbers. The description of the similar parts is hereby incorporated by reference, where appropriate, thereby reducing repetition of text without limiting the disclosure.
Fig. 1 depicts a physical performance stage 10 that is communicatively connected to an improved digital sound mixing system 12. The digital sound mixing system 12 is also called a stage workstation.
The stage 10 includes a plurality of objects 14, 15, 16, and 17, which are placed in different positions on the stage 10. The stage objects 14 and 16 are connected to the sound mixing system 12 via wireless communication means 20 whilst the stage objects 15 and 17 are connected to the sound mixing system 12 via wired communication means 21.
Fig. 2 shows the sound mixing system 12 that includes a touch screen device 23 that is communicatively connected to a stage block 24. The touch screen device 23 includes a touch screen 22 and multiple buttons 26. The buttons 26 are provided on the right side of the touch screen device 23.
The touch screen device 23 is physically separated from the stage block 24 although they can also be placed next to each other.
In a special embodiment, the touch screen device 23 is part of a control personal computer that is connected to the stage block 24 via a wired local area network (LAN) although they can also be connected via a wireless means.
During operation of the sound mixing system 12, the orientation of the touch screen 22 of the touch screen device 23 with respect to the stage 10 is generally fixed, and an image of the stage 10 is displayed as a background image on the touch screen 22.
Functionally, the stage 10 provides a platform for an event that is used for performing, entertaining, or communicating to an audience. One example of the event is a musical performance.
The stage objects 14, 15, 16, and 17 are used for the event and they can refer to an audio-equipment, to a musical instrument, and to stage equipment.
The audio-equipment can refer to a microphone for converting sound signals to electrical signals. The audio-equipment may also refer to a loudspeaker, which receives electrical signals from the stage block 24 and which converts the received electrical signals to audio sounds.
The musical instrument can refer to a musical device, such as a guitar, which is equipped with an audio receiver. The audio receiver is intended for receiving audio or sound signals and for converting the received sound signals to electrical signals. Both electrical signals of the audio-equipment and of the musical instrument are intended for sending to the stage block 24 of the sound mixing system 12 for processing.
The stage equipment can include a prop actuator or a lighting device. The prop actuator can activate a curtain or other stage items. The lighting device can comprise a spotlight or a video projector.
The communication means 20 and 21 are intended for conveying control signals and audio signals between the stage objects 14, 15, 16, and 17 and the stage block 24. The audio signals are also called cargo signals.
The touch screen 22 is used for displaying a stage view. The stage view shows a picture or an image 25 of the stage 10. It also shows a plurality of object icons 27, 28, 29, and 30. The object icons 27, 28, 29, and 30 have pictures that are displayed on the touch screen 22. Positions of the object icons 27, 28, 29, and 30 on the stage image 25 correspond with positions of the stage objects 14, 15, 16, and 17 on the stage 10 for easy identification. This is illustrated in Fig. 1. For example, the object icon 27, which is positioned on the left side of the stage image 25, corresponds to the stage object 14, which is positioned on the left side of the stage 10.
Furthermore, shapes or images of the object icons 27, 28, 29, and 30 correspond with shapes or images of the stage objects 14, 15, 16, and 17 respectively for intuitive identification. In one example, the object icon 27, which includes a picture of a guitar, corresponds to the stage object 14, which includes a guitar. This is illustrated in Fig. 1. In summary, the stage view, which is generated using a computing technique with user input, provides an enhanced view of the stage 10 and a view of the stage objects.
This easy and intuitive identification is especially important when the stage 10 has a large number of stage objects.
This is also unlike other implementations, where the user has to memorise linkages between multiple stage items and multiple screen items.
A user actuates the touch screen 22 and the buttons 26 to provide inputs to the stage block 24. The actuation includes a pushing action or a touching action. An example of the pushing action is the user pushing the buttons 26. An example of the touching action is the user touching an area or a spot of a surface of the touch screen. The touch can include a single tapping action or a double tapping action. The touch can also comprise a touch and drag action, wherein a finger of the user touches a desired spot on the touch screen 22 and continues touching the touch screen 22 while moving the finger to another spot on the touch screen 22.
The stage block 24 is used for receiving audio or cargo signals from the stage objects 14, 15, 16, and 17. The stage block 24 is also used for treating or processing the received cargo signals, and for sending the processed cargo signals to the stage objects 14, 15, 16, and 17. One example of this is a stage block receiving audio signals from a stage object in the form of a guitar. The stage block then treats the audio signals to improve the sound quality. It later sends the treated signal to a stage object in the form of a loudspeaker. The stage block 24 is also used for sending control signals to the stage objects 14, 15, 16, and 17. One example of this is a stage block sending an activation signal to a stage object in the form of a stage prop or a lighting device. In short, a user of the touch screen device 23 is able to identify the object icons 27, 28, 29, and 30 that correspond with the stage objects 14, 15, 16, and 17 in an easy and intuitive manner.
The stage block 24 treats signals received from the stage objects according to inputs that are provided by the user via the touch screen 22 and via the buttons 26. The stage block 24 then sends the treated signals to the other stage objects. The treated signals can include an audio signal, a control signal, or both the audio signal and the control signal. This is unlike other implementations, where an operator of a sound mixer sees an array of fader-type controls, one control for one sound channel. The operator is required to keep track of which sound source is on which channel. If the operator needs to create multiple sound mixes, such as a sound mix for front speakers and a special sound mix for headphones of musicians, then the operator is required to set up each sound mix individually. Later, if the level of one sound source changes, the operator must go back to all of the individual sound mixes to make a compensating change for all sound mixes.
After the operator makes a series of adjustments and afterward decides to revert to the previous sound settings, the reversion is done manually, which is often time consuming. Sometimes, this also creates problems where, for example, the adjustments cause audible disturbance in a live performance. Often, the adjustments need to be done rapidly. In a generic sense, the touch screen 22 and the buttons 26 can also be provided at any area that is convenient for the user. A display monitor with a keyboard and a computer mouse can replace the touch screen 22 and the buttons 26. The display monitor with the keyboard and the mouse can be part of a personal computer system, which is located in an area separated from the stage block 24 but is communicatively connected to the stage block 24.
In practice, a big concert or other live sound event can have multiple sound mixing systems 12. One sound mixing system 12 is located among an audience of the concert to mix audio signals for front of house (FOH) speakers that are heard by the audience. Another sound mixing system 12 is located at a side of a stage to mix audio signals for monitor speakers that are positioned directly in front of performers so that they can hear one another. Optionally, the concert can have a separate sound mixing system 12 for broadcasting or recording.
In practice, it is also possible that multiple touch screens 22 and buttons 26 can be used with a single digital sound mixing system 12. This may be desired if a single digital sound mixing system 12 is capable of providing audio signals for all the required stage objects such as front of house speakers, monitor speakers, and broadcast feeds. In this case, multiple operators would control those audio signals from the different touch screens 22 with the buttons 26 at different locations.
Fig. 3 shows a schematic for the stage block 24 of the sound mixing system 12 of Fig. 1.
The stage block 24 includes a processing module 42. The processing module 42 is connected to a memory module 43, to a display module 44, to a network communication module 45, and to an input module 48. The memory module 43 is connected to a wireless transceiver 46. An Analog to Digital Converter (ADC) module 50 connects a wired transceiver 47 to the memory module 43 while a Digital to Analog Converter (DAC) module 51 also connects the wired transceiver 47 to the memory module 43.
Functionally, the wireless transceiver 46 and the wired transceiver 47 are used for transmitting data between the stage objects 14, 15, 16, and 17 and the memory module 43. Referring to the wireless transceiver 46, it is used for transmitting digital data from the stage objects 14 or 16 to the memory module 43 via a wireless medium. The digital data is arranged in a data packet format for easy handling. The stage objects may receive analog or digital data from their sources. In the case of a source providing analog data, the stage objects have an analog to digital converting means for converting the received analog data to its digital form.
In contrast, the wired transceiver 47 is used for receiving analog data from the stage objects 15 or 17 via a wired medium. These stage objects receive analog data from their sources. After this, the stage objects transmit the received analog data to the wired transceiver 47 through the wired medium. The wired transceiver 47 then sends the analog data to the ADC 50, which converts the analog data to its digital form. The ADC 50 later transmits the digitised data to the memory module 43.
The memory module 43 is used for receiving stage object data from the transceivers 46 and 47 and for storing these stage object data. The memory module 43 is also used to store a software signal-processing program with modules or functions for treating the stage object data according to pre-determined parameter sets or according to parameters defined by the user. These parameters specifying treatment of the stage object data are also stored in the memory module 43. The input module 48 is used for receiving data from the touch screen 22 and buttons 26 and for transmitting the received data to the processing module 42.
The display module 44 is used for receiving data from the processing module 42 and for sending the received data to the touch screen 22 for display to the user.
The processing module 42 is intended for performing instructions of the signal-processing program, which is stored in the memory module 43. The signal-processing program performs operations on the stage object data according to pre-determined parameter sets or according to parameters defined by a user. Such operations may comprise combining different stage object data streams from multiple stage objects and scaling the relative levels of the individual stage object data streams.
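The combining and scaling operation described above can be sketched as follows. This is an illustrative simplification, not the patented implementation: the function name, the plain-list sample representation, and the per-stream level values are assumptions for illustration.

```python
# Illustrative sketch of the processing module's core operation:
# each stage object data stream is scaled by its level parameter,
# then the scaled streams are summed sample by sample.

def mix_streams(streams, levels):
    """Scale each stream by its level and sum the results sample-by-sample."""
    if len(streams) != len(levels):
        raise ValueError("one level per stream is required")
    length = max(len(s) for s in streams)
    mixed = [0.0] * length
    for stream, level in zip(streams, levels):
        for i, sample in enumerate(stream):
            mixed[i] += level * sample
    return mixed
```

For example, two streams mixed at levels 0.5 and 1.0 produce one combined stream whose samples are the weighted sums of the inputs.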
The network communication module 45 acts to receive data from the processing module 42 and to transmit the received data to another stage block. Multiple processing modules 42 can be communicatively connected by their network communication modules 45, wherein some processing modules 42 can serve to treat the data whilst one processing module 42 can serve to supervise or to manage the treatment of these processing modules 42 through the network communication modules 45.
The network communication module 45 also acts to enable two-way communication from other touch screens 22, other buttons 26, and other processing modules 42. This two-way communication allows users to control the processing module 42 through user inputs on their touch screens 22 and buttons 26. Additionally, the users may communicate new parameters specifying the treatment of stage object data, those parameters being communicated through the network communication module 45 to the processing module 42, which stores those parameters in the memory module 43 for later use. In a generic sense, the wired transceiver 47 can receive digital data, and not just analog data, from the stage objects. It can transmit the digital data directly to the memory module 43. Likewise, the wired transceiver 47 can also receive digital data from the memory module 43 directly and then send the received digital data to the stage objects, which have digital to analog converters for converting these digital data to their analog form.
In practice, the processing module 42 can receive multiple streams of data from different types of stage objects. The memory module 43 can store a multi-stream signal-processing program capable of handling multiple streams of data, together with multiple predefined parameter sets or profiles for the different types of stage objects. A user can assign each stage object to a parameter set according to its stage object type. The user can later also adjust the assigned parameter sets to improve sound quality. The processing module 42 then treats the various data streams according to instructions from the signal-processing program and according to the assigned parameter sets.
Different ways of implementing the parts of the stage block are possible. A computer motherboard with sound cards can be
used for implementing or realizing the stage block 24. Besides this, different types of Programmable Gate Array (PGA) components can also be used to realize the stage block 24. The different types of PGA can include a Field-Programmable Gate Array (FPGA), which can be described using a Hardware Description Language (HDL), such as Verilog HDL. The stage block 24 can also be implemented using Complex Programmable Logic Devices (CPLD), Field-Programmable Analog Arrays (FPAA), or Software Defined Silicon (SDS).
To utilize off-the-shelf components for lower cost, the processing module 42 can also be implemented using a General Purpose Graphics Processing Unit (GPGPU). The GPGPU can have a Compute Unified Device Architecture (CUDA).
Several functions of the signal-processing program of the stage block 24 are provided below.
Fig. 4 shows a flow chart 52 with steps for providing a stage object assignment function of the signal-processing program of the stage block 24 of Fig. 2.
The stage block 24 includes several transceivers 46 and 47 of Fig. 3 that are connected to stage objects. The transceivers 46 and 47 are also called input or output ports. The stage block ports are used to receive audio or control signals from the stage objects and are also used to send audio or control signals to the stage objects. The signal-processing program provides the stage object assignment function, which is intended for producing a stage view of the stage 10 on the touch screen 22. The stage view shows a plurality of stage object icons on the touch screen 22. The stage object icons correspond to the stage objects on the stage 10.
The flow chart 52 includes a step 53 of a user selecting a stage object icon.
This selecting step 53 comprises an act of the signal-processing program receiving a picture of the stage 10 from the user. The signal-processing program then displays the stage picture or image 25 on the touch screen 22. This is illustrated in Fig. 5. The stage image 25 on the touch screen 22 serves to provide an orientation of the touch screen 22 relative to a stage. The signal-processing program also shows a group 57 of object icons 60, 61, 62, and 63 as well as a group 58 of stage block port icons 65, 66, 67, and 68 on one side of the stage picture 25, as illustrated in Fig. 5. The stage block port icons 65, 66, 67, and 68 act to represent the input or output ports of the stage block 24. The object icons 60, 61, 62, and 63 are intended for representing the stage objects on the stage.
The selection step 53 also includes a step of the signal-processing program accepting a user actuation on the touch screen 22 and on the buttons 26 via the input module 48 of Fig. 3. The actuation acts to provide data to the signal-processing program regarding selection of one stage object icon from the stage object icon group 57. Fig. 5 shows the stage object icon 61 being selected, as an example. The picture of the selected stage object icon 61 corresponds to the picture of the stage object for easy recognition.
A step 54 of associating the selected stage object with a stage block port follows the selection step 53.
The association step 54 includes an act of the signal-processing program accepting the user actuation of the touch screen 22 and of the buttons 26 to select one stage block port icon from the stage block port icon group 58. As an example, Fig. 5 shows the stage block port icon 65 being selected.
The signal-processing program then enables the user to move the selected object icon 61 towards the selected stage block port 65 such that the selected stage object icon 61 and the selected stage block port icon 65 are positioned on the same location, as shown in Fig. 5.
The above same position serves for indicating to the signal-processing program to associate or to link the selected stage object icon 61 with the selected stage block port icon 65. Since the selected stage block port icon 65 is linked in the signal-processing software to a particular stage block port while the stage block port is connected to a stage object, the selected stage block port icon 65 is also linked or associated with the stage object.
A step 55 of positioning the selected stage object icon 61 and the selected stage block port icon 65 follows the association step 54. The user then moves the selected object icon 61 with the selected stage block port icon 65 to a position on the touch screen 22 that corresponds to the position of the stage object on the stage 10 for easy identification.
Fig. 6 shows a table 70 of data that is used by the stage object assignment function. The data is stored in the memory module 43 of the stage block 24. The memory module 43 is used for storing a plurality of information or data. These data are organized in data fields, which are arranged in rows and in columns of the data table 70 for easy illustration. The data table 70 includes a column 71 for stage object icon data, a column 72 for stage block port number data that is linked with the stage object icon data, a column 73 for x-position data of the stage object icon, and a column 74 for y-position data of the stage object icon. The data within each row relates to one stage object icon.
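A row of table 70 can be represented as a simple record linking an icon, a port number, and the icon's on-screen position. The sketch below is illustrative only; the icon names, port numbers, and coordinates are assumed values, not data from the patent.

```python
# Illustrative representation of the assignment table 70: each row links
# a stage object icon to a stage block port and its x/y screen position
# (columns 71, 72, 73, and 74 respectively).
stage_object_table = [
    {"icon": "violin", "port": 1, "x": 120, "y": 80},
    {"icon": "guitar", "port": 2, "x": 240, "y": 80},
]

def port_for_icon(table, icon_name):
    """Look up the stage block port linked to a given stage object icon."""
    for row in table:
        if row["icon"] == icon_name:
            return row["port"]
    return None  # icon not yet associated with any port
```

With this structure, the signal-processing program can resolve an icon the user touched to the physical port carrying that stage object's signal.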
Fig. 7 shows a flow chart 76 with steps for providing an equalisation function of the signal-processing program of the stage block 24 of Fig. 2.
The equalisation function alters the frequency characteristics of the stage objects. Audio signals of the stage objects comprise a plurality of sine waves with different frequencies that extend over a low frequency band, a midrange frequency band, and a high frequency band. The low frequency band ranges from 0 hertz (Hz) to 250 Hz. The midrange frequency band ranges from 250 Hz to 6,000 Hz, while the high frequency band extends upwards from 6,000 Hz. The equalisation function serves to alter the signal strength or amplitude of the audio signals within each frequency band.
The flow chart 76 includes a step 78 of selecting the desired stage object icon. The selection step 78 includes the signal-processing program accepting a user actuation on the touch screen 22 and on the buttons 26 for selecting the desired stage object icon on the touch screen 22. Fig. 8 shows a screen view of the touch screen 22 when the signal-processing program of the stage block 24 provides the function of equalisation.
Each type of stage object icon is associated with an appropriate set of pre-determined or default parameters. The pre-determined or default parameters serve to provide a set of parameters that are most often suitable for the specific stage objects. As an example, the stage object icon which relates to a violin has a set of pre-determined parameters that specify the audio processing that most sound engineers consider best practice when processing sound from a violin. The user can alter the parameters when needed.
The selection step 78 is followed by a step 79 of the user adjusting the equalisation parameters. The signal-processing program displays a window or an area 81 on the touch screen 22 for user input, as illustrated in Fig. 8. The signal-processing program then accepts user input for altering the signal strength of the signal within each frequency band.
Fig. 9 shows a further table 83 of data that is used by the equalisation function. The data is stored in the memory module 43 of the stage block 24. The table 83 comprises a column 71 for stage object icon data, a column 72 for stage block port number data that is linked with the stage object icon data, a column 85 of data for the low frequency band, a column 86 of data for the midrange frequency band, and a column 87 of data for the high frequency band. The data within each row relates to the same stage object icon. The low frequency band column 85, the midrange band column 86, and the high frequency band column 87 include a magnification factor or an attenuation factor. This factor is intended for applying to the signal within the respective low frequency band, midrange frequency band, and high frequency band.
In a general sense, the equalisation function can provide a more general and sophisticated means of altering frequency characteristics. The audio signals can be grouped in frequency bands with higher resolution, instead of just low, midrange, and high frequency bands.
Fig. 10 shows a flow chart 89 with steps for providing a fader function of the signal-processing program of the stage block 24 of Fig. 2. The fader function serves to alter signal strength of the audio signals that are received from the stage object.
The flow chart 89 includes a step 90 of selecting the desired stage object icon 27. Fig. 11 shows a screen view of the touch screen 22 when the signal-processing program of the stage block 24 provides the fader function. The screen view displays a slider icon 35 that is placed next to the selected stage object icon 27 that corresponds with the stage object 14.
The selection step 90 is followed by a step 91 of the user adjusting the fader parameter for the selected stage object.
The signal-processing program then accepts user actuation for moving the slider button 36. The position of the slider button 36 provides an indication of the signal amplitude amplification factor that is intended for applying to the audio signal received from the stage object. The signal amplitude amplification factor is stored in the memory module 43 of the stage block 24. Fig. 12 shows another table 92 of data that is used by the fader function. The data is stored in the memory module 43 of the stage block 24. The table 92 comprises a column 71 for stage object icon data, a column 72 for stage block port number data that is linked with the stage object icon data, and a column 93 of data for an amplitude amplification factor. The data within each row relates to the same stage object icon. This amplitude amplification factor is intended for applying to the audio signal received from the stage object. Fig. 13 shows a screen view 183 of the touch screen 22 provided by the signal-processing program of the stage block 24 when the signal-processing program provides a sound mixing function. The sound mixing function is used for mixing or combining different audio input signals into one or more audio output signals. The different combined audio signals are designated for different channels or ports of a music system. The screen view 183 shows several vertical input channel icons 185, 186, 187, and 188. Each input channel icon 185, 186, 187, and 188 has a volume slider icon 190 with a slider
button 191 and several output channel button icons 192, 193, and 194. The channel is also known as a port.
The input channel icons 185, 186, 187, and 188 are used to represent input ports of the stage block 24. Each input port is connected to one stage object.
Position data of the slider button 191 of an input channel icon is used to represent the value of an amplification factor that is applied to signals received by the input channel, the input channel being represented by the said input channel icon 185, 186, 187, or 188.
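One plausible mapping from slider position to amplification factor is sketched below. The 0-100 position range and the decibel end points are assumptions for illustration; the patent does not specify the mapping, and a real fader curve could be linear, logarithmic, or tabulated.

```python
# Hypothetical mapping from a volume slider position to the linear
# amplification factor applied to an input channel's signal.

def slider_to_gain(position, min_db=-60.0, max_db=12.0):
    """Convert a slider position (0..100) to a linear amplification factor,
    interpolating linearly in decibels between the assumed end points."""
    if not 0 <= position <= 100:
        raise ValueError("slider position must lie between 0 and 100")
    db = min_db + (max_db - min_db) * position / 100.0
    return 10.0 ** (db / 20.0)
```

Interpolating in decibels rather than in linear gain is a common design choice because perceived loudness is roughly logarithmic.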
The signal-processing program can receive user input to change the position of the slider button 191. The user provides the user input by actuating the touch screen 22 or by actuating the buttons 26.
Each output channel button icon 192, 193, and 194 corresponds to one output port of the stage block 24. The output channel button icons 192, 193, and 194 are intended for activation by the user.
When the user activates the output channel button icon 192, 193, or 194 of a particular input channel icon, signals from the input channel that corresponds to the particular input channel icon are directed to the output channel that corresponds to the said output channel button icon 192, 193, or 194. The signal is directed to the output channel and is mixed or is combined with any other signal that is also directed to the same output channel.
Referring to Fig. 13, as an example, the screen view 183 shows signals from input channels 1, 2, 3, and 4 being mixed and being combined into a composite signal that is directed to the output channel 1. Signals from input channels 1, 3, and 4 are mixed and combined into a composite signal that is directed to the output channel 2. Signals from input channels 2 and 3 are mixed and combined into a signal that is directed to the output channel 3. Fig. 14 shows a table 195 of data that is used by the mixing function. The data is stored in the memory module 43 of the stage block 24.
The data table 195 comprises several columns and rows. Each column has data for the rows.
The data table 195 includes a column 196 of data for the input channel 1, a column 197 of data for the input channel 2, a column 198 of data for the input channel 3, and a column 199 of data for the input channel 4.
The data table 195 includes a row 201 of data for the signal amplification factor, a row 202 of data for the output channel 1, a row 203 of data for the output channel 2, and a row 204 of data for the output channel 3.
The data for the column 196 relates to the input channel icon 185 of Fig. 13. The data for the column 197 relates to the input channel icon 186. The data for the column 198 relates to the input channel icon 187. The data for the column 199 relates to the input channel icon 188.
Referring to the column 196, the data of the column 196 for the row 201 relates to positional information of the volume slider icon 190 of the input channel icon 185. The data of the column 196 for the row 202 relates to the activation status of the channel button icon 192 of the input channel icon 185. The data of the column 196 for the row 203 relates to the activation status of the channel button icon 193 of the input channel icon 185. The data of the column 196 for the row 204 relates to the activation status of the channel button icon 194 of the input channel icon 185.
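The gain row and routing rows of table 195 can be sketched as a routing matrix. The gain and routing values below are illustrative, chosen to match the example routing of Fig. 13 (output 1 takes inputs 1-4, output 2 takes inputs 1, 3, and 4, output 3 takes inputs 2 and 3); the function name is an assumption.

```python
# Sketch of the mixing function of Figs. 13 and 14: a per-input gain
# (row 201) and, per output channel, the set of activated inputs
# (rows 202-204 hold the activation status of the channel buttons).

gains = {1: 0.8, 2: 0.5, 3: 1.0, 4: 0.6}   # slider positions as linear gains
routing = {                                 # output channel -> active inputs
    1: [1, 2, 3, 4],
    2: [1, 3, 4],
    3: [2, 3],
}

def mix_outputs(inputs, gains, routing):
    """Combine gain-scaled input channel samples into each output channel."""
    outputs = {}
    for out_ch, in_channels in routing.items():
        outputs[out_ch] = sum(gains[ch] * inputs[ch] for ch in in_channels)
    return outputs
```

Activating or deactivating a channel button icon corresponds to adding or removing that input channel from the output's routing list.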
In a similar way, the data for the columns 197, 198, and 199 are linked to the input channel icons 186, 187, and 188, respectively. Fig. 15 shows a screen view 210 of the touch screen 22 when the stage block of Fig. 2 provides a sub-mixing function.
The sub-mixing function is used for mixing or for combining different audio input signals into one or more auxiliary composite audio output signals. The different combined audio signals are designated for different auxiliary channels or ports of a music system.
The screen view 210 includes parts of the screen view 183 of Fig. 13. Auxiliary output channels replace the output channels of Fig. 13.
Fig. 16 shows a table 210 of data that is used by the sub- mixing function of Fig. 15. The data is stored in the memory module 43 of the stage block 24.
The table 210 includes parts of the table 195 of Fig. 14. Data of the auxiliary output channels replace the data of the output channels of Fig. 14. The sub-mixing function is provided after the main mixing function of Fig. 13 is provided. In other words, the sub- mixing function is activated only after the main mixing function is activated. When the sub-mixing function is activated for the first time, it takes on or copies the values of the main mixing function.
The copying of the values of the main mixing function provides a convenient start point for the sub-mixing function. The user can then alter the starting values of the sub-mixing function as needed to produce the desired values of the sub- mixing function. Often only certain values of the main mixing function need this alteration.
In one example, the output channels of the main mixing function are provided for a music band. The sound of each musician in the band arrives in the digital mixing system on a separate port. The output channels contain a mix or a composite of these sound signals combined together. The sub-mixing function is provided to create a unique mix, which is sent to the monitor headphones used by a drummer of the band. The drummer desires to hear the sound of the entire band but with the sound of the drums a bit louder. A simple alteration of the setting of the main channel for the band is then needed to produce a setting for the auxiliary channel for the drummer.
By starting with a copy of the main channel mix setting, and then only changing the port setting for the drummer, a significant reduction in the workload of the sound engineer is achieved.
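The copy-then-adjust workflow described above can be sketched in a few lines. The port names and gain values are assumed for illustration; the only point made is that the sub-mix starts as an independent copy of the main mix, so adjusting one port does not disturb the main mix.

```python
import copy

# Sketch of sub-mix initialisation: the sub-mixing function copies the
# main mix values, then only the drummer's port is raised, as in the
# monitor-headphones example above.

main_mix = {"drums": 0.7, "guitar": 0.8, "vocals": 0.9}

sub_mix = copy.deepcopy(main_mix)   # start from the main mixing values
sub_mix["drums"] = 1.0              # only the drummer's port is altered
```

Because the copy is deep, the main mix retains its original drum level after the adjustment.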
In practice, a sound engineer often must adjust levels of individual ports during a live performance. For this reason, it may be desired that the sub-mix parameters be capable of automatically tracking or following corresponding parameters of the main mixing function.
The tracking can refer to same-value tracking, wherein the auxiliary port parameter data has the same values or the same port setting as the corresponding main port parameter data. Put differently, when the main port parameter data is adjusted, the corresponding auxiliary port parameter data is also adjusted such that both the main port data and the corresponding auxiliary port data have the same value. The tracking can also refer to offset tracking, wherein the auxiliary port parameter data has an offset with reference to the corresponding main port parameter data. In other words, the auxiliary port parameter data and the corresponding main port parameter data have a constant difference.
In use, a user can set the auxiliary port parameters such that some auxiliary port parameters have same-value tracking, some auxiliary port parameters have offset tracking, and some auxiliary port parameters do not track.
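The three tracking behaviours can be sketched as a single update rule applied when a main port parameter changes. The mode names and function signature are assumptions for illustration.

```python
# Sketch of the three tracking modes for an auxiliary port parameter:
# "same"   - same-value tracking: the auxiliary follows the main value;
# "offset" - offset tracking: the auxiliary keeps a constant difference;
# "none"   - no tracking: the auxiliary value is left unchanged.

def track_aux(main_value, mode, aux_value=None, offset=0.0):
    """Return the auxiliary port value after a main port adjustment."""
    if mode == "same":
        return main_value
    if mode == "offset":
        return main_value + offset
    return aux_value  # no tracking
```

Each auxiliary port would carry its own mode, so a single main adjustment updates every tracking port in one pass.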
In one embodiment of the invention, the sub-mix may be programmed so that port settings are automatically adjusted in the sub-mix whenever the corresponding port settings are adjusted in the main mixing function. In other words, when the port settings in the main mixing function are adjusted, the port settings in the auxiliary mixing function are automatically adjusted.
This allows for quick adjustment of output port settings.
Fig. 17 shows a flow chart 110 with steps for providing an auto-adaptive configuring function of the signal-processing program of the stage block 24 of Fig. 2.
The auto-adaptive configuration function allows automatic treatment of audio data of the stage objects based on frequency characteristics of the audio data.
The flow chart 110 includes a step 112 of monitoring audio signals. This is followed by a step 114 of detecting changes of predetermined audio characteristics. The step 114 is followed by a step 115 of changing audio treatment parameters based on the detected changes.
As an example of the auto-adaptive configuring function, the processing module 42 is equipped with a program for monitoring audio signals, as shown in the step 112. A first person can provide first audio signals to the stage block 24 via a microphone. The stage block 24 treats the first audio signals according to a first predefined audio parameter set. A second person later uses the microphone to provide second audio signals to the stage block 24.
The program then detects changes of audio characteristics, as shown in the step 114, since the first and second audio signals have different audio signal characteristics. The program may use Fast Fourier Transform (FFT) techniques to detect the audio characteristics changes.
Following this, the processing module 42 applies changes to the audio treatment parameters based on the detected changes, as shown in the step 115. The second audio signals are later treated in accordance with a different second predefined audio parameter set. The processing module 42 performs the change of stage block configuration automatically.
In a different embodiment, instead of signal treatment in accordance with a different audio parameter set, the processing module 42 issues a control signal to change a prop, such as projecting a picture on a screen, to visually acknowledge the presence of the second person.
Fig. 18 shows a flow chart 120 with steps for an auto-feedback eliminating function of the signal-processing program of the stage block 24 of Fig. 2. The auto-feedback eliminating function, also called an automatic tweaking function, serves to remove positive audio feedback.
The flow chart 120 includes a step 121 of monitoring audio signals. This is followed by a decision step 123 of checking whether a predetermined signal threshold level is exceeded. If the level is exceeded, a step 125 of adjusting a signal treatment parameter is performed.
To illustrate an application of the auto-feedback eliminating function, consider a situation in which a microphone converts sound to electrical signals. These microphone electrical signals are transmitted to a loudspeaker. The loudspeaker then converts the transmitted electrical signals to sound.
When the auto-feedback eliminating function is activated, the stage block 24 monitors the amplitude of the microphone electrical signals, as shown in the step 121, and it also checks whether a predetermined signal threshold level is exceeded, as shown in the decision step 123.
The microphone often picks up the loudspeaker sound. When this occurs, the microphone and the loudspeaker form a feedback loop. This feedback loop can have a positive feedback, wherein the volume of the loudspeaker sound picked up by the microphone is louder than the volume of the original sound. This would lead to auto-feedback, wherein the loudspeaker produces louder and louder sounds.
If the monitored amplitude of the microphone electrical signals exceeds a certain threshold level, the stage block 24 deems that a positive feedback has occurred and it reduces the amplitude of the microphone signal, as shown in the step 125, for eliminating the positive feedback.
In a generic sense, the monitoring of the amplitude of the electrical signals can be expanded or be replaced by a monitoring of a frequency spectrum of the electrical signals.
When the monitored frequency spectrum of the microphone electrical signals exceeds a certain threshold level, a stage block can consider that a positive feedback has occurred and it reduces the amplitude of the microphone signal for eliminating the positive feedback.
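The amplitude-based form of steps 121 to 125 can be sketched as a single check-and-attenuate function. The peak-amplitude measure and the 0.5 attenuation factor are assumptions for illustration; the patent specifies only that the microphone signal amplitude is reduced when the threshold is exceeded.

```python
# Sketch of the auto-feedback eliminating step 125: when the monitored
# peak amplitude exceeds the threshold, the microphone signal is
# attenuated to break the positive feedback loop.

def suppress_feedback(samples, threshold, attenuation=0.5):
    """Attenuate the signal if its peak amplitude exceeds the threshold."""
    peak = max(abs(s) for s in samples)
    if peak > threshold:
        return [attenuation * s for s in samples]
    return samples
```

Signals that stay below the threshold pass through unchanged, so normal programme material is unaffected.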
Fig. 19 shows a flow chart 130 with steps for providing an auto-muting or squelching function of the signal-processing program of the stage block 24 of Fig. 2. The auto-mute function enables an audio input device, such as a microphone or recorder, to automatically ignore its input signal when its level is below a certain threshold. The stage block 24 considers an input signal as ambient or surrounding noise when its level is below a certain threshold. The stage block 24 then ignores the input signal. Put differently, it does not treat and does not process it.
The flow chart 130 includes a step 132 of monitoring audio signals by the stage block 24. This is followed by a decision step 135 of checking by the stage block 24 whether an amplitude value of the signal is below a predetermined threshold level. If the amplitude value of the monitored signal is below the threshold level, the stage block 24 deems that the monitored signal is ambient noise, and it then performs a step 37 of ignoring the audio signal.
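The squelch decision of flow chart 130 can be sketched as follows. Using peak amplitude as the level measure and returning silence to represent "ignoring" the signal are assumptions for illustration.

```python
# Sketch of the auto-muting (squelch) function: a signal whose peak
# amplitude stays below the threshold is treated as ambient noise and
# ignored (represented here by returning silence).

def squelch(samples, threshold):
    """Mute the signal when its peak amplitude is below the threshold."""
    peak = max(abs(s) for s in samples)
    if peak < threshold:
        return [0.0] * len(samples)  # ambient noise: ignore the input
    return samples
```

Signals at or above the threshold are passed through for normal treatment.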
Other functions of the signal-processing programs for treating data from the stage objects are also possible. A scaling function can be provided to magnify the data or to attenuate the data, to cut off low or high frequency components of the data, or to alter positional information of the audio data. In other words, certain audio data streams may contain data specifying where sound should appear in a sound field, such as the 5-speaker sound field of a home theatre system comprising left, centre, right, left-rear, and right-rear speakers; and the positional information data may be altered by the signal-processing system within the stage block 24.
A limiting function can be provided to limit high parts of an audio signal while raising low parts of the audio signal. In other words, the lowest and the highest volume levels of the audio signal are brought closer.
A compressing function can also be provided to compress the data to reduce its size such that it is easier for transmitting. Another function can also be provided to store the data. In other words, the stage block 24 can serve as a digital audio recorder. A revert function can also be provided to revert the parameter set associated with a stage object to its previous values. In other words, the signal-processing program can provide real time "undo" or "re-do" functions. A reverb or echo function can be provided to mix or to combine a first audio output signal data with a second audio output signal data to form a composite output signal data. The second audio output signal data includes the first audio output signal data with a short time delay. The time delay refers to a time shifting of the first audio output signal data. The delayed first audio output signal data serves to provide an echo effect.
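The reverb or echo function described above, mixing a signal with a delayed copy of itself, can be sketched as follows. The delay expressed in samples and the 0.5 attenuation of the delayed copy are assumptions for illustration.

```python
# Sketch of the reverb/echo function: the composite output is the first
# signal plus a time-shifted, attenuated copy of it (the second signal).

def add_echo(samples, delay, attenuation=0.5):
    """Combine a signal with its delayed, attenuated copy for an echo effect."""
    out = list(samples) + [0.0] * delay  # room for the tail of the echo
    for i, s in enumerate(samples):
        out[i + delay] += attenuation * s
    return out
```

A single delayed copy gives a discrete echo; repeating the process with decreasing attenuation would approximate a reverb tail.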
The signal-processing program can also provide a control function to adjust or move certain stage objects. In one implementation, the control function adjusts the prop for a stage user, such as raising a stage curtain. In another implementation, the control function controls the lighting device, which can provide a dim light or can project a video clip.
Different ways of implementing distributed processing are possible. Multiple stage blocks can be connected together to provide a distributed means for treating audio signals.
Fig. 20 shows the stage block 24 of Fig. 3 arranged in a simple configuration. Fig. 20 depicts a first stage block 255 that is connected to a second stage block 256.
A guitar 259 and a guitar monitor speaker 260 are connected to the first stage block 255 whilst a stage left loudspeaker 263 and a stage right loudspeaker 264 are connected to the second stage block 256. A display monitor 266 is connected to the first stage block 255.
Operationally, the first stage block 255 is used for receiving audio signals from the guitar 259 and for treating the received audio signals such that the treated audio signals are suitable for the guitar monitor speaker 260.
The guitar monitor speaker 260 is used for producing monitor audio sounds using the received audio signals. The monitor audio sounds allow a musician playing the guitar 259 to monitor or to hear sounds that are produced by the guitar 259. Without the guitar monitor speaker 260, sounds from other instruments might prevent the musician from hearing the guitar sounds.
Moreover, the first stage block 255 is also used for transmitting signal levels of the received audio signals or signal levels of the treated audio signals to the display monitor 266. The first stage block 255 is also used for sending the guitar audio signals to the second stage block 256.
The display monitor 266 is used for displaying or showing the received signal level of the audio signals for a user, for example a sound engineer. The display helps the user to provide inputs to the first stage block 255 such that the first stage block 255 can treat the received audio signals in a manner that the user wants. For easier tracking, the display monitor 266 displays a stationary guitar icon to represent the guitar 259. A position of the guitar 259 on its stage corresponds to a position of the guitar icon on a display screen of the display monitor 266. The second stage block 256 is used for treating the guitar audio signals such that they are suitable for the stage left loudspeaker 263 and for the stage right loudspeaker 264. The stage left loudspeaker 263 and the stage right loudspeaker 264 are used for producing audio sounds using the treated guitar audio signals for an audience who are gathered to listen to music that is produced by the guitar 259. Fig. 21 shows several stage blocks arranged in a star configuration. Fig. 21 depicts a first stage block 270 that is connected to a second stage block 271 whilst the second stage block 271 is connected to a third stage block 272. The third stage block 272 is connected to a fourth stage block 275 that is connected to a master stage block 276.
The master stage block 276 is connected to a display unit 278. The first stage block 270 is connected to a right loudspeaker 280 whilst the third stage block 272 is connected to a left loudspeaker 282. The second stage block 271 is connected to a microphone 284 whilst the fourth stage block 275 is connected to a guitar 285.
Operationally, the first stage block 270, the second stage block 271, the third stage block 272, the fourth stage block 275, and the master stage block 276 can communicate with each other to send data signals or control signals.
The first stage block 270 communicates with the master stage block 276 via the second stage block 271, via the third stage block 272, and via the fourth stage block 275. Likewise, the second stage block 271 communicates with the master stage block 276 via the third stage block 272 and via the fourth stage block 275. The third stage block 272 communicates with the master stage block 276 via the fourth stage block 275. The fourth stage block 275 communicates directly with the master stage block 276.
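The multi-hop communication described above can be sketched as follows. This is an illustrative sketch only, not code from the patent; the function name `route_to_master` and the list representation of the chain are assumptions made for illustration.

```python
# Illustrative sketch of daisy chain routing: each stage block forwards a
# message one hop towards the master block. The chain order mirrors
# Fig. 21: 270 -> 271 -> 272 -> 275 -> 276 (276 is the master stage block).
CHAIN = [270, 271, 272, 275, 276]

def route_to_master(source_block):
    """Return the intermediate blocks a message passes through on its
    way from a source stage block to the master stage block."""
    start = CHAIN.index(source_block)
    return CHAIN[start + 1:-1]  # hops between source and master

# The first stage block 270 reaches the master via 271, 272, and 275,
# whereas the fourth stage block 275 communicates directly:
print(route_to_master(270))  # [271, 272, 275]
print(route_to_master(275))  # []
```

The sketch shows why installation is simple: adding a block only extends the list, and every block reuses the same forwarding rule.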
The first stage block 270, the second stage block 271, the third stage block 272, the fourth stage block 275, and the master stage block 276 provide a distributed means of treating or processing audio signals.
The second stage block 271 is used for receiving audio electrical signals from the microphone 284 and for treating the received signals. The treatment is based on microphone signal parameters that the second stage block 271 receives from the master stage block 276. The treatment is suitable or is adapted for signals that are produced by microphones. The adapting magnifies the microphone signals such that they can be processed by other stage blocks. The second stage block 271 is also used for sending the treated signal to the third stage block 272 and to the first stage block 270, as designated by the master stage block 276.

Similarly, the fourth stage block 275 is used for receiving audio electrical signals from the guitar 285 and for treating the received signals. The treatment is based on guitar signal parameters that the fourth stage block 275 receives from the master stage block 276. The treatment is suitable or is adapted for signals that are produced by guitars. The adapting magnifies the guitar signals such that they can be processed by other stage blocks. The fourth stage block 275 is also used for sending the treated signal to the third stage block 272 and to the first stage block 270, as designated by the master stage block 276.
The first stage block 270 is used for receiving signals from the second stage block 271 and from the fourth stage block 275, as instructed by the master stage block 276. The first stage block 270 is also used for treating the received signals such that the signals are suitable or are adapted for loudspeakers. The loudspeakers can produce sounds for an audience using the treated signals.

Likewise, the third stage block 272 is used for receiving signals from the second stage block 271 and from the fourth stage block 275, as instructed by the master stage block 276. The third stage block 272 is also used for treating the received signals such that the signals are suitable or are adapted for loudspeakers. The loudspeakers can produce sounds for an audience using the treated signals.
The master stage block 276 is used for providing control instructions and signal parameters to the first stage block 270, to the second stage block 271, to the third stage block 272, and to the fourth stage block 275.
The master stage block 276 also acts as a database for storing control instructions and signal parameters. In the event that new stage blocks are added into the connection, the new stage blocks can obtain the required control instructions and parameters from the master stage block 276.
In particular, the master stage block 276 receives audio signals from the second stage block 271 and from the fourth stage block 275. The received audio signals are displayed on the display unit 278 for a user. The user then uses the displayed information to provide inputs or instructions to the master stage block 276, which later translates these instructions into individual stage block instructions for the respective stage blocks 270, 271, 272, and 275.
This daisy chain connection has an advantage of providing easy installation and easy expansion of the stage blocks. The distribution of treating data by the different stage blocks 270, 271, 272, 275, and 276 allows the different stage objects, such as the right loudspeaker 280, the left loudspeaker 282, the microphone 284, and the guitar 285, to have dedicated resources.
In a general sense, the distribution can be done with Ethernet and can be independent of topology. The type of chain connection used is usually selected based on distance. Each of the first stage block 270, the second stage block 271, the third stage block 272, and the fourth stage block 275 can serve as a master stage block and can be provided with a display unit.
Fig. 22 shows several stage blocks of Fig. 3 arranged in a star configuration. Fig. 22 shows a master stage block 370 that is connected directly to a first stage block 373, directly to a second stage block 374, directly to a third stage block 375, and directly to a fourth stage block 376. The master stage block 370 is connected to a display monitor 377. The first stage block 373 is connected to a left loudspeaker 380 via a wired means whilst the fourth stage block 376 is connected to a right loudspeaker 381 via a wired means. The second stage block 374 is connected to a guitar 383 via a wired means and the third stage block 375 is connected to a microphone 384 via a wireless means.
Functionally, the master stage block 370, the first stage block 373, the second stage block 374, the third stage block 375, and the fourth stage block 376 can transmit data signals or control signals to each other. The master stage block 370 communicates directly with the first stage block 373, directly with the second stage block 374, directly with the third stage block 375, and directly with the fourth stage block 376.
Signals are received by the master stage block 370, the first stage block 373, the second stage block 374, the third stage block 375, and the fourth stage block 376, and the received signals are treated by the said stage blocks 370, 373, 374, 375, and 376 in a manner that is similar to the earlier embodiment.
The star connection allows simple installation and easy expansion of stage blocks. Even if one stage block becomes faulty, it would not cause the entire connection to stop working.
Fig. 23 shows the stage blocks of Fig. 3 arranged in a robust system. Fig. 23 includes all parts of Fig. 22. Such parts include the master stage block 370, which is connected to the first stage block 373, to the second stage block 374, to the third stage block 375, and to the fourth stage block 376. The master stage block 370 is connected to the display monitor 377.
The stage objects include the left loudspeaker 380, the right loudspeaker 381, the guitar 383, and the microphone 384. The stage objects are characterised in that they are each connected to two stage blocks, instead of one stage block.
Specifically, the left loudspeaker 380 is connected to the first stage block 373 and to the second stage block 374. The right loudspeaker 381 is connected to the fourth stage block 376 and to the third stage block 375. The guitar 383 is connected to the second stage block 374 and to the third stage block 375. The microphone 384 is connected to the third stage block 375 and to the fourth stage block 376.
The connection of the stage object to two stage blocks provides a robust or redundant connection in that, should one connection be broken, the affected stage object can continue to function with the other connection.
In one example, the guitar 383 sends data signals to both the second stage block 374 and the third stage block 375. Then both the second stage block 374 and the third stage block 375 send their respective guitar data signals to the master stage block 370. The master stage block 370 receives the same guitar data signals from two sources. When one source fails, the master stage block 370 still receives the guitar data signals. The master stage block 370 later sends the guitar data signals to the loudspeaker 380 via the first stage block 373.
Furthermore, the master stage block 370 can send the guitar data signals to both the first stage block 373 and the second stage block 374 for outputting to the loudspeaker 380. The loudspeaker 380 receives the same guitar data signals from two sources and uses the guitar data signals from one source. When one source fails, the loudspeaker 380 can use the guitar data signals from the other source.
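The redundant-source behaviour above can be sketched as a fallback read. This is an illustrative sketch under stated assumptions, not the patent's implementation; the class `SignalSource` and the failure model (an exhausted source raises an error) are hypothetical.

```python
# Illustrative sketch of redundancy: a receiver keeps two sources carrying
# the same guitar signal and falls back to the backup when the primary fails.
class SignalSource:
    def __init__(self, frames):
        self.frames = list(frames)

    def read_frame(self):
        if not self.frames:
            raise ConnectionError("source failed")
        return self.frames.pop(0)

def read_redundant(primary, backup):
    """Use the primary source; fall back to the backup on failure."""
    try:
        return primary.read_frame()
    except ConnectionError:
        return backup.read_frame()

primary = SignalSource([0.1])           # fails after one frame
backup = SignalSource([0.1, 0.2, 0.3])  # carries the same guitar signal

print(read_redundant(primary, backup))  # 0.1, read from the primary
print(read_redundant(primary, backup))  # primary failed: 0.1 from the backup
```

Only a splitter and a second cable are needed on the hardware side; the receiving block simply prefers one source and switches over when it stops delivering.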
This feature is especially important for live performances, wherein the failure of audio processing would have a severe impact because an audience would miss parts of a performance. This way of providing redundancy is also inexpensive and easy to install, unlike other implementations. Only a splitter and an additional cable are usually enough to provide the additional connection.

Fig. 24 shows another method of using the stage blocks of Fig. 3 in which a path of a control signal is different from a path of a cargo signal.
Fig. 24 includes a first stage block 390 that is connected to a second stage block 392. A microphone 395 is connected to an input port of the first stage block 390 while a loudspeaker 396 is connected to an output port of the first stage block 390. A curtain actuator 400 is connected to an output port of the second stage block 392. A display unit 394 is also connected to another output port of the second stage block 392.
The second stage block 392 sends control signals to the curtain actuator 400 and does not receive any signals from the curtain actuator 400. The curtain actuator 400 is used for raising or lowering a stage curtain. The display unit 394 sends a user input or a control signal to the first stage block 390 via the second stage block 392. The first stage block 390 receives the control signals, without audio signals or cargo signals, from the second stage block 392.
In this implementation, the combination of the two stage blocks 390 and 392 provides a system for controlling both sound processing and the stage curtain from a single user interface on the display unit 394.
In summary, the embodiments show several features of the application.
It is believed that heterogeneous multi-functional independent audio processing units can be equipped with peer-to-peer communication devices and with appropriate instructions or programs to enable distributed audio processing, especially in the area of live audio mixing.
Audio processing units provide audio mixing to treat or process one or more audio signals. The treating changes a characteristic of the audio signals. For example, the treating can amplify the amplitude of the audio signals such that they can drive a loudspeaker. The audio mixing can also combine multiple audio input signals to form one or more composite audio output signals. In one example, the processing units receive a plurality of audio signals from several musical instruments. The audio mixing treats the multiple audio signals to enhance their sound and then combines the treated audio signals to form a left channel audio track signal and a right channel audio track signal for the purpose of recording.
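The combining of several instrument signals into left and right channel tracks can be sketched as follows. This is an illustrative sketch only; the linear pan law, the sample values, and the function name `mix_stereo` are assumptions, not taken from the patent.

```python
# Illustrative mixer sketch: several mono signals are gained and panned
# into a left channel track and a right channel track.
def mix_stereo(signals, gains, pans):
    """Mix mono sample lists into (left, right) tracks.

    pan: 0.0 = fully left, 1.0 = fully right (simple linear pan law).
    """
    n = len(signals[0])
    left = [0.0] * n
    right = [0.0] * n
    for sig, gain, pan in zip(signals, gains, pans):
        for i, s in enumerate(sig):
            left[i] += s * gain * (1.0 - pan)
            right[i] += s * gain * pan
    return left, right

guitar = [1.0, 0.5]
voice = [0.2, 0.4]
left, right = mix_stereo([guitar, voice], gains=[0.5, 1.0], pans=[0.0, 1.0])
print(left)   # guitar only: [0.5, 0.25]
print(right)  # voice only: [0.2, 0.4]
```

The gains and pans here play the role of the audio signal parameter data values that the user provides through the touch screen.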
The peer-to-peer communication devices enable the processing units to broadcast and receive one or more independent data streams from each other in a simultaneous manner. This enables sharing of the audio processing by several processing units. The processing units can be connected such that one or more processing units can also be added or be removed without affecting stability of the remaining processing units. A master controller can direct behaviour of the processing units. In other words, the heterogeneous environment may allow different processing units to co-exist and to work with each other to provide a sound mixing system.
The application provides a digital audio processing device for a performance area. The performance area is also known as a stage. The audio processing device is used for treating or processing audio signals produced by devices on the performance area using digital techniques, such as analog to digital conversion. The devices can relate to a musical instrument, like a piano, or to an audio device or equipment, like a microphone or a loudspeaker.
The audio processing device includes one or more touch screen devices and one or more audio processing blocks, wherein the audio processing block is communicatively connected to the touch screen device.
The touch screen device comprises a touch screen for receiving one or more audio signal parameter data values from a user, like a sound engineer. The touch screen comprises a screen or a display that identifies an occurrence of a touch as well as a position of the touch on the display. The touch screen then provides occurrence information of the touch as well as positional information of the touch to a computing device.

The audio processing block includes a memory unit, a communication module, and a processing device. The audio processing block is used to provide signal treatment and to provide a stage view. To provide signal treatment, the memory unit is used for storing an audio signal-processing program and the audio signal parameter data values. The communication module is used for receiving two or more audio input signals from two or more corresponding performance area devices.
The processing device is used for performing instructions of the audio signal-processing program to mix the audio input signals according to the audio signal parameter data values to produce one or more audio output signals for sending to one or more other performance area devices. The mixing of the audio input signals can include altering parameters of the input signals. The signal parameters can relate to signal amplitude or to signal phase shift.

To provide a stage view, the touch screen is further used for receiving two or more icon image data values of the performance area devices from the user. The icon image data values are used for selecting images of icons that correspond to the images of the performance area devices.
The memory unit often stores a library or collection of icons for user selection.
The icons of the performance area devices provide a visual representation of the performance area devices. In other words, a user is able to associate easily the icons of the performance area devices on the touch screen with the performance area devices on the performance area. An outline, colours, or a drawing of the icon can enable the easy association with the corresponding performance area device. The touch screen is also used for receiving two or more positional data values of the performance area devices from the user. The positional data values are used for positioning the icons of the performance area devices on the touch screen. It is a feature of the application that the icon positions of the performance area devices on the touch screen correspond to the positions of the performance area devices on the performance area.
Put differently, the positions of the icons of the performance area devices on the touch screen correspond to the positions of the performance area devices on the performance area. Moreover, the performance area device icons are displayed against a background of the performance area image. As an illustration, an icon positioned on a left side of the touch screen would correspond with a performance area device on a left side of the performance area. The user would then be able to quickly and intuitively associate the icons with the performance area devices.
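The correspondence between stage positions and icon positions can be sketched as a simple coordinate mapping. This is an illustrative sketch; the normalised-coordinate convention and the screen resolution are assumptions for illustration only, not the patent's specification.

```python
# Illustrative sketch of the stage view mapping: a device position given
# in normalised stage coordinates (0..1, left-to-right and front-to-back)
# is scaled to pixel coordinates on the touch screen, so an icon on the
# left of the screen matches a device on the left of the performance area.
def stage_to_screen(stage_x, stage_y, screen_w, screen_h):
    return (round(stage_x * screen_w), round(stage_y * screen_h))

# A guitar standing a quarter of the way in from stage left, near the
# front, lands on the left part of an assumed 800 x 480 touch screen:
print(stage_to_screen(0.25, 0.9, screen_w=800, screen_h=480))  # (200, 432)
```

Because the mapping is a plain scaling, dragging an icon on the screen can be inverted the same way to update the stored positional data value.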
The processing device is further used for performing instructions of the audio signal-processing program to display a performance area image on the touch screen and to display the icons of the performance area devices on the performance area image on the touch screen. This is unlike other implementations, which require a user to memorise relations between the screen image and the stage equipment.
In general, the touch screen device can comprise one or more buttons, a computer mouse, or a computer keyboard for providing an input means for a user.
The performance area device can comprise a musical instrument, like a drum, or an audio device or equipment, like a microphone or a loudspeaker.
For easy implementation, the audio processing device can include a network module for connecting the touch screen device to the audio processing block. The network module can include a wired or wireless module, an Ethernet cable, and an Ethernet communication module.
The application provides a further audio processing device for a performance area. The audio processing device includes one or more display terminals and one or more processing blocks, wherein the processing block connects to the display terminal.
The display terminal comprises an input device and a display device. The input device is used for receiving one or more signal parameter data values from a user. The processing block comprises a memory unit, a communication module, and a processing device.
The audio processing device provides signal treatment and a stage view.

To provide the signal treatment, the memory unit is used for storing a signal-processing program and the signal parameter data values. The communication module is used for receiving one or more input signals from one or more corresponding performance area devices. The processing device is used for performing instructions of the signal-processing program to treat or process the input signals according to the signal parameter data values to produce one or more output signals for sending to one or more further performance area devices. The treatment of the input signals can include a step of mixing of the input signals to produce one or more composite output signals.
To provide the stage view, the touch screen is further used for receiving one or more icon image data values of the performance area devices from the user. The icon image data values of the performance area devices are provided for selecting images of the performance area device icons that correspond to the images of the performance area devices.
The touch screen is also used for receiving one or more positional data values of the performance area devices from the user. The positional data values of the performance area devices are provided for positioning the icons of the performance area devices on the touch screen.
It is a feature of the application that the positions of the icons of the performance area devices on the touch screen correspond to the positions of the performance area devices on the performance area.
The processing device is also used for performing instructions of the signal-processing program to display the icons of the performance area devices on the touch screen.
The stage view allows the user to locate easily and quickly the icon corresponding to the performance area device.
The processing device is often used for performing the instructions of the signal-processing program to display a performance area image on the display device. This allows for easier identification of the performance area icons.
The input signal can comprise an audio signal or a control signal. The control signal can be used for a stage prop, like a stage actuator, or for other performance area devices. Although a touch screen can be used to provide the display device and the input device, other means are also possible.
The application provides a method for operating a digital audio processing device.
The method includes a method of providing a stage view and a method of treating signals.
The stage view method includes a step of displaying an image of a performance area on a touch screen.
At least two data values of images of performance area devices are then received from a user. After this, icons of the performance area devices are displayed on the performance area image on the touch screen according to the performance area device image data values. The performance area device image data values provide images of performance area device icons that correspond with images of the performance area devices.
At least two data values of positions of the performance area devices are later received from the user. The performance area device icons on the performance area image on the touch screen are then displayed according to the performance area device positional data values. The performance area device positional data values provide at least two positions of the performance area device icons on the performance area image of the touch screen that correspond with at least two positions of the performance area devices on the performance area.
The steps allow the user to locate easily the performance area device icons, unlike other implementations.
The signal treatment method includes a step of receiving at least one audio signal parameter data value from the user. At least two audio input signals are received from at least two performance area devices. The at least two audio input signals are later mixed or combined according to the at least one audio signal parameter data value to produce at least one composite audio output signal for sending to at least one further performance area device.
The method can also include a step of associating a pre-determined set of parameter data values of the audio signal to the performance area device icon. The pre-determined set of parameter data values can be used for providing parameter data values that are deemed suitable by a sound engineer for a type of the performance area device. The sound engineer can then start with these data values and tweak the data values for local performance area conditions.
The method can also include a step of providing an indication of the input signal from the performance area device on the touch screen for a user when the performance area device icon corresponding to the performance area device is actuated.
The actuation can refer to touching the performance area device icon on a screen of the display device. The indication provides a graphical display of certain signal parameter data values for the user. The indication can be positioned next to the performance area device icon. The display can provide a relative level of parameter data values.

The icon position of the performance area device can be adjusted by actuating the touch screen. It can also be adjusted by using a computer mouse or a computer keyboard. It can also be adjusted using buttons.

The method can include a step of audio muting the audio input signal. This step includes an act of measuring an amplitude value of the audio input signal and an act of treating the audio input signal to produce an output signal if a measurement data value of the amplitude value exceeds a pre-determined noise data value.
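The audio muting step can be sketched as a simple noise gate. This is an illustrative sketch, not the patent's implementation; the threshold value and the per-block peak measurement are assumptions for illustration.

```python
# Illustrative noise gate sketch for the audio muting step: a block of
# samples passes through only if its peak amplitude exceeds a
# pre-determined noise threshold; otherwise the block is muted.
def noise_gate(samples, noise_threshold=0.05):
    peak = max(abs(s) for s in samples)
    if peak > noise_threshold:
        return list(samples)        # signal present: pass the block on
    return [0.0] * len(samples)     # below the noise floor: mute

print(noise_gate([0.2, -0.3, 0.1]))    # passes: [0.2, -0.3, 0.1]
print(noise_gate([0.01, -0.02, 0.0]))  # muted:  [0.0, 0.0, 0.0]
```

In practice the threshold would itself be one of the parameter data values that the sound engineer sets per channel.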
The method can include a step of feedback elimination of the audio input signal. This step includes an act of measuring a frequency spectrum of the audio input signal and an act of treating the audio input signal to produce an output signal if a measurement data value of the frequency spectrum exceeds a pre-determined signal feedback data value. The treating reduces an amplitude value of the output signal to a pre-determined operating data value.
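The feedback elimination step can be sketched on a measured spectrum. This is an illustrative sketch under stated assumptions: the dictionary of band magnitudes stands in for a real spectrum measurement (for example an FFT), and the threshold and operating level values are hypothetical.

```python
# Illustrative feedback elimination sketch: each measured frequency band
# magnitude is compared against a pre-determined feedback threshold, and
# offending bands are reduced to a pre-determined operating level.
def eliminate_feedback(spectrum, feedback_threshold, operating_level):
    """spectrum: dict of band centre frequency (Hz) -> magnitude."""
    return {
        band: (operating_level if mag > feedback_threshold else mag)
        for band, mag in spectrum.items()
    }

spectrum = {250: 0.3, 2000: 0.95, 8000: 0.2}   # the 2 kHz band is ringing
out = eliminate_feedback(spectrum, feedback_threshold=0.8, operating_level=0.4)
print(out)  # {250: 0.3, 2000: 0.4, 8000: 0.2}
```

Only the band that exceeds the feedback threshold is pulled down; the rest of the spectrum passes unchanged.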
The method can include a step of equalising the audio input signal. This step includes an act of selecting a frequency band of components of the audio input signal and an act of adjusting an amplitude data value, frequency data value, and bandwidth data value of the audio input signal components within the frequency band. The method can include a step of undoing parameter data value changes of the audio input signal. The step includes an act of retrieving at least one previous parameter data value of the audio input signal and an act of applying the at least one previous parameter data value.
The method can include a step of redoing parameter data value changes of the audio input signal. This step includes an act of storing at least one pre-determined parameter data value adjustment and an act of applying the at least one pre-determined parameter data value adjustment.
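The undoing and redoing steps above follow the generic two-stack history pattern, sketched below. This is an illustrative sketch, not the patent's specific implementation; the class name `ParameterHistory` is an assumption.

```python
# Illustrative undo/redo sketch for a parameter data value: previous
# values are retrieved from an undo stack; undone values move to a redo
# stack so the change can be re-applied.
class ParameterHistory:
    def __init__(self, value):
        self.value = value
        self._undo, self._redo = [], []

    def set(self, value):
        self._undo.append(self.value)
        self._redo.clear()          # a new change invalidates redo history
        self.value = value

    def undo(self):
        if self._undo:
            self._redo.append(self.value)
            self.value = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.value)
            self.value = self._redo.pop()

fader = ParameterHistory(0.0)       # e.g. a channel fader level in dB
fader.set(-3.0)
fader.set(-6.0)
fader.undo()
print(fader.value)  # -3.0
fader.redo()
print(fader.value)  # -6.0
```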
The method can include a step of producing reverb of the audio input signal. This step includes an act of mixing a first audio output signal data value of the audio input signal and a second audio output signal data value to form a composite output signal data value. The second audio output signal data value comprises the first audio output signal data value with a time delay.
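The reverb step can be sketched by mixing the signal with a delayed, attenuated copy of itself, as the paragraph above describes. The delay length and wet level below are illustrative assumptions.

```python
# Illustrative reverb sketch: the composite output mixes the dry signal
# with a time-delayed, attenuated copy (a single-tap echo).
def simple_reverb(samples, delay=2, wet=0.5):
    out = list(samples)             # first signal: the dry copy
    for i, s in enumerate(samples):
        j = i + delay               # second signal: same samples, delayed
        if j < len(out):
            out[j] += s * wet
    return out

# A single impulse produces the original sample plus its delayed echo:
print(simple_reverb([1.0, 0.0, 0.0, 0.0]))  # [1.0, 0.0, 0.5, 0.0]
```

A fuller reverb would sum many such delayed copies with decaying levels, but the single tap already shows the mixing structure the step describes.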
The application provides a method for operating an audio processing device. The method includes a method of providing a stage view and a method of treating signals.
The stage view method comprises a step of receiving at least one performance area device image data value from a user. At least one performance area device icon is then displayed on the display device according to the at least one performance area device image data value. The at least one performance area device image data value provides at least one performance area device icon image that corresponds with at least one performance area device image.

At least one performance area device positional data value is later received from the user. The at least one performance area device icon is afterward displayed on the touch screen according to the at least one performance area device positional data value. The at least one performance area device positional data value provides at least one performance area device icon position on the touch screen that corresponds with at least one performance area device position on the performance area.

The signal treatment method includes a step of receiving at least one signal parameter data value. A signal-processing program and the at least one signal parameter data value are later stored. At least one input signal from at least one performance area device is then received. The at least one input signal is then treated according to the at least one signal parameter data value to produce at least one output signal for sending to at least one further performance area device.
The signal parameter data can be used for signal compression, signal attenuation, or signal magnification.

The method can include a step of displaying an image or a picture of a performance area or a stage on the display device. The performance area image enables a user to locate the icon of the performance area device easily.

The act of treating the input signal can include an act of mixing or combining two or more audio signals from the two or more devices of the performance area to form the output signal. Other acts of treating the input are also possible. In one example, the act of treating includes a step of altering parameter values of the input signal.
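The compression, attenuation, and magnification treatments mentioned above can be sketched as follows. This is an illustrative sketch; the threshold, ratio, and gain values are assumptions, and the compressor is a deliberately simplified per-sample model.

```python
# Illustrative treatment sketch: magnification or attenuation is a gain
# multiply; a very simple compressor reduces the part of a sample above a
# threshold by a ratio.
def apply_gain(samples, gain):
    """gain > 1 magnifies the signal, gain < 1 attenuates it."""
    return [s * gain for s in samples]

def compress(samples, threshold=1.0, ratio=2.0):
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

print(apply_gain([0.1, -0.2], 2.0))              # [0.2, -0.4]
print(compress([1.5, -0.5], threshold=1.0))      # [1.25, -0.5]
```

Real compressors work on an envelope with attack and release times rather than individual samples; the per-sample version is kept only to show where the parameter data values enter the treatment.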
The application provides an audio module. The audio module provides a reliable processing of audio signals. The audio module includes a first digital audio processing unit, a second digital audio processing unit, a first audio device, and a second audio device.
The first audio device is used for sending a first audio signal to the first digital audio processing unit and to the second digital audio processing unit.
The first digital audio processing unit then treats or processes the first audio signal to produce a first intermediate audio signal while the second digital audio processing unit treats the first audio signal to produce a second intermediate audio signal.
The second audio device later receives one of the first intermediate audio signal from the first digital audio processing unit and the second intermediate audio signal from the second digital audio processing unit to produce an audio output signal.
This structure allows the second audio device to function when one of the first digital audio processing unit and the second digital audio processing unit fails or malfunctions.
The first or the second digital audio processing unit can include one of the above audio processing devices.

The application provides a method of using a digital audio processing device. The method provides a sub-mix function. A sound engineer is often required to produce a main mix of sound signals and an auxiliary mix of sound signals. The main mix of sound signals can be used for the main loudspeakers while the auxiliary mix of sound signals can be used for certain persons, like a guitarist of a band.
The method includes a step of receiving two or more audio input signals from two or more corresponding main performance area devices. The performance area devices can refer to a musical instrument, like a piano, or to an audio device, like a microphone.
After this, a user provides one or more main mix parameter data values. The audio input signals are then mixed or are then combined to form one or more composite main audio output signals according to the main mix parameter data values.
One or more auxiliary mix parameter data values are later derived from the main mix parameter data values. Later, the audio input signals are mixed to form one or more auxiliary audio output signals according to the auxiliary mix parameter data values.
Since the auxiliary mix parameter data values are derived from the main mix parameter data values, they can be produced quickly in an automated way using a computer, allowing the sound engineer to focus on other matters.
The step of deriving the auxiliary parameter data values can include a step of duplicating or copying the auxiliary parameter data values from the main parameter data values. Most auxiliary parameter data values are the same as or similar to the main parameter data values. A sound engineer would just need to adjust some appropriate auxiliary parameter data values to make the auxiliary parameter data values suitable for use.
The method can include a step of receiving an adjustment data value from a user. The main parameter data value is then adjusted according to the adjustment data value. The auxiliary parameter data value is also adjusted according to the adjustment data value. In other words, adjustment of the main parameter data value also causes the auxiliary parameter data value to be adjusted.
Thus, a sound engineer making changes to a main sound setting would also result in the auxiliary sound setting being changed in a similar manner. The sound engineer does not need to apply the same change separately for the auxiliary sound setting. This saves time and effort for the sound engineer, allowing the sound engineer to concentrate on other matters.
The adjustment of the auxiliary parameter data value can be done such that the main parameter data value and the auxiliary parameter data value have a pre-determined offset or difference.
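The sub-mix derivation described above can be sketched with linked parameters, where the auxiliary levels are computed from the main levels with a fixed offset so that main-mix adjustments propagate automatically. This is an illustrative sketch; the class name `LinkedMix`, the dB representation, and the offset value are assumptions.

```python
# Illustrative sub-mix sketch: auxiliary mix levels are derived from the
# main mix levels with a pre-determined offset, so adjusting the main mix
# automatically adjusts the auxiliary mix as well.
class LinkedMix:
    def __init__(self, main_levels, aux_offset=-6.0):
        self.main = dict(main_levels)   # channel -> level in dB
        self.aux_offset = aux_offset    # pre-determined main/aux difference

    @property
    def aux(self):
        # Derived on demand: no auxiliary mix has to be built by hand.
        return {ch: lvl + self.aux_offset for ch, lvl in self.main.items()}

    def adjust(self, channel, delta):
        self.main[channel] += delta     # the auxiliary mix follows

mix = LinkedMix({"guitar": 0.0, "vocal": -3.0})
print(mix.aux)              # {'guitar': -6.0, 'vocal': -9.0}
mix.adjust("guitar", +2.0)
print(mix.aux["guitar"])    # -4.0
```

A sound engineer would then only override the few auxiliary channels that need to differ, instead of maintaining two independent mixes.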
Although the above description contains much specificity, this should not be construed as limiting the scope of the embodiments but merely providing illustrations of the foreseeable embodiments. The above stated advantages of the embodiments should not be construed especially as limiting the scope of the embodiments but merely to explain possible achievements if the described embodiments are put into practise. Thus, the scope of the embodiments should be determined by the claims and their equivalents, rather than by the examples given.
Reference
10 stage
12 sound mixing system
14 object
15 object
16 object
17 object
20 wireless communication means
21 wired communication means
22 touch screen
23 touch screen device
24 stage block
25 image
26 button
27 object icon
28 object icon
29 object icon
30 object icon
33 graph
35 slider icon
36 slider button
42 processing module
43 memory module
44 display module
45 network communication module
46 wireless transceiver
47 wired transceiver
48 input module
50 ADC module
51 DAC module
52 flow chart
53 step
step
step
group
group
object icon
object icon
object icon
object icon
stage block port icon
stage block port icon
stage block port icon
stage block port icon
memory data table
column
column
column
column
flow chart
step
step
window
table
column
column
column
flow chart
step
table 92
column 93
flow chart
step
step
step
120 flow chart
121 step
123 step
125 step
130 flow chart
132 step
135 step
137 step
183 screen view
185 input channel icon
186 input channel icon
187 input channel icon
188 input channel icon
190 volume slider icon
191 slider button
192 output channel button icon
193 output channel button icon
194 output channel button icon
195 table
196 column
197 column
198 column
199 column
201 row
202 row
203 row
204 row
210 screen view
255 stage block
256 stage block
259 guitar
260 guitar monitor speaker
263 stage left loudspeaker
264 stage right loudspeaker
266 display monitor
270 first stage block
271 second stage block
272 third stage block
275 fourth stage block
276 master stage block
278 display unit
280 right loudspeaker
282 left loudspeaker
284 microphone
285 guitar
370 master stage block
373 first stage block
374 second stage block
375 third stage block
376 fourth stage block
377 display monitor
380 left loudspeaker
381 right loudspeaker
383 guitar
384 microphone
390 first stage block
392 second stage block
394 display unit
395 first microphone
396 loudspeaker
398 second microphone
400 curtain actuator
Claims
1. A digital audio processing device for a performance area comprising
at least one touch screen device, the touch screen device comprising a touch screen for receiving at least one audio signal parameter data value from a user and at least one audio processing block, the audio processing block connecting to the touch screen device, the audio processing block comprising
a memory unit for storing an audio signal- processing program and the at least one audio signal parameter data value,
a communication module for receiving at least two audio input signals from at least two performance area devices, and
a processing device for performing instructions of the audio signal-processing program to mix the at least two audio input signals according to the at least one audio signal parameter data value to produce at least one audio output signal for sending to at least one further performance area device, wherein
- the processing device is further provided
for performing instructions of the audio signal-processing program
to display a performance area image on the touch screen and
to display at least two performance area device icons on the touch screen and
- the touch screen is further provided for receiving at least two performance area device icon image data values from the user, wherein the at least two performance area device icon image data values are provided for selecting at least two performance area device icon images that correspond to the at least two performance area device images and
for receiving at least two performance area device positional data values from the user, wherein the at least two performance area device positional data values are provided for positioning the at least two performance area device icons on the touch screen.
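As a hypothetical, non-claimed illustration of the data values claim 1 describes, each performance area device icon can be modeled as a selected icon image plus a position on the displayed performance area (all names and values below are invented):

```python
from dataclasses import dataclass

@dataclass
class DeviceIcon:
    """One performance area device icon as received from the user."""
    device_name: str   # e.g. "guitar", "microphone" (illustrative labels)
    image_id: int      # the icon image data value selecting the icon image
    x: float           # positional data values: normalized touch screen
    y: float           # coordinates of the icon on the performance area image

# Two icons placed on the performance area image, as a user might do
stage_icons = [
    DeviceIcon("microphone", image_id=3, x=0.25, y=0.60),
    DeviceIcon("guitar", image_id=7, x=0.70, y=0.40),
]
```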
2. The digital audio processing device according to claim 1, wherein
the touch screen device comprises at least one button.
3. The digital audio processing device according to claim 1 or 2, wherein
the performance area device comprises a musical instrument.
4. The digital audio processing device according to claim 1 or 2, wherein
the performance area device comprises an audio device.
5. The digital audio processing device according to one of claims 1 to 4 further comprises
a network module for connecting the touch screen device to the audio processing block.
6. An audio processing device for a performance area comprising
at least one display terminal, the display terminal comprising
- an input device for receiving at least one signal parameter data value from a user and a display device, and
at least one processing block, the processing block connecting to the display terminal, the processing block comprising
a memory unit for storing a signal-processing program and the at least one signal parameter data value,
a communication module for receiving at least one input signal from at least one performance area device, and
a processing device for performing instructions of the signal-processing program to treat the at least one input signal according to the at least one signal parameter data value to produce at least one output signal for sending to at least one further performance area device,
wherein
- the processing device is further provided
for performing instructions of the signal-processing program to display at least one performance area device icon on the touch screen and
- the touch screen is further provided
for receiving at least one performance area device icon image data value from the user, wherein the at least one performance area device icon image data value is provided for selecting at least one performance area device icon image that corresponds to the at least one performance area device image and
- for receiving at least one performance area device positional data value from the user, wherein the at least one performance area device positional data value is provided for positioning the at least one performance area device icon on the touch screen.
7. The audio processing device according to claim 6,
wherein
the processing device is further provided for performing the instructions of the signal-processing program to display a performance area image on the display device.
8. The audio processing device according to claim 6 or 7, wherein
the input signal comprises an audio signal.
9. The audio processing device according to one of claims 6 to 8, wherein
the display device comprises a touch screen.
10. The audio processing device according to one of claims 6 to 9, wherein
the input device comprises a touch screen.
11. A method for operating a digital audio processing device, the method comprising
displaying a performance area image on a touch screen,
receiving at least two performance area device image data values from a user,
displaying at least two performance area device icons on the performance area image on the touch screen according to the at least two performance area device image data values,
wherein the at least two performance area device image data values provide at least two performance area device icon images that correspond with at least two performance area device images,
receiving at least two performance area device positional data values from the user,
displaying the at least two performance area device icons on the performance area image on the touch screen according to the at least two performance area device positional data values,
wherein the at least two performance area device positional data values provide at least two performance area device icon positions on the performance area image of the touch screen that correspond with at least two performance area device positions on the performance area,
receiving at least one audio signal parameter data value from the user,
receiving at least two audio input signals from at least two performance area devices, and
mixing the at least two audio input signals according to the at least one audio signal parameter data value to produce at least one audio output signal for sending to at least one further performance area device.
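The mixing step above can be sketched as a gain-weighted sum; this is an illustrative Python sketch under the assumption that the audio signal parameter data values are per-channel gains, not the claimed implementation:

```python
def mix(inputs, gains):
    """Mix equal-length sample sequences as a gain-weighted sum.

    `inputs` holds one sample list per performance area device;
    `gains` holds the corresponding signal parameter data values.
    """
    length = len(inputs[0])
    out = [0.0] * length
    for signal, gain in zip(inputs, gains):
        for i in range(length):
            out[i] += gain * signal[i]
    return out

# Mixing two input signals (e.g. guitar and microphone) at equal gain
mixed = mix([[1.0, 0.5], [0.25, -0.75]], gains=[0.5, 0.5])
```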
12. The method according to claim 11 further comprising
associating a pre-determined set of audio signal parameter data values to the performance area device icon.
13. The method according to claim 11 or 12 further comprising
providing an indication of the input signal from the performance area device on the touch screen.
14. The method according to one of claims 11 to 13 further comprising
adjusting the performance area device icon position by actuating the touch screen.
15. The method according to one of claims 10 to 14 further comprising
audio muting the audio input signal, which comprises measuring an amplitude value of the audio input signal and
treating the audio input signal to produce an output signal if a measurement data value of the amplitude exceeds a pre-determined noise data value.
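The audio muting step of claim 15 resembles a noise gate. A minimal illustrative sketch (invented names; peak amplitude used as the measurement data value), not the claimed implementation:

```python
def noise_gate(samples, noise_threshold):
    """Pass a block of samples through only if its peak amplitude
    exceeds the pre-determined noise data value; otherwise mute it."""
    peak = max(abs(s) for s in samples)
    if peak > noise_threshold:
        return list(samples)          # amplitude above threshold: treat (pass)
    return [0.0] * len(samples)       # below threshold: output silence
```

Production gates would add attack/release smoothing to avoid clicks; this sketch only shows the threshold decision.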
16. The method according to one of claims 10 to 15 further comprising
feedback eliminating the audio input signal, which comprises
measuring a frequency spectrum of the audio input signal and
treating the audio input signal to produce an output signal if a measurement data value of the frequency spectrum exceeds a pre-determined signal feedback data value, wherein the treating reduces an amplitude value of the output signal to a pre-determined operating data value.
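As a simplified, non-claimed sketch of feedback elimination: a naive DFT is scanned for a bin whose magnitude exceeds the feedback data value (a ringing tone), and the block is then attenuated. Real feedback eliminators typically apply a narrow notch filter at the detected frequency instead of scaling the whole block; all names here are invented:

```python
import cmath

def suppress_feedback(samples, feedback_threshold, operating_gain):
    """Attenuate the block if any DFT bin magnitude exceeds the
    pre-determined signal feedback data value."""
    n = len(samples)
    for k in range(n):
        # Naive O(n^2) DFT bin; illustrative only
        bin_value = sum(
            samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
            for t in range(n)
        )
        if abs(bin_value) > feedback_threshold:
            # Reduce the amplitude to the operating level
            return [s * operating_gain for s in samples]
    return list(samples)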
17. The method according to one of claims 10 to 16 further comprising
equalising the audio input signal, which comprises
selecting a frequency band of components of the audio input signal and
adjusting an amplitude data value, frequency data value, and bandwidth data value of the audio input signal components within the frequency band.
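The equalising step can be illustrated with a naive frequency-domain sketch: scale the DFT bins inside the selected band, then inverse-transform. Real equalisers use IIR filters (e.g. peaking biquads); this invented example only demonstrates the band-selection and amplitude-adjustment idea:

```python
import cmath

def equalise_band(samples, low_bin, high_bin, gain):
    """Scale the spectral components inside [low_bin, high_bin] by `gain`."""
    n = len(samples)
    # Forward DFT (naive O(n^2), illustrative only)
    spectrum = [
        sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]
    for k in range(n):
        # min(k, n - k) maps negative-frequency bins onto the same band,
        # so the output stays real
        if low_bin <= min(k, n - k) <= high_bin:
            spectrum[k] *= gain
    # Inverse DFT
    return [
        (sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
             for k in range(n)) / n).real
        for t in range(n)
    ]
```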
18. The method according to one of claims 10 to 17 further comprising
undoing parameter data value changes of the audio input signal, which comprises
retrieving at least one previous parameter data value of the audio input signal and
applying the at least one previous parameter data value.
19. The method according to one of claims 10 to 18 further comprising
redoing parameter data value changes of the audio input signal, which comprises
storing at least one pre-determined parameter data value adjustment and
- applying the at least one pre-determined parameter data value adjustment.
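The undoing and redoing of claims 18 and 19 can be sketched with two history stacks; a minimal illustrative class (invented names, one parameter data value tracked), not the claimed implementation:

```python
class ParameterHistory:
    """Minimal undo/redo history for a single parameter data value."""

    def __init__(self, value):
        self.value = value
        self._undo = []   # previous parameter data values
        self._redo = []   # values available for re-application

    def set(self, new_value):
        self._undo.append(self.value)
        self._redo.clear()            # a fresh change invalidates redo
        self.value = new_value

    def undo(self):
        if self._undo:
            self._redo.append(self.value)
            self.value = self._undo.pop()   # retrieve previous value

    def redo(self):
        if self._redo:
            self._undo.append(self.value)
            self.value = self._redo.pop()   # re-apply stored adjustment
```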
20. The method according to one of claims 10 to 19 further comprising
producing reverb of the audio input signal, which comprises
mixing a first audio output signal data value of the audio input signal and a second audio output signal data value to form a composite output signal data value, wherein
the second audio output signal data value comprises the first audio output signal data value with a time delay.
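The composite signal of claim 20 (dry signal plus a time-delayed copy) can be sketched as follows; an illustrative single-tap delay (invented names), whereas real reverbs combine many such delayed, attenuated copies:

```python
def simple_reverb(samples, delay, wet_gain):
    """Mix the signal with a delayed, attenuated copy of itself.

    `delay` is in samples; `wet_gain` scales the delayed copy.
    """
    out = list(samples) + [0.0] * delay   # room for the delayed tail
    for i, s in enumerate(samples):
        out[i + delay] += wet_gain * s    # second signal = first with time delay
    return out
```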
21. A method for operating an audio processing device, the method comprising
receiving at least one performance area device image data value from a user,
displaying at least one performance area device icon on the display device according to the at least one performance area device image data value,
wherein the at least one performance area device image data value provides at least one performance area device icon image that corresponds with at least one performance area device image,
receiving at least one performance area device positional data value from the user,
displaying the at least one performance area device icon on the touch screen according to the at least one performance area device positional data value,
wherein the at least one performance area device positional data value provides at least one performance area device icon position on the touch screen that corresponds with at least one performance area device position on the performance area,
receiving at least one signal parameter data value and
storing a signal-processing program and the at least one signal parameter data value,
receiving at least one input signal from at least one performance area device, and
treating the at least one input signal according to the at least one signal parameter data value to produce at least one output signal for sending to at least one further performance area device.
22. The method according to claim 21 further comprises
displaying a performance area image on the display device.
23. The method according to claim 21 or 22, wherein
the treating of the at least one input signal comprises mixing at least two audio signals from at least two performance area devices to form the at least one output signal.
24. An audio module comprising
a first digital audio processing unit,
a second digital audio processing unit,
a first audio device for sending a first signal to the first digital audio processing unit and to the second digital audio processing unit, and
a second audio device,
wherein the first digital audio processing unit treats the first signal to produce a first immediate signal and the second digital audio processing unit treats the first signal to produce a second immediate signal, and wherein the second audio device receives one of the first immediate signal or the second immediate signal to produce an output signal.
25. The audio module according to claim 24, wherein the first digital audio processing unit comprises a digital audio processing device according to one of claims 1 to 10.
26. The audio module according to claim 24 or 25, wherein the second digital audio processing unit comprises a digital audio processing device according to one of claims 1 to 10.
27. A method of using a digital audio processing device, the method comprising
receiving at least two audio input signals from at least two main performance area devices,
providing at least one main parameter data value,
mixing the at least two audio input signals to form at least one main audio output signal according to the at least one main parameter data value,
deriving at least one auxiliary parameter data value from the at least one main parameter data value, and
mixing the at least two audio input signals to form at least one auxiliary audio output signal according to the at least one auxiliary parameter data value.
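The main/auxiliary mixing of claim 27 can be sketched by deriving the auxiliary gains from the main ones and mixing the same inputs twice. An illustrative sketch (invented names; the derivation is assumed here to be a simple scaling), not the claimed implementation:

```python
def mix_main_and_aux(inputs, main_gains, aux_scale):
    """Mix the same input signals twice: once with the main parameter
    data values and once with auxiliary values derived from them."""
    # Derive the auxiliary parameter data values from the main ones
    aux_gains = [g * aux_scale for g in main_gains]

    def mix(gains):
        length = len(inputs[0])
        return [sum(g * sig[i] for g, sig in zip(gains, inputs))
                for i in range(length)]

    return mix(main_gains), mix(aux_gains)

# Main mix for the front-of-house output, auxiliary mix (at half gain)
# for e.g. a stage monitor
main_out, aux_out = mix_main_and_aux([[1.0], [2.0]], [0.5, 0.25], 0.5)
```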
28. The method according to claim 27, wherein
the deriving of the at least one auxiliary parameter data value comprises
duplicating the at least one auxiliary parameter data value from the at least one main parameter data value.
29. The method according to claim 27 or 28 further comprising
receiving an adjustment data value from a user,
adjusting the main parameter data value by the adjustment data value, and
adjusting the auxiliary parameter data value by the adjustment data value.
30. The method according to claim 29, wherein
the adjusting of the auxiliary parameter data value is done such that the main parameter data value and the auxiliary parameter data value comprise a pre-determined difference.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| SG201001892-7 | 2010-03-18 | ||
| SG201001892 | 2010-03-18 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2011114310A2 true WO2011114310A2 (en) | 2011-09-22 |
| WO2011114310A3 WO2011114310A3 (en) | 2012-03-01 |
Family
ID=44649675
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2011/051135 Ceased WO2011114310A2 (en) | 2010-03-18 | 2011-03-18 | Digital sound mixing system with graphical controls |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2011114310A2 (en) |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7742609B2 (en) * | 2002-04-08 | 2010-06-22 | Gibson Guitar Corp. | Live performance audio mixing system with simplified user interface |
| DE102005043641A1 (en) * | 2005-05-04 | 2006-11-09 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for generating and processing sound effects in spatial sound reproduction systems by means of a graphical user interface |
| KR100669034B1 (en) * | 2005-05-20 | 2007-01-15 | 주식회사 엔터기술 | Digital audio playback device and playback method using same |
| US7698009B2 (en) * | 2005-10-27 | 2010-04-13 | Avid Technology, Inc. | Control surface with a touchscreen for editing surround sound |
- 2011-03-18 WO PCT/IB2011/051135 patent/WO2011114310A2/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| WO2011114310A3 (en) | 2012-03-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11451913B2 (en) | Digital audio communication and control in a live performance venue | |
| US7099483B2 (en) | Sound control system, sound control device, electronic device, and method for controlling sound | |
| US20190013886A1 (en) | Operation panel structure and control method and control apparatus for mixing system | |
| KR101823437B1 (en) | Integration control system of digital video and audio | |
| US11347468B2 (en) | Sound volume operation device | |
| CN102724009A (en) | System for controlling a mixer via external controller | |
| US20130322654A1 (en) | Audio signal processing device and program | |
| CN100589349C (en) | Hybrid system control method and hybrid system control device | |
| CN109346048B (en) | Karaoke sound effect processing device and sound effect processing system | |
| WO2011114310A2 (en) | Digital sound mixing system with graphical controls | |
| KR102266560B1 (en) | Apparatus for multichannel audio mixing based on wireless communication and method thereof | |
| US10270551B2 (en) | Mixing console with solo output | |
| WO2015024675A1 (en) | Audio system and foot operated control surface device | |
| US10083680B2 (en) | Mixing console | |
| US11653132B2 (en) | Audio signal processing method and audio signal processing apparatus | |
| GB2540157A (en) | A mixing console | |
| JP3928570B2 (en) | Acoustic control system | |
| JP2004207826A (en) | Control method of mixing system, and control apparatus and program for mixing system | |
| US20150261855A1 (en) | Audio system and audio signal processing device | |
| WO2020020888A1 (en) | Communication system for musicians | |
| WO2024241584A1 (en) | Sound signal processing device and sound signal processing program | |
| JP4895921B2 (en) | Audio signal processing system | |
| JP2011139381A (en) | Audio control console | |
| Izhaki | Mixing consoles | |
| JP2000217199A (en) | Sound controller, sound processor and mixing system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11755777 Country of ref document: EP Kind code of ref document: A2 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 11755777 Country of ref document: EP Kind code of ref document: A2 |