
WO2025012857A1 - Musical instrument that digitizes and processes signals, and synthesizes sounds and related methods - Google Patents


Info

Publication number
WO2025012857A1
Authority
WO
WIPO (PCT)
Prior art keywords
string
signals
instrument
sensors
musical instrument
Prior art date
Legal status
Pending
Application number
PCT/IB2024/056771
Other languages
French (fr)
Inventor
Mariano Camilo Gonzalez Lebrero
Esteban Eduardo Mocskos
Lucas Tomas Rubinstein
Current Assignee
Consejo Nacional de Investigaciones Cientificas y Tecnicas CONICET
Universidad de Buenos Aires
Original Assignee
Consejo Nacional de Investigaciones Cientificas y Tecnicas CONICET
Universidad de Buenos Aires
Priority date
Filing date
Publication date
Application filed by Consejo Nacional de Investigaciones Cientificas y Tecnicas CONICET and Universidad de Buenos Aires
Publication of WO2025012857A1
Legal status: Pending

Links

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/32 - Constructional details
    • G10H1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
    • G10H1/342 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments for guitar-like instruments with or without strings and with a neck on which switches or string-fret contacts are used to detect the notes being played
    • G10H1/02 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
    • G10H1/04 - Means for controlling the tone frequencies by additional modulation
    • G10H1/053 - Means for controlling the tone frequencies by additional modulation during execution only
    • G10H1/055 - Means for controlling the tone frequencies by additional modulation during execution only by switches with variable impedance elements
    • G10H1/0551 - Means for controlling the tone frequencies by additional modulation during execution only by switches with variable impedance elements using variable capacitors
    • G10H3/00 - Instruments in which the tones are generated by electromechanical means
    • G10H3/12 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument
    • G10H3/14 - Instruments as above using mechanically actuated vibrators with pick-up means
    • G10H3/18 - Instruments as above using mechanically actuated vibrators with pick-up means using a string, e.g. electric guitar
    • G10H3/186 - Means for processing the signal picked up from the strings
    • G10H3/188 - Means for processing the signal picked up from the strings for converting the signal to digital format
    • G10H5/00 - Instruments in which the tones are generated by means of electronic generators
    • G10H5/007 - Real-time simulation of G10B, G10C, G10D-type instruments using recursive or non-linear techniques, e.g. waveguide networks, recursive algorithms
    • G10H2220/00 - Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/155 - User input interfaces for electrophonic musical instruments
    • G10H2220/265 - Key design details; Special characteristics of individual keys of a keyboard; Key-like musical input devices, e.g. finger sensors, pedals, potentiometers, selectors
    • G10H2220/275 - Switching mechanism or sensor details of individual keys, e.g. details of key contacts, hall effect or piezoelectric sensors used for key position or movement sensing purposes; Mounting thereof
    • G10H2220/295 - Switch matrix, e.g. contact array common to several keys, the actuated keys being identified by the rows and columns in contact
    • G10H2220/301 - Fret-like switch array arrangements for guitar necks
    • G10H2220/395 - Acceleration sensing or accelerometer use, e.g. 3D movement computation by integration of accelerometer data, angle sensing with respect to the vertical, i.e. gravity sensing

Definitions

  • the present invention relates to musical instruments. More particularly, the present invention relates to a digital electronic musical instrument comprising a synthesizer based on a physical simulation, a method for digitizing and processing signals generated continuously by said instrument, and a method for synthesizing sounds by means of the digital instrument.
  • One of the most common types of digital musical systems or instruments are MIDI-based systems such as those described in patent applications US 8093482 B1, US 2011/239848 A1, and US 2022/208160 A1. Briefly, these systems use a processor that receives signals emitted by sensors and generates an output signal in MIDI format. However, these systems make use of discrete signals that are usually limited to the identity of the note played, its intensity and, occasionally, some modulated parameter. As a result, these systems are incapable of executing and interpreting the more subtle techniques and gestures that result from a performer's interpretation.
  • the present invention provides a musical instrument that digitizes and processes signals to accurately replicate the actions of a performer, in such a way that the subtle techniques and gestures resulting from the performer's interpretation can be performed and interpreted; more particularly, such subtle techniques and gestures are peculiar to a stringed musical instrument.
  • Another aspect of the present invention is a method for digitizing and processing such actions.
  • Another aspect of the present invention is a method for synthesizing sounds from the digitized and processed signals.
  • the present invention relates to a musical instrument that digitizes and processes the actions of a performer, wherein said instrument comprises two parts, wherein one of the parts digitizes the action of the non-deft hand of the performer and the other part digitizes the action of the deft hand of the performer.
  • the instrument comprises a body and a neck, wherein the body of the instrument is the part of the musical instrument that digitizes the action of the deft hand of the performer while the neck of the instrument is the part of the musical instrument that digitizes the action of the non-deft hand of the performer.
  • One aspect of the present invention relates to a musical instrument that digitizes and continuously processes analog signals, digital signals, or both, produced by said instrument, and wherein said instrument comprises
  • a body comprising a container and a lid, thus defining an inner volume
  • said body further comprises: at least one string located on the outer surface of the lid and extending along the lid, wherein said at least one string is a metal string; two string damping media, wherein the first medium is located proximate to one end of the at least one string and the second medium is located proximate to the opposite end of the at least one string, and wherein both media are in contact with the at least one string; at least one spring located on the outer surface of the lid, wherein said at least one spring is linked to the corresponding at least one string; at least one microphone located on the outer surface of the lid, below the corresponding at least one string and without contact with the string; at least one capacitive sensing integrated circuit, wherein said circuit is located within the inner volume and is connected with the at least one string; and at least one analog-to-digital converter, or A/D converter, located within the inner volume,
  • the modulator is integrated into an inertial measurement unit (IMU) located on the fretboard and connected to the microcontroller, wherein the at least one microphone, the at least one capacitive sensing integrated circuit, the at least one A/D converter, the at least one spring, and the at least one row of sensors each correspond to one of the strings of the at least one string, in a one-to-one correspondence.
  • the instrument body comprises 1, 2, 3, 4, 5, 6, 7, 8 or more strings, more preferably 4, 5 or 6 strings; still more preferably, the instrument body comprises 6 strings.
  • the length of the at least one string is between 10 and 40 cm, more preferably, between 25 and 35 cm, still more preferably, the length of the string is 30 cm.
  • the damping media comprise a viscoelastic material.
  • said viscoelastic material is viscoelastic foam rubber.
  • the at least one microphone is located below the corresponding at least one string at a distance of 5, 6, 7, 8, 9 or 10 mm, more preferably at a distance of 5 mm.
  • the at least one capacitive sensing integrated circuit is connected to the at least one string, to the plurality of sensors of the fretboard, and to the microcontroller.
  • the at least one A/D converter digitizes the information coming from the at least one microphone with a given sampling frequency, preferably the sampling frequency is between 12 and 192 kHz, preferably the sampling frequency is selected from the group consisting of 12, 24, 48, 96 and 192 kHz, even more preferably the sampling frequency is 96 kHz.
  • the instrument body comprises a control and processing unit (CPU), wherein said unit controls and processes the information coming from at least one A/D converter and the microcontroller.
  • the control and processing unit comprises at least one sound channel.
  • said unit comprises 1, 2, 3, 4, 5, 6, 7, 8 or more sound channels, more preferably 4, 5 or 6 sound channels. Even more preferably, said unit comprises 6 sound channels.
  • the control and processing unit controls and processes the information coming from the at least one A/D converter with a given sampling frequency, preferably between 12 and 192 kHz, preferably selected from the group consisting of 24, 48, 96 and 192 kHz; even more preferably, the sampling frequency is 96 kHz.
  • the control and processing unit models at least one virtual string.
  • the audio output comprises an analog audio output, wherein said analog audio output comprises a digital-to-analog converter (DAC) in communication with an audio signal conditioning circuit, wherein said audio signal conditioning circuit conditions the signal for reproduction.
  • the digital-to-analog converter converts signals with a given sampling frequency, preferably between 12 and 192 kHz, preferably selected from the group consisting of 24, 48, 96 and 192 kHz; even more preferably, the sampling frequency is 48 kHz.
  • the audio signal conditioning circuit is in communication with a reproduction means which may be comprised by the instrument, or may be located outside the instrument, and wherein said reproduction means is a speaker or a headphone, more preferably a speaker.
  • the plurality of sensors comprised on the fretboard printed circuit board comprises capacitive sensors or pressure sensors, more preferably, the plurality of sensors comprises capacitive sensors.
  • the at least one row of sensors comprises at least 6 sensors.
  • said row comprises 6, 7, 8, 9, 10, 11, 12 or more sensors; preferably, the at least one row of sensors comprises 10, 11 or 12 sensors. Even more preferably, the at least one row of sensors comprises 12 sensors.
  • the plurality of sensors of the instrument fretboard comprises 72 sensors, wherein said 72 sensors are distributed in 6 sensor rows, wherein each sensor row comprises 12 sensors.
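As a purely illustrative aid to the sensor arrangement just described, the 72 sensors (6 rows of 12, one row per string) can be addressed either by a flat scan index or by a (string row, fret) pair. The function and constant names below are assumptions for illustration only, not part of the disclosure:

```python
# Illustrative 6 x 12 fretboard sensor grid: map between a flat
# sensor index (0..71) and a (string_row, fret) position.
N_STRINGS = 6
N_FRETS = 12

def sensor_to_position(index):
    """Convert a flat sensor index (0..71) to (string_row, fret)."""
    if not 0 <= index < N_STRINGS * N_FRETS:
        raise ValueError("sensor index out of range")
    return divmod(index, N_FRETS)  # (row 0..5, fret 0..11)

def position_to_sensor(string_row, fret):
    """Convert a (string_row, fret) position back to a flat index."""
    return string_row * N_FRETS + fret
```

Either addressing scheme identifies the same physical sensor; the flat index is convenient for scanning, the pair for mapping onto a virtual string.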
  • the at least one additional modulator of the model is selected from an accelerometer, gyroscope, magnetometer, or a combination thereof, and wherein said modulator is integrated into an inertial measurement unit (IMU).
  • step B) digitize the plurality of analog signals of step A), wherein i) the plurality of signals coming from the at least one string is digitized by means of at least one capacitive sensing integrated circuit, ii) the plurality of signals coming from the at least one microphone is digitized by means of at least one A/D converter with a frequency of between 12 and 192 kHz, iii) the plurality of signals coming from the at least one sensor of the plurality of sensors of the fretboard is digitized by means of at least one capacitive sensing integrated circuit;
  • step D) processing the plurality of digital signals obtained from step C) to obtain a plurality of input signals, wherein said processing comprises i) converting the signals coming from the at least one string into an increase of the friction coefficient, ii) converting the signals coming from at least one microphone into a force, iii) converting the signals coming from the plurality of sensors of the fretboard into a friction, iv) converting the signals coming from the at least one additional modulator of the model into a force, volume, friction, position where a force is applied, or position where the sound is read.
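The conversions of step D) can be sketched as follows. The gain constants and function names here are illustrative assumptions, not values taken from the present disclosure:

```python
# Illustrative sketch of step D: each digitized stream is mapped to a
# physical input variable of the virtual-string model. All scaling
# constants are assumed for illustration only.
def process_signals(string_touch, mic_sample, fret_pressures, imu_accel):
    """Map digitized sensor values to model input variables."""
    inputs = {}
    # i) string contact -> increase of the friction coefficient
    inputs["friction_increase"] = 0.5 * string_touch       # dimensionless
    # ii) microphone sample -> excitation force on the virtual string
    inputs["force"] = 1e-3 * mic_sample                    # assumed gain
    # iii) fretboard sensor pressures -> friction at fret positions
    inputs["fret_friction"] = [0.8 * p for p in fret_pressures]
    # iv) IMU acceleration -> additional model modulation (a force here)
    inputs["modulation_force"] = 1e-4 * imu_accel
    return inputs
```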
  • the method for digitizing and processing signals of the present invention is performed in real time.
  • the threshold value of step Ci) is set with reference to the noise of the digital signals resulting from step A) or B). More preferably, said threshold value is determined as 20% greater than said noise.
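A minimal sketch of this thresholding, assuming the noise level is estimated as the peak amplitude of a silent capture (that estimation method is an assumption for illustration):

```python
# Detection threshold set 20% above the measured noise floor, as
# described above. The noise estimate (peak of a silent capture) is
# an illustrative assumption.
def detection_threshold(noise_samples, margin=0.20):
    """Return a threshold 20% above the observed noise level."""
    noise_level = max(abs(s) for s in noise_samples)
    return noise_level * (1.0 + margin)

def exceeds_threshold(sample, threshold):
    """True when a digitized sample rises above the threshold."""
    return abs(sample) > threshold
```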
  • the method for processing and digitizing signals is carried out independently for each string of the at least one string, for each microphone of the at least one microphone, for each sensor of the plurality of sensors of the fretboard, and for each additional model modulator of the at least one additional model modulator.
  • the position and pressure of the performer's non-deft hand on the circuit board is used to modify the length of the string and consequently the resonant frequency of the at least one virtual string, generating different notes.
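As a hedged illustration of how a detected fret position could shorten the virtual string and thereby raise its resonant frequency, the ideal-string relation f = (1/(2L)) * sqrt(T/mu) can be combined with equal-tempered fretting. All constants below are assumptions, not disclosed values:

```python
import math

def resonant_frequency(length_m, tension_n=60.0, lin_density=0.005):
    """Fundamental frequency of an ideal string of the given length."""
    return math.sqrt(tension_n / lin_density) / (2.0 * length_m)

def fretted_length(open_length_m, fret):
    """Equal-tempered fretting: each fret shortens the string by 2^(1/12)."""
    return open_length_m / (2.0 ** (fret / 12.0))
```

For example, pressing the 12th fret halves the effective length and doubles the fundamental, which is the behavior the fretboard sensors would impose on the virtual string.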
  • the pressure difference can be used to generate sound effects typical of real instruments such as a fully plucked string, harmonics, slurs between two notes, quenching, etc.
  • Another aspect of the present invention is a method for synthesizing sounds comprising the steps of
  • step D) sending the plurality of output signals produced in step C) to an audio output for their reproduction.
  • the method for synthesizing sounds of the present invention is performed in real time.
  • the value of n for the n nodes is between 100 and 500 nodes, preferably 500. Accordingly, the value of (n-1) for the number of springs is between 99 and 499 springs, preferably 499 springs.
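The n-node, (n-1)-spring discretization described above can be sketched as a simple mass-spring simulation of the virtual string. The parameter values and the explicit Euler integrator below are illustrative assumptions, not the disclosed implementation:

```python
# Illustrative mass-spring discretization of a virtual string: n point
# masses coupled by n-1 springs, with a simple air-friction term,
# integrated with explicit Euler. All parameter values are assumed.
def simulate_string(n=500, k=1000.0, mass=1e-3, damping=0.01,
                    dt=1e-5, steps=100, pluck_node=250, pluck=1e-2):
    """Simulate transverse displacement of an n-node virtual string."""
    pos = [0.0] * n          # transverse displacement of each node
    vel = [0.0] * n
    pos[pluck_node] = pluck  # initial pluck displacement
    for _ in range(steps):
        for i in range(1, n - 1):  # end nodes are fixed
            # net spring force from the two neighbouring nodes
            force = k * (pos[i - 1] - 2.0 * pos[i] + pos[i + 1])
            force -= damping * vel[i]          # air-friction term
            vel[i] += (force / mass) * dt
        for i in range(1, n - 1):
            pos[i] += vel[i] * dt
    return pos
```

The input signals of the methods above would enter such a model as forces on nodes, changes of the friction coefficients, or changes of the effective string length.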
  • Figure 1 shows a schematic representation of the instrument of the present invention showing the components and their connections.
  • Figure 2 shows a representative schematic of the modeling of a virtual string as described in the present invention.
  • Figure 3 shows a schematic of a preferred embodiment of the digitizing and processing method of the present invention in combination with a preferred embodiment of the sound generation method of the present invention.
  • Figure 4 shows a block diagram of one embodiment of the musical instrument of the present invention.
  • "deft hand" will be used to refer to both the right hand of a right-handed guitarist and the left hand of a left-handed guitarist.
  • "non-deft hand" will be used to refer to both the left hand of a right-handed guitarist and the right hand of a left-handed guitarist.
  • "deft hand" and "non-deft hand" should not be interpreted in relation to the performer's ability, but only to the spatial arrangement of the hands in relation to the instrument.
  • the term "performer” shall be understood to mean a person who acts upon the instrument for the purpose of generating a music, sound, sound effect, or a combination thereof.
  • action of a performer or “action of the performer” shall be understood to mean any action, including and not limited to movements, direct and indirect contacts, executions, strokes, strumming, techniques and gesticulations, among others, performed upon the instrument by the performer that is detectable by the instrument, i.e. that generates at least one non-null signal by the instrument, for the purpose of generating a music, sound, sound effect, or a combination thereof.
  • the term may refer to a singular action or to a series of consecutive actions.
  • the term "digitizing" refers to any electronic process carried out by any type of converter, wherein a signal or plurality of signals of any type of non-digital nature is converted into a digital signal or into a plurality of digital signals.
  • the term "string", when not accompanied by any adjective or qualifier, will refer to a real, physical string made of any material suitable for use as required by the instrument of the present invention and upon which a performer can physically act in order to perform any of the methods disclosed in the present invention, unless the context clearly indicates otherwise.
  • "real string", "string of the body", "string of the instrument" and "string of the body of the instrument" shall be considered synonymous and shall be used interchangeably.
  • the term "virtual string" will refer to a string resulting from mathematical modeling, in any of the embodiments disclosed in the present specification, executed by means of the software of a control and processing unit, and to which a signal or a plurality of signals coming from the instrument, preferably signals resulting from the actions of a performer, can be applied.
  • the expression "of the model" should be understood as referring to the mathematical model used in the modeling of the virtual string.
  • the terms "body of the instrument" and "part of the musical instrument that digitizes the action of the deft hand of the performer" are to be understood as synonyms and, therefore, will be used interchangeably.
  • "neck of the instrument" and "part of the musical instrument that digitizes the action of the performer's non-deft hand" shall be considered synonymous and used interchangeably.
  • the term "damped string” shall be understood to mean a string whose vibrations subsequent to the initial vibration are attenuated. Accordingly, any process to attenuate the vibrations of a string subsequent to the initial vibration shall be understood as “damp”, “damping” or “the damping” and these terms shall be used interchangeably when they refer to a string that is subjected to such a process.
  • real time when referring to a process or action, shall be understood as meaning that said process or action is completed in an amount of time that is not significantly perceptible to the performer or that said process or action is completed in an amount of time that does not represent an inconvenience to the execution of the performer's actions, and/or the operation of the instrument.
  • top face of the base should be understood as the face on which the performer will execute the actions to be digitized and processed by the instrument.
  • bottom face of the base should be understood as the face or region of the base that is diametrically opposite to the top face.
  • the term "parameter” refers to the at least one virtual string or the modeling of the at least one virtual string, it should be understood as magnitudes modeling physical properties (such as mass, spring hardness, air friction, etc.) of the at least one virtual string or the modeling of the at least one virtual string that determine its properties independently from the signals coming from the musical instrument.
  • variable refers to the at least one virtual string or the modeling of the at least one virtual string, it should be understood as magnitudes modeling physical properties (such as string length, friction coefficients, applied forces, etc.) of the at least one virtual string or the modeling of the at least one virtual string that determine its behavior in a dependently from the signals coming from the musical instrument.
  • the term "input signal”, “input signals” and “plurality of input signals” will be used interchangeably and will refer to those digital signals resulting from the method for digitizing and processing of the present invention in any of its embodiments which are used to calculate or determine the variables of the at least one virtual string in any of its embodiments.
  • output signal For the purposes of the present invention, the term "output signal”, “output signals” and “plurality of output signals” will be used interchangeably and will refer to those digital signals resulting from the application, in any of its embodiments, of the input signals on the at least one virtual string of the present invention in any of its embodiments.
  • One aspect of the present invention is a musical instrument 100 that digitizes and processes signals, preferably, wherein said signals result from the actions of a performer, such that the digitization and processing of the signals allows the techniques and gestures of the performer to be accurately replicated.
  • the techniques and gestures that the instrument replicates comprise legato, tapping, slapping, glissando, vibrato, finger tapping, pick tapping, pick dragging, pizzicato, string quenching, strumming, snapping, harmonic generation, among others.
  • the musical instrument 100 comprises a body 101 and a neck 102, wherein said body 101 and neck 102 are linked such that the neck 102 extends from the body 101 following its longitudinal axis.
  • the body 101 is between 20 and 40 cm long, between 7 and 30 cm wide and between 4 and 10 cm thick.
  • the neck 102 of the instrument is between 20 and 40 cm long, between 4 and 8 cm wide and between 1 and 3 cm thick.
  • the body 101 comprises a container and a lid pivotally connected to the container, thus defining an inner volume.
  • the instrument 100 of the invention comprises a neck 102 comprising a base with a partially or completely flat top face, wherein said top face is the face upon which the performer executes the actions to be digitized and processed by the instrument, and wherein said neck 102 further comprises: a fretboard 107 disposed on at least a planar portion of the top face of the base of the neck 102, comprising a printed circuit board 108 comprising a plurality of sensors 110 of the fretboard, wherein said plurality of sensors 110 is arranged in at least one row 109 of sensors 110, and wherein said circuit board 108 is connected to the at least one capacitive sensing integrated circuit 113 via a wired connection;
  • modulator 111 is integrated into an inertial measurement unit (IMU) located in the fretboard 107 of the neck 102 and connected to the microcontroller 114.
  • the at least one row 109 of sensors 110 is arranged such that it is aligned on the longitudinal axis with its corresponding string of the at least one string 103.
  • the body 101 of the instrument 100 comprises at least one string 103 of a stringed musical instrument, preferably said at least one string 103 is of an acoustic or electric stringed instrument. More preferably, said at least one string is an acoustic guitar string or an electric guitar string.
  • the at least one string 103 is made of metal and is made of a metal selected from steel, nickel, brass, bronze, or any combination thereof. In a particularly preferred embodiment, the at least one string 103 is made of steel and nickel.
  • each of the strings of the at least one string 103 is independently connected to the capacitive sensing integrated circuit 113.
  • the at least one string 103 is a damped string with two damping media 104a and 104b, wherein medium 104a is located proximate to the end of the at least one string 103 most distal from the fretboard and medium 104b is located proximate to the end of the at least one string 103 closest to the fretboard, and wherein both media are in contact with the at least one string 103.
  • "Proximate to the end of the string” will be understood to mean any distance that allows the damping medium to produce the desired damping and reduce vibrations subsequent to the initial vibration of the string. A person skilled in the art will be able to determine the position for the exact location of the damping media. Likewise, a person skilled in the art will be able to determine the optimum size that these damping means can be to produce the desired damping.
  • the inclusion of the viscoelastic material gives the instrument the ability to avoid reflections of the initial signal, thus achieving greater precision in the replication of the performer's intention and avoiding spurious signals, giving the invention the advantage of achieving greater precision in the replication of techniques and gestures performed by the performer.
  • the body 101 of the instrument 100 comprises two supports.
  • said supports are located on the outer surface of the lid and at opposite ends of the at least one string, wherein said two supports comprise a first support which is used as a tensioning point of the at least one string and a second support which is used as a bridge or anchor point.
  • the method of anchoring the at least one string is by attaching the string to the second body support.
  • the body 101 of the instrument 100 comprises two supports, wherein the first support is located at the more distal end relative to the neck 102 of the instrument 100 and the second support is located at the nearer end relative to the neck 102 of the instrument 100.
  • the first support comprises at least one spring 105.
  • said support comprises 1, 2, 3, 4, 5, 6, 7, 8 or more springs 105. More preferably, said support comprises 4, 5 or 6 springs 105. Even more preferably, said support comprises 6 springs 105.
  • Each of the at least one spring 105 is arranged in such a way that a spring is arranged for each of the strings of the at least one string 103.
  • said at least one spring 105 is linked to the at least one string 103 from a single end.
  • all the springs shall be arranged from the same end of each of the strings, so that in the instrument all the springs are located on the first support.
  • the tension of the strings of the instrument is similar to that of the strings of a conventional length instrument despite the fact that the strings of the instrument of the present invention are significantly shorter.
  • the musical instrument 100 comprises at least one polyphonic microphone, more preferably said at least one microphone is a hexaphonic microphone.
  • the body 101 of the instrument 100 comprises a capacitive sensing integrated circuit 113, wherein said circuit 113 is connected to the at least one string 103 such that each of the strings of the at least one string 103 is independently connected.
  • the connection between the at least one string 103 and the capacitive sensing integrated circuit 113 is made via a wired connection.
  • the at least one capacitive sensing integrated circuit 113 is used to detect the direct or indirect contact, for example, through an instrumental performance element, for example, a pick or a bow, of the performer with the at least one string 103. Further, the at least one capacitive sensing integrated circuit 113 receives signals from the at least one string 103, from the plurality of sensors 110 of the fretboard, or both and sends signals to the microcontroller 114.
  • the at least one capacitive sensing integrated circuit 113 comprises a group of capacitive sensing integrated circuits for sensing the quenching of the at least one string 103 and another group of capacitive sensing integrated circuits for sensing the force and position coming from the sensors 110 of the fretboard 107.
  • the instrument body comprises at least one A/D converter 112.
  • the bit depth chosen for the at least one A/D converter 112 is such as to allow sufficient resolution to feed the simulation, preferably, the bit depth is 16 bits.
  • each of the at least one A/D converter 112 is arranged such that an A/D converter 112 is arranged for each of the microphones 106 that sense the strings of the at least one string 103. More particularly, said converters 112 and microphones 106 are connected via a wired connection.
  • the at least one A/D converter 112 digitizes information coming from the at least one microphone 106 at a sampling rate sufficient to capture both the audible components of the performance transient (20 Hz to 20 kHz) and ultrasonic vibrations that may contribute to the final model synthesis result (20 kHz to 48 kHz). In one embodiment, the at least one A/D converter 112 digitizes the information coming from the at least one microphone 106 with a given sampling frequency, preferably between 12 and 192 kHz, more preferably selected from 12, 24, 48, 96 and 192 kHz, even more preferably 96 kHz.
  • This high sampling rate allows the musical instrument 100 of the present invention to replicate in greater detail and resolution the movements performed by the at least one string 103 and to transmit that information for use in the modeling of the at least one virtual string 200, thus allowing a more accurate replication of the movements performed on the real string.
  • an expressiveness is obtained by the musical instrument that is closer to the gesticulations and techniques of the performer than that obtained by other musical instruments in the prior art.
  • the musical instrument 100 features a microcontroller 114.
  • the microcontroller 114 is responsible for controlling, concentrating and distributing to the control and processing unit 115 the at least one digital signal that does not come from the at least one A/D converter 112, for example, the at least one digital signal coming from the at least one additional modulator 111 or from the plurality of capacitive sensors 110.
  • said microcontroller 114 is connected to the capacitive sensing integrated circuit 113 via an I2C connection, or an SPI connection.
  • the microcontroller 114 used is a 32-bit microcontroller with SPI, I2C digital communication ports and digital inputs, model ESP32.
  • the body 101 of the instrument 100 comprises a control and processing unit 115 wherein said control and processing unit 115 controls and processes information coming from the at least one A/D converter 112.
  • the control and processing unit 115 is connected to the at least one A/D converter 112 through the use of at least one sound channel.
  • each of the at least one sound channel is arranged in such a way that a sound channel is arranged for each A/D converter of the at least one A/D converter 112. That is, by way of example, when the instrument 100 comprises 6 sound channels, it will also comprise 6 microphones such that each sound channel receives information from a single microphone.
  • control and processing unit 115 operates with a latency such that it is not significantly noticeable to the performer or is not a drawback to the execution of the performer's actions, and/or the operation of the instrument.
  • control and processing unit 115 operates with a latency of 2, 3, 4, 5, 6, 7, 8, 9, or 10 ms, more preferably, said unit 115 operates with a latency of 2, 3, 4 or 5 ms, more preferably, said unit 115 operates with a latency of 2 ms.
  • control and processing unit 115 comprising the body 101 of the instrument 100 digitizes information coming from the at least one microphone 106 at a given sampling rate.
  • said sampling frequency is between 12 and 192 kHz, preferably selected from 12, 24, 48, 96 and 192 kHz, still more preferably the sampling frequency is 96 kHz.
  • control and processing unit 115 communicates with the following elements in the following ways: with the at least one A/D converter 112 via an I2C or SPI connection; with the microcontroller 114 via a USB connection; and with the at least one audio output 118 or 119, by means of a DAC 116 and an audio conditioning circuit 117 if the audio output 118 is analog, or by means of a wired connection if the audio output 119 is digital.
  • control and processing unit 115 comprises a graphics processing unit (GPU), a field programmable gate array (FPGA), an embedded system, or any other high-performance co-processor.
  • control and processing unit 115 may be located outside the musical instrument 100 on an external processor, for example, inside a computer or notebook, and be in communication with the microcontroller 114 and the A/D converters 112 via a serial digital connection, such as USB. Additionally, when the external processor is a computer or notebook, the audio output used by the musical instrument shall be that comprised by the computer or notebook.
  • the control and processing unit 115 is a computer that may be located within the interior volume of the body 101 of the instrument 100, or located outside said instrument 100. In a particular embodiment, said computer may be an Nvidia Jetson nano, Jetson TX2 NX, or any other computer with similar computing capability, or any combination thereof.
  • control and processing unit 115 models at least one virtual string 200 for each of the at least one string 103 of the body 101.
  • the control and processing unit 115 models 1, 2, 3, 4 or more virtual strings for each of the at least one string of the body.
  • control and processing unit 115 models the at least one virtual string 200 such that each virtual string of the at least one virtual string 200 corresponds to each string of the at least one string 103 of the body 101.
  • when at least two virtual strings are modeled, said modeling may be independent or interrelated, i.e., said at least two strings may be modeled separately or together.
  • the instrument 100 comprises an analog audio output 118, wherein said analog audio output 118 comprises a digital-to-analog converter 116 (DAC), wherein said DAC 116 converts the signal generated by the model into an analog audio signal; and at least an audio signal conditioning circuit 117 that communicates with the at least one DAC converter 116, wherein said audio signal conditioning circuit conditions the signal for reproduction.
  • the audio signal conditioning circuit 117 is in direct communication with a reproduction medium, wherein said reproduction medium may be a speaker or a headphone, most preferably a speaker.
  • the audio signal conditioning circuit 117 is in indirect communication with a reproduction medium, for example, via an amplifier or an effects pedal.
  • the reproduction means may be inside the musical instrument, for example, inside the body 101 or inside the neck 102, or outside the musical instrument 100, for example, an autonomous speaker, a computer, or any other reproduction means suitable for the reproduction of music, sounds, or sound effects.
  • the neck 102 of the instrument is linked to the body 101 of the instrument 100 via a heel, wherein said heel generates a linkage between the bottom face of the base of the neck 102 and the region of the body 101 of the instrument 100 closest to said bottom face, wherein said bottom face is the face or region of the base that is diametrically opposite the top face of the base of the neck 102.
  • the fretboard 107 comprises a circuit board 108 of the fretboard, wherein said circuit board 108 comprises a plurality of sensors 110 preferably distributed in at least one row 109 of integrated sensors 110.
  • this integrated circuit board 108 is connected to the at least one capacitive sensing integrated circuit 113 via a wired connection.
  • the fretboard 107 of the instrument 100 comprises a plurality of sensors 110.
  • the plurality of sensors 110 of the fretboard 107 comprises capacitive sensors or pressure sensors, more preferably, the plurality of sensors 110 of the fretboard 107 comprises capacitive sensors.
  • the plurality of sensors 110 comprises sensors protruding superficially from the fretboard 107, such that there is a height difference between the position of the plurality of sensors 110 and the fretboard 107 wherein they are located. In this way the plurality of sensors provides a tactile cue of their position to the performer. A person skilled in the art will know how to optimize the exact height at which these sensors 110 protrude in order to best achieve the intended functionality.
  • the plurality of sensors 110 comprises metal sensors, preferably copper sensors, more preferably enameled copper sensors.
  • the at least one row 109 of sensors 110 comprises capacitive sensors.
  • each of the at least one row 109 of sensors 110 is arranged such that each row of sensors of the at least one row 109 of sensors 110 are aligned with each string of the at least one string 103 of the body 101 along a longitudinal axis.
  • each row of the at least one row 109 of sensors 110 comprises the same number of sensors 110.
  • each sensor 110 of the at least one row 109 of sensors 110 corresponds to the position of each of the frets of a stringed musical instrument or to the position on the fretboard of a fretless stringed instrument.
  • the plurality of sensors 110 are connected to the at least one capacitive sensing integrated circuit 113, wherein said capacitive sensing integrated circuit 113 comprises an MPR121 integrated circuit.
  • the plurality of sensors 110 of the fretboard 107 of the instrument 100 comprises 72 sensors.
  • said 72 sensors are distributed in 6 sensor rows, wherein each sensor row comprises 12 sensors.
  • each sensor corresponds to the position of each of the frets of a stringed musical instrument.
  • the musical instrument 100 comprises at least one additional modulator 111 of the model.
  • said additional modulator 111 is located on the neck 102 of the instrument 100, more preferably it is located at the base of the neck 102.
  • the at least one additional modulator 111 of the model is integrated into an inertial measurement unit (IMU) and sends a continuous signal to the microcontroller 114 via an I2C or SPI connection.
  • the at least one additional modulator 111 of the model is selected from an accelerometer, gyroscope, magnetometer, or a combination thereof.
  • the at least one additional modulator 111 of the model transmits information that can be used to modify at least one variable or parameter of the at least one virtual string, of the at least one node of the n nodes, or of the at least one spring of the n-1 springs, for example: mass of the nodes, friction coefficients, position where the force is applied, tuning, output level, additional force, among others.
  • the additional modulator 111 of the model is an accelerometer.
  • the function of the accelerometer is to capture parameters such as vibrato or small gestures of the performer, which modify the sound of the string once the initial transient has passed, and introduce them to the model in the form of forces or modulation of model parameters.
  • the musical instrument 100 of the invention comprises a power supply arranged in the inner volume of the body 101, connected directly or indirectly to all parts of the instrument requiring electrical power.
  • the power supply may be any power supply that a person skilled in the art would consider suitable to be able to power the instrument 100 of the present invention and to enable the methods of the present invention to be carried out.
  • the instrument 100 comprises at least one microphone 106, at least one capacitive sensing integrated circuit 113, a plurality of sensors 110 of the fretboard 107, and at least one additional modulator 111 of the model, wherein at least one of these elements produces a plurality of signals received by the processor 115, and wherein said plurality of signals is used by the processor 115 to calculate the parameters of the mathematical model of the at least one virtual string.
  • the musical instrument 100 comprises at least one string 103, at least one microphone 106, at least one capacitive sensing integrated circuit 113, at least one A/D converter 112, at least one spring 105, and at least one row 109 of sensors 110.
  • each of the at least one microphone, at least one integrated circuit, at least one A/D converter, at least one spring, and at least one row of sensors is arranged in such a way that a single microphone, integrated circuit, A/D converter, spring, and row of sensors is associated with a single string of the at least one string 103 comprised by the instrument 100.
  • when the instrument comprises 6 strings, it will also comprise 6 microphones, 6 integrated circuits, 6 A/D converters, 6 springs and 6 rows of sensors, such that each microphone, integrated circuit, A/D converter, spring and row of sensors is associated with a single string.
  • generating a plurality of signals of step A) comprises generating null signals, non-null signals, and a combination of null and non-null signals, wherein a "null signal" is that signal which is filtered out as a result of the method for digitizing and processing of the present invention.
  • the musical instrument 100 of the present invention produces only a plurality of null signals in the absence of actions of the performer thereon.
  • the signal generation of step A) comprises generating at least one non-null signal, wherein said at least one non-null signal results from the action of a performer on the musical instrument 100 of the present invention.
  • the action of the performer on the musical instrument 100 of the present invention comprises establishing direct contact between the deft hand of the performer and the at least one string 103 of the instrument 100, or establishing indirect contact, e.g., via an instrumental performance element, e.g., a pick or a bow, between the deft hand of the performer and the at least one string 103 of the instrument 100.
  • the action of the performer on the musical instrument 100 of the present invention comprises establishing direct contact between the non-deft hand of the performer and at least one sensor 110 of the plurality of sensors 110 of the fretboard 107.
  • the action of the performer on the musical instrument 100 of the present invention comprises establishing a direct or indirect contact between the deft hand of the performer and the at least one string 103 of the instrument 100 and, simultaneously, establishing a contact between the non-deft hand of the performer and at least one sensor 110 of the plurality of sensors 110 of the fretboard 107, wherein said at least one string 103 and said at least one sensor 110 are located on the same longitudinal axis.
  • generating signals by the at least one string 103 of step A) may comprise generating sounds and movements, wherein said sounds and movements result from the action of a performer.
  • sounds coming from the at least one string 103 are detected by the at least one microphone 106.
  • movements of the at least one string 103 are detected by the capacitive sensing integrated circuit 113.
  • the plurality of signals of step A) coming from the at least one string 103 and the plurality of signals coming from the plurality of sensors 110 of the fretboard 107 come from a string and a sensor that are aligned.
  • the at least one string 103 is used as a capacitive sensor that sends signals to the at least one capacitive sensing integrated circuit 113.
  • the performer's actions on the at least one string 103 produce at least one capacitance change signal associated with the input of the integrated circuit, which can be used to determine the contact between at least one part of the performer's body, for example, a hand or at least one finger, and the at least one string 103 of the instrument 100. It can also be used to detect an indirect contact, for example, via an instrumental performance element, e.g., a pick or a bow, between the performer and the at least one string 103 of the instrument 100.
  • the performer's actions upon at least one sensor 110 of the plurality of sensors 110 of the fretboard 107 produce at least one capacitance change signal associated with the input of the integrated circuit.
  • the origin of the at least one capacitance change signal produced by at least one sensor 110 of the plurality of sensors 110 of the fretboard 107 is used to determine the position on the fretboard 107 of the contact between at least one part of the performer's body and at least one sensor 110 of the plurality of sensors 110.
  • the intensity of the at least one capacitance change signal may be used to determine the pressure exerted on the at least one sensor 110 of the plurality of sensors 110 of the fretboard 107.
  • since the digitization and processing method of the present invention makes it possible to incorporate pressure values into the modeling of the virtual string, it is possible to generate a spectrum of sounds, for example, a fully plucked string, harmonics, slurs between two notes, trills, quenching, etc.
  • continuous pressure sensing is performed by using the printed circuit board 108 with capacitive sensors 110, which are read by at least one integrated circuit 113 that delivers a digital signal proportional to the change in capacitance added by finger contact with the surface. The more pressure is applied to the sensor 110, the greater the contact surface generated between the finger and the board 108, and a digital signal proportional to the pressure exerted can be obtained.
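The continuous pressure reading described above can be sketched in a few lines. This is an illustrative sketch, not the patent's firmware: the function name, the per-sensor untouched baseline and the 10-bit full-scale range are all assumptions.

```python
# Sketch: convert a raw capacitance count from a fretboard sensor 110 into a
# pressure-proportional value. More pressure flattens the fingertip, increasing
# the contact area and therefore the capacitance delta above the baseline.

def pressure_from_capacitance(raw: int, baseline: int, full_scale: int = 1023) -> float:
    """Return a 0.0-1.0 value proportional to finger pressure."""
    delta = max(0, raw - baseline)           # ignore readings below the untouched level
    return min(1.0, delta / (full_scale - baseline))

# Example: baseline of 200 counts, reading of 611 counts on a 10-bit sensor.
p = pressure_from_capacitance(611, 200)
```

In practice one baseline per sensor would be captured at power-up, while the strings and fretboard are untouched.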
  • the digitization of the plurality of signals coming from the at least one microphone 106 of step B) is performed by the at least one A/D converter 112, wherein said digitization is performed with a frequency of between 12 and 192 kHz, preferably selected from 24, 48, 96 and 192 kHz, more preferably with a frequency of 96 kHz.
  • the threshold value used in step C) of the method can be determined by the performer before or during the execution of the actions. A person skilled in the art will be able to determine said threshold value according to the requirements that allow the best functioning of the method of the present invention.
  • this threshold value may be determined with reference to the noise of the input signal. More preferably, said threshold value will be 20% higher than the maximum value of the noise.
  • the signal filtering of step C) is carried out by the control and processing unit 115 of the instrument.
  • the signal processing step D) of the method comprises converting the signals coming from the at least one string 103 into an increase in the coefficient of friction, by means of the steps of a) sensing the change in capacitance produced by direct or indirect contact of the performer with the at least one string 103, b) obtaining a digital value through the corresponding capacitive sensing integrated circuit, wherein the sensed capacitance value is proportional to the contact exerted, so that the greater the contact, the higher the measured signal, and c) using the values thus obtained to determine the increase in the coefficient of friction of the virtual string.
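The mapping from measured string contact to a friction increase might be sketched as follows; both constants and the linear mapping are illustrative assumptions, not values taken from the invention.

```python
# Sketch: map the digital capacitance value read from the string contact to an
# increased friction coefficient kF of the virtual string (string quenching).

KF_BASE = 0.001   # resting friction coefficient of the virtual string (assumed)
KF_TOUCH = 0.25   # additional friction at maximum measured contact (assumed)

def friction_coefficient(contact: float) -> float:
    """contact in [0, 1]: 0 = string free, 1 = fully damped (e.g. palm mute)."""
    contact = min(1.0, max(0.0, contact))    # clamp out-of-range readings
    return KF_BASE + KF_TOUCH * contact
```

A continuous contact value allows the model to reproduce the continuum between an intermediate stop and a full stop of the string described elsewhere in this specification.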
  • the signal processing step D) of the method comprises converting the signals coming from the at least one microphone 106 into a force, by the steps of a) differentiating the digital signal coming from the at least one A/D converter 112 with respect to time, and b) using the value obtained from step a) as a force exerted on the at least one node of the at least one virtual string.
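A minimal sketch of step a), assuming the digitized samples arrive as a plain list at the preferred 96 kHz sampling rate; the function name and the simple finite-difference derivative are assumptions.

```python
# Sketch: finite-difference time derivative of the digitized microphone signal,
# used as the driving force on the excited node(s) of the virtual string.

def force_from_samples(samples: list[float], fs: float = 96_000.0) -> list[float]:
    """Differentiate the sampled signal with respect to time."""
    dt = 1.0 / fs                                    # sampling period
    return [(samples[i + 1] - samples[i]) / dt       # forward difference
            for i in range(len(samples) - 1)]
```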
  • the signal processing step D) of the method comprises converting the signals coming from the at least one sensor 110 of the plurality of sensors 110 into a friction or motion restriction, by using the origin position of the signal coming from the at least one sensor 110 of the fretboard 107 and the intensity of the capacitance change signal to determine the node where a force proportional to the measured signal is applied.
  • the signal processing step D) of the method comprises converting the signals coming from the at least one additional modulator 111 of the model into a force or other modulation of the model, following the steps of a) obtaining the sensing values of the at least one additional modulator 111 through the microcontroller 114, and b) using the values obtained from step a) to modify at least one variable or parameter of the at least one virtual string, of the at least one node of the n nodes, or of the at least one spring of the n-1 springs.
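As a hedged illustration of step b), the sketch below maps an accelerometer reading to a modulation of the spring constant kR (a vibrato-like tension change); the modulation depth, the clamping range and all names are assumptions.

```python
# Sketch: modulate a virtual-string parameter from the additional modulator 111.
# Here an accelerometer axis reading (in g) scales the spring constant kR,
# emulating the tension change of a vibrato gesture.

def modulate_tension(k_r_base: float, accel_x: float, depth: float = 0.02) -> float:
    """Scale kR by up to +/- depth per g of acceleration, clamped to +/- 1 g."""
    accel_x = max(-1.0, min(1.0, accel_x))
    return k_r_base * (1.0 + depth * accel_x)
```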
  • the method for digitizing and processing signals of the present invention allows emulating the quenching of the at least one string 103.
  • each string of the at least one string 103 is provided with a connection to another capacitive sensing integrated circuit thus using the metallic string as a capacitive sensor.
  • a continuous variation between full stopping of the at least one string 103 and intermediate stops, such as those achieved on a real stringed instrument, is used.
  • said method for emulating string quenching is used to emulate intermediate stopping or full stopping of at least one string 103. In one embodiment, said method for emulating string quenching is used to emulate a continuous variation between an intermediate stopping and a full stopping of at least one string 103.
  • the parameters of the virtual string to be defined in step A) of the method comprise the mass of the nodes 201, the friction coefficients kF, the spring constants kR of the springs 202, the position of the frets, the spring equilibrium position, the length of the virtual string, the distance on the X-axis of the virtual string with respect to the fretboard, the node from which the sound is to be extracted, and the node on which the force is to be applied, among others.
  • each of the parameters are determined for each node of the n nodes 201 and for each spring of the (n-1) springs 202. Said parameters may be determined and modified before or during the execution of the instrument 100 of the present invention.
  • the equilibrium position of any node of the n nodes 201 and any spring of the (n-1) springs 202 is the position taken by the n nodes 201 and the (n-1) springs 202 when the instrument 100 is not subjected to any action by the performer, or is the position taken by the n nodes 201 and the (n-1) springs 202 when the unit 115 receives only null signals.
  • the virtual string variables to be defined in step A) comprise the displacement of each node of the n nodes 201 with respect to the equilibrium position, the speed of movement of each node of the n nodes 201, the position where force is applied on the at least one virtual string 200, the position of the performer's non-deft hand on the circuit board 108, the pressure of the performer's non-deft hand on the circuit board 108, the direct or indirect contact of the performer on the at least one string 103, and the movements and accelerations applied on the body 101 of the instrument 100 detected by means of the at least one additional modulator 111 of the model.
  • the modeling of the at least one virtual string 200 of step A) comprises establishing a system in two dimensions X and Y, wherein the X axis is the axis perpendicular to the virtual string 200 and the Y axis is the axis longitudinal to the virtual string 200, and wherein the sound produced by said at least one virtual string 200 is determined based on the respective distance to the equilibrium position and velocity of the n nodes 201 comprising said at least one virtual string 200.
  • the (n-1) springs 202 making up the at least one virtual string 200 follow the dynamics described by Hooke's Law, or corrected versions of Hooke's Law.
  • the exact values of the virtual string parameters can be used by the performer to determine the pitch, timbre, tuning, type of instrument to be imitated by modeling the virtual string. Such parameters can result in sounds that are not obtainable with a real physical instrument, such as a string with infinite sustain, or a mixture of sounds.
  • the application of the plurality of input signals of step B) of the method for reproducing sounds of the present invention corresponds to the application of a force, wherein said force is applied on at least one node 201, preferably on a group of between 1 and 20 nodes, more preferably 10 nodes, and wherein said force is modulated by means of a Gaussian function according to:

    Fext,i = F0 · e^(−a·(i − Ncenter)²)

    where Fext,i is the force on the i-th node; F0 is the force coming from the signal of the at least one microphone 106 after being derived; a is the parameter that regulates the width of the Gaussian; i is the index of the node; and Ncenter is the central position of application of the force.
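The Gaussian spreading of the external force over a group of nodes can be sketched directly; only the formula Fext,i = F0·e^(−a·(i − Ncenter)²) comes from the description, while the width parameter and node counts used here are illustrative.

```python
import math

# Sketch: distribute the driving force F0 over the nodes of the virtual string,
# centered on node n_center, following the Gaussian of the description.

def gaussian_forces(f0: float, n_nodes: int, n_center: int, a: float = 0.5) -> list[float]:
    """Return the per-node external force F_ext,i = f0 * exp(-a*(i - n_center)**2)."""
    return [f0 * math.exp(-a * (i - n_center) ** 2) for i in range(n_nodes)]
```

Spreading the pluck over roughly 10 nodes, as the description prefers, avoids the unnaturally bright transient that a single-node impulse would produce.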
  • the plurality of input signals modulated by means of a Gaussian function as described above is the plurality of input signals resulting from the conversion of the signals coming from the at least one microphone 106.
  • the sound reproduction method of the present invention comprises applying the following forces on each node of the n nodes that comprise the modeled virtual string:

    F1 = kR · (D(i,i-1) − Deq), the force resulting from the interaction between node i and its neighbor (i-1), linked through a spring, where kR is the spring constant and D(i,i-1) is the distance between the i-th node and its neighbor (i-1);

    F2 = kR · (D(i,i+1) − Deq), the force resulting from the interaction between node i and its neighbor (i+1), linked through a spring, where D(i,i+1) is the distance between the i-th node and its neighbor (i+1);

    F4 = C · Xn;

    F5 is the restriction to movement produced by the plurality of input signals resulting from the digitizing and processing of the signals coming from the plurality of sensors 110 of the fretboard 107, where C is a value proportional to the measured capacitance (pressure) and Xn is the position on the X-axis (position).

    That is to say, the net force on each node is the sum of the above forces.
  • the at least one node 201 on which the force Fext is applied may be the same one that will produce a plurality of output signals, while the at least one node 201 on which the force F4 is applied will undergo a motion restriction that allows modulating the variable "string length". Therefore, in one embodiment of the invention the at least one node 201 to which the force Fext is applied does not receive the force F4 and vice versa.
  • the string modeling comprises the use of the following equation to determine the forces F1 and F2 that a node i exerts on a neighboring node i-1 (and its equivalent for i+1):

    Fx(i) = kR · (D − Deq) · D(i,i-1) / D

    where D is calculated as:

    D = √(D(i,i-1)² + DY2)

    where DY2 is a new parameter representing the distance between nodes on the Y-axis, squared; Fx(i) is the force that node i exerts on its neighbor in the X axis; kR is the spring constant; D is the modulus of the distance between nodes; Deq is the distance with respect to the spring equilibrium point; and D(i,i-1) is the distance between neighboring nodes in the X axis.
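The force and distance expressions above translate into a short routine; the symbols follow the description, but the sign convention and any concrete values are assumptions.

```python
import math

# Sketch: X-axis spring force between neighboring nodes of the virtual string.
# The inter-node distance D combines the X separation with the fixed squared
# Y spacing DY2, and the Hooke force kR*(D - Deq) is projected back onto X.

def spring_force_x(x_i: float, x_j: float, dy2: float, k_r: float, d_eq: float) -> float:
    """X-axis force that the spring to neighbor j exerts on node i."""
    dx = x_j - x_i                      # X-axis separation D(i,i-1)
    d = math.sqrt(dx * dx + dy2)        # modulus of the inter-node distance
    return k_r * (d - d_eq) * dx / d    # Hooke force projected onto X
```

With this sign convention a stretched spring pulls node i toward its neighbor, which is the restoring behavior a transverse string model needs.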
  • the string modeling comprises calculating the positions in successive steps by means of Verlet's algorithm.
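A minimal position-Verlet step consistent with this description might look as follows; holding the end nodes fixed and passing a precomputed force list are simplifications, not the invention's exact implementation.

```python
# Sketch: one position-Verlet step over the node positions of the virtual
# string. x and x_prev are the current and previous X displacements of the
# n nodes; forces is the net per-node force already assembled elsewhere.

def verlet_step(x, x_prev, forces, mass: float, dt: float):
    """Advance positions one step: x_new = 2x - x_prev + (F/m) * dt**2."""
    x_new = [2.0 * xi - xpi + (fi / mass) * dt * dt
             for xi, xpi, fi in zip(x, x_prev, forces)]
    x_new[0] = x_new[-1] = 0.0   # keep the string end positions fixed
    return x_new
```

The caller keeps the last two position vectors and feeds them back on the next step, which is what makes the scheme time-reversible and stable for this kind of oscillatory system.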
  • producing output signals of step C) of the method comprises calculating the position of each node of the n nodes 201 relative to the equilibrium position, the speed of movement of each node of the n nodes 201, the position where force is applied on the at least one virtual string 200, the position of the performer's non-deft hand on the circuit board 108, and the pressure of the performer's non-deft hand on the circuit board 108.
  • it comprises calculating the position on the perpendicular axis (X-axis) of each node of the n nodes 201 relative to its equilibrium position and the velocity with which each node of the n nodes 201 moves.
  • producing output signals of step C) comprises monitoring the position of one or a small group of nodes such as, for example, between 1 and 20 nodes, more preferably 10 nodes.
  • the plurality of signals generated by the instrument 100 of the present invention is a result of an action of the performer, wherein said action comprises establishing direct or indirect contact between at least a part of the performer's body, preferably, the deft hand of the performer and the at least one string 103 of the instrument 100 and, simultaneously, establishing contact between another part of the performer's body, preferably, the non-deft hand of the performer and the plurality of sensors 110 of the fretboard 107.
  • producing output signals comprises modulating the variables of at least one node of the n nodes 201 by applying at least one force, friction, parameter modulation, restriction, and/or a combination thereof.
  • the audio signal is generated by monitoring the position and velocity of one or a small group of nodes such as, for example, between 1 and 20 nodes, more preferably 10 nodes.
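As an illustrative sketch of this monitoring, the audio sample is taken here as the average X-displacement of a 10-node group; the group location, the averaging and the gain are hypothetical choices.

```python
# Sketch: produce one output audio sample as the average displacement of a
# small monitored group of nodes (the description prefers about 10 nodes).

def output_sample(positions: list[float], start: int = 40, width: int = 10,
                  gain: float = 1.0) -> float:
    """Average X-displacement of the monitored node group, scaled by gain."""
    group = positions[start:start + width]
    return gain * sum(group) / len(group)
```

Calling this once per simulation step yields the sample stream that is then sent to the DAC 116 or to the digital audio output 119.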
  • the software is based on the integration of Newton's equations of motion using a Verlet algorithm.
  • the virtual system is composed of a series of nodes linked by quadratic potentials (Hooke's law) keeping the end positions fixed.
  • the simulated system has only 2 dimensions, one longitudinal to the string (Y) and one perpendicular to the string (X); the sound is produced by recording the deflection from the equilibrium position along the perpendicular axis.
  • an important and original aspect is linking the virtual system with a perturbation generated in the real world. This is achieved by applying on at least one node 201 a force proportional to the information provided by a sensor.
  • This sensor can be the initial sound disturbance read by a microphone 106 coupled to an actual string 103, or the capacitance measured on a touch sensor 110 on which the strings are stopped.
  • the Verlet algorithm calculates the positions in successive steps and records the sound produced as the distance from equilibrium of the position of a chosen node or a small group of nodes, for example, between 1 and 20 nodes, more preferably 10 nodes.
  • sending signals step D) of the method herein comprises sending the plurality of output signals to an analog audio output 118 or to a digital audio output 119, preferably an analog audio output 118.
  • step D) comprises the substeps of i) converting the plurality of digital output signals to a plurality of analog output signals by using a digital-to-analog converter (DAC) 116, ii) conditioning the plurality of analog output signals resulting from step i) by means of an audio signal conditioning circuit 117.
  • the step D) of the sound reproduction method of the present invention comprises a step of modifying the plurality of output signals prior to their reproduction, preferably, prior to their conversion to an analog signal.
  • This modification step includes the application of digital effects, for example, distortions, echoes, reverbs, among others.
  • the sound reproduction method of the present invention can be used to reproduce at least one of the following effects: fully plucked string, slurs between two notes, harmonics, trills, quenching.
  • the sound reproduction method of the present invention can be used to reproduce the sound of another musical instrument, such as, for example, guitars, basses, violins, violas, charangos, double basses, sitars, cuatros, ukuleles and any other string instrument.
  • the instrument of the present invention comprises two parts, wherein the part of the instrument used to digitize the action of the deft hand of the performer is the body of the instrument, while the part of the instrument used to digitize the action of the non-deft hand of the performer is the neck of the instrument.
  • the musical instrument of the present invention thus comprises a body and a neck, wherein the body of the instrument comprises
  • a body comprising a container and a lid, thus defining an inner volume, wherein said body further comprises six (6) strings located on the outer surface of the lid and extending along the lid, wherein said 6 strings are metal electric guitar strings, two viscoelastic foam rubber media, wherein the first medium is located proximate to one end of the at least one string and the second medium is located proximate to the opposite end of the at least one string and wherein both media are in contact with the at least one string, six (6) springs located on the outer surface of the lid, where said six (6) springs are linked to the corresponding strings, six (6) microphones located on the outer surface of the lid, below the corresponding strings and without contact with the string;
  • capacitive sensing integrated circuits wherein said capacitive sensing integrated circuits are located within the interior volume and wherein they are connected to the six (6) strings by means of a wired connection, six (6) analog-to-digital converters, or A/D converters, located inside the interior volume, where said A/D converters are connected to the corresponding microphones by means of a wired connection, a microcontroller, located in the inner volume of the body, connected to the capacitive sensing integrated circuit via an I2C connection, an audio output located in the interior volume of the body, wherein said at least one audio output is analog and comprises a digital-to-analog converter (DAC), an audio signal conditioning circuit that communicates with the at least one DAC; a control and processing unit (CPU), located in the interior volume of the body, where said control unit is in communication with the six (6) A/D converters via an I2C or SPI connection, the microcontroller via a USB connection, and the audio output via the DAC, and wherein said control unit comprises a graphics processing unit (GPU); and a neck comprising a fretboard, wherein said fretboard comprises
  • a printed circuit board comprising seventy-two (72) capacitive sensors, wherein said seventy-two (72) sensors are arranged in six (6) sensor rows, the sensor rows comprising 12 sensors each, and wherein said circuit board is connected to the at least one capacitive sensing integrated circuit of the body via an I2C connection,
  • an accelerometer where the accelerometer is integrated into an inertial measurement unit (IMU), located in the fretboard of the neck and connected to the body's microcontroller via an I2C or SPI connection, wherein the six (6) microphones, the six (6) A/D converters, the six (6) springs and the six (6) rows of sensors correspond to each of the strings of the six (6) strings, in a 1 to 1 ratio.
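The coupling of real-world sensor readings to the virtual string described in the points above can be sketched in Python as follows. This is a minimal illustration under assumed function names, gains and thresholds; the specification does not prescribe an implementation:

```python
# Minimal sketch of coupling real-world sensor readings to the virtual
# string model. Function names, gains and caps are illustrative
# assumptions, not values taken from the specification.

def mic_sample_to_force(sample, gain=1.0):
    """Convert one digitized microphone sample into a force to be
    applied to a node of the virtual string (force proportional to
    the sensor reading)."""
    return gain * sample

def capacitance_to_friction(capacitance, noise_floor, max_friction=0.5):
    """Convert a fret-sensor capacitance reading into a local friction
    coefficient: touching a sensor damps the virtual string at the
    corresponding node, emulating the fretting finger."""
    threshold = 1.2 * noise_floor      # 20% above the noise, as in the text
    if capacitance <= threshold:
        return 0.0                     # below threshold: no touch detected
    # friction grows with the reading above the threshold, up to a cap
    return min(max_friction,
               max_friction * (capacitance - threshold) / threshold)
```

In this sketch the microphone drives the virtual string with a force, while the fretboard capacitance locally increases the friction at the touched node, as described above.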


Abstract

The present invention relates to a musical instrument that allows the subtle gesticulations and interpretations of a performer to be replicated with great accuracy. Another aspect of the present invention relates to a method for digitizing and processing signals by means of the musical instrument and to a method for synthesizing sounds by means of the musical instrument for reproduction.

Description

MUSICAL INSTRUMENT THAT DIGITIZES AND PROCESSES SIGNALS, AND SYNTHESIZES SOUNDS AND RELATED METHODS
TECHNICAL FIELD OF THE INVENTION
The present invention relates to musical instruments. More particularly, the present invention relates to a digital electronic musical instrument comprising a synthesizer based on a physical simulation, a method allowing digitizing and processing signals generated continuously by said instrument, and a method for synthesizing sounds by means of the digital instrument.
BACKGROUND OF THE INVENTION
During the last century, the development of electronic instruments accompanied changes in music. Electric guitars and basses, synthesizers and samplers expanded the field of possible sounds, allowing the artistic expression of a mechanized human society. In these developments there is a marked difference between the organic and the mechanical.
Today we live in a different paradigm, with realistic animations, gestural interfaces and ubiquitous computer tools. In this new paradigm, the boundary between the organic and the synthetic becomes blurred, even disappearing.
However, from the point of view of sonority, there is still no correlate of this change. While samplers can be mistaken for real instruments (since they are recorded sounds), they are not from the point of view of interaction, where the keyboard continues to be the de facto interface.
On the other hand, the digital revolution is still unable to provide expressive solutions to instruments where the interaction with the vibrating object (string, air, drumheads, etc.) is direct. This type of instrument gives rise to subtleties that cannot be reproduced by a keyboard-type controller that only supports a limited number of parameters (mainly note and velocity).
Among the most common digital musical systems or instruments are systems based on MIDI, such as those described in patent applications US 8093482 B1, US 2011/239848 A1, and US 2022/208160 A1. Briefly, these systems use a processor that receives signals emitted by sensors and generates an output signal in MIDI format. However, these systems make use of discrete signals that are usually limited to the identity of a note played, its intensity and, eventually, some modulated parameter. As a result, these systems are incapable of executing and interpreting the more subtle techniques and gestures resulting from interpretation by a performer.
Also known in the prior art are systems or digital musical instruments capable of executing techniques and gestures resulting from interpretation by a performer, more particularly techniques and gestures typical of a stringed musical instrument, such as those described in patent applications US 2008/236374 A1, US 8093482 B1, and US 2011/239848 A1. However, these systems are usually limited to implementations independent of the sound generation system, restricting the recording of gestures to performance intensities and plucking positions.
Thus, the prior art documents describe inventions that present certain disadvantages, limitations and incapacities when attempting to execute and interpret subtle techniques and gestures resulting from interpretation by a performer, more particularly subtle techniques and gestures peculiar to a stringed musical instrument.
Accordingly, there is a need for a musical instrument and a method to accurately digitize and process the actions of a performer, so that such instrument and method can execute and interpret subtle techniques and gestures resulting from the performance by a performer, more particularly the subtle techniques and gestures of a stringed musical instrument.
BRIEF DESCRIPTION OF THE INVENTION
Based on these considerations, the present invention provides a musical instrument that allows digitizing and processing signals to accurately replicate the actions of a performer, in such a way as to allow performing and interpreting the subtle techniques and gestures resulting from the interpretation by a performer, more particularly the subtle techniques and gestures peculiar to a stringed musical instrument. Another aspect of the present invention is a method for digitizing and processing such actions. Additionally, another aspect of the present invention is a method for synthesizing sounds from the digitized and processed signals.
In a first aspect, the present invention relates to a musical instrument that digitizes and processes the actions of a performer, wherein said instrument comprises two parts, wherein one of the parts digitizes the action of the non-deft hand of the performer and the other part digitizes the action of the deft hand of the performer.
The instrument comprises a body and a neck, wherein the body of the instrument is the part of the musical instrument that digitizes the action of the deft hand of the performer while the neck of the instrument is the part of the musical instrument that digitizes the action of the non-deft hand of the performer.
One aspect of the present invention relates to a musical instrument that digitizes and continuously processes analog signals, digital signals, or both, produced by said instrument, and wherein said instrument comprises
- a body comprising a container and a lid, thus defining an inner volume, wherein said body further comprises at least one string located on the outer surface of the lid and extending along the lid, wherein said at least one string is a metal string, two string damping media, wherein the first medium is located proximate to one end of the at least one string and the second medium is located proximate to the opposite end of the at least one string and wherein both media are in contact with the at least one string, at least one spring located on the outer surface of the lid, wherein said at least one spring is linked to the corresponding at least one string, at least one microphone located on the outer surface of the lid, below the corresponding at least one string and without contact with the string; at least one capacitive sensing integrated circuit wherein said at least one capacitive sensing integrated circuit is located within the inner volume and wherein said at least one capacitive sensing integrated circuit is connected with the at least one string; at least one analog-to-digital converter, or A/D converter, located within the interior volume, wherein said A/D converter is connected to the at least one microphone; a microcontroller, located in the inner volume of the body, connected to the capacitive sensing integrated circuit; at least one audio output located in the interior volume of the body, wherein said at least one audio output may be analog, digital, or both; and a control and processing unit (CPU), located in the interior volume of the body, where this unit is in communication with at least one A/D converter, the microcontroller, the at least one audio output, and wherein said unit comprises a graphics processing unit (GPU); a neck comprising a fretboard, wherein said fretboard comprises - a printed circuit board comprising a plurality of sensors of the fretboard, and wherein said plurality of sensors are arranged in at least one sensor row, and 
wherein said board is connected to the at least one capacitive sensing integrated circuit,
- at least one additional modulator of the model, wherein said modulator is integrated into an inertial measurement unit (IMU), located in the fretboard and connected to the microcontroller, wherein the at least one microphone, the at least one capacitive sensing integrated circuit, the at least one A/D converter, the at least one spring, and the at least one row of sensors correspond to each of the strings of the at least one string, in a 1 to 1 ratio.
In one embodiment, the instrument body comprises 1, 2, 3, 4, 5, 6, 7, 8 or more strings, more preferably it comprises 4, 5 or 6 strings, still more preferably, the instrument body comprises 6 strings.
In one embodiment, the length of the at least one string is between 10 and 40 cm, more preferably, between 25 and 35 cm, still more preferably, the length of the string is 30 cm.
The damping media comprise a viscoelastic material. In a preferred embodiment, said viscoelastic material is viscoelastic foam rubber.
In one embodiment, the at least one microphone is located below the corresponding at least one string at a distance of 5, 6, 7, 8, 9 or 10 mm, more preferably at a distance of 5 mm.
The at least one capacitive sensing integrated circuit is connected to the at least one string, to the plurality of sensors of the fretboard, and to the microcontroller.
In one embodiment, the at least one A/D converter digitizes the information coming from the at least one microphone with a given sampling frequency, preferably the sampling frequency is between 12 and 192 kHz, preferably the sampling frequency is selected from the group consisting of 12, 24, 48, 96 and 192 kHz, even more preferably the sampling frequency is 96 kHz.
The instrument body comprises a control and processing unit (CPU), wherein said unit controls and processes the information coming from at least one A/D converter and the microcontroller.
In one embodiment, the control and processing unit comprises at least one sound channel. Preferably, said unit comprises 1, 2, 3, 4, 5, 6, 7, 8 or more sound channels, more preferably it comprises 4, 5 or 6 sound channels. Even more preferably, said unit comprises 6 sound channels.
In one embodiment, the control and processing unit controls and processes the information coming from the at least one A/D converter with a given sampling frequency, preferably said sampling frequency is between 12 and 192 kHz, preferably the sampling frequency is selected from the group consisting of 24, 48, 96, 192 kHz, even more preferably the sampling frequency is 96 kHz.
In one embodiment, the control and processing unit models at least one virtual string. Preferably, the CPU models 1, 2, 3, 4, 5, 6, 7, 8 or more virtual strings. Preferably, the CPU models 4, 5 or 6 virtual strings. Even more preferably, the CPU models 6 virtual strings.
In one embodiment, the audio output comprises an analog audio output, wherein said analog audio output comprises a digital-to-analog converter (DAC) in communication with an audio signal conditioning circuit, wherein said audio signal conditioning circuit conditions the signal for reproduction.
In one embodiment, the digital-to-analog converter (DAC) converts signals with a given sampling frequency, preferably said sampling frequency is between 12 and 192 kHz, preferably the sampling frequency is selected from the group consisting of 24, 48, 96 and 192 kHz, even more preferably the sampling frequency is 48 kHz.
In a preferred embodiment, the audio signal conditioning circuit is in communication with a reproduction means which may be comprised by the instrument, or may be located outside the instrument, and wherein said reproduction means is a speaker or a headphone, more preferably a speaker.
In one embodiment, the plurality of sensors comprised on the fretboard printed circuit board comprises capacitive sensors or pressure sensors, more preferably, the plurality of sensors comprises capacitive sensors.
In one embodiment, the at least one row of sensors comprises at least 6 sensors. Preferably, said row comprises 6, 7, 8, 9, 10, 11, 12 or more sensors, preferably, the at least one row of sensors comprises 10, 11 or 12 sensors. Even more preferably, the at least one row of sensors comprises 12 sensors.
In a more preferred embodiment, the plurality of sensors of the instrument fretboard comprises 72 sensors, wherein said 72 sensors are distributed in 6 sensor rows, wherein each sensor row comprises 12 sensors. In one embodiment, the at least one additional modulator of the model is selected from an accelerometer, gyroscope, magnetometer, or a combination thereof, and wherein said modulator is integrated into an inertial measurement unit (IMU).
Another aspect of the present invention refers to a method for digitizing and processing signals coming from the instrument of the present invention characterized in that it comprises the steps of:
A) continuously generating a plurality of signals by means of the musical instrument of the present invention, wherein said plurality of signals comprises digital and analog signals, and wherein said plurality of signals comprises i) signals coming from at least one string, ii) signals coming from at least one microphone, iii) signals coming from at least one sensor of the plurality of sensors of the fretboard, iv) signals coming from at least one additional modulator of the model, or v) a combination thereof;
B) digitizing the plurality of analog signals of step A), wherein i) the plurality of signals coming from the at least one string is digitized by means of at least one capacitive sensing integrated circuit, ii) the plurality of signals coming from the at least one microphone is digitized by means of at least one A/D converter with a frequency of between 12 and 192 kHz, iii) the plurality of signals coming from the at least one sensor of the plurality of sensors of the fretboard is digitized by means of at least one capacitive sensing integrated circuit;
C) filtering the plurality of digital signals resulting from step A) or B) by means of the substeps of i) setting a threshold value, ii) comparing each signal of the plurality of signals resulting from step A) or B) with said threshold value, iii) discarding those signals from the plurality of signals that are below the threshold value;
D) processing the plurality of digital signals obtained from step C) to obtain a plurality of input signals, wherein said processing comprises i) converting the signals coming from the at least one string into an increase of the friction coefficient, ii) converting the signals coming from at least one microphone into a force, iii) converting the signals coming from the plurality of sensors of the fretboard into a friction, iv) converting the signals coming from the at least one additional modulator of the model into a force, volume, friction, position where a force is applied, or position where the sound is read.
In a preferred embodiment, the method for digitizing and processing signals of the present invention is performed in real time.
In one embodiment, the threshold value of step Ci) is set with reference to the noise of the digital signals resulting from step A) or B). More preferably, said threshold value is determined as 20% greater than said noise.
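The filtering step C), with a threshold set 20% above the measured noise, can be sketched as follows. This is an illustrative sketch; apart from the 20% margin stated above, the names and values are assumptions:

```python
def filter_signals(samples, noise_level):
    """Step C) of the digitizing method: set a threshold 20% above the
    measured noise level, compare each sample with it, and discard the
    samples that fall below it. Sketch only; the specification does
    not prescribe an implementation."""
    threshold = 1.2 * noise_level                       # substep i)
    return [s for s in samples if abs(s) >= threshold]  # substeps ii) and iii)
```

For example, `filter_signals([0.1, 2.0, -1.5, 0.5], noise_level=1.0)` keeps only `[2.0, -1.5]`, since the other samples fall below the 1.2 threshold.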
In one embodiment, the method for processing and digitizing signals is carried out independently for each string of the at least one string, for each microphone of the at least one microphone, for each sensor of the plurality of sensors of the fretboard, and for each additional model modulator of the at least one additional model modulator.
In one embodiment, the position and pressure of the performer's non-deft hand on the circuit board is used to modify the length of the string and consequently the resonant frequency of the at least one virtual string, generating different notes. In one embodiment, the pressure difference can be used to generate sound effects typical of real instruments such as a fully plucked string, harmonics, slurs between two notes, quenching, etc.
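The effect of shortening the virtual string on its resonant frequency can be illustrated with the standard dispersion relation of a mass-spring chain with fixed ends. This is a textbook formula, not one stated in this disclosure, and the parameter values are assumptions:

```python
import math

def virtual_string_fundamental(n_active, k=1.0, mass=1.0, sample_rate=96000.0):
    """Fundamental frequency (Hz) of a chain of n_active free nodes
    coupled by springs of stiffness k with both ends fixed, assuming
    one simulation step per audio sample. Textbook dispersion relation
    of a mass-spring chain; all parameter values are assumptions."""
    # lowest normal-mode angular frequency, in radians per time step
    omega = 2.0 * math.sqrt(k / mass) * math.sin(math.pi / (2.0 * (n_active + 1)))
    return omega * sample_rate / (2.0 * math.pi)
```

Pressing a fret reduces the number of active nodes, shortening the virtual string and raising the pitch: with these assumed parameters, roughly halving the active length roughly doubles the frequency, which is how the fretboard generates different notes.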
Another aspect of the present invention is a method for synthesizing sounds comprising the steps of
A) modeling at least one virtual string by means of the control and processing unit of the instrument of the present invention, following the substeps of i) arranging n nodes joined in series along an axis by (n-1) springs, and wherein the node located at each end of the virtual string maintains its fixed position at the equilibrium position, ii) defining the parameters and variables of the n nodes and the (n-1) springs; iii) defining the parameters and variables of at least one virtual string,
B) applying continuously on the at least one modeled virtual string of step A) a plurality of input signals, wherein said plurality of input signals comprises the plurality of signals obtained by means of the digitizing and processing method of the present invention, C) producing a plurality of output signals by means of the substeps of i) calculating the variables corresponding to each virtual string of the at least one virtual string, to each of the n nodes and each of the (n-1) springs, resulting from the application of the plurality of input signals of step B), ii) monitoring the position of at least one node of the n nodes, wherein said plurality of output signals comprise digital signals,
D) sending the plurality of output signals produced in step C) to an audio output for their reproduction.
In a preferred embodiment, the method for synthesizing sounds of the present invention is performed in real time.
In one embodiment, the value of n for the n nodes is between 100 and 500 nodes, preferably 500. Accordingly, the value of (n-1) for the number of springs is between 99 and 499 springs, preferably 499 springs.
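A minimal sketch of steps A) to C) — n nodes joined by (n-1) springs with fixed ends, integrated with a position-Verlet scheme, the output taken as the deflection of a monitored node — could look as follows. All parameter values (stiffness, mass, time step, damping, initial pluck) are illustrative assumptions, not the ones used by the instrument:

```python
import math

def simulate_string(n=100, steps=2000, k=1.0, mass=1.0, dt=0.1,
                    friction=0.001, monitored=None):
    """Position-Verlet integration of a chain of n nodes joined by
    (n-1) Hooke springs with both end nodes fixed. Returns the audio
    signal read as the transverse deflection of the monitored node(s).
    All parameter values are illustrative assumptions."""
    if monitored is None:
        monitored = [n // 2]          # read the sound near the middle
    x = [0.0] * n                     # transverse (X) displacement
    # initial perturbation: pluck the string into its fundamental shape
    for i in range(1, n - 1):
        x[i] = 0.01 * math.sin(math.pi * i / (n - 1))
    x_prev = list(x)                  # zero initial velocity
    audio = []
    for _ in range(steps):
        x_new = list(x)
        for i in range(1, n - 1):     # end nodes keep their fixed position
            # Hooke's-law force from the two neighbouring springs
            a = k * (x[i - 1] - 2.0 * x[i] + x[i + 1]) / mass
            # Verlet update with a simple velocity-proportional damping
            x_new[i] = ((2.0 - friction) * x[i]
                        - (1.0 - friction) * x_prev[i]
                        + a * dt * dt)
        x_prev, x = x, x_new
        # output sample: mean deflection from equilibrium of monitored nodes
        audio.append(sum(x[j] for j in monitored) / len(monitored))
    return audio
```

The returned list oscillates about zero and decays slowly with the damping term, which is the behavior the synthesis method monitors to produce the audio signal.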
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 shows a schematic representation of the instrument of the present invention showing the components and their connections.
Figure 2 shows a representative schematic of the modeling of a virtual string as described in the present invention.
Figure 3 shows a schematic of a preferred embodiment of the digitizing and processing method of the present invention in combination with a preferred embodiment of the sound generation method of the present invention.
Figure 4 shows a block diagram of one embodiment of the musical instrument of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention will be described in greater detail below, referring to the appended Figures which illustrate exemplary embodiments of the invention, which are not to be interpreted as limiting the invention.
In each of the Figures the same or similar numerical references are used for each element of the instrument of the invention.
Throughout this description, the term "deft hand" will be used to refer to both the right hand of a right-handed guitarist and the left hand of a left-handed guitarist, while the term "non-deft hand" will be used to refer to both the left hand of a right-handed guitarist and the right hand of a left-handed guitarist. The terms "deft hand" and "non-deft hand" should not be interpreted in relation to the performer's ability, but only to the spatial arrangement in which the performer places the hands in relation to the instrument.
For the purposes of the present invention, the term "performer" shall be understood to mean a person who acts upon the instrument for the purpose of generating a music, sound, sound effect, or a combination thereof.
For the purposes of the present invention, the term "action of a performer" or "action of the performer" shall be understood to mean any action, including and not limited to movements, direct and indirect contacts, executions, strokes, strumming, techniques and gesticulations, among others, performed upon the instrument by the performer that is detectable by the instrument, i.e. that generates at least one non-null signal by the instrument, for the purpose of generating a music, sound, sound effect, or a combination thereof. Likewise, the term may refer to a singular action or to a series of consecutive actions.
For the purposes of the present invention, the term "digitizing" refers to any electronic process carried out by any type of converter, wherein a signal or plurality of signals of any type of non-digital nature is converted into a digital signal or into a plurality of digital signals.
Throughout the present disclosure the term "string" when not accompanied by any adjective or qualifier will refer to a real, physical string made of any material suitable for use as required by the instrument of the present invention and upon which a performer can physically interact in order to perform any of the methods disclosed in the present invention, unless the context clearly indicates otherwise. Likewise, the terms "real string", " string of the body", "string of the instrument" and "string of the body of the instrument" shall be considered synonymous and shall be used interchangeably.
On the other hand, the term "virtual string" will refer to a string resulting from mathematical modeling, in any of the embodiments disclosed in the present specification, executed by means of the software of a control and processing unit, and on which a signal or a plurality of signals coming from the instrument, preferably signals resulting from the actions of a performer, can be applied. In the same sense, when an "additional modulator of the model" is mentioned in the present application, the expression "of the model" should be understood as referring to the mathematical model used in the modeling of the virtual string.
For the purposes of the present invention, the terms "body of the instrument" and "part of the musical instrument that digitizes the action of the deft hand of the performer" are to be understood as synonyms and, therefore, will be used interchangeably. Likewise, the terms "neck of the instrument" and "part of the musical instrument that digitizes the action of the performer's non-deft hand" shall be considered synonymous and used interchangeably.
For the purposes of the present invention, the term "damped string" shall be understood to mean a string whose vibrations subsequent to the initial vibration are attenuated. Accordingly, any process to attenuate the vibrations of a string subsequent to the initial vibration shall be understood as "damp", "damping" or "the damping" and these terms shall be used interchangeably when they refer to a string that is subjected to such a process.
For the purposes of the present invention, the term "real time", when referring to a process or action, shall be understood as meaning that said process or action is completed in an amount of time that is not significantly perceptible to the performer or that said process or action is completed in an amount of time that does not represent an inconvenience to the execution of the performer's actions, and/or the operation of the instrument.
The "top face of the base" should be understood as the face on which the performer will execute the actions to be digitized and processed by the instrument. On the other hand, "bottom face of the base" should be understood as the face or region of the base that is diametrically opposite to the top face.
Throughout the present description, when the term "parameter" is used in reference to the at least one virtual string or the modeling of the at least one virtual string, it should be understood as a magnitude modeling a physical property (such as mass, spring hardness, air friction, etc.) of the at least one virtual string that determines its properties independently of the signals coming from the musical instrument. Likewise, when the term "variable" is used in reference to the at least one virtual string or the modeling of the at least one virtual string, it should be understood as a magnitude modeling a physical property (such as string length, friction coefficients, applied forces, etc.) of the at least one virtual string that determines its behavior in a manner dependent on the signals coming from the musical instrument.
For the purposes of the present invention the term "input signal", "input signals" and "plurality of input signals" will be used interchangeably and will refer to those digital signals resulting from the method for digitizing and processing of the present invention in any of its embodiments which are used to calculate or determine the variables of the at least one virtual string in any of its embodiments.
For the purposes of the present invention, the term "output signal", "output signals" and "plurality of output signals" will be used interchangeably and will refer to those digital signals resulting from the application, in any of its embodiments, of the input signals on the at least one virtual string of the present invention in any of its embodiments.
In the following, the musical instrument of the present invention will be described in greater detail, through the components that comprise it and the preferred embodiments.
One aspect of the present invention is a musical instrument 100 that digitizes and processes signals, preferably, wherein said signals result from the actions of a performer, such that the digitization and processing of the signals allows the techniques and gestures of the performer to be accurately replicated.
In one embodiment, the techniques and gestures that the instrument replicates comprise legato, tapping, slapping, glissando, vibrato, finger tapping, pick tapping, pick dragging, pizzicato, string quenching, strumming, snapping, harmonic generation, among others.
In one embodiment, the musical instrument 100 comprises a body 101 and a neck 102, wherein said body 101 and neck 102 are linked such that the neck 102 extends from the body 101 following its longitudinal axis. In one embodiment the body 101 is between 20 and 40 cm long, between 7 and 30 cm wide and between 4 and 10 cm thick. In one embodiment the neck 102 of the instrument is between 20 and 40 cm long, between 4 and 8 cm wide and between 1 and 3 cm thick. In one embodiment the body 101 comprises a container and a lid pivotally connected to the container, thus defining an inner volume.
In one embodiment, the instrument 100 of the invention comprises a neck 102 comprising a base with a partially or completely flat top face, wherein said top face is that face upon which the performer executes the actions to be digitized and processed by the instrument, and wherein said neck 102 further comprises: a fretboard 107 disposed on at least a planar portion of the top face of the base of the neck 102 comprising a printed circuit board 108, comprising a plurality of sensors 110 of the fretboard, wherein said plurality of sensors 110 are arranged in at least one row 109 of sensors 110, and wherein said circuit board 108 is connected to the at least one capacitive sensing integrated circuit 113 via a wired connection;
- at least one additional modulator 111 of the model, wherein said modulator 111 is integrated to an inertial measurement unit (IMU) located in the fretboard 107 of the neck 102 and which is connected to the microcontroller 114.
A person skilled in the art will know how to choose from a variety of base shapes comprising a partially or completely flat top face. All base shapes and sizes that do not interfere with or compromise the functionality of the instrument are covered by the present disclosure.
In one embodiment, the at least one row 109 of sensors 110 is arranged such that it is aligned on the longitudinal axis with its corresponding string of the at least one string 103.
In one embodiment, the body 101 of the instrument 100 comprises at least one string 103 of a stringed musical instrument, preferably said at least one string 103 is of an acoustic or electric stringed instrument. More preferably, said at least one string is an acoustic guitar string or an electric guitar string.
In one embodiment the at least one string 103 is made of metal, preferably a metal selected from steel, nickel, brass, bronze, or any combination thereof. In a particularly preferred embodiment, the at least one string 103 is made of steel and nickel.
In one embodiment, each of the strings of the at least one string 103 is independently connected to the capacitive sensing integrated circuit 113. The at least one string 103 is a damped string with two damping media 104a and 104b, wherein medium 104a is located proximate to the end of the at least one string 103 most distal to the fretboard, medium 104b is located proximate to the end of the at least one string 103 closest to the fretboard, and both media are in contact with the at least one string 103. "Proximate to the end of the string" will be understood to mean any distance that allows the damping medium to produce the desired damping and reduce vibrations subsequent to the initial vibration of the string. A person skilled in the art will be able to determine the exact location of the damping media. Likewise, a person skilled in the art will be able to determine the optimum size of these damping media to produce the desired damping.
The inclusion of the viscoelastic material gives the instrument the ability to avoid reflections of the initial signal, thus achieving greater precision in the replication of the performer's intention and avoiding spurious signals, giving the invention the advantage of achieving greater precision in the replication of techniques and gestures performed by the performer.
In one embodiment, the body 101 of the instrument 100 comprises two supports. Preferably said supports are located on the outer surface of the lid and at opposite ends of the at least one string, wherein said two supports comprise a first support which is used as a tensioning point of the at least one string and a second support which is used as a bridge or anchor point. Preferably, the method of anchoring the at least one string is by attaching the string to the second body support.
The body 101 of the instrument 100 comprises two supports, wherein the first support is located at the more distal end relative to the neck 102 of the instrument 100 and the second support is located at the nearer end relative to the neck 102 of the instrument 100. In one embodiment the first support comprises at least one spring 105. Preferably, said support comprises 1, 2, 3, 4, 5, 6, 7, 8 or more springs 105. More preferably, said support comprises 4, 5 or 6 springs 105. Even more preferably, said support comprises 6 springs 105.
Each of the at least one spring 105 is arranged in such a way that a spring is arranged for each of the strings of the at least one string 103. Preferably, said at least one spring 105 is linked to the at least one string 103 from a single end. When the instrument comprises more than one string 103 and therefore more than one spring 105, all the springs shall be arranged from the same end of each of the strings, so that in the instrument all the springs are located on the first support.
Thanks to the inclusion of the springs in the string attachment area, it is possible to achieve that the tension of the strings of the instrument is similar to that of the strings of a conventional length instrument despite the fact that the strings of the instrument of the present invention are significantly shorter.
In a preferred embodiment, the musical instrument 100 comprises at least one polyphonic microphone, more preferably said at least one microphone is a hexaphonic microphone.
The body 101 of the instrument 100 comprises a capacitive sensing integrated circuit 113, wherein said circuit 113 is connected to the at least one string 103 such that each of the strings of the at least one string 103 is independently connected. In a preferred embodiment, the connection between the at least one string 103 and the capacitive sensing integrated circuit 113 is made via a wired connection.
The at least one capacitive sensing integrated circuit 113 is used to detect the direct or indirect contact of the performer with the at least one string 103, for example, through an instrumental performance element such as a pick or a bow. Further, the at least one capacitive sensing integrated circuit 113 receives signals from the at least one string 103, from the plurality of sensors 110 of the fretboard, or both, and sends signals to the microcontroller 114.
The at least one capacitive sensing integrated circuit 113 comprises a group of sensing integrated circuits for sensing the quenching of the at least one string 103 and another group of capacitive sensing integrated circuits for sensing the force and position coming from the sensors 110 of the fretboard 107.
The instrument body comprises at least one A/D converter 112. In one embodiment of the invention, the bit depth chosen for the at least one A/D converter 112 is such as to allow sufficient resolution to feed the simulation, preferably, the bit depth is 16 bits.
In a preferred embodiment, each of the at least one A/D converter 112 is arranged such that an A/D converter 112 is arranged for each of the microphones 106 that sense the strings of the at least one string 103. More particularly, said converters 112 and microphones 106 are connected via a wired connection.
In one embodiment, the at least one A/D converter 112 digitizes information coming from the at least one microphone 106 at a sampling rate that allows the capture of the audible components of the performance transient (20 Hz to 20 kHz) and of ultrasonic vibrations that may contribute to the final model synthesis result (20 kHz to 48 kHz). In one embodiment, the at least one A/D converter 112 digitizes the information coming from the at least one microphone 106 with a sampling frequency of between 12 and 192 kHz, preferably selected from 12, 24, 48, 96 and 192 kHz, even more preferably a sampling frequency of 96 kHz.
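By way of illustration only, the choice of 96 kHz follows from the Nyquist relation: a sampling rate fs can represent frequency content up to fs / 2. A minimal sketch (the function name is an illustrative assumption, not from the patent):

```python
# Nyquist relation: a sampling rate fs can represent content up to fs / 2.
# Illustrative check of the rates mentioned in the text.
def max_representable_hz(fs_hz: float) -> float:
    """Highest frequency representable at sampling rate fs_hz (Nyquist limit)."""
    return fs_hz / 2.0

for fs in (12_000, 24_000, 48_000, 96_000, 192_000):
    print(fs, "->", max_representable_hz(fs))

# At 96 kHz the limit is 48 kHz, which covers both the audible band
# (20 Hz to 20 kHz) and the ultrasonic band (20 to 48 kHz) mentioned above.
```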
This high sampling rate allows the musical instrument 100 of the present invention to replicate in greater detail and resolution the movements performed by the at least one string 103 and to transmit that information for use in the modeling of the at least one virtual string 200, thus allowing a more accurate replication of the movements performed on the real string. As a final result, when this information is combined with the information from the rest of the sensors, the musical instrument achieves an expressiveness closer to the gesticulations and techniques of the performer than is obtained by other musical instruments in the prior art.
The musical instrument 100 features a microcontroller 114. The microcontroller 114 is responsible for controlling, concentrating and distributing to the control and processing unit 115 the at least one digital signal that does not come from the at least one A/D converter 112, for example, the at least one digital signal coming from the at least one additional modulator 111 or from the at least one capacitive sensor 110.
In a preferred embodiment, said microcontroller 114 is connected to the capacitive sensing integrated circuit 113 via an I2C connection or an SPI connection. In a preferred embodiment, the microcontroller 114 used is a 32-bit microcontroller with SPI and I2C digital communication ports and digital inputs, model ESP32.

The body 101 of the instrument 100 comprises a control and processing unit 115, wherein said control and processing unit 115 controls and processes information coming from the at least one A/D converter 112. In one embodiment, the control and processing unit 115 is connected to the at least one A/D converter 112 through the use of at least one sound channel. In a preferred embodiment, each of the at least one sound channel is arranged in such a way that a sound channel is arranged for each A/D converter of the at least one A/D converter 112. That is, by way of example, when the instrument 100 comprises 6 sound channels, it will also comprise 6 microphones such that each sound channel receives information from a single microphone.
In one embodiment, the control and processing unit 115 operates with a latency such that it is not significantly noticeable to the performer or is not a drawback to the execution of the performer's actions, and/or the operation of the instrument. Preferably, the control and processing unit 115 operates with a latency of 2, 3, 4, 5, 6, 7, 8, 9, or 10 ms, more preferably, said unit 115 operates with a latency of 2, 3, 4 or 5 ms, more preferably, said unit 115 operates with a latency of 2 ms.
In one embodiment, the control and processing unit 115 comprised in the body 101 of the instrument 100 digitizes information coming from the at least one microphone 106 at a given sampling rate. Preferably said sampling frequency is between 12 and 192 kHz, more preferably selected from 12, 24, 48, 96 and 192 kHz, still more preferably the sampling frequency is 96 kHz.
In one embodiment, the control and processing unit 115 (CPU) communicates with the following elements in the following ways: with the at least one A/D converter 112, via an I2C or SPI connection; with the microcontroller 114, via a USB connection; and with the at least one audio output 118 or 119, by means of a DAC 116 and an audio conditioning circuit 117 if the audio output 118 is analog, or by means of a wired connection if the audio output 119 is digital.
In one embodiment, the control and processing unit 115 comprises a graphics processing unit (GPU), a field programmable gate array (FPGA), an embedded system, or any other high-performance co-processor.
In an alternative embodiment, the control and processing unit 115 may be located outside the musical instrument 100 on an external processor, for example, inside a computer or notebook, and be in communication with the microcontroller 114 and the A/D converters 112 via a serial digital connection, such as USB. Additionally, when the external processor is a computer or notebook, the audio output used by the musical instrument shall be that comprised by the computer or notebook. In one embodiment, the control and processing unit 115 is a computer that may be located within the interior volume of the body 101 of the instrument 100, or located outside said instrument 100. In a particular embodiment, said computer may be an Nvidia Jetson nano, Jetson TX2 NX, or any other computer with similar computing capability, or any combination thereof.
In one embodiment, the control and processing unit 115 models at least one virtual string 200 for each of the at least one string 103 of the body 101. Preferably, the control and processing unit 115 models 1, 2, 3, 4 or more virtual strings for each of the at least one string of the body.
In a preferred embodiment, the control and processing unit 115 models the at least one virtual string 200 such that each virtual string of the at least one virtual string 200 corresponds to each string of the at least one string 103 of the body 101.
In one embodiment, when modeling at least two virtual strings, said modeling may be independent or interrelated, i.e., said at least two strings may be modeled separately or together.
In a preferred embodiment, the instrument 100 comprises an analog audio output 118, wherein said analog audio output 118 comprises a digital-to-analog converter 116 (DAC), wherein said DAC 116 converts the signal generated by the model into an analog audio signal; and at least an audio signal conditioning circuit 117 that communicates with the at least one DAC converter 116, wherein said audio signal conditioning circuit conditions the signal for reproduction.
In one embodiment, the audio signal conditioning circuit 117 is in direct communication with a reproduction medium, wherein said reproduction medium may be a speaker or a headphone, most preferably a speaker. In another embodiment, the audio signal conditioning circuit 117 is in indirect communication with a reproduction medium, for example, via an amplifier or an effects pedal. In one embodiment, the reproduction medium may be inside the musical instrument, for example, inside the body 101 or inside the neck 102, or outside the musical instrument 100, for example, an autonomous speaker, a computer, or any other medium suitable for the reproduction of music, sounds, or sound effects.

In one embodiment, the neck 102 of the instrument is linked to the body 101 of the instrument 100 via a heel, wherein said heel generates a linkage between the bottom face of the base of the neck 102 and the region of the body 101 of the instrument 100 closest to said bottom face, wherein said bottom face is the face or region of the base that is diametrically opposite the top face of the base of the neck 102.
The fretboard 107 comprises a circuit board 108 of the fretboard, wherein said circuit board 108 comprises a plurality of sensors 110 preferably distributed in at least one row 109 of integrated sensors 110. In one embodiment this integrated circuit board 108 is connected to the at least one capacitive sensing integrated circuit 113 via a wired connection.
In one embodiment, the fretboard 107 of the instrument 100 comprises a plurality of sensors 110. Preferably, the plurality of sensors 110 of the fretboard 107 comprises capacitive sensors or pressure sensors, more preferably, the plurality of sensors 110 of the fretboard 107 comprises capacitive sensors.
In one embodiment the plurality of sensors 110 comprises sensors protruding superficially from the fretboard 107, such that there is a height difference between the plurality of sensors 110 and the fretboard 107 wherein they are located. In this way the plurality of sensors provides a tactile cue of their position to the performer. A person skilled in the art will know how to optimize the exact height at which these sensors 110 protrude in order to best achieve the intended functionality.
In one embodiment the plurality of sensors 110 comprises metal sensors, preferably copper sensors, more preferably enameled copper sensors.
In a preferred embodiment, the at least one row 109 of sensors 110 comprises capacitive sensors. Preferably, each of the at least one row 109 of sensors 110 is arranged such that each row of sensors of the at least one row 109 of sensors 110 are aligned with each string of the at least one string 103 of the body 101 along a longitudinal axis. In a preferred embodiment, each row of the at least one row 109 of sensors 110 comprises the same number of sensors 110.
In a preferred embodiment, each sensor 110 of the at least one row 109 of sensors 110 corresponds to the position of each of the frets of a stringed musical instrument or to the position on the fretboard of a fretless stringed instrument. In one embodiment, the at least one row 109 of sensors 110 comprises capacitive sensors. In a preferred embodiment, the plurality of sensors 110 are connected to the at least one capacitive sensing integrated circuit 113, wherein said capacitive sensing integrated circuit 113 comprises an MPR121 integrated circuit.
In a most preferred embodiment, the plurality of sensors 110 of the fretboard 107 of the instrument 100 comprises 72 sensors. In a more preferred embodiment, said 72 sensors are distributed in 6 sensor rows, wherein each sensor row comprises 12 sensors. In a preferred embodiment, each sensor corresponds to the position of each of the frets of a stringed musical instrument.
In one embodiment, the musical instrument 100 comprises at least one additional modulator 111 of the model. Preferably, said additional modulator 111 is located on the neck 102 of the instrument 100, more preferably it is located at the base of the neck 102. Preferably, the at least one additional modulator 111 of the model is integrated into an inertial measurement unit (IMU) and sends a continuous signal to the microcontroller 114 via an I2C or SPI connection.
In one embodiment, the at least one additional modulator 111 of the model is selected from an accelerometer, gyroscope, magnetometer, or a combination thereof. In one embodiment the at least one additional modulator 111 of the model transmits information that can be used to modify at least one variable or parameter of the at least one virtual string, of the at least one node of the n nodes, or of the at least one spring of the n-1 springs, for example: mass of the nodes, friction coefficients, position where the force is applied, tuning, output level, additional force, among others.
In a particular embodiment, the additional modulator 111 of the model is an accelerometer. The function of the accelerometer is to capture parameters such as vibrato or small gestures of the performer, which modify the sound of the string once the initial transient has passed, and introduce them to the model in the form of forces or modulation of model parameters.
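By way of illustration only, such a modulation could be sketched as follows; the function name, the choice of the spring constant kR as the modulated parameter, and the sensitivity value are all illustrative assumptions, not specified by the patent:

```python
# Hypothetical sketch: mapping a raw accelerometer sample from the IMU
# modulator 111 to a small modulation of a virtual-string parameter
# (here the spring constant kR), e.g. to emulate vibrato.

def modulate_spring_constant(k_r: float, accel_g: float,
                             sensitivity: float = 0.01) -> float:
    """Return kR scaled by a small factor proportional to acceleration.

    accel_g: acceleration along one IMU axis, in g, after gravity removal.
    sensitivity: illustrative scaling; a real instrument would calibrate this.
    """
    return k_r * (1.0 + sensitivity * accel_g)

# A gentle shake (0.5 g) raises kR, and thus the pitch, by 0.5 %:
k_mod = modulate_spring_constant(1000.0, 0.5)
```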
In one embodiment, the musical instrument 100 of the invention comprises a power supply arranged in the inner volume of the body 101, connected directly or indirectly to all parts of the instrument requiring electrical power. The power supply may be any power supply that a person skilled in the art would consider suitable to be able to power the instrument 100 of the present invention and to enable the methods of the present invention to be carried out.
The instrument 100 comprises at least one microphone 106, at least one capacitive sensing integrated circuit 113, a plurality of sensors 110 of the fretboard 107, and at least one additional modulator 111 of the model, wherein at least one of these elements produces a plurality of signals received by the processor 115, and wherein said plurality of signals is used by the processor 115 to calculate the parameters of the mathematical model of the at least one virtual string.
It should be understood that when the musical instrument 100 comprises at least one string 103, at least one microphone 106, at least one capacitive sensing integrated circuit 113, at least one A/D converter 112, at least one spring 105, and at least one row 109 of sensors 110, each of the at least one microphone, at least one integrated circuit, at least one A/D converter, the at least one spring, and the at least one row of sensors are arranged in such a way that a single microphone, integrated circuit, A/D converter, spring, and row of sensors are disposed with a single string of the at least one string 103 comprised by the instrument 100. That is to say, as an example and without limitation, that when the instrument comprises 6 strings, it will also comprise 6 microphones, 6 integrated circuits, 6 A/D converters, 6 springs and 6 rows of sensors such that each microphone, A/D converter, spring and row of sensors will be associated with a single string.
Each and every embodiment of the musical instrument resulting from the combination of different embodiments of each of the elements comprised, explicitly or implicitly, by said musical instrument is hereby included.
In the following, the method for digitizing and processing signals of the present invention will be described in greater detail, through the steps comprising it and the preferred embodiments.
In one embodiment, generating a plurality of signals of step A) comprises generating null signals, non-null signals, and a combination of null and non-null signals, wherein a "null signal" is that signal which is filtered out as a result of the method for digitizing and processing of the present invention. Preferably, the musical instrument 100 of the present invention produces only a plurality of null signals in the absence of actions of the performer thereon.
In one embodiment, the signal generation of step A) comprises generating at least one non-null signal, wherein said at least one non-null signal results from the action of a performer on the musical instrument 100 of the present invention.
In particular, the action of the performer on the musical instrument 100 of the present invention comprises establishing direct contact between the deft hand of the performer and the at least one string 103 of the instrument 100, or establishing indirect contact, e.g., via an instrumental performance element, e.g., a pick or a bow, between the deft hand of the performer and the at least one string 103 of the instrument 100.
Particularly, the action of the performer on the musical instrument 100 of the present invention comprises establishing direct contact between the non-deft hand of the performer and at least one sensor 110 of the plurality of sensors 110 of the fretboard 107.
More particularly, the action of the performer on the musical instrument 100 of the present invention comprises establishing a direct or indirect contact between the deft hand of the performer and the at least one string 103 of the instrument 100 and, simultaneously, establishing a contact between the non-deft hand of the performer and at least one sensor 110 of the plurality of sensors 110 of the fretboard 107, wherein said at least one string 103 and said at least one sensor 110 are located on the same longitudinal axis.
In one embodiment, generating signals by the at least one string 103 of step A) may comprise generating sounds and movements, wherein said generating sounds and movements result from the action of a performer. Particularly, sounds coming from the at least one string 103 are detected by the at least one microphone 106. Particularly, movements of the at least one string 103 are detected by the capacitive sensing integrated circuit 113.
In one embodiment, the plurality of signals of step A) coming from the at least one string 103 and the plurality of signals coming from the plurality of sensors 110 of the fretboard 107, come from a string and a sensor that are aligned.
In order to produce signals, the at least one string 103 is used as a capacitive sensor that sends signals to the at least one capacitive sensing integrated circuit 113. Preferably, the performer's actions on the at least one string 103 produce at least one capacitance change signal, associated with the input of the integrated circuit, which can be used to determine the contact between at least one part of the performer's body, for example, a hand or at least one finger, and the at least one string 103 of the instrument 100. It can also be used to detect an indirect contact, for example, via an instrumental performance element, e.g., a pick or a bow, between the performer and the at least one string 103 of the instrument 100.
The performer's actions upon at least one sensor 110 of the plurality of sensors 110 of the fretboard 107 produce at least one capacitance change signal associated with the input of the integrated circuit. The origin of the at least one capacitance change signal produced by at least one sensor 110 of the plurality of sensors 110 of the fretboard 107 is used to determine the position on the fretboard 107 of the contact between at least one part of the performer's body and at least one sensor 110 of the plurality of sensors 110. On the other hand, the intensity of the at least one capacitance change signal may be used to determine the pressure exerted on the at least one sensor 110 of the plurality of sensors 110 of the fretboard 107.
Thanks to the digitization and processing method of the present invention, which makes it possible to incorporate pressure values into the modeling of the virtual string, it is possible to generate a spectrum of sounds, for example, a fully plucked string, harmonics, slurs between two notes, trills, quenching, etc. To achieve this, continuous pressure sensing is performed by using the printed circuit board 108 with capacitive sensors 110, which are read by at least one integrated circuit 113 that delivers a digital signal proportional to the change in capacitance added by finger contact with the surface. The more pressure is applied to the sensor 110, the larger the contact surface of the finger on the board 108, and thus a digital signal proportional to the pressure exerted can be obtained.
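By way of illustration only, deriving both the fret position and a pressure value for one string from the capacitance-change readings of its row 109 of sensors 110 could be sketched as follows; the function name, threshold value, and the choice of the largest reading as the touched sensor are illustrative assumptions:

```python
# Illustrative sketch: the sensor with the largest capacitance change gives
# the fret position; the magnitude of that change approximates the pressure.

def read_fret_and_pressure(cap_deltas, touch_threshold=5):
    """cap_deltas: capacitance-change value per fret sensor, lowest fret first.

    Returns (fret_index, pressure_proxy), or (None, 0.0) when no sensor
    exceeds the touch threshold (i.e. nothing is pressed).
    """
    peak = max(range(len(cap_deltas)), key=lambda i: cap_deltas[i])
    if cap_deltas[peak] < touch_threshold:
        return None, 0.0
    return peak, float(cap_deltas[peak])

# A finger pressing near the third sensor (index 2) of a 6-sensor row:
fret, pressure = read_fret_and_pressure([0, 1, 42, 3, 0, 0])
```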
In one embodiment, the digitization of the plurality of signals coming from the at least one microphone 106 of step B) is performed by the at least one A/D converter 112, wherein said digitization is performed with a frequency of between 12 and 192 kHz, preferably selected from 24, 48, 96 and 192 kHz, more preferably with a frequency of 96 kHz.
The threshold value used in step C) of the method can be determined by the performer before or during the execution of the actions. A person skilled in the art will be able to determine said threshold value according to the needs required for the best functioning of the method of the present invention.
In a preferred embodiment, this threshold value may be determined with reference to the noise of the input signal. More preferably, said threshold value will be 20% higher than the maximum value of the noise.
In one embodiment, the signal filtering of step C) is carried out by the control and processing unit 115 of the instrument.
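By way of illustration only, the step C) filtering with a threshold set 20 % above the maximum noise value (the preferred embodiment stated above) could be sketched as follows; the function names and the per-sample gating scheme are illustrative assumptions:

```python
# Minimal sketch of the step C) filtering: samples below the threshold are
# treated as null signals and zeroed out.

def make_threshold(noise_samples):
    """Threshold set 20 % above the maximum absolute value of the noise floor."""
    return 1.2 * max(abs(s) for s in noise_samples)

def gate(signal, threshold):
    """Null (zero) every sample whose magnitude stays below the threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in signal]

thr = make_threshold([0.01, -0.02, 0.015])     # noise floor max 0.02 -> thr 0.024
clean = gate([0.001, 0.5, -0.01, -0.7], thr)   # keeps only the two loud samples
```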
In one embodiment, the signal processing step D) of the method comprises converting the signals coming from the at least one string 103 into an increase in the coefficient of friction, by means of the steps of a) sensing the change in capacitance produced by direct or indirect contact of the performer with the at least one string 103, b) obtaining a digital value through the corresponding capacitive sensing integrated circuit, wherein the sensed capacitance value is proportional to the contact exerted, such that firmer contact yields a higher measured signal, and c) using the values obtained in step b) to determine the increase in the coefficient of friction of the virtual string.
In one embodiment, the signal processing step D) of the method comprises converting the signals coming from the at least one microphone 106 into a force, by the steps of a) differentiating the digital signal coming from the at least one A/D converter 112 with respect to time, and b) using the value obtained in step a) as a force exerted on the at least one node of the at least one virtual string.
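By way of illustration only, the time derivative of the sampled microphone signal could be approximated with a first-order finite difference; the function name and the particular discretization are illustrative assumptions, not the patent's exact implementation:

```python
# Sketch of the microphone path of step D): the digitized microphone signal
# is differentiated with respect to time and the result is used as the force
# driving the virtual string.

def signal_to_force(samples, fs_hz):
    """First-order finite difference of the sampled signal, scaled by fs.

    samples: digitized microphone values; fs_hz: sampling rate (e.g. 96 kHz).
    Returns one force value per sample interval.
    """
    return [(b - a) * fs_hz for a, b in zip(samples, samples[1:])]

forces = signal_to_force([0.0, 0.1, 0.1, -0.2], fs_hz=96_000)
```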
In one embodiment, the signal processing step D) of the method comprises converting the signals coming from the at least one sensor 110 of the plurality of sensors 110 into a friction or motion restriction, by using the position of origin of the signal coming from the at least one sensor 110 of the fretboard 107 and the intensity of the capacitance change signal to determine the node where a force proportional to the measured signal is applied, wherein said force is

F = C · Xn

where C is a value proportional to the associated capacitance change signal measured through the capacitive sensing integrated circuit 113 and Xn is the displacement perpendicular to the string at node n where the force is applied.
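By way of illustration only, this node force could be sketched as follows; the function and variable names are illustrative assumptions:

```python
# Sketch of the fret-contact force F = C * x_n, where C is proportional to the
# measured capacitance change and x_n is the node's displacement perpendicular
# to the string.

def fret_contact_force(c_value: float, x_n: float) -> float:
    """Force applied at node n; grows with both the strength of the touch
    (c_value) and how far the node is from its equilibrium position (x_n)."""
    return c_value * x_n

f = fret_contact_force(c_value=3.0, x_n=0.02)
```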
In one embodiment, the signal processing step D) of the method comprises converting the signals coming from the at least one additional modulator 111 of the model into a force or other modulation of the model, following the steps of a) obtaining the sensing values of the at least one additional modulator 111 through the microcontroller, and b) using the values obtained in step a) to modify at least one variable or parameter of the at least one virtual string, of the at least one node of the n nodes, or of the at least one spring of the n-1 springs.

In one embodiment, the method for digitizing and processing signals of the present invention allows emulating the quenching of the at least one string 103. To correctly emulate the sound extinction characteristic of string quenching resulting from the contact of a part of the performer's body, for example, a hand, with a vibrating string, each string of the at least one string 103 is provided with a connection to another capacitive sensing integrated circuit, thus using the metallic string as a capacitive sensor. For this purpose, continuous variation between full stopping of at least one string 103 and intermediate stops, such as those achieved on a real stringed instrument, is used.
In one embodiment, said method for emulating string quenching is used to emulate intermediate stopping or full stopping of at least one string 103. In one embodiment, said method for emulating string quenching is used to emulate a continuous variation between an intermediate stopping and a full stopping of at least one string 103.
In the following, the method for synthesizing sounds of the present invention will be described in greater detail, through the steps comprising it and the preferred embodiments.
In one embodiment, the parameters of the virtual string to be defined in step A) of the method comprise the mass of the nodes 201, the friction coefficients kF, the spring constants kR of the springs 202, the position of the frets, the spring equilibrium position, the length of the virtual string, the distance on the X-axis of the virtual string with respect to the fretboard, the node from which the sound is to be extracted, and the node on which the force is to be applied, among others.
In a preferred embodiment, each of the parameters are determined for each node of the n nodes 201 and for each spring of the (n-1) springs 202. Said parameters may be determined and modified before or during the execution of the instrument 100 of the present invention.
In one embodiment, the equilibrium position of any node of the n nodes 201 and any spring of the (n-1) springs 202, is the position taken by the n nodes 201 and the (n-1) springs 202 when the instrument 100 is not subjected to any action by the performer, or is the position taken by the n nodes 201 and the (n-1) springs 202 when the unit 115 receives only null signals.
In one embodiment, the virtual string variables to be defined in step A) comprise the displacement of each node of the n nodes 201 with respect to the equilibrium position, the speed of movement of each node of the n nodes 201, the position where force is applied on the at least one virtual string 200, the position of the performer's non-deft hand on the circuit board 108, the pressure of the performer's non-deft hand on the circuit board 108, the direct or indirect contact of the performer on the at least one string 103, and the movements and accelerations applied on the body 101 of the instrument 100 detected by means of the at least one additional modulator 111 of the model.
In one embodiment, the modeling of the at least one virtual string 200 of step A) comprises establishing a system in two dimensions X and Y, wherein the X axis is the axis perpendicular to the virtual string 200 and the Y axis is the axis longitudinal to the virtual string 200, and wherein the sound produced by said at least one virtual string 200 is determined based on the respective distance to the equilibrium position and velocity of the n nodes 201 comprising said at least one virtual string 200.
In one embodiment, the (n-1) springs 202 making up the at least one virtual string 200 follow the dynamics described by Hooke's Law, or corrected versions of Hooke's Law.
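By way of illustration only, advancing such a chain of n nodes 201 linked by (n-1) springs 202 obeying Hooke's law in time could be sketched as follows; the explicit-Euler integrator, the fixed end nodes, and all numeric values are illustrative assumptions, not the patent's exact scheme:

```python
# Minimal sketch of one explicit-Euler time step for the mass-spring chain:
# each interior node feels Hooke forces from its two neighboring springs plus
# a velocity-proportional friction force (coefficient kF). End nodes are fixed.

def step(x, v, k_r, k_f, mass, dt):
    """Advance node displacements x and velocities v by one time step dt."""
    n = len(x)
    x_new, v_new = x[:], v[:]
    for i in range(1, n - 1):
        f_spring = k_r * (x[i - 1] - x[i]) + k_r * (x[i + 1] - x[i])
        f = f_spring - k_f * v[i]
        v_new[i] = v[i] + (f / mass) * dt
        x_new[i] = x[i] + v_new[i] * dt
    return x_new, v_new

# A "plucked" middle node starts moving back toward equilibrium:
x, v = [0.0, 0.0, 1.0, 0.0, 0.0], [0.0] * 5
x, v = step(x, v, k_r=100.0, k_f=0.5, mass=0.01, dt=1e-4)
```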
The exact values of the virtual string parameters can be used by the performer to determine the pitch, timbre, tuning, type of instrument to be imitated by modeling the virtual string. Such parameters can result in sounds that are not obtainable with a real physical instrument, such as a string with infinite sustain, or a mixture of sounds.
In an embodiment, the application of the plurality of input signals of step B) of the method for reproducing sounds of the present invention corresponds to the application of a force, wherein said force is applied on at least one node 201, preferably on at least one group of nodes comprising between 1 and 20 nodes, preferably 10 nodes, and wherein the force is modulated by means of a Gaussian function according to:
Fext,i = F0 · e^(−α·(i − Ncenter)²)

where Fext,i is the force on the i-th node, F0 is the force coming from the signal of at least one microphone 106 after being derived, α is the parameter that regulates the width of the Gaussian, i is the index of the node and Ncenter is the central position of application of the force. Using this function, the application of force is localized around a central area and fades away continuously with distance from it.
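As an illustrative sketch (in Python, with hypothetical parameter values and variable names that are not part of the specification), the Gaussian modulation of the input force over the string nodes can be computed as:

```python
import math

def gaussian_force(f0, alpha, n_center, n_nodes):
    """Distribute an input force f0 over n_nodes string nodes with a
    Gaussian window centered at node n_center:
    F_ext,i = f0 * exp(-alpha * (i - n_center)**2)."""
    return [f0 * math.exp(-alpha * (i - n_center) ** 2) for i in range(n_nodes)]

# The force peaks at the center node and fades symmetrically away from it.
forces = gaussian_force(f0=1.0, alpha=0.1, n_center=50, n_nodes=100)
```

The parameter alpha controls how localized the excitation is: a large alpha concentrates the force on a few nodes, a small alpha spreads it over a wider region of the virtual string.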
In a preferred embodiment, the plurality of input signals modulated by means of a Gaussian function as described above is the plurality of input signals resulting from the conversion of the signals coming from the at least one microphone 106.
In one embodiment, the sound reproduction method of the present invention comprises subjecting each node of the n nodes comprising the modeled virtual string to the following forces:
F1 = −kR · (D(i,i−1) − Deq)

where F1 is the force resulting from the interaction between node i and its neighbor (i−1), linked through a spring, kR is the spring constant, Deq is the spring equilibrium distance, and D(i,i−1) is the distance between the i-th node and its neighbor (i−1);
F2 = −kR · (D(i,i+1) − Deq)

where F2 is the force resulting from the interaction between node i and its neighbor (i+1), linked through a spring, kR is the spring constant, Deq is the spring equilibrium distance, and D(i,i+1) is the distance between the i-th node and its neighbor (i+1);
F3 = −kF,i · vx,i

where kF,i is the friction coefficient of node i and vx,i is the X-axis velocity of node i;
F4 = C · Xi

where F4 is the restriction to movement produced by the plurality of input signals resulting from the digitizing and processing of the signals coming from the plurality of sensors 110 of the fretboard 107, C is a value proportional to the measured capacitance (pressure) and Xi is the position of node i on the X-axis. The sum of forces is thus:
FTOTAL = F1 + F2 + F3 + F4 + Fext
It is evident that when the device produces only null signals, the values of F4 and Fext are equal to 0.
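The per-node force balance described above can be sketched as follows (a simplified one-dimensional transverse model in Python; the node indexing, parameter names and the sign of the restriction term F4 are assumptions of this sketch, chosen so that each term opposes the motion it is meant to damp or restrict):

```python
def total_force(x, v, i, kR, kF, C, f_ext):
    """Net transverse force on interior node i of the virtual string.
    x: node displacements on the X axis, v: node velocities,
    kR: spring constant, kF: friction coefficient,
    C: restriction proportional to measured fretboard capacitance,
    f_ext: externally applied (e.g. Gaussian-windowed microphone) force."""
    f1 = kR * (x[i - 1] - x[i])  # spring pull toward left neighbour
    f2 = kR * (x[i + 1] - x[i])  # spring pull toward right neighbour
    f3 = -kF * v[i]              # viscous friction
    f4 = -C * x[i]               # movement restriction from fretboard pressure
    return f1 + f2 + f3 + f4 + f_ext
```

With null input signals (C = 0 and f_ext = 0) the F4 and Fext contributions vanish, matching the observation above.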
On the other hand, it can be appreciated that, upon application of a plurality of input signals coming from the at least one microphone 106, the at least one node 201 on which the force Fext is applied may be the same one that will produce a plurality of output signals, while the at least one node 201 on which the force F4 is applied will undergo a motion restriction that allows modulating the variable "string length". Therefore, in one embodiment of the invention the at least one node 201 to which the force Fext is applied does not receive the force F4 and vice versa.
In an alternative embodiment, the string modeling comprises the use of the following equation to determine the forces F1 and F2 that a node i exerts on a neighboring node (i−1) (and its equivalent for (i+1)):
Fx(i) = −kR · (D − Deq) · D(i,i−1) / D

where D is calculated as:

D = √(D(i,i−1)² + DY²)

where DY² is a new parameter representing the squared distance between nodes on the Y-axis, Fx(i) is the X-axis component of the force that node i exerts on its neighbor (i−1), kR is the spring constant, D is the modulus of the distance between the nodes, Deq is the distance with respect to the spring equilibrium point, and D(i,i−1) is the distance between the neighboring nodes along the X axis.
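This alternative force law, which accounts for the fixed longitudinal spacing between nodes, can be sketched in Python as follows (a hypothetical helper function; names and values are illustrative):

```python
import math

def spring_force_x(xi, xj, dy2, kR, d_eq):
    """X-component of the spring force a neighbour at transverse
    displacement xj exerts on a node at displacement xi, when the fixed
    squared Y spacing dy2 is taken into account: D = sqrt(dx**2 + dy2)."""
    dx = xi - xj
    d = math.sqrt(dx * dx + dy2)
    return -kR * (d - d_eq) * dx / d
```

Note that when both nodes have the same transverse displacement (dx = 0) the X-component of the force vanishes, as expected for a purely longitudinal stretch.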
In an alternative embodiment, the string modeling comprises calculating the positions in successive steps by means of Verlet's algorithm.
In one embodiment, producing output signals of step C) of the method comprises calculating the position of each node of the n nodes 201 with respect to the equilibrium position, the speed of movement of each node of the n nodes 201, the position where force is applied on the at least one virtual string 200, the position of the performer's non-deft hand on the circuit board 108, and the pressure of the performer's non-deft hand on the circuit board 108. Particularly, it comprises calculating the position on the perpendicular axis (X-axis) of each node of the n nodes 201 relative to its equilibrium position and the velocity with which each node of the n nodes 201 moves. Particularly, producing output signals of step C) comprises monitoring the position of one or a small group of nodes such as, for example, between 1 and 20 nodes, more preferably 10 nodes.
In one embodiment, the plurality of signals generated by the instrument 100 of the present invention is a result of an action of the performer, wherein said action comprises establishing direct or indirect contact between at least a part of the performer's body, preferably, the deft hand of the performer and the at least one string 103 of the instrument 100 and, simultaneously, establishing contact between another part of the performer's body, preferably, the non-deft hand of the performer and the plurality of sensors 110 of the fretboard 107.
In one embodiment, producing output signals comprises modulating the variables of at least one node of the n nodes 201 by applying at least one force, friction, parameter modulation, restriction, and/or a combination thereof.
The audio signal is generated by monitoring the position and velocity of one or a small group of nodes such as, for example, between 1 and 20 nodes, more preferably 10 nodes.
The software is based on the integration of Newton's equations of motion using a Verlet algorithm. The virtual system is composed of a series of nodes linked by quadratic potentials (Hooke's law), keeping the end positions fixed. For simplicity, the simulated system has only 2 dimensions, one longitudinal (Y) to the string and one perpendicular (X) to the string; the sound is produced by recording the deflection from the equilibrium position along the perpendicular axis.
An important and original aspect is linking the virtual system with a perturbation generated in the real world. This is achieved by applying on at least one node 201 a force proportional to the information provided by a sensor. This sensor can be the initial sound disturbance read by a microphone 106 coupled to an actual string 103, or the capacitance measured on a touch sensor 110 against which the strings are pressed.
After calculating the forces, Verlet's algorithm calculates the positions in successive steps and records the sound produced as the distance from equilibrium of a chosen node or a small group of nodes, for example between 1 and 20 nodes, more preferably 10 nodes.
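A minimal, self-contained sketch of this integration loop is given below (Python; the node count, time step and coefficients are illustrative values for demonstration, not the instrument's actual parameters):

```python
def simulate(n, steps, dt, kR, kF, pluck_node, read_node):
    """Position-Verlet integration of a transverse string of n nodes with
    fixed ends. The string starts deflected at pluck_node; the returned
    samples are the displacement of read_node at each step (the 'sound')."""
    x = [0.0] * n
    x_prev = [0.0] * n
    x[pluck_node] = x_prev[pluck_node] = 1.0  # initial deflection ("pluck")
    samples = []
    for _ in range(steps):
        x_new = x[:]
        for i in range(1, n - 1):             # end nodes keep their position
            accel = kR * (x[i - 1] - 2 * x[i] + x[i + 1]) \
                    - kF * (x[i] - x_prev[i]) / dt
            x_new[i] = 2 * x[i] - x_prev[i] + accel * dt * dt
        x_prev, x = x, x_new
        samples.append(x[read_node])
    return samples
```

Because Verlet only stores current and previous positions, velocities never need to be computed explicitly; the friction term is approximated from the displacement over the last time step.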
In an embodiment, the signal-sending step D) of the method herein comprises sending the plurality of output signals to an analog audio output 118 or to a digital audio output 119, preferably an analog audio output 118.
When sending the plurality of output signals to an analog audio output 118, step D) comprises the substeps of i) converting the plurality of digital output signals to a plurality of analog output signals by using a digital-to-analog converter (DAC) 116, ii) conditioning the plurality of analog output signals resulting from step i) by means of an audio signal conditioning circuit 117.
In an embodiment, step D) of the sound reproduction method of the present invention comprises a step of modifying the plurality of output signals prior to their reproduction, preferably prior to their conversion to an analog signal. This modification step includes the application of digital effects, for example, distortions, echoes, reverberation chambers, among others.
In one embodiment the sound reproduction method of the present invention can be used to reproduce at least one of the following effects: plucked string, slurs between two notes, harmonics, trills, and muting.
In one embodiment the sound reproduction method of the present invention can be used to reproduce the sound of another musical instrument, such as, for example, guitars, basses, violins, violas, charangos, double basses, sitars, cuatros, ukuleles and any other string instrument.
Each and every embodiment resulting from the combination, in any of their forms, of the different methods disclosed in the present specification, of their steps and/or sub-steps, are also considered embodiments of the present invention.
EXAMPLES
Next, a particularly preferred embodiment of the instrument 100 of the present invention is detailed. It is to be considered only as an example and is not intended to limit the scope of possible embodiments of the instrument and methods of the present invention.
EXAMPLE 1 : PREFERRED EMBODIMENT OF THE MUSICAL INSTRUMENT OF THE PRESENT INVENTION
The instrument of the present invention comprises two parts, wherein the part of the instrument used to digitize the action of the deft hand of the performer is the body of the instrument, while the part of the instrument used to digitize the action of the non-deft hand of the performer is the neck of the instrument.
The musical instrument of the present invention thus comprises a body and a neck, wherein the body of the instrument comprises
- a container and a lid, thus defining an inner volume;
- six (6) strings located on the outer surface of the lid and extending along the lid, wherein said six (6) strings are metal electric guitar strings;
- two viscoelastic foam rubber media, wherein the first medium is located proximate to one end of the strings and the second medium is located proximate to the opposite end, both media being in contact with the strings;
- six (6) springs located on the outer surface of the lid, each linked to its corresponding string;
- six (6) microphones located on the outer surface of the lid, each below its corresponding string and without contact with the string;
- twelve (12) capacitive sensing integrated circuits located within the inner volume and connected to the six (6) strings by means of a wired connection;
- six (6) analog-to-digital converters (A/D converters) located inside the inner volume, connected to the corresponding microphones by means of a wired connection;
- a microcontroller, located in the inner volume of the body, connected to the capacitive sensing integrated circuits via an I2C connection;
- an audio output located in the inner volume of the body, wherein said audio output is analog and comprises a digital-to-analog converter (DAC) and an audio signal conditioning circuit that communicates with the DAC;
- a control and processing unit (CPU), located in the inner volume of the body, in communication with the six (6) A/D converters via an I2C or SPI connection, with the microcontroller via a USB connection, and with the audio output via the DAC, wherein said control unit comprises six (6) sound channels, each connected to its corresponding A/D converter, and a graphics processing unit (GPU);
and wherein the neck comprises a fretboard, wherein said fretboard comprises
- a printed circuit board comprising seventy-two (72) capacitive sensors, wherein said seventy-two (72) sensors are arranged in six (6) sensor rows of 12 sensors each, and wherein said circuit board is connected to the capacitive sensing integrated circuits of the body via an I2C connection,
- an accelerometer, where the accelerometer is integrated into an inertial measurement unit (IMU), located in the fretboard of the neck and connected to the body's microcontroller via an I2C or SPI connection, wherein the six (6) microphones, the six (6) A/D converters, the six (6) springs and the six (6) rows of sensors correspond to each of the strings of the six (6) strings, in a 1 to 1 ratio.

Claims

1) A musical instrument (100) that digitizes and continuously processes analog signals, digital signals, or both, produced by said instrument (100), and wherein said instrument (100) comprises
- a body (101) comprising a container and a lid, thus defining an inner volume, wherein said body further comprises at least one string (103) located on the outer surface of the lid and extending along the lid, wherein said at least one string (103) is a metal string,
- two string damping media (104a, 104b), wherein the first medium (104a) is located proximate to one end of the at least one string (103) and the second medium (104b) is located proximate to the opposite end of the at least one string (103) and wherein both media (104a, 104b) are in contact with the at least one string (103), at least one spring (105) located on the outer surface of the lid, wherein said at least one spring (105) is linked to the corresponding at least one string (103), at least one microphone (106) located on the outer surface of the lid, below the corresponding at least one string (103) and without contact with the string; at least one capacitive sensing integrated circuit (113) wherein said at least one capacitive sensing integrated circuit (113) is located within the inner volume and wherein said at least one capacitive sensing integrated circuit (113) is connected with the at least one string (103); at least one analog-to-digital converter (112), or A/D converter, located within the interior volume, wherein said A/D converter (112) is connected to the at least one microphone (106); a microcontroller (114), located in the inner volume of the body (101), connected to the capacitive sensing integrated circuit (113); at least one audio output (118 or 119) located in the interior volume of the body (101), wherein said at least one audio output may be analog (118), digital (119), or both; and a control and processing unit (CPU) (115), located in the interior volume of the body (101), where this unit (115) is in communication with at least one A/D converter (112), the microcontroller (114), the at least one audio output (118 or 119), and wherein said control and processing unit (115) comprises a graphics processing unit (GPU); a neck (102) comprising a fretboard (107), wherein said fretboard (107) comprises
- a printed circuit board (108) comprising a plurality of sensors (110) of the fretboard (107), and wherein said plurality of sensors (110) are arranged in at least one sensor row (109), and wherein said board (108) is connected to the at least one capacitive sensing integrated circuit (113),
- at least one additional modulator (111) of the model, wherein said modulator (111) is integrated to an inertial measurement unit (IMU) located in the fretboard (107) which is connected to the microcontroller (114), wherein the at least one microphone (106), the at least one capacitive sensing integrated circuit (113), the at least one A/D converter (112), the at least one spring (105), and the at least one row (109) of sensors (110) correspond to each of the strings of the at least one string (103), in a 1 to 1 ratio.
2) The musical instrument (100) according to claim 1, characterized in that the at least one A/D converter (112) digitizes the information coming from the at least one microphone (106) with a frequency of between 12 and 192 kHz.
3) The musical instrument (100) according to claim 1 or 2, characterized in that the at least one A/D converter (112) digitizes the information coming from the at least one microphone (106) with a frequency of 24, 48, 96 or 192 kHz.
4) The musical instrument (100) according to any one of claims 1 to 3, characterized in that the at least one A/D converter (112) digitizes the information coming from the at least one microphone (106) with a frequency of 96 kHz.
5) The musical instrument (100) according to any one of claims 1 to 4, characterized in that the at least one audio output is at least one analog audio output (118) comprising at least one digital-to-analog converter (DAC) (116) and one audio conditioner (117).
6) The musical instrument according to claim 5, characterized in that the at least one analog audio output (118) communicates with a reproduction medium.
7) The musical instrument (100) according to any one of claims 1 to 6, characterized in that it comprises 1, 2, 3, 4, 5, 6, 7 or 8 strings.
8) The musical instrument (100) according to any one of claims 1 to 7, characterized in that it comprises 4, 5, or 6 strings.
9) The musical instrument (100) according to any one of claims 1 to 8, characterized in that it comprises 6 strings.
10) A method for digitizing and processing signals coming from the instrument (100) of any one of claims 1-9, characterized in that it comprises the steps of:
A) continuously generating a plurality of signals by means of the musical instrument (100) of any one of claims 1-9, wherein said plurality of signals comprises digital and analog signals, and wherein said plurality of signals comprises i) signals coming from at least one string (103), ii) signals coming from at least one microphone (106), iii) signals coming from at least one sensor (110) of the plurality of sensors (110) of the fretboard (107), iv) signals coming from at least one additional modulator (111) of the model, or v) a combination thereof;
B) digitizing the plurality of analog signals of step A), wherein i) the plurality of signals coming from the at least one string (103) is digitized by means of at least one capacitive sensing integrated circuit (113), ii) the plurality of signals coming from the at least one microphone (106) is digitized by means of at least one A/D converter (112) with a frequency of between 12 and 192 kHz, iii) the plurality of signals coming from the at least one sensor (110) of the plurality of sensors (110) of the fretboard (107) is digitized by means of at least one capacitive sensing integrated circuit (113);
C) filtering the plurality of digital signals resulting from step A) or B) by means of the substeps of, i) setting a threshold value, ii) comparing each signal of the plurality of signals resulting from step A) or B) with said threshold value, iii) discarding those signals from the plurality of signals that were less than the threshold value;
D) processing the plurality of digital signals obtained from step C) to obtain a plurality of input signals, wherein said processing comprises i) converting the signals coming from the at least one string (103) into an increase of the friction coefficient, ii) converting the signals coming from at least one microphone (106) into a force, iii) converting the signals coming from the plurality of sensors (110) of the fretboard (107) into a friction or a movement restriction, iv) converting the signals coming from the at least one additional modulator (111) of the model into a force, volume, friction, position where a force is applied, or position where the sound is read.
11) The method according to claim 10, characterized in that it is performed in real time.
12) The method according to claim 10 or 11, characterized in that the plurality of signals coming from the at least one microphone (106) of step B) is digitized by means of at least one A/D converter (112) with a frequency of 24, 48, 96 or 192 kHz.
13) The method according to any one of claims 10 to 12, characterized in that the plurality of analog signals coming from the at least one microphone (106) of step B) is digitized by means of at least one A/D converter (112) with a frequency of 96 kHz.
14) The method according to any one of claims 10 to 13, characterized in that the plurality of signals generated in step A) are the result of actions of a performer on the musical instrument (100).
15) A method for synthesizing sounds comprising the steps of
A) modeling at least one virtual string (200) by means of the control and processing unit (115) of the instrument (100) according to any one of claims 1 to 9, following the substeps of i) arranging n nodes (201) joined in series along an axis by (n-1) springs (202), and wherein the node located at each end of the virtual string maintains its fixed position at the equilibrium position, ii) defining the parameters and variables of the n nodes (201) and the (n-1) springs (202); iii) defining the parameters and variables of at least one virtual string (200),
B) applying continuously on the at least one modeled virtual string (200) of step A) a plurality of input signals obtained by means of the digitizing and processing method of any one of claims 10-14,
C) producing a plurality of output signals by means of the substeps of i) calculating the variables corresponding to each virtual string of the at least one virtual string (200), to each node of the n nodes (201) and to each spring of the (n-1) springs (202), resulting from the application of the plurality of input signals of step B), ii) monitoring the position of at least one node of the n nodes (201), wherein said plurality of output signals comprise digital signals,
D) sending the plurality of output signals produced in step C) to an audio output for their reproduction.
16) The method according to claim 15, characterized in that said method is performed in real time.
17) The method according to claim 15 or 16, characterized in that step Cii) comprises monitoring the position of one or a small group of nodes, wherein said small group comprises between 1 and 20 nodes.
18) The method according to claim 17, characterized in that the small group of nodes comprises 10 nodes.
PCT/IB2024/056771 2023-07-11 2024-07-11 Musical instrument that digitizes and processes signals, and synthesizes sounds and related methods Pending WO2025012857A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ARP20230101813 2023-07-11
ARP230101813A AR129892A1 (en) 2023-07-11 2023-07-11 MUSICAL INSTRUMENT THAT DIGITIZES AND PROCESSES SIGNALS, AND SYNTHESIZES SOUNDS AND RELATED METHODS

Publications (1)

Publication Number Publication Date
WO2025012857A1 true WO2025012857A1 (en) 2025-01-16

Family

ID=92302688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2024/056771 Pending WO2025012857A1 (en) 2023-07-11 2024-07-11 Musical instrument that digitizes and processes signals, and synthesizes sounds and related methods

Country Status (2)

Country Link
AR (1) AR129892A1 (en)
WO (1) WO2025012857A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5587548A (en) * 1993-07-13 1996-12-24 The Board Of Trustees Of The Leland Stanford Junior University Musical tone synthesis system having shortened excitation table
US6049034A (en) * 1999-01-19 2000-04-11 Interval Research Corporation Music synthesis controller and method
US20080236374A1 (en) 2007-03-30 2008-10-02 Cypress Semiconductor Corporation Instrument having capacitance sense inputs in lieu of string inputs
US20110239848A1 (en) 2010-04-02 2011-10-06 Idan Beck Electronic musical instrument
US8093482B1 (en) 2008-01-28 2012-01-10 Cypress Semiconductor Corporation Detection and processing of signals in stringed instruments
US20130180384A1 (en) * 2012-01-17 2013-07-18 Gavin Van Wagoner Stringed instrument practice device and system
US20150262559A1 (en) * 2014-03-17 2015-09-17 Incident Technologies, Inc. Musical input device and dynamic thresholding
US20180047373A1 (en) * 2012-01-10 2018-02-15 Artiphon, Inc. Ergonomic electronic musical instrument with pseudo-strings
US20220208160A1 (en) 2019-07-21 2022-06-30 Jorge Marticorena Integrated Musical Instrument Systems


Also Published As

Publication number Publication date
AR129892A1 (en) 2024-10-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24755056

Country of ref document: EP

Kind code of ref document: A1