
CN108257588B - Music composing method and device

Info

Publication number
CN108257588B
Authority
CN
China
Prior art keywords
note
standard
target standard
current
notes
Prior art date
Legal status
Active
Application number
CN201810058361.XA
Other languages
Chinese (zh)
Other versions
CN108257588A (en)
Inventor
姜峰
姜皓天
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201810058361.XA
Publication of CN108257588A
Priority to SG10201900497XA
Priority to TW108102344A
Application granted
Publication of CN108257588B

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 7/00: Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H 2210/00: Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/101: Music composition or musical creation; tools or processes therefor
    • G10H 2210/111: Automatic composing, i.e. using predefined musical rules
    • G10H 2210/115: Automatic composing using a random process to generate a musical note, phrase, sequence or structure
    • G10H 2210/121: Automatic composing using a random process to generate a musical note, phrase, sequence or structure, using a knowledge base
    • G10H 2210/145: Composing rules, e.g. harmonic or musical rules, for use in automatic composition; rule generation algorithms therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Auxiliary Devices For Music (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

The invention provides a music composing method and device. The method comprises: constructing a note repository, wherein the note repository comprises at least one standard note, at least one standard audio feature, and a correspondence between each standard note and at least one standard audio feature; acquiring tune information generated at a user side; parsing at least one current audio feature from the acquired tune information; determining, from the note repository, a target standard audio feature corresponding to each current audio feature; determining the target standard note corresponding to each target standard audio feature according to the correspondence; and generating a musical score corresponding to the tune information from the determined target standard notes. The degree of professional musical knowledge required for composing can thus be reduced.

Description

Music composing method and device
Technical Field
The invention relates to the technical field of voice recognition, in particular to a music composing method and device.
Background
Music composition expresses a creator's musical ideas by applying the technical theory of harmony, polyphony, orchestration and musical structure.
Composing requires integrating, assembling and creatively arranging musical material, which in turn requires the creator to have studied professional music theory, for example how specific notes combine to form a particular mode such as a major or minor key. Composition is therefore a specialist activity.
However, many people without formal musical training also create new tunes, for example by humming an impromptu melody or playing an instrument freely. Because they lack professional musical knowledge, it is difficult for them to convert the created tune into a specific combination of notes, i.e. to produce a corresponding musical score.
Disclosure of Invention
The embodiment of the invention provides a music composing method and device, which reduce the degree of professional knowledge required to compose music.
In a first aspect, an embodiment of the present invention provides a music composition method, including:
constructing a note repository, wherein the note repository comprises: at least one standard note and at least one standard audio feature, and a correspondence between each of the standard notes and at least one standard audio feature;
acquiring tune information generated from a user side;
analyzing at least one current audio characteristic from the acquired tune information;
determining, from the note store, a target standard audio feature corresponding to each of the current audio features;
determining target standard musical notes corresponding to the target standard audio features according to the corresponding relation;
and generating a music score corresponding to the tune information according to the determined target standard musical notes.
Preferably,
each of the standard audio features corresponds to a sound frequency range and a high-low tone identifier;
said determining from said note store a target standard audio feature corresponding to each of said current audio features, comprising:
for each of the current audio features, performing:
determining the current sound frequency corresponding to the current audio feature, and searching a sound frequency range corresponding to the current sound frequency from the note storage library;
taking the standard audio features corresponding to the searched sound frequency range as the target standard audio features;
the generating a music score corresponding to the tune information according to the determined target standard musical notes comprises:
for each of the target standard audio features: taking a high-low tone identifier corresponding to the target standard audio feature as a high-low tone identifier of the corresponding target standard musical note;
and generating a music score corresponding to the tune information according to the target standard notes and the high-low tone identifiers of the target standard notes.
Preferably,
the analyzing at least one current audio feature from the acquired tune information includes:
analyzing the at least one current audio characteristic from the tune information according to a time sequence, and recording a time point of each current audio characteristic corresponding to the tune information;
the generating a music score corresponding to the tune information according to the determined target standard musical notes comprises:
determining the time interval between every two adjacent current audio features according to the recorded time point of each current audio feature corresponding to the tune information;
determining the beat corresponding to each target standard note according to the determined time interval;
and generating the music score according to the determined target standard musical notes and the beat corresponding to each target standard musical note.
Preferably,
the generating a music score corresponding to the tune information according to the determined target standard musical notes comprises:
taking any one of the target standard notes as a current target standard note, performing A1 to A3:
A1: determining the adjacent notes of the current target standard note according to the time point at which each current audio feature corresponds to the tune information;
A2: determining whether the pause time between the adjacent note and the current target standard note is smaller than a preset threshold, and if so, executing A3;
A3: merging the adjacent note with the current target standard note into a connected group, and executing A1 with the adjacent note as the current target standard note;
determining a start note and an end note from the target standard notes in the connected group according to the time point at which each current audio feature corresponds to the tune information;
connecting the start note and the end note with a slur line;
and generating a music score corresponding to the tune information according to the connected start note and end note and the other, unconnected target standard notes.
Preferably,
the building note repository comprises:
acquiring at least one standard audio feature generated by at least one sound source, wherein each standard audio feature corresponds to one standard musical note, and the at least one sound source comprises: any one or more of musical instruments, singers, and electronic audio equipment;
and generating the note storage library according to the acquired standard audio features and the standard notes corresponding to each standard audio feature.
In a second aspect, an embodiment of the present invention provides a music composing device, comprising: a construction unit, an analysis unit, a determination unit and a music score generation unit; wherein,
the construction unit is configured to construct a note repository, where the note repository includes: at least one standard note and at least one standard audio feature, and a correspondence between each of the standard notes and at least one standard audio feature;
the analysis unit is used for acquiring tune information generated from a user side and parsing at least one current audio feature from the acquired tune information;
the determining unit is configured to determine, from the note repository, a target standard audio feature corresponding to each of the current audio features, and determine, according to the correspondence, a target standard note corresponding to each of the target standard audio features;
and the music score generating unit is used for generating a music score corresponding to the tune information according to the determined target standard musical notes.
Preferably,
each of the standard audio features corresponds to a sound frequency range and a high-low tone identifier;
the determining unit is configured to, for each of the current audio features, perform: determining the current sound frequency corresponding to the current audio feature, and searching a sound frequency range corresponding to the current sound frequency from the note storage library; taking the standard audio features corresponding to the searched sound frequency range as the target standard audio features;
the score generation unit is configured to, for each of the target standard audio features: and taking the high-low tone identifier corresponding to the target standard audio characteristic as the high-low tone identifier of the corresponding target standard musical note, and generating a music score corresponding to the tune information according to each target standard musical note and the high-low tone identifier of each target standard musical note.
Preferably,
the analysis unit is used for analyzing the at least one current audio characteristic from the tune information according to a time sequence and recording a time point of each current audio characteristic corresponding to the tune information;
the music score generating unit is used for determining a time interval between every two adjacent current audio features according to the recorded time point of each current audio feature corresponding to the tune information, and determining a beat corresponding to each target standard note according to the determined time interval;
and generating the music score according to the determined target standard musical notes and the beat corresponding to each target standard musical note.
Preferably,
the music score generation unit comprises a connection group determination subunit, a connection subunit and a generation subunit; wherein,
the connected group determining subunit is configured to perform, with any one of the target standard notes as a current target standard note, a1 to A3:
A1: determining the adjacent notes of the current target standard note according to the time point at which each current audio feature corresponds to the tune information;
A2: determining whether the pause time between the adjacent note and the current target standard note is smaller than a preset threshold, and if so, executing A3;
A3: merging the adjacent note with the current target standard note into a connected group, and executing A1 with the adjacent note as the current target standard note;
the connecting subunit is configured to determine a start note and an end note from the target standard notes in the connected group according to the time point at which each current audio feature corresponds to the tune information, and to connect the start note and the end note with a slur line;
the generating subunit is configured to generate a musical score corresponding to the tune information according to the connected start note and end note and the other, unconnected target standard notes.
Preferably,
the constructing unit is configured to obtain at least one standard audio feature generated by at least one sound source, where each standard audio feature corresponds to one standard musical note, and the at least one sound source includes: any one or more of musical instruments, singers, and electronic audio equipment; and generating the note storage library according to the acquired standard audio features and the standard notes corresponding to each standard audio feature.
The embodiment of the invention provides a music composing method and device in which a note repository containing at least one standard note, at least one standard audio feature, and the correspondence between them is constructed in advance. When tune information generated at a user side is acquired, at least one current audio feature is parsed from it; a target standard audio feature corresponding to each current audio feature is then determined from the pre-built note repository; the target standard note corresponding to each target standard audio feature is determined according to the stored correspondence; and a musical score corresponding to the tune information is generated from the determined target standard notes. A score can thus be generated automatically from the tune information produced at the user side, so that users without formal musical training can create musical scores, reducing the degree of professional knowledge composing requires.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of a method of composing music according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of composing music according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a music composing device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a music composing device according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention, and based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative efforts belong to the scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a music composition method, which may include the steps of:
step 101: constructing a note repository, wherein the note repository comprises: at least one standard note and at least one standard audio feature, and a correspondence between each of the standard notes and at least one standard audio feature;
step 102: acquiring tune information generated from a user side;
step 103: analyzing at least one current audio characteristic from the acquired tune information;
step 104: determining, from the note store, a target standard audio feature corresponding to each of the current audio features;
step 105: determining target standard musical notes corresponding to the target standard audio features according to the corresponding relation;
step 106: and generating a music score corresponding to the tune information according to the determined target standard musical notes.
In the above embodiment, a note repository containing at least one standard note, at least one standard audio feature and the correspondence between them is constructed in advance. When tune information generated at a user side is acquired, at least one current audio feature is parsed from it; a target standard audio feature corresponding to each current audio feature is then determined from the pre-built note repository; the target standard note corresponding to each target standard audio feature is determined according to the stored correspondence; and a musical score corresponding to the tune information is generated from the determined target standard notes. A score can thus be generated automatically from the tune information produced at the user side, so that users without formal musical training can create musical scores, reducing the degree of professional knowledge composing requires.
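By way of illustration only, the flow of fig. 1 can be sketched in a few lines of Python. The sketch below assumes the note repository is a flat list of entries, each holding a sound frequency range, a high-low tone identifier and a standard note, and that each current audio feature has already been reduced to a single fundamental frequency; all identifiers (NoteEntry, match_note, compose) are illustrative and do not come from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class NoteEntry:
    low_hz: float    # inclusive lower bound of the sound frequency range
    high_hz: float   # exclusive upper bound of the sound frequency range
    pitch_mark: str  # high-low tone identifier: "", "dot above" or "dot below"
    note: str        # standard note in numbered notation, e.g. "1"

def match_note(repo: List[NoteEntry], freq_hz: float) -> Optional[NoteEntry]:
    # Steps 104-105: find the standard audio feature whose sound frequency
    # range contains the current sound frequency, then read off its note.
    for entry in repo:
        if entry.low_hz <= freq_hz < entry.high_hz:
            return entry
    return None

def compose(repo: List[NoteEntry], freqs: List[float]) -> List[Tuple[str, str]]:
    # Steps 103-106: map every current audio feature (here, one frequency per
    # feature) to a (note, pitch mark) pair; rendering the pairs as a score,
    # with beats and slurs, is handled by the later steps described below.
    score = []
    for f in freqs:
        entry = match_note(repo, f)
        if entry is not None:
            score.append((entry.note, entry.pitch_mark))
    return score
```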
In one embodiment of the invention, each standard audio feature corresponds to a sound frequency range and a high-low tone identifier; a specific implementation of step 104 may include:
for each of the current audio features, performing:
determining the current sound frequency corresponding to the current audio feature, and searching a sound frequency range corresponding to the current sound frequency from the note storage library;
taking the standard audio features corresponding to the searched sound frequency range as the target standard audio features;
embodiments of step 106 may include:
for each of the target standard audio features: taking a high-low tone identifier corresponding to the target standard audio feature as a high-low tone identifier of the corresponding target standard musical note;
and generating a music score corresponding to the tune information according to the target standard notes and the high-low tone identifiers of the target standard notes.
The frequency of a sound represents its pitch. Each standard audio feature stored in the note repository corresponds to a sound frequency range and a high-low tone identifier. For example, the standard audio feature "duo" corresponds to the sound frequency range [a2, a3), its treble to [a3, a4), and its bass to [a1, a2), where a1 < a2 < a3 < a4. The standard audio feature "duo" and its treble and bass all correspond to the standard note 1: the mid-range form carries an empty high-low tone identifier, the treble a dot "·" above the note 1, and the bass a dot "·" below it. After the current sound frequency of a current audio feature has been determined, say a3, the matching sound frequency range [a3, a4) can be looked up, from which it follows that the corresponding standard audio feature is the treble of "duo", the target standard note is 1, and the high-low tone identifier is a dot above the 1. In this way the target standard note and the high-low tone identifier for every current audio feature can be determined (the example notes are shown in a figure of the original publication, omitted here), and the score is generated from the determined target standard notes and their high-low tone identifiers, improving the accuracy of score generation.
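A minimal sketch of the "duo" example above; the boundary frequencies a1 < a2 < a3 < a4 are placeholder values, since the patent leaves the concrete boundaries open.

```python
# Assumed boundary frequencies in Hz, for illustration only.
a1, a2, a3, a4 = 120.0, 240.0, 480.0, 960.0

# Each entry: (sound frequency range, high-low tone identifier, standard note).
duo_entries = [
    ((a1, a2), "dot below", "1"),  # bass "duo"
    ((a2, a3), "",          "1"),  # mid-range "duo": empty identifier
    ((a3, a4), "dot above", "1"),  # treble "duo"
]

def lookup(freq_hz):
    # Step 104: find the sound frequency range containing the current frequency.
    for (lo, hi), mark, note in duo_entries:
        if lo <= freq_hz < hi:
            return note, mark
    return None

print(lookup(a3))  # ('1', 'dot above'): treble "duo", written as 1 with a dot above it
```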
In an embodiment of the present invention, a specific implementation of step 103 may include: parsing the at least one current audio feature from the tune information in time order, and recording the time point at which each current audio feature occurs in the tune information;
a specific implementation of step 106 may include:
determining the time interval between every two adjacent current audio features according to the recorded time point of each current audio feature corresponding to the tune information;
determining the beat corresponding to each target standard note according to the determined time interval;
and generating the music score according to the determined target standard musical notes and the beat corresponding to each target standard musical note.
In this embodiment, the current audio features are parsed from the tune information in time order. For example, if the received tune information is a melody with a total duration of 30 s, it is parsed from its starting point (0 s) to its end (30 s). Each time a current audio feature is parsed out, the time point at which it occurs in the tune information is determined from the moment it was parsed and the moment at which parsing began. For example, if parsing of the tune information began at 13:30:00 and the current audio feature A was parsed at 13:30:10, the time point of feature A in the tune information is the 10th second.
After the time point of each current audio feature in the tune information has been determined, the time interval between every two adjacent current audio features can be computed. For example, if the time point of current audio feature B is the 8th second, no other current audio feature lies between B and A, and B is adjacent to A, then the interval between B and A is 2 s. If the preset beat length is 1 s, feature B lasts two beats, written in the score as "B -"; if the preset beat length is 0.5 s, it lasts four beats, written as "B - - -". The target standard notes together with their beats can then be used to generate the score, so that the generated score better matches the tune information produced at the user side, improving its accuracy.
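The beat rule just described can be sketched as follows. The seconds-per-beat value is the user-configurable preset, and giving the final note one beat is an assumption, since the patent does not say how the last note's duration is fixed.

```python
def beats_for_notes(time_points, seconds_per_beat=1.0):
    # time_points: the recorded time point (in seconds) of each current audio
    # feature in the tune information, in chronological order. A note's beat
    # count is the interval to the next adjacent feature divided by the preset
    # beat length; the last note has no successor and defaults to one beat.
    beats = []
    for i, t in enumerate(time_points):
        if i + 1 < len(time_points):
            interval = time_points[i + 1] - t
            beats.append(max(1, round(interval / seconds_per_beat)))
        else:
            beats.append(1)
    return beats

# Features B (8th second) and A (10th second), as in the example above:
print(beats_for_notes([8.0, 10.0], seconds_per_beat=1.0))  # [2, 1] -> "B -"
print(beats_for_notes([8.0, 10.0], seconds_per_beat=0.5))  # [4, 1] -> "B - - -"
```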
In an embodiment of the present invention, the detailed implementation of step 106 may include:
taking any one of the target standard notes as a current target standard note, performing A1 to A3:
A1: determining the adjacent notes of the current target standard note according to the time point at which each current audio feature corresponds to the tune information;
A2: determining whether the pause time between the adjacent note and the current target standard note is smaller than a preset threshold, and if so, executing A3;
A3: merging the adjacent note with the current target standard note into a connected group, and executing A1 with the adjacent note as the current target standard note;
determining a start note and an end note from the target standard notes in the connected group according to the time point at which each current audio feature corresponds to the tune information;
connecting the start note and the end note with a slur line;
and generating a music score corresponding to the tune information according to the connected start note and end note and the other, unconnected target standard notes.
For example, suppose the determined target standard notes are A, B, C and D, with A at the 5th second of the tune information, B at the 6th, C at the 8th and D at the 9th. Taking the target standard note C as the current target standard note, its adjacent notes are B and D. Whether the pause time between C and each of B and D is smaller than the preset threshold is then determined; if it is, the current target standard note and that adjacent note were sung continuously in the tune information and are to be connected with a slur line.
Suppose the pause time between C and its adjacent note B is smaller than the preset threshold while the pause time between C and its adjacent note D is larger. Then C and B are merged into a connected group, B becomes the current target standard note, and its adjacent notes are examined in turn; here the adjacent note A is found, and if the pause time between B and A is also smaller than the threshold, A too joins the connected group and becomes the current target standard note, and so on, until either the current target standard note has no remaining adjacent notes among the target standard notes or the pause time to the adjacent note is not smaller than the preset threshold.
Then, according to the time point corresponding to each target standard note, the start note and the end note of the connected group are determined; here the target standard note A is the start note and C the end note, and the two are connected with a slur line spanning from A to C. By treating each target standard note in turn as the current target standard note, the connection relation between all the target standard notes can be determined, and the score is then generated from the connected target standard notes (the start and end notes) and the unconnected ones. The generated score thus reflects the continuous-singing relations between the audio features in the tune information, improving its accuracy.
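A sketch of the A1-A3 loop, iterating over the target standard notes in time order. Note that the patent treats the pause as a measured gap between adjacent notes rather than a simple difference of onset time points, so each note below carries an assumed pause-to-next value; all names are illustrative.

```python
def connected_groups(notes, threshold):
    # notes: list of (name, pause_to_next_seconds) in chronological order; the
    # pause of the final note is unused. Adjacent notes whose pause falls below
    # the preset threshold are merged into one connected group (steps A2/A3),
    # later rendered with a single slur from its start note to its end note.
    groups, current = [], [notes[0][0]]
    for (_, pause), (next_name, _) in zip(notes, notes[1:]):
        if pause < threshold:          # A2: pause below the preset threshold
            current.append(next_name)  # A3: merge the adjacent note in
        else:
            groups.append(current)     # close the group; A1 restarts here
            current = [next_name]
    groups.append(current)
    return groups

# Matching the example above: the A-B and B-C pauses fall below the threshold,
# the C-D pause does not, so A, B and C form one connected group (slur drawn
# from A to C) while D stands alone.
print(connected_groups([("A", 0.1), ("B", 0.2), ("C", 0.8), ("D", 0.0)], 0.5))
# -> [['A', 'B', 'C'], ['D']]
```

Because this loop walks the notes once in chronological order, each adjacent pause is examined exactly once, which is also the efficiency point made for steps 207 to 211 in the detailed embodiment below.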
In an embodiment of the present invention, the specific implementation of step 101 may include: acquiring at least one standard audio feature generated by at least one sound source, wherein each standard audio feature corresponds to one standard musical note, and the at least one sound source comprises: any one or more of musical instruments, singers, and electronic audio equipment;
and generating the note storage library according to the acquired standard audio features and the standard notes corresponding to each standard audio feature.
For example, the standard notes are 1, 2, 3, 4, 5, 6 and 7. When constructing the note repository, the standard audio features corresponding to these notes, i.e. do, re, mi, fa, sol, la and si, can be collected as produced by various instruments such as a piano, a guitar or a drum kit. Students, teachers or professional singers at a music college can likewise sing the standard notes so that the corresponding standard audio features can be collected. In addition, electronic audio devices such as smart speakers or MP3 players can play the corresponding standard notes from music programs stored in advance, so that the corresponding standard audio features can be acquired.
It is worth mentioning that, so that the standard audio feature corresponding to a current audio feature can be determined conveniently and accurately, standard audio features at different pitches are acquired for each standard note. Taking standard note 1 as an example, the standard audio features of "do" at various pitches, such as treble, double treble, bass and double bass, are all acquired for it. In this way every standard note stored in the note repository corresponds to standard audio features of different pitches, and every standard audio feature corresponds to the timbres of several sound sources, so that the target standard audio feature corresponding to a current audio feature can be matched accurately and the corresponding score generated accurately.
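One possible repository layout implied by this passage is sketched below; keying by (standard note, pitch) and keeping per-source feature lists are assumptions, since the patent leaves the feature representation itself open.

```python
# Each (standard note, high-low tone) key maps to the standard audio features
# collected from several sound sources; the feature values are placeholders.
note_repository = {
    ("1", "treble"): {"piano": ["..."], "guitar": ["..."], "singer": ["..."]},
    ("1", "mid"):    {"piano": ["..."], "guitar": ["..."], "singer": ["..."]},
    ("1", "bass"):   {"piano": ["..."], "guitar": ["..."], "singer": ["..."]},
    # entries for standard notes 2-7 at each pitch follow the same pattern
}
```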
The following describes in detail the music composing method provided by an embodiment of the present invention, taking the generation of a numbered musical notation score from the tune information of a user's humming as an example. As shown in fig. 2, the method may include the following steps:
step 201: obtaining at least one standard audio characteristic sung by at least one singer, wherein each standard audio characteristic corresponds to one standard musical note, and each standard audio characteristic corresponds to a voice frequency range and a high-low voice identifier.
The standard notes in the numbered musical notation are 1, 2, 3, 4, 5, 6 and 7, which may be sung by students, teachers or professional singers at a music college so that the standard audio features corresponding to each standard note can be collected.
In addition, when acquiring the audio features, standard audio features at different pitches are acquired for each standard note. Taking standard note 1 as an example, the standard audio features of "do" at various pitches, such as treble and bass, are acquired for it.
Meanwhile, each standard audio feature stored in the note repository corresponds to a sound frequency range and a high-low tone identifier. For example, the standard audio feature "duo" corresponds to the sound frequency range [a2, a3), its treble to [a3, a4), and its bass to [a1, a2), where a1 < a2 < a3 < a4. The standard audio feature "duo" and its treble and bass all correspond to the standard note 1: the mid-range form carries an empty high-low tone identifier, the treble a dot "·" above the note 1, and the bass a dot "·" below it.
Step 202: and generating the note storage library according to the acquired standard audio features and the standard notes corresponding to each standard audio feature.
A note repository is constructed from the plurality of standard audio features acquired in step 201, so that each standard note stored in it corresponds to standard audio features of different pitches and each standard audio feature corresponds to the timbres of several sound sources, which facilitates the later matching.
Step 203: acquiring the tune information of the user's humming, parsing at least one current audio feature from the acquired tune information in time order, and recording the time point at which each current audio feature occurs in the tune information.
For example, if the received tune information is a melody with a total duration of 30 s, it is parsed from its starting point (0 s) to its end (30 s). Each time a current audio feature is parsed out, the time point at which it occurs in the tune information is determined from the moment it was parsed and the moment at which parsing began. For example, if parsing began at 13:30:00 and the current audio feature A was parsed at 13:30:10, the time point of feature A in the tune information is the 10th second. By analogy, the time point of every current audio feature in the tune information can be determined.
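A small sketch of this time-point bookkeeping; the wall-clock values are the ones from the example, and datetime arithmetic stands in for whatever clock the parser actually uses.

```python
from datetime import datetime

start = datetime.strptime("13:30:00", "%H:%M:%S")   # parsing of the tune begins
parsed = datetime.strptime("13:30:10", "%H:%M:%S")  # current audio feature A parsed
time_point = (parsed - start).total_seconds()
print(time_point)  # 10.0 -> feature A corresponds to the 10th second of the tune
```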
Step 204: for each current audio feature: determining the current sound frequency corresponding to it, searching the note repository for the sound frequency range containing that frequency, and taking the standard audio feature corresponding to the found range as the target standard audio feature for that current audio feature.
Step 205: determining the target standard note corresponding to each target standard audio feature according to the correspondence, and taking the high-low tone identifier of each target standard audio feature as the high-low tone identifier of the corresponding target standard note.
For example, if the current sound frequency of a current audio feature is determined to be a3, the corresponding sound frequency range [a3, a4) is found, from which it is determined that the matching standard audio feature is the treble of "duo", the corresponding target standard note is 1, and the high-low tone identifier is a dot "·" above the 1. By analogy, the target standard note and the high-low tone identifier for every current audio feature can be determined.
Step 206: determining the time interval between every two adjacent current audio features according to the recorded time point of each current audio feature in the tune information, and determining the beat corresponding to each target standard note according to the determined time interval.
For example, if the time point of current audio feature B is the 8th second, no other current audio feature lies between B and A, and B is adjacent to A, then the interval between them is 2 s. If the preset beat length is 1 s, feature B lasts two beats, written in the score as "B -"; if the preset beat length is 0.5 s, it lasts four beats, written as "B - - -". The number of seconds per preset beat can be customized by the user before composing.
Step 207: selecting one unselected target standard note from the target standard notes as the current target standard note.
Step 208: determining the adjacent notes of the current target standard note according to the time point at which each current audio feature corresponds to the tune information.
For example, suppose the determined target standard notes are A, B, C and D, with A at the 5th second of the tune information, B at the 6th, C at the 8th and D at the 9th. Taking the target standard note C as the current target standard note, its adjacent notes are B and D.
Step 209: judging whether the pause time between the adjacent note and the current target standard note is smaller than a preset threshold; if so, executing step 210; otherwise, ending the current flow.
Step 210: merging the adjacent note with the current target standard note into a connected group, taking the adjacent note as the current target standard note, and executing step 208.
Whether the pause time between the current target standard note C and each of its adjacent notes B and D is smaller than the preset threshold is determined; if it is, the current target standard note and that adjacent note were sung continuously in the tune information and are to be connected with a slur line.
Suppose the pause time between C and its adjacent note B is smaller than the preset threshold while the pause time between C and its adjacent note D is larger. Then C and B are merged into a connected group, B becomes the current target standard note, and its adjacent notes are examined in turn; here the adjacent note A is found, and if the pause time between B and A is also smaller than the threshold, A too joins the connected group and becomes the current target standard note, and so on, until either the current target standard note has no remaining adjacent notes among the target standard notes or the pause time to the adjacent note is not smaller than the preset threshold.
It will be appreciated that, to determine the connected notes for every target standard note, the target standard notes may be selected in order of their time points, starting from the first. In this example the target standard note A is selected first, the pause time between A and its adjacent note B is determined, and the remaining notes are selected in turn. This prevents the pause time between two adjacent target standard notes from being computed repeatedly during the loop: when A is the current target standard note the pause between A and B is computed, and when B becomes the current note the pause between B and A would be computed again, although the two are the same value and need be determined only once. Selecting the notes in time order therefore improves the efficiency of determining the connected groups.
Step 211: determining whether any unselected target standard note remains; if so, executing step 207, otherwise executing step 212.
Step 212: for each connected group: determining a start note and an end note from the target standard notes in the group according to the time point at which each current audio feature corresponds to the tune information, and connecting the start note and the end note with a slur line.
The start note and the end note of the connected group are determined from the time points of the target standard notes; here the target standard note A is the start note and C the end note, and the two are connected with a slur line spanning from A to C (covering B).
Step 213: generating the numbered musical notation corresponding to the tune information according to the connected start and end notes, the other, unconnected target standard notes, and the high-low tone identifier and beat corresponding to each target standard note.
After the corresponding score has been generated from the tune information produced at the user side, the current audio features parsed from that tune information can themselves be stored in the note repository as standard audio features, together with their correspondence to the standard notes. The standard audio features in the repository are thus continuously enriched as the system is used, so that the target standard audio feature corresponding to a current audio feature can be determined more accurately when later users compose from their tune information, which facilitates score generation.
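A sketch of this incremental update, assuming the repository maps each standard note to a list of standard audio features; the names are illustrative.

```python
def update_repository(repository, target_note, current_feature):
    # Store the parsed current audio feature as an additional standard audio
    # feature for the note it was matched to, keeping the correspondence, so
    # later lookups can also match against features heard from real users.
    repository.setdefault(target_note, []).append(current_feature)
    return repository
```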
As shown in fig. 3, an embodiment of the present invention provides a music composing device, including: a construction unit 301, an analysis unit 302, a determination unit 303 and a score generation unit 304; wherein,
the constructing unit 301 is configured to construct a note repository, where the note repository includes: at least one standard note and at least one standard audio feature, and a correspondence between each of the standard notes and at least one standard audio feature;
the parsing unit 302 is configured to obtain tune information generated from a user side, and parse at least one current audio feature from the obtained tune information;
the determining unit 303 is configured to determine, from the note repository, a target standard audio feature corresponding to each of the current audio features, and determine, according to the correspondence, a target standard note corresponding to each of the target standard audio features;
the music score generating unit 304 is configured to generate a music score corresponding to the tune information according to the determined target standard note.
In one embodiment of the invention, each standard audio feature corresponds to a sound frequency range and a high-low tone identifier;
the determining unit 303 is configured to, for each of the current audio features, perform: determining the current sound frequency corresponding to the current audio feature, and searching a sound frequency range corresponding to the current sound frequency from the note storage library; taking the standard audio features corresponding to the searched sound frequency range as the target standard audio features;
the score generating unit 304 is configured to, for each of the target standard audio features: and taking the high-low tone identifier corresponding to the target standard audio characteristic as the high-low tone identifier of the corresponding target standard musical note, and generating a music score corresponding to the tune information according to each target standard musical note and the high-low tone identifier of each target standard musical note.
In an embodiment of the present invention, the parsing unit 302 is configured to parse the at least one current audio feature from the tune information according to a time sequence, and record a time point of each current audio feature corresponding to the tune information;
the music score generating unit 304 is configured to determine a time interval between every two adjacent current audio features according to a recorded time point at which each current audio feature corresponds to the tune information, and determine a beat corresponding to each target standard note according to the determined time interval; and generating the music score according to the determined target standard musical notes and the beat corresponding to each target standard musical note.
As shown in fig. 4, in an embodiment of the present invention, the music score generating unit 304 includes a connection group determining subunit 3041, a connection subunit 3042 and a generating subunit 3043; wherein,
the connected group determining subunit 3041 is configured to take any one of the target standard notes as the current target standard note and perform A1 to A3:
A1: determining the adjacent notes of the current target standard note according to the time point at which each current audio feature corresponds to the tune information;
A2: determining whether the pause time between the adjacent note and the current target standard note is smaller than a preset threshold, and if so, executing A3;
A3: merging the adjacent note with the current target standard note into a connected group, and executing A1 with the adjacent note as the current target standard note;
the connecting subunit 3042 is configured to determine a start note and an end note from the target standard notes in each connected group according to the time point at which each current audio feature corresponds to the tune information, and to connect the start note and the end note with a slur line;
the generating subunit 3043 is configured to generate the musical score corresponding to the tune information according to the connected start and end notes and the other, unconnected target standard notes.
In an embodiment of the present invention, the constructing unit 301 is configured to obtain at least one standard audio feature generated by at least one sound source, where each standard audio feature corresponds to one standard musical note, and the at least one sound source includes: any one or more of musical instruments, singers, and electronic audio equipment; and generating the note storage library according to the acquired standard audio features and the standard notes corresponding to each standard audio feature.
Because the information interaction, execution process, and other contents between the units in the device are based on the same concept as the method embodiment of the present invention, specific contents may refer to the description in the method embodiment of the present invention, and are not described herein again.
An embodiment of the present invention further provides a readable medium, which includes an execution instruction, and when a processor of a storage controller executes the execution instruction, the storage controller executes a method provided in any one of the above embodiments of the present invention.
An embodiment of the present invention further provides a storage controller, including: a processor, a memory, and a bus; the memory is used for storing execution instructions, the processor is connected with the memory through the bus, and when the storage controller runs, the processor executes the execution instructions stored in the memory, so that the storage controller executes the method provided by any one of the above embodiments of the invention.
In summary, the above embodiments of the present invention have at least the following advantages:
1. In the embodiment of the invention, a note repository containing at least one standard note, at least one standard audio feature and the correspondence between them is constructed in advance. When tune information generated at a user side is acquired, at least one current audio feature is parsed from it, the target standard audio feature corresponding to each current audio feature is determined from the pre-built note repository, the target standard note corresponding to each target standard audio feature is determined according to the stored correspondence, and a musical score corresponding to the tune information is generated from the determined target standard notes. A score can thus be generated automatically from the tune information produced at the user side, so that users without formal musical training can create musical scores, reducing the degree of professional knowledge composing requires.
2. In the embodiment of the invention, after the target standard musical notes are determined, the high-low tone identifier corresponding to each target standard musical note is further determined, and the music score is generated according to the determined target standard musical notes and the high-low tone identifiers corresponding to the target standard musical notes, so that the accuracy of the generated music score is improved.
3. In the embodiment of the invention, after the target standard notes are determined, the beat corresponding to each target standard note is further determined from the time interval between adjacent target standard notes, and the score is generated from the target standard notes and their corresponding beats, so that the generated score better matches the tune information produced at the user side, improving its accuracy.
4. In the embodiment of the invention, after each target standard note is determined, the continuous note corresponding to each target standard note is determined according to the pause time between the adjacent target standard notes, and then the music score is generated according to the connected target standard note and the unconnected target standard note, so that the generated music score can show the connection relation between the target standard notes, thereby improving the accuracy of the music score.
5. In the embodiment of the invention, each standard note stored in the note storage library corresponds to the standard audio characteristics with different high and low tones, and each standard audio characteristic corresponds to the tone colors of a plurality of sound sources, so that the target standard audio characteristic corresponding to the current audio characteristic can be accurately matched, and the corresponding music score can be accurately generated.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other similar elements in a process, method, article, or apparatus that comprises the element.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (6)

1. A method of composing a musical score, comprising:
constructing a note repository, wherein the note repository comprises: at least one standard note, at least one standard audio feature, and a correspondence between each of the standard notes and at least one standard audio feature;
acquiring tune information generated from a user side;
analyzing at least one current audio characteristic from the acquired tune information;
determining, from the note repository, a target standard audio feature corresponding to each of the current audio features;
determining, according to the correspondence, a target standard note corresponding to each of the target standard audio features;
generating a music score corresponding to the tune information according to the determined target standard musical notes;
wherein the analyzing at least one current audio feature from the acquired tune information comprises:
analyzing the at least one current audio characteristic from the tune information according to a time sequence, and recording a time point of each current audio characteristic corresponding to the tune information;
wherein the generating a music score corresponding to the tune information according to the determined target standard musical notes comprises:
determining the time interval between every two adjacent current audio features according to the recorded time point of each current audio feature corresponding to the tune information;
determining the beat corresponding to each target standard note according to the determined time interval;
generating the music score according to the determined target standard musical notes and the beat corresponding to each target standard musical note;
wherein the generating a music score corresponding to the tune information according to the determined target standard musical notes further comprises:
taking any one of the target standard notes as a current target standard note, performing A1 to A3:
A1: determining an adjacent note adjacent to the current target standard note according to the time point of each current audio feature corresponding to the tune information;
A2: determining whether the pause time between the adjacent note and the current target standard note is smaller than a preset threshold, and if so, executing A3;
A3: merging the adjacent note and the current target standard note into a connected group, and executing A1 with the adjacent note as the current target standard note;
determining a start note and an end note from the target standard notes in the connected group according to the time point at which each of the current audio features corresponds to the tune information;
connecting the start note and the end note with a tie line;
and generating a music score corresponding to the tune information according to the connected start note and end note and the other, unconnected target standard notes.
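Purely as a sketch of how the claimed steps might fit together, the Python below reuses the hypothetical build_note_repository and match_feature from the earlier sketch. The PAUSE_THRESHOLD and BEAT_UNIT values are invented, and onset spacing stands in for the pause time, since the claim records only time points; none of these choices come from the claims themselves.

```python
# Hypothetical sketch of claim 1: timestamped current audio features are
# matched to target standard notes, beats are derived from the intervals
# between adjacent features, and notes whose gaps fall below a threshold
# are merged into connected groups whose start and end notes would be
# joined by a tie line in the rendered score.
PAUSE_THRESHOLD = 0.6   # seconds; illustrative value, not from the claims
BEAT_UNIT = 0.5         # seconds per beat; illustrative value

def compose_score(features, repo):
    """features: list of (time_sec, frequency_hz) pairs sorted by time."""
    notes = []
    for t, hz in features:
        name, hl_id = match_feature(repo, hz)  # from the earlier sketch
        if name is not None:
            notes.append({"time": t, "note": name, "high_low": hl_id})
    if not notes:
        return []

    # Beat per note from the time interval to the next note; the last
    # note, having no successor, is assumed to last one beat.
    for cur, nxt in zip(notes, notes[1:]):
        cur["beats"] = round((nxt["time"] - cur["time"]) / BEAT_UNIT, 2)
    notes[-1]["beats"] = 1.0

    # Steps A1-A3: merge adjacent notes into a connected group while the
    # gap between them stays below the threshold.
    groups, group = [], [notes[0]]
    for prev, cur in zip(notes, notes[1:]):
        if cur["time"] - prev["time"] < PAUSE_THRESHOLD:
            group.append(cur)
        else:
            groups.append(group)
            group = [cur]
    groups.append(group)

    # Mark the start and end note of each multi-note group for a tie line.
    for g in groups:
        if len(g) > 1:
            g[0]["tie"], g[-1]["tie"] = "start", "end"
    return groups
```

For example, features at 0.0 s, 0.4 s, and 2.0 s would produce one two-note connected group (its start and end notes marked for a tie line) followed by a single unconnected note.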
2. The method of claim 1,
wherein each of the standard audio features corresponds to a sound frequency range and a high-low tone identifier;
the determining, from the note repository, a target standard audio feature corresponding to each of the current audio features comprises:
for each of the current audio features, performing:
determining the current sound frequency corresponding to the current audio feature, and searching the note repository for a sound frequency range corresponding to the current sound frequency;
taking the standard audio feature corresponding to the found sound frequency range as the target standard audio feature;
wherein the generating a music score corresponding to the tune information according to the determined target standard musical notes comprises:
for each of the target standard audio features: taking the high-low tone identifier corresponding to the target standard audio feature as the high-low tone identifier of the corresponding target standard note;
and generating a music score corresponding to the tune information according to the target standard notes and the high-low tone identifiers of the target standard notes.
3. The method according to any one of claims 1 to 2,
wherein the constructing a note repository comprises:
acquiring at least one standard audio feature generated by at least one sound source, wherein each standard audio feature corresponds to one standard note, and the at least one sound source comprises: any one or more of a musical instrument, a singer, and electronic audio equipment;
and generating the note repository according to the acquired standard audio features and the standard note corresponding to each standard audio feature.
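A minimal sketch of this construction step, assuming a made-up sample format: each acquired standard audio feature is tagged with its sound source and the standard note it corresponds to, so one note can carry features from several sources. The function name and the example numbers are hypothetical.

```python
# Hypothetical sketch of claim 3's construction step: acquired standard
# audio features, each generated by some sound source and corresponding
# to one standard note, are collected into a note repository in which a
# single note can hold features from several sources.
def build_repository_from_sources(samples):
    """samples: iterable of (source, note, low_hz, high_hz, high_low_id).

    source names a musical instrument, a singer, or a piece of
    electronic audio equipment; the frequency bounds delimit the
    feature's sound frequency range.
    """
    repo = {}
    for source, note, low, high, hl_id in samples:
        repo.setdefault(note, []).append(
            {"source": source, "range": (low, high), "high_low": hl_id}
        )
    return repo

# Usage with made-up measurements of A4 from three sources:
repo = build_repository_from_sources([
    ("piano",  "A4", 427.5, 452.9, "high"),
    ("singer", "A4", 427.5, 452.9, "high"),
    ("synth",  "A4", 427.5, 452.9, "high"),
])
```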
4. A music composing device, comprising: a construction unit, an analysis unit, a determination unit, and a music score generation unit; wherein,
the construction unit is configured to construct a note repository, wherein the note repository comprises: at least one standard note, at least one standard audio feature, and a correspondence between each of the standard notes and at least one standard audio feature;
the analysis unit is configured to acquire tune information generated from a user side and to analyze at least one current audio feature from the acquired tune information;
the determination unit is configured to determine, from the note repository, a target standard audio feature corresponding to each of the current audio features, and to determine, according to the correspondence, a target standard note corresponding to each of the target standard audio features;
the music score generation unit is configured to generate a music score corresponding to the tune information according to the determined target standard notes;
the analysis unit is configured to analyze the at least one current audio feature from the tune information according to a time sequence and to record a time point of each current audio feature corresponding to the tune information;
the music score generation unit is configured to determine a time interval between every two adjacent current audio features according to the recorded time point of each current audio feature corresponding to the tune information, to determine a beat corresponding to each target standard note according to the determined time interval,
and to generate the music score according to the determined target standard notes and the beat corresponding to each target standard note;
the music score generation unit comprises a connected group determination subunit, a connection subunit, and a generation subunit; wherein,
the connected group determination subunit is configured to perform, with any one of the target standard notes as a current target standard note, A1 to A3:
A1: determining an adjacent note adjacent to the current target standard note according to the time point of each current audio feature corresponding to the tune information;
A2: determining whether the pause time between the adjacent note and the current target standard note is smaller than a preset threshold, and if so, executing A3;
A3: merging the adjacent note and the current target standard note into a connected group, and executing A1 with the adjacent note as the current target standard note;
the connection subunit is configured to determine a start note and an end note from the target standard notes in the connected group according to the time point at which each of the current audio features corresponds to the tune information, and to connect the start note and the end note with a tie line;
the generation subunit is configured to generate a music score corresponding to the tune information according to the connected start note and end note and the other, unconnected target standard notes.
5. The apparatus of claim 4,
wherein each of the standard audio features corresponds to a sound frequency range and a high-low tone identifier;
the determination unit is configured to, for each of the current audio features: determine the current sound frequency corresponding to the current audio feature, search the note repository for a sound frequency range corresponding to the current sound frequency, and take the standard audio feature corresponding to the found sound frequency range as the target standard audio feature;
the music score generation unit is configured to, for each of the target standard audio features: take the high-low tone identifier corresponding to the target standard audio feature as the high-low tone identifier of the corresponding target standard note, and generate a music score corresponding to the tune information according to each target standard note and the high-low tone identifier of each target standard note.
6. The apparatus according to any one of claims 4 to 5,
wherein the construction unit is configured to acquire at least one standard audio feature generated by at least one sound source, wherein each standard audio feature corresponds to one standard note, and the at least one sound source comprises: any one or more of a musical instrument, a singer, and electronic audio equipment; and to generate the note repository according to the acquired standard audio features and the standard note corresponding to each standard audio feature.
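To show how the four claimed units could cooperate, here is a hypothetical wrapper around the earlier sketches. The class name, the analyze_tune stub, and the way responsibilities are split are illustrative assumptions, not the patented apparatus.

```python
# Hypothetical composing device mirroring the four claimed units;
# analyze_tune is a stub standing in for real audio analysis.
def analyze_tune(tune_info):
    """Analysis-unit stand-in: return (time_sec, frequency_hz) pairs.

    Here tune_info is assumed to already be such a list; a real analysis
    unit would extract timestamped frequencies from recorded audio.
    """
    return sorted(tune_info)

class MusicComposingDevice:
    def __init__(self):
        # Construction unit: build the note repository once.
        self.repo = build_note_repository()  # from the earlier sketch

    def compose(self, tune_info):
        # Analysis unit: current audio features in time order.
        features = analyze_tune(tune_info)
        # Determination + score generation units: match features to
        # target standard notes and assemble beats and tie groups.
        return compose_score(features, self.repo)  # from the earlier sketch

# Usage with hypothetical timestamped frequencies (seconds, hertz):
device = MusicComposingDevice()
score = device.compose([(0.0, 440.0), (0.4, 493.9), (2.0, 392.0)])
```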
CN201810058361.XA 2018-01-22 2018-01-22 Music composing method and device Active CN108257588B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201810058361.XA CN108257588B (en) 2018-01-22 2018-01-22 Music composing method and device
SG10201900497XA SG10201900497XA (en) 2018-01-22 2019-01-19 Method and Device for Composing Music
TW108102344A TW201933332A (en) 2018-01-22 2019-01-22 Method and device for composing music

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810058361.XA CN108257588B (en) 2018-01-22 2018-01-22 Music composing method and device

Publications (2)

Publication Number Publication Date
CN108257588A CN108257588A (en) 2018-07-06
CN108257588B true CN108257588B (en) 2022-03-01

Family

ID=62726905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810058361.XA Active CN108257588B (en) 2018-01-22 2018-01-22 Music composing method and device

Country Status (3)

Country Link
CN (1) CN108257588B (en)
SG (1) SG10201900497XA (en)
TW (1) TW201933332A (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986841B (en) * 2018-08-08 2023-07-11 百度在线网络技术(北京)有限公司 Audio information processing method, device and storage medium
CN109215626A (en) * 2018-10-26 2019-01-15 广东电网有限责任公司 Method for automatically composing words and music
CN110010106B (en) * 2019-01-23 2023-01-03 张鹤宝 Automatic music score system of playing music
CN109872709B (en) * 2019-03-04 2020-10-02 湖南工程学院 A low-similarity new song generation method based on complex note network
CN112071287A (en) * 2020-09-10 2020-12-11 北京有竹居网络技术有限公司 Method, apparatus, electronic device and computer readable medium for generating song score

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1703734A (en) * 2002-10-11 2005-11-30 松下电器产业株式会社 Method and apparatus for determining musical notes from sounds
CN101471074A (en) * 2007-12-28 2009-07-01 英华达(南京)科技有限公司 Method for converting voice into music of electronic device
CN102298922A (en) * 2011-07-28 2011-12-28 顾昕昀 A digital musical score instrument and its automatic sheet-turning method
CN103824565A (en) * 2014-02-26 2014-05-28 曾新 Humming music reading method and system based on music note and duration modeling
CN105205047A (en) * 2015-09-30 2015-12-30 北京金山安全软件有限公司 Playing method, converting method and device of musical instrument music score file and electronic equipment
CN106782460A (en) * 2016-12-26 2017-05-31 广州酷狗计算机科技有限公司 The method and apparatus for generating music score
CN107067879A (en) * 2017-04-07 2017-08-18 济宁学院 A kind of intelligent Piano Teaching system
CN107274876A (en) * 2017-06-30 2017-10-20 武汉理工大学 A kind of audition paints spectrometer

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004022659B3 (en) * 2004-05-07 2005-10-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus for characterizing a sound signal
JP5509536B2 (en) * 2008-04-15 2014-06-04 ヤマハ株式会社 Audio data processing apparatus and program
JP5582915B2 (en) * 2009-08-14 2014-09-03 本田技研工業株式会社 Score position estimation apparatus, score position estimation method, and score position estimation robot
KR102038171B1 (en) * 2012-03-29 2019-10-29 스뮬, 인코포레이티드 Automatic conversion of speech into song, rap or other audible expression having target meter or rhythm
US8895830B1 (en) * 2012-10-08 2014-11-25 Google Inc. Interactive game based on user generated music content
CN104346147A (en) * 2013-07-29 2015-02-11 人人游戏网络科技发展(上海)有限公司 Method and device for editing rhythm points of music games

Also Published As

Publication number Publication date
CN108257588A (en) 2018-07-06
SG10201900497XA (en) 2019-08-27
TW201933332A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
Xi et al. GuitarSet: A Dataset for Guitar Transcription.
CN108257588B (en) Music composing method and device
CN112382257B (en) Audio processing method, device, equipment and medium
US9607593B2 (en) Automatic composition apparatus, automatic composition method and storage medium
Bosch et al. Evaluation and combination of pitch estimation methods for melody extraction in symphonic classical music
CN113763913B (en) A music score generating method, electronic device and readable storage medium
US10504498B2 (en) Real-time jamming assistance for groups of musicians
US20130205975A1 (en) Method for Giving Feedback on a Musical Performance
WO2021166745A1 (en) Arrangement generation method, arrangement generation device, and generation program
CN113870818B (en) Training method, device, medium and computing equipment for song chord arrangement model
CN110010159B (en) Sound similarity determination method and device
WO2019180830A1 (en) Singing evaluating method, singing evaluating device, and program
Lerch Software-based extraction of objective parameters from music performances
Lerch Audio content analysis
Vatolkin Improving supervised music classification by means of multi-objective evolutionary feature selection
WO2020145326A1 (en) Acoustic analysis method and acoustic analysis device
Ramirez et al. Automatic performer identification in commercial monophonic jazz performances
HK1257715B (en) Music writing method and device
HK1257715A1 (en) Music writing method and device
Serra et al. Note onset deviations as musical piece signatures
CN114882859A (en) Melody and lyric alignment method and device, electronic equipment and storage medium
Ryynänen Automatic transcription of pitch content in music and selected applications
Kühl et al. Retrieving and recreating musical form
JP6954780B2 (en) Karaoke equipment
JP2007240552A (en) Musical instrument sound recognition method, musical instrument annotation method, and music search method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 1257715; Country of ref document: HK)
GR01 Patent grant