Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," and the like in the description, in the claims, and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Embodiments of the present application will be described below with reference to the accompanying drawings.
Embodiments of the present application provide a hand function training system 10. Referring to fig. 1, the hand function training system 10 may include a computing device 101 and a manipulator 102. The computing device 101 may be a personal computer or a server, and the personal computer may be, for example, a smart phone, a tablet computer, a notebook computer, a desktop computer, a wearable device, a head-mounted device, or the like, in which an application program (i.e., an application client) for controlling the manipulator 102 is installed, which is not limited herein. In practical application, the computing device 101 may also be directly disposed on the manipulator 102 as a part of the manipulator 102. The manipulator 102 and the computing device 101 may be connected in a wired or wireless manner for data interaction with each other.
In the embodiment of the present application, when a band training event is processed, the manipulator 102 is mainly responsible for controlling the first joint of the manipulator to actively move according to the configuration of the computing device 101, so as to drive the hand of the user to practice the musical instrument. The computing device is mainly responsible for detecting a confirmation operation for a first music score and a first musical instrument, acquiring a first control parameter set of the manipulator 102 corresponding to the first music score and the first musical instrument, configuring the first joint of the manipulator according to a first motion parameter and a first time parameter in the first control parameter set, acquiring first audio data detected in the band training event, and outputting exercise record information including an error position according to the first audio data, wherein the error position is used to indicate a position where the pitch of the first audio data does not match that of standard audio data corresponding to the first music score and the first musical instrument.
Based on this, the computing device can automatically configure the first joint of the manipulator according to the first music score and the first musical instrument selected by the user, and further drive the user to practice the musical instrument through the active motion of the manipulator. The computing device can also output, according to the first audio data detected during practice, an error position indicating where the pitch of the first audio data does not match that of the standard audio data, thereby improving the accuracy of auxiliary training information processing in a musical instrument training scene and the intelligence of the manipulator.
The first motion parameter may specifically be used to configure a motion direction and a motion angle of the first joint, and the first time parameter may be used to configure a motion time of the first joint. That is, the computing device 101 may configure the motion direction, the motion angle, and the motion time of the first joint of the manipulator according to the first control parameter set, so that the manipulator 102 can accurately drive the hand of the user to move for musical instrument practice.
In addition, in practical applications, besides processing the band training event in the band training mode, the computing device 101 and the manipulator 102 may also process an evaluation event in an evaluation mode. When processing the evaluation event, the computing device 101 may likewise obtain a second control parameter set of the manipulator 102 corresponding to a second music score and a second musical instrument in response to a confirmation operation for the second music score and the second musical instrument. However, the computing device 101 does not configure the joints of the manipulator 102; that is, the manipulator 102 does not actively move, and the user instead practices the musical instrument through self-motion. The computing device 101 may further obtain an actual motion parameter of a second joint of the manipulator 102 in the evaluation event, and output training evaluation information according to the actual motion parameter, so as to indicate a difference between the actual motion parameter and a second motion parameter in the second control parameter set, and/or a difference between an actual time parameter corresponding to the actual motion parameter and a second time parameter in the second control parameter set.
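The evaluation logic described above can be sketched as follows. This is an illustrative simplification only, not part of the described embodiments; the function name, tolerances, and data shapes are all hypothetical.

```python
def evaluate(expected, actual, angle_tol=5.0, time_tol=0.5):
    """Compare actual joint motion against the expected control parameter set.

    expected / actual: dicts mapping a joint name to a (angle_deg, time_s) pair.
    Returns a list of (joint, kind, difference) tuples describing deviations
    that exceed the (hypothetical) tolerances.
    """
    report = []
    for joint, (exp_angle, exp_time) in expected.items():
        # a joint that never moved is treated as zero angle at time zero
        act_angle, act_time = actual.get(joint, (0.0, 0.0))
        if abs(act_angle - exp_angle) > angle_tol:
            report.append((joint, "angle", act_angle - exp_angle))
        if abs(act_time - exp_time) > time_tol:
            report.append((joint, "time", act_time - exp_time))
    return report
```

A matching motion yields an empty report, which could correspond to evaluation information indicating no difference between the actual and the second motion parameters.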
It should be noted that the number of manipulators 102 in fig. 1 is merely illustrative, and the number of manipulators 102 communicatively connected to the computing device 101 in practical application may be one or more, which is not particularly limited herein.
Referring to fig. 2, the manipulator 102 of the present application may specifically include a finger joint driver 01, a battery and battery cover 02, a power adapter interface 03, a power switch 04, a palm rest 05, and a finger rest 06.
The manipulator 102 may specifically adjust the movement direction and movement angle of each joint through the finger joint driver 01 corresponding to each joint.
It should be noted that the structure of the manipulator 102 in fig. 2 is merely illustrative. In practical application, the manipulator 102 may include more or fewer specific structures, and the appearance of the manipulator 102, the type of each specific structure, and the setting position of each specific structure may differ from those in fig. 2. For example, a switching button between the evaluation mode and the band training mode may further be provided on the manipulator 102. When it is detected that the switching button corresponds to the band training mode, the computing device 101 may configure the joints of the manipulator 102 according to the control parameter set corresponding to the confirmed music score and musical instrument, so that the manipulator 102 actively moves to drive the user to practice the musical instrument. When the switching button corresponds to the evaluation mode, the computing device 101 does not configure the joints of the manipulator 102, but outputs evaluation information by detecting the actual motion parameter and the actual time parameter of the joints of the manipulator 102, so as to indicate a difference between the actual motion parameter and the motion parameter corresponding to the confirmed music score and musical instrument, and/or a difference between the actual time parameter and the corresponding time parameter. A display screen, a microphone, or the like may also be further provided on the manipulator 102, which is not specifically limited herein.
The composition structure of the computing device in the present application may be as shown in fig. 3. The computing device may include a processor 110, a memory 120, a communication interface 130, and one or more programs 121, where the one or more programs 121 are stored in the memory 120 and configured to be executed by the processor 110, and the one or more programs 121 include instructions for performing any of the steps of the method embodiments described below. The communication interface 130 is used to support communication of the computing device with other devices. In a specific implementation, the processor 110 is configured to perform any of the steps performed by the computing device in the method embodiments described below, and when data transmission is required, the communication interface 130 may be selectively invoked to complete the corresponding operation. It should be noted that the above schematic structural diagram of the computing device is merely an example; more or fewer devices may be included, which is not limited herein.
Referring to fig. 4, fig. 4 is a flow chart of a method for processing auxiliary training information in a musical instrument training scene. The method is applied to a computing device in a hand function training system, where the hand function training system includes the computing device and a manipulator worn by a user. As shown in fig. 4, the method includes the following steps:
S201, the computing device obtains a first control parameter set of the manipulator in response to a confirmation operation for a first music score and a first musical instrument in a currently processed band training event.
The first music score comprises a plurality of first notes, the first control parameter set comprises a plurality of first parameter sets and a plurality of first time parameters, the first parameter sets correspond to the first notes, each first parameter set comprises a first motion parameter of a first joint of the manipulator, the first motion parameter and the first time parameter are determined according to first music score information and the first musical instrument, and the first music score information comprises pitch information and time value information of the first notes.
The first control parameter set may be generated by the computing device according to the first music score information and the first musical instrument. Alternatively, the computing device may store each generated control parameter set in association with the corresponding music score and musical instrument. After detecting the confirmation operation for the first music score and the first musical instrument, the computing device may determine whether a first control parameter set corresponding to the first music score and the first musical instrument exists among the stored control parameter sets; if so, the first control parameter set may be directly obtained, and if not, the first control parameter set may be generated according to the first music score and the first musical instrument. In a scenario where the user repeatedly uses the same musical instrument to practice the same music score, this may improve the processing efficiency of the auxiliary training information.
The first motion parameter may specifically include a motion direction and a motion angle. The first time parameter may specifically be time information or duration information.
In a specific implementation, the computing device may store a first mapping relationship between a plurality of instrument types and a plurality of hand motion rules. When generating the first control parameter set according to the first music score information and the first musical instrument, the computing device may determine a target hand motion rule from the plurality of hand motion rules according to the instrument type of the first musical instrument, where the target hand motion rule may include a second mapping relationship between a plurality of pitch combinations and a plurality of hand joints. The computing device may then determine, according to the pitch information of each note in the first music score and the second mapping relationship, the motion joints of the manipulator corresponding to each note, determine the first joint and the first motion parameter corresponding to the first joint according to the motion joints corresponding to adjacent notes, and determine, for the first joint corresponding to each note, the first time parameter according to the time value information of the note.
One first parameter set may correspond to one note, one note may correspond to one pitch combination, and one pitch combination may include one or more pitches that are played simultaneously. That is, when the first joints of the manipulator are configured, a plurality of first joints may be configured at a time for a set of pitches that are played simultaneously. The first time parameter corresponding to each first parameter set can be determined according to the time value information of the note corresponding to that first parameter set.
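The lookup from pitch combinations to hand joints can be sketched as follows. The table contents, joint labels, and function name are hypothetical illustrations, not part of the described embodiments.

```python
# Hypothetical target hand motion rule: the second mapping relationship from
# pitch combinations (tuples of simultaneously played pitches) to the set of
# hand joints that must move. Real tables would cover a full instrument range.
HAND_MOTION_RULES = {
    "piano": {
        ("C4",): {"R1"},            # single note -> one finger joint
        ("E4", "G4"): {"R3", "R5"}, # chord -> several joints at once
    },
}

def motion_joints(instrument_type, score):
    """score: list of pitch combinations in playing order.
    Returns, per note, the set of joints that must move."""
    rule = HAND_MOTION_RULES[instrument_type]
    return [rule[combo] for combo in score]
```

For a chord, one lookup yields several joints, matching the point above that a plurality of first joints may be configured at a time for simultaneously played pitches.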
The first time parameter corresponding to each note may be used to indicate a time difference between the performance start time corresponding to the note and a reference time. The reference time may be, for example, the performance start time of the first note of the music score, or the reference time for each note may be the performance start time of the previous note.
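Deriving the first time parameters from the notes' time value information can be sketched as follows; this is an illustrative simplification with hypothetical names, not part of the described embodiments.

```python
def time_parameters(durations, reference="first"):
    """durations: per-note time values in seconds, in playing order.

    reference="first": each note's offset from the first note's start time.
    reference="previous": each note's offset from the previous note's start.
    """
    starts = []
    t = 0.0
    for d in durations:
        starts.append(t)  # a note starts when the previous ones have elapsed
        t += d
    if reference == "first":
        return starts
    # offset from the previous note's start; the first note keeps offset 0
    return [starts[0]] + [starts[i] - starts[i - 1] for i in range(1, len(starts))]
```

Either convention yields the time differences described above; which reference time is used is a design choice of the implementation.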
The second mapping relationship between the pitch combinations and the hand joints may be preset according to the instrument type; that is, the hand joints corresponding to each pitch combination are the joints that need to move when the musical instrument plays the pitch combination. For example, when the instrument type is a piano, the hand joints corresponding to each pitch combination are the finger joints that need to make a key-pressing action when playing the pitch combination; when the instrument type is a violin, the hand joints corresponding to different pitch combinations may further include wrist joints, for example, the left hand shifts position to move to different positions of the fingerboard to adjust the pitch.
After the motion joints of the manipulator corresponding to each note are determined, the first joint corresponding to each note and the first motion parameter corresponding to each first joint may be further determined according to the motion joints corresponding to the current note and the motion joints corresponding to the previous note and/or the next note of the current note. For example, taking a piano as an example, if the motion joints corresponding to the current note do not include the left middle finger joint, that is, the current note does not require the left middle finger joint to perform the key-pressing action, but the motion joints corresponding to the previous note of the current note include the left middle finger joint, then the first joints corresponding to the current note also include the left middle finger joint, and the motion direction corresponding to the left middle finger joint is the preset direction corresponding to the key-release action.
That is, the hand joints that need to move to produce the instrument sound when each pitch combination is played can be determined according to the second mapping relationship, and the first joints may further include manipulator joints that need to be reset independently of sound production.
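The press/release logic of the preceding example can be sketched as follows. This is an illustrative simplification with hypothetical joint labels and action names, not part of the described embodiments.

```python
def first_joints(motion_joints_per_note):
    """motion_joints_per_note: list of sets, one per note, each containing the
    joints that must press (produce sound) for that note.

    Returns, per note, (joint, action) pairs: the sound-producing "press"
    actions plus "release" resets for joints pressed on the previous note
    but no longer needed for the current one.
    """
    result = []
    prev = set()
    for cur in motion_joints_per_note:
        actions = [(j, "press") for j in sorted(cur)]
        # joints active on the previous note but not on this one are reset
        actions += [(j, "release") for j in sorted(prev - cur)]
        result.append(actions)
        prev = cur
    return result
```

In the piano example above, a left middle finger joint pressed for the previous note but unused for the current note appears as a "release" action for the current note.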
The value of the motion angle may be a preset fixed value. Alternatively, the computing device may determine the value of the motion angle according to one or more of the age, gender, finger length, palm width, and the like in the user information corresponding to the current exercise event. Alternatively, the computing device may set different preset angle values for different types of joints; for example, each finger joint corresponds to the same preset angle value, each wrist joint corresponds to the same preset angle value, and so on. That is, in the band training event, the manipulator joints are configured according to the control parameter set mainly so that the manipulator demonstrates the performance fingering to the user.
On this basis, when the computing device configures the first joint of the manipulator according to the first motion parameter and the first time parameter, the joint motion of the manipulator can correspond to each note of the music score, so that the manipulator can actively control the first joint corresponding to each note to move at the performance time corresponding to that note, thereby driving the user to perform.
S202, the computing device configures a first joint of the manipulator according to a first control parameter set.
The motion time of the first joint corresponding to each first parameter set is determined according to the first time parameter corresponding to each first parameter set.
For example, taking the case where the first parameter sets include a parameter set a, assume that the first joint corresponding to the parameter set a is joint 1, the first motion parameter corresponding to joint 1 includes a downward motion of 15 degrees, and the first time parameter indicates a time difference of 10 seconds from the performance time of the first note. Then, after the first joint of the manipulator is configured according to the first control parameter set, joint 1 of the manipulator moves downward by 15 degrees 10 seconds after the joint corresponding to the first note starts to move.
S203, the computing device acquires the first audio data detected in the currently processed band training event.
In a specific implementation, the time at which collection of the first audio data starts may be the time at which the manipulator starts to move, and the time at which collection ends may be a preset duration after the manipulator finishes moving. The preset duration may be determined according to the time value information of the last note of the first music score, so as to ensure the integrity of the acquired first audio data.
In a specific implementation, the computing device may further process the original audio collected between the collection start time and the collection end time, and extract the audio data corresponding to the first musical instrument in the original audio as the first audio data.
S204, the computing device outputs exercise record information according to the first audio data.
The exercise record information comprises an error position, wherein the error position is used for indicating an audio position with unmatched pitch in the first audio data and target audio data, and the target audio data is standard audio data corresponding to the first music score and the first musical instrument.
In a specific implementation, the manipulator is mainly used to prompt the user with the performance fingering, so the user may press a wrong key or a wrong string during performance, or the duration of a single joint action may be wrong, so that the detected first audio data does not match the target audio data in pitch. In this case, the computing device may further determine an error type, which may include a pitch error and/or a time value error, and may output the error type when outputting the exercise record information.
For example, referring to fig. 5, when the computing device outputs the exercise record information, the note corresponding to the error position may be highlighted in the first music score; for example, the note corresponding to the error position is identified by area shading in fig. 5. When a selection operation, such as a click operation, for the note at the error position is detected, the error type may be further displayed; for example, when a selection operation for the note at the shaded position is detected in fig. 5, the error type is displayed as a pitch error. In addition, as shown in fig. 5, basic information such as the score name of the first music score (e.g., score a shown in fig. 5) and the instrument type of the first musical instrument (e.g., musical instrument a shown in fig. 5) may be displayed in the exercise record, and a performance suggestion for the error position may be determined according to the error type. For example, when the error type is a pitch error, text information such as "correct pitch x; the played pitch is lower than the correct pitch" is output, as shown in fig. 5. When the error type includes a time value error, the output information may be used to indicate the correct time value information and to prompt that the played time value is greater than or smaller than the correct time value, improving the comprehensiveness of the auxiliary training information output in the musical instrument practice scene.
Specifically, in practical applications, for the case of a pitch error, the computing device may further obtain, according to the instrument type of the first musical instrument and the note corresponding to the error position, a correct hand pose chart for playing the note with the first musical instrument, and display the hand pose chart in the display area of the performance suggestion, so that the user can see and understand the correct relative positional relationship between the hand and the first musical instrument when the note is played.
It can be seen that, in the embodiment of the present application, the computing device in the hand function training system may automatically obtain the first control parameter set of the corresponding manipulator according to the first music score and the first musical instrument selected by the user, and configure the first joint of the manipulator according to the first motion parameter and the first time parameter in the first control parameter set, so that the manipulator drives the user to practice the musical instrument. The computing device may also output exercise record information including the error position, so as to indicate the audio position where the pitch of the first audio data collected during practice does not match that of the standard audio data. Driving the user to practice the musical instrument and outputting the exercise record information through the computing device and the manipulator is beneficial to improving the intelligence of the manipulator.
In one possible example, after the exercise record information is output according to the first audio data, the method further includes: in response to a selection operation for target notes in the first music score, determining at least one target parameter set corresponding to the target notes from the plurality of first parameter sets, the target notes including the note at the error position; creating a rehearsal event according to the at least one target parameter set, the at least one target parameter set being used for configuring the first joint of the manipulator in the rehearsal event; acquiring second audio data detected in the rehearsal event; and updating the exercise record information according to the second audio data.
For example, take the case where the first control parameter set includes a parameter set 1 corresponding to note 1, a parameter set 2 corresponding to note 2, and a parameter set 3 corresponding to note 3. Assuming that the error position corresponds to note 2 and the computing device determines that the target notes are note 2 and note 3, the target parameter sets include parameter set 2 and parameter set 3. If parameter set 2 corresponds to joint 1 of the manipulator and parameter set 3 corresponds to joint 2 of the manipulator, then after the computing device creates the rehearsal event, the computing device may configure joint 1 and joint 2 of the manipulator according to parameter set 2 and parameter set 3, and the manipulator then drives the user to rehearse the segment consisting of note 2 and note 3.
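Selecting the subset of parameter sets used for the rehearsal event can be sketched as follows; the function name and data shapes are hypothetical, and this is an illustrative simplification rather than the described embodiment itself.

```python
def rehearsal_parameter_sets(control_set, target_notes):
    """control_set: dict mapping a note identifier to its parameter set.
    target_notes: note identifiers selected for the rehearsal event.

    Returns only the parameter sets needed to configure the manipulator
    joints for the rehearsal segment.
    """
    return {n: control_set[n] for n in target_notes if n in control_set}
```

Applied to the example above, selecting note 2 and note 3 yields only parameter set 2 and parameter set 3, so only the corresponding joints are configured during rehearsal.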
In a specific implementation, the computing device may first display a rehearsal segment setting page and display, in the setting page, the score segment corresponding to the target notes. When a confirmation operation for the score segment is detected, it may be determined that a selection operation for the target notes is detected.
For example, referring to fig. 6, the score segment corresponding to the target notes may be displayed in an area-shaded manner in the segment adjustment area of the rehearsal segment setting page. Specifically, when the rehearsal segment setting page is first displayed, the target notes recommended by the computing device may be displayed by default. The computing device may subsequently expand a segment type list when detecting a selection operation for the drop-down control in the segment type selection area; the segment type list may include, for example, segment types such as default, bar, and paragraph. When detecting a selection operation for a specific segment type, the computing device may determine the notes of the bar or paragraph where the error position is located as the target notes, and adjust the target note range displayed in the segment adjustment area.
In addition, after the target notes are automatically determined and displayed by the computing device through selection of the default, bar, or paragraph segment type, the computing device may further detect an adjustment operation, such as a drag operation, for the target note display area in the segment adjustment area, determine the notes corresponding to the adjusted target note display area from the first music score as the target notes, and determine the target parameter sets according to those notes. That is, the computing device may also support a user-defined rehearsal scope and flexibly configure the joints of the manipulator according to the user-defined rehearsal scope.
In addition, as shown in fig. 6, the computing device may also display a prompt message, such as "drag to adjust the segment scope," in the segment adjustment area. Similarly, basic information such as the score name and the instrument type, for example, score a and musical instrument a shown in fig. 6, may be displayed in the rehearsal segment setting page. Meanwhile, the number of exercise repetitions can be displayed in the rehearsal segment setting page and can be increased or decreased according to a detected selection operation for the "+" or "-" control. Further, the selection operation for the target notes may be a click operation on the start rehearsal control: when the computing device detects the click operation on the start rehearsal control, the rehearsal event may be created according to the target parameter sets, and the first joint of the manipulator may be configured according to the target parameter sets, so as to drive the user to rehearse the segment, where the number of rehearsals may be determined according to the set number of repetitions.
In a specific implementation, the rehearsal segment setting page may be displayed after a selection operation for the segment rehearsal control in the exercise record display page is detected. As shown in fig. 5, the exercise record information display page may further display a whole-piece rehearsal control and a segment rehearsal control. When a selection operation for the whole-piece rehearsal control is detected, the computing device may determine the plurality of first parameter sets included in the first control parameter set as the target parameter sets, and directly configure the first joint of the manipulator according to the first control parameter set, so as to start whole-piece rehearsal.
The method for updating the exercise record information according to the second audio data may refer to the method for generating the exercise record information according to the first audio data, which is not described herein.
In this example, the computing device may further create a rehearsal event according to the target parameter sets corresponding to the target notes in response to the selection operation for the target notes including the error position, configure the first joint of the manipulator in the rehearsal event according to the target parameter sets, acquire the second audio data detected in the rehearsal event, and update the exercise record information according to the second audio data, so as to further improve the comprehensiveness of the output auxiliary training information and the intelligence of the manipulator.
In one possible example, the target notes are determined by: determining the error type corresponding to the error position according to the pitch and the collection time of the audio data corresponding to the error position, where the error type includes a time value error or a pitch error; acquiring the segment extraction rule corresponding to the error type; and determining the target notes from the plurality of first notes according to the segment extraction rule, where the target notes include the note corresponding to the error position.
In a specific implementation, the target notes determined in this example can be the default exercise segment initially displayed in fig. 6.
In a specific implementation, when determining the error positions where the pitches of the first audio data and the target audio data do not match, the computing device can respectively identify the pitches appearing in the first audio data and the target audio data and mark the collection time of each pitch, establish a first time axis of pitch occurrences according to the collection time of each pitch in the first audio data, establish a second time axis of pitch occurrences according to the collection time of each pitch in the target audio data, and determine the error positions and the error types by comparing the two time axes.
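The time-axis comparison can be sketched as follows. As a simplification (and an assumption beyond the described embodiments), the sketch aligns the two note sequences by index rather than performing full timeline alignment; all names and tolerances are hypothetical.

```python
def find_errors(played, target, time_tol=0.2):
    """played / target: lists of (pitch, onset_s, duration_s), sorted by onset,
    forming the first and second time axes respectively.

    Returns (note_index, error_type) pairs: a pitch mismatch is a pitch
    error; a matching pitch whose duration deviates beyond time_tol is a
    time value error.
    """
    errors = []
    for i, (t_pitch, _t_onset, t_dur) in enumerate(target):
        if i >= len(played):
            # the expected note was never played at all
            errors.append((i, "pitch error"))
            continue
        p_pitch, _p_onset, p_dur = played[i]
        if p_pitch != t_pitch:
            errors.append((i, "pitch error"))
        elif abs(p_dur - t_dur) > time_tol:
            errors.append((i, "time value error"))
    return errors
```

The returned indices play the role of the error positions, and the second element of each pair plays the role of the error type described above.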
In a specific implementation, the segment extraction rule corresponding to each error type may be preset by a user, or the computing device may obtain the segment extraction rule corresponding to each error type from the cloud server. In particular, different instrument types may correspond to different rule sets, and the computing device may determine, according to the instrument type of the first instrument, a target rule set from a plurality of rule sets, and then determine, from a plurality of segment extraction rules included in the target rule set, a segment extraction rule corresponding to the error type, so as to further improve flexibility and accuracy of determining the target note.
For example, suppose the segment extraction rule preset by the user for a time value error is to determine the first note and the second note as the target notes, and the segment extraction rule for a pitch error is to determine the first note and the third note as the target notes, where the first note is the note corresponding to the error position, the second note is a note in the measure where the first note is located whose playing order is after the first note, and the third note is a note in the measure where the first note is located whose playing order is before the first note. Assuming that the note corresponding to the error position is note a in measure a, and the notes included in measure a, arranged from front to back in playing order, are note b, note a, and note c, then when the computing device determines that the error type is a time value error, the computing device determines that note a and note c are the target notes.
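The example rules above can be sketched as a small rule table; the table contents and names are hypothetical illustrations of user-preset segment extraction rules, not part of the described embodiments.

```python
# Hypothetical segment extraction rules, keyed by error type. Each rule lists
# note offsets within the measure, relative to the error note: 0 is the error
# note itself, +1 the note after it, -1 the note before it.
RULES = {
    "time value error": [0, 1],
    "pitch error": [0, -1],
}

def target_notes(measure, error_index, error_type):
    """measure: notes of the measure in playing order.
    error_index: index of the error note within the measure."""
    picked = []
    for off in RULES[error_type]:
        i = error_index + off
        if 0 <= i < len(measure):  # skip offsets that fall outside the measure
            picked.append(measure[i])
    return picked
```

With the measure [note b, note a, note c] and the error at note a, a time value error selects notes a and c, matching the worked example above, while a pitch error would select notes a and b.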
In addition to the segment extraction rules set by the user, the computing device may determine notes in other measures as part of the target notes according to the position, within its measure, of the note corresponding to the error position. For example, when the note corresponding to the error position is the first note in its measure, a first preset number of notes at the end of the preceding measure in playing order are obtained as part of the target notes; and when the note corresponding to the error position is the last note in its measure, a second preset number of notes at the beginning of the following measure in playing order are obtained as part of the target notes.
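A minimal sketch of the segment extraction rules described above, with hypothetical function names, might look as follows; the rule that a duration error selects the erroneous note together with the notes after it in the same measure, and a pitch error selects it together with the notes before it, follows the example in the preceding paragraphs.

```python
def extract_target_notes(measure, error_index, error_type):
    """measure: list of note names in playing order;
    error_index: position of the erroneous note within the measure."""
    first = measure[error_index]
    if error_type == "duration_error":
        # the erroneous note plus the notes played after it in the measure
        return [first] + measure[error_index + 1:]
    if error_type == "pitch_error":
        # the notes played before it in the measure plus the erroneous note
        return measure[:error_index] + [first]
    return [first]

def extend_across_measures(measures, m_idx, note_idx, n_prev=2, n_next=2):
    """Measure-boundary extension: pull in notes from the neighbouring
    measure when the erroneous note sits at a measure boundary."""
    extra = []
    if note_idx == 0 and m_idx > 0:
        extra += measures[m_idx - 1][-n_prev:]          # tail of previous measure
    if note_idx == len(measures[m_idx]) - 1 and m_idx < len(measures) - 1:
        extra += measures[m_idx + 1][:n_next]           # head of next measure
    return extra
```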
In this example, the computing device may determine the error type of the error position according to the pitch and the acquisition time of the audio data corresponding to the error position, and obtain the segment extraction rule corresponding to the error type, so as to determine the target note from the plurality of first notes included in the first score according to the segment extraction rule, which is beneficial to improving the flexibility of determining the target note during music playing.
In one possible example, the target note is determined by determining reference measures corresponding to a plurality of error positions from the first score, each reference measure including a note corresponding to an error position, determining a plurality of consecutive score measures from the first score according to the positions of the reference measures, the plurality of score measures including the plurality of reference measures, determining rehearsal measures from the plurality of score measures according to the number of measures spaced between two adjacent reference measures, each rehearsal measure including a reference measure or including a reference measure and a measure adjacent to the reference measure, and determining the notes included in the rehearsal measures as the target notes.
A reference measure is the measure in which a note corresponding to an error position is located.
In a specific implementation, the manner of determining the rehearsal measures from the plurality of score measures may specifically be to determine the number of measures spaced between the two reference measures closest in playing time, determine the two reference measures and the measures between them as rehearsal measures when the number of spaced measures is not greater than a preset number, and determine only the two reference measures as rehearsal measures, without the measures between them, when the number of spaced measures is greater than the preset number. In this way, for segments in which the error positions are more concentrated, the computing device can configure the manipulator to drive the user to perform unified exercise in one rehearsal event without splitting the exercise into a plurality of rehearsal events, thereby further improving the processing efficiency of the auxiliary training information.
Wherein the preset number may be determined by the computing device based on user input.
For example, take a first score comprising multiple measures sequentially ordered from front to back in playing order, namely measure 1, measure 2, measure 3, measure 4, and measure 5, and assume that the reference measures where the error positions are located are measure 2 and measure 5, so that the plurality of score measures comprises measure 2, measure 3, measure 4, and measure 5. With a preset number of 1, the target notes include the notes of measure 2 and measure 5; since the computing device determines that the target notes are discontinuous, two rehearsal events may be created, namely one rehearsal event for measure 2 and one rehearsal event for measure 5. With a preset number of 2, the target notes include the notes of measure 2, measure 3, measure 4, and measure 5; since the notes in the target notes are all continuous, the computing device may create a single rehearsal event for measure 2, measure 3, measure 4, and measure 5.
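Under the interpretation that two reference measures are merged when the number of measures between them does not exceed the preset number, the grouping of error measures into rehearsal events may be sketched as follows (hypothetical names; measure indices are 1-based):

```python
def build_rehearsal_events(error_measures, preset_number):
    """error_measures: sorted measure indices containing errors.
    Adjacent reference measures whose gap (measures strictly between
    them) does not exceed preset_number are merged, together with the
    in-between measures, into a single rehearsal event."""
    events = []
    current = [error_measures[0]]
    for prev, nxt in zip(error_measures, error_measures[1:]):
        gap = nxt - prev - 1
        if gap <= preset_number:
            # close enough: merge, keeping the measures in between
            current.extend(range(prev + 1, nxt + 1))
        else:
            # too far apart: start a new rehearsal event
            events.append(current)
            current = [nxt]
    events.append(current)
    return events
```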
In a specific implementation, for the case where the number of error positions is 1, or where the notes corresponding to a plurality of error positions belong to the same reference measure, the computing device may directly determine the notes of the reference measure in which the note corresponding to the error position is located as the target notes. Alternatively, the computing device may determine that the target notes include the note corresponding to the error position, a third preset number of notes preceding it, and a fourth preset number of notes following it. In particular, the third preset number and the fourth preset number may be determined by the computing device according to user input, or may be obtained by the computing device from the cloud server.
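The window-based alternative, in which the target notes are the erroneous note plus a third preset number of preceding notes and a fourth preset number of following notes, reduces to a simple slice (hypothetical names):

```python
def note_window(notes, error_pos, n_before=2, n_after=2):
    """Return the erroneous note together with a few notes before and
    after it, clipped at the boundaries of the score."""
    start = max(0, error_pos - n_before)
    return notes[start:error_pos + n_after + 1]
```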
It can be seen that, in this example, when there are multiple error positions and the notes corresponding to them are located in multiple reference measures, the computing device determines rehearsal measures from the consecutive score measures according to the positions of the reference measures, and determines the notes included in the rehearsal measures as the target notes, which is beneficial to improving the flexibility of determining the target notes.
In one possible example, the first score is obtained by identifying a target audio in response to an upload operation for the target audio to obtain at least one audio data corresponding to at least one instrument type, generating at least one reference score corresponding to the at least one instrument type from the at least one audio data, outputting the at least one instrument type, the at least one instrument type including the first instrument, and determining, in response to a confirmation operation for the first instrument, the reference score corresponding to the first instrument of the at least one instrument type as the first score.
In a specific implementation, the computing device may determine a complete melody corresponding to the target audio by identifying the pitches and durations of the target audio, identify the audio clips (i.e., the at least one audio data) in which each instrument appearing in the target audio specifically appears, determine which portion of the complete melody each instrument specifically plays, and thereby obtain at least one reference score. The computing device may then output the instrument names of the identified at least one instrument type and, when a confirmation operation for the first instrument is detected, obtain the reference score corresponding to the first instrument as the first score.
For example, taking at least one audio data including data 1 and data 2 as an example, assuming that data 1 corresponds to instrument 1 and reference score 1, and data 2 corresponds to instrument 2 and reference score 2, the computing device determines that score 1 is the first score when it detects that instrument 1 is selected, and determines that score 2 is the first score when it detects that instrument 2 is selected.
In this example, the computing device may identify the target audio uploaded by the user, obtain at least one audio data corresponding to at least one instrument type, generate at least one reference score according to the at least one audio data, and then determine, in response to a confirmation operation for the first instrument, that the reference score corresponding to the first instrument is the first score, thereby improving flexibility and convenience in acquiring the first score.
In one possible example, the method further comprises acquiring, in response to a confirmation operation for a second score and a second instrument in a currently processed evaluation event, a second control parameter set of the manipulator corresponding to the second score and the second instrument, the second score including a plurality of second notes, the second control parameter set including a plurality of second parameter sets corresponding to the plurality of second notes and a plurality of second time parameters, each of the second parameter sets including a second motion parameter of a second joint of the manipulator, the second motion parameter and the second time parameter being determined according to second score information and the second instrument, the second score information including pitch information and duration information of the plurality of second notes; acquiring an actual motion parameter of the second joint in the currently processed evaluation event; and outputting exercise evaluation information according to the actual motion parameter and an actual time parameter corresponding to the actual motion parameter, the exercise evaluation information including first information and/or second information, the first information indicating a difference between the actual motion parameter and the second motion parameter, and the second information indicating a difference between the actual time parameter and the second time parameter.
The manner of determining the second motion parameter and the second time parameter in the second control parameter set according to the pitch information and the duration information of the second notes may refer to the foregoing description of determining the first motion parameter and the first time parameter, which is not repeated here.
The difference between the actual motion parameter and the second motion parameter may specifically include whether the corresponding manipulator joints are the same and whether the motion directions of the manipulator joints are the same, and the difference between the actual time parameter and the second time parameter may specifically refer to whether the time intervals between the motion start times of manipulator joints with the same motion order and the reference time are the same.
In particular, the computing device may further query, according to the number of mismatches between the actual motion parameter and the second motion parameter and the difference value between the time intervals corresponding to the actual time parameter and the second time parameter, a mapping relationship between a plurality of preset proficiency level parameters and a plurality of pieces of difference information (including the number of mismatches and the difference value), and determine the proficiency level corresponding to the number of mismatches and the difference value, thereby implementing a quantitative evaluation of the proficiency level and improving the refinement of the auxiliary training information.
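A hypothetical sketch of such a mapping from difference information to a proficiency level is shown below; the threshold values and level names are placeholders, not values specified by the application:

```python
def proficiency_level(mismatch_count, time_diff):
    """Map difference information (number of mismatched motion
    parameters, time-interval difference in seconds) to a proficiency
    level via a preset threshold table (placeholder values)."""
    levels = [  # (max mismatches, max time difference, level name)
        (0, 0.05, "expert"),
        (2, 0.15, "proficient"),
        (5, 0.30, "intermediate"),
    ]
    for max_mis, max_diff, level in levels:
        if mismatch_count <= max_mis and time_diff <= max_diff:
            return level
    return "beginner"
```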
Furthermore, in other embodiments, the exercise evaluation information may include, in addition to fingering evaluation information determined according to the actual motion parameter, the actual time parameter, and the second control parameter set, pitch evaluation information. Specifically, the computing device may acquire third audio data collected in the evaluation event, and determine the pitch evaluation information according to the third audio data and the standard audio data corresponding to the second score and the second instrument, where the pitch evaluation information may also include error positions. In determining the proficiency level, the proficiency level may be determined based on a combination of the number of mismatches, the magnitude of the difference value, and the number of error positions.
In this example, the computing device may output exercise evaluation information according to the actual motion parameters and actual time parameters of each joint of the manipulator detected during the autonomous motion of the user, together with the second control parameter set corresponding to the second score and the second instrument determined in the evaluation event, so as to indicate the difference between the actual motion parameters and/or actual time parameters and the second control parameter set, which is beneficial to improving the comprehensiveness of the auxiliary training information provided by the computing device.
In one possible example, after the exercise evaluation information is output, the method further comprises acquiring a recommended score paragraph from preset score paragraphs according to the first information and/or the second information, outputting the recommended score paragraph, and configuring the joints of the manipulator according to a motion parameter set corresponding to the recommended score paragraph when a selection operation for the recommended score paragraph is detected.
In a specific implementation, each preset score paragraph may correspond to at least one fingering training type. The computing device may specifically determine the type of the difference information in the first information and/or the second information, and determine, from the plurality of preset score paragraphs, a score paragraph whose fingering training type matches the difference type as the recommended score paragraph; for example, when the difference type includes a motion joint mismatch, a score paragraph whose fingering training type includes motion joint training is determined from the plurality of preset score paragraphs as the recommended score paragraph.
In particular, each preset score paragraph may also correspond to proficiency level information. The computing device may first perform a preliminary screening of the score paragraphs according to the fingering training type, and if multiple score paragraphs remain after the screening, the computing device may further screen the score paragraphs according to the proficiency level of the user in the evaluation event, determining the score paragraphs whose proficiency level information matches the proficiency level of the user in the evaluation event as the recommended score paragraphs.
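The two-stage screening, first by fingering training type and then by proficiency level, may be sketched as follows (all field names and values are hypothetical):

```python
def recommend_paragraphs(paragraphs, difference_types, user_level):
    """paragraphs: list of dicts with 'name', 'training_types', 'level'.
    First screen by fingering training type; if multiple paragraphs
    remain, narrow further by the user's proficiency level."""
    by_type = [p for p in paragraphs
               if set(p["training_types"]) & set(difference_types)]
    if len(by_type) <= 1:
        return by_type
    by_level = [p for p in by_type if p["level"] == user_level]
    # fall back to the type-screened set if no level matches
    return by_level or by_type
```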
In this example, the computing device may recommend a score paragraph for the user according to the difference between the actual motion parameter and the second motion parameter and/or the difference between the actual time parameter and the second time parameter in the exercise evaluation information, and configure the joints of the manipulator according to the motion parameter set corresponding to the score paragraph, so as to drive the user to exercise the recommended score paragraph, which is beneficial to further improving the flexibility of auxiliary training information processing.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a processing device for auxiliary training information in a musical instrument training scene, which is applied to a computing device in a hand function training system, wherein the hand function training system includes the computing device and a manipulator worn by a user, and the processing device 30 for auxiliary training information in the musical instrument training scene includes:
A first obtaining unit 301, configured to obtain, in response to a confirmation operation for a first score and a first instrument in a currently processed band training event, a first control parameter set of the manipulator, where the first score includes a plurality of first notes, the first control parameter set includes a plurality of first parameter sets corresponding to the plurality of first notes and a plurality of first time parameters, each of the first parameter sets includes a first motion parameter of a first joint of the manipulator, and the first motion parameter and the first time parameter are determined according to first score information and the first instrument, the first score information including pitch information and duration information of the plurality of first notes;
a configuration unit 302, configured to configure the first joint of the manipulator according to the first control parameter set, where a movement time of the first joint corresponding to each first parameter set is determined according to the first time parameter corresponding to each first parameter set;
a second acquiring unit 303, configured to acquire first audio data detected in the currently processed band training event;
And an output unit 304, configured to output exercise record information according to the first audio data, where the exercise record information includes an error position, and the error position is used to indicate a note position with a non-matching pitch in the first audio data and target audio data, and the target audio data is standard audio data corresponding to the first music score and the first musical instrument.
In one possible example, the processing device 30 of the auxiliary training information in the instrument training scene is further configured to determine, in response to a selection operation for a target note in the first score after the exercise record information is output according to the first audio data, at least one target parameter set corresponding to the target note from the plurality of first parameter sets, the target note including the note corresponding to the error position, create a rehearsal event according to the at least one target parameter set, the at least one target parameter set being used to configure the first joint of the manipulator in the rehearsal event, acquire second audio data detected in the rehearsal event, and update the exercise record information according to the second audio data.
In one possible example, the target note is determined by determining an error type corresponding to the error position according to the pitch and the acquisition time of the audio data corresponding to the error position, where the error type comprises a duration error or a pitch error, acquiring a segment extraction rule corresponding to the error type, and determining the target note from the plurality of first notes according to the segment extraction rule, where the target note comprises the note corresponding to the error position.
In one possible example, the target note is determined by determining reference measures corresponding to a plurality of error positions from the first score, each reference measure including a note corresponding to an error position, determining a plurality of consecutive score measures from the first score according to the positions of the reference measures, the plurality of score measures including the plurality of reference measures, determining rehearsal measures from the plurality of score measures according to the number of measures spaced between two adjacent reference measures, each rehearsal measure including a reference measure or including a reference measure and a measure adjacent to the reference measure, and determining the notes included in the rehearsal measures as the target notes.
In one possible example, the first score is obtained by identifying a target audio in response to an upload operation for the target audio to obtain at least one audio data corresponding to at least one instrument type, generating at least one reference score corresponding to the at least one instrument type from the at least one audio data, outputting the at least one instrument type, the at least one instrument type including the first instrument, and determining, in response to a confirmation operation for the first instrument, the reference score corresponding to the first instrument of the at least one instrument type as the first score.
In one possible example, the processing device 30 of the auxiliary training information in the instrument training scene is further configured to acquire, in response to a confirmation operation for a second score and a second instrument in a currently processed evaluation event, a second control parameter set of the manipulator corresponding to the second score and the second instrument, the second score including a plurality of second notes, the second control parameter set including a plurality of second parameter sets corresponding to the plurality of second notes and a plurality of second time parameters, each second parameter set including a second motion parameter of a second joint of the manipulator, the second motion parameter and the second time parameter being determined according to second score information including pitch information and duration information of the plurality of second notes; acquire an actual motion parameter of the second joint in the currently processed evaluation event; and output exercise evaluation information according to the actual motion parameter and an actual time parameter corresponding to the actual motion parameter, the exercise evaluation information including first information and/or second information, the first information indicating a difference between the actual motion parameter and the second motion parameter, and the second information indicating a difference between the actual time parameter and the second time parameter.
In one possible example, the processing device 30 of the auxiliary training information in the musical instrument training scene is further configured to acquire, after the exercise evaluation information is output, a recommended score paragraph from preset score paragraphs according to the first information and/or the second information, output the recommended score paragraph, and configure the joints of the manipulator according to a motion parameter set corresponding to the recommended score paragraph when a selection operation for the recommended score paragraph is detected.
In the case of using an integrated unit, the structure of a processing device for auxiliary training information in a musical instrument training scene according to another embodiment of the present application is shown in fig. 8. In fig. 8, the processing device of the auxiliary training information in the musical instrument training scene includes a processing module 310 and a communication module 311. The processing module 310 is configured to control and manage the actions of the processing device of the auxiliary training information in the musical instrument training scene, for example, the steps performed by the first acquiring unit 301, the configuration unit 302, the second acquiring unit 303, and the output unit 304, and/or other processes for performing the techniques described in the present application. The communication module 311 is used to support interaction between the processing device 30 of the auxiliary training information in the musical instrument training scene and other devices. As shown in fig. 8, the processing device of the auxiliary training information in the musical instrument training scene may further include a storage module 312, where the storage module 312 is configured to store the program codes and data of the processing device of the auxiliary training information in the musical instrument training scene.
The processing module 310 may be a processor or controller, such as a central processing unit (Central Processing Unit, CPU), a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, which may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that performs computing functions, for example, a combination comprising one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 311 may be a transceiver, an RF circuit, a communication interface, or the like. The storage module 312 may be a memory.
For all relevant content of each scenario involved in the above method embodiment, reference may be made to the functional description of the corresponding functional module, which is not repeated here. The processing device of the auxiliary training information in the musical instrument training scene may perform the steps performed by the computing device in the processing method of the auxiliary training information in the musical instrument training scene shown in fig. 4.
The embodiment of the present application also provides a computer storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to execute some or all of the steps of any one of the methods described in the above method embodiments.
The embodiments of the present application also provide a computer program product comprising a computer program, where the computer program implements some or all of the steps of any one of the methods in the above method embodiments when executed by a processor. The computer program product may be a software installation package.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, it should be understood by those skilled in the art that the embodiments described in the specification are all preferred embodiments, and the acts and elements referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The memory includes a USB flash drive, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, an optical disk, or another medium that can store program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, etc.
While the embodiments of the present application have been described in detail above, specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is only intended to help understand the method and core ideas of the present application; meanwhile, those skilled in the art may make changes to the specific implementations and application scope in accordance with the ideas of the present application, and in summary, the content of this specification should not be construed as limiting the present application.