
TW201034000A - Automatic grading method for karaoke song singing - Google Patents

Automatic grading method for karaoke song singing

Info

Publication number
TW201034000A
Authority
TW
Taiwan
Prior art keywords
score
scale
pitch
scores
music
Prior art date
Application number
TW098106930A
Other languages
Chinese (zh)
Other versions
TWI394141B (en)
Inventor
Wen-Hsin Lin
Original Assignee
Wen-Hsin Lin
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wen-Hsin Lin filed Critical Wen-Hsin Lin
Priority to TW098106930A priority Critical patent/TWI394141B/en
Publication of TW201034000A publication Critical patent/TW201034000A/en
Application granted granted Critical
Publication of TWI394141B publication Critical patent/TWI394141B/en


Landscapes

  • Auxiliary Devices For Music (AREA)

Abstract

The present invention provides an automatic grading method for karaoke song singing. The method obtains a pitch score, a rhythm score, and an emotion score by comparing the singer's pitch, beat positions, and volume with the pitch, beat positions, and volume of the song's main melody, and then computes a weighted total score by weighted scoring. With this design, the singer's pitch, beat-position, and volume errors in each passage of a song can be accurately calculated, and the displayed pitch and volume curves let the singer easily see which parts were not sung accurately and which parts need improvement. The invention thus provides the dual effects of teaching and entertainment and constitutes a practical advance.

Description

VI. Description of the Invention

[Technical Field]
The present invention relates to an automatic grading method for karaoke song singing, and in particular to an innovative design in which several sub-scores, namely pitch, rhythm, and emotion, are combined into a grade by weighted scoring.

[Prior Art]
Karaoke machines are usually equipped with an automatic scoring function during song accompaniment, but conventional designs only roughly estimate an overall score, and some use nothing more than the decibel level of the singing voice as the sole basis of evaluation. The scores produced by some machines bear almost no relation to how well the song was actually sung. Such scoring offers a little entertainment but cannot genuinely judge singing quality, and is therefore of no real help to a singer practicing a song.

In view of these problems with conventional karaoke accompaniment products, a more ideal and practical design remains a goal the industry has yet to reach. Accordingly, the inventor, drawing on many years of experience in manufacturing, developing, and designing related products, carefully designed and evaluated the present invention with that goal in mind.

[Summary of the Invention]
The main purpose of the present invention is to provide an automatic grading method for karaoke song singing. The problem it addresses is that the automatic scoring function of conventional karaoke accompaniment machines cannot genuinely judge singing quality and is therefore of no help to a singer practicing a song. The technical feature of the invention is that the method obtains a pitch score, a rhythm score, and an emotion score by comparing the singer's pitch, beat positions, and volume with the pitch, beat positions, and volume of the song's main melody, and finally computes a weighted total score by weighted scoring. Compared with the prior art, this innovative design accurately computes the singer's pitch, beat-position, and volume errors in every passage of a song, and the displayed pitch and volume curves let the singer easily see where the singing was inaccurate and what needs to be improved, providing the dual effects of teaching and entertainment and constituting a practical advance.

[Embodiments]
Figures 1 to 16 show preferred embodiments of the automatic grading method for karaoke song singing of the present invention. These embodiments are for illustration only; the patent claims are not limited by their structure. In outline, the method compares the singer's pitch, beat positions, and volume with the pitch, beat positions, and volume of the song's main melody to obtain three scored items, the pitch score, the rhythm score, and the emotion score, and finally computes the weighted total of these items by weighted scoring to obtain the automatic grade.

When a person sings a song, apart from the individual qualities of the voice, judging how well the singing matches the song involves mainly three sensations: first, pitch; second, rhythm; and third, emotion. Pitch sense judges the accuracy of the sung pitch against the pitch of each note. Rhythm sense judges the error in beat positions, including the onset beat and the ending beat. Emotion judges the variation in volume, including the volume change of each sentence and of the song as a whole. The three scores are obtained as follows.

(1) Pitch score:
Referring to Fig. 1, every short interval (for example 0.1 s) the singer's pitch is computed from the microphone signal. This pitch estimate is the fundamental frequency of the voice signal, usually obtained with a method based on the autocorrelation function. A pitch estimator then converts the fundamental frequency into a note (scale step), compares how well this note matches the note extracted from the music's main melody, and gives the note a pitch score. The pitch scores of all notes are computed in this way until the song ends, and the average pitch score is then output.

Fig. 2 shows the details. First comes "initial parameter setting": the note index n = 0, the exact-match count NoteHit = 0, and the near-match count NoteHitAround = 0. NoteHit is the number of time slices during a note in which the voice pitch matches it exactly; NoteHitAround is the number of slices in which the voice pitch is within one semitone of it. Next, the melody note for the next time slice is obtained and the voice pitch for that slice is computed. The melody note is read directly from a file such as MIDI, advancing with time; the voice pitch (fundamental frequency) is converted to a note through a lookup table. For example, the note A4 has a frequency of 440 Hz; each octave up doubles the frequency, so A5 is 880 Hz; an octave contains 12 semitones, and adjacent semitones differ in frequency by a factor of 2^(1/12). Because a voice whose frequency is 2x, 1/2x, or any other whole-octave multiple of a note's frequency gives the same pitch sensation, the computed voice note Note_p is shifted by whole octaves against the melody note Note_m: Note_p is replaced by Note_p + 12*i for an integer i chosen so that -5 <= Note_p - Note_m <= 6. The method then checks whether a new note has begun. If so, the pitch score of the previous note is computed and the parameters are reset: NoteHit = 0, NoteHitAround = 0, and n = n + 1. If not, the melody note and the voice note are compared: if they match within a small tolerance (for example 0.5 semitone), the exact-match count is incremented, NoteHit = NoteHit + 1; otherwise, if they match within a larger tolerance (within one semitone), the near-match count is incremented, NoteHitAround = NoteHitAround + 1. The method then returns to fetch the melody note and voice pitch for the next time slice. The "pitch score of the previous note" is computed as shown in Fig. 3, using the length NoteLength(m) of the preceding main-melody note, m = 0, 1, 2, ..., M-1, where M is the total number of notes.
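The pitch-tracking steps above (autocorrelation-based fundamental-frequency estimation, conversion to a semitone code, and octave folding against the melody note) can be sketched as follows. This is a minimal illustration under the assumptions stated in the comments, not the patented implementation, and the function names are chosen for this sketch only:

```python
import math

def estimate_f0(frame, sr):
    """Pick the lag with the largest autocorrelation in a rough vocal
    range (~80-1000 Hz) and return the corresponding frequency in Hz."""
    best_lag, best_corr = 0, 0.0
    for lag in range(sr // 1000, sr // 80 + 1):
        c = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if c > best_corr:
            best_corr, best_lag = c, lag
    return sr / best_lag if best_lag else 0.0

def freq_to_note(freq_hz):
    """Map a frequency to a semitone code (A4 = 440 Hz -> 69, one unit
    per semitone, 12 per octave), matching the text's example values."""
    return 69 + 12 * math.log2(freq_hz / 440.0)

def fold_to_melody(note_p, note_m):
    """Shift the sung note by whole octaves (12 * i, i an integer) so
    that -5 <= note_p - note_m <= 6, as the method requires."""
    diff = note_p - note_m
    folded_diff = ((diff + 5) % 12) - 5
    return note_m + folded_diff
```

With this folding, a voice sung exactly one octave below the melody is mapped onto the melody note and still counted as a match, which is the stated intent of the octave adjustment.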

If NoteHit(m) > 0, the note receives the high-match pitch score:

PitchScore(m) = PSH + K1 * NoteHit(m) / NoteLength(m)

where PSH and K1 are adjustable empirical parameters. Otherwise it receives the low-match pitch score:

PitchScore(m) = PSL - K2 * NoteHitAround(m) / NoteLength(m)

where PSL and K2 are adjustable empirical parameters, and the score is limited to 0 <= PitchScore(m) <= 100. Finally the method checks whether this is the last note; if not, the procedure above repeats, and if so, the "average pitch score" is computed as the weighted average of all PitchScore(m) with the note lengths NoteLength(m) as weights. Let the total note length be NL = sum_{m=0}^{M-1} NoteLength(m); the average pitch score SOP (Score of Pitch) is then:

SOP = (1 / NL) * sum_{m=0}^{M-1} PitchScore(m) * NoteLength(m)

(2) Rhythm score:
The rhythm score is determined by how well the onset of the sung note matches the onset time of the corresponding main-melody note, and how well the ending of the sung note matches that note's ending time. To estimate the singer's beat positions accurately, changes in the singer's pitch are taken as the times at which different notes are sung, and beat accuracy is judged from them. As shown in Fig. 4, the method, like that of Fig. 1, first estimates the voice pitch and obtains the main-melody notes, then produces an average rhythm score through a rhythm estimator.

The rhythm estimator first converts the voice pitch to a note and compares this note's timing against the note obtained from the main melody. The timing error includes early or late onset beats and ending beats. The time error of every note is recorded and a rhythm score is assigned to the note; this is repeated for all notes until the song ends, and the average rhythm score is then output. As shown in Fig. 5, a rhythm lag matcher and a rhythm lead matcher compare the converted voice note with the current, previous, and next main-melody notes to measure how much the voice lags or leads each note, yielding the onset and ending lag times and lead times; the rhythm score of the note is then computed from them. Starting from the first note, the rhythm error of every note is computed until the last note ends, and the average rhythm score is computed.

Referring to Fig. 6, the rhythm lag matcher first checks whether a new melody note has begun. If not, it checks whether the onset lag time has already been set; if so it finishes, otherwise it checks whether the voice note matches the melody note. If they do not match, the onset lag time is increased; if they match, the onset lag time is fixed and the matcher finishes. This lag is the time by which the voice starts later than the melody note. If a new melody note has begun, the onset lag is reset and the end time of the previous note is recorded; then, if the voice note matches the previous main-melody note, matching is continued slice by slice until it no longer holds, and the ending lag time is set. This lag is the time by which the voice ends later than the previous melody note.

Referring to Fig. 7, the rhythm lead matcher first checks whether a new melody note has begun. If not, and the voice note matches the current melody note, the end time of the voice note is recorded; otherwise the ending lead time is set, the time by which the voice ends earlier than the melody note, and the matcher finishes. If a new melody note has begun, the ending lead is reset and the note start time is recorded; then, if the voice note matches the melody note, matching is traced backwards until it no longer holds, and the onset lead time is set, the time by which the voice starts earlier than the melody note.

From the onset lag, onset lead, ending lag, and ending lead, the note rhythm score SOB (Score of Beat) is computed as follows. Let the onset time error be TDS; the onset beat score SOBS is:

SOBS = As + 100 * (1 - TDS / Ls)

where TDS = onset lag (NoteOnLag) + onset lead (NoteOnLead), and As and Ls are preset empirical parameters. Let the ending time error be TDE; the ending beat score SOBE is:

SOBE = Ae + 100 * (1 - TDE / Le)
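A minimal sketch of the onset and ending beat-score formulas above, and of the combined rhythm score. Clamping the scores to the 0-100 range is an assumption added here, mirroring the limit stated for the pitch score, and the default parameter values follow the worked example later in the text (the example sets Le to the note length, so the default here is a placeholder):

```python
def onset_score(note_on_lag, note_on_lead, As=10.0, Ls=10.0):
    # SOBS = As + 100 * (1 - TDS / Ls), TDS = NoteOnLag + NoteOnLead.
    tds = note_on_lag + note_on_lead
    return min(100.0, max(0.0, As + 100.0 * (1.0 - tds / Ls)))

def ending_score(note_off_lag, note_off_lead, Ae=50.0, Le=20.0):
    # SOBE = Ae + 100 * (1 - TDE / Le), TDE = NoteOffLag + NoteOffLead.
    tde = note_off_lag + note_off_lead
    return min(100.0, max(0.0, Ae + 100.0 * (1.0 - tde / Le)))

def rhythm_score(sobs, sobe, R=0.5):
    # SOB = SOBS * R + SOBE * (1 - R), with 0 <= R <= 1. The worked
    # example (SOBS = 93.19, SOBE = 99.82, R = 0.5 -> SOB = 96.5)
    # is consistent with this (1 - R) weighting.
    return sobs * R + sobe * (1.0 - R)
```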

where TDE = ending lag (NoteOffLag) + ending lead (NoteOffLead), and Ae and Le are preset empirical parameters. The note rhythm score SOB is:

SOB = SOBS * R + SOBE * (1 - R)

where R is a preset weighting parameter with 0 <= R <= 1.

(3) Emotion score:
Emotion is a quality that is difficult to measure objectively. Here it is determined by how well the average amplitude of the voice matches the average amplitude of the music's main melody. The average amplitude of the voice is obtained by computing the RMS (root mean square) of each voice segment; the average amplitude of the main melody can likewise be obtained by computing the RMS of each main-melody segment, or taken directly from the amplitude parameters of the synthesized music data.
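The RMS computation used for these amplitude sequences can be sketched in a few lines; this is simply the standard root-mean-square of one segment's samples:

```python
import math

def segment_rms(samples):
    """RMS of one ~0.1 s sound segment: sqrt((1/K) * sum of x(i)^2),
    where K is the number of samples in the segment."""
    if not samples:
        return 0.0
    return math.sqrt(sum(x * x for x in samples) / len(samples))
```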

The RMS of one sound segment x(i), i = 0, 1, ..., K-1, where K is the number of sound samples in the segment, is:

RMS = sqrt( (1 / K) * sum_{i=0}^{K-1} x(i)^2 )

In practice this value can be replaced by other measures such as the average amplitude or the maximum amplitude. As shown in Fig. 8, the emotion score estimator computes the RMS of the voice signal and of the music's main melody once every interval (about 0.1 s), giving the RMS sequences MicVol(n) and MelVol(n), n = 0, 1, ..., N-1, where n indexes the time slice and N is the total length of the song. The energy level of MicVol(n) is adjusted to match that of MelVol(n), and both sequences are then averaged over the length of each note, giving the average RMS sequences AvgMicVol(m) and AvgMelVol(m) for the m-th note of the voice and the music respectively. From AvgMicVol(m) and AvgMelVol(m) the emotion score SOE (Score of Emotion) is computed. First, the overall degree of match SOET between the voice amplitude curve and the music amplitude curve, which represents the overall emotional-variation score, is obtained:

SOET = 100 * [ sum_{m=0}^{M-1} AvgMicVol(m) * AvgMelVol(m) ] / sqrt( sum_{m=0}^{M-1} AvgMicVol(m)^2 * sum_{m=0}^{M-1} AvgMelVol(m)^2 )

where M is the total number of notes; by the Cauchy-Schwarz inequality, SOET <= 100.

Next, the per-sentence emotion scores are computed. AvgMicVol(m) and AvgMelVol(m) are first cut into sentences. Let the starting note of the j-th lyric sentence be S(j), j = 0, 1, ..., L-1, where L is the total number of lyric sentences, and let S(L) = M. The emotional-variation score of each sentence is then:

SOES(j) = 100 * [ sum_{m=S(j)}^{S(j+1)-1} AvgMicVol(m) * AvgMelVol(m) ] / sqrt( sum_{m=S(j)}^{S(j+1)-1} AvgMicVol(m)^2 * sum_{m=S(j)}^{S(j+1)-1} AvgMelVol(m)^2 )

for j = 0, 1, 2, ..., L-1. Then the relative emotional-variation score of each sentence, the change of each sentence's volume relative to the overall volume, is computed. First let the per-sentence and overall gains be:

A(j) = [ sum_{m=S(j)}^{S(j+1)-1} AvgMicVol(m) * AvgMelVol(m) ] / [ sum_{m=S(j)}^{S(j+1)-1} AvgMicVol(m)^2 ]

A = [ sum_{m=0}^{M-1} AvgMicVol(m) * AvgMelVol(m) ] / [ sum_{m=0}^{M-1} AvgMicVol(m)^2 ]

then

SOEA(j) = 100 * A(j) / A, j = 0, 1, 2, ..., L-1, limited to 0 <= SOEA(j) <= 100.
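The SOET and SOES matching above is a normalized cross-correlation of per-note average-RMS sequences scaled to 100. A sketch under that reading follows; the helper names are chosen for this sketch, not taken from the patent:

```python
import math

def amplitude_match(mic, mel):
    """100 * normalized correlation of two equal-length average-RMS
    sequences; by the Cauchy-Schwarz inequality it never exceeds 100."""
    num = sum(a * b for a, b in zip(mic, mel))
    den = math.sqrt(sum(a * a for a in mic) * sum(b * b for b in mel))
    return 100.0 * num / den if den else 0.0

def sentence_scores(avg_mic, avg_mel, starts):
    """SOES(j) per lyric sentence; starts = [S(0), ..., S(L-1), M]."""
    return [amplitude_match(avg_mic[s:e], avg_mel[s:e])
            for s, e in zip(starts, starts[1:])]
```

Two sequences that are exact scalar multiples of each other score 100, so a singer whose volume curve has the right shape but a different overall level is not penalized by SOET or SOES; the level difference is what SOEA measures.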

From the above, the average emotion score is:

SOE = α * SOET + (1 / L) * sum_{j=0}^{L-1} ( β * SOES(j) + γ * SOEA(j) )

where α, β, and γ are weighting coefficients with α + β + γ = 1.

(4) Weighted total score (see Fig. 9):
From the SOP, SOB, and SOE above, the weighted total score AES (Average Evaluated Score) is obtained as follows:

AES = p * SOP + q * SOB + r * SOE

where p, q, and r are weighting coefficients with p + q + r = 1.

Worked example:
Taking one song as an example, every 0.1 s the voice pitch MicPitch(n) and voice RMS average MicVol(n) are computed, while the pitch MelNote(n) of the main-melody note is extracted and its RMS average MelVol(n) is computed, n = 0, 1, 2, ..., N-1, where N is the total length of the song. Without loss of generality, and for convenience of explanation, take N = 280, i.e. a song 28 seconds long. Fig. 10 plots MicPitch(n) and MelNote(n). The solid line is the pitch of the main-melody notes; the vertical axis is the pitch code, with one integer per semitone (60 denotes middle Do, 61 denotes Do sharp, 69 denotes middle La, and so on). The dots are the pitches computed from the voice, converted to note codes and already octave-adjusted (by multiples of 12) so that the voice pitch lies closest to the main-melody pitch. Each segment of the solid line is one sustained note, its height showing the note's pitch. A main-melody note of -1 denotes a rest or an empty note and is skipped; a dot at zero means no pitch could be computed for the voice at that point (for example breath, silence, or noise), which is treated as no sound.

First, from the pitch-score algorithm above, the high-match counts NoteHit(m) (circles in Fig. 11) and near-match counts NoteHitAround(m) (triangles in Fig. 11) are obtained for each note m. With PSH = 50, K1 = 100, PSL = 35, and K2 = 50, the pitch score of each note is obtained (rectangles in Fig. 11); after weighted averaging over the note lengths (stars in Fig. 11), ScoreOfPitch (SOP) = 98.

Next, from the rhythm-score algorithm, NoteOnLag(m) (circles) and NoteOnLead(m) (stars) are obtained; with As = 10 and Ls = 10, BeatOnScore(m) (rectangles) is computed, as shown in Fig. 12. Likewise NoteOffLag(m) (circles) and NoteOffLead(m) (stars) are obtained; with Ae = 50 and Le = NoteLength (the note length), BeatOffScore(m) (circles) is computed, as shown in Fig. 13. After weighted averaging over the note lengths, ScoreOfBeatStart (SOBS) = 93.19 and ScoreOfBeatEnd (SOBE) = 99.82; with R = 0.5, SOB = 96.5.

Then, from the emotion-score algorithm, the RMS sequences MelVol(n) (curve L1 in Fig. 14) and MicVol(n) (curve L2 in Fig. 14) of the voice and the music main melody are obtained, and the energy level of MicVol(n) is adjusted to match MelVol(n), as shown in Fig. 14. Averaging over the length of each note gives the average RMS sequences of the m-th note, AvgMelVol(m) (curve L3 in Fig. 15) and AvgMicVol(m) (curve L4 in Fig. 15). Setting the weighting coefficients yields SOET = 98.33. The per-sentence scores SOES(j) (curve L5 in Fig. 16) and SOEA(j) (curve L6 in Fig. 16), j = 0, 1, 2, ..., L-1 with L = 6 sentences in total, average to SOES = 97.2 and SOEA = 95.67; after weighting, ScoreOfEmotion (SOE) = 97.24.

Finally, setting the weighting coefficients p = 0.6, q = 0.2, and r = 0.2 gives the weighted total score:

AES = p * SOP + q * SOB + r * SOE = 97.55

Advantages of the invention:
The automatic grading method for karaoke song singing of the present invention obtains a pitch score, a rhythm score, and an emotion score by comparing the singer's pitch, beat positions, and volume with the pitch, beat positions, and volume of the song's main melody, and then computes a weighted total score by weighted scoring. Compared with the prior art, this innovative design accurately computes the singer's pitch, beat-position, and volume errors in every passage of a song, and the displayed pitch and volume curves let the singer easily see where the singing was inaccurate and what needs to be improved, achieving the practical advance of combined teaching and entertainment.

The embodiments disclosed above serve to illustrate the invention in concrete terms. Although specific terminology is used, it does not limit the scope of the patent claims. Those skilled in the art, having understood the spirit and principles of the invention, may make changes and modifications to equivalent effect, and all such changes and modifications fall within the scope of the patent claims set out below.

[Brief Description of the Drawings]
Fig. 1: first block diagram of the pitch-score method of the invention.
Fig. 2: second block diagram of the pitch-score method of the invention.
Fig. 3: third block diagram of the pitch-score method of the invention.
Fig. 4: first block diagram of the rhythm-score method of the invention.
Fig. 5: second block diagram of the rhythm-score method of the invention.
Fig. 6: third block diagram of the rhythm-score method of the invention.
Fig. 7: fourth block diagram of the rhythm-score method of the invention.
Fig. 8: block diagram of the emotion-score method of the invention.
Fig. 9: block diagram of the automatic grading estimation method of the invention.
Fig. 10: first reference chart of the worked example of the invention.
Fig. 11: second reference chart of the worked example of the invention.
Fig. 12: third reference chart of the worked example of the invention.
Fig. 13: fourth reference chart of the worked example of the invention.
Fig. 14: fifth reference chart of the worked example of the invention.
Fig. 15: sixth reference chart of the worked example of the invention.
Fig. 16: seventh reference chart of the worked example of the invention.

[Description of Main Element Symbols]
Note: no element symbols.
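The final weighted-total step of the worked example above can be checked with a few lines; the assertion on p + q + r mirrors the stated constraint that the coefficients sum to 1:

```python
def weighted_total(sop, sob, soe, p=0.6, q=0.2, r=0.2):
    # AES = p * SOP + q * SOB + r * SOE, with p + q + r = 1.
    assert abs(p + q + r - 1.0) < 1e-9
    return p * sop + q * sob + r * soe

# Reproduces the worked example: SOP = 98, SOB = 96.5, SOE = 97.24.
aes = weighted_total(98.0, 96.5, 97.24)
```

Here aes evaluates to 97.548, which the source rounds to 97.55.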

Claims (1)

VII. Scope of the patent application:
1. A karaoke song singing automatic scoring method, which mainly compares the singer's pitch, beat positions and volume with the pitch, beat positions and volume of the music's main melody to obtain a pitch score, a rhythm score and an emotion score as scoring items, and finally calculates a weighted total score of these scoring items by weighted scoring to obtain the automatic score.
2. The karaoke song singing automatic scoring method according to claim 1, wherein the pitch score is obtained by estimating the singer's pitch once every short interval from the microphone audio sung by the singer; the pitch estimation obtains the fundamental frequency of the vocal signal, first converts the fundamental frequency into a corresponding scale through a pitch estimator, then compares the degree of match between that scale and the scale extracted from the music's main melody and gives the scale a pitch score; the pitch scores of all scales are calculated in this way until the singing ends, whereupon an average pitch score is output.
3. The karaoke song singing automatic scoring method according to claim 2, wherein the pitch estimation can be obtained using a method based on the autocorrelation function.
4. The karaoke song singing automatic scoring method according to claim 1, wherein the rhythm score is determined by calculating the degree of match between the singer's vocal onset beat points and the onset times of the scales of the music's main melody, and between the vocal ending beat points and the ending times of the scales of the music's main melody.
5. The karaoke song singing automatic scoring method according to claim 1, wherein the emotion score is determined by calculating the degree of match between the average amplitude of the vocal and the average amplitude of the music's main melody; the average amplitude of the vocal is obtained by calculating the RMS (Root Mean Square) value of each vocal sound segment, and the average amplitude of the music's main melody can be obtained by calculating the RMS value of each main-melody sound segment or directly from the amplitude parameters in the synthesized music information.
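As a rough illustration of the techniques named in claims 2, 3 and 5, the sketch below estimates a fundamental frequency with a plain autocorrelation search, maps it to an equal-tempered scale number (MIDI note convention, which the claims do not mandate), and computes an RMS amplitude. The function names, sample rate and synthetic test tone are assumptions for illustration, not part of the patent.

```python
import math

def estimate_f0(samples, sample_rate, f_min=66.0, f_max=200.0):
    """Estimate the fundamental frequency of one audio frame by finding the
    lag that maximizes the autocorrelation function (as in claim 3)."""
    lag_min = int(sample_rate / f_max)
    lag_max = int(sample_rate / f_min)
    n = len(samples) - lag_max  # fixed comparison window for every lag
    best_lag, best_r = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        r = sum(samples[i] * samples[i + lag] for i in range(n))
        if r > best_r:
            best_r, best_lag = r, lag
    return sample_rate / best_lag

def hz_to_scale(f0):
    """Map a frequency to the nearest equal-tempered scale step,
    here using MIDI note numbering (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(f0 / 440.0))

def rms(samples):
    """Root-mean-square amplitude of one sound segment (as in claim 5)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Synthetic 100 Hz tone at an 8 kHz sample rate (assumed test values).
sr = 8000
tone = [math.sin(2 * math.pi * 100 * n / sr) for n in range(1000)]
print(round(estimate_f0(tone, sr), 1))  # 100.0
print(hz_to_scale(440.0))               # 69
print(round(rms(tone[:800]), 3))        # 0.707
```

A production scorer would add framing, silence detection and octave-error handling; this sketch only shows the core of each per-frame measurement before the per-scale matching and weighting steps described in the claims.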
TW098106930A 2009-03-04 2009-03-04 Karaoke song accompaniment automatic scoring method TWI394141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW098106930A TWI394141B (en) 2009-03-04 2009-03-04 Karaoke song accompaniment automatic scoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW098106930A TWI394141B (en) 2009-03-04 2009-03-04 Karaoke song accompaniment automatic scoring method

Publications (2)

Publication Number Publication Date
TW201034000A true TW201034000A (en) 2010-09-16
TWI394141B TWI394141B (en) 2013-04-21

Family

ID=44855379

Family Applications (1)

Application Number Title Priority Date Filing Date
TW098106930A TWI394141B (en) 2009-03-04 2009-03-04 Karaoke song accompaniment automatic scoring method

Country Status (1)

Country Link
TW (1) TWI394141B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI232430B (en) * 2004-03-19 2005-05-11 Sunplus Technology Co Ltd Automatic grading method and device for audio source

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI419150B (en) * 2011-03-17 2013-12-11 Univ Nat Taipei Technology Singing and grading system
TWI497484B (en) * 2012-04-18 2015-08-21 Yamaha Corp Performance evaluation device, karaoke device, server device, performance evaluation system, performance evaluation method and program
CN113744721A (en) * 2021-09-07 2021-12-03 腾讯音乐娱乐科技(深圳)有限公司 Model training method, audio processing method, device and readable storage medium
CN113744721B (en) * 2021-09-07 2024-05-14 腾讯音乐娱乐科技(深圳)有限公司 Model training method, audio processing method, device and readable storage medium

Also Published As

Publication number Publication date
TWI394141B (en) 2013-04-21

Similar Documents

Publication Publication Date Title
US8626497B2 (en) Automatic marking method for karaoke vocal accompaniment
CN101859560B (en) Automatic Scoring Method for Karaoke Song Accompaniment
US8802953B2 (en) Scoring of free-form vocals for video game
CN106095925B (en) A kind of personalized song recommendations method based on vocal music feature
JP2013222140A5 (en)
CN104170006A (en) Performance evaluation device, karaoke device, and server device
Larrouy-Maestri et al. The evaluation of vocal pitch accuracy: The case of operatic singing voices
TW201034000A (en) Automatic grading method for karaoke song singing
TWI419150B (en) Singing and grading system
JP6365483B2 (en) Karaoke device, karaoke system, and program
JP6304650B2 (en) Singing evaluation device
TW201027514A (en) Singing synthesis systems and related synthesis methods
JP2008268369A (en) Vibrato detecting device, vibrato evaluating device, vibrato detecting method, and vibrato evaluating method, and program
JP5447624B2 (en) Karaoke equipment
TWI304569B (en)
JP2016180965A (en) Evaluation device and program
JP5983670B2 (en) Program, information processing apparatus, and data generation method
JP6454512B2 (en) Equipment with guitar scoring function
JP2010504563A (en) Automatic sound adjustment method and system for music accompaniment apparatus
JP5618743B2 (en) Singing voice evaluation device
Nix Why fry? An exploration of the lowest vocal register in amplified and unamplified singing
TWI232430B (en) Automatic grading method and device for audio source
KR102673570B1 (en) Methods and Apparatus for calculating song scores
WO2007045123A1 (en) A method for keying human voice audio frequency
CN1953051B (en) Human voice audio tuning method

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees