
TW202000124A - Algorithmic method for extracting human pulse rate from compressed video data of a human face - Google Patents


Info

Publication number
TW202000124A
Authority
TW
Taiwan
Prior art keywords
signal
frequency
video
overlapping
face
Prior art date
Application number
TW107120251A
Other languages
Chinese (zh)
Other versions
TWI653027B (en)
Inventor
林俊良
趙昶辰
陳偉海
Original Assignee
國立中興大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立中興大學 filed Critical 國立中興大學
Priority to TW107120251A priority Critical patent/TWI653027B/en
Application granted granted Critical
Publication of TWI653027B publication Critical patent/TWI653027B/en
Publication of TW202000124A publication Critical patent/TW202000124A/en

Landscapes

  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

This invention comprises the following steps: a first overlap-cutting step, a first processing step, a first overlap-reconstruction step, a second overlap-cutting step, a second processing step, and a second overlap-reconstruction step. Based on these steps, single-channel data separation is used to obtain the biological information. Only the green channel is considered, because it is the least affected by compression; it is then processed by singular spectrum analysis (SSA), a twofold-relationship screening step, and a spectral-masking step, after which the final heart rate is obtained. This invention has the following advantages: 1) its algorithm for extracting the human pulse rate from a human face is novel; 2) its scope of application is wide; and 3) the single-channel separation method minimizes the volume of video data to be transmitted.

Description

Heart-rate extraction algorithm for compressed facial video

The present invention relates to a heart-rate extraction algorithm for compressed facial video, and in particular to one that is novel, widely applicable, and, by virtue of its single-channel signal-separation method, greatly reduces the volume of video data that must be transmitted.

In 2008, Wim Verkruysse et al. showed that heart-rate waveforms can be detected with natural light and a consumer-grade camera, allowing physiological information to be analyzed remotely. In 2010, Ming-Zher Poh et al. applied blind source separation to the color signals in video to extract the human heart-rate signal. In 2012, Hao-Yu Wu et al. proposed the Eulerian video magnification algorithm, which amplifies the subtle skin-color changes in a video stream until they become visible to the human eye. In 2013, Guha Balakrishnan et al. pointed out that the rhythmic beating of the heart causes tiny head movements, so heart rate can be detected by tracking those movements in video. In 2015, the City University of Hong Kong and the Eindhoven University of Technology each developed measurement techniques that place fewer restrictions on a non-moving subject, so that, for example, a reliable heart-rate value can still be obtained while the subject turns or shakes his or her head. In 2016, Tulyakov et al. proposed a self-adaptive matrix completion method that, while measuring the heart rate, automatically detects which facial regions contain the heart-rate signal and uses only those regions for the measurement.

All of the above methods take uncompressed video as their input and use three-channel signal separation to isolate the heart-rate signal from the raw signal. When the video is compressed, the compression algorithm severely distorts the heart-rate signal, and none of these methods can recover accurate heart-rate information. In 2016, Hanfland et al. compressed raw videos and compared them with the originals; their results show that the heart-rate signal survives compression, but its overall quality drops sharply. In 2017, McDuff et al. compressed raw video to various bit rates with two codecs (x264 and x265) and showed that compression markedly lowers the signal-to-noise ratio of the heart-rate signal.

Moreover, current remote photoplethysmography (rPPG) methods all operate on uncompressed video, extracting the heart-rate information hidden in the face from raw image data. One drawback of this approach is storage: uncompressed video is enormous. For example, one minute of 640x480 video at 30 fps requires about 1.7 GB of storage, and such demands inevitably waste resources. A second drawback is that uncompressed video cannot be transmitted over long distances at all, which greatly limits the applicability of rPPG: current networks cannot stream uncompressed video in real time, so existing rPPG techniques cannot be used wherever real-time long-distance video transmission is required. In practice, video sent over long distances is almost always compressed to reduce the data volume, most commonly with one of four codecs: x264, x265, vp8, and vp9. From such compressed video, however, the existing methods can hardly obtain accurate heart-rate information. In view of this, a technique that overcomes these conventional shortcomings must be developed.

The purpose of the present invention is to provide a heart-rate extraction algorithm for compressed facial video that is novel, widely applicable, and, through its single-channel signal-separation method, greatly reduces the volume of video data to be transmitted. In particular, the problem the invention solves is that uncompressed video data cannot be transmitted over long distances. The technical means for solving this problem is a heart-rate extraction algorithm for compressed facial video comprising the following steps:

1. A first overlap-cutting step.
2. A first processing step: [a] a preprocessing step; [b] a band-pass filtering step; [c] a first singular spectrum analysis step; [d] a twofold-relationship screening step; [e] a reconstruction step.
3. A first overlap-add step.
4. A second overlap-cutting step.
5. A second processing step: [f] a frequency-mask construction step; [g] a second singular spectrum analysis step; [h] a frequency-mask screening step.
6. A second overlap-add step.

The invention is explained in detail below through the following embodiments and drawings:
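As a preview of the first step, the overlap-cutting arithmetic can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name and the zero-valued placeholder signal are ours.

```python
import numpy as np

def overlap_cut(signal, seg_len, step):
    """Cut a 1-D signal into overlapping segments of seg_len samples,
    advancing by step samples each time (seg_len > step gives overlap)."""
    segments = []
    start = 0
    while start + seg_len <= len(signal):
        segments.append(signal[start:start + seg_len])
        start += step
    return np.array(segments)

# The description's example: a 600 s clip, 3 s windows, a 1.5 s step.
# At 30 fps that is 18000 frames, 90-frame windows, 45-frame steps.
fps = 30
video = np.zeros(600 * fps)
segs = overlap_cut(video, 3 * fps, int(1.5 * fps))
print(len(segs))  # 399 segments, matching the example in the description
```

The segment count follows from floor((T1 - T2) / T3) + 1 = (600 - 3) / 1.5 + 1 = 399.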

Referring to Figures 1 and 2, the present invention is a heart-rate extraction algorithm for compressed facial video, comprising the following steps:

1. First overlap-cutting step S1: Referring to Figure 3, a compressed facial video M of duration T1 is obtained and overlap-cut into a plurality of short clips M1, each of duration T2. Overlap cutting is defined as follows: the first clip M1 is taken from the start of the compressed video M, and a further clip M1 is taken after every step interval T3, repeating until the end of M, thereby yielding a plurality of clips M1. For example, if M is 600 seconds long (duration T1), each clip M1 is 3 seconds long (duration T2), and the step interval T3 is 1.5 seconds, then the video is overlap-cut into 399 clips M1.

2. First processing step S2, performed on each clip M1 in turn:
[a] Preprocessing step S21: Each clip M1 contains N frames of original images K (see Figure 4). Face-region tracking (a well-known technique; see Figures 5A and 5B) is applied to every frame K to obtain a face region P of X by Y pixels. The green values of these X-by-Y pixels are averaged into a scalar, defined as the per-frame average green value. Repeating this for all frames yields N per-frame average green values, from which a raw signal G0(t), t = 1 to N, is formed (see Figure 6).
[b] Band-pass filtering step S22: The raw signal G0(t) is band-pass filtered, retaining the components between 0.8 Hz and 2.0 Hz (see Figures 7 and 8), to obtain a first signal G1(t), t = 1 to N (see Figure 9).
[c] First singular spectrum analysis (SSA) step S23: The first signal G1(t) is decomposed into a plurality of first subsequences G1m (see Figure 10). A fast Fourier transform is applied to every subsequence G1m to obtain its spectrum, and the frequency with the largest amplitude in each spectrum is taken as that subsequence's first main frequency G1f. Singular spectrum analysis is a known technique and can be carried out with commonly available application software (for example MATLAB), so its details are not repeated here.
[d] Twofold-relationship screening step S24: The first subsequences G1m are compared pairwise. If two first main frequencies G1f stand in a twofold relationship, both are retained; otherwise both are discarded. If no pair satisfies the twofold relationship, all first subsequences G1m are retained. This yields at least two first retained subsequences.
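Steps [c] and [d] can be sketched as below: find each subsequence's dominant frequency by FFT, then keep pairs whose frequencies stand in an (approximate) 1:2 ratio, i.e. a pulse fundamental and its first harmonic. The tolerance `tol` and all names are our illustrative choices, not values from the patent.

```python
import numpy as np

def main_frequency(x, fs):
    """Dominant frequency of a real signal via the FFT peak (DC excluded)."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[0] = 0.0  # ignore the DC bin
    return freqs[np.argmax(spec)]

def twofold_screen(subsequences, fs, tol=0.05):
    """Keep subsequences whose main frequencies form an approximate 1:2
    ratio with some other subsequence; if no pair qualifies, keep
    everything, as step [d] prescribes."""
    mains = [main_frequency(s, fs) for s in subsequences]
    keep = set()
    for i in range(len(mains)):
        for j in range(i + 1, len(mains)):
            lo, hi = sorted((mains[i], mains[j]))
            if lo > 0 and abs(hi / lo - 2.0) < tol:
                keep.update((i, j))
    if not keep:
        keep = set(range(len(subsequences)))
    return [subsequences[k] for k in sorted(keep)]

fs, t = 30, np.arange(90) / 30          # a 3 s window at 30 fps
subs = [np.sin(2 * np.pi * 1.0 * t),     # 1.0 Hz fundamental (60 bpm)
        np.sin(2 * np.pi * 2.0 * t),     # 2.0 Hz first harmonic
        np.sin(2 * np.pi * (5 / 3) * t)] # unrelated component
kept = twofold_screen(subs, fs)
print(len(kept))  # 2: the 1.0 Hz / 2.0 Hz pair is retained
```

The test frequencies are chosen to fall exactly on FFT bins (the bin spacing for 90 samples at 30 fps is 1/3 Hz), so the peak picking is exact.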
[e] Reconstruction step S25: All first retained subsequences from the previous step are recombined to obtain a second signal G2(t), t = 1 to N (see Figure 11), whose horizontal axis is time and vertical axis is green-value intensity; G2(t) has the duration T2.

3. First overlap-add step S3: Referring to Figure 12, the plurality of second signals G2(t) obtained from the first processing step S2 are combined with the standard raised-cosine-window overlap-add technique to obtain an overlap-added second signal G22 of duration T1. Overlap-adding is defined as adding adjacent second signals G2(t) with an overlap equal to the step interval T3: each short signal (for example, a second signal G2(t)) is multiplied by a raised-cosine (Hanning) window, and the windowed results are summed into a single long signal (for example, the overlap-added second signal G22). In practice this can be carried out with commonly available application software (for example MATLAB), so the details are not repeated here.

4. Second overlap-cutting step S4: Referring to Figure 13, the overlap-added second signal G22 of duration T1 is overlap-cut into a plurality of short signals, each of duration T2. As before, the first short signal is taken from the start of G22 and a further short signal is taken after every step interval T3, repeating until the end of G22, thereby yielding a plurality of short signals. Each short signal is defined as a third signal G3(t), t = 1 to N.

5. Second processing step S5, performed on each third signal G3(t) in turn:
[f] Frequency-mask construction step S51: The frequency with the largest amplitude in G3(t) is taken as its center frequency G3j (see Figure 14). Around this center frequency an upper pass frequency G3ju and a lower pass frequency G3jd are set, defining a frequency-mask range.
[g] Second singular spectrum analysis (SSA) step S52: The third signal G3(t) is decomposed into a plurality of second subsequences G3m (see Figure 15). A fast Fourier transform is applied to every subsequence G3m to obtain its spectrum, and the frequency with the largest amplitude in each spectrum is taken as that subsequence's second main frequency G3f.
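The frequency-mask construction and screening of steps [f] and [h] can be sketched as follows. The patent leaves the upper and lower pass limits to the implementer, so the symmetric `half_width` here is purely an illustrative assumption, as are all the names.

```python
import numpy as np

def main_frequency(x, fs):
    """Dominant frequency of a real signal via the FFT peak (DC excluded)."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[0] = 0.0
    return freqs[np.argmax(spec)]

def frequency_mask_screen(signal, subsequences, fs, half_width=0.2):
    """Steps [f] and [h]: center the mask on the signal's dominant
    frequency, then keep only the subsequences whose own dominant
    frequency falls inside [center - half_width, center + half_width].
    If nothing survives, fall back to the unscreened signal (step [h])."""
    center = main_frequency(signal, fs)
    kept = [s for s in subsequences
            if abs(main_frequency(s, fs) - center) <= half_width]
    return kept if kept else [signal]

fs, t = 30, np.arange(90) / 30
g3 = np.sin(2 * np.pi * 1.0 * t)            # dominant component at 1.0 Hz
subs = [np.sin(2 * np.pi * 1.0 * t),        # inside the mask
        0.5 * np.sin(2 * np.pi * 2.0 * t)]  # outside the mask
print(len(frequency_mask_screen(g3, subs, fs)))  # 1
```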
[h] Frequency-mask screening step S53: Only the second subsequences G3m whose second main frequency G3f lies within the frequency-mask range are retained, yielding at least one second retained subsequence; these become a fourth signal G4(t), t = 1 to N, whose horizontal axis is time and vertical axis is green-value intensity. If the screening retains no second subsequence, the fourth signal G4(t) is simply set equal to the third signal G3(t).

6. Second overlap-add step S6: Referring to Figure 16, the plurality of fourth signals G4(t) obtained from the second processing step S5 are combined with the raised-cosine-window overlap-add technique to obtain an overlap-added fourth signal G44 of duration T1, which is the final heart-rate signal.

In practice, two video devices 10 and a network connection device 20 are provided for the image-acquisition step S1; the two video devices 10 communicate with each other through the network connection device 20. The compression format of the facial video is one of x264, x265, vp8, and vp9.

The aforementioned singular spectrum analysis (SSA) is a well-known technique and proceeds as follows.

Input: a vector $y = (y_1, \ldots, y_N)$ of length $N$.

Step 1: Hankel-matrix embedding. $y$ is converted into the trajectory matrix

$$X = \begin{pmatrix} y_1 & y_2 & \cdots & y_K \\ y_2 & y_3 & \cdots & y_{K+1} \\ \vdots & & & \vdots \\ y_L & y_{L+1} & \cdots & y_N \end{pmatrix},$$

where $K = N - L + 1$; the numbers of rows $L$ and columns $K$ are set by the user.

Step 2: Singular value decomposition (SVD) of the matrix $X$:

$$X = \sum_{i=1}^{r} \sigma_i u_i v_i^{\mathsf T},$$

where the $\sigma_i$ are the singular values of $X$, $u_i$ and $v_i$ the corresponding singular vectors, and $r$ the rank of $X$. Writing $X_i = \sigma_i u_i v_i^{\mathsf T}$ for the rank-1 submatrices gives $X = X_1 + X_2 + \cdots + X_r$.

Step 3: Reconstruction by diagonal averaging, which converts the matrix sum above into a sum of vectors, $y = \tilde{y}_1 + \tilde{y}_2 + \cdots + \tilde{y}_r$: each $\tilde{y}_i$ is obtained by averaging $X_i$ along its anti-diagonals. Each $\tilde{y}_i$ is called a reconstructed component (RC).

Step 4: The RCs that meet the application's needs are selected from $\tilde{y}_1, \ldots, \tilde{y}_r$ and summed into the final time series.

For example, in the present algorithm $y$ is the time series obtained from one time window of the green channel. If the time window (duration T2) is 3 s and the frame rate is 30 fps, then $N = 90$. $L$ is usually set to half of $N$, i.e. 45, which gives $K = 46$. After the SVD, $r$ is at most the smaller of $L$ and $K$, i.e. $r = 45$, so there are 45 RCs; only the first 20 need be considered, to speed up the computation.

The key point of the present invention is that it fully accounts for the effect of compression on physiological signals by extracting them through single-channel signal separation, processing only the channel least affected by the compression algorithm (the G channel): only the green values of the X-by-Y pixels of the face region are captured and averaged. The benefit is that the heart-rate signal is largely shielded from the compression algorithm, so that it can be extracted accurately and stably from compressed video. In addition, singular spectrum analysis (SSA) exploits the frequency structure of the heart-rate signal to identify it effectively within the noisy mixed signal, and the frequency mask then filters out further noise to yield a still more accurate heart-rate signal. By extracting the heart-rate signal from the mixed signal with these three methods, the invention both secures the accuracy of the heart-rate signal and effectively avoids the influence of the compression algorithm on its computation.
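The SSA procedure just described (Hankel embedding, SVD, diagonal averaging) can be sketched in a few lines. This is a generic textbook SSA, not the patented code; the function name and the synthetic test signal are ours. A useful sanity check is that the sum of all RCs reconstructs the original series exactly, since the SVD is exact and diagonal averaging is linear.

```python
import numpy as np

def ssa_decompose(y, L):
    """Basic singular spectrum analysis: Hankel embedding with window
    length L, SVD, and diagonal averaging of each rank-1 term into a
    reconstructed component (RC). Returns an array of RCs whose sum
    recovers the original series."""
    N = len(y)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column k is y[k : k+L]
    X = np.column_stack([y[k:k + L] for k in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    rcs = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i, :])  # rank-1 submatrix
        # Diagonal averaging: average Xi over each anti-diagonal
        rc = np.array([np.mean(Xi[::-1, :].diagonal(k))
                       for k in range(-(L - 1), K)])
        rcs.append(rc)
    return np.array(rcs)

# The description's numbers: a 3 s window at 30 fps gives N = 90, L = 45.
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * 1.2 * np.arange(90) / 30) + 0.1 * rng.standard_normal(90)
rcs = ssa_decompose(y, L=45)
print(np.allclose(rcs.sum(axis=0), y))  # True
```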
For the experimental results, refer to Figures 17A to 18D.

For a static video (compressed with vp8 at a bit rate of 100 kb/s): after only the band-pass filtering of the third step (filter), the frequency-domain waveform of the result is shown in Figure 17A; after continuing with the first singular spectrum analysis and twofold-relationship screening (filter+SSA), it is shown in Figure 17B; and after further continuing with the second singular spectrum analysis and frequency-mask screening (filter+SSA+refine, i.e. the present method), it is shown in Figure 17C. Figure 17D compares the three with the actual heart-rate signal in the time domain (the first curve L1, second curve L2, third curve L3, and fourth curve L4 denote the actual heart rate, band-pass filtering, band-pass filtering + SSA, and band-pass filtering + SSA + frequency mask, respectively); as can be seen, the result of the present method is the closest to the actual heart-rate signal.

Likewise, for a dynamic video (compressed with x264 at a bit rate of 584 kb/s): after only the band-pass filtering of the third step (filter), the frequency-domain waveform is shown in Figure 18A; after the first singular spectrum analysis and twofold-relationship screening (filter+SSA), in Figure 18B; and after the second singular spectrum analysis and frequency-mask screening (filter+SSA+refine, i.e. the present method), in Figure 18C. Figure 18D compares the three with the actual heart-rate signal in the time domain (the first curve LA, second curve LB, third curve LC, and fourth curve LD denote the actual heart rate, band-pass filtering, band-pass filtering + SSA, and band-pass filtering + SSA + frequency mask, respectively); again the result of the present method is the closest to the actual heart-rate signal.

The advantages and effects of the present invention are as follows:

[1] The heart-rate extraction algorithm for compressed facial video is novel. The invention combines singular spectrum analysis, twofold-relationship screening, frequency-mask screening, and related processing steps to extract the human heart-rate signal from already-compressed video data, a previously unseen technique. Hence the algorithm is novel.

[2] The scope of application is wide. The invention can be applied wherever video compression is needed: in telemedicine, a patient's video data can be compressed and transmitted to a hospital for further analysis; in a mobile application, video shot by the user can be sent over a wireless network to the cloud for heart-rate measurement and analysis. The technique can thus be applied to telemedicine, home care, and physical-fitness training, and in particular raises the state of the art of home care within telemedicine. Hence the scope of application is wide.

[3] The single-channel signal-separation method greatly reduces the volume of video data to be transmitted. The invention extracts physiological signals by single-channel signal separation, processing the channel least affected by the compression algorithm (the G channel).
That is, only the green values of the X-by-Y pixels of the face region are captured and averaged, which greatly reduces the volume of video data to be transmitted. Hence the single-channel signal-separation method greatly reduces the transmission volume.

The above describes the invention in detail only through a preferred embodiment; any simple modification or variation of this embodiment remains within the spirit and scope of the present invention.

10 video device; 20 network connection device; S1 first overlap-cutting step; S2 first processing step; S21 preprocessing step; S22 band-pass filtering step; S23 first singular spectrum analysis step; S24 twofold-relationship screening step; S25 reconstruction step; S3 first overlap-add step; S4 second overlap-cutting step; S5 second processing step; S51 frequency-mask construction step; S52 second singular spectrum analysis step; S53 frequency-mask screening step; S6 second overlap-add step; M compressed facial video; M1 short clip; T1, T2 durations; T3 step interval; K original image; P face region; G0(t) raw signal; G1(t) first signal; G1m first subsequence; G1f first main frequency; G2(t) second signal; G22 overlap-added second signal; G3(t) third signal; G3j center frequency; G3ju upper pass frequency; G3jd lower pass frequency; G3m second subsequence; G3f second main frequency; G4(t) fourth signal; G44 overlap-added fourth signal; L1, LA first curve; L2, LB second curve; L3, LC third curve; L4, LD fourth curve

Figure 1 is a flowchart of the algorithm of the present invention.
Figure 2 is a schematic diagram of the present invention.
Figure 3 illustrates the first overlap-cutting process.
Figure 4 shows that each short clip of the present invention contains N frames of original images.
Figures 5A and 5B illustrate the face-region tracking of the present invention.
Figure 6 shows the raw signal of the present invention.
Figures 7 and 8 show the raw signal before and after band-pass filtering, respectively.
Figure 9 shows the first signal of the present invention.
Figure 10 illustrates the first singular spectrum analysis step.
Figure 11 shows the second signal of the present invention.
Figure 12 illustrates the first overlap-add process.
Figure 13 illustrates the second overlap-cutting process.
Figure 14 illustrates the frequency-mask construction process.
Figure 15 illustrates the second singular spectrum analysis process.
Figure 16 illustrates the second overlap-add process.
Figure 17A shows a static video of the present invention after the band-pass filtering step; Figure 17B shows Figure 17A after the first singular spectrum analysis and twofold-relationship screening steps; Figure 17C shows Figure 17B after the second singular spectrum analysis and frequency-mask screening steps; Figure 17D compares Figures 17A, 17B, and 17C.
Figure 18A shows a dynamic video of the present invention after the band-pass filtering step; Figure 18B shows Figure 18A after the first singular spectrum analysis and twofold-relationship screening steps; Figure 18C shows Figure 18B after the second singular spectrum analysis and frequency-mask screening steps; Figure 18D compares the extracted final heart-rate signal with Figures 18A, 18B, and 18C.

S1‧‧‧First overlapping cutting step
S2‧‧‧First processing step
S21‧‧‧Pre-processing step
S22‧‧‧Band-pass filtering step
S23‧‧‧First singular spectrum analysis step
S24‧‧‧Double-relationship screening step
S25‧‧‧Reconstruction step
S3‧‧‧First overlap-add step
S4‧‧‧Second overlapping cutting step
S5‧‧‧Second processing step
S51‧‧‧Frequency mask construction step
S52‧‧‧Second singular spectrum analysis step
S6‧‧‧Second overlap-add step

Claims (3)

1. A heart rate extraction algorithm for compressed facial video, comprising the following steps:
(1) First overlapping cutting step: obtaining a compressed facial video that has undergone video compression, and cutting the compressed facial video, with overlap, into a plurality of short video segments; the overlapping cut is defined as capturing the first short segment from the beginning of the compressed facial video, then capturing another short segment after each step interval, and repeating until the compressed facial video ends, thereby obtaining the plurality of short segments;
(2) First processing step, performing the following sub-steps on each short segment in turn:
[a] Pre-processing step: each short segment contains N frames of original images; face-region tracking is performed on each frame to obtain a face region of X by Y pixels; the green values of the X by Y pixels are averaged into a scalar, defined as the per-frame average green value; the N per-frame average green values form an original signal G0(t), where t = 1 to N;
[b] Band-pass filtering step: band-pass filtering the original signal G0(t), keeping the components between 0.8 Hz and 2.0 Hz, to obtain a first signal G1(t), where t = 1 to N;
[c] First singular spectrum analysis step: decomposing the first signal G1(t) into a plurality of first sub-sequences, applying the fast Fourier transform to all first sub-sequences to obtain their spectra, and taking the frequency with the largest amplitude in each spectrum as that sub-sequence's first main frequency;
[d] Double-relationship screening step: comparing the first sub-sequences pairwise; if two first main frequencies are in a two-to-one relationship, both are retained, otherwise both are discarded; if no pair satisfies the double relationship, all first sub-sequences are retained; at least two first retained sub-sequences are thereby obtained;
[e] Reconstruction step: reconstructing all first retained sub-sequences of the previous step into a second signal G2(t), where t = 1 to N; the horizontal axis of G2(t) is time and the vertical axis is green-value intensity, and G2(t) has the segment's time length;
(3) First overlap-add step: processing the plurality of second signals G2(t) obtained from the first processing step with the conventional raised-cosine-window overlap-add technique to obtain an overlap-added second signal having the same time length; the overlap-add technique is defined as overlapping and adding adjacent second signals G2(t) by the step interval;
(4) Second overlapping cutting step: taking the overlap-added second signal, which has the same time length, and cutting it, with overlap, into a plurality of short signal segments, each having the same time length; the overlapping cut is defined as capturing one short signal segment from the beginning of the overlap-added second signal, then capturing another after each step interval, and repeating until the overlap-added second signal ends, thereby obtaining a plurality of short signal segments, each defined as a third signal G3(t), where t = 1 to N;
(5) Second processing step: performing the following sub-steps on each third signal G3(t) in turn:
[f] Frequency mask construction step: taking the frequency with the largest amplitude of the third signal G3(t) as its center frequency, and setting an upper pass frequency and a lower pass frequency around that center to obtain a frequency mask range;
[g] Second singular spectrum analysis step: decomposing the third signal G3(t) into a plurality of second sub-sequences, applying the fast Fourier transform to all second sub-sequences to obtain their spectra, and taking the frequency with the largest amplitude in each spectrum as that sub-sequence's second main frequency;
[h] Frequency-mask screening step: retaining only the second sub-sequences whose second main frequency falls within the frequency mask range, whereby at least one second retained sub-sequence is obtained and becomes a fourth signal G4(t), where t = 1 to N, whose horizontal axis is time and vertical axis is green-value intensity; if no second sub-sequence is retained, the fourth signal G4(t) is directly equal to the third signal G3(t);
(6) Second overlap-add step: processing the plurality of fourth signals G4(t) obtained from the second processing step with the conventional raised-cosine-window overlap-add technique to obtain an overlap-added fourth signal having the same time length, which is the final heart rate signal.
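Sub-steps [b] through [d] of claim 1 amount to: band-pass the green-channel trace to 0.8–2.0 Hz (48–120 bpm), decompose it by singular spectrum analysis, and read off each component's dominant FFT frequency. A minimal sketch of those operations, assuming NumPy/SciPy; the function names (`bandpass`, `ssa_components`, `main_frequency`) and the SSA window length are illustrative choices, not taken from the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=0.8, hi=2.0, order=3):
    """Keep only the 0.8-2.0 Hz band, as in step [b] of claim 1."""
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

def ssa_components(x, L):
    """Singular spectrum analysis: embed x in a Hankel trajectory matrix,
    take its SVD, and Hankelize each rank-1 term (diagonal averaging)
    back into a 1-D sub-sequence of the same length as x."""
    N, K = len(x), len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # Average each anti-diagonal of Xi (all entries sharing a time index)
        comps.append(np.array([Xi[::-1].diagonal(k).mean()
                               for k in range(-(L - 1), K)]))
    return comps

def main_frequency(comp, fs):
    """Frequency with the largest FFT amplitude, as in step [c]."""
    spec = np.abs(np.fft.rfft(comp))
    return np.fft.rfftfreq(len(comp), d=1.0 / fs)[np.argmax(spec)]
```

Summing all SSA components reconstructs the input exactly; step [d] would then retain only component pairs whose main frequencies stand in a 2:1 ratio, i.e. a pulse fundamental together with its first harmonic.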
2. The heart rate extraction algorithm for compressed facial video of claim 1, wherein, in the step of obtaining the original images, two sets of video devices and a network connection device are provided, the two sets of video devices communicating through the network connection device so that video contact can be achieved.
3. The heart rate extraction algorithm for compressed facial video of claim 1, wherein the compression method of the compressed facial video is selected from one of x264, x265, vp8, and vp9.
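The overlap-add steps of claim 1 (the third and sixth steps) merge the per-segment results back into one continuous trace using a raised-cosine window. A minimal sketch of that step-length overlap-add, assuming NumPy; the Hann window stands in for the claim's raised-cosine window, and the unit-gain normalization policy is an assumption the claim does not spell out:

```python
import numpy as np

def overlap_add(segments, step):
    """Blend equal-length segments whose start times differ by `step`
    samples: weight each with a raised-cosine (Hann) window, sum them,
    then divide by the summed window so overlapping regions stay unit-gain."""
    seg_len = len(segments[0])
    win = np.hanning(seg_len)
    total = step * (len(segments) - 1) + seg_len
    out = np.zeros(total)
    wsum = np.zeros(total)
    for i, seg in enumerate(segments):
        lo = i * step
        out[lo:lo + seg_len] += win * seg
        wsum[lo:lo + seg_len] += win
    covered = wsum > 1e-12  # the Hann window's endpoints are zero
    out[covered] /= wsum[covered]
    return out
```

With this normalization, cutting a signal into step-length-overlapped segments and overlap-adding them reproduces the signal (apart from the zero-weighted first and last samples), which is consistent with the claim re-cutting the overlap-added second signal in its fourth step without changing the time length.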
TW107120251A 2018-06-12 2018-06-12 Heart rate extraction algorithm for face compressed image TWI653027B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW107120251A TWI653027B (en) 2018-06-12 2018-06-12 Heart rate extraction algorithm for face compressed image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW107120251A TWI653027B (en) 2018-06-12 2018-06-12 Heart rate extraction algorithm for face compressed image

Publications (2)

Publication Number Publication Date
TWI653027B TWI653027B (en) 2019-03-11
TW202000124A true TW202000124A (en) 2020-01-01

Family

ID=66590731

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107120251A TWI653027B (en) 2018-06-12 2018-06-12 Heart rate extraction algorithm for face compressed image

Country Status (1)

Country Link
TW (1) TWI653027B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103054569B (en) 2012-12-20 2015-04-22 Tcl集团股份有限公司 Method, device and handhold device for measuring human body heart rate based on visible image
CN105678780B (en) 2016-01-14 2018-02-27 合肥工业大学智能制造技术研究院 A kind of video heart rate detection method for removing ambient light change interference
US10335045B2 (en) 2016-06-24 2019-07-02 Universita Degli Studi Di Trento Self-adaptive matrix completion for heart rate estimation from face videos under realistic conditions

Also Published As

Publication number Publication date
TWI653027B (en) 2019-03-11

Similar Documents

Publication Publication Date Title
CN114587311B (en) Non-cuff type blood pressure measuring device based on multiple orders and multiple modes
Macwan et al. Remote photoplethysmography with constrained ICA using periodicity and chrominance constraints
Wei et al. Non-contact, synchronous dynamic measurement of respiratory rate and heart rate based on dual sensitive regions
CN106073729B (en) Acquisition method of photoplethysmography signal
CN110068388A (en) A kind of method for detecting vibration of view-based access control model and blind source separating
CN116385837B (en) Self-supervision pre-training method for remote physiological measurement based on mask self-encoder
CN114067435A (en) Sleep behavior detection method and system based on pseudo-3D convolutional network and attention mechanism
CN118072919A (en) A method for monitoring heart rate variability based on remote photoplethysmography
CN115188073B (en) WiFi sign language translation system and method based on deep learning
CN112043257B (en) A motion-robust non-contact video heart rate detection method
EP3769285A1 (en) Analysis and visualization of subtle motions in videos
JP7044171B2 (en) Pulse wave calculation device, pulse wave calculation method and pulse wave calculation program
CN113326801A (en) Human body moving direction identification method based on channel state information
CN114722869B (en) A non-contact heart rate detection device and method based on face video
CN119961657B (en) Identity recognition method and system based on biological signal invariable representation learning
CN114463784B (en) Multi-person rope skipping analysis method based on video-audio multi-mode deep learning
Comas et al. Deep pulse-signal magnification for remote heart rate estimation in compressed videos
TW202000124A (en) Algorithmic method for extracting human pulse rate from compressed video data of a human face
CN113827234B (en) A remote pulse wave reconstruction method based on hyperspectral face video
CN116702056B (en) A heart rate variability feature analysis method based on deep learning
CN114511903B (en) Heart rate detection method, device and storage medium based on camera anti-shake
CN117173742B (en) Remote large-range heart rate estimation method based on deep learning and face segmentation
CN115153519A (en) Sitting posture identification system based on sound signals
Cheng et al. Robust real-time heart rate measurement from face videos
CN116681700B (en) Method, device and readable storage medium for evaluating heart rate and heart rate variability of user

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees