
CN111626174A - Attitude robust motion recognition method based on channel state information - Google Patents

Attitude robust motion recognition method based on channel state information Download PDF

Info

Publication number
CN111626174A
CN111626174A (application CN202010438367.7A)
Authority
CN
China
Prior art keywords
state information
channel state
equal
motion recognition
amplitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010438367.7A
Other languages
Chinese (zh)
Other versions
CN111626174B (en)
Inventor
杨武
吕继光
苘大鹏
王巍
玄世昌
丁宁宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University
Priority to CN202010438367.7A
Publication of CN111626174A
Application granted
Publication of CN111626174B
Legal status: Expired - Fee Related
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

本发明属于单用户姿态鲁棒技术领域,具体涉及一种基于信道状态信息的姿态鲁棒动作识别方法。智能化的生活是人们未来追求的目标,准确的实现人机交互是当前亟待解决的问题。现有基于信道状态信息的动作识别方法针对同一用户不同时刻的动作识别准确率不高。本发明通过引入提取稳定特征的方法来改善现有方法的不足,提取改变甚微的稳定特征,提高识别准确率。

Figure 202010438367

The invention belongs to the technical field of single-user gesture robustness, and in particular relates to a gesture robust action recognition method based on channel state information. Intelligent life is the goal pursued by people in the future, and the accurate realization of human-computer interaction is an urgent problem to be solved at present. Existing action recognition methods based on channel state information have low accuracy in recognizing actions of the same user at different times. The invention improves the shortcomings of the existing methods by introducing a method for extracting stable features, extracts stable features with little change, and improves the recognition accuracy.


Description

一种基于信道状态信息的姿态鲁棒动作识别方法A posture-robust motion recognition method based on channel state information

技术领域technical field

本发明属于单用户姿态鲁棒技术领域,具体涉及一种基于信道状态信息的姿态鲁棒动作识别方法。The invention belongs to the technical field of single-user gesture robustness, and in particular relates to a gesture robust action recognition method based on channel state information.

背景技术Background technique

如今对于CSI动作识别的研究,对于同一用户不同时刻所做动作的研究并不多。然而考虑到系统实用性,对于同一用户的动作研究也是十分必要的。由于即使是同一个用户,当其做同一个动作的时候,也会因为做动作的幅度或方向不同,而这将会导致易受影响的信道状态信息发生变化,即指纹信息发生改变,所以就会影响对动作的最终识别,本文将针对以上问题提出改进方法。本文提出的思想基础是与签名验证类似。在签名验证中由于每个人的签名都会有部分特征是和其他人不同且是保持不变的,从而提取出这一有特征的片段作为识别的主要部分。类似的,由于每个人有不同的生活习惯,所以他即使是在不同时刻做出相同的动作时会因为角度或者速度不同会引起信道状态信息发生改变,但是在他所做的动作序列中会有一个主要的部分是不会随着改变甚至是改变甚微的。所以依靠着这个想法提出解决办法。To date, research on CSI-based action recognition has paid little attention to actions performed by the same user at different times. Yet for a practical system, studying the same user's actions is essential: even when a single user repeats the same action, its amplitude or direction varies, which perturbs the sensitive channel state information — that is, the fingerprint information changes — and thus degrades the final recognition. This work proposes an improvement for this problem. The underlying idea is similar to signature verification: each person's signature contains features that differ from everyone else's and remain stable, so that characteristic segment is extracted as the main basis for identification. Similarly, because each person has distinct habits, the channel state information changes when the same action is performed at different times with a different angle or speed, but the action sequence contains a major part that changes little or not at all. The solution proposed here relies on this idea.

发明内容SUMMARY OF THE INVENTION

本发明的目的在于提供一种基于信道状态信息的姿态鲁棒动作识别方法。The purpose of the present invention is to provide a gesture robust action recognition method based on channel state information.

本发明的目的通过如下技术方案来实现:包括以下步骤:The object of the present invention is achieved through the following technical solution, which comprises the following steps:

步骤1:布置发射机和接收机,接收机获取信道状态信息;Step 1: Arrange the transmitter and receiver, and the receiver obtains the channel state information;

步骤2:计算信道状态信息中每个子载波的幅度方差,提取其中方差最大的N条关键子载波;Step 2: Calculate the amplitude variance of each subcarrier in the channel state information, and extract the N key subcarriers with the largest variance;
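Step 2 can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's implementation: the function name and the (packets × subcarriers) array layout are assumptions.

```python
import numpy as np

def select_key_subcarriers(csi_amplitude, n_key):
    """Select the n_key subcarriers whose amplitude varies most over time.

    csi_amplitude: array of shape (num_packets, num_subcarriers), e.g. the
    30 OFDM subcarriers reported per antenna pair by an Intel 5300 NIC.
    Returns the indices of the n_key subcarriers with the largest amplitude
    variance, plus the corresponding amplitude columns.
    """
    variances = np.var(csi_amplitude, axis=0)       # one variance per subcarrier
    key_idx = np.argsort(variances)[::-1][:n_key]   # largest variance first
    return key_idx, csi_amplitude[:, key_idx]
```

The larger the variance, the more the subcarrier responds to the action, matching the criterion stated in the description below.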

步骤3:对提取的关键子载波使用主成分分析法降维去噪;Step 3: Use the principal component analysis method to reduce the dimension and denoise the extracted key sub-carriers;
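Step 3 (PCA dimensionality reduction and denoising) can be illustrated with a plain-numpy projection onto the leading principal components; a library PCA (e.g. scikit-learn's) would compute the same projection. Retaining a single component is an assumption — the patent does not fix the number.

```python
import numpy as np

def pca_denoise(signals, n_components=1):
    """Project key-subcarrier streams onto their top principal components,
    keeping the directions that carry the dominant, action-related
    variation and discarding the rest as noise.

    signals: shape (num_packets, num_key_subcarriers).
    Returns shape (num_packets, n_components).
    """
    centered = signals - signals.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal directions
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```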

步骤4:使用离散小波变换进一步去噪;Step 4: Further denoising using discrete wavelet transform;
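The patent names the discrete wavelet transform for step 4 without fixing a wavelet, decomposition level, or threshold. As one concrete illustration, a one-level Haar decomposition with soft thresholding of the detail coefficients (all three choices are assumptions):

```python
import numpy as np

def haar_dwt_denoise(x, threshold):
    """One-level Haar wavelet denoising (an illustration of step 4).

    The signal is split into approximation and detail coefficients;
    small detail coefficients are treated as noise and soft-thresholded,
    after which the signal is reconstructed.
    """
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                      # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    # soft-threshold the detail (high-frequency) coefficients
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # inverse Haar transform
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out
```

A multi-level transform with a smoother wavelet (e.g. Daubechies, via PyWavelets) is the more common choice in CSI work; the Haar version keeps the sketch self-contained.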

步骤5:提取稳定特征;Step 5: Extract stable features;

步骤5.1:把子载波分为L份,待匹配波称为C与D;Step 5.1: Divide the subcarriers into L parts, and the waves to be matched are called C and D;

步骤5.2:设置阈值T,令i=1;Step 5.2: Set the threshold T, let i=1;

步骤5.3:判断i是否小于L+1;若i大于L+1,则结束稳定特征的提取;若i小于L+1,则执行步骤5.4;Step 5.3: judge whether i is less than L+1; if i is greater than L+1, end the extraction of stable features; if i is less than L+1, execute step 5.4;

步骤5.4:计算Ci和Di;Step 5.4: Calculate C_i and D_i:

C_i = X_{i+1} - X_i

D_i = Y_{i+1} - Y_i

其中,Xi代表C中第i个点的纵坐标,即幅值;Yi表示D中第i个点的幅值;where X_i denotes the ordinate, i.e. the amplitude, of the i-th point in C, and Y_i denotes the amplitude of the i-th point in D;

步骤5.5:判断Ci与Di是否同号;若Ci与Di不同号,则删除Ci与Di,令i=i+1,返回步骤5.3;若Ci与Di同号,则执行步骤5.6;Step 5.5: Judge whether C_i and D_i have the same sign; if not, delete C_i and D_i, let i = i+1, and return to step 5.3; if they have the same sign, execute step 5.6;

步骤5.6:判断Ci-Di是否小于T;若Ci-Di小于T,则保留Ci与Di,令i=i+1,返回步骤5.3;若Ci-Di大于或等于T,则删除Ci与Di,令i=i+1,返回步骤5.3;Step 5.6: Judge whether C_i - D_i is less than T; if so, keep C_i and D_i, let i = i+1, and return to step 5.3; if C_i - D_i is greater than or equal to T, delete C_i and D_i, let i = i+1, and return to step 5.3;
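Steps 5.1–5.6 amount to keeping only the first-order differences on which the two measurements C and D agree. A direct sketch follows; note that, per the patent text, the signed difference C_i − D_i (not its absolute value) is compared with T, and treating a zero difference as "same sign" is an assumption:

```python
def extract_stable_features(c, d, threshold):
    """Steps 5.1-5.6: compare two amplitude sequences C and D of the same
    action and keep the segments whose first-order differences agree.

    Returns the list of retained (C_i, D_i) difference pairs -- the
    'stable' part of the action that changes little between repetitions.
    """
    stable = []
    num_pairs = min(len(c), len(d)) - 1    # one difference per adjacent pair
    for i in range(num_pairs):             # step 5.3: iterate i = 1..L
        ci = c[i + 1] - c[i]               # step 5.4
        di = d[i + 1] - d[i]
        if ci * di < 0:                    # step 5.5: opposite signs -> discard
            continue
        if ci - di < threshold:            # step 5.6: close enough -> keep
            stable.append((ci, di))
    return stable
```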

步骤6:使用随机森林方法对提取的稳定特征进行分类,得到姿态鲁棒动作识别结果。Step 6: Use the random forest method to classify the extracted stable features to obtain pose robust action recognition results.
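Step 6 can be sketched with scikit-learn's random forest on fixed-length feature vectors. The toy data, feature layout, and hyper-parameters below are assumptions — the patent names only the random forest method itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for stable-feature vectors of two actions (e.g. 'wave', 'push'):
# 20 samples per class, 8 features each, well separated for illustration.
X_wave = rng.normal(loc=0.0, scale=0.2, size=(20, 8))
X_push = rng.normal(loc=1.0, scale=0.2, size=(20, 8))
X = np.vstack([X_wave, X_push])
y = np.array([0] * 20 + [1] * 20)

# Fit the random forest and classify
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)
```

In practice the retained (C_i, D_i) pairs from step 5 would be padded or resampled to a fixed length before being fed to the classifier.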

本发明的有益效果在于:The beneficial effects of the present invention are:

智能化的生活是人们未来追求的目标,准确的实现人机交互是当前亟待解决的问题。现有基于信道状态信息的动作识别方法针对同一用户不同时刻的动作识别准确率不高。本发明通过引入提取稳定特征的方法来改善现有方法的不足,提取改变甚微的稳定特征,提高识别准确率。Intelligent life is the goal pursued by people in the future, and the accurate realization of human-computer interaction is an urgent problem to be solved at present. Existing action recognition methods based on channel state information have low accuracy in recognizing actions of the same user at different times. The invention improves the deficiencies of the existing methods by introducing a method for extracting stable features, extracts stable features with little change, and improves the recognition accuracy.

附图说明Description of drawings

图1是本发明的框架图。FIG. 1 is a frame diagram of the present invention.

图2是本发明中稳定特征提取的流程图。FIG. 2 is a flow chart of stable feature extraction in the present invention.

具体实施方式Detailed ways

下面结合附图对本发明做进一步描述。The present invention will be further described below with reference to the accompanying drawings.

如附图1所示,该方法包括CSI数据采集、信号预处理、特征提取、行为识别等四个模块。As shown in FIG. 1 , the method includes four modules: CSI data acquisition, signal preprocessing, feature extraction, and behavior recognition.

1)将发射机和接收机放置在房间内任意位置,用户站在发射机与接收机间做动作,获取信道状态信息;1) Place the transmitter and receiver at any position in the room, and the user stands between the transmitter and the receiver to perform actions to obtain channel status information;

2)对收到的原始信息,提取其中敏感的多条关键子载波;2) Extracting a plurality of sensitive key sub-carriers from the received original information;

3)对关键子载波使用PCA降维去噪,得到最能体现动作信息的子载波;3) Use PCA dimensionality reduction and denoising on the key subcarriers to obtain the subcarriers that can best reflect the action information;

4)进一步使用小波变换去噪;4) Further use wavelet transform to denoise;

5)提取稳定特征;5) Extract stable features;

6)使用随机森林方法对动作进行分类;6) Use random forest method to classify actions;

1)附图1是本发明的框架图,首先从接收机采集无线信号原始的CSI数据,接下来依次进行数据预处理、特征提取、动作识别。1) FIG. 1 is a framework diagram of the present invention. First, the original CSI data of the wireless signal is collected from the receiver, and then data preprocessing, feature extraction, and action recognition are performed in sequence.

2)在数据采集阶段,将发射机和接收机摆放在房间内任意位置,但最好将二者分开一定的距离,用户站在发射机与接收机之间做动作,发射机为使用两根天线TP-Link802.11n的无线路由器,接收机为配有Intel 5300网卡并外接三根天线的笔记本电脑,设置采样频率为1000Hz,使用接收机获取信道状态信息。2) In the data acquisition stage, the transmitter and receiver may be placed anywhere in the room, though preferably separated by some distance, and the user performs actions while standing between them. The transmitter is a TP-Link 802.11n wireless router using two antennas; the receiver is a Lenovo laptop equipped with an Intel 5300 network card and three external antennas. The sampling frequency is set to 1000 Hz, and the receiver is used to acquire the channel state information.

3)由于使用了OFDM技术将信道划分了30个子载波,但是每条子载波对信道敏感程度不同,所以首先将敏感程度量化,计算出每个子载波的幅度方差,方差的大小即表示着对动作的敏感程度,即方差越大则越是需要的关键子载波。3) OFDM divides the channel into 30 subcarriers, but each subcarrier has a different sensitivity to channel changes. This sensitivity is therefore quantified first by computing the amplitude variance of each subcarrier: the variance indicates how sensitive the subcarrier is to the action, and the larger the variance, the more the subcarrier qualifies as a needed key subcarrier.

4)然而所选子载波中还包含很多噪声甚至有一些是与动作本身并不相关。所以需要进一步进行选择,找出最能体现动作信息的子载波。这一步中可使用主成分分析法(PCA)。4) However, the selected subcarriers still contain considerable noise, some of it unrelated to the action itself. Further selection is therefore needed to find the subcarriers that best reflect the action information; principal component analysis (PCA) can be used in this step.

5)处理之后,虽然已经过滤了一部分噪声,但是仍存在由设备自身等问题带来的噪声,所以还需要进一步去噪。使用离散小波变换(DWT)来去除这部分噪声。5) Although part of the noise has been filtered out by the preceding processing, noise caused by the device itself and similar sources remains, so further denoising is required. The discrete wavelet transform (DWT) is used to remove this residual noise.

6)使用图2提取稳定特征。6) Extract stable features following the procedure shown in Figure 2.

7)使用随机森林的方法对结果进行分类。7) Use the method of random forest to classify the results.

以上所述仅为本发明的优选实施例而已,并不用于限制本发明,对于本领域的技术人员来说,本发明可以有各种更改和变化。凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。The above descriptions are only preferred embodiments of the present invention, and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (1)

1. A posture-robust motion recognition method based on channel state information, characterized by comprising the following steps:
Step 1: arrange a transmitter and a receiver, wherein the receiver acquires channel state information;
Step 2: calculate the amplitude variance of each subcarrier in the channel state information, and extract the N key subcarriers with the largest variance;
Step 3: reduce the dimensionality of the extracted key subcarriers and denoise them using principal component analysis;
Step 4: denoise further using the discrete wavelet transform;
Step 5: extract stable features;
Step 5.1: divide the subcarriers into L parts, the waves to be matched being called C and D;
Step 5.2: set a threshold T, and let i = 1;
Step 5.3: judge whether i is less than L+1; if i is greater than L+1, end the extraction of the stable features; if i is less than L+1, execute step 5.4;
Step 5.4: calculate C_i and D_i:
C_i = X_{i+1} - X_i
D_i = Y_{i+1} - Y_i
where X_i represents the ordinate, namely the amplitude, of the i-th point in C, and Y_i represents the amplitude of the i-th point in D;
Step 5.5: judge whether C_i and D_i have the same sign; if C_i and D_i have different signs, delete C_i and D_i, let i = i+1, and return to step 5.3; if C_i and D_i have the same sign, execute step 5.6;
Step 5.6: judge whether C_i - D_i is less than T; if C_i - D_i is less than T, keep C_i and D_i, let i = i+1, and return to step 5.3; if C_i - D_i is greater than or equal to T, delete C_i and D_i, let i = i+1, and return to step 5.3;
Step 6: classify the extracted stable features using the random forest method to obtain the posture-robust motion recognition result.
CN202010438367.7A 2020-05-22 2020-05-22 Attitude robust motion recognition method based on channel state information Expired - Fee Related CN111626174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010438367.7A CN111626174B (en) 2020-05-22 2020-05-22 Attitude robust motion recognition method based on channel state information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010438367.7A CN111626174B (en) 2020-05-22 2020-05-22 Attitude robust motion recognition method based on channel state information

Publications (2)

Publication Number Publication Date
CN111626174A true CN111626174A (en) 2020-09-04
CN111626174B CN111626174B (en) 2023-03-24

Family

ID=72271094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010438367.7A Expired - Fee Related CN111626174B (en) 2020-05-22 2020-05-22 Attitude robust motion recognition method based on channel state information

Country Status (1)

Country Link
CN (1) CN111626174B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862295A (en) * 2017-11-21 2018-03-30 武汉大学 A kind of method based on WiFi channel condition informations identification facial expression
CN109325399A (en) * 2018-07-13 2019-02-12 哈尔滨工程大学 A method and system for stranger gesture recognition based on channel state information
US20190087654A1 (en) * 2017-09-15 2019-03-21 Huazhong University Of Science And Technology Method and system for csi-based fine-grained gesture recognition
CN109635837A (en) * 2018-11-10 2019-04-16 天津大学 A kind of carefree fall detection system of scene based on commercial wireless Wi-Fi
CN109658655A (en) * 2019-01-15 2019-04-19 哈尔滨工程大学 A kind of passive intrusion detection method in interior based on wireless signal
CN110059612A (en) * 2019-04-15 2019-07-26 哈尔滨工程大学 A kind of gesture identification method and system that the position based on channel state information is unrelated
CN110337066A (en) * 2019-05-21 2019-10-15 西安电子科技大学 Based on channel state information indoor occupant activity recognition method, man-machine interactive system
US20190327124A1 (en) * 2012-12-05 2019-10-24 Origin Wireless, Inc. Method, apparatus, and system for object tracking and sensing using broadcasting

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190327124A1 (en) * 2012-12-05 2019-10-24 Origin Wireless, Inc. Method, apparatus, and system for object tracking and sensing using broadcasting
US20190087654A1 (en) * 2017-09-15 2019-03-21 Huazhong University Of Science And Technology Method and system for csi-based fine-grained gesture recognition
CN107862295A (en) * 2017-11-21 2018-03-30 武汉大学 A kind of method based on WiFi channel condition informations identification facial expression
CN109325399A (en) * 2018-07-13 2019-02-12 哈尔滨工程大学 A method and system for stranger gesture recognition based on channel state information
CN109635837A (en) * 2018-11-10 2019-04-16 天津大学 A kind of carefree fall detection system of scene based on commercial wireless Wi-Fi
CN109658655A (en) * 2019-01-15 2019-04-19 哈尔滨工程大学 A kind of passive intrusion detection method in interior based on wireless signal
CN110059612A (en) * 2019-04-15 2019-07-26 哈尔滨工程大学 A kind of gesture identification method and system that the position based on channel state information is unrelated
CN110337066A (en) * 2019-05-21 2019-10-15 西安电子科技大学 Based on channel state information indoor occupant activity recognition method, man-machine interactive system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
汤明阳 (Tang Mingyang): "基于CSI的人体动作识别方法研究" [Research on CSI-based human action recognition methods], 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology series) *

Also Published As

Publication number Publication date
CN111626174B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN107633227A (en) A kind of fine granularity gesture identification method and system based on CSI
CN102508606A (en) Method and system for subdividing user groups by face recognition and setting corresponding functions of mobile handheld devices
WO2019200782A1 (en) Sample data classification method, model training method, electronic device and storage medium
CN110008674B (en) A highly generalized ECG signal identity authentication method
CN107392123B (en) Radio frequency fingerprint feature extraction and identification method based on coherent accumulation noise elimination
CN111142668B (en) Interaction method based on Wi-Fi fingerprint positioning and activity gesture joint recognition
CN111698258B (en) WiFi-based environmental intrusion detection method and system
CN101763502A (en) High-efficiency method and system for sensitive image detection
CN106951753A (en) An authentication method and authentication device for an electrocardiographic signal
CN110222660B (en) A signature authentication method and system based on the fusion of dynamic and static features
CN107169479A (en) Intelligent mobile equipment sensitive data means of defence based on fingerprint authentication
CN107688790A (en) Human bodys' response method, apparatus, storage medium and electronic equipment
CN111860130A (en) Audio-based gesture recognition method, device, terminal device and storage medium
CN106529377A (en) Age estimating method, age estimating device and age estimating system based on image
CN110808067A (en) Low signal-to-noise ratio sound event detection method based on binary multiband energy distribution
CN108921006B (en) A method for establishing the authenticity identification model of a handwritten signature image and a method for authenticating it
CN104361339A (en) Palm image extracting and identification method
CN106971203B (en) Identification method based on walking feature data
CN107026928A (en) A kind of behavioural characteristic identification authentication method and device based on mobile phone sensor
CN116246303A (en) Sample construction method, device, equipment and medium for model cross-domain training
CN115830648A (en) Fingerprint extraction method based on optical coherence tomography and related equipment
CN111626174B (en) Attitude robust motion recognition method based on channel state information
CN102368291B (en) Personal authentication system based on invisible consciousness of fingerprint image
CN113901960A (en) Signature recognition system and authentication method based on RFID single tag
Mendes et al. Subvocal speech recognition based on EMG signal using independent component analysis and neural network MLP

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230324

CF01 Termination of patent right due to non-payment of annual fee