CN111970169A - Protocol flow identification method based on GRU network - Google Patents
Protocol flow identification method based on GRU network
- Publication number
- CN111970169A (application number CN202010820902.5A)
- Authority
- CN
- China
- Prior art keywords
- gru
- layer
- gru network
- protocol traffic
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/18—Protocol analysers
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/142—Network analysis or design using statistical or mathematical methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/147—Network analysis or design for predicting network behaviour
Abstract
The invention discloses a protocol traffic identification method based on a GRU network, comprising the following steps: performing data preprocessing on traffic samples of different protocols to obtain a training sample set conforming to the input data format of the GRU network, and using this training sample set to train the GRU network model; performing data preprocessing on unknown protocol traffic to obtain time-sequenced spatial-position feature data, which is input into the trained GRU network model; and using the trained GRU network model to identify the preprocessed unknown protocol traffic, finally obtaining a predicted label. The invention completes feature extraction of data packets through data preprocessing, which effectively overcomes the difficulty of manual feature extraction; moreover, the construction and use of the GRU network model effectively improves the accuracy of protocol identification; in addition, because the information in the traffic interaction process involves two levels, spatial-position features and temporal features, the identification of protocol traffic is more effective.
Description
Technical Field
The invention relates to the field of computer network traffic analysis, and more particularly to a protocol traffic identification method based on a GRU network.
Background Art
Protocol traffic identification refers to extracting, by manual analysis or automated means, key features that can identify network protocols from network traffic carried over TCP/IP, and then accurately determining, on the basis of these features, the protocol to which the traffic belongs. Protocol identification technology helps analyze the composition of network traffic and can provide data support for research fields such as network management and maintenance, network content auditing, and network security defense. However, faced with today's large-scale, diversified, high-volume network traffic, improving the accuracy of protocol identification is a major challenge.
Protocol traffic identification methods mainly fall into four categories: methods based on preset rules, methods based on payload features, methods based on host behavior, and methods based on machine learning. Deep learning has advantages for classification, but existing protocol traffic identification methods still suffer from the difficulty of manual feature extraction.
In the prior art, the Chinese invention patent with publication number CN107682216A, published on February 9, 2018, discloses a deep-learning-based network traffic protocol identification method. It exploits the similarity between network flow data and images to bypass the selection and extraction of traffic feature values: the network flow data is fed directly into a convolutional neural network for supervised learning, training a network traffic protocol identification model and realizing the identification function. Although that scheme, by training the convolutional neural network on the protocol traffic samples to be identified, can to some extent automatically extract features useful for the classification task, it does not solve the problems that manual feature extraction is difficult and that protocol identification accuracy is low. Users therefore urgently need a protocol traffic identification method based on a GRU network.
Summary of the Invention
To solve the existing problems that manual feature extraction is difficult and protocol identification accuracy is low, the present invention provides a protocol traffic identification method based on a GRU network.
The primary purpose of the present invention is to solve the above technical problems, and the technical scheme of the present invention is as follows:
A protocol traffic identification method based on a GRU network, comprising the following steps:
S1: Perform data preprocessing on traffic samples of different protocols to obtain a training sample set conforming to the GRU network input data format, and use this training sample set to train the GRU network model;
S2: Perform data preprocessing on unknown protocol traffic to obtain time-sequenced spatial-position feature data, and input the data into the trained GRU network model;
S3: Use the trained GRU network model to identify the preprocessed unknown protocol traffic, and finally obtain a predicted label.
Preferably, the data preprocessing in steps S1 and S2 includes traffic segmentation, data packet clustering, and session data conversion.
Preferably, the basic unit of traffic segmentation is the session.
Preferably, the data packet clustering is performed with the K-means algorithm.
Preferably, the session data conversion replaces the content of each data packet obtained by traffic segmentation with a set of distances, computed with the following distance formula:
Here, the Max Subsequence function is the longest-common-contiguous-subsequence algorithm applied between each data packet and each cluster center, and D(x, centroid) is the distance between a data packet x and a cluster center.
Preferably, the GRU network model includes an input layer, a Masking layer, a first GRU layer, a second GRU layer, a fully connected layer, and an output layer, wherein:
the Masking layer is connected to the input layer and to the first GRU layer;
the second GRU layer is connected to the first GRU layer and to the fully connected layer;
the output layer is connected to the fully connected layer.
Preferably, the dimension of the feature values extracted by the first GRU layer and by the second GRU layer is set to 64 in both cases.
Preferably, the fully connected layer uses the ReLU function as its activation function.
Preferably, the fully connected layer uses Dropout with a rate of 0.5.
Preferably, the output layer uses the Sigmoid function as its activation function.
Compared with the prior art, the beneficial effects of the technical solution of the present invention are:
The present invention completes feature extraction of data packets through data preprocessing, which effectively overcomes the difficulty of manual feature extraction; moreover, the construction and use of the GRU network model effectively improves the accuracy of protocol identification; in addition, because the information in the traffic interaction process involves two levels, the spatial-position features of individual data packets and the temporal features between data packets, the identification of protocol traffic is more effective.
Brief Description of the Drawings
Figure 1 is the overall flow chart of the present invention;
Figure 2 is a flow chart of the GRU network model identifying unknown protocol traffic;
Figure 3 is a schematic structural diagram of the GRU network model.
Detailed Description
In order to understand the above objects, features and advantages of the present invention more clearly, the present invention is further described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features in those embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention; however, the present invention can also be implemented in ways other than those described herein, and the protection scope of the present invention is therefore not limited to the specific embodiments disclosed below.
Embodiment 1
As shown in Figure 1, a protocol traffic identification method based on a GRU network includes the following steps:
S1: Perform data preprocessing on traffic samples of different protocols to obtain a training sample set conforming to the GRU network input data format, and use this training sample set to train the GRU network model;
S2: Perform data preprocessing on unknown protocol traffic to obtain time-sequenced spatial-position feature data, and input the data into the trained GRU network model;
S3: Use the trained GRU network model to identify the preprocessed unknown protocol traffic, and finally obtain a predicted label.
In the above scheme, the method is divided into two phases. The first phase is the training phase, in which the training sample set is used to train the GRU network model; the second phase is the identification phase, in which the trained GRU network model identifies the preprocessed unknown protocol traffic to obtain a predicted label.
As shown in Figure 2, specifically, the data preprocessing in steps S1 and S2 includes traffic segmentation, data packet clustering, and session data conversion.
In the above scheme, data preprocessing is the basic step in the identification of unknown protocol traffic. Traffic segmentation divides the unknown protocol traffic into data sets of the corresponding form according to a chosen criterion; data packet clustering clusters all data packets in the data set and obtains the cluster centers; session data conversion converts the content of every data packet into its distances to the cluster centers. Finally, the results are integrated according to the temporal order of the data packets, converting the unknown protocol traffic into time-sequenced spatial-position feature data that conforms to the input data format of the GRU network model.
Specifically, the basic unit of traffic segmentation is the session.
In the above scheme, for the choice of traffic granularity, the widely studied session is adopted: all packets sharing the same five-tuple (source IP, source port, destination IP, destination port, transport-layer protocol), where the source and destination addresses in the five-tuple may be interchanged.
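The five-tuple grouping described above can be sketched as follows. This is an illustrative stand-in, not the patent's code; the `Packet` type and its field names are assumptions. The key idea is canonicalizing the two endpoints so that a packet and its reply fall into the same session.

```python
# Hypothetical sketch: group packets into sessions by five-tuple, with
# source and destination endpoints interchangeable (as described above).
from collections import defaultdict
from typing import NamedTuple

class Packet(NamedTuple):          # illustrative packet record
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    proto: str
    payload: bytes

def session_key(p: Packet):
    """Canonical key: sort the two (ip, port) endpoints so that a packet
    and its reply map to the same session."""
    a = (p.src_ip, p.src_port)
    b = (p.dst_ip, p.dst_port)
    return (min(a, b), max(a, b), p.proto)

def split_into_sessions(packets):
    sessions = defaultdict(list)
    for p in packets:                        # packets assumed in capture order,
        sessions[session_key(p)].append(p)   # so each session keeps its timing
    return sessions
```

With this key, a request from `10.0.0.1:1234` to `10.0.0.2:80` and the reply in the opposite direction land in the same session.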
Specifically, the data packet clustering is performed with the K-means algorithm.
In the above scheme, the K-means algorithm is not only easy to implement but also refines its result iteratively, which can eliminate unreasonable groupings in the training sample set.
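A minimal K-means sketch is shown below, assuming packets have been represented as fixed-length numeric vectors (for example the first n payload bytes); the patent does not fix a particular implementation, so this numpy-only version is illustrative.

```python
# Minimal K-means sketch (numpy only): iteratively assign packet vectors to
# their nearest centroid, then recompute each centroid as the cluster mean.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # initialize centroids from k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # distance of every point to every centroid, shape (n_points, k)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):   # converged: assignments are stable
            break
        centroids = new
    return labels, centroids
```

The returned centroids play the role of the cluster centers that the session data conversion step measures distances against.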
Specifically, the session data conversion replaces the content of each data packet obtained by traffic segmentation with a set of distances, computed with the following distance formula:
Here, the Max Subsequence function is the longest-common-contiguous-subsequence algorithm applied between each data packet and each cluster center, and D(x, centroid) is the distance between a data packet x and a cluster center.
In the above scheme, this distance formula is used to compute the distance between each data packet and each cluster center.
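The exact distance formula is not reproduced in this text (it appeared as an image in the original patent). The sketch below shows one plausible form consistent with the description: a distance that shrinks as the longest common contiguous byte subsequence between a packet and a cluster center grows. The normalization used here is an assumption.

```python
# Assumed form of the session-data-conversion distance; the DP below is the
# classic longest-common-substring algorithm referred to as Max Subsequence.
def max_subsequence_len(x: bytes, centroid: bytes) -> int:
    """Length of the longest common contiguous subsequence of x and centroid."""
    best = 0
    prev = [0] * (len(centroid) + 1)   # common-run lengths ending at j-1
    for i in range(1, len(x) + 1):
        cur = [0] * (len(centroid) + 1)
        for j in range(1, len(centroid) + 1):
            if x[i - 1] == centroid[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def distance(x: bytes, centroid: bytes) -> float:
    """Assumed normalization: 1 - |MaxSubsequence| / max(|x|, |centroid|)."""
    if not x or not centroid:
        return 1.0
    return 1.0 - max_subsequence_len(x, centroid) / max(len(x), len(centroid))
```

Replacing each packet by its vector of distances to all cluster centers then yields the fixed-width, time-ordered feature rows the GRU expects.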
As shown in Figure 3, specifically, the GRU network model includes an input layer, a Masking layer, a first GRU layer, a second GRU layer, a fully connected layer, and an output layer, wherein:
the Masking layer is connected to the input layer and to the first GRU layer;
the second GRU layer is connected to the first GRU layer and to the fully connected layer;
the output layer is connected to the fully connected layer.
In the above scheme, first, the Masking layer skips the padding values in the training sample set. Next come two consecutive GRU (gated recurrent unit) layers; the first GRU layer is configured with return_sequences=TRUE, so that the result of every time step is passed on to the second GRU layer. During computation, the update-gate and reset-gate mechanism of the GRU allows state information from the previous time step to be retained and carried into the current time step to the appropriate degree, so the temporal feature information in the session flow can be fully extracted. Furthermore, the fully connected layer has 256 neurons, ensuring the learning capacity and nonlinear expressive power of the GRU network model. Finally, the output layer outputs the identification result.
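To make the update-gate and reset-gate mechanism concrete, a single GRU step and a sequence pass can be sketched in plain numpy. Weight shapes follow the standard GRU formulation with random (untrained) weights; this is an illustration of the mechanism, not the patent's implementation.

```python
# One GRU step: the update gate z decides how much previous state to keep,
# the reset gate r decides how much previous state enters the candidate.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(x @ Wz + h_prev @ Uz)              # update gate
    r = sigmoid(x @ Wr + h_prev @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h_prev) @ Uh)  # candidate state
    return (1 - z) * h_tilde + z * h_prev          # blend old and new state

def gru_layer(xs, hidden=64, seed=0):
    """Run a sequence through one GRU layer, returning the state at every
    time step (analogous to return_sequences=TRUE for the first GRU layer)."""
    rng = np.random.default_rng(seed)
    d = xs.shape[1]
    Wz, Wr, Wh = (rng.normal(0, 0.1, (d, hidden)) for _ in range(3))
    Uz, Ur, Uh = (rng.normal(0, 0.1, (hidden, hidden)) for _ in range(3))
    h = np.zeros(hidden)
    outs = []
    for x in xs:
        h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
        outs.append(h)
    return np.stack(outs)
```

Feeding the per-step outputs of such a layer into a second GRU layer, then into a 256-unit ReLU dense layer with Dropout and a single sigmoid output, reproduces the stack described above.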
Specifically, the dimension of the feature values extracted by the first GRU layer and by the second GRU layer is set to 64 in both cases.
In the above scheme, this dimension ensures that the two GRU layers extract the most effective features, achieving dimensionality reduction and avoiding redundancy.
Specifically, the fully connected layer uses the ReLU function as its activation function.
In the above scheme, using the ReLU function as the activation function not only has lower time and space complexity but also avoids the vanishing-gradient problem.
Specifically, the fully connected layer uses Dropout with a rate of 0.5.
In the above scheme, the Dropout mechanism discards 50% of the features, which greatly simplifies the structure, prevents overfitting of the neural network, and avoids excessive training time.
Specifically, the output layer uses the Sigmoid function as its activation function.
In the above scheme, the output layer has only one output node, whose output is the probability that the unknown protocol traffic belongs to a given protocol type. The Sigmoid function is used as the activation function so that the output value lies between 0 and 1, which suits the binary classification task.
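The single-node sigmoid output described above maps the network's final score to a probability in (0, 1); a common decision rule, assumed here since the patent does not state one, thresholds that probability at 0.5 to produce the binary predicted label.

```python
# Hypothetical decision rule for the single sigmoid output node: the 0.5
# threshold is an assumption, not stated in the patent text.
import math

def sigmoid(v: float) -> float:
    return 1.0 / (1.0 + math.exp(-v))

def predict_label(score: float, threshold: float = 0.5) -> int:
    """Return 1 if the traffic is judged to belong to the target protocol."""
    return int(sigmoid(score) >= threshold)
```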
Obviously, the above embodiments of the present invention are merely examples given to illustrate the present invention clearly, and are not intended to limit the embodiments of the present invention. For those of ordinary skill in the art, changes or modifications in other forms can also be made on the basis of the above description; it is neither necessary nor possible to exhaust all implementations here. Any modifications, equivalent replacements and improvements made within the spirit and principle of the present invention shall be included within the protection scope of the claims of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010820902.5A CN111970169B (en) | 2020-08-14 | 2020-08-14 | Protocol flow identification method based on GRU network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111970169A true CN111970169A (en) | 2020-11-20 |
| CN111970169B CN111970169B (en) | 2022-03-08 |
Family
ID=73388920
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010820902.5A Active CN111970169B (en) | 2020-08-14 | 2020-08-14 | Protocol flow identification method based on GRU network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111970169B (en) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190043483A1 (en) * | 2017-08-02 | 2019-02-07 | [24]7.ai, Inc. | Method and apparatus for training of conversational agents |
| CN109583656A (en) * | 2018-12-06 | 2019-04-05 | 重庆邮电大学 | Passenger Flow in Urban Rail Transit prediction technique based on A-LSTM |
| CN110011931A (en) * | 2019-01-25 | 2019-07-12 | 中国科学院信息工程研究所 | A kind of encryption traffic classes detection method and system |
| CN110287439A (en) * | 2019-06-27 | 2019-09-27 | 电子科技大学 | A network behavior anomaly detection method based on LSTM |
| CN110751222A (en) * | 2019-10-25 | 2020-02-04 | 中国科学技术大学 | Online encrypted traffic classification method based on CNN and LSTM |
| CN111209933A (en) * | 2019-12-25 | 2020-05-29 | 国网冀北电力有限公司信息通信分公司 | Network traffic classification method and device based on neural network and attention mechanism |
| CN111209563A (en) * | 2019-12-27 | 2020-05-29 | 北京邮电大学 | Network intrusion detection method and system |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112671757A (en) * | 2020-12-22 | 2021-04-16 | 无锡江南计算技术研究所 | Encrypted flow protocol identification method and device based on automatic machine learning |
| CN112671757B (en) * | 2020-12-22 | 2023-10-31 | 无锡江南计算技术研究所 | Encryption flow protocol identification method and device based on automatic machine learning |
| CN112910881A (en) * | 2021-01-28 | 2021-06-04 | 武汉市博畅软件开发有限公司 | Data monitoring method and system based on communication protocol |
| CN114462679A (en) * | 2022-01-04 | 2022-05-10 | 广州杰赛科技股份有限公司 | Network traffic prediction method, device, equipment and medium based on deep learning |
| CN114462679B (en) * | 2022-01-04 | 2024-11-29 | 广州杰赛科技股份有限公司 | Network traffic prediction method, device, equipment and medium based on deep learning |
| CN115150165A (en) * | 2022-06-30 | 2022-10-04 | 北京天融信网络安全技术有限公司 | Traffic identification method and device |
| CN115150165B (en) * | 2022-06-30 | 2024-03-15 | 北京天融信网络安全技术有限公司 | Flow identification method and device |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111970169B (en) | 2022-03-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111970169A (en) | Protocol flow identification method based on GRU network | |
| CN110532564B (en) | On-line identification method for application layer protocol based on CNN and LSTM hybrid model | |
| CN113518063B (en) | Network intrusion detection method and system based on data enhancement and BilSTM | |
| CN104217225B (en) | A kind of sensation target detection and mask method | |
| CN113657443B (en) | An online IoT device identification method based on SOINN network | |
| CN107180248A (en) | Strengthen the hyperspectral image classification method of network based on associated losses | |
| CN111191033A (en) | An open-set classification method based on classification utility | |
| CN109218223A (en) | A kind of robustness net flow assorted method and system based on Active Learning | |
| CN107742095A (en) | Chinese sign language recognition method based on convolutional neural network | |
| CN112926403A (en) | Unsupervised pedestrian re-identification method based on hierarchical clustering and difficult sample triples | |
| CN111046732A (en) | Pedestrian re-identification method based on multi-granularity semantic analysis and storage medium | |
| CN109993100A (en) | Realization method of facial expression recognition based on deep feature clustering | |
| CN115659974A (en) | Software security public opinion event extraction method and device based on open source software supply chain | |
| CN106845456A (en) | A kind of method of falling over of human body monitoring in video monitoring system | |
| CN114881172A (en) | Software vulnerability automatic classification method based on weighted word vector and neural network | |
| CN117034112A (en) | A malicious network traffic classification method based on sample enhancement and contrastive learning | |
| Min et al. | Offline handwritten Chinese character recognition based on improved GoogLeNet | |
| CN116186513A (en) | A Vibration Signal Recognition Method Based on One-Dimensional Convolutional Neural Network | |
| CN112200260B (en) | A Person Attribute Recognition Method Based on Dropout Loss Function | |
| CN111970305B (en) | Anomaly traffic detection method based on semi-supervised dimension reduction and Tri-LightGBM | |
| Chao et al. | Research on network intrusion detection technology based on dcgan | |
| CN105469095A (en) | Vehicle model identification method based on pattern set histograms of vehicle model images | |
| CN116994104B (en) | Zero-shot recognition method and system based on tensor fusion and contrastive learning | |
| CN118864946A (en) | Meta-learning-based few-shot pneumonia classification method and medium | |
| Hmida et al. | Arabic sign language recognition algorithm based on deep learning for smart cities |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |