Disclosure of Invention
In view of the above problems, embodiments of the present disclosure provide a method and an apparatus for training an abnormal traffic detection model based on semi-supervised learning. With this method and apparatus, the abnormal traffic detection model is trained in a semi-supervised manner: the labeled training sample set is enriched over successive rounds of model training, and whether the training process ends is determined according to the rate of change of the abnormal prediction rate. The performance of the abnormal traffic detection model can thus be guaranteed while using as few labeled training samples as possible, abnormal traffic can be detected effectively, and data privacy is protected.
According to one aspect of embodiments of the present specification, a method for training an abnormal traffic detection model based on semi-supervised learning is provided. The method includes: obtaining a training sample set of an abnormal traffic detection model, the training sample set including a labeled training sample set and an unlabeled training sample set, each training sample in the labeled training sample set having access traffic feature data and label data, and each training sample in the unlabeled training sample set having access traffic feature data; and performing semi-supervised learning training on the current abnormal traffic detection model based on the current training sample set until a training end condition is met, where, when the training end condition is not met, at least one to-be-labeled training sample in the current unlabeled training sample set is labeled and added to the current labeled training sample set for the next round of model training. The training end condition includes that the rate of change of the abnormal prediction rate for the current unlabeled training sample set in the current round of model training, relative to the abnormal prediction rate for the same sample set in the previous round, is not greater than a predetermined threshold.
Optionally, in one example of the above aspect, the method may further include, when the training end condition is not met, clustering the training samples in the training sample set and determining the at least one to-be-labeled training sample from the current unlabeled training sample set according to the clustering result.
Optionally, in one example of the above aspect, determining the at least one to-be-labeled training sample from the current unlabeled training sample set according to the clustering result may include determining the training samples in the current unlabeled training sample set whose clustering result is an outlier as the at least one to-be-labeled training sample.
Optionally, in one example of the above aspect, determining the at least one to-be-labeled training sample from the current unlabeled training sample set according to the clustering result may include selecting at least one target cluster from the clusters in the clustering result, and determining the unlabeled training samples in the at least one target cluster as the at least one to-be-labeled training sample.
Optionally, in one example of the above aspect, selecting at least one target cluster from the clusters in the clustering result includes determining, for each cluster in the clustering result, the proportion of the labeled training samples in that cluster among all labeled training samples, and determining the at least one target cluster according to the labeled-sample proportion of each cluster.
Optionally, in one example of the above aspect, the method may further include, when the training end condition is not met, determining the training samples in the current unlabeled training sample set that fall within a predetermined classification probability interval as the at least one to-be-labeled training sample.
Optionally, in one example of the above aspect, when the training end condition is not met, at least one training sample in the current unlabeled training sample set is labeled in an active learning manner and added to the current labeled training sample set for the next round of model training.
According to another aspect of embodiments of the present specification, an apparatus for training an abnormal traffic detection model based on semi-supervised learning is provided, including a training sample set acquisition unit and a model training unit. The training sample set acquisition unit obtains a training sample set of an abnormal traffic detection model, the training sample set including a labeled training sample set and an unlabeled training sample set, each training sample in the labeled training sample set having access traffic feature data and label data, and each training sample in the unlabeled training sample set having access traffic feature data. The model training unit performs semi-supervised learning training on the current abnormal traffic detection model based on the current training sample set until a training end condition is met, where, when the training end condition is not met, at least one to-be-labeled training sample in the current unlabeled training sample set is labeled and added to the current labeled training sample set for the next round of model training. The training end condition includes that the rate of change of the abnormal prediction rate for the current unlabeled training sample set in the current round of model training, relative to the abnormal prediction rate for the same sample set in the previous round, is not greater than a predetermined threshold.
Optionally, in one example of the above aspect, the model training unit may include: a model prediction module that provides the current training sample set to the current abnormal traffic detection model for abnormal prediction, to determine the current abnormal prediction rate of each current unlabeled training sample in the current unlabeled training sample set; a change rate determination module that determines the rate of change of the current abnormal prediction rate of each current unlabeled training sample relative to the previous abnormal prediction rate of that training sample in the previous round of model training; and a sample labeling module that, when the determined rate of change is greater than a predetermined threshold, labels at least one to-be-labeled training sample in the current unlabeled training sample set and adds it to the current labeled training sample set for the next round of model training. The model prediction module, the change rate determination module, and the sample labeling module operate in a loop until the training end condition is met.
Optionally, in one example of the above aspect, the sample labeling module may include a to-be-labeled sample determination submodule that determines at least one to-be-labeled training sample from the current unlabeled training sample set, and a sample labeling submodule that labels the determined at least one to-be-labeled training sample and adds it to the current labeled training sample set.
Optionally, in one example of the above aspect, the to-be-labeled sample determination submodule clusters the training samples in the training sample set and determines the at least one to-be-labeled training sample from the current unlabeled training sample set according to the clustering result.
Optionally, in one example of the above aspect, the to-be-labeled sample determination submodule determines the training samples in the current unlabeled training sample set whose clustering result is an outlier as the at least one to-be-labeled training sample.
Optionally, in one example of the above aspect, the to-be-labeled sample determination submodule selects at least one target cluster from the clusters in the clustering result and determines the unlabeled training samples in the at least one target cluster as the at least one to-be-labeled training sample.
Optionally, in one example of the above aspect, the to-be-labeled sample determination submodule determines, for each cluster in the clustering result, the proportion of the labeled training samples in that cluster among all labeled training samples, and determines the at least one target cluster according to the labeled-sample proportion of each cluster.
Optionally, in one example of the above aspect, the to-be-labeled sample determination submodule may determine the training samples in the current unlabeled training sample set that fall within a predetermined classification probability interval as the at least one to-be-labeled training sample.
According to another aspect of embodiments of the present specification, there is also provided an electronic device comprising at least one processor, and a memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the abnormal traffic detection model training method based on semi-supervised learning as described above.
According to another aspect of embodiments of the present specification, there is also provided a machine-readable storage medium storing executable instructions that, when executed, cause the machine to perform the abnormal traffic detection model training method based on semi-supervised learning as described above.
Detailed Description
The subject matter described herein will be discussed below with reference to example embodiments. It should be appreciated that these embodiments are discussed only to enable a person skilled in the art to better understand and thereby practice the subject matter described herein, and are not limiting of the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the embodiments herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
As used herein, the term "comprising" and variations thereof are open-ended terms meaning "including, but not limited to". The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other definitions, whether explicit or implicit, may be given below. Unless the context clearly indicates otherwise, the definition of a term is consistent throughout this specification.
The term "active learning" may refer to a paradigm in which the algorithm actively proposes which samples need to be labeled; after the samples are labeled manually, they are added to the training sample set for training. The term "clustering" refers to an analysis process that groups a collection of physical or abstract objects into multiple classes of similar objects; it can be used to measure the similarity between different data in a data source and to divide the data source into different clusters.
Furthermore, the term "abnormal traffic" may denote abnormal access requests to a server, such as malicious attack requests or black-industry access requests.
FIG. 1 illustrates a flowchart of an example of a method for training an abnormal traffic detection model based on semi-supervised learning, according to an embodiment of the present disclosure.
As shown in flow 100 of FIG. 1, at block 110, a training sample set of an abnormal traffic detection model is obtained, the training sample set including a labeled training sample set and an unlabeled training sample set. Specifically, each training sample in the labeled training sample set has access traffic feature data and label data, while each training sample in the unlabeled training sample set has access traffic feature data but no corresponding label data. Here, the access traffic feature data may be feature data derived from information such as the URL, the HTTP request, and user attributes, for example, the URL length, URL content information, URL information entropy, HTTP request header, and HTTP request body information.
Furthermore, the label data for the training samples in the labeled training sample set includes positive labels indicating a risk of abnormal traffic and negative labels indicating no such risk; that is, the labeled training sample set includes positive-label training samples and negative-label training samples. In some embodiments, when the numbers of positive-label and negative-label training samples differ significantly, a sample balancing approach may be employed to balance them. Illustratively, when the ratio of the number of positive-label training samples to the number of negative-label training samples in the labeled training sample set is lower than a set proportion threshold, the positive-label training samples are upsampled to expand their number.
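As an illustration, the upsampling step described above can be sketched as follows. This is a minimal sketch under assumed conventions (labels are 0/1 arrays, and the function name `balance_by_upsampling` and its `ratio_threshold` parameter are hypothetical, not part of the embodiment):

```python
import numpy as np

def balance_by_upsampling(X, y, ratio_threshold=0.5, rng=None):
    """Upsample positive-label samples (y == 1) with replacement when their
    count falls below ratio_threshold times the negative-label count."""
    rng = np.random.default_rng(rng)
    pos_idx = np.flatnonzero(y == 1)
    neg_idx = np.flatnonzero(y == 0)
    target = int(np.ceil(ratio_threshold * len(neg_idx)))
    if len(pos_idx) >= target:
        return X, y  # ratio already above the threshold; nothing to do
    # draw extra positive samples with replacement to reach the target count
    extra = rng.choice(pos_idx, size=target - len(pos_idx), replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep]
```

A set proportion threshold of 0.5, as in the sketch, would duplicate positives until they number at least half the negatives; the actual threshold in a deployment is a tuning choice.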
Next, semi-supervised learning training may be performed on the current abnormal traffic detection model based on the current training sample set; that is, the operations of blocks 120 through 150 are performed in a loop until a training end condition is satisfied. The training end condition includes that the rate of change of the abnormal prediction rate for the current unlabeled training sample set in the current round of model training, relative to the abnormal prediction rate for the same sample set in the previous round, is not greater than a predetermined threshold. Here, different rounds of model training use different training sample sets, so the trained abnormal traffic detection models differ; consequently, for each unlabeled training sample in the current unlabeled training sample set, the abnormal prediction rates produced by the current and previous abnormal traffic detection models may also change.
Specifically, at block 120, the current training sample set is provided to the current abnormal traffic detection model for abnormal prediction, to determine the current abnormal prediction rate of each current unlabeled training sample in the current unlabeled training sample set. Here, the current abnormal prediction rate is the rate obtained by performing abnormal prediction on each unlabeled training sample in the current unlabeled training sample set using the current abnormal traffic detection model.
Next, at block 130, the rate of change of the current abnormal prediction rate of each current unlabeled training sample, relative to the previous abnormal prediction rate of that sample in the previous round of model training, is determined. Here, as model training proceeds, the unlabeled training sample sets used in different rounds differ. For example, suppose the current unlabeled training sample set contains 80 unlabeled training samples while the previous one contained 100. The current and previous abnormal traffic detection models are each used to determine an abnormal prediction rate for each of the 80 samples in the current unlabeled training sample set; the difference between the two abnormal prediction rates is computed for each of these samples; and the rate of change is then derived from the differences over the 80 samples, for example, by taking the average of the differences as the rate of change.
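The rate-of-change computation in block 130 can be sketched as follows; keying predictions by sample identifier handles the case where the previous round scored more samples (e.g., 100) than the current round (e.g., 80). The function name and the dict-based representation are illustrative assumptions:

```python
def anomaly_rate_change(curr_rates, prev_rates):
    """Average absolute difference between the current and previous anomaly
    prediction rates, taken over the samples still unlabeled in this round.
    curr_rates / prev_rates: dict mapping sample id -> predicted anomaly rate."""
    diffs = [abs(curr_rates[i] - prev_rates[i]) for i in curr_rates]
    return sum(diffs) / len(diffs)
```

Only the identifiers present in the current round contribute, so samples labeled (and removed) between rounds drop out of the comparison automatically.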
Next, at block 140, it is determined whether the computed rate of change is greater than a predetermined threshold. Here, the predetermined threshold may be set in advance based on experience or multiple experiments. If the rate of change is greater than the predetermined threshold at block 140, the operations of block 150 are performed. In addition, in the first round of iterative training there is no rate of change of the abnormal prediction rate yet, so the subsequent operations, such as block 150, may be performed directly.
At block 150, at least one to-be-labeled training sample in the current unlabeled training sample set may be labeled and added to the current labeled training sample set to obtain an adjusted training sample set; the flow then returns to block 120 to perform the next round of model training with the adjusted training sample set as the current training sample set.
If the rate of change is not greater than the predetermined threshold in block 140, the training is ended.
As described above, the labeled training sample set used in the next round of model training may be richer than that used in the previous round. Thus, if a richer labeled training sample set is used in the current round but the prediction results show no significant change or improvement relative to the previous round (i.e., the rate of change of the abnormal prediction rate for the current unlabeled training sample set is low), it can be determined that the model has substantially converged. Conversely, if the newly added labeled training samples cause large fluctuations in the prediction results between two consecutive rounds, the model may need further optimization, for example by adding more newly labeled training samples.
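The overall loop of blocks 120 through 150 can be sketched as below. All helper names (`fit`, `predict_rate`, `pick_to_label`, `oracle`) are hypothetical callables standing in for the model trainer, the per-sample anomaly predictor, the sample-selection strategy, and the human labeler; this is a sketch of the control flow under those assumptions, not a definitive implementation:

```python
def train_semi_supervised(labeled, unlabeled, fit, predict_rate,
                          pick_to_label, oracle, threshold=0.01):
    """labeled: dict id -> (features, label); unlabeled: dict id -> features.
    Loop: train (block 120), score the unlabeled samples, compare with the
    previous round (blocks 130/140), and label more samples (block 150)."""
    prev_rates = None
    while True:
        model = fit(labeled)                                      # block 120
        rates = {i: predict_rate(model, s) for i, s in unlabeled.items()}
        if prev_rates is not None:
            common = [i for i in rates if i in prev_rates]
            change = sum(abs(rates[i] - prev_rates[i]) for i in common) / len(common)
            if change <= threshold:          # end condition met: model converged
                return model
        prev_rates = rates
        for i in pick_to_label(unlabeled):   # block 150: label and move samples
            labeled[i] = (unlabeled.pop(i), oracle(i))
```

Note that labeled samples are popped from the unlabeled set, so each round scores only the samples that remain unlabeled, matching the 80-versus-100 example above.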
FIG. 2 shows a flowchart of an example of a process of labeling training samples to be labeled when the training end condition is not met according to an embodiment of the present disclosure.
As shown in flow 200 of FIG. 2, at block 210, at least one to-be-labeled training sample is determined from the current unlabeled training sample set. For example, the to-be-labeled training samples may be determined from the current unlabeled training sample set randomly or according to a specific strategy.
Next, at block 220, the determined at least one to-be-labeled training sample is labeled and added to the current labeled training sample set. Accordingly, the at least one newly labeled training sample is removed from the current unlabeled training sample set, yielding a new current unlabeled training sample set and a new current labeled training sample set.
In one example of an embodiment of the present specification, when the training end condition is not met, at least one training sample in the current unlabeled training sample set may be labeled in an active learning manner and added to the current labeled training sample set for the next round of model training. For example, at least one to-be-labeled training sample in the current unlabeled training sample set may be determined based on various sample selection algorithms (e.g., clustering algorithms or other selection algorithms) and provided to an expert or developer, who assigns the corresponding labels and updates the labeled and unlabeled training sample sets. In this way, in each round of model training, unlabeled training samples are screened and labeled in an active learning manner to enrich the labeled training sample set until the model converges, ensuring that the abnormal traffic detection model achieves high performance.
FIG. 3 illustrates a flowchart of an example of a process of determining training samples to be marked from a current unmarked training sample set when the training end condition is not met, according to embodiments of the present description.
As shown in flow 300 of FIG. 3, at block 310, the training samples in the training sample set are clustered. For example, various clustering algorithms, such as the K-means algorithm or density-based clustering algorithms, may be employed.
Next, in block 320, at least one training sample to be marked is determined from the current unmarked training sample set according to the clustering result. Here, the clustering result may be a cluster and/or an outlier determined by a clustering algorithm.
In one example of an embodiment of the present specification, the training samples in the current unlabeled training sample set whose clustering result is an outlier may be determined as the at least one to-be-labeled training sample. Here, the training samples corresponding to outliers differ significantly from the sample groups corresponding to the other clusters, so samples of previously ignored or unknown abnormal traffic types are more easily found among the current unlabeled training samples corresponding to outliers. Furthermore, by labeling these outlier samples, the ability of the abnormal traffic detection model to recognize previously ignored or unknown abnormal types can be improved, thereby improving model performance.
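To illustrate, a simplified stand-in for density-based outlier detection is sketched below: a sample is treated as an outlier when its nearest neighbor lies farther than `eps` away. This is not the full DBSCAN noise rule (which also requires a minimum neighborhood size), and the function name and parameters are assumptions for illustration only:

```python
import numpy as np

def outlier_samples_to_label(X_all, unlabeled_idx, eps=1.0):
    """Return the indices of unlabeled samples whose nearest neighbour in the
    full sample set is farther than eps (a proxy for clustering outliers)."""
    X = np.asarray(X_all, dtype=float)
    # pairwise Euclidean distances; mask the diagonal so a point is not
    # considered its own neighbour
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nearest = d.min(axis=1)
    return [i for i in unlabeled_idx if nearest[i] > eps]
```

A production system would more likely take the noise points reported directly by a density-based clustering algorithm; the nearest-neighbor proxy merely keeps the sketch self-contained.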
FIG. 4 illustrates a flowchart of an example of determining training samples to be labeled based on clustering results according to an embodiment of the present disclosure.
As shown in flow 400 of FIG. 4, at block 410, at least one target cluster is selected from the clusters in the clustering result. For example, sufficiently large clusters (e.g., those whose size exceeds a threshold) may be selected as target clusters. Target clusters may also be selected in other ways, as discussed in more detail below.
Next, at block 420, the unlabeled training samples in the at least one target cluster are determined as the at least one to-be-labeled training sample. Because the to-be-labeled training samples determined from a target cluster share commonalities, researchers or experts can assign the corresponding label data more easily, which effectively reduces the burden of manual labeling.
FIG. 5 illustrates a flow chart of an example of selecting at least one target cluster from among the individual clusters in the clustered results, according to an embodiment of the present disclosure.
As shown in flow 500 of FIG. 5, at block 510, for each cluster in the clustering result, the proportion of the labeled training samples in that cluster among all labeled training samples is determined. Here, the number of labeled training samples contained in each cluster may differ; for example, a first cluster may contain 100 labeled samples while a second cluster contains only 2 (i.e., its labeled-sample proportion is too low), so that the labeled samples are unevenly distributed across clusters.
Next, at block 520, the at least one target cluster is determined according to the labeled-sample proportion of each cluster. For example, clusters whose labeled-sample proportion is below a set proportion threshold may be determined as target clusters. Selecting clusters with a low labeled-sample proportion as target clusters helps balance the number of labeled training samples across clusters and improves the generalization capability of the model.
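Blocks 510 and 520 can be sketched as follows; the cluster-assignment mapping, the function name, and the default threshold are illustrative assumptions rather than elements of the embodiment:

```python
def select_target_clusters(cluster_of, labeled_ids, ratio_threshold=0.1):
    """cluster_of: dict sample id -> cluster id; labeled_ids: ids of the
    labeled samples. A cluster whose share of all labeled samples falls
    below ratio_threshold becomes a target cluster (block 520)."""
    total = len(labeled_ids)
    counts = {}
    for i in labeled_ids:              # block 510: labeled count per cluster
        counts[cluster_of[i]] = counts.get(cluster_of[i], 0) + 1
    clusters = set(cluster_of.values())
    return sorted(c for c in clusters if counts.get(c, 0) / total < ratio_threshold)
```

Clusters containing no labeled samples at all have a proportion of zero and are therefore always selected, which matches the balancing intent described above.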
It should be noted that, in embodiments of the present disclosure, besides determining the to-be-labeled training samples by clustering as in FIGS. 3 to 5, other approaches may be used.
In one example of an embodiment of the present specification, when the training end condition is not met, the training samples in the current unlabeled training sample set that fall within a predetermined classification probability interval are determined as the at least one to-be-labeled training sample. Here, the predetermined classification probability interval covers predicted abnormality rates near the decision boundary between abnormal traffic samples and normal traffic samples. For example, when the predicted abnormality rate of a sample is a value between 0 and 1, the predetermined classification probability interval may be 0.45 to 0.55, centered on 0.5. Labeling at least one unlabeled training sample falling within this interval enriches the labeled training samples and improves the ability of the abnormal traffic detection model to distinguish normal data samples from abnormal data samples.
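The interval-based selection above can be sketched as follows (the function name and the dict representation of predictions are illustrative):

```python
def uncertain_samples_to_label(pred_rates, low=0.45, high=0.55):
    """Return ids of unlabeled samples whose predicted abnormality rate lies
    inside the predetermined classification probability interval [low, high],
    i.e. the samples the model is least certain about."""
    return sorted(i for i, p in pred_rates.items() if low <= p <= high)
```

This is the classic uncertainty-sampling heuristic: samples far from 0.5 are already classified confidently, so labeling effort is concentrated where it changes the decision boundary most.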
Fig. 6 shows a block diagram of an example of an abnormal flow detection model training apparatus based on semi-supervised learning according to an embodiment of the present specification.
As shown in fig. 6, the model training apparatus 600 includes a training sample set acquisition unit 610 and a model training unit 620.
The training sample set acquisition unit 610 is configured to obtain a training sample set of the abnormal traffic detection model, where the training sample set includes a labeled training sample set and an unlabeled training sample set, each training sample in the labeled training sample set has access traffic feature data and label data, and each training sample in the unlabeled training sample set has access traffic feature data. The operation of the training sample set acquisition unit 610 may refer to the operations described above with reference to block 110 in FIG. 1.
The model training unit 620 is configured to perform semi-supervised learning training on the current abnormal traffic detection model based on the current training sample set until a training end condition is met; when the training end condition is not met, at least one to-be-labeled training sample in the current unlabeled training sample set is labeled and added to the current labeled training sample set for the next round of model training. The training end condition includes that the rate of change of the abnormal prediction rate for the current unlabeled training sample set in the current round of model training, relative to the abnormal prediction rate for the same sample set in the previous round, is not greater than a predetermined threshold. The operation of the model training unit 620 may refer to the operations described above with reference to blocks 120 through 150 in FIG. 1.
Fig. 7 shows a block diagram of a model training unit according to an embodiment of the present specification.
As shown in fig. 7, the model training unit 620 includes a model prediction module 710, a rate of change determination module 720, and a sample marking module 730.
Model prediction module 710 is configured to provide the current set of training samples to the current abnormal traffic detection model for abnormal prediction to determine a current abnormal prediction rate for each current unlabeled training sample in the current unlabeled training sample set. The operation of the model prediction module 710 may refer to the operation described above with reference to block 120 in fig. 1.
The rate of change determination module 720 is configured to determine a rate of change of a current anomaly prediction rate for the respective current unlabeled training samples relative to a previous anomaly prediction rate for the respective current unlabeled training samples during a previous model training process. The operation of the rate of change determination module 720 may refer to the operation described above with reference to block 130 in fig. 1.
The sample labeling module 730 is configured to, when the determined rate of change is greater than a predetermined threshold, label at least one to-be-labeled training sample in the current unlabeled training sample set and add it to the current labeled training sample set for the next round of model training, wherein the model prediction module 710, the change rate determination module 720, and the sample labeling module 730 operate in a loop until the training end condition is met. The operation of the sample labeling module 730 may refer to the operations described above with reference to blocks 140 and 150 in FIG. 1.
Fig. 8 shows a block diagram of an example of a sample marking module according to an embodiment of the present disclosure.
As shown in FIG. 8, the sample labeling module 730 includes a to-be-labeled sample determination sub-module 731 and a sample labeling sub-module 732.
The to-be-labeled sample determination sub-module 731 is configured to determine at least one to-be-labeled training sample from the current unlabeled training sample set. The operation of the to-be-labeled sample determination sub-module 731 may refer to the operation of block 210 described above with reference to FIG. 2.
The sample labeling sub-module 732 is configured to label the determined at least one to-be-labeled training sample and add it to the current labeled training sample set. The operation of the sample labeling sub-module 732 may refer to the operation of block 220 described above with reference to FIG. 2.
In one example of an embodiment of the present specification, the to-be-labeled sample determination sub-module 731 clusters the training samples in the training sample set and determines the at least one to-be-labeled training sample from the current unlabeled training sample set according to the clustering result. For more details on this example, reference may be made to the operations of flow 300 described above with reference to FIG. 3.
Further, in one example, the to-be-labeled sample determination sub-module 731 may determine the training samples in the current unlabeled training sample set whose clustering result is an outlier as the at least one to-be-labeled training sample.
In another example, the to-be-labeled sample determination sub-module 731 may select at least one target cluster from the clusters in the clustering result and determine the unlabeled training samples in the at least one target cluster as the at least one to-be-labeled training sample. For more details on this example, reference may be made to the operations of flow 400 described above with reference to FIG. 4.
Further, the to-be-labeled sample determination sub-module 731 may determine, for each cluster in the clustering result, the proportion of the labeled training samples in that cluster among all labeled training samples, and determine the at least one target cluster according to the labeled-sample proportion of each cluster. For more details, reference may be made to the operations of flow 500 described above with reference to FIG. 5.
Additionally, in one example, the to-be-labeled sample determination sub-module 731 may further determine the training samples in the current unlabeled training sample set that fall within the predetermined classification probability interval as the at least one to-be-labeled training sample.
Embodiments of a method and apparatus for training an abnormal traffic detection model based on semi-supervised learning according to embodiments of the present specification are described above with reference to fig. 1 through 8. The details mentioned in the above description of the method embodiments apply equally to the embodiments of the device of the present description. The training method of the abnormal flow detection model based on semi-supervised learning can be realized by adopting hardware, or can be realized by adopting software or a combination of hardware and software.
Fig. 9 shows a hardware configuration diagram of an example of an electronic device 900 for training an abnormal traffic detection model based on semi-supervised learning according to an embodiment of the present specification. As shown in fig. 9, the electronic device 900 may include at least one processor 910, a memory (e.g., a non-volatile memory) 920, an internal memory 930, and a communication interface 940, and the at least one processor 910, the memory 920, the internal memory 930, and the communication interface 940 are connected together via a bus 960. The at least one processor 910 executes at least one computer-readable instruction (i.e., an element implemented in software as described above) stored or encoded in the memory 920.
In one embodiment, computer-executable instructions are stored in the memory that, when executed, cause the at least one processor 910 to: obtain a training sample set of an abnormal traffic detection model, the training sample set comprising a marked training sample set and an unmarked training sample set, each training sample in the marked training sample set having access traffic characteristic data and marking data, and each training sample in the unmarked training sample set having access traffic characteristic data; and perform semi-supervised learning training on the current abnormal traffic detection model based on the current training sample set until a training end condition is met, wherein, when the training end condition is not met, at least one to-be-marked training sample in the current unmarked training sample set is marked and added to the current marked training sample set for the next model training process, and the training end condition comprises that the rate of change of the abnormal prediction rate for the current unmarked training sample set in the current model training process, relative to the abnormal prediction rate for the current unmarked training sample set in the previous model training process, is not greater than a predetermined threshold.
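The training end condition can be sketched as follows. The function names and the 5% threshold are illustrative assumptions; the specification only requires that the change rate of the abnormal prediction rate between consecutive rounds not exceed a predetermined threshold.

```python
# Illustrative sketch (names and threshold assumed) of the training end
# condition: stop when the abnormal prediction rate on the current
# unmarked set changes by no more than a predetermined threshold
# relative to the previous model training process.

def change_rate(prev_rate, curr_rate):
    """Relative change of the abnormal prediction rate between rounds."""
    if prev_rate == 0:
        return float("inf") if curr_rate != 0 else 0.0
    return abs(curr_rate - prev_rate) / prev_rate

def should_stop(prev_rate, curr_rate, threshold=0.05):
    """True when the change rate does not exceed the threshold."""
    return change_rate(prev_rate, curr_rate) <= threshold

# 0.20 -> 0.21 is a 5% change, within the threshold: training ends.
print(should_stop(0.20, 0.21))
# 0.20 -> 0.30 is a 50% change: training continues with newly marked samples.
print(should_stop(0.20, 0.30))
```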
It should be appreciated that the computer-executable instructions stored in memory 920, when executed, cause at least one processor 910 to perform the various operations and functions described above in connection with fig. 1-8 in various embodiments of the present description.
In this specification, the electronic device 900 may include, but is not limited to, personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile electronic devices, smart phones, tablet computers, cellular phones, personal digital assistants (PDAs), handsets, messaging devices, wearable electronic devices, consumer electronic devices, and the like.
According to one embodiment, a program product, such as a machine-readable medium, is provided. The machine-readable medium may have instructions (i.e., the elements described above implemented in software) that, when executed by a machine, cause the machine to perform the various operations and functions described above in connection with fig. 1 through 8 in the various embodiments of the present specification. In particular, a system or apparatus equipped with a readable storage medium on which software program code implementing the functions of any of the above embodiments is stored may be provided, and a computer or processor of the system or apparatus may be caused to read out and execute the instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium may implement the functions of any of the above-described embodiments, and thus the machine-readable code and the readable storage medium storing the machine-readable code form part of the present invention.
Examples of readable storage media include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-Rs, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs), magnetic tapes, non-volatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer or a cloud by a communication network.
Fig. 10 shows an architectural diagram of an example of an abnormal traffic detection apparatus, based on an abnormal traffic detection model, to which the embodiments of the present specification are applicable.
As shown in fig. 10, in this architecture 1000, at least one client may send an access request to a server 1020 over a network 1010 to request access to data in the server. Here, the client may be a terminal device such as a desktop computer 1032, a notebook computer 1034, or a mobile phone 1036. In addition, the server 1020 provides services based on a private data set. In one example of the present specification, the private data set is stored on the server 1020; in another example, the server 1020 may make a remote call to the private data set. In some application scenarios, a hacker may use a client and employ multiple attack modes to steal private information through the server 1020, which poses a great challenge to the security of data privacy.
In embodiments of the present specification, the abnormal traffic detection apparatus 1040 may identify whether an access request belongs to abnormal traffic by locally or remotely invoking the abnormal traffic detection model, and may perform a corresponding security policy operation (e.g., not responding, or alerting) on the abnormal traffic. Here, the abnormal traffic detection model is an abnormal traffic detection model trained using the method described with reference to fig. 1.
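The security policy operation above can be sketched as follows. The function name, score threshold, and action labels are illustrative assumptions; the specification only states that abnormal traffic may receive a policy response such as not responding or alerting.

```python
# Illustrative sketch (all names assumed) of the detection apparatus
# applying a security policy to the model's score for an access request.

def handle_request(abnormality_score, threshold=0.5):
    """Return the policy action for a request's predicted abnormality score."""
    if abnormality_score >= threshold:
        # Abnormal traffic: do not respond to the request, and alert.
        return "drop_and_alert"
    # Normal traffic: serve the request as usual.
    return "serve"

print(handle_request(0.87))  # → drop_and_alert
print(handle_request(0.12))  # → serve
```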
It will be appreciated by those skilled in the art that various changes and modifications may be made to the embodiments of the invention above without departing from the spirit thereof. Accordingly, the scope of the invention should be limited only by the attached claims.
It should be noted that not all the steps and units in the above flowcharts and the system configuration diagrams are necessary, and some steps or units may be omitted according to actual needs. The order of execution of the steps is not fixed and may be determined as desired. The apparatus structures described in the above embodiments may be physical structures or logical structures, that is, some units may be implemented by the same physical entity, or some units may be implemented by multiple physical entities, or may be implemented jointly by some components in multiple independent devices.
In the above embodiments, the hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module, or processor may include permanently dedicated circuitry or logic (e.g., a dedicated processor, FPGA, or ASIC) to perform the corresponding operations. The hardware unit or processor may also include programmable logic or circuitry (e.g., a general-purpose processor or other programmable processor) that may be temporarily configured by software to perform the corresponding operations. The particular implementation (mechanical, permanently dedicated circuitry, or temporarily configured circuitry) may be determined based on cost and time considerations.
The detailed description set forth above in connection with the appended drawings describes exemplary embodiments, but does not represent all embodiments that may be implemented or fall within the scope of the claims. The term "exemplary" used throughout this specification means "serving as an example, instance, or illustration," and does not mean "preferred" or "advantageous over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.