CA2474827C - Method and apparatus for sketch-based detection of changes in network traffic - Google Patents
- Publication number: CA2474827C
- Authority: CA (Canada)
- Prior art keywords: forecast, sketch, value, parameter, data item
- Legal status: Expired - Fee Related (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abstract
A sketch-based change detection technique is introduced for anomaly detection and other applications that can benefit from having a quick and efficient change detection mechanism. The technique is capable of detecting significant changes in massive data streams with a large number of network time series. As part of the technique, we designed a variant of the sketch data structure, called the k-ary sketch, which uses a constant, small amount of memory, and has constant per-record update and reconstruction cost. A variety of time series forecast models are implemented on top of such summaries to detect significant changes by looking for flows with large forecast errors. Heuristics for automatically configuring the forecast model parameters are presented.
Real Internet traffic data is used to demonstrate that the sketch-based change detection method is highly accurate when compared with per-flow analysis, and can be implemented at low computation and memory costs. The results validate using the technique as a building block for network anomaly detection and traffic measurement in large computer networks.
Description
METHOD AND APPARATUS FOR SKETCH-BASED DETECTION
OF CHANGES IN NETWORK TRAFFIC
Field of the Invention:
[0002] This invention generally relates to multiple computer or process coordinating, and in particular it relates to computer network monitoring.
Background of the Invention:
[0003] Traffic anomalies such as failures and attacks are commonplace in today's network, and identifying them rapidly and accurately is critical for large network operators. The detection typically treats the traffic as a collection of flows that need to be examined for significant changes in traffic pattern (e.g., volume, number of connections).
However, as link speeds and the number of flows increase, keeping per-flow state is either too expensive or too slow.
[0004] Traffic anomalies are an integral part of daily life for today's network operators.
Some traffic anomalies are expected or unanticipated but tolerable. Others are often indications of performance bottlenecks due to flash crowds, network element failures, or malicious activities such as denial of service (DoS) attacks and worms. Suitable motivation exists to process massive data streams (available from diverse sources) quickly, in order to examine them for anomalous behavior. Two basic approaches to network anomaly detection are common.
[0005] The first approach is the "signature-based" approach, which detects traffic anomalies by looking for patterns that match signatures of known anomalies.
For example, such techniques may infer DoS activities based on address uniformity, a property shared by several popular DoS toolkits. Signature-based methods have been extensively explored in the literature and in many software systems and toolkits. One limitation of this approach is the requirement that the anomaly signatures be known in advance. Thus, it cannot be applied to identify new anomalies. Also, a malicious attacker can evade signature-based detection systems by altering their signatures. One can see a parallel in the failure of filter-based, spam-fighting systems where spammers introduce random hashes in their spam messages.
[0006] A second approach is the "statistics-based" approach, which does not require prior knowledge about the nature and properties of anomalies and therefore can be effective even for new anomalies or variants of existing anomalies. A very important component of the statistics-based approach is change detection. It detects traffic anomalies by deriving a model of normal behavior based on the past traffic history and looking for significant changes in short-term behavior (on the order of minutes to hours) that are inconsistent with the model.
[0007] Change detection has been extensively studied in the context of time series forecasting and outlier analysis. The standard techniques include different smoothing techniques (such as exponential smoothing or sliding window averaging), Box-Jenkins AutoRegressive Integrated Moving Average (ARIMA) modeling, and finally the more recent wavelet-based techniques.
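As a concrete illustration of the smoothing techniques named above, the following is a minimal exponential smoothing (EWMA) forecaster in Python. The smoothing parameter alpha and the sample series are illustrative choices, not values prescribed by this disclosure.

```python
# Minimal one-step-ahead exponential smoothing (EWMA) forecast.
# alpha = 0.5 is a hypothetical smoothing parameter.
def ewma_forecast(observed, alpha=0.5):
    """Return the forecast made for each interval, before observing it."""
    forecasts = []
    forecast = observed[0]          # seed the model with the first observation
    for value in observed:
        forecasts.append(forecast)  # forecast issued before seeing `value`
        forecast = alpha * value + (1 - alpha) * forecast
    return forecasts

series = [10.0, 12.0, 11.0, 50.0]   # a sudden surge in the last interval
preds = ewma_forecast(series)
errors = [o - f for o, f in zip(series, preds)]
# errors[-1] == 39.0: the surge produces a large forecast error
```

A change detector built on such a model would flag the final interval, whose observed value departs sharply from the smoothed history.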
[0008] Prior works have applied these techniques to network fault detection and intrusion detection. Examples in fault detection include: those that identify faults based on statistical deviations from normal traffic behavior; methods of identifying aberrant behavior by applying thresholds in time series models of network traffic; methods for intrusion detection including neural networks, Markov models, and clustering; and those that provide a characterization of different types of anomalies and propose wavelet-based methods for change detection.
[0009] Unfortunately, existing change detection techniques typically only handle a relatively small number of time series. While these may suffice for detecting changes in highly aggregated network traffic data (e.g., Simple Network Management Protocol (SNMP) link counts with a 5 minute sample interval), they cannot scale up to the needs at the network infrastructure (e.g., Internet Service Provider (ISP)) level. At an ISP level, traffic anomalies may be buried inside the aggregated traffic, mandating examination of the traffic at a much lower level of aggregation (e.g., Internet Protocol (IP) address level) in order to expose them.
[0010] Given today's traffic volume and link speeds, a suitable detection method has to be able to handle potentially several millions or more of concurrent network time series. Directly applying existing techniques on a per-flow basis cannot scale up to the needs of such massive data streams. Some recent research efforts have been directed towards developing scalable "heavy-hitter" detection techniques for accounting and anomaly detection purposes. However, heavy-hitter techniques do not necessarily correspond to flows experiencing significant changes and thus it is not clear how their techniques can be adapted to support change detection.
[0011] Accordingly, there is a need for an efficient, accurate, and scalable change detection mechanism for detecting significant changes in massive data streams with a large number of flows.
Summary of the Invention:
[0012] It is an object of the present disclosure, therefore, to introduce methods implemented on various apparatus in which compact summaries of the traffic data are built using sketches. In particular, a variant of the sketch data structure, the k-ary sketch, is introduced. The k-ary sketch uses a constant, small amount of memory, and has constant per-record update and reconstruction cost. Its linearity property enables the summarization of traffic at various levels. A variety of time series forecast models (ARIMA, Holt-Winters, etc.) are then implemented on top of such summaries to detect significant changes by looking for flows with large forecast errors. Heuristics for automatically configuring the model parameters are also introduced.
[0013] Using a large amount of real Internet traffic data from an operational tier-1 ISP, the sketch-based change detection method is shown to be highly accurate, and can be implemented at low computation and memory costs. The results indicate that the disclosed methods may be reliably used for network anomaly detection and traffic measurement.
[0013a] Certain exemplary embodiments can provide a method for detecting anomalous traffic flow, comprising: generating a k-ary sketch of observed values of an incoming data flow, the k-ary sketch identifying each data item in the incoming data flow by a key value comprising a destination Internet Protocol (IP) address of the data item and an update value comprising a size of the data item; generating a forecast sketch using a time series forecast model that includes a first parameter defining a number of hash functions applied to each data item of the incoming data flow, a second parameter defining a hash table size and a third parameter defining a time interval, the time series forecast model resulting in a forecast value; calculating a forecast error comprising a difference between the forecast value and an observed value of the incoming data flow;
establishing a threshold value based on an estimated second moment of the forecast error, the estimated second moment comprising a sum of squares of each update value associated with each key value; and indicating an alarm condition when the forecast error exceeds the threshold value.
[0013b] Certain exemplary embodiments can provide a computer readable medium for implementing a method, performed by a computer, for detecting anomalous traffic flow, comprising: a sketch module for creating a k-ary sketch of observed values of an incoming data flow, the k-ary sketch identifying each data item in the incoming data flow by a key value comprising an Internet Protocol (IP) address of each data item and an update value comprising a size of the data item; a forecast module for generating a forecast sketch for the incoming data flow using a time series forecast model that includes a first parameter defining a number of hash functions applied to each data item identified by the sketch module, a second parameter defining a hash table size and a third parameter defining a time interval, the time series forecast model resulting in a forecast value, the forecast module further for calculating a forecast error comprising a difference between the forecast value and an observed value of the incoming data flow determined by the sketch module; and a change detection module for establishing a threshold value based on an estimated second moment of the forecast error received from the forecast module, the estimated second moment comprising a sum of squares of each update value associated with each key value, and for indicating an alarm condition when the forecast error exceeds the threshold value.
[0013c] Certain exemplary embodiments can provide an apparatus for detecting anomalous traffic flow, comprising: a processor; and a memory in communication with the processor, the memory for storing a plurality of processing instructions for directing the processor to: generate a k-ary sketch of observed values of an incoming data flow, the k-ary sketch identifying each data item in the incoming data flow by a key value comprising an Internet Protocol (IP) address of each data item and an update value comprising a size of the data item; generate a forecast sketch using a time series forecast model that includes a first parameter defining a number of hash functions applied to each data item of the incoming data flow, a second parameter defining a hash table size and a third parameter defining a time interval, the time series forecast model resulting in a forecast value;
calculate a forecast error comprising a difference between the forecast value and an observed value of the incoming data flow; establish a threshold value based on an estimated second moment of the forecast error, the estimated second moment comprising a sum of squares of each update value associated with each key value; and indicate an alarm condition when the forecast error exceeds the threshold value.
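The thresholding step common to the embodiments above can be sketched as follows. The second moment is computed directly as a sum of squares of update values, as the claims describe; the scaling constant T and the function names are hypothetical illustrations, not terms of the claims.

```python
# Illustrative sketch of the claimed thresholding step: derive an alarm
# threshold from the estimated second moment of the update values, then
# compare the forecast error against it. T is a hypothetical tuning constant.
def second_moment(updates):
    """Sum of squares of the update values (the claimed F2 estimate)."""
    return sum(u * u for u in updates)

def alarm(forecast_error, updates, T=0.05):
    threshold = T * second_moment(updates) ** 0.5
    return abs(forecast_error) > threshold
```

For example, with updates [3, 4] the second moment is 25, so with T = 1.0 the threshold is 5 and a forecast error of 10 raises an alarm while an error of 3 does not.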
Brief Description of the Drawings:
[0014] Further aspects of the present disclosure will be more readily appreciated upon review of the detailed description of its various embodiments, described below, when taken in conjunction with the accompanying drawings, of which:
[0015] FIG. 1 is a block diagram of a computer network within which the processes disclosed herein may be performed;
[0016] FIG. 2 is a flowchart of an exemplary sketch-based anomaly detection process performed over the computer network of FIG. 1;
[0017] FIG. 3 depicts a graph of the cumulative distribution function (CDF) of Relative Difference between sketch-based processes used in the process of FIG. 2 and per-flow analysis;
[0018] FIG. 4 depicts graphs of the results of random selection of H parameters for time series forecasting models used with the process of FIG. 2;
[0019] FIG. 5 depicts graphs of the results of random selection of K parameters for time series forecasting models used with the process of FIG. 2;
[0020] FIG. 6 depicts graphs of the overall similarity of per flow measurements and sketch functions used with the process of FIG. 2 for large router files with time series forecast parameters H=5 and K=32000;
[0021] FIG. 7 depicts graphs of the average similarity in the EWMA model used with the process of FIG. 2 where H is fixed at 5 and K varies between 8K and 64K, for both 300s and 60s time intervals;
[0022] FIG. 8 depicts graphs of the accuracy of top N vs. top X*N for the EWMA model used with the process of FIG. 2 for large router files;
[0023] FIG. 9 depicts graphs of the effect of varying H and K parameters for EWMA models used with the process of FIG. 2 for large router files;
[0024] FIG. 10 depicts graphs of similarity metrics for an EWMA model used with the process of FIG. 2 for large and medium sized router files at an interval of 300s;
[0025] FIG. 11 depicts graphs of similarity metrics for an ARIMA model used with the process of FIG. 2 for large and medium sized router files at an interval of 300s;
[0026] FIG. 12 depicts graphs of the number of alarms, number of false positives and number of false negatives at differing threshold values using the NSHW model with the process of FIG. 2 on a large router at a 60 second interval;
[0027] FIG. 13 depicts graphs of the number of alarms, number of false positives and number of false negatives at differing threshold values using the NSHW model with the process of FIG. 2 on a large router at a 300 second interval;
[0028] FIG. 14 depicts graphs of false negative ratio for EWMA and NSHW models used with the process of FIG. 2 as implemented on a medium sized router at a 300 second time interval;
[0029] FIG. 15 depicts graphs of false negative ratios for differing ARIMA models used with the process of FIG. 2 as implemented on a medium sized router at a 300 second time interval;
[0030] FIG. 16 depicts graphs of false positive ratios for the EWMA and NSHW models used with the process of FIG. 2 as implemented on a medium sized router at a 300 second time interval;
[0031] FIG. 17 depicts graphs of false positive ratios for differing ARIMA models used with the process of FIG. 2 as implemented on a medium sized router at a 300 second time interval; and
[0032] FIG. 18 depicts a table summarizing the impact of hash computations and sketch functions on computing times in separate computing systems that may be used in the network of FIG. 1.
Detailed Description of the Specific Embodiments:
[0033] In the database research community, computation over massive data streams has been an active research area over the past several years. The emerging field of data stream computation deals with various aspects of computation that can be performed in a space- and time-efficient fashion when each tuple in a data stream can be touched only once (or a small number of times).
[0034] One particularly powerful technique is "sketch," a probabilistic summary technique proposed for analyzing large streaming datasets. Sketches avoid keeping per-flow state by dimensionality reduction, using projections along random vectors. Sketches have some interesting properties that have proven very useful in data stream computation: they are space efficient, provide provable probabilistic reconstruction accuracy guarantees, and are linear (i.e., sketches can be combined in an arithmetical sense).
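The linearity property mentioned above can be demonstrated with a toy single-row sketch: summing two sketches component-wise yields the sketch of the combined stream. The modulo-K bucketing via Python's built-in hash is a stand-in for the H independent hash functions of a real sketch, and the IP-address keys and byte updates are invented examples.

```python
# Toy single-row sketch illustrating linearity: sketch(s1) + sketch(s2)
# equals sketch(s1 concatenated with s2). K = 8 is an illustrative table size.
K = 8

def sketch(stream):
    table = [0.0] * K
    for key, update in stream:
        table[hash(key) % K] += update  # one hash row; real sketches use H rows
    return table

s1 = sketch([("10.0.0.1", 100), ("10.0.0.2", 40)])
s2 = sketch([("10.0.0.1", 60)])
combined = sketch([("10.0.0.1", 100), ("10.0.0.2", 40), ("10.0.0.1", 60)])
assert [a + b for a, b in zip(s1, s2)] == combined  # linearity holds
```

It is this property that lets summaries taken at different routers or time scales be aggregated without revisiting the raw traffic.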
[0035] A sketch-based change detection process will now be introduced wherein data stream computation techniques are incorporated into change detection, such that detection of significant changes in massive data streams with a large number of network time series is accommodated. With sketch-based change detection, compact summaries of the traffic data are generated using sketches. A variant of the sketch data structure introduced herein, named "k-ary sketch," uses a constant, small amount of memory, and has constant per-record update and reconstruction cost. A variety of time series forecast models (ARIMA, Holt-Winters, etc.) may be implemented on top of such summaries and detect significant changes by looking for flows with large forecast errors. Being able to compute significant differences in the list of top flows quickly can point towards potential anomalies. Depending on the length of the time period for which forecasts are computed and the duration of significant changes, the process can accurately identify the presence of an anomaly. Note that an anomaly can be a benign surge in traffic (like a flash crowd) or an attack. Heuristics for configuring the model parameters are likewise introduced.
[0036] A large amount of real Internet traffic data was used to demonstrate that the sketch-based change detection method is highly accurate when compared with per-flow analysis, and can be implemented at lower computation and memory costs. Evaluations show that lists of the top flows in a time period may be reconstructed efficiently and accurately, resulting in similar forecast errors when compared with per-flow techniques. This method can thus readily serve as a foundation for network anomaly detection.
[0037] Referring now to FIGS. 1-18, wherein similar components of the present disclosure are referenced in like manner, various embodiments of a method and apparatus for sketch-based detection of changes in network traffic are disclosed.
[0038] FIG. 1 depicts an exemplary computing environment 100 in which the processes of the present disclosure may be performed. As one possible implementation, a server 102, such as an enterprise network server of the type commonly manufactured by IBM or a group of distributed servers, is implemented as a gateway between a local area network (LAN) 106, and a broader computing environment such as the Internet. The process may be implemented directly by the server 102, or may instead be implemented on a separate computing device 104, such as a personal computing terminal, computer workstation or separate dedicated server(s) tasked with monitoring incoming data flow. In such embodiments, the computing device 104 may receive data flow input prior to the server 102, in parallel with the server 102 or after the server 102 as shown.
[0039] The LAN 106 may be a corporate network environment or the like, and may include any small to large sized network operating one or more computing environments and one or more network protocols. In a particular embodiment, server 102, terminal 104 and LAN 106 are operated by an Internet Service Provider or the like, which are frequent targets of DoS attacks and other traffic anomalies. Other implementations of the computing environment 100 may likewise be accommodated.
[0040] The operation of the sketch-based change detection process 200, after installation on the server 102 or the computing device 104, is summarized in the flowchart of FIG. 2. According to the process 200, a k-ary sketch of incoming data flows is generated for selected continuous time intervals (step 202). Time series forecasting is implemented to generate forecast sketches and forecast errors for each k-ary sketch (step 204). If any forecast errors exceed an established threshold value (step 206), the process identifies the anomalous flow and triggers an alarm condition (step 208). Otherwise, the monitoring of incoming data flows continues.
[0041] Particular implementations of steps 202-206 will be described in the following detailed discussions, in which an overview of the available framework of the sketch-based change detection process is presented, followed by detailed discussions of the contemplated software modules for implementing the process 200, experimental setups for testing the process 200, and results of testing of the sketch-based change detection process 200 on different large and real datasets.
[0042] Over the past several years, various models have been proposed to describe data streams, including the Time Series Model, The Cache Register Model, and the Turnstile Model.
The most general one, the Turnstile Model, has been assumed in the discussions that follow, though other such models may likewise be accommodated. According to the selected model, let I = a1, a2, ..., be an input stream that arrives sequentially, item by item. Each item ai = (ai, ui) consists of a key ai ∈ [u] = {0, 1, ..., u-1}, and a (possibly negative) update ui ∈ R. Associated with each key a ∈ [u] is a time varying signal A[a]. The arrival of each new data item (ai, ui) causes the underlying signal A[ai] to be updated: A[ai] += ui. The goal of change detection is to identify all such signals with significant changes in their behavior.
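Written out as code, the Turnstile bookkeeping is simply a per-key accumulation; the destination-IP keys and byte-count updates below are invented examples, including a negative update, which the model explicitly permits.

```python
# Literal Turnstile-model bookkeeping: each arriving item (a_i, u_i)
# updates the time-varying signal A[a_i] += u_i.
from collections import defaultdict

A = defaultdict(float)                      # signal A[a] for each key a
stream = [("10.1.1.1", 1500), ("10.2.2.2", 40), ("10.1.1.1", -1500)]
for a_i, u_i in stream:                     # updates may be negative
    A[a_i] += u_i
# A["10.1.1.1"] returns to 0; A["10.2.2.2"] holds 40
```

Per-flow state like this dictionary is exactly what becomes infeasible at scale, motivating the sketch summaries that replace it.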
[0043] The Turnstile model can be instantiated in many ways with specific definitions of the key values and update values. In the context of network anomaly detection, the key can be defined using one or more fields in packet headers of data items from an input data stream, such as source and destination IP addresses, source and destination port numbers, protocol number, and the like. It is also possible to define keys with parameter values like network prefixes or Autonomous System (AS) numbers to achieve higher levels of aggregation.
[0044] The update value can be the size of a packet, the total bytes, or the number of data packets in a flow (when flow-level data is available). To keep the parameter space within a manageable size, however, destination IP version 4 (IPv4) address and bytes are readily used as the key value and the update value, respectively. Alternative choices of key and update values may also be used, some of which may impact the running time of the process 200 on a computing device.
[0045] In an ideal environment with infinite resources, one could perform time series forecasting and change detection on a per-flow basis. Specifically, time may be divided into discrete intervals I_1, I_2, .... For each time interval I_t, and each signal A[a] that appears before or during interval I_t, an observed value o_a(t) can be computed as the total update to A[a]
during interval I_t: o_a(t) = Σ_{i ∈ A_a(t)} u_i, where the set of indices A_a(t) = { i | a_i = a and (a_i, u_i) arrives during I_t }. The forecast value f_a(t) can then be determined by applying a forecasting model to observed values in the past intervals. The forecast error e_a(t) = o_a(t) − f_a(t) can then be determined and an alarm indicated whenever e_a(t) is significant according to certain detection criteria.
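The idealized per-flow computation may be sketched as follows; the last-value forecast f_a(t) = o_a(t − 1) and the alarm cutoff are simplifying assumptions used purely to illustrate the observed-value and forecast-error definitions.

```python
from collections import defaultdict

def observed_values(interval_items):
    """o_a(t) = sum of updates u_i with a_i = a arriving during I_t."""
    o = defaultdict(float)
    for a, u in interval_items:
        o[a] += u
    return o

def forecast_errors(prev_obs, cur_obs):
    """e_a(t) = o_a(t) - f_a(t), with f_a(t) taken here as o_a(t-1)."""
    keys = set(prev_obs) | set(cur_obs)
    return {a: cur_obs.get(a, 0.0) - prev_obs.get(a, 0.0) for a in keys}

I1 = [("A", 100), ("B", 20)]
I2 = [("A", 100), ("B", 900)]          # key B changes sharply
e = forecast_errors(observed_values(I1), observed_values(I2))
alarms = [a for a, err in e.items() if abs(err) > 500]
assert alarms == ["B"]
```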
[0046] In the real world, however, and as stated previously, per-flow analysis can be prohibitive because the number of signals present in the incoming data flow can be very large.
For instance, if source and destination IPv4 addresses are used as the key, the key space [u] can be as large as 2^64, and the number of signals can easily reach tens of millions given today's traffic volume and link speeds. Hence it can be too slow or too expensive to perform change detection on a per-flow basis.
[0047] The solution presented herein is to create sketches to summarize the input stream and then implement various forecasting models on top of the sketches.
The sketch-based change detection process 200 may be implemented as the following three basic modules: a Sketch module, a Forecasting module, and a Change Detection module.
[0048] The sketch module creates a space- and time-efficient sketch (the observed sketch S_o(t)) to summarize all the observed values o_a(t) (total update to signal A[a]) during each time interval I_t. The forecasting module produces a forecast sketch S_f(t) using some forecasting model based on observed sketches in the past intervals. It then computes the forecast error
sketch S_e(t) as the difference between S_o(t) and S_f(t), i.e., S_e(t) = S_o(t) − S_f(t). The linearity of the sketch data structure allows us to implement various forecasting models and compute the forecast error directly at the sketch level. The change detection module uses the error sketch S_e(t) to identify significant (i.e., anomalous) changes. The functions performed by these modules will now be described in turn.
[0049] Let (a_1, u_1), (a_2, u_2), ... be an input stream (for example, the substream of I that is observed during a given time interval). For each key a ∈ [u], let v_a = Σ_{i ∈ A_a} u_i, where the set of indices A_a = { i | a_i = a }.
[0050] For each interval, the "second moment" (F_2) is defined as the sum of squares of the values associated with all the keys, i.e., F_2 = Σ_a (v_a)^2. We refer to the square root of the second moment (√F_2) as the "L2 norm."
[0051] The sketch module uses the sketch data structure to summarize all the v_a in each time interval. A sketch is a probabilistic summary data structure based on random projections. We have designed a variant of the sketch data structure, which we call the "k-ary sketch." The k-ary sketch is similar to the count sketch data structure recently proposed by others. However, the most common operations on the k-ary sketch use simpler operations and are more efficient than the corresponding operations defined on count sketches.
[0052] Just like the count sketch, a k-ary sketch S consists of an H × K table of registers:
T_S[i][j] (i ∈ [H], j ∈ [K]). Each row T_S[i][·] (i ∈ [H]) is associated with a hash function h_i from [u]
to [K]. The data structure for any k-ary sketch may then be viewed as an array of hash tables.
The hash functions are required to be 4-universal in order to provide probabilistic guarantees of
reconstruction accuracy. They may be constructed using a fast tabulation-based method.
Different h_i are constructed using independent seeds, and are therefore independent.
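The table-of-registers layout described above may be sketched as follows. Simple tabulation hashing over the bytes of the key is shown as an illustration of the tabulation-based construction; the 4-universal guarantee cited in the text requires a stronger (e.g., twisted) tabulation variant, so this is a stand-in rather than the construction the specification assumes.

```python
import random

# Illustrative H x K register table with one independently seeded hash per row.
H, K = 5, 1024

def make_tabulation_hash(seed, key_bytes=4):
    rnd = random.Random(seed)
    # One lookup table of K-range random values per key byte.
    tables = [[rnd.randrange(K) for _ in range(256)] for _ in range(key_bytes)]
    def h(a):
        out = 0
        for b in range(key_bytes):
            out ^= tables[b][(a >> (8 * b)) & 0xFF]
        return out                      # XOR of values < 1024 stays < 1024
    return h

hashes = [make_tabulation_hash(seed) for seed in range(H)]   # independent seeds
T = [[0.0] * K for _ in range(H)]                            # the H x K registers

assert all(0 <= h(0xC0A80001) < K for h in hashes)
```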
[0053] There are four basic operations defined for k-ary sketches: (1) UPDATE
to update a sketch, (2) ESTIMATE to reconstruct v_a for a given key a, (3) ESTIMATEF2 to estimate the second moment F_2, and (4) COMBINE to compute the linear combination of multiple sketches. These operations are used in various modules of the change detection process 200: UPDATE in the sketch module to update the observed sketch S_o(t); COMBINE
in the forecasting module to implement various forecasting models and to compute the forecast sketch S_f(t) and forecast error sketch S_e(t); ESTIMATE in the change detection module to reconstruct forecast errors from S_e(t); and ESTIMATEF2 in the change detection module to choose the threshold for judging whether forecast errors are significant.
[0054] One formal specification of these operations is as follows:
[0055] 1. UPDATE(S, a, u): For ∀i ∈ [H], T_S[i][h_i(a)] += u.
[0056] 2. ESTIMATE(S, a): Let sum(S) = Σ_{j ∈ [K]} T_S[0][j] be the sum of all values in the sketch, which only needs to be computed once before any ESTIMATE(S, a) is called. Return an estimate of v_a:
v_a^est = median_{i ∈ [H]} { v_a^{h_i} },
where v_a^{h_i} = ( T_S[i][h_i(a)] − sum(S)/K ) / ( 1 − 1/K ).
As shown in the proofs at the end of this discussion, each v_a^{h_i} (i ∈ [H]) is an unbiased estimator of v_a with variance inversely proportional to (K − 1). v_a^est further improves accuracy by avoiding the extreme estimates.
[0057] 3. ESTIMATEF2(S): Return an estimate of the second moment:
F_2^est = median_{i ∈ [H]} { F_2^{h_i} },
where F_2^{h_i} = ( K / (K − 1) ) Σ_{j ∈ [K]} ( T_S[i][j] )^2 − ( 1 / (K − 1) ) ( sum(S) )^2.
As shown in the proofs at the end of this discussion, each F_2^{h_i} forms an unbiased estimator of F_2 with variance inversely proportional to (K − 1). F_2^est further improves accuracy by avoiding the extreme estimates.
[0058] 4. COMBINE(c_1, S_1, ..., c_ℓ, S_ℓ): The linearity of the sketch data structure allows us to linearly combine multiple sketches S = Σ_{k=1}^{ℓ} c_k · S_k by combining every entry in the table:
T_S[i][j] = Σ_{k=1}^{ℓ} c_k · T_{S_k}[i][j].
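The four operations may be sketched over a plain H × K table T and a list of per-row hash functions; the toy modular hash functions below stand in for the 4-universal family assumed in the text, and the shapes are illustrative only.

```python
def update(T, hashes, K, a, u):
    """UPDATE: add u to row i's bucket h_i(a) for every row."""
    for i, h in enumerate(hashes):
        T[i][h(a)] += u

def estimate(T, hashes, K, a):
    """ESTIMATE: v_a^est = median_i (T[i][h_i(a)] - sum(S)/K) / (1 - 1/K)."""
    total = sum(T[0])                              # sum(S), identical in every row
    ests = sorted((T[i][h(a)] - total / K) / (1 - 1 / K)
                  for i, h in enumerate(hashes))
    return ests[len(ests) // 2]

def estimate_f2(T, K):
    """ESTIMATEF2: median_i of K/(K-1)*sum_j T[i][j]^2 - sum(S)^2/(K-1)."""
    total = sum(T[0])
    ests = sorted(K / (K - 1) * sum(v * v for v in row) - total * total / (K - 1)
                  for row in T)
    return ests[len(ests) // 2]

def combine(coeffs_and_tables):
    """COMBINE: T[i][j] = sum_k c_k * T_{S_k}[i][j] (same H, K, and hashes)."""
    H, K = len(coeffs_and_tables[0][1]), len(coeffs_and_tables[0][1][0])
    return [[sum(c * T[i][j] for c, T in coeffs_and_tables)
             for j in range(K)] for i in range(H)]

H, K = 3, 16
hashes = [lambda a, s=s: (a * (2 * s + 1) + s) % K for s in (1, 2, 3)]
T = [[0.0] * K for _ in range(H)]
update(T, hashes, K, 42, 10.0)
update(T, hashes, K, 42, 5.0)
assert abs(estimate(T, hashes, K, 42) - 15.0) < 1e-9
Z = combine([(1.0, T), (-1.0, T)])                 # S - S is the zero sketch
assert estimate_f2(Z, K) == 0.0
```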
[0059] The forecasting module uses the observed sketches in the past intervals S_o(t′) (t′ < t) to compute the forecast sketch S_f(t) and, along with it, the error between the observed and forecast sketches as S_e(t). At least six known models may be used in univariate time series forecasting and change detection for these purposes. The first four models are simple smoothing models, while the other two belong to the family of ARIMA models. All six models can be implemented on top of sketches by exploiting the linearity property of sketches.
[0060] The first four forecasting models are simple smoothing models and are popular due to their simplicity. They are: moving average (MA), exponentially weighted moving average (EWMA), S-shaped moving average (SMA), and non-seasonal Holt-Winters (NSHW).
[0061] The Moving Average (MA) forecasting model assigns equal weights to all past samples, and has a single integer parameter W which specifies the number of past time intervals used for computing the forecast for time t:
S_f(t) = ( Σ_{i=1}^{W} S_o(t − i) ) / W, ∀t > W.
[0062] The S-shaped Moving Average (SMA) forecasting model is a class of weighted moving average models that give higher weights to more recent samples.
S_f(t) = ( Σ_{i=1}^{W} w_i · S_o(t − i) ) / ( Σ_{i=1}^{W} w_i ), ∀t > W.
A subclass is used that gives equal weights to the most recent half of the window and linearly decayed weights for the earlier half.
[0063] In the Exponentially Weighted Moving Average (EWMA) forecasting model, the forecast for time t is the weighted average of the previous forecast and the newly observed sample at time t − 1:
S_f(t) = α · S_o(t − 1) + (1 − α) · S_f(t − 1), t > 2;
S_f(2) = S_o(1).
The parameter α ∈ [0, 1] is called the smoothing constant. It indicates how much weight is given to new samples vs. historic data from previous intervals.
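Because sketches are linear, the EWMA recurrence can be applied entry-by-entry on the H × K tables. A minimal sketch of this, with plain nested lists and an illustrative 1 × 4 table shape:

```python
def combine(c1, T1, c2, T2):
    """Entry-wise linear combination c1*T1 + c2*T2 of two sketch tables."""
    return [[c1 * a + c2 * b for a, b in zip(r1, r2)] for r1, r2 in zip(T1, T2)]

def ewma_forecasts(observed, alpha):
    """observed[t] holds the observed sketch S_o(t+1); returns S_f per interval."""
    forecasts = [None, observed[0]]                # S_f(2) = S_o(1); S_f(1) undefined
    for t in range(2, len(observed)):
        # S_f(t+1) = alpha * S_o(t) + (1 - alpha) * S_f(t)
        forecasts.append(combine(alpha, observed[t - 1], 1 - alpha, forecasts[-1]))
    return forecasts

obs = [[[1.0, 0, 0, 0]], [[2.0, 0, 0, 0]], [[4.0, 0, 0, 0]]]
f = ewma_forecasts(obs, alpha=0.5)
assert f[1] == [[1.0, 0, 0, 0]]                    # S_f(2) = S_o(1)
assert f[2] == [[1.5, 0.0, 0.0, 0.0]]              # 0.5*2 + 0.5*1
```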
[0064] The Non-Seasonal Holt-Winters (NSHW) forecasting model is another commonly used smoothing model that may be applied to detect aberrant behavior.
In the Non-Seasonal Holt-Winters model, there is a separate smoothing component S_s(t) and a trend component S_t(t). There are two parameters α ∈ [0, 1] and β ∈ [0, 1]:
S_s(t) = S_o(2) for t = 2; S_s(t) = α · S_o(t − 1) + (1 − α) · S_f(t − 1) for t > 2.
S_t(t) = S_o(2) − S_o(1) for t = 2; S_t(t) = β · ( S_s(t) − S_s(t − 1) ) + (1 − β) · S_t(t − 1) for t > 2.
The forecast is then S_f(t) = S_s(t) + S_t(t).
[0065] Box-Jenkins methodology, or the AutoRegressive Integrated Moving Average (ARIMA) modeling technique, is a class of linear time series forecasting techniques that capture the linear dependency of the future values on the past values. They are able to model a wide spectrum of time series behavior. As a result, they have been extensively studied and widely used for univariate time series forecasting and change detection.
[0066] An ARIMA model includes three types of parameters: the autoregressive parameter (p), the number of differencing passes (d), and the moving average parameter (q). In the notation introduced by Box and Jenkins, models are summarized as ARIMA (p, d, q). A
model described as (0, 1, 2) means that it contains p = 0 (zero) autoregressive parameters and q = 2 moving average parameters which were computed for the time series after it was differenced once (d = 1). In the discussions herein, only integral values for p, d, and q are used. There has been recent work on models with a fractional d parameter (such as the AutoRegressive Fractional Integrated Moving Average (ARFIMA) model) in the context of action-reaction models, though their application in the networking context has not been fully explored.
[0067] A general ARIMA model of order (p, d, q) can be expressed as:
Z_t − Σ_{j=1}^{p} AR_j · Z_{t−j} = C + e_t − Σ_{i=1}^{q} MA_i · e_{t−i},
where Z_t is obtained by differencing the original time series d times, e_t is the forecast error at time t, and MA_i (i = 1, ..., q) and AR_j (j = 1, ..., p) are the MA and AR
coefficients. In practice, p and q very rarely need to be greater than 2. The number of differences (d) is typically either 0 or 1.
Therefore, when ARIMA models are extended to the sketch context, only the following two types of ARIMA models (the names are based on the number of differences) will be discussed in detail:
ARIMA0: ARIMA models of order (p ≤ 2, d = 0, q ≤ 2);
ARIMA1: ARIMA models of order (p ≤ 2, d = 1, q ≤ 2).
[0068] In ARIMA models, the choice of MA and AR coefficients (MA_i (i = 1, ..., q) and AR_j (j = 1, ..., p)) must ensure that the resulting models are invertible and stationary. As a necessary but insufficient condition, MA_i and AR_j must belong to the range [−2, 2] when p, q ≤ 2.
[0069] After constructing the forecast error sketch S_e(t), the change detection module chooses an alarm threshold T_A based on the estimated second moment of S_e(t):
T_A = T · [ ESTIMATEF2( S_e(t) ) ]^(1/2),
where T is a parameter to be determined by the application.
[0070] Now for any key a, the change detection module can reconstruct its forecast error in S_e(t) using ESTIMATE(S_e(t), a) and raise an alarm whenever the estimated forecast error is above the alarm threshold T_A.
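The thresholding step may be sketched as follows. For brevity a single-row (H = 1) error table is used and buckets are checked directly in place of hashed keys, which is a simplifying assumption; the F_2 and per-key estimators follow the operation definitions given earlier.

```python
import math

K = 8
Te = [0.0, 9.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]      # one row of the error sketch S_e(t)

def estimate_f2(row, k):
    """K/(K-1) * sum of squares minus sum(S)^2/(K-1), for a single row."""
    total = sum(row)
    return k / (k - 1) * sum(v * v for v in row) - total * total / (k - 1)

def estimate(row, k, bucket):
    """Per-bucket estimator (T[h(a)] - sum(S)/K) / (1 - 1/K)."""
    total = sum(row)
    return (row[bucket] - total / k) / (1 - 1 / k)

T = 0.5                                            # application-chosen parameter
threshold = T * math.sqrt(estimate_f2(Te, K))      # T_A = T * sqrt(ESTIMATEF2(S_e))
alarms = [b for b in range(K) if abs(estimate(Te, K, b)) > threshold]
assert alarms == [1]                               # only the large-error bucket fires
```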
[0071] The remaining question is how to obtain the stream of keys for the change detection module. Sketches only support reconstruction of the forecast error associated with a given key; a sketch does not contain information about what keys have appeared in the input stream.
[0072] There are several possible solutions to this problem. With the brute-force solution, one can record all the keys that appeared in recent intervals (e.g., the same interval t over which Se(t) is defined) and replay them after Se(t) has been constructed.
This still requires maintaining per-flow information. Its scalability is limited by the maximum number of keys that appear in the window for key collection. One can avoid keeping per-flow state by using a two-pass algorithm: construct S_e(t) in the first pass and detect changes on the second pass. Since the input stream itself will provide the keys, there is no need for keeping per-flow state. This requires access to the same input stream twice and is thus useful only in the offline context. A third alternative is to use the keys that appear after S_e(t) has been constructed.
This works in both online and offline context.
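The offline two-pass variant may be sketched as follows; the build_error_sketch and estimate_error helpers are hypothetical stand-ins for the sketch construction and ESTIMATE steps, and in this toy instantiation the "error sketch" is simply a dictionary of exact per-key totals.

```python
def detect_two_pass(stream, build_error_sketch, estimate_error, threshold):
    """Pass 1 builds the error summary; pass 2 replays the stream to supply keys."""
    Se = build_error_sketch(stream)                # pass 1 over the interval
    seen, alarms = set(), []
    for key, _ in stream:                          # pass 2: no per-flow state kept
        if key not in seen:
            seen.add(key)
            if abs(estimate_error(Se, key)) > threshold:
                alarms.append(key)
    return alarms

def build(stream):
    d = {}
    for k, u in stream:
        d[k] = d.get(k, 0) + u
    return d

alarms = detect_two_pass([("A", 1), ("B", 50), ("A", 2)],
                         build, lambda s, k: s[k], threshold=10)
assert alarms == ["B"]
```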
[0073] The risk is that those keys that do not appear again after they experience significant change will be missed. This is often acceptable for many applications like DoS attack detection, where the damage can be very limited if a key never appears again.
Note that this does not need to be done for every newly arrived data item. If the risk of missing some very infrequent keys is acceptable, one can sample the (future) input streams and only work on a sub-stream of keys.
[0074] Another possibility is to incorporate combinatorial group testing into sketches.
This allows one to directly infer keys from the (modified) sketch data structure without requiring a separate stream of keys. However, this scheme also increases the update and estimation costs and additional research is required to make it more efficient. In the remaining descriptions, the offline two-pass algorithm is assumed in all experiments.
[0075] The change detection framework includes sketch-related parameters as well as control parameters for various forecasting models. Guidelines and heuristics for properly configuring these parameters will now be described.
[0076] H and K are two sketch-related parameters: the number of hash functions (H), and the size of hash tables (K). Depending on the choice of H and K, k-ary sketches can provide probabilistic guarantees on the estimation accuracy of the forecast errors and their total energy (see the proofs at the end of this discussion for details). Such analytical results may be used to determine choices of H and K that are sufficient to achieve targeted accuracy. As the analytical results apply in a data-independent way, the resulting H and K may be too conservative for the actual dataset. Hence, the analytical results may further be used to derive a data-independent choice of H and K and to treat it as an upper bound. Actual data may then be employed to find the best (data-dependent) H and K values in an actual application.
[0077] In the context of univariate time series forecasting, a commonly used simple heuristic for configuring model parameters is choosing parameters that minimize the total residual energy, i.e., the sum of squares of forecast errors over time. The above heuristic can be extended to the sketch context in order to look for parameters that minimize the total energy in the resulting forecast error sketches over time, Σ_t F_2(S_e(t)), where F_2(S_e(t)) is the second moment for all the forecast errors summarized by sketch S_e(t).
[0078] The true F_2(S_e(t)) cannot be known unless per-flow analysis is performed for each parameter setting, which can be prohibitive. Instead, one can use the estimated second moment F_2^est(S_e(t)), as long as F_2^est(S_e(t)) closely approximates F_2(S_e(t)).
In other words, one must find parameters that minimize the estimated total energy of forecast errors, Σ_t F_2^est(S_e(t)).
[0079] For parameters that are continuous, a multi-pass grid search algorithm may be employed to find a good choice. Consider for example the EWMA model. The first pass finds a parameter α ∈ {0.1, 0.2, ..., 1.0} that minimizes the estimated total energy for the forecast errors.
Let α_0 be the best α. The second pass equally subdivides the range [α_0 − 0.1, α_0 + 0.1] into N = 10 parts and repeats the process. High precision is obtained via multiple passes. For models with integral parameters, such as the moving average model, the parameter may simply be varied to find the best values. Note that grid search is only a heuristic. It does not guarantee that the optimal parameter combination that minimizes the estimated total energy for forecast errors will be found. However, grid search has been found to yield accurate parameters such that the resulting model captures the overall time series behavior.
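The multi-pass grid search may be sketched generically as follows; the quadratic objective is a hypothetical stand-in for the estimated total energy of forecast errors.

```python
def grid_search(objective, lo=0.0, hi=1.0, n=10, passes=2):
    """Scan a coarse grid, then refine around the best point on each pass."""
    best = None
    for _ in range(passes):
        step = (hi - lo) / n
        candidates = [lo + i * step for i in range(n + 1)]
        best = min(candidates, key=objective)
        lo, hi = max(0.0, best - step), min(1.0, best + step)
    return best

# Toy objective with a minimum at alpha = 0.37.
energy = lambda a: (a - 0.37) ** 2
alpha = grid_search(energy)
assert abs(alpha - 0.37) < 0.02
```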
[0080] Large amounts of real Internet traffic data were used to evaluate and validate this approach. A discussion of datasets and the experimental parameter settings used will now be presented in detail.
[0081] Input data flows were chosen to be four hours' worth of NetFlow dumps from ten different routers in the backbone of a tier-1 ISP. Nearly 190 million records were processed, with the smallest router having 861,000 records and the busiest one having over 60 million records in a contiguous four-hour stretch.
[0082] Various values of parameters were used in these experiments to determine acceptable ranges of choices. These values may be tailored in the sketch-based approach based on actual local data available. Accordingly, some of the parameters may have different values when the sketch technique is used in different applications.
[0083] The cost of estimation and updating is dominated by the number of hash tables used for sketches, so small values for H should be chosen. Meanwhile, increasing H
improves accuracy by making the probability of hitting extreme estimates exponentially small (see the theorems appearing at the end of this discussion for more details), suggesting again that it is enough to use a small value for H. H was varied to determine the impact on the accuracy of estimation with respect to the cost. Selections of H (1, 5, 9, and 25) were driven by the fact that optimized median networks can be used to find the medians quickly without making any assumptions about the nature of the input.
[0084] The analytic upper bound needed to provide a specific degree of error threshold using k-ary sketches was selected as the upper limit of K. One can tighten the lower bound of zero by empirically examining values between 0 and the upper bound in log(upper-bound) steps.
Experimental results indicated an upper bound of K = 64,000 and a lower bound of K = 1024.
[0085] Another important parameter is the interval size: a long interval would result in delays since the sketch-based process 200 reports anomalies at the end of each interval and events that occur within a single interval only may be missed. A short interval requires updating the sketch-based forecasting data structures more frequently. Five minutes (300 seconds (s)) was selected as a reasonable tradeoff between the responsiveness and the computational overhead.
Such an interval is used in other SNMP-based network anomaly detection systems. We also use one-minute (60 s) intervals to examine the impact of shorter intervals.
[0086] Each of the six time series forecasting models requires different choices of parameters. For the moving average models (MA and SMA), a single time interval was used for the minimum window size, and ten (or twelve) was used as the maximum window size for an interval size of five (or one) minutes. The window size yielding the minimum total energy of forecast
errors across each of the interval values was then selected as the value for this parameter. For the remaining models, a two-pass grid search algorithm was applied to choose various parameters.
For the EWMA and NSHW models, during each pass the current ranges were partitioned into ten equal intervals. For ARIMA models, however, the number of parameters is much larger and the search space becomes too large if each parameter range is partitioned into ten parts. To limit the search space, the current search range was instead partitioned into seven parts. During grid search, H was fixed at 1 and K at 8000. As will later be demonstrated, with H
= 1 and K = 8000 the estimated total energy of forecast errors closely approximates the true energy obtained using per-flow analysis.
[0087] The results of the evaluation of sketches for change detection will now be presented. The setup for the various experiments is described, and results are presented in detail for three models (EWMA and ARIMA with d = 0 and 1), with occasional results for NSHW. In most cases, the results from the various models are largely similar and are excluded in the interest of brevity.
[0088] The evaluation is divided into three parts. First, the validity of the parameters generated by the grid search is reported. Next, an evaluation of sketches is provided at the flow level, focusing on what the sketch reports as (i) the top-N flows with the maximum absolute forecast errors, and (ii) the flows whose absolute forecast error exceeds a threshold, as well as a comparison of the sketch report with per-flow schemes.
[0089] The experiments now described were concerned with determining appropriate parameter settings for the forecast models, values for H and K, and with evaluating the usefulness of grid search functions. The estimated total energy (instead of the true total energy) was used as the metric for selection of the forecast model parameter setting. For this approach to yield good
performance, the estimated value must closely track the true value. This was the focus of these experiments. The space of (H, K) values and various parameter settings were examined to select suitable choices of H and K that resulted in acceptable performance. Grid search functions were used to select the parameter setting that results in the minimum total energy.
The "goodness" of the parameter selected by grid search was then compared to a random selection of parameters.
[0090] In FIG. 3, the Cumulative Distribution Function (CDF) for the Relative Difference is shown across all models with interval = 300 seconds, H = 1, and K = 1024. A set of experiments (called random) were performed over a collection of 10 router files (consisting of over 189 million flow records). For each forecast model, a number of points were randomly selected in the model parameter space, and for each chosen point and (H, K) value combination, both sketch and per-flow based detection were run on each router trace. The goal here was to examine differences between the different forecast models, and to evaluate parameter value choices for H and K (the hash table and range sizes). This experiment also allowed examination of how sketches and per-flow compare when forecast parameters are not selected carefully. The comparison metric is the "Relative Difference," which is defined as the difference between the total energy (square root of the sum of second moments for each time interval) computed from the sketch-based technique and the total energy obtained using per-flow detection, expressed as a percentage of the total energy obtained using per-flow detection. For a particular forecast model and (H, K) combination, for each router file, we obtain multiple Relative Difference values, one for each selected point in the parameter space for that model.
[0091] In FIGS. 3-5, each curve corresponds to a particular forecast model and (H, K) combination, and represents the empirical CDF of the Relative Difference values aggregated from across all the routers. FIG. 3 shows that even for small H (1) and K (1024), across all the
models, most of the mass is concentrated in the neighborhood of the 0% point on the x-axis, indicating that even for randomly chosen model parameters, the total energy from the sketch-based approach is very close to that for per-flow. Only for the NSHW model do a small percentage of points have sketch values that differ by more than 1.5% from the corresponding per-flow values. The worst case difference is 3.5%.
[0092] Next, the impact of varying the H parameter is examined. FIG. 4 shows, in the graph 400 of the EWMA model and the graph 402 of the ARIMA0 model, that there is no need to increase H beyond 5 to achieve low relative difference.
[0093] The last set of results for the random parameter technique is shown in FIG. 5, and demonstrates that once K = 8192 the relative difference becomes insignificant, obviating the need to increase K further. The grid search technique for identifying parameters uses six models for both 60s and 300s intervals (shown in graphs 500 and 502, respectively), a representative sample of router files (one large, one medium, and one small sized file), and the (H=1, K=8192) combination. For each (model, router, H, K) combination, grid search outputs the parameter value(s) for that model that minimize the total energy in the resulting forecast errors. Using this parameter setting output by grid search, per-flow analysis was run to obtain the corresponding total energy. The per-flow estimate was then compared against the per-flow estimates of the random parameters generated in the previous technique, for the same router file and model. The goal of this experiment was twofold: first, to ensure that grid search results are never worse than any of the per-flow values of the random parameters; second, to show that grid search results can be significantly better than the results in the random case. The experimental results show that in all cases (all models, three router files, both intervals) grid search is never worse than the random parameters. Secondly, in at least 20% of the cases the results with the random parameters are at
least twice (and in many cases much more) as bad as the errors in the grid search case. This justifies the use of grid search to generate the parameters for the remainder of the experiments.
[0094] After validating the set of parameters from the grid search scheme, the sketch scheme's accuracy is compared to per-flow estimates via two techniques: (i) Top-N, and (ii) Thresholding.
[0095] The values of H and K are key to the accuracy of the forecasts as well as efficient operation. Care must be taken to choose the proper range of values on a per-application basis. Experimental results based on large and diverse data sets show that the values chosen (H =
1...25), (K = 1,000...64,000) are indeed suitable for the change detection class of applications.
[0096] Top-N sketch vs. per-flow evaluation was conducted, for a given N, to determine how many of the top-N flows (ranked in order of decreasing magnitude of forecasting errors) detected by the per-flow scheme are also detected as top-ranking by the sketch-based scheme.
Three values of H (5, 9, 25) and K (8000, 32000, 64000), two values of intervals (60s and 300s), and three router data files representing high volume (over 60 Million), medium (12.7 Million), and low (5.3 Million) records, were selected to carry out sketch accuracy evaluation across all models. For the model parameters, the parameter values selected by the grid search process were used. For each time interval, the top-N flows with the maximum absolute forecast errors (recall that a higher absolute forecast error indicates that a flow's volume has a higher deviation from that predicted by the underlying model) are generated for both sketches and per-flow techniques.
For four values of N (50, 100, 500, 1000), we see how many of the top-N flows are in common between the two resulting sets and compute a similarity metric NAB/N, where NAB is the number of common elements in the two sets.
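The similarity metric N_AB/N described above may be sketched as follows; the error dictionaries are illustrative values, not experimental data.

```python
def top_n_similarity(perflow_err, sketch_err, n):
    """Rank flows by absolute forecast error in each scheme; return |A∩B|/N."""
    top_pf = set(sorted(perflow_err, key=lambda k: -abs(perflow_err[k]))[:n])
    top_sk = set(sorted(sketch_err, key=lambda k: -abs(sketch_err[k]))[:n])
    return len(top_pf & top_sk) / n

pf = {"f1": 90.0, "f2": -80.0, "f3": 10.0, "f4": 5.0}
sk = {"f1": 88.0, "f2": -79.0, "f4": 12.0, "f3": 4.0}
assert top_n_similarity(pf, sk, 2) == 1.0          # {f1, f2} in both top-2 sets
assert top_n_similarity(pf, sk, 3) == 2 / 3        # the schemes disagree on f3 vs f4
```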
[0097] While some of the top-N ranked elements from the per-flow technique may not belong to exactly the top-N elements output by the sketch technique, the hope is that these elements will still be high in the sketch-based ranking. Thus, it is possible to increase the accuracy by comparing the top-N per-flow list with additional elements in the sketch-based ranked list. To evaluate this possibility, a second set of comparisons involves comparing the top-N per-flow results against the top-X*N (X = 1, 1.2, 1.5, 2) results from the sketch-based approach.
[0098] Results show how well sketches perform when compared with per-flow by comparing their top-N (N = 50, 100, 500, 1000) flows. The metric is essentially a similarity one:
the number of common elements in the two sets normalized by N. It has been demonstrated that this metric is remarkably consistent across the time intervals, for moderate H
and K. The first hour of the four-hour data sets were used only for model warm-up purposes, leaving 180 and 37 intervals respectively in the 60 second and 300 second time interval cases.
[0099] Graphs 600 and 602 of FIG. 6 show that even for large N (1000), the similarity is around 0.95 for both the 60s and 300s intervals, respectively, for H=5 and K=32K. In the remaining FIGS. 7-9, we show the mean similarity value across the 180 and 37 intervals.
[0100] Graphs 700 and 702 of FIG. 7 use the EWMA model to show average similarity (across the time intervals), where H is fixed at 5 and K varies between 8K and 64K, for both 300s and 60s time intervals, respectively. As can be seen, for K=32000, the similarity is over 0.95 even for large N. For a smaller N (say 50 or 100), the overlap is nearly 100%. Larger values of K are of limited additional benefit. Note that similarity improves (for large N) with the smaller interval size of 60 seconds. This increased accuracy can be attributed to the fact that for a smaller interval, there are potentially fewer flows that have to be summarized in each interval.
[0101] The potential of improving accuracy is explored by performing a top-N
vs. top-X*N (X = 1, 1.2, 1.5, 2) comparison. As can be seen in graphs 800 and 802 of FIG. 8 for the 300 s and 60 s intervals respectively, the similarity increases for K=8000, even for large N. With X=1.5, the similarity has risen significantly even for large N. For the settings examined, a very high accuracy is achieved with X = 1.5, and higher values of X result in only marginal additional accuracy gains. This is desirable because a larger X, while increasing accuracy, also results in more false positives.
[0102] The effect of varying H on the accuracy is next considered. Graph 900 of FIG. 9 shows that with a small K=800, H needs to be at least 9 to get high similarity values, especially for large N. A large H is undesirable as an increase in H directly corresponds to increased computation overhead (the number of update operations per key is proportional to the value of H) and memory (for sketches) overhead.
[0103] However, as graph 902 of FIG. 9 shows, even for very large N, increasing K to 32000 instantly increases similarity to nearly 1, for a small H=5. A larger K
(for sketches) implies a large space overhead. This suggests a space-computation overhead tradeoff. In many applications where the computation overhead is more critical, with K = 32000 or more, one can obtain good accuracy results with a small value for H.
[0104] The results for a different router file, where all files have similar output, are displayed in graph 1000 (showing top-N v. top-N results) and graph 1002 (showing top-N v. top X*N results) of FIG. 10 as the similarity metric for the EWMA model for a medium sized router file.
[0105] Likewise, we show the effect of an ARIMA0 model, i.e., ARIMA with d = 0.
Graphs 1100 and 1102 of FIG. 11 show similarity for large and medium sized router files, respectively, for an interval of 300 seconds.
[0106] Instead of comparing just the top-N values, as in the previous accuracy tests, the flows were limited to those whose absolute forecast error is greater than or equal to a certain fraction of the L2 norm (obtained by the square root of the sum of squares of the forecast errors of all flows in a time interval). This threshold level was varied across 0.01, 0.02, 0.05, 0.07, and 0.1. The results for each of the two time intervals (60s, 300s) were examined for three models (EWMA, NSHW, and ARIMA with d = 0). For each of sketch and per-flow based change detection, the flows were ranked in decreasing order of absolute value of forecast error. The metrics of interest here are the false negative ratio, false positive ratio, and the number of alarms.
For a given value of threshold T, let N_pf(T) and N_sk(T) refer to the number of flows that meet the threshold in per-flow and sketch based detection, respectively. The number of alarms for per-flow and sketches are then N_pf(T) and N_sk(T), respectively. Let N_AB(T) be the count of flows that are common to both the sketch and per-flow lists. The false negative ratio is computed as:
( N_pf(T) − N_AB(T) ) / N_pf(T).
The false positive ratio is:
( N_sk(T) − N_AB(T) ) / N_sk(T).
[0107] At this point, for each metric there is a time series, with one value per time interval. The mean value over the entire time series was then considered.
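The accuracy metrics above can be sketched in Python as follows (a standalone toy, not the patent's implementation; it assumes the reading that both ratios are normalized by the per-flow alarm count, and all names are illustrative):

```python
def accuracy_metrics(per_flow, sketch):
    """Compute false negative/positive ratios from the sets of flows that
    meet threshold T under per-flow and sketch based detection."""
    n_pf, n_sk = len(per_flow), len(sketch)
    n_ab = len(per_flow & sketch)          # flows common to both lists
    fnr = (n_pf - n_ab) / n_pf             # flows the sketch method missed
    fpr = (n_sk - n_ab) / n_pf             # extra alarms raised by the sketch
    return fnr, fpr, n_pf, n_sk

pf = {"f1", "f2", "f3", "f4"}              # per-flow alarms at threshold T
sk = {"f1", "f2", "f3", "f5"}              # sketch alarms at threshold T
print(accuracy_metrics(pf, sk))            # (0.25, 0.25, 4, 4)
```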
[0108] The similarity of sketch and per-flow results when flows are selected by thresholding was also considered. The overall summary here is that with K set to at least 32000, one can provide excellent guarantees for low false negatives and false positives. This is shown in FIGS. 12 and 13, where large sized router data files and the Non-Seasonal Holt-Winters model were used for the 60s (FIG. 12) and 300s (FIG. 13) time intervals. Graph 1200 of FIG. 12 shows that for a very low value of H (=1), the number of alarms is very high.
Simply increasing H to 5 suffices to dramatically reduce the number of alarms. The graph 1200 also demonstrates the significant reduction in the number of alarms that can be realized by increasing the threshold value. Finally, it shows that there is virtually no difference between the per-flow results and the sketch results when H ≥ 5 and K ≥ 8000.
[0109] The graph 1202 of FIG. 12 shows that for K=32000 and beyond, the false negative ratio drops rapidly to less than 2% even for very low threshold values, and is below 1% for a threshold of 0.05. The false positive ratio, as the graph 1204 of FIG. 12 shows, is quite low (below 1%) for K=32000 and even a low threshold of 0.02. The overall results are similar for the 300 second interval, shown in corresponding graphs 1300, 1302 and 1304 of FIG. 13.
[0110] The graphs in FIGS. 14-17 use medium sized router data files for a single interval size (300s) and vary across four models: EWMA, Non-Seasonal Holt-Winters, and ARIMA with d=0 and d=1. Only the false negative and false positive ratios are displayed.
[0111] The graph 1400 of FIG. 14 shows the false negative ratio for the EWMA model to be well below 1% for thresholds larger than 0.01. Likewise, the graph 1402 of FIG. 14 shows the false negative ratio for the Non-Seasonal Holt-Winters model to be slightly better than that of the EWMA model.
[0112] The graphs 1500 and 1502 of FIG. 15 show, for the two different ARIMA models (d = 0 and 1, respectively), that false negatives are low but differ slightly from the EWMA and NSHW models at a low threshold of 0.01.
[0113] Graphs 1600 and 1602 of FIG. 16 show the false positive ratios for the EWMA and NSHW models, respectively, to be well below 1% for thresholds larger than 0.01 for K=32000 or higher.
[0114] Likewise, the graphs 1700 and 1702 of FIG. 17 show low false positive ratios for ARIMA models d=0 and d= 1, respectively.
[0115] There are three components in the sketch-based change detection implementation: 4-universal hash functions, sketches, and forecasting. The 4-universal hash functions can be implemented in about 200 lines of code in the C programming language, and the sketches in around 250 lines. Forecasting code varies with the forecasting model used, but each of the models is less than 800 lines of code.
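A 4-universal hash family can be sketched compactly as a random degree-3 polynomial over a prime field (an illustrative construction, not the patent's tabulation-based C implementation; the prime and function names here are our own choices):

```python
import random

# A degree-3 polynomial with four independent random coefficients over Z_p
# is 4-wise independent; folding the result into [0, K) with a final
# "mod K" adds a small bias that is negligible when K << p.
P = (1 << 61) - 1  # Mersenne prime, comfortably larger than a 32-bit key space

def make_4universal_hash(K, seed=None):
    """Return h: [u] -> [K] drawn from a 4-universal family."""
    rng = random.Random(seed)
    coeffs = [rng.randrange(P) for _ in range(4)]
    def h(x):
        v = 0
        for c in coeffs:            # Horner's rule for the cubic polynomial
            v = (v * x + c) % P
        return v % K
    return h

h0 = make_4universal_hash(K=1024, seed=0)
print([h0(x) for x in range(5)])    # five bucket indices in [0, 1024)
```

Each row of a k-ary sketch would use one such function with an independent seed.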
[0116] Hash computation and sketch UPDATE need to be done on every data item in the input stream. Sketch ESTIMATE, by default, also needs to be done on a per-item basis.
However, if it is acceptable to miss some keys that appear too infrequently (which arguably can only cause limited damage), one can sample the stream of incoming keys and only do ESTIMATE on select sub-streams. Operations like ESTIMATEF2 need to be done only infrequently (once every interval), and their amortized costs are insignificant.
[0117] Running times for performing 10 million hash computations and sketch operations on "computer A" (a 400 Megahertz (MHz) SGI R12k processor running IRIX64 6.5) and "computer B" (a 900 MHz ULTRASPARC-III processor running SOLARIS 5.8) are shown in the table of FIG. 18. Each hash computation produces 8 independent 16-bit hash values and therefore suffices for k-ary sketches with H ≤ 8 and K ≤ 2^16. Both UPDATE and ESTIMATE assume the hash values have already been computed (which needs to be done only once per item).
The sketch parameters used are H = 5 and K = 2^16. As shown therein, the overhead of these operations is not very high.
[0118] The sketch-based change detection process 200 is highly accurate when compared with per-flow time series analysis as described in the foregoing. It offers promise to be a building block for network anomaly detection and traffic measurement.
[0119] In further embodiments, the sketch-based technique may be capable of near real-time change detection by modifying it to obtain the forecast model parameters online. One possible way is to periodically re-compute the forecast model parameters using historical data, to keep up with changes in overall traffic behavior.
[0120] In additional embodiments, boundary effects due to fixed interval sizes may be avoided. Possible solutions include (i) simultaneously running multiple models using different interval sizes and different starting points, and (ii) randomizing the interval size (e.g., using exponentially distributed interval size) and detecting changes of total values normalized by interval size. The linearity of sketches makes both of these solutions possible.
[0121] Accurate detection of significant deviation from normal traffic behavior is a significant goal. However, some traffic anomalies are benign. The problem of reducing false alarms is a major challenge for all change-detection based network anomaly detection systems.
The sketch-based change detection framework introduced in the foregoing has tunable parameters, which can be adjusted to limit the false positives. For instance, the technique can be programmed to only report the top N major changes, or the changes that are above a threshold.
The particular application needs will guide the actual setting of these tunable parameters. The
alarm condition may be triggered and reported by any standard means available to a computing system, such as by dispatching an e-mail, instant message or any other notification (such as by telephone or pager) to appropriate network administration personnel.
[0122] Given the massive volumes of data generated in large networks, sampling is increasingly being used in ISP network measurement infrastructures for reducing the volume of data that has to be collected. The approach introduced herein combines time series analysis with sketches for scalable change detection in massive data sets. Sampling techniques may also be combined with the process 200 for increased scalability.
[0123] Given the wide range of parameters available, it would be useful to have reasonable guidance for selecting proper and justifiable values for them. The full factorial method from the statistical experimental design domain can help in narrowing the number of levels (or "versions") for the various variables. Such techniques can identify which parameters are independent of each other and move towards identifying reasonable overall values based on the similarity metric. For example, H has an overall impact independent of the other parameters.
The tedium of multiple runs can also be reduced, for example, by using a Yates algorithm.
[0124] Proofs alluded to above will now be presented. Notation: for any keys a, b ∈ [u] and hash function h, let a ≅ b denote h(a) = h(b), and a ≇ b denote h(a) ≠ h(b).
[0125] ANALYSIS FOR v_a ESTIMATION (Accuracy of v_a^est): Theorem 1 below states that each v_a^{h_i} (i ∈ [H]) is an unbiased estimator of v_a with variance inversely proportional to (K − 1).

[0126] THEOREM 1.

  E[v_a^{h_i}] = v_a

  Var[v_a^{h_i}] ≤ F2 / (K − 1)

[0127] PROOF. For any h ∈ {h_0, ..., h_{H−1}}, with h = h_i, we have:

  v_a^h = (T_S[i][h(a)] − sum(S)/K) / (1 − 1/K)
        = v_a + (1/(1 − 1/K)) Σ_{b≠a} v_b (1_{b ≅ a} − 1/K)

Define:

  X_{a,b} = 1 if b ≅ a, and X_{a,b} = −1/(K − 1) otherwise

and v_a^h becomes:

  v_a^h = v_a + Σ_{b≠a} v_b X_{a,b}

Since h is 4-universal, for any distinct a, b ∈ [u], we have:

  E[X_{a,b}] = 0,  E[X_{a,b}^2] = 1/(K − 1)

In addition, for any distinct a, b, b' ∈ [u], we have:

  E[X_{a,b} X_{a,b'}] = 0

Now we are ready to prove the theorem:

  E[v_a^h] = v_a + Σ_{b≠a} v_b E[X_{a,b}] = v_a

  Var[v_a^h] = E[(v_a^h − E[v_a^h])^2]
             = E[(Σ_{b≠a} v_b X_{a,b})^2]
             = Σ_{b≠a} v_b^2 E[X_{a,b}^2]
             = (1/(K − 1)) Σ_{b≠a} v_b^2
             ≤ F2 / (K − 1)

[0128] v_a^est further improves accuracy by avoiding the extreme estimates. Theorems 2 and 3 below summarize the accuracy guarantee of v_a^est.
[0129] THEOREM 2. For any a ∈ [u], T ∈ (0, 1), and α ∈ (1, ∞), if |v_a| ≥ α T √F2, then:

  Pr{ |v_a^est| ≤ T √F2 } ≤ [ 4 / ((K − 1)(α − 1)^2 T^2) ]^{H/2}

[0130] PROOF. For any h ∈ {h_0, ..., h_{H−1}}, we have:

  Pr{ |v_a^h| ≤ T √F2 } ≤ Pr{ |v_a^h − v_a| ≥ |v_a| − T √F2 }
                        ≤ Pr{ |v_a^h − v_a| ≥ (α − 1) T √F2 }
                        = Pr{ |v_a^h − E[v_a^h]| ≥ (α − 1) T √F2 }
                        ≤ Var[v_a^h] / ((α − 1) T √F2)^2     (Chebyshev inequality)
                        ≤ (F2/(K − 1)) / ((α − 1)^2 T^2 F2)
                        = 1 / ((K − 1)(α − 1)^2 T^2)

Since v_a^est is obtained by taking the median of H copies of v_a^h, by the Chernoff inequality, we immediately have the result of Theorem 2.
[0131] THEOREM 3. For any a ∈ [u], T ∈ (0, 1), and β ∈ [0, 1), if |v_a| ≤ β T √F2, then:

  Pr{ |v_a^est| ≥ T √F2 } ≤ [ 4 / ((K − 1)(1 − β)^2 T^2) ]^{H/2}

[0132] PROOF. The proof is almost identical to that of Theorem 2 and is omitted here in the interest of brevity.
[0133] As an example, let K = 2^16, α = 2, β = 0.5, T = 1/32, and H = 20. If we raise an alarm whenever |v_a^est| ≥ √F2/32, then according to Theorem 2, the probability that we will miss a |v_a| ≥ √F2/16 is less than 9.1 × 10^−13; according to Theorem 3, the probability that we will falsely raise an alarm for a |v_a| ≤ √F2/64 is less than 9.5 × 10^−7.
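As a quick sanity check, bounds of this form can be evaluated numerically (a standalone illustration with our own variable names, plugging in the example parameters):

```python
# Evaluate the Theorem 2 / Theorem 3 style tail bounds for
# K = 2^16, alpha = 2, beta = 0.5, T = 1/32, H = 20.
K, H = 2 ** 16, 20
alpha, beta, T = 2.0, 0.5, 1.0 / 32

# Probability of missing a key with |v_a| >= alpha*T*sqrt(F2):
miss = (4.0 / ((K - 1) * (alpha - 1) ** 2 * T ** 2)) ** (H / 2)

# Probability of a false alarm for a key with |v_a| <= beta*T*sqrt(F2):
false_alarm = (4.0 / ((K - 1) * (1 - beta) ** 2 * T ** 2)) ** (H / 2)

print(miss, false_alarm)  # roughly 9.1e-13 and 9.5e-07
```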
[0134] ANALYSIS FOR F2 ESTIMATION (Accuracy of F2^est): Theorem 4 below shows that each F2^{h_i} forms an unbiased estimator of F2 with variance inversely proportional to (K − 1). In order to achieve the same variance using the count sketch, one has to either live with lower speed or double the memory requirement.

[0135] THEOREM 4.

  E[F2^{h_i}] = F2

  Var[F2^{h_i}] ≤ (2/(K − 1)) F2^2

F2^est further improves accuracy by avoiding the extreme estimates. Theorem 5 provides the accuracy guarantee of F2^est.

[0136] THEOREM 5. For any λ > 0, we have:

  Pr{ |F2^est − F2| ≥ λ F2 } ≤ [ 8 / ((K − 1) λ^2) ]^{H/2}

[0137] PROOF. By Theorem 4 and the Chebyshev inequality,

  Pr{ |F2^{h_i} − F2| ≥ λ F2 } ≤ Var[F2^{h_i}] / (λ F2)^2
                               ≤ (2 F2^2/(K − 1)) / (λ F2)^2
                               = 2 / ((K − 1) λ^2)

Since F2^est is the median of H copies of F2^{h_i}, by the Chernoff inequality, one immediately obtains the result in Theorem 5.
[0138] As an example, let K = 2^16, λ = 0.05, and H = 20. Theorem 5 states that the probability that the estimate F2^est is 5% off its real value F2 is below 7.7 × 10^−14.
[0139] Although the best methodologies of the invention have been particularly described in the foregoing disclosure, it is to be understood that such descriptions have been provided for purposes of illustration only, and that other variations both in form and in detail can be made thereupon by those skilled in the art without departing from the spirit and scope of the present invention, which is defined first and foremost by the appended claims.
models used with the process of FIG. 2 as implemented on a medium sized router at a 300 second time interval; and
[0032] FIG. 18 depicts a table summarizing the impact of hash computations and sketch functions on computing times in separate computing systems that may be used in the network of FIG. 1.
Detailed Description of the Specific Embodiments:
[0033] In the database research community, computation over massive data streams has been an active research area over the past several years. The emerging field of data stream computation deals with various aspects of computation that can be performed in a space- and time-efficient fashion when each tuple in a data stream can be touched only once (or a small number of times).
[0034] One particularly powerful technique is "sketch," a probabilistic summary technique proposed for analyzing large streaming datasets. Sketches avoid keeping per-flow state by dimensionality reduction, using projections along random vectors. Sketches have some interesting properties that have proven very useful in data stream computation: they are space efficient, provide provable probabilistic reconstruction accuracy guarantees, and are linear (i.e., sketches can be combined in an arithmetical sense).
[0035] A sketch-based change detection process will now be introduced wherein data stream computation techniques are incorporated into change detection, such that detection of significant changes in massive data streams with a large number of network time series is accommodated. With sketch-based change detection, compact summaries of the traffic data are generated using sketches. A variant of the sketch data structure introduced herein, named the "k-ary sketch," uses a constant, small amount of memory, and has constant per-record update and reconstruction cost. A variety of time series forecast models (ARIMA, Holt-Winters, etc.) may be implemented on top of such summaries to detect significant changes by looking for flows with large forecast errors. Being able to compute significant differences in the list of top flows quickly can point towards potential anomalies. Depending on the length of the time period for which forecasts are computed and the duration of significant changes, the process can accurately identify the presence of an anomaly. Note that an anomaly can be a benign surge in traffic (like a flash crowd) or an attack. Heuristics for configuring the model parameters are likewise introduced.
[0036] A large amount of real Internet traffic data was used to demonstrate that the sketch-based change detection method is highly accurate when compared with per-flow analysis, and can be implemented at lower computation and memory costs. Evaluations show that lists of the top flows in a time period may be reconstructed efficiently and accurately, resulting in similar forecast errors when compared with per-flow techniques. This method can thus readily serve as a foundation for network anomaly detection.
[0037] Referring now to FIGS. 1-18, wherein similar components of the present disclosure are referenced in like manner, various embodiments of a method and apparatus for sketch-based detection of changes in network traffic are disclosed.
[0038] FIG. 1 depicts an exemplary computing environment 100 in which the processes of the present disclosure may be performed. As one possible implementation, a server 102, such as an enterprise network server of the type commonly manufactured by IBM or a group of distributed servers, is implemented as a gateway between a local area network (LAN) 106 and a broader computing environment such as the Internet. The process may be implemented directly by the server 102, or may instead be implemented on a separate computing device 104, such as a personal computing terminal, computer workstation or separate dedicated server(s) tasked with monitoring incoming data flow. In such embodiments, the computing device 104 may receive data flow input prior to the server 102, in parallel with the server 102 or after the server 102 as shown.
[0039] The LAN 106 may be a corporate network environment or the like, and may include any small to large sized network operating one or more computing environments and one or more network protocols. In a particular embodiment, server 102, terminal 104 and LAN 106 are operated by an Internet Service Provider or the like, which is a frequent target of DoS attacks and other traffic anomalies. Other implementations of the computing environment 100 may likewise be accommodated.
[0040] The operation of the sketch-based change detection process 200, after installation on the server 102 or the computing device 104, is summarized in the flowchart of FIG. 2. According to the process 200, a k-ary sketch of incoming data flows is generated for selected continuous time intervals (step 202). Time series forecasting is implemented to generate forecast sketches and forecast errors for each k-ary sketch (step 204). If any forecast errors exceed an established threshold value (step 206), the process identifies the anomalous flow and triggers an alarm condition (step 208). Otherwise, the monitoring of incoming data flows continues.
[0041] Particular implementations of steps 202-206 will be described in the following detailed discussions, in which an overview of the available framework of the sketch-based change detection process is presented, followed by detailed discussions of the contemplated software modules for implementing the process 200, experimental setups for testing the process 200, and results of testing the sketch-based change detection process 200 on different large and real datasets.
[0042] Over the past several years, various models have been proposed to describe data streams, including the Time Series Model, the Cash Register Model, and the Turnstile Model.
The most general one, the Turnstile Model, has been assumed in the discussions that follow, though other such models may likewise be accommodated. According to the selected model, let ℵ = (a_1, u_1), (a_2, u_2), ..., be an input stream that arrives sequentially, item by item.
Each item (a_i, u_i) consists of a key a_i ∈ [u] = {0, 1, ..., u − 1} and a (possibly negative) update u_i ∈ ℝ.
Associated with each key a ∈ [u] is a time varying signal A[a]. The arrival of each new data item (a_i, u_i) causes the underlying signal A[a_i] to be updated: A[a_i] += u_i. The goal of change detection is to identify all such signals with significant changes in their behavior.
[0043] The Turnstile model can be instantiated in many ways with specific definitions of the key values and update values. In the context of network anomaly detection, the key can be defined using one or more fields in packet headers of data items from an input data stream, such as source and destination IP addresses, source and destination port numbers, protocol number, and the like. It is also possible to define keys with parameter values like network prefixes or Autonomous System (AS) numbers to achieve higher levels of aggregation.
[0044] The update value can be the size of a packet, the total bytes, or the number of data packets in a flow (when flow-level data is available). To keep the parameter space within a manageable size, however, destination IP version 4 (IPv4) address and bytes are readily used as the key value and the update value, respectively. Alternative choices of key and update values may also be used, some of which may impact the running time of the process 200 on a computing device.
[0045] In an ideal environment with infinite resources, one could perform time series forecasting and change detection on a per-flow basis. Specifically, time may be divided into discrete intervals I_1, I_2, .... For each time interval I_t and each signal A[a] that appears before or during interval I_t, an observed value can be computed as the total update to A[a] during interval I_t:

  o_a(t) = Σ_{i ∈ A_a(t)} u_i,  where the set of indices A_a(t) =def { i | a_i = a ∧ (a_i, u_i) arrives during I_t }

The forecast value f_a(t) can then be determined by applying a forecasting model to observed values in the past intervals. The forecast error e_a(t) = o_a(t) − f_a(t) can then be determined, and an alarm indicated whenever e_a(t) is significant according to certain detection criteria.
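As a point of reference, this idealized per-flow procedure can be sketched as follows (a standalone toy using a simple moving-average forecast; all names, thresholds, and data are illustrative, not the patent's):

```python
from collections import defaultdict

def observed_values(items, interval):
    """Aggregate a stream of (timestamp, key, update) into per-interval
    totals o_a(t): the total update to signal A[a] during interval I_t."""
    obs = defaultdict(lambda: defaultdict(float))   # t -> key -> o_a(t)
    for ts, key, update in items:
        obs[ts // interval][key] += update
    return obs

def per_flow_alarms(obs, window, threshold):
    """Forecast each flow with a moving average over `window` past
    intervals and flag keys whose forecast error exceeds `threshold`."""
    alarms = []
    times = sorted(obs)
    for t in times[window:]:
        for key in obs[t]:
            past = [obs.get(s, {}).get(key, 0.0) for s in range(t - window, t)]
            forecast = sum(past) / window
            error = obs[t][key] - forecast          # e_a(t) = o_a(t) - f_a(t)
            if abs(error) > threshold:
                alarms.append((t, key, error))
    return alarms

stream = [(s, "10.0.0.1", 100.0) for s in range(0, 300)]    # steady flow
stream += [(s, "10.0.0.2", 1.0) for s in range(0, 240)]     # flow that surges
stream += [(s, "10.0.0.2", 50.0) for s in range(240, 300)]
obs = observed_values(stream, interval=60)                  # 60 s intervals
print(per_flow_alarms(obs, window=2, threshold=500.0))      # [(4, '10.0.0.2', 2940.0)]
```

As the surrounding text notes, keeping this per-key state for tens of millions of signals is exactly what the sketch approach avoids.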
[0046] In the real world, however, and as stated previously, per-flow analysis can be prohibitive because the number of signals present in the incoming data flow can be very large.
For instance, if source and destination IPv4 addresses are used as the key, the key space [u] can be as large as 2^64, and the number of signals can easily reach tens of millions given today's traffic volume and link speeds. Hence it can be too slow or too expensive to perform change detection on a per-flow basis.
[0047] The solution presented herein is to create sketches to summarize the input stream and then implement various forecasting models on top of the sketches.
The sketch-based change detection process 200 may be implemented as the following three basic modules: a Sketch module, a Forecasting module, and a Change Detection module.
[0048] The sketch module creates a space- and time-efficient sketch (the observed sketch S_o(t)) to summarize all the observed values o_a(t) (total update to signal A[a]) during each time interval I_t. The forecasting module produces a forecast sketch S_f(t) using some forecasting model based on observed sketches in the past intervals. It then computes the forecast error sketch S_e(t) as the difference between S_o(t) and S_f(t), i.e., S_e(t) = S_o(t) − S_f(t). The linearity of the sketch data structure allows us to implement various forecasting models and compute the forecast error directly at the sketch level. The change detection module uses the error sketch S_e(t) to identify significant (i.e., anomalous) changes. The functions performed by these modules will now be described in turn.
[0049] Let (a_1, u_1), (a_2, u_2), ... be an input stream (for example, the substream of ℵ that is observed during a given time interval). For each key a ∈ [u], let v_a = Σ_{i ∈ A_a} u_i, where the set of indices A_a =def { i | a_i = a }.
[0050] For each interval, the "second moment" (F2) is defined as the sum of squares of the values associated with all the keys, i.e., F2 =def Σ_a v_a^2. We refer to the square root of the second moment (√F2) as the "L2 norm."
[0051] The sketch module uses the sketch data structure to summarize all the v_a in each time interval. Sketch is a probabilistic summary data structure based on random projections. We have designed a variant of the sketch data structure, which we call the "k-ary sketch." The k-ary sketch is similar to the count sketch data structure recently proposed by others. However, the most common operations on the k-ary sketch use simpler operations and are more efficient than the corresponding operations defined on count sketches.
[0052] Just like the count sketch, a k-ary sketch S consists of an H × K table of registers: T_S[i][j] (i ∈ [H], j ∈ [K]). Each row T_S[i][·] (i ∈ [H]) is associated with a hash function h_i from [u] to [K]. The data structure for any k-ary sketch may then be viewed as an array of hash tables.
The hash functions are required to be 4-universal in order to provide probabilistic guarantees of reconstruction accuracy. They may be constructed using a fast tabulation-based method.
Different h_i are constructed using independent seeds, and are therefore independent.
[0053] There are four basic operations defined for k-ary sketches: (1) UPDATE to update a sketch, (2) ESTIMATE to reconstruct v_a for a given key a, (3) ESTIMATEF2 to estimate the second moment F2, and (4) COMBINE to compute the linear combination of multiple sketches. These operations are used in various modules of the change detection process 200: UPDATE in the sketch module to update the observed sketch S_o(t); COMBINE in the forecasting module to implement various forecasting models and to compute the forecast sketch S_f(t) and forecast error sketch S_e(t); ESTIMATE in the change detection module to reconstruct forecast errors from S_e(t); and ESTIMATEF2 in the change detection module to choose the threshold for judging whether forecast errors are significant.
[0054] One formal specification of these operations is as follows:
[0055] 1. UPDATE(S, a, u): For ∀i ∈ [H], T_S[i][h_i(a)] += u.
[0056] 2. ESTIMATE(S, a): Let sum(S) = Σ_{j ∈ [K]} T_S[0][j] be the sum of all values in the sketch, which only needs to be computed once before any ESTIMATE(S, a) is called. Return an estimate of v_a:

  v_a^est = median_{i ∈ [H]} { v_a^{h_i} }

where

  v_a^{h_i} = (T_S[i][h_i(a)] − sum(S)/K) / (1 − 1/K)

As shown in the proofs at the end of this discussion, each v_a^{h_i} (i ∈ [H]) is an unbiased estimator of v_a with variance inversely proportional to (K − 1). v_a^est further improves accuracy by avoiding the extreme estimates.
[0057] 3. ESTIMATEF2(S): Return an estimate of the second moment:

  F2^est = median_{i ∈ [H]} { F2^{h_i} }

where

  F2^{h_i} = (K/(K − 1)) Σ_{j ∈ [K]} (T_S[i][j])^2 − (1/(K − 1)) (sum(S))^2

As shown in the proofs at the end of this discussion, each F2^{h_i} forms an unbiased estimator of F2 with variance inversely proportional to (K − 1). F2^est further improves accuracy by avoiding the extreme estimates.
[0058] 4. COMBINE(c_1, S_1, ..., c_ℓ, S_ℓ): The linearity of the sketch data structure allows us to linearly combine multiple sketches by combining every entry in the table:

  T_S[i][j] = Σ_{k=1}^{ℓ} c_k · T_{S_k}[i][j]

[0059] The forecasting module uses the observed sketches in the past intervals S_o(t0) (t0 < t) to compute the forecast sketch S_f(t), and along with it the error between the observed and forecast sketches as S_e(t). At least six known models may be used in univariate time series forecasting and change detection for these purposes. The first four models are simple smoothing models, while the other two belong to the family of ARIMA models. All six models can be implemented on top of sketches by exploiting the linearity property of sketches.
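The four operations above can be illustrated with a compact, self-contained Python sketch (illustrative only: the patent's implementation is in C, keys are assumed to be integers, and a seeded pseudo-random mapping stands in for a true 4-universal hash family):

```python
import random
import statistics

class KArySketch:
    """Minimal k-ary sketch: an H x K table of counters plus one hash per row."""

    def __init__(self, H=5, K=1024, seed=0):
        self.H, self.K = H, K
        self.table = [[0.0] * K for _ in range(H)]
        self.seeds = [seed + i for i in range(H)]

    def _hash(self, i, key):
        # Stand-in for a 4-universal hash; deterministic per (row seed, key).
        return random.Random(self.seeds[i] * 0x9E3779B1 + key).randrange(self.K)

    def update(self, key, u):                       # UPDATE(S, a, u)
        for i in range(self.H):
            self.table[i][self._hash(i, key)] += u

    def estimate(self, key):                        # ESTIMATE(S, a)
        s = sum(self.table[0])                      # sum(S)
        return statistics.median(
            (self.table[i][self._hash(i, key)] - s / self.K) / (1 - 1 / self.K)
            for i in range(self.H))

    def estimate_f2(self):                          # ESTIMATEF2(S)
        s = sum(self.table[0])
        return statistics.median(
            (self.K / (self.K - 1)) * sum(x * x for x in row)
            - (1 / (self.K - 1)) * s * s
            for row in self.table)

    @staticmethod
    def combine(coeff_sketch_pairs):                # COMBINE(c1, S1, ..., cl, Sl)
        _, s0 = coeff_sketch_pairs[0]
        out = KArySketch(s0.H, s0.K)
        out.seeds = s0.seeds                        # inputs must share hashes
        for i in range(out.H):
            for j in range(out.K):
                out.table[i][j] = sum(c * s.table[i][j]
                                      for c, s in coeff_sketch_pairs)
        return out
```

Updating a single key a thousand times with u = 1 then yields estimate(key) ≈ 1000, and COMBINE with coefficients (1, −1) produces the difference sketch S_e = S_o − S_f used by the forecasting module.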
[0060] The first four such useful forecasting models are simple smoothing models and are popular due to their simplicity. They are: moving average (MA), exponentially weighted moving average (EWMA), S-shaped moving average (SMA), and non-seasonal Holt-Winters (NSHW).
[0061] The Moving Average (MA) forecasting model assigns equal weights to all past samples, and has a single integer parameter W ≥ 1 which specifies the number of past time intervals used for computing the forecast for time t:

  S_f(t) = (Σ_{i=1}^{W} S_o(t − i)) / W,  W ≥ 1

[0062] The S-shaped Moving Average (SMA) forecasting model is a class of weighted moving average models that give higher weights to more recent samples:

  S_f(t) = (Σ_{i=1}^{W} w_i · S_o(t − i)) / (Σ_{i=1}^{W} w_i),  W ≥ 1

A subclass is used that gives equal weights to the most recent half of the window and linearly decayed weights for the earlier half.
[0063] In the Exponentially Weighted Moving Average (EWMA) forecasting model, the forecast for time t is the weighted average of the previous forecast and the newly observed sample at time t − 1:

  S_f(t) = α · S_o(t − 1) + (1 − α) · S_f(t − 1),  t > 2
  S_f(2) = S_o(1)

The parameter α ∈ [0, 1] is called the smoothing constant. It indicates how much weight is given to new samples versus historic data from previous intervals.
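Because sketches are linear, this recurrence can be applied directly to whole sketch tables, entry by entry. A minimal standalone illustration (tables as plain nested lists; names and data are ours):

```python
def combine(c1, t1, c2, t2):
    """Entry-wise linear combination c1*t1 + c2*t2 of two H x K tables."""
    return [[c1 * a + c2 * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(t1, t2)]

def ewma_forecast(observed, alpha):
    """Given observed sketch tables S_o(1), S_o(2), ..., return forecast
    tables S_f(t) for t = 2, 3, ... using
    S_f(t) = alpha*S_o(t-1) + (1-alpha)*S_f(t-1), with S_f(2) = S_o(1)."""
    forecasts = [observed[0]]                       # S_f(2) = S_o(1)
    for prev_obs in observed[1:-1]:
        forecasts.append(combine(alpha, prev_obs, 1 - alpha, forecasts[-1]))
    return forecasts

# Toy 1x4 "sketch" tables for three intervals.
S_o = [[[4.0, 0.0, 2.0, 0.0]],
       [[6.0, 0.0, 2.0, 0.0]],
       [[5.0, 0.0, 9.0, 0.0]]]
S_f = ewma_forecast(S_o, alpha=0.5)
# Forecast error sketch S_e(t) = S_o(t) - S_f(t), again via linearity:
S_e = [combine(1.0, o, -1.0, f) for o, f in zip(S_o[1:], S_f)]
print(S_e[-1])                                      # [[0.0, 0.0, 7.0, 0.0]]
```

The same entry-wise combination implements every smoothing model in this section; only the coefficients change.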
[0064] The Non-Seasonal Holt-Winters (NSHW) forecasting model is another commonly used smoothing model that may be applied to detect aberrant behavior.
In the Non-Seasonal Holt-Winters model, there is a separate smoothing component S_s(t) and a trend component S_t(t), with two parameters α ∈ (0, 1] and β ∈ [0, 1]:

  S_s(t) = α · S_o(t − 1) + (1 − α) · S_f(t − 1),  t > 2;  S_s(2) = S_o(1)
  S_t(t) = β · (S_s(t) − S_s(t − 1)) + (1 − β) · S_t(t − 1),  t > 2;  S_t(2) = S_o(2) − S_o(1)

The forecast is then S_f(t) = S_s(t) + S_t(t).
[0065] Box-Jenkins methodology, or the AutoRegressive Integrated Moving Average (ARIMA) modeling technique, is a class of linear time series forecasting techniques that capture the linear dependency of the future values on the past values. They are able to model a wide spectrum of time series behavior. As a result, they have been extensively studied and widely used for univariate time series forecasting and change detection.
[0066] An ARIMA model includes three types of parameters: the autoregressive parameter (p), the number of differencing passes (d), and the moving average parameter (q). In the notation introduced by Box and Jenkins, models are summarized as ARIMA(p, d, q). A model described as (0, 1, 2) means that it contains p = 0 (zero) autoregressive parameters and q = 2 moving average parameters, which were computed for the time series after it was differenced once (d = 1). In the discussions herein, only integral values for p, d, and q are used. There has been recent work on models with a fractional d parameter (such as the AutoRegressive Fractional Integrated Moving Average (ARFIMA) model) in the context of action-reaction models, though their application in the networking context has not been fully explored.
[0067] A general ARIMA model of order (p, d, q) can be expressed as:

  Z_t − Σ_{j=1}^{p} AR_j · Z_{t−j} = C + e_t − Σ_{i=1}^{q} MA_i · e_{t−i}

where Z_t is obtained by differencing the original time series d times, e_t is the forecast error at time t, and MA_i (i = 1, ..., q) and AR_j (j = 1, ..., p) are the MA and AR coefficients. In practice, p and q very rarely need to be greater than 2. The number of differences (d) is typically either 0 or 1.
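To make the recurrence concrete, here is a standalone sketch of one-step ARMA(p, q) forecasting with d = 0 (the coefficients and series are illustrative, not fitted to any data in this document):

```python
def arma_forecast(z, ar, ma, errors, c=0.0):
    """One-step forecast for Z_t under
    Z_t - sum_j AR_j*Z_{t-j} = C + e_t - sum_i MA_i*e_{t-i}.
    With the unknown innovation e_t taken as 0, the forecast is
    C + sum_j AR_j*Z_{t-j} - sum_i MA_i*e_{t-i}."""
    f = c
    f += sum(a * z[-j] for j, a in enumerate(ar, start=1))
    f -= sum(m * errors[-i] for i, m in enumerate(ma, start=1))
    return f

z = [10.0, 12.0, 11.0]          # past observations ... Z_{t-2}, Z_{t-1}
errors = [0.5, -0.3]            # past one-step forecast errors e_{t-2}, e_{t-1}
# ARMA(1, 1) with illustrative coefficients AR_1 = 0.6, MA_1 = 0.4:
print(arma_forecast(z, ar=[0.6], ma=[0.4], errors=errors))
```

For d = 1, the same recursion would be applied to the first differences Z_t = X_t − X_{t−1}, and the forecast integrated back by adding X_{t−1}.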
Therefore, when ARIMA models are applied to the sketch context, only the following two types of ARIMA models (the names are based on the number of differences) will be discussed in detail:
ARIMA0: ARIMA models of order (p ≤ 2, d = 0, q ≤ 2)
ARIMA1: ARIMA models of order (p ≤ 2, d = 1, q ≤ 2)
[0068] In ARIMA models, the choice of MA and AR coefficients (MA_i (i = 1, ..., q) and AR_j (j = 1, ..., p)) must ensure that the resulting models are invertible and stationary. As a necessary but insufficient condition, MA_i and AR_j must belong to the range [−2, 2] when p, q ≤ 2.
[0069] After constructing the forecast error sketch S_e(t), the change detection module chooses an alarm threshold T_A based on the estimated second moment of S_e(t):

  T_A =def T · √(ESTIMATEF2(S_e(t)))

where T is a parameter to be determined by the application.
[0070] Now, for any key a, the change detection module can reconstruct its forecast error in S_e(t) using ESTIMATE(S_e(t), a) and raise an alarm whenever the estimated forecast error is above the alarm threshold T_A.
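Putting the threshold rule together (standalone toy code; the error "sketch" here is a plain dict of exact per-key forecast errors so that the thresholding logic stands out, and all names are ours):

```python
import math

def detect_changes(forecast_errors, T):
    """Flag keys whose forecast error exceeds T_A = T * sqrt(F2), where F2
    is the second moment (sum of squared errors) of the error sketch."""
    f2 = sum(e * e for e in forecast_errors.values())   # ESTIMATEF2 stand-in
    threshold = T * math.sqrt(f2)                       # alarm threshold T_A
    return sorted(k for k, e in forecast_errors.items()
                  if abs(e) >= threshold)               # ESTIMATE stand-in

errors = {"flowA": 1.0, "flowB": -2.0, "flowC": 30.0}   # one anomalous flow
print(detect_changes(errors, T=0.5))                    # ['flowC']
```

In the actual process 200, F2 and each per-key error would come from ESTIMATEF2 and ESTIMATE on S_e(t) rather than from exact per-flow state.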
[0071] The remaining question is how to obtain the stream of keys for the change detection module. Sketches only support reconstruction of the forecast error associated with a given key; they do not contain information about which keys have appeared in the input stream.
[0072] There are several possible solutions to this problem. With the brute-force solution, one can record all the keys that appeared in recent intervals (e.g., the same interval t over which Se(t) is defined) and replay them after Se(t) has been constructed.
This still requires maintaining per-flow information. Its scalability is limited by the maximum number of keys that appear in the window for key collection. One can avoid keeping per-flow state by using a two-pass algorithm: construct S_e(t) in the first pass and detect changes on the second pass. Since the input stream itself will provide the keys, there is no need for keeping per-flow state. This requires access to the same input stream twice and is thus useful only in the offline context. A third alternative is to use the keys that appear after S_e(t) has been constructed.
This works in both online and offline context.
[0073] The risk is that those keys that do not appear again after they experience significant change will be missed. This is often acceptable for many applications like DoS attack detection, where the damage can be very limited if a key never appears again.
Note that this does not need to be done for every newly arrived data item. If the risk of missing some very infrequent keys is acceptable, one can sample the (future) input streams and only work on a sub-stream of keys.
[0074] Another possibility is to incorporate combinatorial group testing into sketches.
This allows one to directly infer keys from the (modified) sketch data structure without requiring a separate stream of keys. However, this scheme also increases the update and estimation costs and additional research is required to make it more efficient. In the remaining descriptions, the offline two-pass algorithm is assumed in all experiments.
[0075] The change detection framework includes sketch-related parameters as well as control parameters for various forecasting models. Guidelines and heuristics for properly configuring these parameters will now be described.
[0076] H and K are two sketch-related parameters: the number of hash functions (H), and the size of hash tables (K). Depending on the choice of H and K, k-ary sketches can provide probabilistic guarantees on the estimation accuracy of the forecast errors and their total energy (see the proofs at the end of this discussion for details). Such analytical results may be used to determine the choice of H and K that are sufficient to achieve targeted accuracy. As the analytical results apply in a data-independent way, the resulting H and K may be too conservative for the actual dataset. Hence, analytical results may further be used to derive data-independent choice of H and K and treat them as upper bounds. Actual data may then be employed to find the best (data-dependent) H and K values in an actual application.
[0077] In the context of univariate time series forecasting, a commonly used simple heuristic for configuring model parameters is choosing parameters that minimize the total residual energy, i.e., the sum of squares of forecast errors over time. The above heuristic can be extended to the sketch context by looking for parameters that minimize the total energy in the resulting forecast error sketches over time, Σt F2(Se(t)), where F2(Se(t)) is the second moment of all the forecast errors summarized by sketch Se(t).
[0078] The true F2(Se(t)) cannot be known unless per-flow analysis is performed for each parameter setting, which can be prohibitive. Instead, one can use the estimated second moment F2^est(Se(t)), as long as F2^est(Se(t)) closely approximates F2(Se(t)).
In other words, one must find parameters that minimize the estimated total energy of forecast errors, Σt F2^est(Se(t)).
[0079] For parameters that are continuous, a multi-pass grid search algorithm may be employed to find a good choice. Consider for example the EWMA model. The first pass finds a parameter α ∈ {0.1, 0.2, ..., 1.0} that minimizes the estimated total energy for the forecast errors.
Let α0 be the best α. The second pass equally subdivides the range [α0−0.1, α0+0.1] into N = 10 parts and repeats the process. Higher precision is obtained via additional passes. For models with integral parameters, such as the moving average model, the parameter may simply be varied to find the best values. Note that grid search is only a heuristic. It does not guarantee that the optimal parameter combination that minimizes the estimated total energy for forecast errors will be found. However, grid search has been found to yield accurate parameters such that the resulting model captures the overall time series behavior.
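As an illustration of the two-pass procedure, the following sketch applies grid search to the EWMA smoothing parameter α. For simplicity it minimizes the residual energy of a single univariate series rather than of forecast error sketches; the function names and the sample series are illustrative only, not part of the described implementation.

```python
def ewma_total_energy(series, alpha):
    """Sum of squared one-step EWMA forecast errors over the series."""
    energy, forecast = 0.0, float(series[0])
    for obs in series[1:]:
        err = obs - forecast
        energy += err * err
        forecast = alpha * obs + (1 - alpha) * forecast
    return energy

def grid_search_alpha(series, passes=2):
    """Multi-pass grid search: coarse scan over {0.1, ..., 1.0}, then refine."""
    lo, hi, step = 0.1, 1.0, 0.1
    best = None
    for _ in range(passes):
        count = int(round((hi - lo) / step)) + 1
        candidates = [lo + i * step for i in range(count)]
        best = min(candidates, key=lambda a: ewma_total_energy(series, a))
        # Subdivide [best - step, best + step] into 10 parts for the next pass.
        lo, hi = max(0.0, best - step), min(1.0, best + step)
        step = (hi - lo) / 10.0
    return best
```

Each pass narrows the search range around the previous best value, so the returned α is never worse (in estimated energy) than any coarse-grid candidate.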
[0080] Large amounts of real Internet traffic data were used to evaluate and validate this approach. A discussion of datasets and the experimental parameter settings used will now be presented in detail.
[0081] Input data flows were chosen to be four hours' worth of NetFlow dumps from ten different routers in the backbone of a tier-1 ISP. Nearly 190 million records were processed, with the smallest router having 861,000 records and the busiest one having over 60 million records in a contiguous four-hour stretch.
[0082] Various values of parameters were used in these experiments to determine acceptable ranges of choices. These values may be tailored in the sketch-based approach based on actual local data available. Accordingly, some of the parameters may have different values when the sketch technique is used in different applications.
[0083] The cost of estimation and updating is dominated by the number of hash tables used for sketches, so small values for H should be chosen. Meanwhile, increasing H
improves accuracy by making the probability of hitting extreme estimates exponentially small (see Theorems 2, 3, and 5 appearing at the end of this discussion for more details), suggesting that it is enough to use a small value for H. H was varied to determine the impact on the accuracy of estimation with respect to the cost. Selections of H (1, 5, 9, and 25) were driven by the fact that optimized median networks can be used to find the medians quickly without making any assumptions about the nature of the input.
[0084] The analytic upper bound on K needed to provide a specific degree of error threshold using k-ary sketches was selected as the upper end of the range for K. One can tighten the lower bound of zero by empirically examining values between 0 and the upper bound in log(upper bound) steps.
Experimental results indicated an upper bound of K=64,000 and a lower bound of K = 1024.
[0085] Another important parameter is the interval size: a long interval would result in delays, since the sketch-based process 200 reports anomalies at the end of each interval, and events that occur and subside within a single interval may be missed. A short interval requires updating the sketch-based forecasting data structures more frequently. Five minutes (300 seconds (s)) was selected as a reasonable tradeoff between responsiveness and computational overhead.
Such an interval is used in other SNMP-based network anomaly detection systems. One minute (60 s) intervals were also used to examine the impact of shorter intervals.
[0086] Each of the six time series forecasting models requires different choices of parameters. For the moving average models (MA and SMA), a single time interval was used for the minimum window size, and ten (or twelve) was used as the maximum window size for an interval size of five (or one) minutes. The window size yielding the minimum total energy of forecast errors across each of the interval values was then selected as the value for this parameter. For the remaining models, a two-pass grid search algorithm was applied to choose various parameters.
For the EWMA and NSHW models, during each pass the current ranges were partitioned into ten equal intervals. For ARIMA models, however, the number of parameters is much larger and the search space becomes too large if each parameter range is partitioned into ten parts. To limit the search space, the current search range was instead partitioned into seven parts. During grid search, H was fixed at 1 and K at 8000. As will later be demonstrated, with H = 1 and K = 8000 the estimated total energy of forecast errors closely approximates the true energy obtained using per-flow analysis.
[0087] The results of the evaluation of sketches for change detection will now be presented. The setup for the various experiments is described, and results are presented in detail for three models (EWMA, and ARIMA with d=0 and 1), with occasional results for NSHW. In most cases, the results from the various models are largely similar, and duplicative results are excluded in the interest of brevity.
[0088] The evaluation is divided into three parts: First, the validity of the parameters generated by the grid search is reported. Next, an evaluation of sketches is provided at the flow level, focusing on what sketch reports as (i) the top-N flows with the maximum absolute forecast errors, and (ii) the flows whose absolute forecast error exceeds a threshold, as well as a comparison of the sketch report with per-flow schemes.
[0089] The experiments now described were concerned with determining appropriate parameter settings for the forecast models and values for H and K, and with evaluating the usefulness of grid search functions. The estimated total energy (instead of the true total energy) was used as the metric for selection of the forecast model parameter setting. For this approach to yield good performance, the estimated value must closely track the true value. This was the focus of these experiments. The space of (H, K) values and various parameter settings were examined to select suitable choices of H and K that resulted in acceptable performance. Grid search functions were used to select the parameter setting that results in the minimum total energy.
The "goodness" of the parameter selected by grid search was then compared to a random selection of parameters.
[0090] In FIG. 3, the Cumulative Distribution Function (CDF) for Relative Difference is shown across all models with interval=300 seconds, H=1, and K=1024. A set of experiments (called random) were performed over a collection of 10 router files (consisting of over 189 million flow records). For each forecast model, a number of points were randomly selected in the model parameter space, and for each chosen point and (H, K) value combination, both sketch and per-flow based detection were run on each router trace. The goal here was to examine differences between the different forecast models, and to evaluate parameter value choices for H and K (the hash table and range sizes). This experiment also allowed examination of how sketches and per-flow compare when forecast parameters are not selected carefully. The comparison metric is the "Relative Difference," which is defined as the difference between the total energy (square root of the sum of second moments for each time interval) computed from the sketch-based technique and the total energy obtained using per-flow detection, expressed as a percentage of the total energy obtained using per-flow detection. For a particular forecast model and (H, K) combination, for each router file, multiple Relative Difference values are obtained, one for each selected point in the parameter space for that model.
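The Relative Difference metric just defined can be written directly. This is a minimal sketch: the per-interval second moments are assumed to be available from the sketch-based and per-flow detectors, and the function names are illustrative.

```python
import math

def total_energy(second_moments):
    """Total energy: square root of the sum of per-interval second moments."""
    return math.sqrt(sum(second_moments))

def relative_difference(sketch_moments, perflow_moments):
    """Relative Difference (%): sketch total energy vs. per-flow total energy,
    as a percentage of the per-flow total energy."""
    e_sketch = total_energy(sketch_moments)
    e_perflow = total_energy(perflow_moments)
    return 100.0 * (e_sketch - e_perflow) / e_perflow
```

A value near 0% indicates that the sketch-based estimate tracks the per-flow ground truth closely.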
[0091] In FIGS. 3-5, each curve corresponds to a particular forecast model and (H, K) combination, and represents the empirical CDF of the Relative Difference values aggregated from across all the routers. FIG. 3 shows that even for small H (1) and K (1024), across all the models, most of the mass is concentrated in the neighborhood of the 0% point on the x-axis, indicating that even for randomly chosen model parameters, the total energy from the sketch-based approach is very close to that for per-flow. Only for the NSHW model do a small percentage of points have sketch values that differ by more than 1.5% from the corresponding per-flow values. The worst case difference is 3.5%.
[0092] Next, the impact of varying the H parameter is examined. FIG. 4 shows, in the graph 400 of the EWMA model and the graph 402 of the ARIMA0 model, that there is no need to increase H beyond 5 to achieve a low relative difference.
[0093] The last set of results for the random parameter technique is shown in FIG. 5, and demonstrates that once K = 8192 the relative difference becomes insignificant, obviating the need to increase K further. The grid search technique for identifying parameters uses six models for both 60s and 300s intervals (shown in graphs 500 and 502, respectively), a representative sample of router files (one large, one medium, and one small sized file), and the (H=1, K=8192) combination. For each (model, router, H, K) combination, grid search outputs the parameter value(s) for that model that minimize the total energy in the resulting forecast errors. Using the parameter setting output by grid search, per-flow analysis was run to obtain the corresponding total energy. The per-flow estimate was then compared against the per-flow estimates of the random parameters generated in the previous technique, for the same router file and model. The goal of this experiment was twofold: first, to ensure that grid search results are never worse than any of the per-flow values of the random parameters; second, to show that grid search results can be significantly better than the results in the random case. The experimental results show that in all cases (all models, three router files, both intervals) grid search is never worse than the random parameters. Secondly, in at least 20% of the cases the results with the random parameters are at least twice as bad as the errors in the grid search case (and in many cases much worse). This justifies the use of grid search to generate the parameters for the remainder of the experiments.
[0094] After validating the set of parameters from the grid search scheme, the sketch scheme's accuracy is compared to per-flow estimates via two techniques: (i) Top-N, and (ii) Thresholding.
[0095] The values of H and K are key to the accuracy of the forecasts as well as efficient operation. Care must be taken to choose the proper range of values on a per-application basis. Experimental results based on large and diverse data sets show that the values chosen (H = 1, ..., 25 and K = 1,000, ..., 64,000) are indeed suitable for the change detection class of applications.
[0096] Top-N sketch vs. per-flow evaluation was conducted, for a given N, to determine how many of the top-N flows (ranked in order of decreasing magnitude of forecasting errors) detected by the per-flow scheme are also detected as top-ranking by the sketch-based scheme.
Three values of H (5, 9, 25) and K (8000, 32000, 64000), two values of intervals (60s and 300s), and three router data files representing high volume (over 60 million), medium (12.7 million), and low (5.3 million) records, were selected to carry out sketch accuracy evaluation across all models. For the model parameters, the parameter values selected by the grid search process were used. For each time interval, the top-N flows with the maximum absolute forecast errors (recall that a higher absolute forecast error indicates that a flow's volume has a higher deviation from that predicted by the underlying model) are generated for both sketches and per-flow techniques.
For four values of N (50, 100, 500, 1000), the number of top-N flows common to the two resulting sets is determined, and a similarity metric NAB/N is computed, where NAB is the number of common elements in the two sets.
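The similarity metric NAB/N, including the top-X*N variant discussed below, can be sketched as follows. The function and key names are illustrative; the inputs are assumed to be mappings from flow key to forecast error for one time interval.

```python
def topn_similarity(perflow_errors, sketch_errors, n, x=1.0):
    """Fraction of the per-flow top-n flows (ranked by decreasing
    |forecast error|) that also appear in the sketch top-(x*n) list:
    NAB / n, where NAB is the size of the intersection."""
    def top(errors, k):
        ranked = sorted(errors, key=lambda key: abs(errors[key]), reverse=True)
        return set(ranked[:k])

    a = top(perflow_errors, n)          # per-flow top-n
    b = top(sketch_errors, int(x * n))  # sketch top-(x*n)
    return len(a & b) / n
```

With x=1 this is the plain top-N vs. top-N comparison; x>1 checks whether per-flow top-N elements at least appear high in the sketch-based ranking.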
[0097] While some of the top-N ranked elements from the per-flow technique may not belong to exactly the top-N elements output by the sketch technique, the hope is that these elements will still be high in the sketch-based ranking. Thus, it is possible to increase the accuracy by comparing the top-N per-flow list with additional elements in the sketch-based ranked list. To evaluate this possibility, a second set of comparisons involves comparing the top-N per-flow results against the top-X*N (X = 1, 1.2, 1.5, 2) results from the sketch-based approach.
[0098] Results show how well sketches perform when compared with per-flow by comparing their top-N (N=50, 100, 500,1000) flows. The metric is essentially a similarity one:
the number of common elements in the two sets normalized by N. It has been demonstrated that this metric is remarkably consistent across the time intervals, for moderate H
and K. The first hour of the four-hour data sets was used only for model warm-up purposes, leaving 180 and 37 intervals respectively in the 60 second and 300 second time interval cases.
[0099] Graphs 600 and 602 of FIG. 6 show that even for large N (1000), the similarity is around 0.95 for both the 60s and 300s intervals, respectively, for H=5 and K=32K. The remaining FIGS. 7-9 show the mean similarity value across the 180 and 37 intervals.
[0100] Graphs 700 and 702 of FIG. 7 use the EWMA model to show average similarity (across the time intervals), where H is fixed at 5 and K varies between 8K and 64K, for both 300s and 60s time intervals, respectively. As can be seen, for K=32000, the similarity is over 0.95 even for large N. For a smaller N (say 50 or 100), the overlap is nearly 100%. Larger values of K are of limited additional benefit. Note that similarity improves (for large N) with the smaller interval size of 60 seconds. This increased accuracy can be attributed to the fact that for a smaller interval, there are potentially fewer flows that have to be summarized in each interval.
[0101] The potential of improving accuracy is explored by performing a top-N
vs. top-X*N (X = 1, 1.2, 1.5, 2) comparison. As can be seen in graphs 800 and 802 of FIG. 8 for the 300 s and 60 s intervals respectively, the similarity increases for K=8000, even for large N. With X=1.5, the similarity has risen significantly even for large N. For the settings examined, a very high accuracy is achieved with X = 1.5, and higher values of X result in only marginal additional accuracy gains. This is desirable because a larger X, while increasing accuracy, also results in more false positives.
[0102] The effect of varying H on the accuracy is next considered. Graph 900 of FIG. 9 shows that with a small K=8000, H needs to be at least 9 to get high similarity values, especially for large N. A large H is undesirable, as an increase in H directly corresponds to increased computation overhead (the number of update operations per key is proportional to the value of H) and memory (for sketches) overhead.
[0103] However, as graph 902 of FIG. 9 shows, even for very large N, increasing K to 32000 instantly increases similarity to nearly 1, for a small H=5. A larger K
(for sketches) implies a large space overhead. This suggests a space-computation overhead tradeoff. In many applications where the computation overhead is more critical, with K = 32000 or more, one can obtain good accuracy results with a small value for H.
[0104] The results for a different router file are displayed in graph 1000 (showing top-N vs. top-N results) and graph 1002 (showing top-N vs. top-X*N results) of FIG. 10 as the similarity metric for the EWMA model for a medium sized router file; all router files produced similar output.
[0105] Likewise, the effect of an ARIMA0 model, i.e., ARIMA with d = 0, is shown.
Graphs 1100 and 1102 of FIG. 11 show similarity for large and medium sized router files, respectively, for an interval of 300 seconds.
[0106] Instead of comparing just the top-N values, as in the previous accuracy tests, the flows were limited to those whose absolute forecast error is greater than or equal to a certain fraction of the L2 norm (obtained by the square root of the sum of squares of the forecast errors of all flows in a time interval). This threshold level was varied across 0.01, 0.02, 0.05, 0.07, and 0.1. The results for each of the two time intervals (60s, 300s) were examined for three models (EWMA, NSHW, and ARIMA with d = 0). For each of sketch and per-flow based change detection, the flows were ranked in decreasing order of absolute value of forecast error. The metrics of interest here are the false negative ratio, false positive ratio, and the number of alarms.
For a given value of threshold T, let Npf(T) and Nsk(T) refer to the number of flows that meet the threshold in per-flow and sketch-based detection, respectively. The number of alarms for per-flow and sketches are then Npf(T) and Nsk(T), respectively. Let NAB(T) be the count of flows that are common to both the sketch and per-flow lists. The false negative ratio is computed as:
$$\frac{N_{pf}(T) - N_{AB}(T)}{N_{pf}(T)}$$
The false positive ratio is:
$$\frac{N_{sk}(T) - N_{AB}(T)}{N_{sk}(T)}$$
[0107] At this point, for each metric there is a time series, with one value per time interval. The mean value over the entire time series was then considered.
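The thresholding comparison described in this paragraph can be sketched as follows. This is a minimal illustration with hypothetical function names; the inputs are assumed to be mappings from flow key to forecast error for one time interval.

```python
import math

def threshold_alarms(errors, frac):
    """Flows whose |forecast error| is at least frac times the L2 norm
    (square root of the sum of squared forecast errors) of the interval."""
    l2 = math.sqrt(sum(e * e for e in errors.values()))
    return {k for k, e in errors.items() if abs(e) >= frac * l2}

def fn_fp_ratios(perflow_errors, sketch_errors, frac):
    """False negative and false positive ratios for one interval:
    FN = (Npf - NAB)/Npf, FP = (Nsk - NAB)/Nsk."""
    n_pf = threshold_alarms(perflow_errors, frac)  # per-flow alarms
    n_sk = threshold_alarms(sketch_errors, frac)   # sketch alarms
    n_ab = n_pf & n_sk                             # common to both lists
    fn = (len(n_pf) - len(n_ab)) / len(n_pf) if n_pf else 0.0
    fp = (len(n_sk) - len(n_ab)) / len(n_sk) if n_sk else 0.0
    return fn, fp
```

Averaging these per-interval values over all intervals yields the mean ratios reported in the figures.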
[0108] The similarity of sketch and per-flow results when flows are selected by thresholding was also considered. The overall summary here is that with K set to at least 32000, one can provide excellent guarantees for low false negatives and false positives. This is shown in FIGS. 12 and 13, where large sized router data files and the Non-Seasonal Holt-Winters model were used for the 60s (FIG. 12) and 300s (FIG. 13) time intervals. Graph 1200 of FIG. 12 shows that for a very low value of H (=1), the number of alarms is very high.
Simply increasing H to 5 suffices to dramatically reduce the number of alarms. The graph 1200 also demonstrates the significant reduction in the number of alarms that can be realized by increasing the threshold value. Finally, it shows that there is virtually no difference between the per-flow results and the sketch results when H ≥ 5 and K ≥ 8000.
[0109] The graph 1202 of FIG. 12 shows that for K=32000 and beyond, the false negative ratio drops rapidly to less than 2% even for very low threshold values, and is below 1% for a threshold of 0.05. The false positive ratio, as the graph 1204 of FIG. 12 shows, is quite low (below 1%) for K=32000 and even a low threshold of 0.02. The overall results are similar for the 300 second interval, shown in corresponding graphs 1300, 1302 and 1304 of FIG. 13.
[0110] The graphs in FIGS. 14-17 use medium sized router data files, for a single interval size (300s), and vary across four models: EWMA, the Non-Seasonal Holt-Winters model, and ARIMA with d=0 and d=1. Only the false negative and false positive ratios are displayed.
[0111] The graph 1400 of FIG. 14 shows the false negative ratio for the EWMA
model to be well below 1% for thresholds larger than 0.01. Likewise, the graph 1402 of FIG. 14 shows the false negative ratio for the Non-Seasonal Holt-Winters model to be slightly better than that of the EWMA model.
PATENT
[0112] The graphs 1500 and 1502 of FIG. 15 show, for the two different ARIMA
models (d = 0 and 1, respectively), that false negatives are low but differ slightly from the EWMA and NSHW models for a low threshold of 0.01.
[0113] Graphs 1600 and 1602 of FIG. 16 show the false positive ratios for the EWMA
and NSHW models, respectively, to be well below 1% for thresholds larger than 0.01 for K=32000 or higher.
[0114] Likewise, the graphs 1700 and 1702 of FIG. 17 show low false positive ratios for ARIMA models d=0 and d= 1, respectively.
[0115] There are three components in the sketch-based change detection implementation: 4-universal hash functions, sketches, and forecasting. The implementation of 4-universal hash functions can be accomplished in about 200 lines of programming code in the C programming language, while the sketch implementation is around 250 lines. Forecasting code varies with the forecasting models used, but each of the models is less than 800 lines of code.
[0116] Hash computation and sketch UPDATE need to be done on every data item in the input stream. Sketch ESTIMATE, by default, also needs to be done on a per-item basis.
However, if it is acceptable to miss some keys that appear too infrequently (which arguably can only cause limited damage), one can sample the stream of incoming keys and only do ESTIMATE on selected sub-streams. Operations like ESTIMATEF2 only need to be done infrequently (once every interval) and their amortized costs are insignificant.
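The per-item UPDATE and ESTIMATE operations, and the per-interval ESTIMATEF2, can be sketched in Python as follows. This is a minimal illustration rather than the C implementation described above: the salted built-in tuple hash stands in for the 4-universal hash family, and the class name is hypothetical.

```python
import random

class KArySketch:
    """Illustrative k-ary sketch: H hash tables of K counters each.
    H is typically odd (e.g., 1, 5, 9, 25) so the median is well defined."""

    def __init__(self, H=5, K=2**16, seed=0):
        rng = random.Random(seed)
        self.H, self.K = H, K
        self.tables = [[0.0] * K for _ in range(H)]
        # Per-table salts; a stand-in for 4-universal hash functions.
        self.salts = [rng.getrandbits(64) for _ in range(H)]
        self.total = 0.0  # running sum S of all update values

    def _bucket(self, i, key):
        return hash((self.salts[i], key)) % self.K

    def update(self, key, value):
        """UPDATE: add `value` to one bucket in each of the H tables."""
        self.total += value
        for i in range(self.H):
            self.tables[i][self._bucket(i, key)] += value

    def estimate(self, key):
        """ESTIMATE: per-table unbiased estimate (T[h(a)] - S/K)/(1 - 1/K),
        then the median over the H tables."""
        ests = sorted(
            (self.tables[i][self._bucket(i, key)] - self.total / self.K)
            / (1.0 - 1.0 / self.K)
            for i in range(self.H)
        )
        return ests[self.H // 2]

    def estimate_f2(self):
        """ESTIMATEF2: median over tables of
        K/(K-1) * sum(T[j]^2) - S^2/(K-1)."""
        ests = sorted(
            (self.K / (self.K - 1.0)) * sum(c * c for c in t)
            - self.total ** 2 / (self.K - 1.0)
            for t in self.tables
        )
        return ests[self.H // 2]
```

UPDATE and ESTIMATE each touch one counter per table, so their cost is proportional to H, matching the overhead discussion above.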
[0117] Running times for performing 10 million hash computations and sketch operations on "computer A" (a 400 Megahertz (MHz) SGI R12k processor running IRIX64 6.5) and "computer B" (a 900 MHz UltraSPARC III processor running Solaris 5.8) are shown in the table of FIG. 18. Each hash computation produces 8 independent 16-bit hash values and therefore suffices for k-ary sketches with H ≤ 8 and K ≤ 2^16. Both UPDATE and ESTIMATE
assume the hash values have already been computed (which needs to be done only once per item).
The sketch parameters used are H = 5 and K = 2^16. As shown therein, the overhead of these operations is not very high.
[0118] The sketch-based change detection process 200 is highly accurate when compared with per-flow time series analysis as described in the foregoing. It offers promise to be a building block for network anomaly detection and traffic measurement.
[0119] In further embodiments, the sketch-based technique may be capable of near real-time change detection by modifying it to obtain the forecast model parameters online. One possible way is periodically re-computing the forecast model parameters using history data to keep up with changes in overall traffic behavior.
[0120] In additional embodiments, boundary effects due to fixed interval sizes may be avoided. Possible solutions include (i) simultaneously running multiple models using different interval sizes and different starting points, and (ii) randomizing the interval size (e.g., using exponentially distributed interval size) and detecting changes of total values normalized by interval size. The linearity of sketches makes both of these solutions possible.
[0121] Accurate detection of significant deviation from normal traffic behavior is a significant goal. However, some traffic anomalies are benign. The problem of reducing false alarms is a major challenge for all change-detection based network anomaly detection systems.
The sketch-based change detection framework introduced in the foregoing has tunable parameters, which can be adjusted to limit the false positives. For instance, the technique can be programmed to only report the top N major changes, or the changes that are above a threshold.
The particular application's needs will guide the actual setting of these tunable parameters. The
alarm condition may be triggered and reported by any standard means available to a computing system, such as by dispatching an e-mail, instant message or other notification (such as by telephone or pager) to appropriate network administration personnel.
[0122] Given the massive volumes of data generated in large networks, sampling is increasingly being used in ISP network measurement infrastructures for reducing the volume of data that has to be collected. The approach introduced herein combines time series analysis with sketches for scalable change detection in massive data sets. Sampling techniques may also be combined with the process 200 for increased scalability.
[0123] Given the wide range of parameters available, it would be useful to have reasonable guidance for selecting proper and justifiable values for them. The full factorial method from the statistical experimental design domain can help in narrowing the number of levels (or "versions") for the various variables. Such techniques may identify parameters that are independent of each other and move towards identifying reasonable values overall based on the similarity metric. For example, H has an overall impact independent of other parameters.
The tedium related to having multiple runs can also be reduced, for example by using the Yates algorithm.
[0124] Proofs alluded to above will now be presented. Notation: for any hash function h, let a ≡h b denote h(a) = h(b), and a ≢h b denote h(a) ≠ h(b).
[0125] ANALYSIS FOR VA ESTIMATION (Accuracy of va^est): Theorem 1 below states that each va^hi (i ∈ [H]) is an unbiased estimator of va, with variance inversely proportional to (K − 1).
[0126] THEOREM 1.
$$\mathrm{E}\!\left[v_a^{h_i}\right] = v_a, \qquad \mathrm{Var}\!\left[v_a^{h_i}\right] \le \frac{F_2}{K-1}$$
[0127] PROOF. For any h ∈ {h0, ..., hH−1}, we have:
$$v_a^h \;=\; \frac{T_h[h(a)] - S/K}{1 - 1/K} \;=\; v_a + \frac{1}{1-1/K}\left(\sum_{b \ne a,\; b \equiv_h a} v_b \;-\; \frac{1}{K}\sum_{b \ne a} v_b\right)$$
Define:
$$X_{a,b} = \begin{cases} 1 & \text{if } b \equiv_h a \\[2pt] -\dfrac{1}{K-1} & \text{otherwise} \end{cases}$$
and va^h becomes:
$$v_a^h = v_a + \sum_{b \ne a} v_b\, X_{a,b}$$
Since h is 4-universal, for any distinct a, b ∈ [u], we have:
$$\mathrm{E}[X_{a,b}] = 0, \qquad \mathrm{E}\!\left[X_{a,b}^2\right] = \frac{1}{K-1}$$
In addition, for any distinct a, b, b′ ∈ [u], we have:
$$\mathrm{E}[X_{a,b}\, X_{a,b'}] = 0$$
Now we are ready to prove the theorem:
$$\mathrm{E}\!\left[v_a^h\right] = v_a + \sum_{b \ne a} v_b\, \mathrm{E}[X_{a,b}] = v_a$$
$$\mathrm{Var}\!\left[v_a^h\right] = \mathrm{E}\!\left[\Big(\sum_{b \ne a} v_b X_{a,b}\Big)^{\!2}\right] = \sum_{b \ne a} v_b^2\, \mathrm{E}\!\left[X_{a,b}^2\right] = \frac{1}{K-1}\sum_{b \ne a} v_b^2 \;\le\; \frac{F_2}{K-1}$$
bqE a [0128] y~t further improves accuracy by avoiding the extreme estimates.
Theorems 2 est and 3 below summarize the accuracy guarantee of v .
[0129] THEOREM 2. For any a ∈ [u], T ∈ (0,1), and α ∈ (1, ∞), if |va| ≥ αT√F2, then:
$$\Pr\!\left[\left|v_a^{est}\right| \le T\sqrt{F_2}\right] \;\le\; \left[\frac{4}{(K-1)(\alpha-1)^2 T^2}\right]^{H/2}$$
[0130] PROOF. For any h ∈ {h0, ..., hH−1}, by the Chebyshev inequality, we have:
$$\begin{aligned}
\Pr\!\left[\left|v_a^h\right| \le T\sqrt{F_2}\right]
&\le \Pr\!\left[\left|v_a^h - v_a\right| \ge |v_a| - T\sqrt{F_2}\right] \\
&\le \Pr\!\left[\left|v_a^h - v_a\right| \ge (\alpha-1)\,T\sqrt{F_2}\right] \\
&= \Pr\!\left[\left|v_a^h - \mathrm{E}\!\left[v_a^h\right]\right| \ge (\alpha-1)\,T\sqrt{F_2}\right] \\
&\le \frac{\mathrm{Var}\!\left[v_a^h\right]}{\left[(\alpha-1)\,T\sqrt{F_2}\right]^2}
\;\le\; \frac{F_2/(K-1)}{(\alpha-1)^2\, T^2\, F_2}
\;=\; \frac{1}{(K-1)(\alpha-1)^2 T^2}
\end{aligned}$$
Since va^est is obtained by taking the median of H copies of va^h, by the Chernoff inequality, we immediately have the result of Theorem 2.
[0131] THEOREM 3. For any a ∈ [u], T ∈ (0,1), and β ∈ [0,1), if |va| ≤ βT√F2, then:
$$\Pr\!\left[\left|v_a^{est}\right| \ge T\sqrt{F_2}\right] \;\le\; \left[\frac{4}{(K-1)(1-\beta)^2 T^2}\right]^{H/2}$$
[0132] PROOF. The proof is almost identical to that of Theorem 2 and is omitted here in the interest of brevity.
[0133] As an example, let K = 2^16, α = 2, β = 0.5, T = 1/32, and H = 20. If an alarm is raised whenever va^est ≥ √F2/32, then according to Theorem 2, the probability of missing a va ≥ √F2/16 is less than 9.0 × 10^-13; according to Theorem 3, the probability of falsely raising an alarm for a va ≤ √F2/64 is less than 9.5 × 10^-7.
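These example figures can be checked numerically. The sketch below assumes the bounds of Theorems 2 and 3 take the forms [4/((K−1)(α−1)²T²)]^(H/2) and [4/((K−1)(1−β)²T²)]^(H/2) reconstructed above.

```python
# Numerical check of the example bounds for the stated parameters.
K, H = 2**16, 20
alpha, beta, T = 2.0, 0.5, 1.0 / 32

# Theorem 2 bound: probability of missing a large va.
miss_bound = (4 / ((K - 1) * (alpha - 1)**2 * T**2)) ** (H / 2)

# Theorem 3 bound: probability of a false alarm on a small va.
false_alarm_bound = (4 / ((K - 1) * (1 - beta)**2 * T**2)) ** (H / 2)
```

Both bounds land at the orders of magnitude quoted in the example (roughly 10^-13 and 10^-7, respectively).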
[0134] ANALYSIS FOR F2 ESTIMATION (Accuracy of F2^est): Theorem 4 below shows that each F2^hi forms an unbiased estimator of F2 with variance inversely proportional to (K − 1). In order to achieve the same variance using the count sketch, one has to either live with lower speed or double the memory requirement.
[0135] THEOREM 4.
$$\mathrm{E}\!\left[F_2^{h_i}\right] = F_2, \qquad \mathrm{Var}\!\left[F_2^{h_i}\right] \le \frac{2}{K-1}\,F_2^2$$
F2^est further improves accuracy by avoiding the extreme estimates. Theorem 5 provides the accuracy guarantee of F2^est.
[0136] THEOREM 5. For any λ > 0, we have:
$$\Pr\!\left[\left|F_2^{est} - F_2\right| \ge \lambda F_2\right] \;\le\; \left[\frac{8}{(K-1)\lambda^2}\right]^{H/2}$$
[0137] PROOF. By Theorem 4 and the Chebyshev inequality,
$$\Pr\!\left[\left|F_2^{h_i} - F_2\right| \ge \lambda F_2\right]
\;\le\; \frac{\mathrm{Var}\!\left[F_2^{h_i}\right]}{(\lambda F_2)^2}
\;\le\; \frac{2F_2^2/(K-1)}{(\lambda F_2)^2}
\;=\; \frac{2}{(K-1)\lambda^2}$$
Since F2^est is the median of H copies of F2^hi, by the Chernoff inequality, one immediately obtains the result in Theorem 5.
[0138] As an example, let K = 2^16, λ = 0.05, and H = 20. Theorem 5 states that the probability that the estimate F2^est is 5% off from its real value F2 is below 7.7 × 10^-14.
[0139] Although the best methodologies of the invention have been particularly described in the foregoing disclosure, it is to be understood that such descriptions have been provided for purposes of illustration only, and that other variations both in form and in detail can be made thereupon by those skilled in the art without departing from the spirit and scope of the present invention, which is defined first and foremost by the appended claims.
Claims (16)
1. A method for detecting anomalous traffic flow, comprising:
generating a k-ary sketch of observed values of an incoming data flow, the k-ary sketch identifying each data item in the incoming data flow by a key value comprising a destination Internet Protocol (IP) address of the data item and an update value comprising a size of the data item;
generating a forecast sketch using a time series forecast model that includes a first parameter defining a number of hash functions applied to each data item of the incoming data flow, a second parameter defining a hash table size and a third parameter defining a time interval, the time series forecast model resulting in a forecast value;
calculating a forecast error comprising a difference between the forecast value and an observed value of the incoming data flow;
establishing a threshold value based on an estimated second moment of the forecast error, the estimated second moment comprising a sum of squares of each update value associated with each key value; and indicating an alarm condition when the forecast error exceeds the threshold value.
2. The method of claim 1, wherein the k-ary sketch uses a Turnstile model for identifying each data item in the incoming data flow by the key value and the update value.
3. The method of claim 2, the key value further comprising at least one of: a source IP address of the data item, a port number associated with the data item, and a network prefix associated with the data item.
4. The method of claim 2, the update value further comprising a packet size of the data item.
5. The method of claim 2, the update value further comprising a number of packets associated with the data item.
6. The method of claim 1, said generating the forecast sketch further comprising:
calculating the forecast value using an AutoRegressive Integrated Moving Average (ARIMA) time series forecast model wherein the first parameter includes an autoregressive parameter (p), the second parameter includes a number of differencing passes parameter (d) and the third parameter includes a moving average parameter (q).
7. The method of claim 6, wherein p<=2, d=0, q<=2.
8. The method of claim 6, wherein p<=2, d=1, q<=2.
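For concreteness, the ARIMA(p, d, q) forecast of claims 6 through 8 can be written out as a one-step recursion over a differenced series. Fitting the coefficients from data, which a full ARIMA implementation would do, is omitted; the coefficient values in the usage note below are assumptions, not figures from the patent.

```python
def arima_one_step(series, phi, theta, past_errors, d=0):
    """One-step-ahead ARIMA(p, d, q) forecast with given (assumed,
    not fitted) AR coefficients phi, MA coefficients theta, and the
    most recent forecast errors past_errors (newest last)."""
    x = list(series)
    lasts = []                       # last value at each differencing level
    for _ in range(d):               # difference the series d times
        lasts.append(x[-1])
        x = [b - a for a, b in zip(x, x[1:])]
    # AR terms on the differenced series plus MA terms on past errors
    pred = sum(p * x[-(i + 1)] for i, p in enumerate(phi))
    pred += sum(t * past_errors[-(j + 1)] for j, t in enumerate(theta))
    for last in reversed(lasts):     # undo the differencing
        pred += last
    return pred
```

For a linearly growing series, ARIMA(1, 1, 0) with phi = [1.0] extrapolates the trend: `arima_one_step([1, 2, 3, 4, 5], [1.0], [], [], d=1)` returns 6.0.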
9. The method of claim 1, wherein the first parameter comprises a value between 1 and 25 inclusive.
10. The method of claim 1, wherein the second parameter comprises a value between 1000 and 64000 inclusive.
11. The method of claim 1, wherein the third parameter comprises a value between 60 seconds and 300 seconds inclusive.
12. The method of claim 1, said generating the forecast sketch further comprising:
selecting values for the first parameter, the second parameter and the third parameter that minimize a total residual energy of the forecast error.
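Claim 12's selection criterion can be illustrated with a small grid search: run the forecaster over a training trace for each candidate parameter setting and keep the setting with the smallest total residual energy, i.e. the sum of squared forecast errors. Here an Exponentially Weighted Moving Average forecaster (one of the models listed in claim 13) stands in for the model, and the candidate grid is an assumption.

```python
def total_residual_energy(series, alpha):
    # Run an EWMA forecaster over the series and accumulate the total
    # residual energy: the sum of squared forecast errors.
    forecast, energy = series[0], 0.0
    for observed in series[1:]:
        error = observed - forecast
        energy += error * error
        forecast = alpha * observed + (1 - alpha) * forecast
    return energy

def best_alpha(series, candidates=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # Pick the smoothing parameter that minimizes total residual energy.
    return min(candidates, key=lambda a: total_residual_energy(series, a))
```

On a steadily increasing trace, the largest candidate alpha wins, since a fast-adapting EWMA lags a trend the least; on a flat trace every candidate yields zero residual energy.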
13. The method of claim 1, said time series forecast model comprising at least one of: a Moving Average model, an S-Shaped Moving Average model, an Exponentially Weighted Moving Average model and a Non-Seasonal Holt-Winters model.
14. The method of claim 1, the incoming data flow comprising a plurality of incoming data flows.
15. A computer readable medium for implementing a method, performed by a computer, for detecting anomalous traffic flow, comprising:
a sketch module for creating a k-ary sketch of observed values of an incoming data flow, the k-ary sketch identifying each data item in the incoming data flow by a key value comprising an Internet Protocol (IP) address of each data item and an update value comprising a size of the data item;
a forecast module for generating a forecast sketch for the incoming data flow using a time series forecast model that includes a first parameter defining a number of hash functions applied to each data item identified by the sketch module, a second parameter defining a hash table size and a third parameter defining a time interval, the time series forecast model resulting in a forecast value, the forecast module further for calculating a forecast error comprising a difference between the forecast value and an observed value of the incoming data flow determined by the sketch module; and a change detection module for establishing a threshold value based on an estimated second moment of the forecast error received from the forecast module, the estimated second moment comprising a sum of squares of each update value associated with each key value, and for indicating an alarm condition when the forecast error exceeds the threshold value.
16. An apparatus for detecting anomalous traffic flow, comprising:
a processor; and a memory in communication with the processor, the memory for storing a plurality of processing instructions for directing the processor to:
generate a k-ary sketch of observed values of an incoming data flow, the k-ary sketch identifying each data item in the incoming data flow by a key value comprising an Internet Protocol (IP) address of each data item and an update value comprising a size of the data item;
generate a forecast sketch using a time series forecast model that includes a first parameter defining a number of hash functions applied to each data item of the incoming data flow, a second parameter defining a hash table size and a third parameter defining a time interval, the time series forecast model resulting in a forecast value;
calculate a forecast error comprising a difference between the forecast value and an observed value of the incoming data flow;
establish a threshold value based on an estimated second moment of the forecast error, the estimated second moment comprising a sum of squares of each update value associated with each key value; and indicate an alarm condition when the forecast error exceeds the threshold value.
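Taken together, the recited steps (observe, forecast, compute the forecast error, threshold on the estimated second moment, alarm) can be strung into a toy end-to-end loop. For brevity this version keeps exact per-key counts in a dictionary rather than compressing them into a k-ary sketch, uses EWMA as the forecast model, and the alpha and T constants are assumed tuning values, not figures from the patent.

```python
import math

def detect_changes(intervals, alpha=0.5, T=0.5):
    """Illustrative per-interval change detection without the sketch
    compression: one EWMA forecast per key, alarm when a key's forecast
    error exceeds T * sqrt(second moment of the errors)."""
    forecast = {}
    alarms = []
    for t, counts in enumerate(intervals):    # counts: {key: observed value}
        keys = set(counts) | set(forecast)
        errors = {k: counts.get(k, 0.0) - forecast.get(k, 0.0) for k in keys}
        if t > 0:  # no meaningful forecast exists for the first interval
            # here the sum of squares is exact; a sketch would estimate it
            threshold = T * math.sqrt(sum(e * e for e in errors.values()))
            alarms += [(t, k) for k, e in errors.items() if abs(e) > threshold]
        for k in keys:                        # EWMA update for next interval
            forecast[k] = alpha * counts.get(k, 0.0) \
                + (1 - alpha) * forecast.get(k, 0.0)
    return alarms
```

With two intervals in which key "b" jumps from 10 to 100 while "a" stays flat, only "b" trips the alarm in the second interval.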
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US49531403P | 2003-08-14 | 2003-08-14 | |
US60/495,314 | 2003-08-14 | | |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2474827A1 (en) | 2005-02-14 |
CA2474827C (en) | 2008-04-08 |
Family
ID=34193302
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA 2474827 (CA2474827C) | Method and apparatus for sketch-based detection of changes in network traffic | 2003-08-14 | 2004-07-16 |
Country Status (2)
Country | Link |
---|---|
CA (1) | CA2474827C (en) |
IL (1) | IL163107A (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100981128B1 (en) * | 2008-06-11 | 2010-09-10 | POSTECH Academy-Industry Foundation | Estimation method of battery usable time of mobile terminal based on usage pattern |
CN114237934B (en) * | 2021-12-15 | 2025-03-25 | Agricultural Bank of China Co., Ltd. | Abnormal operation determination method, device, equipment and storage medium |
CN118631589B (en) * | 2024-08-09 | 2024-10-11 | Sichuan Yunhu Future Technology Co., Ltd. | A network traffic supervision anomaly identification and early warning method and system |
2004
- 2004-07-16: CA application CA 2474827 issued as patent CA2474827C (not active: Expired - Fee Related)
- 2004-07-20: IL application IL16310704A issued as patent IL163107A (not active: IP Right Cessation)
Also Published As
Publication number | Publication date |
---|---|
CA2474827A1 (en) | 2005-02-14 |
IL163107A (en) | 2009-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7751325B2 (en) | Method and apparatus for sketch-based detection of changes in network traffic | |
Krishnamurthy et al. | Sketch-based change detection: Methods, evaluation, and applications | |
US12019745B2 (en) | Cyberanalysis workflow acceleration | |
Schweller et al. | Reversible sketches for efficient and accurate change detection over network data streams | |
US9245121B1 (en) | Detecting suspicious network behaviors based on domain name service failures | |
US8260914B1 (en) | Detecting DNS fast-flux anomalies | |
US8931088B2 (en) | Adaptive distinct counting for network-traffic monitoring and other applications | |
US7523190B1 (en) | Real-time performance assessment of large area network user experience | |
Xie et al. | A novel model for detecting application layer DDoS attacks | |
US7424489B1 (en) | Methods and apparatus for space efficient adaptive detection of multidimensional hierarchical heavy hitters | |
Xu et al. | ELDA: Towards efficient and lightweight detection of cache pollution attacks in NDN | |
Xu et al. | Towards persistent detection of DDoS attacks in NDN: A sketch-based approach | |
Sarmento et al. | Social network analysis in streaming call graphs | |
Callegari et al. | Detecting anomalies in backbone network traffic: a performance comparison among several change detection methods | |
CA2474827C (en) | Method and apparatus for sketch-based detection of changes in network traffic | |
US20240333755A1 (en) | Reactive domain generation algorithm (dga) detection | |
Day et al. | CONDOR: A hybrid ids to offer improved intrusion detection | |
Park et al. | Supporting interoperability to heterogeneous IDS in secure networking framework | |
Zhang et al. | DDoS Detection Based on Sliding Window Entropy and Decision Tree with Programmable Switch | |
Callegari et al. | Forecasting the distribution of network traffic for anomaly detection | |
Somarriba et al. | URL-Based Dynamic Monitoring of Android Malware using Machine Learning | |
Callegari et al. | A novel bivariate entropy-based network anomaly detection system | |
Kavut et al. | DDoS Detection in Network Traffic Using LightGBM: A Study on the CICIDS2018 Dataset | |
Karabatis et al. | Towards adaptive big data cyber-attack detection via semantic link networks | |
Pain | A Review of the Literature Surrounding Malware Detection in the Domain Name System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
MKLA | Lapsed |