Manage alerting costs

This document describes strategies you can use to reduce costs for alerting. For information about the pricing model, see Google Cloud Observability pricing.

Consolidate alerting policies to operate over more resources

Alerting includes a per-condition charge. For this reason, when possible, use one alerting policy to monitor multiple resources instead of creating one alerting policy for each resource.

For example, assume that you have 100 VMs. Each VM generates a time series for the metric type my_metric. Here are two different ways you can monitor the time series:

  • You create one alerting policy that has one condition. The condition monitors my_metric and aggregates data to the VM level. After aggregation, there is one time series for each VM. Therefore, the condition monitors 100 time series.

  • You create 100 alerting policies, each of which contains one condition. Each condition monitors the my_metric time series for one of the VMs and aggregates data to the VM level. Therefore, each condition monitors one time series.

The second option, which creates 100 conditions, is more expensive than the first option, which creates only one condition. Both options monitor 100 time series.
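
For example, a single PromQL condition similar to the following sketch can cover all 100 VMs at once. The metric name my_metric, the instance_name grouping label, and the 0.9 threshold are placeholders for your own metric, label, and threshold.

    # One condition, aggregated to the VM level: after aggregation, the
    # query returns one time series per VM (100 in total).
    avg by (instance_name) (my_metric) > 0.9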

Aggregate to only the level that you need to alert on

There is a cost for each time series that is monitored by an alerting policy. Aggregating to finer levels of granularity results in higher costs than aggregating to coarser levels of granularity. For example, aggregating to the Google Cloud project level is cheaper than aggregating to the cluster level, and aggregating to the cluster level is cheaper than aggregating to the cluster and namespace level.

For example, assume that you have 100 VMs. Each VM generates a time series for the metric type my_metric. Each of your VMs belongs to one of five services. You decide to create one alerting policy that has one condition that monitors my_metric. Here are two different aggregation options:

  • You aggregate data to the service level. After aggregation, there is one time series for each service. Therefore, the condition monitors five time series.

  • You aggregate data to the VM level. After aggregation, there is one time series for each VM. Therefore, the condition monitors 100 time series.

The second option, which monitors 100 time series, is more expensive than the first option, which monitors only five time series.
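
In PromQL, the difference between these options is only the set of grouping labels. The label names service and instance_name in the following sketch are assumptions; use the labels that your metric actually carries.

    # Option 1: aggregate to the service level (5 time series).
    sum by (service) (my_metric) > 100

    # Option 2: aggregate to the VM level (100 time series).
    sum by (instance_name) (my_metric) > 100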

When you configure your alerting policies, choose aggregation levels that work best for your use case. For example, if you care about alerting on CPU utilization, then you might want to aggregate to the VM and CPU level. If you care about alerting on latency by service, then you might want to aggregate to the service level.

Don't alert on raw, unaggregated data

Monitoring uses a dimensional metrics system, where any metric has total cardinality equal to the number of resources monitored multiplied by the number of label combinations on that metric. For example, if you have 100 VMs emitting a metric, and that metric has two labels with 10 values each, then your total cardinality is 100 * 10 * 10 = 10,000.

As a result of how cardinality scales, alerting on raw data can be extremely expensive. In the previous example, you have 10,000 time series returned for each execution period. However, if you aggregate to the VM level, then you have only 100 time series returned per execution period, regardless of the label cardinality of the underlying data.

Alerting on raw data also puts you at risk of an increased number of time series when your metrics receive new label values. In the previous example, if a user adds an 11th value to one of the labels, then your total cardinality increases to 100 * 11 * 10 = 11,000 time series. In this case, the number of time series returned increases by 1,000 each execution period even though your alerting policy is unchanged. If you instead aggregate to the VM level, then, despite the increased underlying cardinality, you still have only 100 time series returned.
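
As a sketch, the hypothetical labels zone and disk_type stand in for the two 10-value labels in this example. Because the aggregated query keeps only the instance_name label, new values for the other labels don't change the number of time series returned.

    # Raw data: one time series per combination of VM, zone, and disk_type
    # (10,000 time series in the example above).
    my_metric > 0.9

    # Aggregated to the VM level: 100 time series, no matter how many
    # zone or disk_type values appear later.
    max by (instance_name) (my_metric) > 0.9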

Filter out unnecessary responses

Configure your conditions to evaluate only data that's necessary for your alerting needs. If you wouldn't take action to fix something, then exclude it from your alerting policies. For example, you probably don't need to alert on an intern's development VM.

To reduce unnecessary incidents and costs, you can filter out time series that aren't important. You can use Google Cloud metadata labels to tag assets with categories and then filter out the unneeded metadata categories.
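
For example, if your time series carry an environment label, you can exclude development VMs with a label matcher. The label name and value in the following sketch are assumptions; you might instead filter on a metadata label that you apply to your assets.

    # Evaluate only non-development VMs, so that the excluded time series
    # never create incidents or count toward time series returned.
    sum by (instance_name) (my_metric{environment!="development"}) > 0.9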

Use top-stream operators to reduce the number of time series returned

If your condition uses a PromQL query, then you can use a top-streams operator, such as topk, to select only the time series with the highest values.

For example, a topk(5, metric) clause in a PromQL query limits the number of time series returned to five in each execution period.
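
As a sketch, the following condition query evaluates only the 50 highest-valued time series for my_metric; the value 50 and the container grouping label are illustrative choices.

    # Keep only the 50 containers with the highest values of my_metric
    # in each execution period.
    topk(50, sum by (container) (my_metric)) > 0.9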

Limiting evaluation to the top N time series might result in missing data and faulty incidents, such as the following:

  • If more than N time series violate your threshold, then you will miss data outside the top N time series.
  • If a violating time series falls outside the top N time series, then its incident might close automatically even though the excluded time series still violates the threshold.
  • Your condition queries might not show you important context such as baseline time series that are functioning as intended.

To mitigate these risks, choose large values for N, and use top-streams operators only in alerting policies that evaluate many time series, such as policies that monitor individual Kubernetes containers.

Increase the length of the execution period (PromQL only)

If your condition uses a PromQL query, then you can modify the length of your execution period by setting the evaluationInterval field in the condition.
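
As a sketch, the relevant portion of a condition that uses the Cloud Monitoring API might look like the following. The query and duration values are placeholders; only the evaluationInterval field is the subject of this section, and a longer value means fewer executions.

    {
      "conditionPrometheusQueryLanguage": {
        "query": "sum by (instance_name) (my_metric) > 0.9",
        "duration": "300s",
        "evaluationInterval": "300s"
      }
    }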

Longer evaluation intervals result in fewer time series returned per month; for example, a condition query with a 15-second interval runs twice as often as a query with a 30-second interval, and a query with a 1-minute interval runs half as often as a query with a 30-second interval.