
Benchmark ClickHouse for Error Tracking

Goal

Test the scalability of using ClickHouse to ingest Error Tracking data and gather information on what could break and why.

Leverage the existing work focused on schema development.

Definition of done

  • Baseline: Send 100 req/sec sustained for 5 minutes (average payload 20 KB); see the research issue for baselines and the load-generator sketch after this list
    • Ingested data visible via the project UI
    • 100% data retention
    • 0 connectivity errors
    • <100ms average write request latency
    • <1000ms average read request latency
  • Medium Load: Send 1,000 req/sec sustained for 5 minutes (average payload 20 KB), based on the average volume of the Sentry CSP base project
    • Ingested data visible via the project UI
    • 100% data retention
    • 0 connectivity errors
    • <100ms average write request latency
    • <1000ms average read request latency
  • High Load: Send 10,000 req/sec sustained for 5 minutes (average payload 20 KB)
    • Ingested data visible via the project UI
    • 100% data retention
    • 0 connectivity errors
    • <100ms average write request latency
    • <1000ms average read request latency
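
A small load generator is enough to exercise these tiers. The Go sketch below sends ~20 KB payloads at a fixed rate for a sustained window and reports the error count and average write latency. The ingest URL and the synthetic payload are assumptions, not the actual Error Tracking ingest API, and would need to be swapped for the real ingest path before running.

```go
// Minimal load-generator sketch for one benchmark tier. The ingest URL and
// the synthetic payload below are assumptions, not the real Error Tracking
// ingest API; replace them with the actual endpoint and a representative
// event body before running.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	const (
		targetRate = 100             // req/sec: 100 (baseline), 1000 (medium), 10000 (high)
		duration   = 5 * time.Minute // sustained test window
		payloadLen = 20 * 1024       // ~20 KB synthetic payload
		ingestURL  = "http://error-tracking.local/api/v1/events" // hypothetical endpoint
	)

	payload := bytes.Repeat([]byte("x"), payloadLen)
	client := &http.Client{Timeout: 5 * time.Second}

	var (
		sent, failed   int64
		totalLatencyNs int64
		wg             sync.WaitGroup
	)

	ticker := time.NewTicker(time.Second / targetRate)
	defer ticker.Stop()
	deadline := time.Now().Add(duration)

	for time.Now().Before(deadline) {
		<-ticker.C
		wg.Add(1)
		go func() {
			defer wg.Done()
			start := time.Now()
			resp, err := client.Post(ingestURL, "application/json", bytes.NewReader(payload))
			atomic.AddInt64(&totalLatencyNs, int64(time.Since(start)))
			atomic.AddInt64(&sent, 1)
			if err != nil || resp.StatusCode >= 400 {
				atomic.AddInt64(&failed, 1)
			}
			if resp != nil {
				resp.Body.Close()
			}
		}()
	}
	wg.Wait()

	// Pass criteria per tier: failed == 0 (no connectivity errors) and
	// average write latency < 100ms; retention and read latency are
	// checked separately against ClickHouse and the project UI.
	fmt.Printf("sent=%d failed=%d avg_write_latency=%s\n",
		sent, failed, time.Duration(totalLatencyNs/sent))
}
```

Run once per tier by changing targetRate. Single-process ticker pacing is adequate for the baseline tier; the 1,000 and 10,000 req/sec tiers likely need multiple workers or machines. Data retention would still be verified separately, e.g. by comparing the sent count against a row count in ClickHouse, and read latency by timing the project UI queries.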