diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md
index 288640e8da06371ab30d2631f0b0577fdc025fb7..a811ec83b712f83f49bffb9ba5c0486e50a82877 100644
--- a/doc/administration/reference_architectures/10k_users.md
+++ b/doc/administration/reference_architectures/10k_users.md
@@ -2182,7 +2182,8 @@ There are two ways of specifying object storage configuration in GitLab:
 Starting with GitLab 13.2, consolidated object storage configuration is available. It simplifies your GitLab configuration since the connection details are shared across object types. Refer to [Consolidated object storage configuration](../object_storage.md#consolidated-object-storage-configuration) guide for instructions on how to set it up.

 GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily on disk in `/var/opt/gitlab/gitlab-ci/builds` by default, even when using consolidated object storage. With default configuration, this directory needs to be shared via NFS on any GitLab Rails and Sidekiq nodes.
-In GitLab 13.6 and later, it's recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs.
+
+In GitLab 13.6 and later, it's also recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs. This is required when no NFS node has been deployed.

 For configuring object storage in GitLab 13.1 and earlier, or for storage types not supported by
 consolidated configuration form, refer to the following guides based
@@ -2282,11 +2283,11 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
-| Webservice | 4 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 127.5 vCPU, 118 GB memory |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
+| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+|-----------------------------------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
+| Webservice | 4 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | 127.5 vCPU, 118 GB memory |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 15.5 vCPU, 50 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 7.75 vCPU, 25 GB memory |

 - For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results) [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
@@ -2296,18 +2297,18 @@ future with further specific cloud provider details.
 Next are the backend components that run on static compute VMs via Omnibus (or External PaaS services where applicable):

-| Service | Nodes | Configuration | GCP |
-|-----------------------------------------------------|-------|-------------------------|------------------|
-| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL<sup>1</sup> | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` |
-| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Redis/Sentinel - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Redis/Sentinel - Persistent<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Gitaly<sup>5</sup> | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` |
-| Praefect<sup>5</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage<sup>4</sup> | n/a | n/a | n/a |
+| Service | Nodes | Configuration | GCP | AWS |
+|------------------------------------------|-------|-----------------------|------------------|--------------|
+| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| PostgreSQL<sup>1</sup> | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` | `m5.2xlarge` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Redis/Sentinel - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
+| Redis/Sentinel - Persistent<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
+| Gitaly<sup>5</sup> | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` | `m5.4xlarge` |
+| Praefect<sup>5</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a |
diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md
index fbd7915b8adb34d7b340e9f0e6bee268da15b582..ba5a412773db9fd6401e2ebff8ab8881e9fb05ed 100644
--- a/doc/administration/reference_architectures/25k_users.md
+++ b/doc/administration/reference_architectures/25k_users.md
@@ -2186,7 +2186,8 @@ There are two ways of specifying object storage configuration in GitLab:
 Starting with GitLab 13.2, consolidated object storage configuration is available. It simplifies your GitLab configuration since the connection details are shared across object types. Refer to [Consolidated object storage configuration](../object_storage.md#consolidated-object-storage-configuration) guide for instructions on how to set it up.

 GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily on disk in `/var/opt/gitlab/gitlab-ci/builds` by default, even when using consolidated object storage. With default configuration, this directory needs to be shared via NFS on any GitLab Rails and Sidekiq nodes.
-In GitLab 13.6 and later, it's recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs.
+
+In GitLab 13.6 and later, it's also recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs. This is required when no NFS node has been deployed.

 For configuring object storage in GitLab 13.1 and earlier, or for storage types not supported by
 consolidated configuration form, refer to the following guides based
@@ -2280,11 +2281,11 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
-| Webservice | 7 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 223 vCPU, 206.5 GB memory |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
+| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+|-----------------------------------------------|-------|-------------------------|-----------------|-----------------|---------------------------------|
+| Webservice | 7 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | 223 vCPU, 206.5 GB memory |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 15.5 vCPU, 50 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 7.75 vCPU, 25 GB memory |

 - For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results) [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
@@ -2294,18 +2295,18 @@ future with further specific cloud provider details.
 Next are the backend components that run on static compute VMs via Omnibus (or External PaaS services where applicable):

-| Service | Nodes | Configuration | GCP |
-|-----------------------------------------------------|-------|-------------------------|------------------|
-| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL<sup>1</sup> | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` |
-| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node<sup>3</sup> | 1 | 4 vCPU, 3.6GB memory | `n1-highcpu-4` |
-| Redis/Sentinel - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Redis/Sentinel - Persistent<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Gitaly<sup>5</sup> | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` |
-| Praefect<sup>5</sup> | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` |
-| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage<sup>4</sup> | n/a | n/a | n/a |
+| Service | Nodes | Configuration | GCP | AWS |
+|------------------------------------------|-------|------------------------|------------------|--------------|
+| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| PostgreSQL<sup>1</sup> | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` | `m5.4xlarge` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Internal load balancing node<sup>3</sup> | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
+| Redis/Sentinel - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
+| Redis/Sentinel - Persistent<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
+| Gitaly<sup>5</sup> | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` | `m5.8xlarge` |
+| Praefect<sup>5</sup> | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a |
diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md
index 355316033c5f7cfa35b9b52f02c4e3f9aae6c338..b88a8839345b138dbbf8a7c64c3450ee9f2d57ef 100644
--- a/doc/administration/reference_architectures/2k_users.md
+++ b/doc/administration/reference_architectures/2k_users.md
@@ -905,7 +905,8 @@ There are two ways of specifying object storage configuration in GitLab:
 Starting with GitLab 13.2, consolidated object storage configuration is available. It simplifies your GitLab configuration since the connection details are shared across object types. Refer to [Consolidated object storage configuration](../object_storage.md#consolidated-object-storage-configuration) guide for instructions on how to set it up.

 GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily on disk in `/var/opt/gitlab/gitlab-ci/builds` by default, even when using consolidated object storage. With default configuration, this directory needs to be shared via NFS on any GitLab Rails and Sidekiq nodes.
-In GitLab 13.6 and later, it's recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs.
+
+In GitLab 13.6 and later, it's also recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs. This is required when no NFS node has been deployed.

 For configuring object storage in GitLab 13.1 and earlier, or for storage types not supported by
 consolidated configuration form, refer to the following guides based
@@ -1002,11 +1003,11 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------|------------------------|-----------------|-----------------------------|
-| Webservice | 3 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | 23.7 vCPU, 16.9 GB memory |
-| Sidekiq | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | 1.9 vCPU, 5.5 GB memory |
+| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+|-----------------------------------------------|-------|------------------------|-----------------|--------------|---------------------------------|
+| Webservice | 3 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` | 23.7 vCPU, 16.9 GB memory |
+| Sidekiq | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | 3.9 vCPU, 11.8 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `m5.large` | 1.9 vCPU, 5.5 GB memory |

 - For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results) [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
@@ -1016,12 +1017,12 @@ future with further specific cloud provider details.
 Next are the backend components that run on static compute VMs via Omnibus (or External PaaS services where applicable):

-| Service | Nodes | Configuration | GCP |
-|--------------------------------------------|-------|-------------------------|------------------|
-| PostgreSQL<sup>1</sup> | 1 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
-| Redis<sup>2</sup> | 1 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
-| Gitaly | 1 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Object storage<sup>3</sup> | n/a | n/a | n/a |
+| Service | Nodes | Configuration | GCP | AWS |
+|----------------------------|-------|------------------------|-----------------|-------------|
+| PostgreSQL<sup>1</sup> | 1 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` |
+| Redis<sup>2</sup> | 1 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `m5.large` |
+| Gitaly | 1 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
+| Object storage<sup>3</sup> | n/a | n/a | n/a | n/a |
diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md
index b6da6daaaaa8d07c478def789e145e41901c3190..da1efb129dd4c0630626b125b8ea4e0971aff999 100644
--- a/doc/administration/reference_architectures/3k_users.md
+++ b/doc/administration/reference_architectures/3k_users.md
@@ -2121,7 +2121,8 @@ There are two ways of specifying object storage configuration in GitLab:
 Starting with GitLab 13.2, consolidated object storage configuration is available. It simplifies your GitLab configuration since the connection details are shared across object types.
 Refer to [Consolidated object storage configuration](../object_storage.md#consolidated-object-storage-configuration) guide for instructions on how to set it up.

 GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily on disk in `/var/opt/gitlab/gitlab-ci/builds` by default, even when using consolidated object storage. With default configuration, this directory needs to be shared via NFS on any GitLab Rails and Sidekiq nodes.
-In GitLab 13.6 and later, it's recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs.
+
+In GitLab 13.6 and later, it's also recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs. This is required when no NFS node has been deployed.

 For configuring object storage in GitLab 13.1 and earlier, or for storage types not supported by
 consolidated configuration form, refer to the following guides based
@@ -2240,11 +2241,11 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
-| Webservice | 2 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 31.8 vCPU, 24.8 GB memory |
-| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
+| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+|-----------------------------------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
+| Webservice | 2 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | `c5.4xlarge` | 31.8 vCPU, 24.8 GB memory |
+| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 11.8 vCPU, 38.9 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | 3.9 vCPU, 11.8 GB memory |

 - For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results) [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
@@ -2254,17 +2255,17 @@ future with further specific cloud provider details.
 Next are the backend components that run on static compute VMs via Omnibus (or External PaaS services where applicable):

-| Service | Nodes | Configuration | GCP |
-|--------------------------------------------|-------|-------------------------|------------------|
-| Redis<sup>2</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
-| Consul<sup>1</sup> + Sentinel<sup>2</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL<sup>1</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
-| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Gitaly<sup>5</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Praefect<sup>5</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage<sup>4</sup> | n/a | n/a | n/a |
+| Service | Nodes | Configuration | GCP | AWS |
+|-------------------------------------------|-------|-----------------------|-----------------|-------------|
+| Redis<sup>2</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` |
+| Consul<sup>1</sup> + Sentinel<sup>2</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| PostgreSQL<sup>1</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Gitaly<sup>5</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
+| Praefect<sup>5</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a |
diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md
index 4121e3ef6f308e7b41b7d7ba71fd0904fe5876bc..1120dd58895a54ae7abeaf8c586c474e07d1d7a2 100644
--- a/doc/administration/reference_architectures/50k_users.md
+++ b/doc/administration/reference_architectures/50k_users.md
@@ -2202,7 +2202,8 @@ There are two ways of specifying object storage configuration in GitLab:
 Starting with GitLab 13.2, consolidated object storage configuration is available. It simplifies your GitLab configuration since the connection details are shared across object types. Refer to [Consolidated object storage configuration](../object_storage.md#consolidated-object-storage-configuration) guide for instructions on how to set it up.

 GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily on disk in `/var/opt/gitlab/gitlab-ci/builds` by default, even when using consolidated object storage. With default configuration, this directory needs to be shared via NFS on any GitLab Rails and Sidekiq nodes.
-In GitLab 13.6 and later, it's recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs.
+
+In GitLab 13.6 and later, it's also recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs. This is required when no NFS node has been deployed.

 For configuring object storage in GitLab 13.1 and earlier, or for storage types not supported by
 consolidated configuration form, refer to the following guides based
@@ -2296,12 +2297,12 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
 and CPU requirements should translate to most other providers.
 We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
-| Webservice | 16 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 510 vCPU, 472 GB memory |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
-
+| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+|-----------------------------------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
+| Webservice | 16 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `m5.8xlarge` | 510 vCPU, 472 GB memory |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 15.5 vCPU, 50 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 7.75 vCPU, 25 GB memory |
+
 - For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results) [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
 - Nodes configuration is shown as it is forced to ensure pod vcpu / memory ratios and avoid scaling during **performance testing**.
@@ -2310,18 +2311,18 @@ future with further specific cloud provider details.
 Next are the backend components that run on static compute VMs via Omnibus (or External PaaS services where applicable):

-| Service | Nodes | Configuration | GCP |
-|-----------------------------------------------------|-------|-------------------------|------------------|
-| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL<sup>1</sup> | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` |
-| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node<sup>3</sup> | 1 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` |
-| Redis/Sentinel - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Redis/Sentinel - Persistent<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Gitaly<sup>5</sup> | 3 | 64 vCPU, 240 GB memory | `n1-standard-64` |
-| Praefect<sup>5</sup> | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` |
-| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage<sup>4</sup> | n/a | n/a | n/a |
+| Service | Nodes | Configuration | GCP | AWS |
+|------------------------------------------|-------|------------------------|------------------|---------------|
+| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| PostgreSQL<sup>1</sup> | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` | `m5.8xlarge` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Internal load balancing node<sup>3</sup> | 1 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` |
+| Redis/Sentinel - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
+| Redis/Sentinel - Persistent<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
+| Gitaly<sup>5</sup> | 3 | 64 vCPU, 240 GB memory | `n1-standard-64` | `m5.16xlarge` |
+| Praefect<sup>5</sup> | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a |
diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md
index da4b40b35a5f8235d1796f8c27f8ebb7c18fe155..bbcac4d8ef59f3f62a73a85732a2e329a642ed38 100644
--- a/doc/administration/reference_architectures/5k_users.md
+++ b/doc/administration/reference_architectures/5k_users.md
@@ -2121,7 +2121,8 @@ There are two ways of specifying object storage configuration in GitLab:
 Starting with GitLab 13.2, consolidated object storage configuration is available. It simplifies your GitLab configuration since the connection details are shared across object types. Refer to [Consolidated object storage configuration](../object_storage.md#consolidated-object-storage-configuration) guide for instructions on how to set it up.

 GitLab Runner returns job logs in chunks which Omnibus GitLab caches temporarily on disk in `/var/opt/gitlab/gitlab-ci/builds` by default, even when using consolidated object storage. With default configuration, this directory needs to be shared via NFS on any GitLab Rails and Sidekiq nodes.
-In GitLab 13.6 and later, it's recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs.
+
+In GitLab 13.6 and later, it's also recommended to switch to [Incremental logging](../job_logs.md#incremental-logging-architecture), which uses Redis instead of disk space for temporary caching of job logs. This is required when no NFS node has been deployed.

 For configuring object storage in GitLab 13.1 and earlier, or for storage types not supported by
 consolidated configuration form, refer to the following guides based
@@ -2215,11 +2216,11 @@ use Google Cloud's Kubernetes Engine (GKE) and associated machine types, but the
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|-------|-------------------------|------------------|-----------------------------|
-| Webservice | 5 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 79.5 vCPU, 62 GB memory |
-| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
+| Service | Nodes | Configuration | GCP | AWS | Min Allocatable CPUs and Memory |
+|-----------------------------------------------|-------|-------------------------|-----------------|--------------|---------------------------------|
+| Webservice | 5 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | `c5.4xlarge` | 79.5 vCPU, 62 GB memory |
+| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | 11.8 vCPU, 38.9 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | 3.9 vCPU, 11.8 GB memory |

 - For this setup, we **recommend** and regularly [test](index.md#validation-and-test-results) [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) and [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). Other Kubernetes services may also work, but your mileage may vary.
@@ -2229,17 +2230,17 @@ future with further specific cloud provider details.
 Next are the backend components that run on static compute VMs via Omnibus (or External PaaS services where applicable):

-| Service | Nodes | Configuration | GCP |
-|--------------------------------------------|-------|-------------------------|------------------|
-| Redis<sup>2</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
-| Consul<sup>1</sup> + Sentinel<sup>2</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL<sup>1</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Gitaly<sup>5</sup> | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` |
-| Praefect<sup>5</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage<sup>4</sup> | n/a | n/a | n/a |
+| Service | Nodes | Configuration | GCP | AWS |
+|-------------------------------------------|-------|-----------------------|-----------------|--------------|
+| Redis<sup>2</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` |
+| Consul<sup>1</sup> + Sentinel<sup>2</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| PostgreSQL<sup>1</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Gitaly<sup>5</sup> | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` | `m5.2xlarge` |
+| Praefect<sup>5</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a |
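
The consolidated object storage form that every page in this diff points to is configured once in `gitlab.rb` and shared by all object types. A minimal sketch for reference, assuming an S3-compatible backend; the provider, region, credentials, and bucket names below are placeholders and are not part of this change (the linked object_storage.md page is the authoritative reference):

```ruby
# Illustrative gitlab.rb sketch of the consolidated object storage form
# referenced above. All values here are placeholders for your environment.
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['proxy_download'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'aws_access_key_id' => '<AWS_ACCESS_KEY_ID>',
  'aws_secret_access_key' => '<AWS_SECRET_ACCESS_KEY>'
}
# One bucket per object type, all sharing the single connection above.
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'gitlab-artifacts'
gitlab_rails['object_store']['objects']['lfs']['bucket'] = 'gitlab-lfs'
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'gitlab-uploads'
gitlab_rails['object_store']['objects']['packages']['bucket'] = 'gitlab-packages'
```

Incremental logging is enabled separately from these settings; per the linked job_logs.md page it is toggled through the `ci_enable_live_trace` feature flag rather than a `gitlab.rb` entry, which is why the pages above call it out as required when no NFS node is deployed.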