From d7e7cb13db7754de86e8de36a24ac9139223b79e Mon Sep 17 00:00:00 2001 From: Nailia Iskhakova Date: Tue, 13 Jul 2021 16:24:08 +0300 Subject: [PATCH 1/8] Add 3k hybrid documentation Signed-off-by: Nailia Iskhakova --- .../reference_architectures/3k_users.md | 199 +++++++++++++++++- .../reference_architectures/index.md | 1 + 2 files changed, 189 insertions(+), 11 deletions(-) diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md index 71ca67075d33c0..145cd21020ac3f 100644 --- a/doc/administration/reference_architectures/3k_users.md +++ b/doc/administration/reference_architectures/3k_users.md @@ -63,10 +63,7 @@ together { collections "**Sidekiq** x4" as sidekiq #ff8dd1 } -together { - card "**Prometheus + Grafana**" as monitor #7FFFD4 - collections "**Consul** x3" as consul #e76a9b -} +card "**Prometheus + Grafana**" as monitor #7FFFD4 card "Gitaly Cluster" as gitaly_cluster { collections "**Praefect** x3" as praefect #FF8C00 @@ -86,14 +83,15 @@ card "Database" as database { postgres_primary .[#4EA7FF]> postgres_secondary } -card "redis" as redis { - collections "**Redis Persistent** x3" as redis_persistent #FF6347 - collections "**Redis Cache** x3" as redis_cache #FF6347 - collections "**Redis Persistent Sentinel** x3" as redis_persistent_sentinel #FF6347 - collections "**Redis Cache Sentinel** x3"as redis_cache_sentinel #FF6347 +node "**Consul + Sentinel** x3" as consul_sentinel { + component Consul as consul #e76a9b + component Sentinel as sentinel #e6e727 +} - redis_persistent <.[#FF6347]- redis_persistent_sentinel - redis_cache <.[#FF6347]- redis_cache_sentinel +card "Redis" as redis { + collections "**Redis** x3" as redis_nodes #FF6347 + + redis_nodes <.[#FF6347]- sentinel } cloud "**Object Storage**" as object_storage #white @@ -2091,6 +2089,185 @@ but with smaller performance requirements, several modifications can be consider - As Redis Sentinel runs on the same box as Consul in this architecture, it may need to be run on a separate box if Redis is still being run via Omnibus. - Redis: Can be run on reputable Cloud PaaS solutions such as Google Memorystore and AWS ElastiCache. In this setup, the Redis Sentinel is no longer required. +## Cloud Native Hybrid reference architecture with Helm Charts (alternative) + +As an alternative approach, you can also run select components of GitLab as Cloud Native +in Kubernetes via our official [Helm Charts](https://docs.gitlab.com/charts/). +In this setup, we support running the equivalent of GitLab Rails and Sidekiq nodes +in a Kubernetes cluster, named Webservice and Sidekiq respectively. In addition, +the following other supporting services are supported: NGINX, Task Runner, Migrations, +Prometheus and Grafana. + +Hybrid installations leverage the benefits of both cloud native and traditional +Kubernetes, you can reap certain cloud native workload management benefits while +the others are deployed in compute VMs with Omnibus as described above in this +page. + +NOTE: +This is an **advanced** setup. Running services in Kubernetes is well known +to be complex. **This setup is only recommended** if you have strong working +knowledge and experience in Kubernetes. The rest of this +section will assume this. + +### Cluster topology + +The following tables and diagram details the hybrid environment using the same formats +as the normal environment above. + +First starting with the components that run in Kubernetes. 
The recommendations at this +time use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory +and CPU requirements should translate to most other providers. We hope to update this in the +future with further specific cloud provider details. + +| Service | Nodes(1) | Configuration | GCP | Allocatable CPUs and Memory | +|-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------| +| Webservice | 2 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 31.8 vCPU, 24.8 GB memory | +| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory | +| Supporting services such as NGINX, Prometheus, etc. | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory | + + + +1. Nodes configuration is shown as it is forced to ensure pod vcpu / memory ratios and avoid scaling during **performance testing**. + In production deployments there is no need to assign pods to nodes. A minimum of three nodes in three different availability zones is strongly recommended to align with resilient cloud architecture practices. + + +Next are the backend components that run on static compute VMs via Omnibus (or External PaaS +services where applicable): + +| Service | Nodes | Configuration | GCP | +|--------------------------------------------|-------|-------------------------|------------------| +| Redis(2) | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | +| Consul(1) + Sentinel(2) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | +| PostgreSQL(1) | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | +| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | +| Internal load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | +| Gitaly | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | +| Praefect | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | +| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | +| Object storage(4) | n/a | n/a | n/a | + + + +1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. +2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. +3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. +4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. + + +NOTE: +For all PaaS solutions that involve configuring instances, it is strongly recommended to implement a minimum of three nodes in three different availability zones to align with resilient cloud architecture practices. 
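+
+The availability zone recommendation above can also be expressed for the
+Kubernetes workloads themselves. The following is an illustrative sketch only,
+assuming a Kubernetes version with pod topology spread support;
+`topology.kubernetes.io/zone` is the standard node label, while the
+`app: webservice` selector is a placeholder for your own deployment's labels:
+
+```yaml
+# Hypothetical snippet: ask the scheduler to spread pods evenly across
+# availability zones instead of pinning pods to specific nodes.
+topologySpreadConstraints:
+  - maxSkew: 1
+    topologyKey: topology.kubernetes.io/zone
+    whenUnsatisfiable: ScheduleAnyway
+    labelSelector:
+      matchLabels:
+        app: webservice
+```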
+ +```plantuml +@startuml 3k + +card "Kubernetes via Helm Charts" as kubernetes { + card "**External Load Balancer**" as elb #6a9be7 + + together { + collections "**Webservice** x2" as gitlab #32CD32 + collections "**Sidekiq** x3" as sidekiq #ff8dd1 + } + + card "**Prometheus + Grafana**" as monitor #7FFFD4 + card "**Supporting Services**" as support +} + +card "**Internal Load Balancer**" as ilb #9370DB + +node "**Consul + Sentinel** x3" as consul_sentinel { + component Consul as consul #e76a9b + component Sentinel as sentinel #e6e727 +} + +card "Gitaly Cluster" as gitaly_cluster { + collections "**Praefect** x3" as praefect #FF8C00 + collections "**Gitaly** x3" as gitaly #FF8C00 + card "**Praefect PostgreSQL***\n//Non fault-tolerant//" as praefect_postgres #FF8C00 + + praefect -[#FF8C00]-> gitaly + praefect -[#FF8C00]> praefect_postgres +} + +card "Database" as database { + collections "**PGBouncer** x3" as pgbouncer #4EA7FF + card "**PostgreSQL** (Primary)" as postgres_primary #4EA7FF + collections "**PostgreSQL** (Secondary) x2" as postgres_secondary #4EA7FF + + pgbouncer -[#4EA7FF]-> postgres_primary + postgres_primary .[#4EA7FF]> postgres_secondary +} + +card "Redis" as redis { + collections "**Redis** x3" as redis_nodes #FF6347 + + redis_nodes <.[#FF6347]- sentinel +} + +cloud "**Object Storage**" as object_storage #white + +elb -[#6a9be7]-> gitlab +elb -[#6a9be7]-> monitor +elb -[hidden]-> support + +gitlab -[#32CD32]> sidekiq +gitlab -[#32CD32]--> ilb +gitlab -[#32CD32]-> object_storage +gitlab -[#32CD32]---> redis +gitlab -[hidden]--> consul + +sidekiq -[#ff8dd1]--> ilb +sidekiq -[#ff8dd1]-> object_storage +sidekiq -[#ff8dd1]---> redis +sidekiq -[hidden]--> consul + +ilb -[#9370DB]-> gitaly_cluster +ilb -[#9370DB]-> database + +consul .[#e76a9b]-> database +consul .[#e76a9b]-> gitaly_cluster +consul .[#e76a9b,norank]--> redis + +monitor .[#7FFFD4]> consul +monitor .[#7FFFD4]-> database +monitor .[#7FFFD4]-> gitaly_cluster +monitor .[#7FFFD4,norank]--> redis +monitor .[#7FFFD4]> ilb +monitor .[#7FFFD4,norank]u--> elb + +@enduml +``` + +### Resource usage settings + +The following formulas help when calculating how many pods may be deployed within resource constraints. +The [3k reference architecture example values file](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/3k.yaml) +documents how to apply the calculated configuration to the Helm Chart. + +#### Webservice + +Webservice pods typically need about 1 vCPU and 1.25 GB of memory _per worker_. +Each Webservice pod will consume roughly 4 vCPUs and 5 GB of memory using +the [recommended topology](#cluster-topology) because four worker processes +are created by default and each pod has other small processes running. + +For 3k users we recommend a total Puma worker count of around 16. +With the [provided recommendations](#cluster-topology) this allows the deployment of up to 2 +Webservice pods with 4 workers per pod and 2 pods per node. Expand available resources using +the ratio of 1 vCPU to 1.25 GB of memory _per each worker process_ for each additional +Webservice pod. + +For further information on resource usage, see the [Webservice resources](https://docs.gitlab.com/charts/charts/gitlab/webservice/#resources). + +#### Sidekiq + +Sidekiq pods should generally have 1 vCPU and 2 GB of memory. + +[The provided starting point](#cluster-topology) allows the deployment of up to +8 Sidekiq pods. Expand available resources using the 1 vCPU to 2GB memory +ratio for each additional pod. 
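+
+As a rough sketch of how this ratio translates into chart configuration, the
+following hypothetical values snippet sizes each Sidekiq pod at the 1 vCPU to
+2 GB ratio described above. The key layout assumes the chart's `gitlab.sidekiq`
+section; defer to the linked example values file for the authoritative settings:
+
+```yaml
+# Illustrative only: request roughly one 1 vCPU / 2 GB slice per Sidekiq pod.
+gitlab:
+  sidekiq:
+    resources:
+      requests:
+        cpu: 900m    # slightly below 1 vCPU to leave headroom on each node
+        memory: 2G
+      limits:
+        memory: 4G
+```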
+ +For further information on resource usage, see the [Sidekiq resources](https://docs.gitlab.com/charts/charts/gitlab/sidekiq/#resources). +
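+
+The Webservice sizing above can be sketched in the same way. `workerProcesses`
+is the chart setting that controls the Puma worker count per pod; the resource
+figures below are assumptions derived from the 1 vCPU and 1.25 GB per-worker
+ratio in this section, not tested recommendations:
+
+```yaml
+# Illustrative only: 4 Puma workers per Webservice pod, sized at roughly
+# 1 vCPU and 1.25 GB of memory per worker plus a little overhead.
+gitlab:
+  webservice:
+    workerProcesses: 4
+    resources:
+      requests:
+        cpu: 4
+        memory: 5G
+```
+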
Back to setup components diff --git a/doc/administration/reference_architectures/index.md b/doc/administration/reference_architectures/index.md index 23e1cc355e0ed8..344f00e339b9b4 100644 --- a/doc/administration/reference_architectures/index.md +++ b/doc/administration/reference_architectures/index.md @@ -71,6 +71,7 @@ The following reference architectures are available: The following Cloud Native Hybrid reference architectures, where select recommended components can be run in Kubernetes, are available: +- [Up to 3,000 users](3k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) - [Up to 10,000 users](10k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) - [Up to 25,000 users](25k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) - [Up to 50,000 users](50k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) -- GitLab From bab193848cd519ca181cb1fdadba6427945993a1 Mon Sep 17 00:00:00 2001 From: Nailia Iskhakova Date: Wed, 14 Jul 2021 17:43:23 +0300 Subject: [PATCH 2/8] Update hybrid installations wording Signed-off-by: Nailia Iskhakova --- doc/administration/reference_architectures/3k_users.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md index 145cd21020ac3f..7b07fc398b5f2c 100644 --- a/doc/administration/reference_architectures/3k_users.md +++ b/doc/administration/reference_architectures/3k_users.md @@ -2099,9 +2099,9 @@ the following other supporting services are supported: NGINX, Task Runner, Migra Prometheus and Grafana. Hybrid installations leverage the benefits of both cloud native and traditional -Kubernetes, you can reap certain cloud native workload management benefits while -the others are deployed in compute VMs with Omnibus as described above in this -page. +compute deployments. With this, _stateless_ components can benefit from cloud native +workload management benefits while _stateful_ components are deployed in compute VMs +with Omnibus to benefit from increased permanence. NOTE: This is an **advanced** setup. Running services in Kubernetes is well known -- GitLab From 3224c981c4167e58d688bc6b8f2fa4095a4dc9ea Mon Sep 17 00:00:00 2001 From: Nailia Iskhakova Date: Wed, 14 Jul 2021 17:59:00 +0300 Subject: [PATCH 3/8] Update webservice pods count Signed-off-by: Nailia Iskhakova --- doc/administration/reference_architectures/3k_users.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md index 7b07fc398b5f2c..755ed2aa461ae8 100644 --- a/doc/administration/reference_architectures/3k_users.md +++ b/doc/administration/reference_architectures/3k_users.md @@ -2251,7 +2251,7 @@ the [recommended topology](#cluster-topology) because four worker processes are created by default and each pod has other small processes running. For 3k users we recommend a total Puma worker count of around 16. -With the [provided recommendations](#cluster-topology) this allows the deployment of up to 2 +With the [provided recommendations](#cluster-topology) this allows the deployment of up to 4 Webservice pods with 4 workers per pod and 2 pods per node. Expand available resources using the ratio of 1 vCPU to 1.25 GB of memory _per each worker process_ for each additional Webservice pod. 
-- GitLab From efe35d5d647f227076acd7e52d2dec3a1e492228 Mon Sep 17 00:00:00 2001 From: Nailia Iskhakova Date: Thu, 15 Jul 2021 18:32:16 +0300 Subject: [PATCH 4/8] Remove link between Rails and Sidekiq in diagrams Signed-off-by: Nailia Iskhakova --- doc/administration/reference_architectures/3k_users.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md index 755ed2aa461ae8..bcbbaf62742a08 100644 --- a/doc/administration/reference_architectures/3k_users.md +++ b/doc/administration/reference_architectures/3k_users.md @@ -99,7 +99,6 @@ cloud "**Object Storage**" as object_storage #white elb -[#6a9be7]-> gitlab elb -[#6a9be7]--> monitor -gitlab -[#32CD32]> sidekiq gitlab -[#32CD32]--> ilb gitlab -[#32CD32]-> object_storage gitlab -[#32CD32]---> redis @@ -2209,7 +2208,6 @@ elb -[#6a9be7]-> gitlab elb -[#6a9be7]-> monitor elb -[hidden]-> support -gitlab -[#32CD32]> sidekiq gitlab -[#32CD32]--> ilb gitlab -[#32CD32]-> object_storage gitlab -[#32CD32]---> redis -- GitLab From 446947fb55d0a307acb3f75a3d43f899dc1fe592 Mon Sep 17 00:00:00 2001 From: Nailia Iskhakova Date: Wed, 21 Jul 2021 15:35:40 +0300 Subject: [PATCH 5/8] Use collections for Redis and Sentinel per review Signed-off-by: Nailia Iskhakova --- .../reference_architectures/3k_users.md | 12 ++++++------ .../reference_architectures/5k_users.md | 12 ++++++------ 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md index b86d3376c492e0..3195ff0284cdeb 100644 --- a/doc/administration/reference_architectures/3k_users.md +++ b/doc/administration/reference_architectures/3k_users.md @@ -83,9 +83,9 @@ card "Database" as database { postgres_primary .[#4EA7FF]> postgres_secondary } -node "**Consul + Sentinel** x3" as consul_sentinel { - component Consul as consul #e76a9b - component Sentinel as sentinel #e6e727 +card "**Consul + Sentinel**" as consul_sentinel { + collections "**Consul** x3" as consul #e76a9b + collections "**Redis Sentinel** x3" as sentinel #e6e727 } card "Redis" as redis { @@ -2177,9 +2177,9 @@ card "Kubernetes via Helm Charts" as kubernetes { card "**Internal Load Balancer**" as ilb #9370DB -node "**Consul + Sentinel** x3" as consul_sentinel { - component Consul as consul #e76a9b - component Sentinel as sentinel #e6e727 +card "**Consul + Sentinel**" as consul_sentinel { + collections "**Consul** x3" as consul #e76a9b + collections "**Redis Sentinel** x3" as sentinel #e6e727 } card "Gitaly Cluster" as gitaly_cluster { diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md index e57c4545b13cb8..f28af8fadaba38 100644 --- a/doc/administration/reference_architectures/5k_users.md +++ b/doc/administration/reference_architectures/5k_users.md @@ -80,9 +80,9 @@ card "Database" as database { postgres_primary .[#4EA7FF]> postgres_secondary } -node "**Consul + Sentinel** x3" as consul_sentinel { - component Consul as consul #e76a9b - component Sentinel as sentinel #e6e727 +card "**Consul + Sentinel**" as consul_sentinel { + collections "**Consul** x3" as consul #e76a9b + collections "**Redis Sentinel** x3" as sentinel #e6e727 } card "Redis" as redis { @@ -2150,9 +2150,9 @@ card "Kubernetes via Helm Charts" as kubernetes { card "**Internal Load Balancer**" as ilb #9370DB -node "**Consul + Sentinel** x3" as consul_sentinel { - component Consul as 
consul #e76a9b - component Sentinel as sentinel #e6e727 +card "**Consul + Sentinel**" as consul_sentinel { + collections "**Consul** x3" as consul #e76a9b + collections "**Redis Sentinel** x3" as sentinel #e6e727 } card "Gitaly Cluster" as gitaly_cluster { -- GitLab From 781b7f6a0a0bb18e7a6a795a4f827e35794f15ec Mon Sep 17 00:00:00 2001 From: Nailia Iskhakova Date: Wed, 21 Jul 2021 21:36:19 +0300 Subject: [PATCH 6/8] Docs changes per technical review Signed-off-by: Nailia Iskhakova --- .../reference_architectures/10k_users.md | 20 +++++++++---------- .../reference_architectures/25k_users.md | 20 +++++++++---------- .../reference_architectures/2k_users.md | 8 ++++---- .../reference_architectures/3k_users.md | 18 ++++++++--------- .../reference_architectures/50k_users.md | 20 +++++++++---------- .../reference_architectures/5k_users.md | 20 +++++++++---------- 6 files changed, 53 insertions(+), 53 deletions(-) diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md index 1fc3483fbd45a3..61b1e98105dd87 100644 --- a/doc/administration/reference_architectures/10k_users.md +++ b/doc/administration/reference_architectures/10k_users.md @@ -40,7 +40,7 @@ full list of reference architectures, see 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -141,7 +141,7 @@ is recommended instead of using NFS. Using an object storage service also doesn't require you to provision and maintain a node. It's also worth noting that at this time [Praefect requires its own database server](../gitaly/praefect.md#postgresql) and -that to achieve full High Availability a third party PostgreSQL database solution will be required. +that to achieve full High Availability a third-party PostgreSQL database solution will be required. We hope to offer a built in solutions for these restrictions in the future but in the meantime a non HA PostgreSQL server can be set up via Omnibus GitLab, which the above specs reflect. Refer to the following issues for more information: [`omnibus-gitlab#5919`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5919) & [`gitaly#3398`](https://gitlab.com/gitlab-org/gitaly/-/issues/3398) @@ -2368,7 +2368,7 @@ in Kubernetes via our official [Helm Charts](https://docs.gitlab.com/charts/). In this setup, we support running the equivalent of GitLab Rails and Sidekiq nodes in a Kubernetes cluster, named Webservice and Sidekiq respectively. 
In addition, the following other supporting services are supported: NGINX, Task Runner, Migrations, -Prometheus and Grafana. +Prometheus, and Grafana. Hybrid installations leverage the benefits of both cloud native and traditional compute deployments. With this, _stateless_ components can benefit from cloud native @@ -2379,15 +2379,15 @@ NOTE: This is an **advanced** setup. Running services in Kubernetes is well known to be complex. **This setup is only recommended** if you have strong working knowledge and experience in Kubernetes. The rest of this -section will assume this. +section assumes this. ### Cluster topology -The following tables and diagram details the hybrid environment using the same formats +The following tables and diagram detail the hybrid environment using the same formats as the normal environment above. -First starting with the components that run in Kubernetes. The recommendations at this -time use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory +First are the components that run in Kubernetes. The recommendation at this time is to +use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory and CPU requirements should translate to most other providers. We hope to update this in the future with further specific cloud provider details. @@ -2426,7 +2426,7 @@ services where applicable): 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -2520,11 +2520,11 @@ documents how to apply the calculated configuration to the Helm Chart. #### Webservice Webservice pods typically need about 1 vCPU and 1.25 GB of memory _per worker_. -Each Webservice pod will consume roughly 4 vCPUs and 5 GB of memory using +Each Webservice pod consumes roughly 4 vCPUs and 5 GB of memory using the [recommended topology](#cluster-topology) because four worker processes are created by default and each pod has other small processes running. -For 10k users we recommend a total Puma worker count of around 80. +For 10,000 users we recommend a total Puma worker count of around 80. With the [provided recommendations](#cluster-topology) this allows the deployment of up to 20 Webservice pods with 4 workers per pod and 5 pods per node. 
Expand available resources using the ratio of 1 vCPU to 1.25 GB of memory _per each worker process_ for each additional diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md index e45a8f6963c03c..36108be3d1a283 100644 --- a/doc/administration/reference_architectures/25k_users.md +++ b/doc/administration/reference_architectures/25k_users.md @@ -40,7 +40,7 @@ full list of reference architectures, see 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -141,7 +141,7 @@ is recommended instead of using NFS. Using an object storage service also doesn't require you to provision and maintain a node. It's also worth noting that at this time [Praefect requires its own database server](../gitaly/praefect.md#postgresql) and -that to achieve full High Availability a third party PostgreSQL database solution will be required. +that to achieve full High Availability a third-party PostgreSQL database solution will be required. We hope to offer a built in solutions for these restrictions in the future but in the meantime a non HA PostgreSQL server can be set up via Omnibus GitLab, which the above specs reflect. Refer to the following issues for more information: [`omnibus-gitlab#5919`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5919) & [`gitaly#3398`](https://gitlab.com/gitlab-org/gitaly/-/issues/3398) @@ -2380,7 +2380,7 @@ in Kubernetes via our official [Helm Charts](https://docs.gitlab.com/charts/). In this setup, we support running the equivalent of GitLab Rails and Sidekiq nodes in a Kubernetes cluster, named Webservice and Sidekiq respectively. In addition, the following other supporting services are supported: NGINX, Task Runner, Migrations, -Prometheus and Grafana. +Prometheus, and Grafana. Hybrid installations leverage the benefits of both cloud native and traditional compute deployments. With this, _stateless_ components can benefit from cloud native @@ -2391,15 +2391,15 @@ NOTE: This is an **advanced** setup. Running services in Kubernetes is well known to be complex. **This setup is only recommended** if you have strong working knowledge and experience in Kubernetes. The rest of this -section will assume this. +section assumes this. ### Cluster topology -The following tables and diagram details the hybrid environment using the same formats +The following tables and diagram detail the hybrid environment using the same formats as the normal environment above. -First starting with the components that run in Kubernetes. 
The recommendations at this -time use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory +First are the components that run in Kubernetes. The recommendation at this time is to +use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory and CPU requirements should translate to most other providers. We hope to update this in the future with further specific cloud provider details. @@ -2438,7 +2438,7 @@ services where applicable): 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -2532,11 +2532,11 @@ documents how to apply the calculated configuration to the Helm Chart. #### Webservice Webservice pods typically need about 1 vCPU and 1.25 GB of memory _per worker_. -Each Webservice pod will consume roughly 4 vCPUs and 5 GB of memory using +Each Webservice pod consumes roughly 4 vCPUs and 5 GB of memory using the [recommended topology](#cluster-topology) because four worker processes are created by default and each pod has other small processes running. -For 25k users we recommend a total Puma worker count of around 140. +For 25,000 users we recommend a total Puma worker count of around 140. With the [provided recommendations](#cluster-topology) this allows the deployment of up to 35 Webservice pods with 4 workers per pod and 5 pods per node. Expand available resources using the ratio of 1 vCPU to 1.25 GB of memory _per each worker process_ for each additional diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md index ff3db877553330..71f9ef43c21d7b 100644 --- a/doc/administration/reference_architectures/2k_users.md +++ b/doc/administration/reference_architectures/2k_users.md @@ -28,10 +28,10 @@ For a full list of reference architectures, see | NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` | -1. Can be optionally run on reputable third party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. -2. Can be optionally run as reputable third party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. -3. 
Can be optionally run as reputable third party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. +2. Can be optionally run as reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. +3. Can be optionally run as reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md index 3195ff0284cdeb..f7fea5c8c7e590 100644 --- a/doc/administration/reference_architectures/3k_users.md +++ b/doc/administration/reference_architectures/3k_users.md @@ -47,7 +47,7 @@ For a full list of reference architectures, see 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -2099,7 +2099,7 @@ in Kubernetes via our official [Helm Charts](https://docs.gitlab.com/charts/). In this setup, we support running the equivalent of GitLab Rails and Sidekiq nodes in a Kubernetes cluster, named Webservice and Sidekiq respectively. In addition, the following other supporting services are supported: NGINX, Task Runner, Migrations, -Prometheus and Grafana. +Prometheus, and Grafana. Hybrid installations leverage the benefits of both cloud native and traditional compute deployments. With this, _stateless_ components can benefit from cloud native @@ -2110,15 +2110,15 @@ NOTE: This is an **advanced** setup. Running services in Kubernetes is well known to be complex. **This setup is only recommended** if you have strong working knowledge and experience in Kubernetes. The rest of this -section will assume this. +section assumes this. 
### Cluster topology -The following tables and diagram details the hybrid environment using the same formats +The following tables and diagram detail the hybrid environment using the same formats as the normal environment above. -First starting with the components that run in Kubernetes. The recommendations at this -time use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory +First are the components that run in Kubernetes. The recommendation at this time is to +use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory and CPU requirements should translate to most other providers. We hope to update this in the future with further specific cloud provider details. @@ -2154,7 +2154,7 @@ services where applicable): 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -2248,11 +2248,11 @@ documents how to apply the calculated configuration to the Helm Chart. #### Webservice Webservice pods typically need about 1 vCPU and 1.25 GB of memory _per worker_. -Each Webservice pod will consume roughly 4 vCPUs and 5 GB of memory using +Each Webservice pod consumes roughly 4 vCPUs and 5 GB of memory using the [recommended topology](#cluster-topology) because four worker processes are created by default and each pod has other small processes running. -For 3k users we recommend a total Puma worker count of around 16. +For 3,000 users we recommend a total Puma worker count of around 16. With the [provided recommendations](#cluster-topology) this allows the deployment of up to 4 Webservice pods with 4 workers per pod and 2 pods per node. Expand available resources using the ratio of 1 vCPU to 1.25 GB of memory _per each worker process_ for each additional diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md index 766f94f6c535b7..77db7a33bd74c8 100644 --- a/doc/administration/reference_architectures/50k_users.md +++ b/doc/administration/reference_architectures/50k_users.md @@ -40,7 +40,7 @@ full list of reference architectures, see 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. 
However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -141,7 +141,7 @@ is recommended instead of using NFS. Using an object storage service also doesn't require you to provision and maintain a node. It's also worth noting that at this time [Praefect requires its own database server](../gitaly/praefect.md#postgresql) and -that to achieve full High Availability a third party PostgreSQL database solution will be required. +that to achieve full High Availability a third-party PostgreSQL database solution will be required. We hope to offer a built in solutions for these restrictions in the future but in the meantime a non HA PostgreSQL server can be set up via Omnibus GitLab, which the above specs reflect. Refer to the following issues for more information: [`omnibus-gitlab#5919`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5919) & [`gitaly#3398`](https://gitlab.com/gitlab-org/gitaly/-/issues/3398) @@ -2391,7 +2391,7 @@ in Kubernetes via our official [Helm Charts](https://docs.gitlab.com/charts/). In this setup, we support running the equivalent of GitLab Rails and Sidekiq nodes in a Kubernetes cluster, named Webservice and Sidekiq respectively. In addition, the following other supporting services are supported: NGINX, Task Runner, Migrations, -Prometheus and Grafana. +Prometheus, and Grafana. Hybrid installations leverage the benefits of both cloud native and traditional compute deployments. With this, _stateless_ components can benefit from cloud native @@ -2402,15 +2402,15 @@ NOTE: This is an **advanced** setup. Running services in Kubernetes is well known to be complex. **This setup is only recommended** if you have strong working knowledge and experience in Kubernetes. The rest of this -section will assume this. +section assumes this. ### Cluster topology -The following tables and diagram details the hybrid environment using the same formats +The following tables and diagram detail the hybrid environment using the same formats as the normal environment above. -First starting with the components that run in Kubernetes. The recommendations at this -time use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory +First are the components that run in Kubernetes. The recommendation at this time is to +use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory and CPU requirements should translate to most other providers. We hope to update this in the future with further specific cloud provider details. @@ -2449,7 +2449,7 @@ services where applicable): 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. 
However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -2543,11 +2543,11 @@ documents how to apply the calculated configuration to the Helm Chart. #### Webservice Webservice pods typically need about 1 vCPU and 1.25 GB of memory _per worker_. -Each Webservice pod will consume roughly 4 vCPUs and 5 GB of memory using +Each Webservice pod consumes roughly 4 vCPUs and 5 GB of memory using the [recommended topology](#cluster-topology) because four worker processes are created by default and each pod has other small processes running. -For 50k users we recommend a total Puma worker count of around 320. +For 50,000 users we recommend a total Puma worker count of around 320. With the [provided recommendations](#cluster-topology) this allows the deployment of up to 80 Webservice pods with 4 workers per pod and 5 pods per node. Expand available resources using the ratio of 1 vCPU to 1.25 GB of memory _per each worker process_ for each additional diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md index f28af8fadaba38..bac5ea2e4d30d8 100644 --- a/doc/administration/reference_architectures/5k_users.md +++ b/doc/administration/reference_architectures/5k_users.md @@ -44,7 +44,7 @@ costly-to-operate environment by using the 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -143,7 +143,7 @@ is recommended instead of using NFS. Using an object storage service also doesn't require you to provision and maintain a node. It's also worth noting that at this time [Praefect requires its own database server](../gitaly/praefect.md#postgresql) and -that to achieve full High Availability a third party PostgreSQL database solution will be required. +that to achieve full High Availability a third-party PostgreSQL database solution will be required. 
We hope to offer a built in solutions for these restrictions in the future but in the meantime a non HA PostgreSQL server can be set up via Omnibus GitLab, which the above specs reflect. Refer to the following issues for more information: [`omnibus-gitlab#5919`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/5919) & [`gitaly#3398`](https://gitlab.com/gitlab-org/gitaly/-/issues/3398) @@ -2072,7 +2072,7 @@ in Kubernetes via our official [Helm Charts](https://docs.gitlab.com/charts/). In this setup, we support running the equivalent of GitLab Rails and Sidekiq nodes in a Kubernetes cluster, named Webservice and Sidekiq respectively. In addition, the following other supporting services are supported: NGINX, Task Runner, Migrations, -Prometheus and Grafana. +Prometheus, and Grafana. Hybrid installations leverage the benefits of both cloud native and traditional compute deployments. With this, _stateless_ components can benefit from cloud native @@ -2083,15 +2083,15 @@ NOTE: This is an **advanced** setup. Running services in Kubernetes is well known to be complex. **This setup is only recommended** if you have strong working knowledge and experience in Kubernetes. The rest of this -section will assume this. +section assumes this. ### Cluster topology -The following tables and diagram details the hybrid environment using the same formats +The following tables and diagram detail the hybrid environment using the same formats as the normal environment above. -First starting with the components that run in Kubernetes. The recommendations at this -time use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory +First are the components that run in Kubernetes. The recommendation at this time is to +use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but the memory and CPU requirements should translate to most other providers. We hope to update this in the future with further specific cloud provider details. @@ -2127,7 +2127,7 @@ services where applicable): 1. Can be optionally run on reputable third-party external PaaS PostgreSQL solutions. Google Cloud SQL and AWS RDS are known to work, however Azure Database for PostgreSQL is [not recommended](https://gitlab.com/gitlab-org/quality/reference-architectures/-/issues/61) due to performance issues. Consul is primarily used for PostgreSQL high availability so can be ignored when using a PostgreSQL PaaS setup. However it is also used optionally by Prometheus for Omnibus auto host discovery. 2. Can be optionally run on reputable third-party external PaaS Redis solutions. Google Memorystore and AWS Elasticache are known to work. 3. Can be optionally run on reputable third-party load balancing services (LB PaaS). AWS ELB is known to work. -4. Should be run on reputable third party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. +4. Should be run on reputable third-party object storage (storage PaaS) for cloud implementations. Google Cloud Storage and AWS S3 are known to work. NOTE: @@ -2221,11 +2221,11 @@ documents how to apply the calculated configuration to the Helm Chart. #### Webservice Webservice pods typically need about 1 vCPU and 1.25 GB of memory _per worker_. 
-Each Webservice pod will consume roughly 4 vCPUs and 5 GB of memory using +Each Webservice pod consumes roughly 4 vCPUs and 5 GB of memory using the [recommended topology](#cluster-topology) because four worker processes are created by default and each pod has other small processes running. -For 5k users we recommend a total Puma worker count of around 40. +For 5,000 users we recommend a total Puma worker count of around 40. With the [provided recommendations](#cluster-topology) this allows the deployment of up to 10 Webservice pods with 4 workers per pod and 2 pods per node. Expand available resources using the ratio of 1 vCPU to 1.25 GB of memory _per each worker process_ for each additional -- GitLab From 78ecd67bd5f72a3dd2afc9a94f93c69bc1afa836 Mon Sep 17 00:00:00 2001 From: Nailia Iskhakova Date: Wed, 21 Jul 2021 21:42:40 +0300 Subject: [PATCH 7/8] Remove etc usage Signed-off-by: Nailia Iskhakova --- doc/administration/reference_architectures/10k_users.md | 2 +- doc/administration/reference_architectures/25k_users.md | 2 +- doc/administration/reference_architectures/3k_users.md | 2 +- doc/administration/reference_architectures/50k_users.md | 2 +- doc/administration/reference_architectures/5k_users.md | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md index 61b1e98105dd87..2e9a345b7d9e39 100644 --- a/doc/administration/reference_architectures/10k_users.md +++ b/doc/administration/reference_architectures/10k_users.md @@ -2395,7 +2395,7 @@ future with further specific cloud provider details. |-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------| | Webservice | 4 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 127.5 vCPU, 118 GB memory | | Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory | -| Supporting services such as NGINX or Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory | +| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory | diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md index 36108be3d1a283..62c992b6133fa7 100644 --- a/doc/administration/reference_architectures/25k_users.md +++ b/doc/administration/reference_architectures/25k_users.md @@ -2407,7 +2407,7 @@ future with further specific cloud provider details. |-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------| | Webservice | 7 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 223 vCPU, 206.5 GB memory | | Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory | -| Supporting services such as NGINX, Prometheus, etc. | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory | +| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory | diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md index f7fea5c8c7e590..becae6f24e0f69 100644 --- a/doc/administration/reference_architectures/3k_users.md +++ b/doc/administration/reference_architectures/3k_users.md @@ -2126,7 +2126,7 @@ future with further specific cloud provider details. 
|-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------| | Webservice | 2 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 31.8 vCPU, 24.8 GB memory | | Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory | -| Supporting services such as NGINX, Prometheus, etc. | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory | +| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory | diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md index 77db7a33bd74c8..3d21b8289b8319 100644 --- a/doc/administration/reference_architectures/50k_users.md +++ b/doc/administration/reference_architectures/50k_users.md @@ -2418,7 +2418,7 @@ future with further specific cloud provider details. |-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------| | Webservice | 16 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 510 vCPU, 472 GB memory | | Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory | -| Supporting services such as NGINX, Prometheus, etc. | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory | +| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory | diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md index bac5ea2e4d30d8..3e0ed87b56dc76 100644 --- a/doc/administration/reference_architectures/5k_users.md +++ b/doc/administration/reference_architectures/5k_users.md @@ -2099,7 +2099,7 @@ future with further specific cloud provider details. |-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------| | Webservice | 5 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 79.5 vCPU, 62 GB memory | | Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory | -| Supporting services such as NGINX, Prometheus, etc. 
| 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory | +| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory | -- GitLab From a215d20a9ffabc948af8836437c9d3e158b22c45 Mon Sep 17 00:00:00 2001 From: Nailia Iskhakova Date: Wed, 21 Jul 2021 21:57:33 +0300 Subject: [PATCH 8/8] Use superscripts instead of numbers in brackets Per review Signed-off-by: Nailia Iskhakova --- .../reference_architectures/10k_users.md | 76 +++++++++---------- .../reference_architectures/25k_users.md | 76 +++++++++---------- .../reference_architectures/2k_users.md | 8 +- .../reference_architectures/3k_users.md | 40 +++++----- .../reference_architectures/50k_users.md | 76 +++++++++---------- .../reference_architectures/5k_users.md | 40 +++++----- 6 files changed, 158 insertions(+), 158 deletions(-) diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md index 2e9a345b7d9e39..65be734f51b5da 100644 --- a/doc/administration/reference_architectures/10k_users.md +++ b/doc/administration/reference_architectures/10k_users.md @@ -15,25 +15,25 @@ full list of reference architectures, see > - **High Availability:** Yes ([Praefect](#configure-praefect-postgresql) needs a third-party PostgreSQL solution for HA) > - **Test requests per second (RPS) rates:** API: 200 RPS, Web: 20 RPS, Git (Pull): 20 RPS, Git (Push): 4 RPS -| Service | Nodes | Configuration | GCP | AWS | Azure | -|--------------------------------------------|-------------|-------------------------|------------------|--------------|-----------| -| External load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` | -| Consul(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` | -| PostgreSQL(1) | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` | `m5.2xlarge` | `D8s v3` | -| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` | -| Internal load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` | -| Redis - Cache(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` | -| Redis - Queues / Shared State(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` | -| Redis Sentinel - Cache(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` | -| Redis Sentinel - Queues / Shared State(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` | -| Gitaly | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` | `m5.4xlarge` | `D16s v3` | -| Praefect | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` | -| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` | -| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` | -| GitLab Rails | 3 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | `F32s v2` | -| Monitoring node | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` | -| Object storage(4) | n/a | n/a | n/a | n/a | n/a | -| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` | +| Service | Nodes | Configuration | GCP | AWS | Azure | +|-----------------------------------------------------|-------------|-------------------------|------------------|--------------|-----------| +| External load balancing node3 | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` | +| 
+| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| PostgreSQL<sup>1</sup> | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` | `m5.2xlarge` | `D8s v3` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Redis - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| Redis - Queues / Shared State<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| Redis Sentinel - Cache<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
+| Redis Sentinel - Queues / Shared State<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
+| Gitaly | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` | `m5.4xlarge` | `D16s v3` |
+| Praefect | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| GitLab Rails | 3 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | `F32s v2` |
+| Monitoring node | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a | n/a |
+| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |

@@ -2391,11 +2391,11 @@ use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but t
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes(1) | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------|
-| Webservice | 4 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 127.5 vCPU, 118 GB memory |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
+| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
+| Webservice | 4 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 127.5 vCPU, 118 GB memory |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |

@@ -2406,20 +2406,20 @@ future with further specific cloud provider details.
 Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
 services where applicable):

-| Service | Nodes | Configuration | GCP |
-|--------------------------------------------|-------|-------------------------|------------------|
-| Consul(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL(1) | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` |
-| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Redis - Cache(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Redis - Queues / Shared State(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Redis Sentinel - Cache(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
-| Redis Sentinel - Queues / Shared State(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
-| Gitaly | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` |
-| Praefect | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage(4) | n/a | n/a | n/a |
+| Service | Nodes | Configuration | GCP |
+|-----------------------------------------------------|-------|-------------------------|------------------|
+| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| PostgreSQL<sup>1</sup> | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Redis - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
+| Redis - Queues / Shared State<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
+| Redis Sentinel - Cache<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
+| Redis Sentinel - Queues / Shared State<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
+| Gitaly | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` |
+| Praefect | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a |

diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md
index 62c992b6133fa7..de222e2890ccd3 100644
--- a/doc/administration/reference_architectures/25k_users.md
+++ b/doc/administration/reference_architectures/25k_users.md
@@ -15,25 +15,25 @@ full list of reference architectures, see
 > - **High Availability:** Yes ([Praefect](#configure-praefect-postgresql) needs a third-party PostgreSQL solution for HA)
 > - **Test requests per second (RPS) rates:** API: 500 RPS, Web: 50 RPS, Git (Pull): 50 RPS, Git (Push): 10 RPS

-| Service | Nodes | Configuration | GCP | AWS | Azure |
-|------------------------------------------|-------------|-------------------------|------------------|--------------|-----------|
-| External load balancing node(3) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
-| Consul(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| PostgreSQL(1) | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` | `m5.4xlarge` | `D16s v3` |
-| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Internal load balancing node(3) | 1 | 4 vCPU, 3.6GB memory | `n1-highcpu-4` | `c5.large` | `F2s v2` |
-| Redis - Cache(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| Redis - Queues / Shared State(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| Redis Sentinel - Cache(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
-| Redis Sentinel - Queues / Shared State(2)| 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
-| Gitaly | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` | `m5.8xlarge` | `D32s v3` |
-| Praefect | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
-| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| GitLab Rails | 5 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | `F32s v2` |
-| Monitoring node | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
-| Object storage(4) | n/a | n/a | n/a | n/a | n/a |
-| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|---------------------------------------------------|-------------|-------------------------|------------------|--------------|-----------|
+| External load balancing node<sup>3</sup> | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| PostgreSQL<sup>1</sup> | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` | `m5.4xlarge` | `D16s v3` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Internal load balancing node<sup>3</sup> | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.large` | `F2s v2` |
+| Redis - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| Redis - Queues / Shared State<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| Redis Sentinel - Cache<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
+| Redis Sentinel - Queues / Shared State<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
+| Gitaly | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` | `m5.8xlarge` | `D32s v3` |
+| Praefect | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| GitLab Rails | 5 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | `F32s v2` |
+| Monitoring node | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a | n/a |
+| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |

@@ -2403,11 +2403,11 @@ use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but t
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes(1) | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------|
-| Webservice | 7 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 223 vCPU, 206.5 GB memory |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
+| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
+| Webservice | 7 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 223 vCPU, 206.5 GB memory |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |

@@ -2418,20 +2418,20 @@ future with further specific cloud provider details.
 Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
 services where applicable):

-| Service | Nodes | Configuration | GCP |
-|--------------------------------------------|-------|-------------------------|------------------|
-| Consul(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL(1) | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` |
-| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node(3) | 1 | 4 vCPU, 3.6GB memory | `n1-highcpu-4` |
-| Redis - Cache(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Redis - Queues / Shared State(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Redis Sentinel - Cache(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
-| Redis Sentinel - Queues / Shared State(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
-| Gitaly | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` |
-| Praefect | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` |
-| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage(4) | n/a | n/a | n/a |
+| Service | Nodes | Configuration | GCP |
+|-----------------------------------------------------|-------|-------------------------|------------------|
+| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| PostgreSQL<sup>1</sup> | 3 | 16 vCPU, 60 GB memory | `n1-standard-16` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Internal load balancing node<sup>3</sup> | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` |
+| Redis - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
+| Redis - Queues / Shared State<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
+| Redis Sentinel - Cache<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
+| Redis Sentinel - Queues / Shared State<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
+| Gitaly | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` |
+| Praefect | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a |

diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md
index 71f9ef43c21d7b..19a1d74b7bca95 100644
--- a/doc/administration/reference_architectures/2k_users.md
+++ b/doc/administration/reference_architectures/2k_users.md
@@ -18,13 +18,13 @@ For a full list of reference architectures, see
 | Service | Nodes | Configuration | GCP | AWS | Azure |
 |------------------------------------------|--------|-------------------------|-----------------|--------------|----------|
-| Load balancer(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| PostgreSQL(1) | 1 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| Redis(2) | 1 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `m5.large` | `D2s v3` |
+| Load balancer<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| PostgreSQL<sup>1</sup> | 1 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| Redis<sup>2</sup> | 1 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `m5.large` | `D2s v3` |
 | Gitaly | 1 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
 | GitLab Rails | 2 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
 | Monitoring node | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Object storage(4) | n/a | n/a | n/a | n/a | n/a |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a | n/a |
 | NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |

diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md
index becae6f24e0f69..17491379afbbfe 100644
--- a/doc/administration/reference_architectures/3k_users.md
+++ b/doc/administration/reference_architectures/3k_users.md
@@ -27,19 +27,19 @@ For a full list of reference architectures, see
 | Service | Nodes | Configuration | GCP | AWS | Azure |
 |--------------------------------------------|-------------|-----------------------|-----------------|--------------|----------|
-| External load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Redis(2) | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| Consul(1) + Sentinel(2) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| PostgreSQL(1) | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Internal load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| External load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Redis<sup>2</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| Consul<sup>1</sup> + Sentinel<sup>2</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| PostgreSQL<sup>1</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
 | Gitaly | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
 | Praefect | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
 | Sidekiq | 4 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
 | GitLab Rails | 3 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
 | Monitoring node | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Object storage(4) | n/a | n/a | n/a | n/a | n/a |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a | n/a |
 | NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |

@@ -2122,11 +2122,11 @@ use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but t
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes(1) | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------|
-| Webservice | 2 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 31.8 vCPU, 24.8 GB memory |
-| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
+| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
+| Webservice | 2 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 31.8 vCPU, 24.8 GB memory |
+| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |

@@ -2139,15 +2139,15 @@ services where applicable):
 | Service | Nodes | Configuration | GCP |
 |--------------------------------------------|-------|-------------------------|------------------|
-| Redis(2) | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
-| Consul(1) + Sentinel(2) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL(1) | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
-| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Redis<sup>2</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
+| Consul<sup>1</sup> + Sentinel<sup>2</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| PostgreSQL<sup>1</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
 | Gitaly | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
 | Praefect | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage(4) | n/a | n/a | n/a |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a |

diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md
index 3d21b8289b8319..acc0a3c27e0d78 100644
--- a/doc/administration/reference_architectures/50k_users.md
+++ b/doc/administration/reference_architectures/50k_users.md
@@ -15,25 +15,25 @@ full list of reference architectures, see
 > - **High Availability:** Yes ([Praefect](#configure-praefect-postgresql) needs a third-party PostgreSQL solution for HA)
 > - **Test requests per second (RPS) rates:** API: 1000 RPS, Web: 100 RPS, Git (Pull): 100 RPS, Git (Push): 20 RPS

-| Service | Nodes | Configuration | GCP | AWS | Azure |
-|------------------------------------------|-------------|-------------------------|------------------|---------------|-----------|
-| External load balancing node(3) | 1 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
-| Consul(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| PostgreSQL(1) | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` | `m5.8xlarge` | `D32s v3` |
-| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Internal load balancing node(3) | 1 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
-| Redis - Cache(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| Redis - Queues / Shared State(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| Redis Sentinel - Cache(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
-| Redis Sentinel - Queues / Shared State(2)| 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
-| Gitaly | 3 | 64 vCPU, 240 GB memory | `n1-standard-64` | `m5.16xlarge` | `D64s v3` |
-| Praefect | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
-| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| GitLab Rails | 12 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | `F32s v2` |
-| Monitoring node | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
-| Object storage(4) | n/a | n/a | n/a | n/a | n/a |
-| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+| Service | Nodes | Configuration | GCP | AWS | Azure |
+|---------------------------------------------------|-------------|-------------------------|------------------|---------------|-----------|
+| External load balancing node<sup>3</sup> | 1 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
+| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| PostgreSQL<sup>1</sup> | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` | `m5.8xlarge` | `D32s v3` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Internal load balancing node<sup>3</sup> | 1 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` | `c5.2xlarge` | `F8s v2` |
+| Redis - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| Redis - Queues / Shared State<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| Redis Sentinel - Cache<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
+| Redis Sentinel - Queues / Shared State<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` | `c5.large` | `A1 v2` |
+| Gitaly | 3 | 64 vCPU, 240 GB memory | `n1-standard-64` | `m5.16xlarge` | `D64s v3` |
+| Praefect | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| GitLab Rails | 12 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | `c5.9xlarge` | `F32s v2` |
+| Monitoring node | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a | n/a |
+| NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |

@@ -2414,11 +2414,11 @@ use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but t
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes(1) | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------|
-| Webservice | 16 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 510 vCPU, 472 GB memory |
-| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |
+| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
+| Webservice | 16 | 32 vCPU, 28.8 GB memory | `n1-highcpu-32` | 510 vCPU, 472 GB memory |
+| Sidekiq | 4 | 4 vCPU, 15 GB memory | `n1-standard-4` | 15.5 vCPU, 50 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 4 vCPU, 15 GB memory | `n1-standard-4` | 7.75 vCPU, 25 GB memory |

@@ -2429,20 +2429,20 @@ future with further specific cloud provider details.
 Next are the backend components that run on static compute VMs via Omnibus (or External PaaS
 services where applicable):

-| Service | Nodes | Configuration | GCP |
-|--------------------------------------------|-------|-------------------------|------------------|
-| Consul(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL(1) | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` |
-| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node(3) | 1 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` |
-| Redis - Cache(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Redis - Queues / Shared State(2) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| Redis Sentinel - Cache(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
-| Redis Sentinel - Queues / Shared State(2) | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
-| Gitaly | 3 | 64 vCPU, 240 GB memory | `n1-standard-64` |
-| Praefect | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` |
-| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage(4) | n/a | n/a | n/a |
+| Service | Nodes | Configuration | GCP |
+|-----------------------------------------------------|-------|-------------------------|------------------|
+| Consul<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| PostgreSQL<sup>1</sup> | 3 | 32 vCPU, 120 GB memory | `n1-standard-32` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Internal load balancing node<sup>3</sup> | 1 | 8 vCPU, 7.2 GB memory | `n1-highcpu-8` |
+| Redis - Cache<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
+| Redis - Queues / Shared State<sup>2</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
+| Redis Sentinel - Cache<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
+| Redis Sentinel - Queues / Shared State<sup>2</sup> | 3 | 1 vCPU, 3.75 GB memory | `n1-standard-1` |
+| Gitaly | 3 | 64 vCPU, 240 GB memory | `n1-standard-64` |
+| Praefect | 3 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a |

diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md
index 3e0ed87b56dc76..8fcd350d22aac2 100644
--- a/doc/administration/reference_architectures/5k_users.md
+++ b/doc/administration/reference_architectures/5k_users.md
@@ -24,19 +24,19 @@ costly-to-operate environment by using the
 | Service | Nodes | Configuration | GCP | AWS | Azure |
 |--------------------------------------------|-------------|-------------------------|-----------------|--------------|----------|
-| External load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Redis(2) | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
-| Consul(1) + Sentinel(2) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| PostgreSQL(1) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
-| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Internal load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| External load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Redis<sup>2</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
+| Consul<sup>1</sup> + Sentinel<sup>2</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| PostgreSQL<sup>1</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | `m5.xlarge` | `D4s v3` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
 | Gitaly | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` | `m5.2xlarge` | `D8s v3` |
 | Praefect | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
 | Sidekiq | 4 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | `m5.large` | `D2s v3` |
 | GitLab Rails | 3 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | `c5.4xlarge` | `F16s v2` |
 | Monitoring node | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` | `c5.large` | `F2s v2` |
-| Object storage(4) | n/a | n/a | n/a | n/a | n/a |
+| Object storage<sup>4</sup> | n/a | n/a | n/a | n/a | n/a |
 | NFS server (optional, not recommended) | 1 | 4 vCPU, 3.6 GB memory | `n1-highcpu-4` | `c5.xlarge` | `F4s v2` |

@@ -2095,11 +2095,11 @@ use Google Cloud’s Kubernetes Engine (GKE) and associated machine types, but t
 and CPU requirements should translate to most other providers. We hope to update this in the
 future with further specific cloud provider details.

-| Service | Nodes(1) | Configuration | GCP | Allocatable CPUs and Memory |
-|-------------------------------------------------------|----------|-------------------------|------------------|-----------------------------|
-| Webservice | 5 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 79.5 vCPU, 62 GB memory |
-| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
-| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |
+| Service | Nodes<sup>1</sup> | Configuration | GCP | Allocatable CPUs and Memory |
+|-------------------------------------------------------|-------------------|-------------------------|------------------|-----------------------------|
+| Webservice | 5 | 16 vCPU, 14.4 GB memory | `n1-highcpu-16` | 79.5 vCPU, 62 GB memory |
+| Sidekiq | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` | 11.8 vCPU, 38.9 GB memory |
+| Supporting services such as NGINX, Prometheus | 2 | 2 vCPU, 7.5 GB memory | `n1-standard-2` | 3.9 vCPU, 11.8 GB memory |

@@ -2112,15 +2112,15 @@ services where applicable):
 | Service | Nodes | Configuration | GCP |
 |--------------------------------------------|-------|-------------------------|------------------|
-| Redis(2) | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
-| Consul(1) + Sentinel(2) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| PostgreSQL(1) | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
-| PgBouncer(1) | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Internal load balancing node(3) | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Redis<sup>2</sup> | 3 | 2 vCPU, 7.5 GB memory | `n1-standard-2` |
+| Consul<sup>1</sup> + Sentinel<sup>2</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| PostgreSQL<sup>1</sup> | 3 | 4 vCPU, 15 GB memory | `n1-standard-4` |
+| PgBouncer<sup>1</sup> | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Internal load balancing node<sup>3</sup> | 1 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
 | Gitaly | 3 | 8 vCPU, 30 GB memory | `n1-standard-8` |
 | Praefect | 3 | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Praefect PostgreSQL(1) | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
-| Object storage(4) | n/a | n/a | n/a |
+| Praefect PostgreSQL<sup>1</sup> | 1+ | 2 vCPU, 1.8 GB memory | `n1-highcpu-2` |
+| Object storage<sup>4</sup> | n/a | n/a | n/a |

-- 
GitLab
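The cluster topology these patches document maps one node pool per workload type. As a minimal, hypothetical provisioning sketch for the 3,000-user topology above: the cluster name, zone, and pool names below are placeholders, and a zonal cluster is used so that `--num-nodes` matches the node counts in the table one-to-one. Production clusters should instead spread three or more nodes across three availability zones, as the footnotes recommend.

```shell
# Hypothetical zonal GKE cluster; cluster name and zone are placeholders.
gcloud container clusters create gitlab-hybrid --zone us-central1-a --num-nodes 1

# Webservice pool: 2 x n1-highcpu-16 (16 vCPU, 14.4 GB memory each).
gcloud container node-pools create webservice --cluster gitlab-hybrid \
  --zone us-central1-a --machine-type n1-highcpu-16 --num-nodes 2

# Sidekiq pool: 3 x n1-standard-4 (4 vCPU, 15 GB memory each).
gcloud container node-pools create sidekiq --cluster gitlab-hybrid \
  --zone us-central1-a --machine-type n1-standard-4 --num-nodes 3

# Supporting services pool (NGINX, Prometheus): 2 x n1-standard-2.
gcloud container node-pools create supporting --cluster gitlab-hybrid \
  --zone us-central1-a --machine-type n1-standard-2 --num-nodes 2
```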
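Wiring the in-cluster Webservice and Sidekiq pods to the Omnibus-managed backends then amounts to disabling the chart's bundled services and pointing it at external hosts. A heavily trimmed sketch follows; the hostnames are hypothetical stand-ins for the PgBouncer, Redis, and Praefect nodes from the backend tables, and the credentials, secrets, and TLS settings a real installation requires are omitted:

```shell
helm repo add gitlab https://charts.gitlab.io/
helm repo update

# Hostnames below are placeholders; secret and TLS configuration is omitted.
helm upgrade --install gitlab gitlab/gitlab \
  --set global.hosts.domain=example.com \
  --set postgresql.install=false \
  --set global.psql.host=pgbouncer.example.com \
  --set redis.install=false \
  --set global.redis.host=redis.example.com \
  --set global.gitaly.enabled=false \
  --set "global.gitaly.external[0].name=default" \
  --set "global.gitaly.external[0].hostname=praefect.example.com"
```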