Deploying CloudNativePG in multiple availability zones

Deploy CloudNativePG as the database building block in a single-cluster deployment.

This topic describes how to deploy a CloudNativePG cluster across multiple availability zones to tolerate one or more availability zone failures in a given AWS region.

This deployment is intended for the setup described in the Concepts for single-cluster deployments guide. Use it together with the other building blocks outlined in the Building blocks single-cluster deployments guide.

These blueprints show a minimal, functionally complete example with good baseline performance for regular installations. You still need to adapt them to your environment and your organization’s standards and security best practices.

Architecture

CloudNativePG is an open-source operator that manages PostgreSQL clusters on Kubernetes. It is designed to operate one primary writer instance and optional reader instances.

Figure: CloudNativePG multi-AZ architecture (cnpg multi az.dio)

Installing CloudNativePG

Installing the CloudNativePG Operator

Install the operator directly using the operator manifest:

Command:
kubectl apply --server-side -f \
  https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.28/releases/cnpg-1.28.1.yaml

Use the following command to verify the installation status:

Command:
kubectl rollout status deployment \
  -n cnpg-system cnpg-controller-manager
Output:
deployment "cnpg-controller-manager" successfully rolled out

You can also install the operator using other supported methods, such as the Helm chart, OLM, or the cnpg plugin for kubectl. See the CloudNativePG documentation for details.

Installing CloudNativePG Cluster

Installation and configuration of a CloudNativePG cluster is done through a Cluster resource.

  1. Create a cluster.yaml file based on the following content:

    Cluster resource:
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    
    metadata:
      name: cnpg-keycloak
    
    spec:
    
      instances: 3 (1)
    
      storage:
        size: 8Gi (2)
    
      affinity: (3)
        podAntiAffinityType: required
        topologyKey: topology.kubernetes.io/zone
    
      postgresql:
        synchronous: (4)
          method: any
          number: 1
          dataDurability: required
        parameters:
          max_connections: "100" (5)
    
      bootstrap:
        initdb: (6)
          database: keycloak
          owner: keycloak
    
      managed:
        services:
          disabledDefaultServices: ["ro", "r"] (7)
    1 Number of instances.
    2 Pod storage size. This setting needs to account for the expected size of the database and the PostgreSQL write-ahead log (WAL).
    3 Pod anti-affinity rules for the Kubernetes scheduler. The topology.kubernetes.io/zone value ensures the scheduler spreads the pods across different availability zones.
    4 Enable quorum-based synchronous replication with a single standby server. For more information about synchronous replication, see the CloudNativePG documentation.
    5 Database connection limit. Adjust this value based on the expected total number of JDBC connections from the Keycloak cluster.
    6 Create a database keycloak owned by the user keycloak.
    7 Disables the default -ro and -r services, which are intended for read-only applications. Since Keycloak requires read-write access, it connects only to the -rw service.
  2. Create the cnpg-keycloak namespace.

    Command:
    kubectl create ns cnpg-keycloak
  3. Create the cnpg-keycloak cluster resource by applying the cluster.yaml file.

    Command:
    kubectl -n cnpg-keycloak apply -f cluster.yaml
  4. Wait for the cnpg-keycloak cluster to reach the Ready state.

    Command:
    kubectl -n cnpg-keycloak wait --for condition=Ready --timeout=300s cluster cnpg-keycloak
    Output:
    cluster.postgresql.cnpg.io/cnpg-keycloak condition met
  5. Optionally, view the cnpg-keycloak cluster pods and their roles.

    Command:
    kubectl -n cnpg-keycloak get pods -L role
    Example output:
    NAME              READY   STATUS    RESTARTS   AGE   ROLE
    cnpg-keycloak-1   1/1     Running   0          10m   primary
    cnpg-keycloak-2   1/1     Running   0          10m   replica
    cnpg-keycloak-3   1/1     Running   0          10m   replica
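
With the required pod anti-affinity above, each instance should land on a node in a different availability zone. The sketch below checks this by joining a pod-to-node mapping with a node-to-zone mapping and counting the distinct zones in use. The sample data is hypothetical; in a live cluster you would derive the two mappings from `kubectl -n cnpg-keycloak get pods -o wide` and `kubectl get nodes -L topology.kubernetes.io/zone`.

```shell
# Hypothetical pod->node and node->zone mappings; in a live cluster,
# generate them from the kubectl commands mentioned above.
pods='cnpg-keycloak-1 node-a
cnpg-keycloak-2 node-b
cnpg-keycloak-3 node-c'
zones='node-a us-east-1a
node-b us-east-1b
node-c us-east-1c'

# First pass records each node's zone; second pass marks the zone of each
# pod's node; END counts the distinct zones in use.
distinct=$(awk 'NR==FNR { zone[$1] = $2; next }
                { seen[zone[$2]] = 1 }
                END { n = 0; for (z in seen) n++; print n }' \
  <(printf '%s\n' "$zones") <(printf '%s\n' "$pods"))
echo "pods span $distinct zones"
```

With the sample data this prints `pods span 3 zones`; a result lower than the instance count would mean two instances share a zone.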

Enable Monitoring of the CloudNativePG cluster

  1. Create a PodMonitor resource.

    Command:
    kubectl -n cnpg-keycloak apply -f - <<EOF
    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: cnpg-keycloak-pm (1)
    spec:
      selector:
        matchLabels:
          cnpg.io/cluster: cnpg-keycloak
      podMetricsEndpoints:
      - port: metrics
    EOF
    1 The name of the pod monitor resource needs to be different from the name of the cluster.
  2. Add the CloudNativePG Grafana Dashboard to your Grafana instance.

Connecting Keycloak to CloudNativePG Cluster

Now that a CloudNativePG cluster has been installed, here are the relevant Keycloak CR options to connect Keycloak to the database service. These changes are required in the Deploying Keycloak across multiple availability-zones with the Operator guide. The JDBC URL is configured to use the CloudNativePG read-write (writer) service.

  1. Update spec.db.url to be jdbc:postgresql://cnpg-keycloak-rw.cnpg-keycloak.svc.cluster.local:5432/keycloak, the writer instance of the CloudNativePG cluster.

  2. Ensure that the Secrets referenced by spec.db.usernameSecret and spec.db.passwordSecret contain the username and password for the CloudNativePG cluster.
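
The -rw service name in step 1 follows CloudNativePG's `<cluster>-rw.<namespace>.svc.cluster.local` naming pattern. As a sanity check, the URL can be assembled from the names used in this guide:

```shell
# Cluster name, namespace, and database as created earlier in this guide.
cluster=cnpg-keycloak
namespace=cnpg-keycloak
database=keycloak

# CloudNativePG exposes the primary (writer) through the <cluster>-rw service.
url="jdbc:postgresql://${cluster}-rw.${namespace}.svc.cluster.local:5432/${database}"
echo "$url"
```

This prints `jdbc:postgresql://cnpg-keycloak-rw.cnpg-keycloak.svc.cluster.local:5432/keycloak`, the value used for spec.db.url in step 1.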

Secure the JDBC connection

  1. Export the CloudNativePG CA certificate to a file.

    Command:
    kubectl --namespace cnpg-keycloak get secrets cnpg-keycloak-ca -ojson | jq -r '.data."ca.crt"' | base64 -d > cnpg-keycloak-ca-cert.pem
  2. Create a ConfigMap resource cnpg-keycloak-ca in the keycloak namespace with the database CA certificate.

    Command:
    kubectl --namespace keycloak create configmap cnpg-keycloak-ca --from-file cert.pem=./cnpg-keycloak-ca-cert.pem
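
To confirm the export produced a valid PEM certificate, the file can be inspected with openssl. The sketch below generates a throwaway self-signed certificate (with a hypothetical CN) so it is runnable anywhere; against a real cluster, run the same `openssl x509` check on cnpg-keycloak-ca-cert.pem.

```shell
# Generate a throwaway self-signed certificate as a stand-in for the
# exported cnpg-keycloak-ca-cert.pem (demo only; the CN is hypothetical).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -subj "/CN=demo-ca" -days 1 -out demo-ca.pem 2>/dev/null

# Print the certificate subject; a garbage or truncated export would fail here.
openssl x509 -in demo-ca.pem -noout -subject
```

If the exported file is not a valid certificate, `openssl x509` exits non-zero, which catches an incomplete or mis-decoded export before Keycloak tries to use it.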

Prepare the secret for accessing the database

Create keycloak-db-secret in the keycloak namespace based on the cnpg-keycloak-app secret from the cnpg-keycloak namespace by running the following script.

Script:
secret=$(kubectl get --namespace cnpg-keycloak secret cnpg-keycloak-app -ojson)
username=$(echo "$secret" | jq -r .data.username | base64 -d)
password=$(echo "$secret" | jq -r .data.password | base64 -d)
kubectl --namespace keycloak create secret generic keycloak-db-secret \
  --from-literal="username=$username" --from-literal="password=$password"
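
The decoding steps in the script are needed because Kubernetes stores Secret data base64-encoded, so each field must be decoded before it is re-created as a literal. A minimal round-trip illustration (the value below is a hypothetical sample, not the real password):

```shell
# Kubernetes Secret data is base64-encoded; sample encode/decode round trip.
encoded=$(printf '%s' 'example-password' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "decoded: $decoded"
```

Skipping the `base64 -d` step would store the encoded form in keycloak-db-secret, and Keycloak would then fail to authenticate against the database.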

Next steps

After successfully deploying the CloudNativePG database, continue with Deploying Keycloak across multiple availability-zones with the Operator.
