This topic describes how to deploy a CloudNativePG cluster across multiple availability zones to tolerate one or more availability zone failures in a given AWS region.
This deployment is intended for the setup described in the Concepts for single-cluster deployments guide, and complements the other building blocks outlined in the Building blocks single-cluster deployments guide.
| We provide these blueprints to show a minimal, functionally complete example with good baseline performance for regular installations. You would still need to adapt it to your environment and your organization’s standards and security best practices. |
CloudNativePG is an open-source operator that manages PostgreSQL clusters on Kubernetes. It is designed to operate one primary writer instance and optional reader instances.
Install the operator directly using the operator manifest:
kubectl apply --server-side -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.28/releases/cnpg-1.28.1.yaml
Use the following command to verify the installation status:
kubectl rollout status deployment \
-n cnpg-system cnpg-controller-manager
deployment "cnpg-controller-manager" successfully rolled out
It is possible to install the operator using other supported methods, such as a Helm chart, OLM, or the cnpg plugin for kubectl. See the CloudNativePG documentation for details.
Installation and configuration of a CloudNativePG cluster is done via a Cluster resource.
Create a cluster.yaml file based on the following content:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cnpg-keycloak
spec:
  instances: 3 (1)
  storage:
    size: 8Gi (2)
  affinity: (3)
    podAntiAffinityType: required
    topologyKey: topology.kubernetes.io/zone
  postgresql:
    synchronous: (4)
      method: any
      number: 1
      dataDurability: required
    parameters:
      max_connections: "100" (5)
  bootstrap:
    initdb: (6)
      database: keycloak
      owner: keycloak
  managed:
    services:
      disabledDefaultServices: ["ro", "r"] (7)
| 1 | Number of instances. |
| 2 | Pod storage size. This setting needs to take into account the expected size of the database and the PostgreSQL write-ahead log (WAL). |
| 3 | Pod affinity rules for the Kubernetes scheduler. The topology.kubernetes.io/zone value ensures the scheduler spreads the pods across different availability zones. |
| 4 | Enable quorum-based synchronous replication with a single standby server. For more information about synchronous replication, see the CloudNativePG documentation. |
| 5 | Database connection limit. Adjust this value based on the expected total number of JDBC connections from the Keycloak cluster. |
| 6 | Create a database keycloak owned by the user keycloak. |
| 7 | Disables the -ro and -r default services, which are intended for read-only applications. Since Keycloak requires read-write access, it connects only to the -rw service. |
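As a rough sanity check for callout 5, max_connections should exceed the total number of JDBC connections the Keycloak cluster can open, plus headroom for operator, monitoring, and superuser sessions. A minimal sketch of the arithmetic, with assumed values that you should replace with your deployment's actual numbers:

```shell
# All values below are assumptions for illustration -- adjust to your deployment.
keycloak_pods=3        # number of Keycloak replicas
pool_max_per_pod=30    # assumed JDBC pool maximum per Keycloak pod
reserved=10            # headroom for operator, monitoring, superuser sessions

# Required max_connections = pods * pool size + reserved headroom
echo $(( keycloak_pods * pool_max_per_pod + reserved ))
```

With these assumed numbers the result is 100, matching the value in the example above; if you raise the Keycloak replica count or pool size, raise max_connections accordingly.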
Create the cnpg-keycloak namespace.
kubectl create ns cnpg-keycloak
Create the cnpg-keycloak cluster resource by applying the cluster.yaml file.
kubectl -n cnpg-keycloak apply -f cluster.yaml
Wait for the cnpg-keycloak cluster to get into the Ready state.
kubectl -n cnpg-keycloak wait --for condition=Ready --timeout=300s cluster cnpg-keycloak
cluster.postgresql.cnpg.io/cnpg-keycloak condition met
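If you installed the cnpg plugin for kubectl mentioned earlier, it provides a more detailed view than the Ready condition alone, including the current primary and the streaming replication state. This is a convenience, not a required step:

```shell
# Detailed cluster status: current primary, replication lag, instance roles
kubectl cnpg status cnpg-keycloak -n cnpg-keycloak
```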
Optionally, view the cnpg-keycloak cluster pods and their roles.
kubectl -n cnpg-keycloak get pods -L role
NAME READY STATUS RESTARTS AGE ROLE
cnpg-keycloak-1 1/1 Running 0 10m primary
cnpg-keycloak-2 1/1 Running 0 10m replica
cnpg-keycloak-3 1/1 Running 0 10m replica
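To confirm that the anti-affinity rule from callout 3 actually spread the instances, you can cross-reference each pod's node with that node's zone label. A quick check, assuming your nodes carry the standard topology.kubernetes.io/zone label:

```shell
# Show which node each database pod landed on
kubectl -n cnpg-keycloak get pods -o wide

# Show the availability zone of each node; each cnpg-keycloak pod
# should be running on a node in a different zone
kubectl get nodes -L topology.kubernetes.io/zone
```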
Create a PodMonitor resource.
kubectl -n cnpg-keycloak apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cnpg-keycloak-pm (1)
spec:
  selector:
    matchLabels:
      cnpg.io/cluster: cnpg-keycloak
  podMetricsEndpoints:
  - port: metrics
EOF
| 1 | The name of the pod monitor resource needs to be different from the name of the cluster. |
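Optionally, you can spot-check the endpoint the PodMonitor scrapes. CloudNativePG instance pods expose Prometheus metrics on port 9187; a quick check via port-forward (assumes curl is available on your workstation):

```shell
# Forward the metrics port of one instance to localhost (runs in background)
kubectl -n cnpg-keycloak port-forward pod/cnpg-keycloak-1 9187:9187 &

# Spot-check a few CloudNativePG metrics (prefixed with cnpg_)
curl -s http://localhost:9187/metrics | grep '^cnpg_' | head
```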
Add the CloudNativePG Grafana Dashboard to your Grafana instance.
Now that a CloudNativePG cluster has been installed, here are the relevant Keycloak CR options to connect Keycloak to the database service. These changes will be required in the Deploying Keycloak across multiple availability-zones with the Operator guide. The JDBC URL is configured to use the read-write service of the CloudNativePG cluster.
Update spec.db.url to be jdbc:postgresql://cnpg-keycloak-rw.cnpg-keycloak.svc.cluster.local:5432/keycloak, the writer instance of the CloudNativePG cluster.
Ensure that the Secrets referenced by spec.db.usernameSecret and spec.db.passwordSecret contain usernames and passwords for the CloudNativePG cluster.
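For illustration, the resulting db section of the Keycloak CR might look like the following sketch. The CR name and any surrounding fields come from your existing Keycloak deployment, and the keycloak-db-secret referenced here is created in a later step of this guide:

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
spec:
  db:
    vendor: postgres
    # Read-write service of the CloudNativePG cluster
    url: jdbc:postgresql://cnpg-keycloak-rw.cnpg-keycloak.svc.cluster.local:5432/keycloak
    usernameSecret:
      name: keycloak-db-secret
      key: username
    passwordSecret:
      name: keycloak-db-secret
      key: password
```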
Export the CloudNativePG CA certificate to a file.
kubectl --namespace cnpg-keycloak get secrets cnpg-keycloak-ca -ojson | jq -r '.data."ca.crt"' | base64 -d > cnpg-keycloak-ca-cert.pem
Create a ConfigMap resource cnpg-keycloak-ca in the keycloak namespace with the database CA secret.
kubectl --namespace keycloak create configmap cnpg-keycloak-ca --from-file cert.pem=./cnpg-keycloak-ca-cert.pem
Create keycloak-db-secret in the keycloak namespace based on cnpg-keycloak-app secret from the cnpg-keycloak namespace by running the following script.
secret=$(kubectl get --namespace cnpg-keycloak secret cnpg-keycloak-app -ojson)
username=$(echo "$secret" | jq -r .data.username | base64 -d)
password=$(echo "$secret" | jq -r .data.password | base64 -d)
kubectl --namespace keycloak create secret generic keycloak-db-secret \
--from-literal="username=$username" --from-literal="password=$password"
After successful deployment of the CloudNativePG database, continue with Deploying Keycloak across multiple availability-zones with the Operator.