Installing OpenShift on AWS

Red Hat OpenShift Service on AWS (ROSA) provides an OpenShift instance to run Keycloak.

About

This module automates tasks around provisioning OpenShift clusters on AWS via the ROSA tool, as described in the ROSA installation guide. The scripts are located in the provision/aws folder of this repository.

It also installs EFS as the storage provider for ReadWriteMany PersistentVolumeClaims with the storage class efs-sc. See AWS Elastic File System as ReadWriteMany storage below for more information.

Prerequisites

  1. Install the AWS CLI

  2. Install OpenTofu

  3. Perform the steps outlined in the ROSA installation guide (a sketch of the CLI commands for the last three sub-steps follows after this list):

    1. Enable ROSA Service in AWS account

    2. Download and install the ROSA command line tool

    3. Create the service linked role for the Elastic Load Balancer

    4. Log in to the ROSA CLI with your Red Hat account token and create AWS account roles and policies

    5. Verify your credentials and quota
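A minimal sketch of the CLI commands behind the last three sub-steps above, assuming the aws and rosa CLIs are already installed and ROSA is enabled in the AWS account; the ROSA installation guide remains the authoritative reference:

# Create the service-linked role used by the Elastic Load Balancer (no-op if it already exists)
aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
# Log in with the token from https://console.redhat.com/openshift/token/rosa
rosa login --token="<your Red Hat account token>"
# Create the AWS account-wide roles and policies required by ROSA
rosa create account-roles --mode auto --yes
# Verify credentials and quota
rosa whoami
rosa verify quota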

Installation

The installation process is automated in the rosa_create_cluster.sh script in the provision/aws folder, which takes its parameters from environment variables.

It loads environment variables pre-set in the .env file inside the provision/aws directory.

The script creates the OpenShift cluster via the rosa create cluster command and additionally creates the required operator roles and OIDC provider. After the installation process is finished, it creates a new admin user.

Example .env file
CLUSTER_NAME=rosa-kcb
VERSION=4.13.8
REGION=eu-central-1
COMPUTE_MACHINE_TYPE=m5.2xlarge
MULTI_AZ=false
REPLICAS=3

If no ADMIN_PASSWORD is provided in the configuration, the script reads it from AWS Secrets Manager.
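With a .env file like the one above in place, running the installation is then roughly as follows (the exact invocation may differ depending on your shell setup; the script picks up the .env file itself):

cd provision/aws
./rosa_create_cluster.sh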

Mandatory parameters

VERSION

OpenShift cluster version.

REGION

AWS region where the cluster should run.

COMPUTE_MACHINE_TYPE

AWS instance type for the default OpenShift worker machine pool.

REPLICAS

Number of worker nodes. If multi-AZ installation is selected, then this needs to be a multiple of the number of AZs available in the region. For example, if the region has 3 AZs, then replicas need to be set to some multiple of 3.

Use the following command to find out about the AZs in the region:

aws ec2 describe-availability-zones --region region-name
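For example, to count the available AZs in the eu-central-1 region from the example .env above (the --query expression is just one way to extract the number):

aws ec2 describe-availability-zones --region eu-central-1 --query 'length(AvailabilityZones[?State==`available`])'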

Optional parameters

CLUSTER_NAME

Name of the cluster. If not set, the output of the whoami command is used.

ADMIN_PASSWORD

Password for the cluster-admin user. If not set, it is obtained from the AWS Secrets Manager secret named by the KEYCLOAK_MASTER_PASSWORD_SECRET_NAME parameter.

KEYCLOAK_MASTER_PASSWORD_SECRET_NAME

Name of the AWS Secrets Manager secret containing the password for the cluster-admin user. Defaults to keycloak-master-password.
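To inspect that secret manually, a command along these lines should work, assuming the default secret name and that the password is stored as a plain secret string:

aws secretsmanager get-secret-value --secret-id keycloak-master-password --query SecretString --output text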

Finding URLs

To find out about existing clusters and their URLs, use the following commands:

rosa list clusters
rosa describe cluster -c cluster-name
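If only the URLs are needed, the JSON output can be filtered, for example with jq; the .api.url and .console.url fields are an assumption based on the cluster JSON returned by the OCM API:

rosa describe cluster -c cluster-name -o json | jq -r '.api.url, .console.url'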

Re-create admin user

The installation script above creates an admin user automatically. In case the user needs to be re-created, this can be done via the rosa_recreate_admin.sh script by providing the CLUSTER_NAME and optionally the ADMIN_PASSWORD parameter.
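Since the scripts take their parameters from environment variables, the invocation might look roughly like this, using the cluster name from the example .env above:

CLUSTER_NAME=rosa-kcb ./rosa_recreate_admin.sh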

Scaling the cluster’s nodes on demand

The standard setup of nodes might be too small for running a load test, while switching to a different instance type and rebuilding the cluster takes a lot of time (about 45 minutes). To allow scaling the cluster on demand, the standard setup therefore includes a machine pool named scaling with instances of type m5.2xlarge, which is auto-scaled based on the current demand from 4 to 15 instances. However, auto-scaling of worker nodes is quite time-consuming, as nodes are scaled one by one.

To scale the machine pool faster, issue a command like the following, and the additional nodes will be spawned at the same time:

rosa edit machinepool -c cluster-name --min-replicas 4 --autorepair scaling

To allow scaling the machine pool back to 0, use a command like the following:

rosa edit machinepool -c cluster-name --min-replicas 0 --autorepair scaling

To use different instance types, use rosa create machinepool to create additional machine pools.
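A sketch of such a command, with a made-up pool name and instance type:

rosa create machinepool -c cluster-name --name scaling-large --instance-type m5.4xlarge --enable-autoscaling --min-replicas 0 --max-replicas 10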

AWS Elastic File System as ReadWriteMany storage

This setup installs EFS as the storage provider for ReadWriteMany PersistentVolumeClaims with the storage class efs-sc.

Using the scripts rosa_efs_create.sh and rosa_efs_delete.sh, the EFS configuration can be added and removed. Those are intended to be called from rosa_create_cluster.sh and rosa_delete_cluster.sh respectively.

Even when the scripts have completed, it might take a little while until the DNS in the PVC picks up the new IP address of the mount point. In the meantime, you might see an error message like “Failed to resolve server file-system-id.efs.aws-region.amazonaws.com”.
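As an illustration of how the storage class is consumed, a ReadWriteMany PersistentVolumeClaim using efs-sc could be created like this (the claim name and size are made up):

oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Gi
EOF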


Rotate admin user password

The admin user password can be rotated via the rosa_rotate_admin_password.sh script. Note that the admin password of existing clusters is not updated by this. The new password can be applied using the rosa_recreate_admin.sh script with the corresponding CLUSTER_NAME variable.
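Assuming the same environment-variable convention as the other scripts, rotating and then applying the new password might look like:

./rosa_rotate_admin_password.sh
CLUSTER_NAME=rosa-kcb ./rosa_recreate_admin.sh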

Uninstallation

The uninstallation is handled by the rosa_delete_cluster.sh script.

The only required parameter is CLUSTER_NAME.

In addition to the cluster itself, it deletes the cluster’s operator roles, the OIDC provider, and the admin user.
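Assuming the same environment-variable convention as the other scripts, deleting the example cluster would look roughly like:

CLUSTER_NAME=rosa-kcb ./rosa_delete_cluster.sh
# verify the cluster is gone or in the 'uninstalling' state
rosa list clusters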