Concepts for sizing CPU and memory resources

Understand concepts for avoiding resource exhaustion and congestion.

Use this as a starting point to size a production environment. Adjust the values for your environment as needed based on your load tests.

Performance recommendations

  • Performance decreases when scaling to more Pods (due to additional overhead) and when using a multi-cluster setup (due to additional traffic and operations).

  • Increased cache sizes can improve performance when Keycloak instances run for longer periods of time. This decreases response times and reduces IOPS on the database. Still, those caches need to be filled when an instance is restarted, so do not size resources too tightly based on the steady state measured once the caches have been filled.

  • Use these values as a starting point and perform your own load tests before going into production.

Summary:

  • CPU usage scales linearly with the number of requests, up to the tested limits listed below.

Recommendations:

  • The base memory usage for a Pod including caches of Realm data and 10,000 cached sessions is 1250 MB of RAM.

  • In containers, Keycloak allocates 70% of the memory limit for heap-based memory. It will also use approximately 300 MB of non-heap-based memory. To calculate the requested memory, use the calculation above. For the memory limit, subtract the non-heap memory from the value above and divide the result by 0.7; see the sizing sketch after this list.

  • For each 15 password-based user logins per second, allocate 1 vCPU to the cluster (tested with up to 300 per second).

    Keycloak spends most of the CPU time hashing the password provided by the user, and this cost is proportional to the number of hash iterations.

  • For each 120 client credential grants per second, allocate 1 vCPU to the cluster (tested with up to 2000 per second).*

    Most CPU time goes into creating new TLS connections, as each client runs only a single request.

  • For each 120 refresh token requests per second, allocate 1 vCPU to the cluster (tested with up to 435 refresh token requests per second).*

  • Leave 150% extra headroom for CPU usage to handle spikes in the load. This ensures a fast startup of the node and enough capacity to handle failover tasks. In our tests, Keycloak's performance dropped significantly when its Pods were CPU-throttled.

  • When performing requests with more than 2500 different clients concurrently, not all client information will fit into Keycloak’s caches when those caches use the standard size of 10,000 entries each. As a result, the database may become a bottleneck because client data is reloaded frequently from the database. To reduce the database usage, increase the users cache size by two times the number of concurrently used clients, and the realms cache size by four times the number of concurrently used clients.
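
The recommendations above can be folded into a small calculation. The following Python sketch is only illustrative (the function and constant names are not part of Keycloak); it applies the per-activity vCPU figures, the 150% CPU headroom, and the memory-limit formula from the list above, and the results should be re-validated with your own load tests.

    # Per-Pod CPU and memory sizing as described in the recommendations above.
    BASE_MEMORY_MB = 1250   # base usage including realm caches and 10,000 cached sessions
    NON_HEAP_MB = 300       # approximate non-heap memory usage
    HEAP_FRACTION = 0.70    # Keycloak allocates 70% of the memory limit for heap
    CPU_HEADROOM = 1.50     # 150% extra headroom for spikes, startup, and failover

    def cluster_vcpus(logins_per_s, client_grants_per_s, refreshes_per_s):
        """vCPUs requested for the whole cluster."""
        return (logins_per_s / 15            # 1 vCPU per 15 password-based logins/s
                + client_grants_per_s / 120  # 1 vCPU per 120 client credential grants/s
                + refreshes_per_s / 120)     # 1 vCPU per 120 refresh token requests/s

    def pod_resources(logins_per_s, client_grants_per_s, refreshes_per_s, pods):
        """Per-Pod CPU request/limit (vCPU) and memory request/limit (MB)."""
        cpu_request = cluster_vcpus(logins_per_s, client_grants_per_s, refreshes_per_s) / pods
        cpu_limit = cpu_request * (1 + CPU_HEADROOM)
        memory_request = BASE_MEMORY_MB
        memory_limit = (BASE_MEMORY_MB - NON_HEAP_MB) / HEAP_FRACTION
        return cpu_request, cpu_limit, memory_request, memory_limit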

Keycloak, which by default stores user sessions in the database, requires the following resources for optimal performance on an Aurora PostgreSQL multi-AZ database:

For every 100 login/logout/refresh requests per second:

  • Budget for 1400 Write IOPS.

  • Allocate between 0.35 and 0.7 vCPU.

The vCPU requirement is given as a range because, as CPU saturation on the database host increases, the CPU usage per request decreases while response times increase. A lower CPU quota on the database can lead to slower response times during peak loads; choose a larger CPU quota if fast response times during peak loads are critical. See below for an example.
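
As a sketch of this database budgeting (the function name and the optional idle-load parameter are illustrative additions; the constants are the per-100-requests figures from this section):

    # Aurora PostgreSQL budgeting: 1400 write IOPS and 0.35 to 0.7 vCPU per
    # 100 login/logout/refresh requests per second. The idle_vcpu parameter is
    # an assumed base load of an otherwise idle database.
    def database_resources(requests_per_s, idle_vcpu=0.0):
        write_iops = requests_per_s / 100 * 1400
        vcpu_low = requests_per_s / 100 * 0.35 + idle_vcpu   # slower responses during peaks
        vcpu_high = requests_per_s / 100 * 0.70 + idle_vcpu  # faster responses during peaks
        return write_iops, vcpu_low, vcpu_high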

Measuring the activity of a running Keycloak instance

Sizing of a Keycloak instance depends on the actual and forecasted numbers for password-based user logins, refresh token requests, and client credential grants as described in the previous section.

To retrieve the actual numbers of a running Keycloak instance for these three key inputs, use the metrics Keycloak provides:

  • The user event metric keycloak_user_events_total for the event type login includes both password-based and cookie-based logins; still, it can serve as a first approximation for this sizing guide.

  • To find out the number of password validations performed by Keycloak, use the metric keycloak_credentials_password_hashing_validations_total. The metric also contains tags providing details about the hashing algorithm used and the outcome of the validation. The available tags are: realm, algorithm, hashing_strength, outcome.

  • Use the user event metric keycloak_user_events_total for the event types refresh_token and client_login for refresh token requests and client credential grants respectively.

See the Monitoring user activities with event metrics and HTTP metrics guides for more information.

These metrics are crucial for tracking daily and weekly fluctuations in user activity loads, identifying emerging trends that may indicate a need to resize the system, and validating sizing calculations. By systematically measuring and evaluating these user event metrics, you can ensure your system remains appropriately scaled and responsive to changes in user behavior and demand.
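
To turn these metrics into the per-second rates used by the sizing calculations, you can query your monitoring system. The following is a minimal sketch that assumes the metrics are scraped into a Prometheus server at the address shown and that the event type is exposed as a label named event; the URL and label names are assumptions to adjust for your setup.

    # Fetch current request rates from Prometheus to feed into the sizing
    # calculations above. PROMETHEUS_URL and the "event" label name are
    # assumptions; adjust them to match your monitoring setup.
    import requests

    PROMETHEUS_URL = "http://prometheus.example.com"  # assumed address

    def rate_per_second(promql):
        """Run an instant PromQL query and return a single scalar result."""
        resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": promql})
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        return float(result[0]["value"][1]) if result else 0.0

    # Password validations approximate password-based logins.
    logins = rate_per_second(
        'sum(rate(keycloak_credentials_password_hashing_validations_total[5m]))')
    refreshes = rate_per_second(
        'sum(rate(keycloak_user_events_total{event="refresh_token"}[5m]))')
    client_grants = rate_per_second(
        'sum(rate(keycloak_user_events_total{event="client_login"}[5m]))')

    print(f"logins/s={logins:.1f} refreshes/s={refreshes:.1f} client_grants/s={client_grants:.1f}")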

Calculation example (single cluster)

Target size:

  • 45 logins and logouts per second

  • 360 client credential grants per second*

  • 360 refresh token requests per second (a 1:8 ratio of logins to refresh token requests)*

  • 3 Pods

Limits calculated:

  • CPU requested per Pod: 3 vCPU

    (45 logins per second = 3 vCPU, 360 client credential grants per second = 3 vCPU, 360 refresh tokens = 3 vCPU. This sums up to 9 vCPU total. With 3 Pods running in the cluster, each Pod then requests 3 vCPU)

  • CPU limit per Pod: 7.5 vCPU

    (Allow an additional 150% of the requested CPU to handle peaks, startups, and failover tasks)

  • Memory requested per Pod: 1250 MB

    (1250 MB base memory)

  • Memory limit per Pod: 1360 MB

    (1250 MB expected memory usage minus 300 MB non-heap usage, divided by 0.7)

  • Aurora Database instance: either db.t4g.large or db.t4g.xlarge depending on the required response times during peak loads.

    (45 logins per second, 5 logouts per second, and 360 refresh tokens per second sum up to 410 requests per second. The expected DB usage is 1.4 to 2.8 vCPU, with a DB idle load of 0.3 vCPU. This indicates either a 2 vCPU db.t4g.large instance or a 4 vCPU db.t4g.xlarge instance. A 2 vCPU db.t4g.large would be more cost-effective if response times are allowed to be higher during peak usage. In our tests, the median response time for a login and a token refresh increased by up to 120 ms once CPU saturation reached 90% on a 2 vCPU db.t4g.large instance in this scenario. For faster response times during peak usage, consider a 4 vCPU db.t4g.xlarge instance.)
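
The numbers in this example can be checked with a few lines of arithmetic. This standalone sketch simply re-applies the formulas from the recommendations above; the 1360 MB memory limit in the text is the roughly 1357 MB result rounded up.

    # Standalone check of the single-cluster example above.
    logins, client_grants, refreshes, pods = 45, 360, 360, 3

    cluster_vcpu = logins / 15 + client_grants / 120 + refreshes / 120  # 3 + 3 + 3 = 9 vCPU
    cpu_request = cluster_vcpu / pods                                   # 3 vCPU per Pod
    cpu_limit = cpu_request * 2.5                                       # +150% headroom = 7.5 vCPU
    memory_request = 1250                                               # MB of base memory
    memory_limit = (1250 - 300) / 0.7                                   # ~1357 MB, rounded to 1360 MB

    db_requests = 45 + 5 + 360                                          # logins + logouts + refreshes = 410/s
    db_vcpu = (db_requests / 100 * 0.35, db_requests / 100 * 0.70)      # matches the 1.4 to 2.8 vCPU range above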
