Keycloak 8.0.1 released

Monday, December 02 2019

To download the release go to Keycloak downloads.

Highlights

LDAP Issue

This release fixes a critical vulnerability in LDAP introduced in Keycloak 7. If you are using Keycloak 7.0.0, 7.0.1 or 8.0.0 in production we strongly suggest that you upgrade immediately.

WildFly 18.0.1.Final

Upgrade to WildFly 18.0.1.Final, which includes fixes for a number of CVEs in third-party libraries.

All resolved issues

The full list of resolved issues is available in JIRA

Upgrading

Before you upgrade remember to backup your database and check the upgrade guide for anything that may have changed.

Introducing Keycloak.X

Friday, October 11 2019, posted by Stian Thorgersen

What are we trying to improve?

The first stable release of Keycloak was way back in 2014. As always when building software there are things that could have been done better.

With Keycloak.X we are aiming to introduce some bigger changes to make Keycloak leaner, easier and more future-proof.

A few goals with Keycloak.X are:

  • Make it easier to configure
  • Make it easier to scale, including multi-site support
  • Make it easier to extend
  • Reduce startup time and memory footprint
  • Support zero-downtime upgrades
  • Support continuous delivery

This work will be broken into several parts:

  • A new and improved storage layer
  • A new distribution powered by Quarkus
  • A new approach to custom providers

Distribution

Building a new distribution powered by Quarkus will allow us to significantly reduce startup time and memory footprint.

We will be able to create a leaner distribution in terms of size and dependencies as well. Reducing dependencies will further reduce the number of CVEs in third-party libraries.

We are also planning to introduce a proper Keycloak configuration file, where we will document directly how to configure everything related to Keycloak. In the current WildFly based distribution the configuration file is very complex as it contains everything to configure the underlying application server, and more often than not it is required to refer to WildFly documentation to figure out how to configure things properly.

Storage

The current storage layer is complex, especially when deployed across multiple sites. It has a number of scalability issues, for example around the number of realms and clients a deployment can handle. Sessions are only kept in-memory, which can be good for performance, but not so great for scaling when you consider that a large portion of sessions are idle and unused most of the time.

Exactly what the new storage layer will look like is still to be decided, but we know for sure that we want to:

  • Reduce complexity with regards to configuring, SPIs and schema
  • Support zero downtime upgrades
  • Make sure we can scale to a large number of realms and clients
  • Make sure we can scale to millions of sessions, including support for persisting and passivation

Providers

Providers today have some issues that we would like to address, including:

  • Deprecation and versioned approach to SPIs - breaking changes to APIs are horrible in a continuous delivery world
  • Polyglot - not everyone is a JavaEE developer, let's embrace that and allow more options when it comes to extending Keycloak
  • Sand-boxing - allow safe customizations in a SaaS world

Continuous Delivery

We are aiming to make it easier to use Keycloak in a continuous delivery world. This should consider Keycloak upgrades, custom providers as well as configuration.

Keycloak upgrades should be seamless and there should not be any breaking changes, rather deprecation periods.

It should be possible to more easily manage and reproduce the config of Keycloak, including realm config, in different environments. A developer should be able to try some config changes in a dev environment, push to a test environment, before finally making the changes live in a production environment.

Contributing

We would love help from the community on Keycloak.X. You can contribute with code, with discussions or simply just trying it out and giving us feedback.

Migration to Keycloak.X

There will be a migration required to Keycloak.X. In fact there will be multiple migrations required as everything mentioned earlier will not be ready in one go.

It is an aim to make this migration as simple and painless as possible though.

Timing

We are starting with the Quarkus powered distribution. The aim is to have a fully functional stable distribution by the end of 2019, but we already have a prototype you can try out and contribute to.

In 2020 we are aiming to work on both the storage layer and providers. Hopefully, by the end of 2020 we will have most if not everything sorted out.

We will continue to support the current Keycloak version in parallel with Keycloak.X and will give everyone plenty of time to do the migration before we eventually pull the plug on the old version.

What's Coming To Keycloak

Tuesday, September 03 2019, posted by Stian Thorgersen

New Account Console and Account REST API

The current account console is getting dated. It also has usability issues and is hard to extend. For this reason we had the UXD team at Red Hat develop wireframes for a new account console. The new console is being implemented with React.js, providing a better user experience as well as making it easier to extend and customise.

WebAuthn

We are working towards adding WebAuthn support both for two factor authentication and a passwordless experience. This task is not as simple as adding an authenticator for WebAuthn; it will also require work on improving authentication flows and the account console.

Operator

Operators are becoming an important way to manage software running on Kubernetes and we are working on an operator for Keycloak. The aim is to have an operator published on OperatorHub.io soon which provides basic install and seamless upgrade capabilities. This will be based on the awesome work done by the Red Hat Integreatly team.

Vault

At the moment, keeping credentials such as LDAP bind credentials secure requires encrypting the whole database. This can be complex and can also have a performance overhead.

We are working towards enabling loading credentials, such as LDAP bind credential and SMTP password, from an external vault. We're providing a built-in integration with Kubernetes secrets as well as an SPI allowing integrating with any vault provider.

In the future we will also provide the option to encrypt other more dynamic credentials at rest in the database.

User Profile

Currently there's no single place to define user profiles for a realm. To resolve this we are planning to introduce the Profile SPI, which will make it possible to define a user profile for a realm. It will be possible to define mandatory as well as optional attributes and also add validation to the attributes.

The built-in Profile SPI provider will make it possible to declaratively define the user profile for a realm and we also aim to have an editor in the admin console.

Observability

Keycloak already comes with basic support for metrics and health endpoints provided by the underlying WildFly container. We plan to document how to enable this as well as extend with Keycloak specific metrics and health checks. If you would like to try this out today check the WildFly documentation.

Continuous Delivery

Over the last few months the team has invested a significant amount of time into automated testing and builds. This will pay off in the long run, as we will need to spend less time on releases and will also make sure Keycloak is always release ready. In fact, we're taking this as far as not allowing maintainers to manually merge PRs anymore; instead we have created a bot called the Merge Monster that will merge PRs automatically after they have been manually reviewed and all tests have passed.

Keycloak.X

It's 5 years since the first Keycloak release, so it's high time for some rearchitecting. More details coming soon!

Kanban Planning Board

For more insight and details into what we are working on and our backlog, check out our Kanban Planning Board.

Keycloak and JDBC Ping

Monday, August 12 2019, posted by Sebastian Łaskawiec

A few months back, we had a great article about clustering using JDBC_PING protocol. Since then, we introduced some improvements for the Keycloak container image that can simplify the setup. So, before diving into this blog post, I highly encourage you to visit the Keycloak Cluster Setup article.

What has changed in our Container Image?

Probably the most important change is configuring the JGroups discovery protocol by using variables (see the Pull Request). Once the change got in, we could configure the JGroups discovery by setting two properties:

  • JGROUPS_DISCOVERY_PROTOCOL
  • JGROUPS_DISCOVERY_PROPERTIES

Let's apply the changes, shall we...

The JDBC_PING-based setup works fine in all scenarios where all Keycloak instances connect to the same database. Since JDBC_PING can be configured to obtain a database connection using a JNDI binding, it can easily connect to the Keycloak database. All we need to do is add two parameters to our docker image:

  • JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING
  • JGROUPS_DISCOVERY_PROPERTIES=datasource_jndi_name=java:jboss/datasources/KeycloakDS

You may find an end-to-end scenario here.
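
Putting the two settings together, a minimal container start might look like the sketch below (the image name and database settings are placeholders for illustration, not part of the original scenario):

```shell
# Sketch: one Keycloak container joining a JDBC_PING cluster through the
# Keycloak datasource. The DB_* values are placeholders for your database.
docker run -d --name keycloak \
  -e DB_VENDOR=postgres \
  -e DB_ADDR=postgres.example.com \
  -e DB_USER=keycloak \
  -e DB_PASSWORD=secret \
  -e JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING \
  -e JGROUPS_DISCOVERY_PROPERTIES=datasource_jndi_name=java:jboss/datasources/KeycloakDS \
  jboss/keycloak
```

Each additional instance started the same way against the same database should discover the existing members automatically.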

Additional configuration

In some scenarios, you may need additional configuration. All additional settings can be added to JGROUPS_DISCOVERY_PROPERTIES. Here are some common problems and hints for solving them:

  • The initialization SQL needs to be adjusted: look at the initialize_sql JDBC_PING property.
  • When Keycloak crashes, the database is not cleared: turn the remove_old_coords_on_view_change property on. Additionally, when the cluster is not too large, you may turn the remove_all_data_on_view_change property on.
  • Sometimes Keycloak doesn't write its data into the database: you may lower the info_writer_sleep_time and info_writer_max_writes_after_view property values.
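
Several of these settings can be combined into a single comma-separated JGROUPS_DISCOVERY_PROPERTIES value. A sketch (the tuning values here are made-up examples, not recommendations):

```shell
# Illustrative only: combine the JNDI datasource with extra JDBC_PING
# tuning properties in one comma-separated value. The numeric value is
# an arbitrary example, not a recommended setting.
JGROUPS_DISCOVERY_PROPERTIES="datasource_jndi_name=java:jboss/datasources/KeycloakDS,\
remove_old_coords_on_view_change=true,\
info_writer_sleep_time=500"
echo "$JGROUPS_DISCOVERY_PROPERTIES"
```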


Have fun, and don't forget to let us know what you think about this blog post using the User Mailing List.
Sebastian Łaskawiec and the Keycloak Team

Keycloak Cluster Setup

Friday, May 10 2019, posted by 张立强 liqiang@fit2cloud.com

This post shares some solutions for setting up a Keycloak cluster in various scenarios (e.g. cross-DC, Docker cross-host, Kubernetes).

If you'd like to set up a Keycloak cluster, this blog may serve as a reference.

Two cli script files are added to the Keycloak image as per the guide.

The Dockerfile is below. The two cli files are the most important pieces for this blog; you can find them at TCPPING.cli and JDBC_PING.cli.

FROM jboss/keycloak:latest

ADD cli/TCPPING.cli /opt/jboss/tools/cli/jgroups/discovery/
ADD cli/JDBC_PING.cli /opt/jboss/tools/cli/jgroups/discovery/
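
Assuming the two cli files are saved under a local cli/ directory next to the Dockerfile, a custom image can be built with something like (the tag name is arbitrary):

```shell
# Build a custom Keycloak image containing the two JGroups discovery
# scripts; "keycloak-cluster" is just an example tag.
docker build -t keycloak-cluster .
```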

First of all, we should know that all Keycloak instances in a cluster must use the same database, and this is very simple to arrange. The other concern is caching, which is a little more complex to configure. Generally there are two kinds of cache in Keycloak: the first caches persistent data read from the database (realms, clients, users) to improve performance; the second holds non-persistent data such as sessions and client sessions. The second kind is very important for a cluster, because we have to keep the caches consistent across the cluster view.

In total there are three solutions for clustering, and all of them are based on the discovery protocols of JGroups (Keycloak uses the Infinispan cache, and Infinispan uses JGroups to discover nodes).

1. PING

PING is Keycloak's default clustering solution; it uses the UDP protocol, and you don't need to do any configuration for it.

But PING is only available when multicast networking is enabled and port 55200 is exposed, e.g. for bare-metal machines, VMs, or Docker containers on the same host.

We tested this with two Keycloak containers on the same host.

The logs show that the two Keycloak instances discovered each other and clustered.
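
A rough sketch of that same-host test (container names, the shared network, and the admin credentials are assumptions for illustration):

```shell
# Hypothetical same-host test: two Keycloak containers sharing one Docker
# network, relying on the default PING multicast discovery.
docker network create kc-net
docker run -d --name kc1 --network kc-net \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak
docker run -d --name kc2 --network kc-net \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak
# After startup, the logs should mention a cluster view with two members.
docker logs kc1 | grep -i "view"
```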

2. TCPPING

TCPPING uses the TCP protocol on port 7600. It can be used when multicast is not available, e.g. for deployments across DCs or containers on different hosts.

We tested this with two Keycloak containers on different hosts.

In this solution we need to set the three environment variables below for the containers.

#IP address of this host, please make sure this IP can be accessed by the other Keycloak instances
JGROUPS_DISCOVERY_EXTERNAL_IP=172.21.48.39
#protocol
JGROUPS_DISCOVERY_PROTOCOL=TCPPING
#IP and port of all hosts
JGROUPS_DISCOVERY_PROPERTIES=initial_hosts="172.21.48.4[7600],172.21.48.39[7600]"
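
As a sketch, one of the two cross-host instances could be started like this (publishing ports 8080 and 7600 is an assumption of this example; the IPs match the values above):

```shell
# Sketch: start one cross-host instance with TCPPING discovery.
# Port 7600 must be reachable from the other Keycloak host.
docker run -d --name keycloak \
  -p 8080:8080 -p 7600:7600 \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=172.21.48.39 \
  -e JGROUPS_DISCOVERY_PROTOCOL=TCPPING \
  -e JGROUPS_DISCOVERY_PROPERTIES='initial_hosts="172.21.48.4[7600],172.21.48.39[7600]"' \
  jboss/keycloak
```

The second host would use the same command with its own IP in JGROUPS_DISCOVERY_EXTERNAL_IP.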

The logs show that the two Keycloak instances discovered each other and clustered.

3. JDBC_PING

JDBC_PING uses the TCP protocol on port 7600, similar to TCPPING. The difference between them is that TCPPING requires you to configure the IP and port of all instances, while for JDBC_PING you just need to configure the IP and port of the current instance. This is because with JDBC_PING each instance inserts its own information into the database, and the instances discover their peers from the ping data read back out of it.

We tested this with two Keycloak containers on different hosts.

In this solution we need to set the two environment variables below for the containers.

#IP address of this host, please make sure this IP can be accessed by the other Keycloak instances
JGROUPS_DISCOVERY_EXTERNAL_IP=172.21.48.39
#protocol
JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING

The ping data of all instances has been saved in the database after the instances started.
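
If you want to see that data, JDBC_PING typically stores it in a table named JGROUPSPING; a hedged example of inspecting it with a PostgreSQL client (hostname, credentials, and column names depend on your setup and the initialize_sql in use):

```shell
# Hypothetical check: list the discovery rows JDBC_PING wrote for the
# cluster. Host, database, and user are placeholders.
psql -h db.example.com -U keycloak -d keycloak \
  -c 'SELECT own_addr, cluster_name FROM JGROUPSPING;'
```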

The logs show that the two Keycloak instances discovered each other and clustered.

One more thing

The above solutions are available for most scenarios, but they are still not enough for some others, e.g. Kubernetes.

The typical deployment on Kubernetes is one Deployment/ReplicaSet/StatefulSet containing multiple Keycloak Pods. The Pods are highly dynamic, as they can scale up and down or fail over to another node, which requires the cluster to discover and remove these dynamic members.

On Kubernetes we can use DNS_PING or KUBE_PING which work quite well in practice.

Besides DNS_PING and KUBE_PING, JDBC_PING is another option for Kubernetes.

On Kubernetes, multicast is only available to containers on the same node, and a Pod has no static IP that could be used to configure TCPPING or JDBC_PING. But the JDBC_PING.cli mentioned above handles this: if you don't set the JGROUPS_DISCOVERY_EXTERNAL_IP env variable, the Pod IP will be used. That means on Kubernetes you can simply set JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING and your Keycloak cluster is good to go.
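
For an existing Deployment, that single setting can be applied with kubectl (the Deployment name "keycloak" is an assumption of this example):

```shell
# Sketch: enable JDBC_PING discovery on a running Keycloak Deployment.
# The Pod IP is picked up automatically when no external IP is set.
kubectl set env deployment/keycloak JGROUPS_DISCOVERY_PROTOCOL=JDBC_PING
```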

Discussion

Suggestions and comments can be discussed via Keycloak User Mail List or this GitHub Repository.

For older entries go here.