Deploying the ForgeRock Identity Platform on Kubernetes Using Skaffold and Kustomize

Written by Warren Strange

If you are following along with the ForgeOps repository, you will see some significant changes in the way we deploy the ForgeRock Identity Platform to Kubernetes. These changes are aimed at dramatically simplifying the workflow to configure, test, and deploy ForgeRock Access Manager, Identity Manager, Directory Services, and Identity Gateway.

To understand the motivation for the change, let’s recap the previous deployment approach:

  • The Kubernetes manifests are maintained in one git repository (forgeops), while the product configuration lives in another (forgeops-init).
  • At runtime, Kubernetes init containers clone the configuration from git and make it available to the component using a shared volume.

The advantage of this approach is that the Docker image for a product can remain (relatively) stable. Usually, it is the configuration that changes, not the product binary.

This approach seemed like a good idea at the time, but in retrospect, it created a lot of complexity in the deployment:

  • The runtime configuration is complex, requiring orchestration (init containers) to make the configuration available to the product.
  • It creates a runtime dependency on a git repository being available. This isn’t a showstopper (you can create a local mirror), but it is one more moving part to manage.
  • The Helm charts are complicated. We needed to weave git repository information throughout the deployment. For example, we put git secrets and configuration into each product chart. We also had to invent a mechanism to let the user switch to a different git repository or configuration, which added further complexity. Feedback from users indicated this was a frequent source of errors.
  • Iterating on configuration during development is slow. Even a simple configuration change must be committed to git and the deployment rolled before it can be tested.
  • Kubernetes rolling deployments are tricky. The product container version must be in sync with the git configuration. A mistake here might not get caught until runtime.

It became clear that things would be much simpler if the products could just bundle the configuration in the Docker image, so that each container is “ready to run” without any complex orchestration or runtime dependency on git.

[As an aside, we often get asked why we don’t store configuration in ConfigMaps. The short answer is: we do, for top-level configuration such as domain names and global environment variables. Products like AM have large and complex configurations (roughly 1,000 JSON files for a full AM export), and managing these in ConfigMaps is cumbersome. We also need a hierarchical directory structure, which is an outstanding ConfigMap RFE.]
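
For illustration, this is the kind of small, top-level ConfigMap we mean; the name, keys, and values here are hypothetical, not taken from forgeops:

apiVersion: v1
kind: ConfigMap
metadata:
  name: platform-config            # hypothetical name
data:
  FQDN: login.example.com          # the domain name shared by the products and the ingress
  CLUSTER_TYPE: minikube           # an illustrative global environment variable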

The challenge with the “bake the configuration in the Docker image” approach is that it creates a lot of Docker images. If each configuration change results in a new (and unique) image, you quickly realize that automation is required to be successful.

About a year ago, one of my colleagues stumbled across a new tool from Google called Skaffold. The Skaffold documentation states, “Skaffold handles the workflow for building, pushing, and deploying your application. So you can focus more on application development.”

To some extent, Skaffold is syntactic sugar on top of this workflow:

docker build; docker tag; docker push

kustomize build | kubectl apply -f -

Calling it syntactic sugar doesn’t really do it justice, so do read through their excellent documentation. There isn’t anything that Skaffold does that you can’t accomplish with other tools (or a whack of Bash scripts), but Skaffold focuses on smoothing out and automating this basic workflow.
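
To make this concrete, here is a minimal skaffold.yaml sketch. The apiVersion, image name, and paths are placeholders (the exact schema fields vary between Skaffold versions), so treat it as an illustration rather than the forgeops file:

apiVersion: skaffold/v1beta12
kind: Config
build:
  artifacts:
    # Build this image from the Dockerfile in docker/idm; the product
    # configuration is copied into the image as part of the build.
    - image: gcr.io/my-project/idm
      context: docker/idm
deploy:
  kustomize:
    # Render the manifests with kustomize and apply them to the cluster.
    path: kustomize/overlay/dev

With a file like this in place, skaffold run performs the whole build, tag, push, and deploy sequence above in a single step.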

A key element of Skaffold is its tagging strategy. Skaffold applies a unique tag to each Docker image, usually a sha256 hash or a git commit ID. This is essential for our workflow, where we want to ensure that the combination of the product (for example, AM) and a specific configuration is unique. By using a git commit tag on the final image, we can be confident that we know exactly how a container was built, including its configuration. This also makes rolling deployments easier, as we can update a deployment tag and let Kubernetes spin down the older container and replace it with the new one.
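
In skaffold.yaml, the tagger is a small policy block in the build section. A sketch, assuming the gitCommit and sha256 taggers available in the Skaffold config schema:

build:
  # Tag each image with the current git commit rather than a fixed tag.
  tagPolicy:
    gitCommit: {}
  # Alternatively, tag with a content hash of the built image:
  # tagPolicy:
  #   sha256: {}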

If it isn’t clear from the above: the configuration for the product lives inside the Docker image, and that configuration in turn is tracked in a git repository. If, for example, you look at the source for the Identity Management (IDM) container (the forgeops/docker/idm directory on the master branch of the ForgeRock/forgeops repository on GitHub), you will see that the Dockerfile copies the configuration into the final image. When IDM runs, its configuration is right there, ready to go.

Skaffold has two major modes of operation. Skaffold “run” mode is a one-shot build, tag, push, and deploy. You will typically use skaffold run as part of a CD pipeline: watch for a git commit, then invoke Skaffold to deploy the change. Again, you can do this with other tools, but Skaffold just makes it super convenient.

Where Skaffold really shines is in “dev” mode. If you run skaffold dev, it runs a continuous loop, watching the file system for changes and rebuilding and redeploying as you edit files. The Skaffold documentation includes a diagram of this workflow.

This process is really snappy. We find that we can deploy changes within 20-30 seconds, and most of that time is just container restarts. When pushing to a remote GKE cluster, the first deployment is a little slower, as we need to push all of those images to gcr.io. Subsequent updates are fast, as you are pushing only configuration deltas that are a few KB in size.

Note that git commits are not required during development. A developer iterates on the desired configuration and, only when they are happy, commits the changes to git and creates a pull request. At this stage, a CD process picks up the commit and deploys the change to a QA environment. We have a simple CD sample here, using Google Cloud Build.
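
As a rough sketch of what such a pipeline can look like (this is not the actual sample; the cluster name, zone, and flags are placeholders), a cloudbuild.yaml can be as simple as:

steps:
  # Fetch credentials for the target GKE cluster so the next step can deploy to it.
  - name: gcr.io/cloud-builders/kubectl
    args: ['cluster-info']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-east1-d'
      - 'CLOUDSDK_CONTAINER_CLUSTER=qa-cluster'
  # One-shot build, tag, push, and deploy with Skaffold.
  - name: gcr.io/k8s-skaffold/skaffold
    entrypoint: skaffold
    args: ['run', '--default-repo=gcr.io/$PROJECT_ID']
timeout: 1200s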

At this point, we haven’t said anything about Helm and why we decided to move to Kustomize.

Once our runtime deployments became simpler (no more git init containers, simpler ConfigMaps, and so on), we found ourselves questioning the need for complex Helm templates. There was some internal resistance from our developers to using golang templates (they are pretty ugly when combined with YAML), and the security issues around Helm’s Tiller component raised additional red flags.

Suffice it to say, there was no compelling reason to stick with Helm, and transitioning to Kustomize was painless. A shout-out to the folks at Replicated, who have a very nifty tool called ship that converts your Helm charts to Kustomize. The transition from Helm to Kustomize took a couple of days. We might look at Helm 3 in the future, but for now, our requirements are being met by Kustomize. One nice side effect we noticed is that Kustomize deployments with Skaffold are really fast.
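
To show what this looks like, here is a minimal kustomization.yaml sketch for one environment overlay; the directory and file names are illustrative, not the forgeops layout:

# kustomize/overlay/dev/kustomization.yaml (illustrative)
# Start from a shared base and patch only what differs in this environment.
# (Newer kustomize versions list the base under resources instead of bases.)
bases:
  - ../../base/idm
patchesStrategicMerge:
  - idm-deployment-patch.yaml   # for example, change the replica count or an env var

The overlay approach lets each environment (dev, QA, production) share the same base manifests and differ only in small patches.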

This work is being done on the master branch of forgeops (targeting the 7.0 release), but if you would like to try out this new workflow with the current (6.5.2) products, you are in luck! We have a preview branch that uses the current products. The following should just work™ on minikube:

git clone https://github.com/ForgeRock/forgeops
cd forgeops
skaffold -f skaffold-6.5.yaml dev

There are some prerequisites that you need to install. See the README file.

The initial feedback on this workflow has been very positive. We’d love for folks to try it out and let us know what you think. Feel free to reach out to me at my ForgeRock email (warren dot strange at forgerock.com).