Working with Identity Gateway in ForgeOps

Before you go any further with this article, make sure you have deployed and are familiar with the CDK, and that you know how to access its administration GUIs and REST APIs. You should also have experience with complex Kubernetes deployments, because deploying the ForgeRock platform in a containerized environment requires a high level of proficiency in many cloud technologies. Keep that in mind before you start.

OK! Now that we have covered what it takes to get to this point, let’s get into deploying ForgeRock Identity Gateway (IG) in ForgeOps, and then go further and customize the Docker image so your routes are configured and saved.

Deploying IG in ForgeOps

There are two ways to deploy IG in ForgeOps: Skaffold and the CDK. We will walk you through both options, but keep in mind that the CDK is being worked on day in and day out to make the deployment easier, so that would be my recommendation.

Method 1: Skaffold

The ForgeOps base configuration does not deploy the IG container by default. The file you need to change to deploy IG depends on how you are running Skaffold, but if you are following our documentation, the default file is the kustomization.yaml file under /path/to/forgeops/kustomize/overlay/7.0/all. Add the following line at the end of the resources section:

- ../../../base/ig

Here is how the resources section should look after the change:

namespace: my-namespace
commonLabels:
  app.kubernetes.io/part-of: "forgerock"
resources:
- ../../../base/kustomizeConfig
- ../../../base/secrets
- ../../../base/7.0/ds/cts
- ../../../base/7.0/ds/idrepo
- ../../../base/am-cdk
- ../../../base/amster
- ../../../base/idm-cdk
- ../../../base/rcs-agent
- ../../../base/end-user-ui
- ../../../base/login-ui
- ../../../base/admin-ui
- ../../../base/ingress
- ../../../base/ldif-importer
- ../../../base/ig

If you are using a profile when running Skaffold, for instance:

skaffold run -p small

Then, you will have to change the kustomization.yaml from that specific profile instead. The file would be located in /path/to/forgeops/kustomize/overlay/7.0/small, where small is the name of the profile in the example above.

Once you deploy the platform, you will see the IG pod running, but it won’t really do anything, because our base image doesn’t preconfigure any routes for your applications. It might actually ship with a few sample routes, which are safe to delete because you probably won’t use them.
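As a quick sanity check, you can confirm that the IG pod is up; the label selector below is the same one used in the Troubleshooting section at the end of this article:

kubectl get pods -l app.kubernetes.io/name=ig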

Method 2: CDK

As I said in the introduction of this section, the CDK was developed to make the deployment easier, so here it is. The CDK commands to deploy IG are:

/path/to/forgeops/bin/cdk install

then,

/path/to/forgeops/bin/cdk install ig

That’s it! IG should be deployed.

Understanding IG in ForgeOps

Before we get into how we will build a custom IG image, it’s important to understand how IG works in ForgeOps, where the configuration files are located, and how the files are deployed and saved when using ForgeOps.

The first thing to understand is where the configuration comes from. It might seem like a basic concept, but some of us, myself included, come from a world where a VM server holds all of the configuration in a specific directory on that server, or in a local or remote database. ForgeOps works differently, so it is extremely important to understand this.

Let’s get into it!

Once you have cloned the forgeops repository, there is a config directory where all the configuration profiles are located. This is called the master directory, and it is considered the source of truth for any ForgeOps deployments. This is the directory where you should use Git to manage the files.

You should be using the CDK profile, which is the one supported by ForgeRock. However, you will see other profiles, such as am-only, ds-only, idm-only, and ig-only, that are not supported but are available as examples. The IG configuration is located under /path/to/forgeops/config/7.0/cdk/ig/config:

  • The routes-pod directory is intended for routes that directly hit the pod’s IP address. In practice, HTTP requests coming to the pod’s IP address should really only come from the kubelet. Don’t add routes here. For organizational reasons, this directory will likely be merged into a new “routes” directory in the future.
  • The routes-service directory is intended for routes that hit the service IP address, and it should be where all of your routes live (see the sketch after this list). In the future, this will likely be merged into a new “routes” directory as well.
  • admin.json is the IG configuration file for administration purposes. The AdminHttpApplication serves requests on the administrative route, such as the creation of routes and the collection of monitoring information. The administrative route and its subroutes are reserved for administration endpoints.
  • config.json is the gateway configuration. The GatewayHttpApplication is the entry point for all incoming gateway requests. It is responsible for initializing a heap of objects, described in Heap Objects, and it provides the main handler that receives all incoming requests.
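To make that layout concrete, here is a minimal sketch of what a route file in routes-service could look like. The route name, condition, and backend URL are hypothetical placeholders; see the IG documentation for the full route syntax:

{
  "name": "example-app",
  "condition": "${matches(request.uri.path, '^/example')}",
  "baseURI": "http://example-app.my-namespace.svc.cluster.local:8080",
  "handler": {
    "type": "ReverseProxyHandler"
  }
}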

Here is how it works: the configuration and routes are located in the path above, and we need to initialize the configuration to copy the initial master configuration into the Docker directory. This is also called staging. By doing that, we set up the staging area so Docker can build the images based on that configuration.

The command to initialize the configuration is the following:

./config.sh init --profile cdk

This command will simply copy the configuration files from the master config to the Docker directory, which is the staging area. Docker will then be able to build the images based on that.
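If you want to confirm what was staged, you can list the configuration files copied into the Docker directory. The path below is the default ig profile location mentioned later in this article; adjust it if your layout differs:

find /path/to/forgeops/docker/7.0/ig -name '*.json'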

We can now run skaffold run to deploy the platform, or, if you are using the CDK, run /path/to/forgeops/bin/cdk install, then /path/to/forgeops/bin/cdk install ig.

Note that the base image might come with a few preconfigured routes. It is safe to delete those if you are not going to use them.

Keep in mind that config.sh will be deprecated in future releases, which means you won’t have to worry about running the config.sh command anymore. Instead, the config files will always be located in /path/to/forgeops/docker/ig/config-profiles. That directory will be the source of truth; when you run /path/to/forgeops/bin/cdk build ig, it is added to the container under /var/ig. Then, execute /path/to/forgeops/bin/cdk install ig to deploy IG.
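In other words, the future workflow boils down to two commands (the paths and behavior are the ones described in the paragraph above):

/path/to/forgeops/bin/cdk build ig      # adds docker/ig/config-profiles/<profile> to the container under /var/ig
/path/to/forgeops/bin/cdk install ig    # deploys IG using the newly built image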

“The cdk build command calls Skaffold to build a new Docker image, and to push the image to your Docker registry. It also updates the image defaulter file so that the next time you install AM, the cdk install command gets AM static configuration from your new custom Docker image”.

Creating your own IG image

I will start this section by saying that you should not change the IG configuration in production. The IG image in production should be immutable, and that’s the reason our base image is already in production mode.

As I said before, the base image from ForgeOps does not come preconfigured with any routes, because each use case requires different routes.

To create your own IG image, you can follow the procedure described in the IG Deployment Guide and set up a development server to create and test the routes. Once you have created and tested them thoroughly, you can promote your configuration to your custom base image by copying the routes into the /path/to/forgeops/config/7.0/cdk/ig/config/routes-service directory in your custom ForgeOps repo. Then, you can maintain version control using Git.
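For example, the promotion step could look something like this; the development server path is a hypothetical placeholder, and the destination is the routes-service directory mentioned above:

cp /path/to/dev-ig/config/routes/*.json /path/to/forgeops/config/7.0/cdk/ig/config/routes-service/
cd /path/to/forgeops
git add config/7.0/cdk/ig/config/routes-service
git commit -m "Add custom IG routes"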

Once your new routes are in place, run the following command to copy the configuration into the staging area (the Docker directory):

./config.sh init --profile cdk

Now deploy the image using skaffold run or, if you are using the CDK, run /path/to/forgeops/bin/cdk build ig to build your custom image, then /path/to/forgeops/bin/cdk install, and finally /path/to/forgeops/bin/cdk install ig. By default, if you don’t specify a profile parameter, the build uses the ig profile:
/path/to/forgeops/docker/7.0/ig/.
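Summarized as commands, that CDK flow is:

/path/to/forgeops/bin/cdk build ig      # build your custom IG image
/path/to/forgeops/bin/cdk install       # deploy the platform
/path/to/forgeops/bin/cdk install ig    # deploy IG using the custom image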

You can also use a profile when executing the build command: /path/to/forgeops/bin/cdk build ig -p my-profile. The IG configuration will then be copied from the my-profile profile.

For future releases: when you run the build command and do not specify any profile, the profile called "cdk" will be used by default: /path/to/forgeops/docker/ig/config-profiles/cdk. You can specify a profile by executing the following command: /path/to/forgeops/bin/cdk build ig -p my-profile. Then, the following profile will be used:
/path/to/forgeops/docker/ig/config-profiles/my-profile.

That’s it! Your custom IG image is now running.

DNS

One common use case in IG is to use different hostnames for different applications to make it easier for the users to access the applications. In this section, I will use an example of a fictitious company called ForgeBank.

ForgeBank has two different applications that are protected by IG:

  1. Payroll: Employees can access their payroll information.
  2. HR: Employees can access their PTO information, request time off, enroll in benefits, etc.

Once you deploy IG in ForgeOps, you can access the cluster through the default FQDN: https://prod.iam.example.com. You may want to keep the default FQDN and just create a new route in IG to send the traffic to the correct application. If you choose this approach, you don’t have to worry about DNS, because you would access the applications like so:

https://prod.iam.example.com/ig/payroll

Or

https://prod.iam.example.com/ig/hr

However, those URLs do not make for a great user experience: the hostname is not meaningful for the application, and the name of the underlying server, IG, is visible in the URL. You probably do not want users to type in these URLs to access the payroll or HR application.

What we want to do is to configure each application with its own FQDN, and IG will know where to send the traffic. For instance, https://payroll.iam.example.com and https://hr.iam.example.com.

OK! That sounds great, but how do we configure it in ForgeOps? This is a great question. If IG were installed standalone on premises, all you would have to do is change your DNS server and point payroll.iam.example.com or hr.iam.example.com to the IG server’s external IP address. Done. In ForgeOps, however, after you add the new hostnames to your DNS server, there are still a few extra steps. Let’s dive into it.

First things first. We need something called Ingress in a Kubernetes environment in order to “expose HTTP and HTTPS routes from outside the cluster to services within the cluster." Traffic routing is controlled by rules defined on the ingress resource.

Here is a simple example where an ingress sends all of its traffic to one service:
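For illustration only, a generic Ingress of that kind might look like the following sketch; the host and service names are placeholders, and this is not part of the ForgeOps configuration:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app
          servicePort: 8080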

An ingress may be configured to give services externally reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An ingress controller is responsible for fulfilling the ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic: https://kubernetes.io/docs/concepts/services-networking/ingress/.

In ForgeOps, we have a file called ingress.yaml located at /path/to/forgeops/kustomize/base/ingress. This is where the Ingress resources are defined, with fields such as apiVersion, kind, and metadata. The specifics of this file are out of scope for this document, but you can read more about Ingress on the Kubernetes website.

The important part of this file for IG is the following:

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ig-web
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    cert-manager.io/cluster-issuer: $(CERT_ISSUER)
spec:
  tls:
  - hosts:
    - $(FQDN)
    secretName: sslcert
  rules:
  - host: $(FQDN)
    http:
      paths:
      - backend:
          serviceName: ig
          servicePort: 8080
        path: /ig(/|$)(.*)

Note that the rewrite-target is set to /$2. This is the target URI where the traffic is redirected. In this example, whatever is captured by (/|$) is assigned the placeholder $1, and whatever is captured by (.*) is assigned the placeholder $2.
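As a concrete example of how that rewrite plays out with the path above (the request URL is just the default FQDN plus the /ig prefix used earlier):

# path: /ig(/|$)(.*)   rewrite-target: /$2
# Incoming request:  https://prod.iam.example.com/ig/payroll
#   $1 = "/"
#   $2 = "payroll"
# URI forwarded to the ig service: /payroll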

There are a couple of ways of exposing a new endpoint in the ingress, but in this example the user will hit hr.iam.example.com or payroll.iam.example.com to access the two applications. For this to work, we first need to add those two hostnames to the DNS server, pointing to the external IP address of the Kubernetes cluster, and then expose those endpoints in the ingress. What we can do is duplicate the whole ig-web entry in the ingress.yaml file and add the rules for each endpoint, like so:

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ig-hr
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: $(CERT_ISSUER)
spec:
  tls:
  - hosts:
    - hr.iam.example.com
    secretName: sslcert
  rules:
  - host: hr.iam.example.com
    http:
      paths:
      - backend:
          serviceName: ig
          servicePort: 8080
        path: /

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ig-payroll
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: $(CERT_ISSUER)
spec:
  tls:
  - hosts:
    - payroll.iam.example.com
    secretName: sslcert
  rules:
  - host: payroll.iam.example.com
    http:
      paths:
      - backend:
          serviceName: ig
          servicePort: 8080
        path: /

Note that I have changed the host to the hostname I want users to access, and under rules I configured the path to the root context, /. This is because I want users to access the application using the hostname only; no URI path will be used.
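Keep in mind that both ingresses now point to the same IG service on the root path, so IG itself still needs routes that decide which backend each hostname goes to. A minimal sketch of such a route, assuming a condition on the request host and a purely hypothetical internal backend URL, could look like this (adapt the handler and backend to your own configuration):

{
  "name": "hr",
  "condition": "${request.uri.host == 'hr.iam.example.com'}",
  "baseURI": "http://hr-backend.internal.example.com:8080",
  "handler": {
    "type": "ReverseProxyHandler"
  }
}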

Users should now be able to access the applications via IG using each application’s external hostname. Well done!

Troubleshooting

How do I tail the logs?

kubectl logs -l app.kubernetes.io/name=ig -f --tail=10 --all-containers=true

I see an error message: ImagePullBackOff. What should I do?

The image you’re attempting to run can’t be pulled. This commonly happens if you didn’t set a --default-repo when running Skaffold builds (although Skaffold should have failed in that case, too). Check the image URLs, make sure the image exists, and make sure the cluster has pull secrets if required:

kubectl get pods -o yaml -l app.kubernetes.io/name=ig | grep image

I see an error message: CrashLoopBackOff. What should I do?

The container is starting but the application is actually crashing. See the logs.

What is the deployed configuration?

kubectl exec -it deploy/ig -- cat /path/to/some/config

How do I know if the container has pulled the correct IG configuration?

You can list the files under /var/ig in the container by executing the following command:

kubectl exec ig-99f7f7996-59m6m -- ls /var/ig/

Where ig-99f7f7996-59m6m is the name of the IG pod.


Big thanks to Max Resnick, Rob Jackson, Jake Feasel, David Goldsmith, Warren Strange, and Shankar Raman for their contributions to this article.