Written by Javed Shah
Please note that the EA program for Microservices has ended. The code provided in this post is not supported by ForgeRock.
Introduction
OpenShift, a Kubernetes-as-a-PaaS service, is increasingly being considered as an alternative to managed Kubernetes platforms such as those from Tectonic and Rancher, and to vanilla Kubernetes implementations such as those provided by Google, Amazon and even Azure. Red Hat's OpenShift is a PaaS that provides a complete platform, including features such as source-to-image build and test management, image management with built-in change detection, and deployment to staging / production.
In this blog, I demonstrate how ForgeRock Identity Microservices can be deployed into OpenShift Origin (the community project).
OpenShift Origin Overview
OpenShift Origin is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OpenShift adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams.
OpenShift embeds Kubernetes and extends it with security and other integrated concepts. An OpenShift Origin release corresponds to a Kubernetes distribution: for example, OpenShift 1.7 includes Kubernetes 1.7.
Docker Images
I described in this blog post how I built the envconsul layer on top of the forgerock-docker-public.bintray.io/microservice/authn:BASE-ONLY image for the authentication microservice, and similarly for the token-validation and token-exchange microservices. Back then I took the easy route and used the vbox (host) IP for exposing the Consul IP address externally to the rest of the microservices pods.
This time, however, I rebuilt the Docker images using a modified version of the envconsul.sh script which is, of course, used as the ENTRYPOINT, as seen in my GitHub repo. The script uses an environment variable called CONSUL_ADDRESS instead of a hard-coded VirtualBox adapter IP. The latter works nicely for minikube deployments but is, of course, not production friendly. With the environment variable we move much closer to a production-grade deployment.
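A minimal sketch of what such an entrypoint can look like is shown below; the exact envconsul flags, the key prefix and the service launch command are assumptions, and the real script in the repo may differ:

#!/bin/sh
# Resolve Consul from the environment rather than a hard-coded IP.
# CONSUL_ADDRESS is injected by OpenShift at container startup.
# Append a port here if your Consul endpoint is not exposed on port 80.
exec envconsul \
  -consul-addr="${CONSUL_ADDRESS}" \
  -prefix="ms-authn" \
  -upcase \
  -once \
  <microservice launch command>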
How does OpenShift Origin resolve the environment variable on container startup? Read on!
OpenShift Origin Configuration
I have used minishift for this demonstration. More info about minishift can be found here. I started up OpenShift Origin using these settings:
$minishift start --openshift-version=v3.10.0-rc.0
$eval $(minishift oc-env)
$eval $(minishift docker-env)
$minishift addons enable anyuid
The anyuid addon is only necessary when you have containers that must run as root. Consul is not one of those, but most Docker images rely on root, including the microservices, so I enabled it anyway.
Apps can be deployed to OC in a number of ways, using the web console or the oc CLI. The main method I used was to deploy existing container images: the microservice images hosted on the Bintray ForgeRock repository, and the Consul image from Docker Hub.
The following command creates a new project:
$oc new-project consul-microservices --display-name="Consul based MS" --description="consul powered env vars"
At this point we have a project to contain all our OC artifacts, such as deployments, image streams, pods, services and routes.
The new-app command can be used to deploy any image hosted on an external public or private registry. Using this command, OC creates a deployment configuration with a rolling strategy for updates. The command shown here deploys the HashiCorp Consul image from Docker Hub (the default registry OC looks in). OC downloads the image and stores it in an internal image registry; the image is then copied from there to each node in the OC cluster where the app runs.
$oc new-app consul --name=consul-ms
With the deployment configuration created, two triggers are added by default: a ConfigChange trigger and an ImageChange trigger, which means that each change to the deployment configuration, or each image update in the external repository, automatically triggers a new deployment. A replication controller is created automatically by OC soon after the deployment itself.
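The triggers on the new deployment configuration can be verified from the CLI; a quick check against the consul-ms deployment created above might look like this:

$oc set triggers dc/consul-ms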
Make sure to add a route for the Consul UI running at port 8500 as follows:
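The route can also be created from the CLI; a sketch is shown here, where the route name consul-ms-ui is an assumption inferred from the hostname used later in this post:

$oc expose svc/consul-ms --port=8500 --name=consul-ms-ui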
I have described the Consul configuration needed for ForgeRock Identity Microservices in my earlier blog post on using envconsul for sourcing environment configuration at runtime. For details on how to get the microservices configuration into Consul as Key-Value pairs please refer to that post. A few snapshots of the UI showing the namespaces for the configuration are presented here:
A sample key value used for token exchange is shown here:
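If you prefer the CLI to the UI, an individual key can be inspected with consul kv get; the key below is one of those imported later in this post, and the value shown is purely illustrative:

$consul kv get token-exchange/TOKEN_ISSUER
https://fr.tokenissuer.example.com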
Next, we move to deploying the microservices. In order for envconsul to read the configuration from Consul at runtime, we have the option of passing an --env switch to the oc new-app command indicating what the value of CONSUL_ADDRESS should be, or we could include the environment variable and a value for it in the deployment definition, as shown in the next section.
OpenShift Deployment Artifacts
If bundling more than one object for import, such as pod and service definitions, via a YAML file, OC expects a strongly typed template configuration, as shown here for the ms-authn microservice. Note the extra metadata for template identification, and the use of the objects syntax:
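A minimal sketch of such a template is shown below. The template and object names, port number and label values are assumptions, while the image and the CONSUL_ADDRESS value match those used elsewhere in this post:

apiVersion: v1
kind: Template
metadata:
  name: ms-authn-template
  annotations:
    description: Authentication microservice wired to Consul via envconsul
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ms-authn
  spec:
    replicas: 1
    selector:
      app: ms-authn
    template:
      metadata:
        labels:
          app: ms-authn
      spec:
        containers:
        - name: ms-authn
          image: forgerock-docker-public.bintray.io/microservice/authn:ENVCONSUL
          ports:
          - containerPort: 8080
          env:
          - name: CONSUL_ADDRESS
            value: consul-ms-ui-consul-microservices.192.168.64.3.nip.io
- apiVersion: v1
  kind: Service
  metadata:
    name: ms-authn
  spec:
    selector:
      app: ms-authn
    ports:
    - port: 8080
      targetPort: 8080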
This template, once stored in OC, can be reused to create new deployments of the ms-authn microservice. Notice the use of the environment variable indicating the Consul address. This informs the envconsul.sh script of the location of Consul, and envconsul is able to pull the configuration key-value pairs before spawning the microservice.
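Storing and instantiating the template might look roughly like this (the file name is an assumption):

$oc create -f ms-authn-template.yaml
$oc new-app --template=ms-authn-template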
In this demonstration, however, I used the oc new-app command instead.
As described in the other blog, copy the configuration export (this could be automated using Git as well) into the Consul Docker container:
$docker cp consul-export.json 5a2e414e548a:/consul-export.json
And then import the configuration using the consul kv command:
$consul kv import @consul-export.json
Imported: ms-authn/CLIENT_CREDENTIALS_STORE
Imported: ms-authn/CLIENT_SECRET_SECURITY_SCHEME
Imported: ms-authn/INFO_ACCOUNT_PASSWORD
Imported: ms-authn/ISSUER_JWK_STORE
Imported: ms-authn/METRICS_ACCOUNT_PASSWORD
Imported: ms-authn/TOKEN_AUDIENCE
Imported: ms-authn/TOKEN_DEFAULT_EXPIRATION_SECONDS
Imported: ms-authn/TOKEN_ISSUER
Imported: ms-authn/TOKEN_SIGNATURE_JWK_BASE64
Imported: ms-authn/TOKEN_SUPPORTED_ADDITIONAL_CLAIMS
Imported: token-exchange/
Imported: token-exchange/EXCHANGE_JWT_INTROSPECT_URL
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_SUBJECT_ID
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_SUBJECT_PASSWORD
Imported: token-exchange/EXCHANGE_OPENAM_AUTH_URL
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_ALLOWED_ACTORS_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_AUDIENCE_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_COPY_ADDITIONAL_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SCOPE_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SET_ID
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_SUBJECT_ATTR
Imported: token-exchange/EXCHANGE_OPENAM_POLICY_URL
Imported: token-exchange/INFO_ACCOUNT_PASSWORD
Imported: token-exchange/ISSUER_JWK_JSON_ISSUER_URI
Imported: token-exchange/ISSUER_JWK_JSON_JWK_BASE64
Imported: token-exchange/ISSUER_JWK_OPENID_URL
Imported: token-exchange/ISSUER_JWK_STORE
Imported: token-exchange/METRICS_ACCOUNT_PASSWORD
Imported: token-exchange/TOKEN_EXCHANGE_POLICIES
Imported: token-exchange/TOKEN_EXCHANGE_REQUIRED_SCOPE
Imported: token-exchange/TOKEN_ISSUER
Imported: token-exchange/TOKEN_SIGNATURE_JWK_BASE64
Imported: token-validation/
Imported: token-validation/INFO_ACCOUNT_PASSWORD
Imported: token-validation/INTROSPECTION_OPENAM_CLIENT_ID
Imported: token-validation/INTROSPECTION_OPENAM_CLIENT_SECRET
Imported: token-validation/INTROSPECTION_OPENAM_ID_TOKEN_INFO_URL
Imported: token-validation/INTROSPECTION_OPENAM_SESSION_URL
Imported: token-validation/INTROSPECTION_OPENAM_URL
Imported: token-validation/INTROSPECTION_REQUIRED_SCOPE
Imported: token-validation/INTROSPECTION_SERVICES
Imported: token-validation/ISSUER_JWK_JSON_ISSUER_URI
Imported: token-validation/ISSUER_JWK_JSON_JWK_BASE64
Imported: token-validation/ISSUER_JWK_STORE
Imported: token-validation/METRICS_ACCOUNT_PASSWORD
The oc new-app command used to create a deployment:
$oc new-app forgerock-docker-public.bintray.io/microservice/authn:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io
This creates a new deployment for the authn microservice, and as described in detail in this blog, envconsul is able to reach the Consul server, extract configuration and set up the environment required by the auth microservice.
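One way to confirm that envconsul reached Consul and sourced the configuration is to tail the deployment logs; the deployment name authn is an assumption based on the image name:

$oc logs -f dc/authn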
Commands for the token validation microservice are shown here:
$oc new-app forgerock-docker-public.bintray.io/microservice/token-validation:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io
$oc expose svc/token-validation
route "token-validation" exposed
The token exchange microservice is set up as follows:
$oc new-app forgerock-docker-public.bintray.io/microservice/token-exchange:ENVCONSUL --env CONSUL_ADDRESS=consul-ms-ui-consul-microservices.192.168.64.3.nip.io
$oc expose svc/token-exchange
route "token-exchange" exposed
Image Streams
OpenShift has a neat CI/CD trigger for Docker images built in to help you manage your pipeline: whenever the image is updated in the remote public or private repository, the deployment is restarted.
Details about the token exchange microservice image stream are shown here:
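The same details can be pulled from the CLI; the image stream name token-exchange is assumed from the new-app command above:

$oc describe is/token-exchange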
More elaborate Jenkins-based pipelines can be built, and will be demonstrated in a later blog!
Test
Set up the environment variables in Postman like so:
I have several tests in this GitHub repo. An example of getting a new access token using client credentials from the authn microservice is shown here:
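Outside Postman, the same client-credentials call can be sketched with curl. The route hostname, endpoint path, client credentials and scope below are all placeholders; substitute the values from your own deployment and microservice configuration:

$curl --request POST \
  --user <client_id>:<client_secret> \
  --data 'grant_type=client_credentials&scope=<scope>' \
  http://<authn-route-hostname>/<token-endpoint>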
This was an introduction to using OpenShift with our microservices, and there is much more to come!
Helpful Links
Microservices Docs