Migrating identities to Identity Cloud with Identity Gateway



The article Migrating Identities to Identity Cloud with Scripted REST demonstrated a way to perform identity migration to Identity Cloud with zero downtime. That solution is not perfect, however, as it requires altering the production site to configure reconciliation with Identity Cloud.

This article examines a method that migrates identities to Identity Cloud without altering the production site configuration, and without bloating Identity Cloud with residual mapping links left over after the migration. This is done by:

  • Installing a new deployment of a recent IDM release that supports the external IDM service, to reconcile identities from an on-premise IDM 6.5 instance to Identity Cloud
  • Adding IG in front of IDM 6.5 in production to capture updates and initiate individual record reconciliation through the proxy deployment.

The following git repository is available as a reference: platform-compose / branch idm-proxy. Make sure to clone the idm-proxy branch.

Reconciliation Configuration

The customisations are identical to those in Migrating Identities to Identity Cloud with Scripted REST, with only these changes:

  • There is no longer a Groovy connector to Identity Cloud. Instead, the reconciliation mapping makes use of the external IDM service
  • All scripts have been changed so that references to openidm/managed are replaced by openidm/external/idm/idm65/managed, and references to openidm/system/scriptedrest/managed by openidm/external/idm/idc/managed
  • In sync.json, the uid reference in the target resource is now its native name _id, since the external IDM service does not impose the same restrictions as the Provisioning Service.
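As an illustration, a mapping in the proxy's sync.json might look like the following sketch. The mapping name and the property list are hypothetical, chosen just to show the shape; only the source and target endpoints reflect the changes described above:

```json
{
  "mappings": [
    {
      "name": "managedUser_idcUser",
      "source": "external/idm/idm65/managed/user",
      "target": "external/idm/idc/managed/alpha_user",
      "properties": [
        { "source": "userName", "target": "userName" },
        { "source": "givenName", "target": "givenName" },
        { "source": "sn", "target": "sn" },
        { "source": "mail", "target": "mail" }
      ],
      "policies": [
        { "situation": "ABSENT", "action": "CREATE" },
        { "situation": "CONFIRMED", "action": "UPDATE" }
      ]
    }
  ]
}
```

Refer to the repository for the actual mapping used by the sample.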

The functionality is therefore the same, with one big exception: implicit synchronisations are gone. At this stage, this could suffice if the required downtime stays within expectations.

If not, then we can reproduce those implicit synchronisations by inserting Identity Gateway into the mix.

Prepare Identity Cloud

You can find instructions for configuring the OAuth2 client and adding permissions here: Authenticate through AM :: ForgeRock Identity Cloud Docs.

Replaying updates

We rely here on the ability to reconfigure the network architecture to insert an Identity Gateway layer in front of the IDM instances. When requests go through IG, the route identifies those that are directed to the /managed service and are updates, that is, PATCH, POST, PUT or DELETE.

The design relies on a useful option: reconciling a single record (_action=reconById). IG detects update requests passing through, captures the id of the updated or created resource, and sends a POST /openidm/recon?_action=reconById&mapping=<mapping>&id=<id> to the proxy instance. This initiates reconciliation for that single record, and therefore keeps the Identity Cloud instance synchronised with live updates.
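For reference, the replayed call can be reproduced by hand. All values below are hypothetical; substitute your proxy host, mapping name and record id:

```shell
# Hypothetical values - substitute your own deployment's host, mapping and id.
PROXY="http://idm-proxy.example.com:8080"
MAPPING="managedUser_idcUser"
ID="9dce06d4-2fc1-4830-a92b-bd35c2f6bcbb"
RECON_URL="${PROXY}/openidm/recon?_action=reconById&mapping=${MAPPING}&id=${ID}"
echo "${RECON_URL}"
# Trigger the single-record reconciliation (replace the admin headers with
# suitably privileged credentials in production):
# curl -X POST -H "X-OpenIDM-Username: openidm-admin" \
#      -H "X-OpenIDM-Password: openidm-admin" "${RECON_URL}"
```

This is exactly the request shape the IG script sends for each captured update.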

Delete requests are an exception though - reconById has no effect for deleted source objects, so in this case IG sends the DELETE request directly to Identity Cloud, and sends a request to the IDM proxy to delete the associated mapping link.
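The delete path therefore amounts to two REST calls. The sketch below uses hypothetical values (in the route they are derived from the intercepted DELETE request), and assumes the source id lands in the link's firstId field - depending on the mapping direction it could be secondId instead:

```shell
# Hypothetical values - in the route these come from the intercepted request.
PROXY="http://idm-proxy.example.com:8080"
TYPE="alpha_user"
ID="9dce06d4-2fc1-4830-a92b-bd35c2f6bcbb"
# 1. Delete the object in Identity Cloud through the proxy's external IDM service:
DELETE_URL="${PROXY}/openidm/external/idm/idc/managed/${TYPE}/${ID}"
echo "${DELETE_URL}"
# 2. Query /repo/link for the now-stale mapping link, then delete it by its id:
LINK_QUERY="${PROXY}/openidm/repo/link?_queryFilter=firstId+eq+'${ID}'"
echo "${LINK_QUERY}"
```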

The route definition

Let’s examine the configuration at platform-compose / branch idm-proxy in the IG folder.

The route is configured to respond on port 8090, and is defined in config.json using a DispatchHandler:

"handler": {
        "type": "DispatchHandler",
        "config": {
          "bindings": [
              "condition": "${request.uri.port == 8090}",
              "handler": {
                "name": "router-replay",
                "type": "Router",
                "config": {
                  "scanInterval": "disabled",
                  "directory": "${openig.configDirectory}/routes-replay"
              "condition": "${request.uri.port == 9090}",
              "handler": {
                "name": "router-service",
                "type": "Router",
                "config": {
                  "scanInterval": "disabled",
                  "directory": "${openig.configDirectory}/routes-service"

The second port maps to a different folder, which is not used in this sample; it is there to demonstrate a separation between different functionalities. If there is already an IG layer in production, this shows a convenient way to add the new functionality without disturbing the existing one.

In the routes-replay folder, the IDM route is configured in 220-IDM.json. The baseURI points to the IDM 6.5 instance, and the route captures the /admin and /openidm paths. The main handler is a DispatchHandler composed of:

  • A StaticResponseHandler when the path is /admin, which redirects to /admin/ to prevent the IDM instance from initiating the redirect itself (which would bypass IG).
  • A Chain made of a ScriptableFilter and the ReverseProxyHandler. The script (catchAndReplayUpdates.groovy) captures responses from IDM 6.5 to create/update/delete requests, and initiates an additional call to the IDM proxy when IDM returns a successful response.

The clientHandler passed to the script (IDMClientHandler) is a chain that adds the admin credentials headers (openidm-admin) to the request. Optionally, change this to use the credentials of an internal user with sufficient privileges to perform the required operations on the proxy instance: reconById operations on the recon endpoint, reading and deleting in the /repo/link resources, and delete on the external/idm/idc/managed/alpha_* endpoint. The client handler is passed to the script as the http object, and is used in the reconById and deleteResource functions.
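Such a client handler can be sketched as a Chain that decorates each outgoing request with the IDM admin headers before delegating to a plain ClientHandler. The fragment below is illustrative, not the repository's exact configuration:

```json
{
  "name": "IDMClientHandler",
  "type": "Chain",
  "config": {
    "filters": [
      {
        "type": "HeaderFilter",
        "config": {
          "messageType": "REQUEST",
          "add": {
            "X-OpenIDM-Username": ["openidm-admin"],
            "X-OpenIDM-Password": ["openidm-admin"]
          }
        }
      }
    ],
    "handler": "ClientHandler"
  }
}
```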

The core of the logic is encapsulated here:


if (...) {

    return next.handle(context, request).thenOnResult { response ->
        if (response.status.family == SUCCESSFUL) {
            if (assignIdOnResponse) {
                id = response.entity.json._id
            }
            if (useReconById) {
                reconById(id, mapping)
            } else {
                deleteResource(id, "alpha_${type}")
            }
        }
    }
}

return next.handle(context, request)

Updates are made with PATCH and PUT, as well as POST (?_action=create) and DELETE requests directed at relationships. In these cases the reconById function is employed; the id of the object is captured from the request path.

Resource creation is done with POST ?_action=create. Note that a request such as POST managed/user/{userId}/roles targets the roles attribute, and is therefore treated as an update to the user resource, whose id is available in the request path. When creating a new resource, the id is assigned by IDM, and is therefore only available in the response.

Finally, the deleteResource function is triggered when deleting a resource. Again, a request like DELETE managed/user/{userId}/roles/{roleId} is considered an update, and in this case reconById is employed.

Note that the requests sent to the IDM proxy are fire and forget; we do not care about the response, hence the use of thenOnResult, which returns IDM's response to the caller.


Though replays are “fire and forget”, each replay is recorded in a custom audit log. This section of the documentation describes how this is done: Record custom audit events. The audit topic is “ReplayActivityTopic”, which reports the successful events “reconByIdReplayEvent” and “deleteReplayEvent”. The former contains the actual reconciliation details, while the latter records the deleted resource id and resource type.

Replay errors are recorded as well - and these are the most useful information here: you want to know whether all updates have been successful, and if not, have the ability to perform corrections. The associated event types are ‘reconByIdReplayErrorEvent’, ‘deleteReplayError’, ‘deleteRepoLinkReplayError’ and ‘deleteReplayNoLink’. The last could be minor, or might never occur: it indicates that no link was found to delete, although the resource has been successfully deleted in Identity Cloud.

Have fun

Follow the instructions in platform-compose / branch idm-proxy to deploy the IDM proxy. Have an IDM 6.5 instance handy, into which you have loaded a couple of users, roles and assignments. Make sure to update the host names in the routes and scripts correctly. And, as explained in the Readme, perform an initial reconciliation.

Then play with the admin UI loaded through the IG instance on port 8090. See how edits in the UI are propagated to Identity Cloud.

Use this command to spy on nginx: docker logs -f nginx.local. You can then observe the requests sent to the IDM proxy when you perform updates in the admin console.


This method is more elaborate, as it involves a Groovy script in the IG route (so please test, test, and test again), and requires additional infrastructure to host an Identity Gateway cluster that can support the load. However, it does not impact the production configuration - only the infrastructure.

This hopefully completes a series of articles on the migration subject (until the next time :slight_smile: ), and here they are if you want to explore more:

Reading the following two articles will also help you understand the logic in the Groovy script, and give you the ability to adapt or enhance the solution for your needs: