Introduction
A previous article - IDM: Zero Downtime Upgrade Strategy Using a Blue/Green Deployment - presented a strategy for migrating identities from an IDM 5.5 deployment to 6.5 using a Groovy CREST connector. Since then, starting with 6.5.1, the CREST Groovy connector has been removed. Furthermore, Identity Cloud (and platform deployments) now only support bearer token authentication.
This article’s aim is therefore to provide a sample solution for these latest platform evolutions, demonstrating a migration from an IDM 6.5.1 instance to Identity Cloud.
The sample repository
This Git repository: https://stash.forgerock.org/scm/proserv/scriptedrestmigration.git provides a demonstration of migrating identities - users with roles and assignments - to Identity Cloud. The changes from the sample provided in IDM: Zero Downtime Upgrade Strategy Using a Blue/Green Deployment are:
- Use of the Groovy scripted REST connector
- OAuth2 Authentication
- Migration of roles and assignments, including conditional roles and temporal constraints.
- Migration of user metadata.
The remainder of the article focuses on describing the main design artefacts, to help you make the solution your own.
When to use this solution
In order to perform a migration using the out-of-the-box Migration Service, the IDM server must be kept out of update traffic to ensure data consistency on the other end. This is an acceptable solution when some minimal outage is permitted, and the data size allows the migration to complete within this window. This is the ideal case, as it deals with any managed object model, however complex, and ensures an exact data copy.
But for a large dataset, or when an outage is unacceptable, the solution presented here might be an option. If it is not, then consider looking at compromises, such as the methodology presented in this article: An Incremental Migration with IDM.
Here are the parameters to take into account:
- What is the object graph complexity? Syncing relationships incurs additional I/O on the target system to ensure consistency.
- The production system must be re-configured to add a synchronisation mapping, and the scripted REST connector.
- Since this is sample code, you have to adapt it to fit the deployment requirements, and fully test it.
- The reconciliation adds load to the server, so the deployment must be carefully re-planned so that users are not impacted.
- What is the migration completion deadline? The more complex the object model and/or the larger the data, the longer the migration process takes; this can be mitigated by re-sizing the deployment and dedicating a couple of IDM instances to the clustered reconciliation. However, this planning adds to the task.
Prepare Identity Cloud to receive the updates
The OAuth client used to authenticate with Identity Management in Identity Cloud must be created, and its details are also provided in the provisioner configuration:
"configurationProperties" : {
"serviceAddress" : "https://<tenant URL>:443",
"OAuthClientId" : "idm-client",
"OAuthClientSecret" : "password",
"OAuthScope" : "fr:idm:*",
"OAuthTokenEndpoint" : "https://<tenant URL>:443/am/oauth2/realms/root/realms/alpha/access_token"
...
The subject - “idm-client” - must have the privileges to perform the required operations, which is done by configuring authentication and access. To obtain the authentication configuration, issue a REST call to IDM with an access token obtained through a resource owner password credentials grant flow. This is documented here: Authenticate to Identity Cloud REST API with access token.
Authentication
To get the authentication configuration, perform the following cURL command:
curl 'https://{{tenant-URL}}/openidm/config/authentication' \
--header 'Authorization: Bearer *.....*'
In the authentication JSON result, locate:
"staticUserMapping": [
{
"localUser": "internal/user/openidm-admin",
"roles": [
"internal/role/openidm-authorized",
"internal/role/openidm-admin"
],
"subject": "amadmin",
"userRoles": "authzRoles/*"
},
And add to the array the following definition:
,
{
"localUser": "internal/user/idm-provisioning",
"roles": [
"internal/role/platform-provisioning"
],
"subject": "idm-client"
}
And send the modified JSON payload to Identity Cloud:
curl --request PUT 'https://{{tenant-URL}}/openidm/config/authentication' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer .....' \
--data-raw '{
"_id": "authentication",
"rsFilter": { .....
}
}'
Now idm-client is authorised to perform provisioning calls to IDM.
Authorization
To get the authorization configuration, perform the following call:
curl 'https://{{tenant-URL}}/openidm/config/access' \
--header 'Authorization: Bearer .....'
And modify the access JSON result so that the entry with "pattern": "managed/*" looks like this:
{
"pattern": "managed/*",
"roles": "internal/role/platform-provisioning",
"methods": "create,read,delete,query,patch,update"
}
Then send the modified JSON to IDM:
curl --request PUT 'https://{{tenant-URL}}/openidm/config/access' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer .....' \
--data-raw '{
"_id": "access",
"configs": [ ...
}'
This adds update and delete to the permitted actions.
Sample walkthrough
The scripted REST connector
The code is provided in the tools/ folder.
The connector implementation supports all the user profile properties defined in Identity Cloud, including the custom attributes. The supported types (“Object Class”) are alpha_user, alpha_role, alpha_assignment and alpha_usermeta. If the target is the bravo realm, then all scripts must be adapted to add those types, using the existing code as a guide.
There are important points with regard to this design:
- Updates are performed with PATCH, rather than PUT. This is because non-nullable single relationships (like manager) can’t be deleted with a PUT; it has to be a PATCH with a remove operation.
- The IDM provisioning service blocks the syncing of the _id property. So the property name in the provisioner configuration is uid, but the native name is _id. Therefore, in the reconciliation mappings, _id (source) is mapped to uid (target); this preserves the managed object ids. However, the relationship ids can’t be preserved, as IDM does not allow the client to generate them.
- The correlation query is based on _id equality, e.g. target's uid == source's _id. So roles, users and assignments keep the same id. However, relationship ids can only be assigned by the server; therefore, these are not preserved, and the result is not an exact copy.
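The correlation described in the last point can be sketched as a query filter on the target uid. This is a minimal illustration wrapped in a plain function; the actual correlationQuery script in the sample repository may differ:

```javascript
// Minimal sketch of the correlation logic: a source managed object is
// correlated to the target object whose "uid" equals the source "_id".
// The function wrapper is hypothetical, for illustration only.
function correlationQuery(source) {
    // The target stores the original managed object id in "uid",
    // because the native "_id" cannot be set by the client.
    return { _queryFilter: 'uid eq "' + source._id + '"' };
}
```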
The reconciliation mappings
roles, members and assignments
The mappings do not include the relationship properties: first, depending on the schema, they may not be returned by default; second, they have to be transformed on the fly, since the _ref property in Identity Cloud targets another object type (e.g. managed/role/<_id> is transformed into managed/alpha_role/<_id>). Therefore, the relationship synchronisation is performed in the onUpdate and onCreate scripts. In the first ever reconciliation, the user objects are synchronised without the roles relationships - or with an incomplete set - as not all roles will be present yet in Identity Cloud. Since the role mapping also synchronises members, the data is guaranteed to be consistent at the end. In the same manner, some of the users won’t have reports and manager sync’ed until the other end has been sync’ed, and the same then applies for assignment. There are two methods for checking whether the other end of a relationship has already been sync’ed:
- Perform a REST call to IDC
- Or, check the local repo/link resource.
The current implementation relies on the latter. This is implemented in syncRelationships.js:
- The relationship property is read locally from /managed.
- Then, for each item, the relationship end is tested for existence; for example, provided the source _ref is /managed/role/<id>, whether /managed/alpha_role/<id> exists in Identity Cloud. If yes, it is added to the candidate target object.
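The two steps above can be sketched as follows. This is an illustrative reconstruction, not the sample's actual code: the function names and the existsInTarget callback (which stands in for the repo/link lookup) are hypothetical.

```javascript
// Each source _ref such as "managed/role/<id>" is rewritten to target the
// corresponding Identity Cloud type, e.g. "managed/alpha_role/<id>".
var TYPE_MAP = {
    "managed/user": "managed/alpha_user",
    "managed/role": "managed/alpha_role",
    "managed/assignment": "managed/alpha_assignment"
};

function transformRef(ref) {
    // Split "managed/role/<id>" into type ("managed/role") and id ("<id>")
    var idx = ref.lastIndexOf("/");
    var type = ref.substring(0, idx);
    var id = ref.substring(idx + 1);
    return TYPE_MAP[type] + "/" + id;
}

// Keep only the relationship items whose other end already exists in the
// target; "existsInTarget" stands in for the repo/link existence check.
function syncableRefs(refs, existsInTarget) {
    return refs.map(transformRef).filter(existsInTarget);
}
```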
Conditional roles
There is a special treatment for the user roles: only directly assigned roles are sync’ed (those whose _refProperties do not have a _grantType). The conditionally assigned relations are instead established in Identity Cloud when the corresponding role condition property is sync’ed.
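The filtering on _refProperties can be sketched as below. The function name is hypothetical; the filter itself follows the rule stated above (grants without a _grantType are carried over):

```javascript
// Hypothetical sketch of the grant-type filtering: only role grants whose
// _refProperties carry no _grantType are sync'ed directly; conditional
// grants are re-established by Identity Cloud once the role's "condition"
// property has been synchronised.
function directGrants(roleRefs) {
    return roleRefs.filter(function (rel) {
        var props = rel._refProperties || {};
        return props._grantType === undefined;
    });
}
```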
_meta
The user _meta property is processed by the syncMeta function in syncRelationships.js. Consider carefully whether this attribute needs to be propagated, and whether the value format is still compatible with Identity Cloud. If not, consider removing this feature.
authzRoles
Finally, the users’ authzRoles property is synchronised, though with limitations: custom internal roles and authorization managed roles (and their privileges) are not supported in this sample.
Running the reconciliation
The three reconciliation mappings can be run one after the other, or in parallel. Since the logic checks target references, the data should be consistent at the end of the process. As updates hit the source instance, they are implicitly sync’ed as well during the reconciliation. At the end, the old system can be taken out of traffic, and after a short while (to allow the last updates to propagate) Identity Cloud can take over.
Other releases than 6.5.1
For 5.5 and earlier, the scripted REST connector does not support OAuth2 configuration. Therefore, these parameters (client id, client secret, scope, and so on) have to be hard-coded into the Customizer.groovy script.
For 7.x, the scripted REST connector is no longer needed; just define an IDM external service:
external.idm-idc.json:
{
"instanceUrl" : "https://&{tenant.url}/openidm/",
"authType" : "bearer",
"clientId" : "idm-client",
"clientSecret" : "password",
"scope" : [ "fr:idm:*" ],
"tokenEndpoint" : "https://&{tenant.url}/am/oauth2/realms/root/realms/alpha/access_token"
}
Then, in all JavaScript scripts, replace references to /system/scriptedrest/ with /external/idm/idc/openidm/.
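That substitution can be sketched as a small helper. This function is purely hypothetical, for illustration; in practice the scripts are simply edited in place:

```javascript
// Hypothetical helper illustrating the path substitution described above:
// scripted REST connector paths become external IDM service paths.
function toExternalPath(path) {
    return path.replace("/system/scriptedrest/", "/external/idm/idc/openidm/");
}
```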
Conclusion
This is sample code that can’t possibly cover all possible cases; it is thus a template to which you can add your own specifics. Most importantly, a large part of the process is ensuring that this strategy will meet the business requirements, and, of course: rehearse, test, test, and test again before engaging the migration, to ensure success!