Not too long ago I encountered some issues when configuring UEM and vIDM integration. When providing the vIDM URL in UEM to configure the integration, it would error out with the error below:
After some troubleshooting it appeared that the access policies were not properly configured: the last rule in the default access application ruleset was blocking access. The resolution was editing the default policy so it ends with the Password method, which is associated with the built-in Workspace IDP. After that, the integration part works as expected.
Another configuration task that caught me by surprise: after the integration was set up between UEM and vIDM, the following errors occurred:
It turned out that the integration between UEM and vIDM depends on Active Directory integration. The basic system domain accounts (even full admins) won't work in this scenario. The resolution is configuring a domain account with the necessary admin rights in both tenants; then it works as expected.
Hope this helps!
A quick-win blog post to give a heads-up: when you are in the process of configuring vIDM and O365, you might encounter native clients prompting for authentication and a hefty delay when you flip the authentication for the requested domain from managed to federated with vIDM. This delay can be up to eight hours! Thanks to the #community #vExpert I got this answer quite fast, because I recalled that Laurens van Duijn had posted something similar in the vExpert Slack group, mentioning that he had seen this kind of behavior.
In summary: do it on a Friday and inform your users.
A shout-out to Laurens van Duijn; be sure to follow him on Twitter and his blog.
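One way to verify whether the flip from managed to federated has actually taken effect is to query Microsoft's public getuserrealm endpoint and inspect the NameSpaceType field it returns. A minimal sketch, parsing a sample response offline (the endpoint URL and field name come from the public login.microsoftonline.com service; the sample XML below is illustrative, not a captured response):

```python
import xml.etree.ElementTree as ET
from urllib.parse import quote

def realm_url(login: str) -> str:
    # Public endpoint that reports the managed vs. federated state of a domain.
    return ("https://login.microsoftonline.com/getuserrealm.srf"
            f"?login={quote(login)}&xml=1")

def namespace_type(realm_xml: str) -> str:
    # The response contains <NameSpaceType>Managed|Federated</NameSpaceType>.
    root = ET.fromstring(realm_xml)
    node = root.find("NameSpaceType")
    return node.text if node is not None else "Unknown"

# Illustrative response for a federated domain (not a live query):
sample = """<RealmInfo Success="true">
  <NameSpaceType>Federated</NameSpaceType>
  <DomainName>example.com</DomainName>
</RealmInfo>"""

print(namespace_type(sample))  # Federated
```

Polling this endpoint for your domain during the eight-hour window gives you a cheap way to see when federation has propagated, without waiting for a client to prompt.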
Not too long ago I encountered a vCenter instance blowing up the /dev/mapper/core_vg-core partition with gigabytes of Java dump files. Just for reference, the customer's setup is a dual SDDC with a vCenter Server at each site, comprising vCenter 6.5 U2 with Embedded Linked Mode enabled.
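To see what is actually eating the core partition, it helps to list the biggest files under the dump location. A minimal sketch (the directory is a parameter so it can be demoed against a scratch folder; on a VCSA you would point it at the core partition mount instead):

```python
import tempfile
from pathlib import Path

def largest_files(directory: str, top: int = 5):
    """Return (size_bytes, path) pairs for the biggest files under a tree."""
    sizes = [(p.stat().st_size, str(p))
             for p in Path(directory).rglob("*") if p.is_file()]
    return sorted(sizes, reverse=True)[:top]

# Demo against a scratch directory; on the appliance you would pass
# the dump location instead of a temp dir.
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "small.log").write_bytes(b"x" * 10)
    Path(tmp, "big.hprof").write_bytes(b"x" * 10_000)
    for size, name in largest_files(tmp):
        print(size, name)
```

Sorting by size up front makes it obvious whether you are looking at one runaway heap dump or an accumulation of many smaller ones.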
While researching the issue I encountered the following two articles:
I decided to open up a support case. This resulted in a session in which support stated that they had seen this sort of issue arising in 6.7 U1 and higher, root-caused against hardware version 13 for the appliance combined with WIA (Integrated Windows Authentication) Active Directory integration. Oddly, this setup had a hardware version 13 deployment on both sites, with only one site experiencing the problem, and it was using Active Directory over LDAP integration.
The resolution of the issue was downgrading the VCSA virtual hardware to version 10.
One way is restoring the VCSA with a VAMI back-up restore; my way was re-registering the appliance with the VMX file downgraded to the needed hardware version, see https://communities.vmware.com/thread/517825
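For reference, the VMX route boils down to editing the virtual hardware version key in the appliance's VMX file before re-registering it. A minimal fragment (the value matches the version 10 target mentioned above; take a backup of the VMX first):

```
virtualHW.version = "10"
```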