
Azure Management Portal: Properly Remove Co-Administrator Permissions

Published on Wednesday, August 26, 2015

Something I’ve noticed for a while now: whenever I perform an Add-AzureAccount, I see more subscriptions being returned than I’d expect. The list I have to choose from in the old portal (manage.windowsazure.com) is definitely not showing that many subscriptions. The new portal (portal.azure.com) also displays more subscriptions than I’d expect. The problem with sorting those out is that many of them belong to subscriptions I once had access to but no longer have: either customer subscriptions or test subscriptions from colleagues.

For test subscriptions I don’t really care whether people take my permissions away or not. But for production subscriptions I feel more at ease when I don’t hold any permissions I don’t need. Recently a customer mentioned that my permissions had been taken away, yet I still saw their entry in the new Portal. Hmm, odd! Here’s how that’s possible:

First off, I was initially granted access with my Microsoft Account (invisibal_at_gmail.com) through the old Portal:

[screenshot]

Now I could manage that subscription through both the old and the new Portal.

[screenshot]

And as I also worked for another “customer”, I had multiple subscriptions to manage: Setspn and RealDolmen Azure POC:

[screenshot]

After my work was done, the customer removed me from the list of Administrators of the Setspn subscription.

[screenshots]

Now when I log in to the old Portal (manage.windowsazure.com) I’ll only see the other subscription.

[screenshot]

However, when I log on to the new Portal, it’s still there!

[screenshot]

Trying to show “all resources” of the Setspn subscription shows nothing. As expected.

[screenshot]

The same is observed through PowerShell:

[screenshot]

Now the only solution I could think of was to also remove the Live ID from the Azure Active Directory that the subscription is linked to.

[screenshots]

After removing the user from the Azure AD, you’ll no longer see the subscription in the new Portal:

[screenshot]

Well, as you can see, not exactly… Typically when you try to reproduce things for screenshots, it doesn’t happen or it goes wrong. This is a case of “it goes wrong”. I tried a few times, but the GUID (belonging to the Azure AD I was part of) kept appearing… All I can say is that when the customer actually removed me from their Azure AD, the entry was properly removed from my Azure Portal UI and PowerShell experience.

Conclusion:

I’m pretty sure the only reason you keep seeing the entry in the new Portal is that you still have the User role assigned in the Azure Active Directory instance. So in a way you’re not really seeing the subscription, but rather the Azure Active Directory instance. The issue remains the same though: it clutters your PowerShell (Get-AzureSubscription) and Portal UI experience. So whenever someone takes your co-administrator permissions away, ask them to also remove you from the Azure AD instance.
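If you want to see the clutter for yourself, or help the customer with the cleanup, here’s a minimal PowerShell sketch. The first part uses the classic Azure module mentioned in this post; the second part assumes the MSOnline module, sufficient admin rights in the customer’s Azure AD and an example search string, so treat it as an illustration rather than a recipe.

# List the subscriptions/directories my account still references (classic Azure module)
Add-AzureAccount
Get-AzureSubscription | Select-Object SubscriptionName, SubscriptionId

# On the customer's side: locate the external Microsoft Account in their Azure AD and remove it.
# Assumes the MSOnline module; the search string is just an example.
Connect-MsolService
$extUser = Get-MsolUser -SearchString "invisibal" | Select-Object -First 1   # verify this is the right account!
Remove-MsolUser -ObjectId $extUser.ObjectId -Force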


MIM 2016: PowerShell Workflow and PowerShell v3

Published on Friday, August 21, 2015

One of the issues of running FIM 2010 R2 on Windows Server 2012 is calling PowerShell scripts from within FIM Portal workflows (.NET). It seems the workflow code runs on .NET 3.5 but uses PowerShell 2.0. When we started migrating our FIM 2010 environment to MIM 2016 (on Server 2012 R2) we ran into the same issues. This is the .NET code that had been running fine on Windows 2008 R2 for years:

// Create a runspace, hand it to a PowerShell instance and invoke the configured cmdlet/script
RunspaceConfiguration config = RunspaceConfiguration.Create();
Runspace runspace = RunspaceFactory.CreateRunspace(config);
runspace.Open();
PowerShell psh = PowerShell.Create();
psh.Runspace = runspace;
psh.AddCommand(this.PSCmdlet);

psh.Invoke();

And one of the scripts that was executed contained code like this:


doSomething.ps1

#region Parameters
Param([string]$UserName,[string]$Department)
#endregion Parameters
Import-Module ActiveDirectory
Get-Aduser 
...

Now when porting that same logic to our MIM 2016 environment running on Windows Server 2012 R2, we saw that our Get-AD* cmdlets returned nothing. After some investigation we found that the following error was triggered when running Import-Module ActiveDirectory:

The 'C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\ActiveDirectory\ActiveDirectory.psd1' module cannot be imported because its manifest contains one or more members that are not valid. The valid manifest members are ('ModuleToProcess', 'NestedModules', 'GUID', 'Author', 'CompanyName', 'Copyright', 'ModuleVersion', 'Description', 'PowerShellVersion', 'PowerShellHostName', 'PowerShellHostVersion', 'CLRVersion', 'DotNetFrameworkVersion', 'ProcessorArchitecture', 'RequiredModules', 'TypesToProcess', 'FormatsToProcess', 'ScriptsToProcess', 'PrivateData', 'RequiredAssemblies', 'ModuleList', 'FileList', 'FunctionsToExport', 'VariablesToExport', 'AliasesToExport', 'CmdletsToExport'). Remove the members that are not valid ('HelpInfoUri'), then try to import the module again.

There are various topics online that cover this exact issue.
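A quick way to confirm which engine version the workflow is actually hosting is to have the invoked script dump it to a file; a minimal sketch (the output path is just an example):

# Write the engine version of the hosting runspace to a file for inspection.
# On an affected system this shows 2.0 even though PowerShell 3.0+ is installed on the box.
"$($PSVersionTable.PSVersion)" | Out-File C:\Users\Public\psversion.txt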

It seems some PowerShell modules are hardwired to require PowerShell v3. I came across the following suggestion a few times, but it scares me a bit: with my (limited?) knowledge of .NET it’s hard to estimate what impact this might have on FIM. The suggestion was to add the following to the Microsoft.ResourceManagement.Service.exe.config file.


<startup>
 <supportedRuntime version="v4.0"/>
 <supportedRuntime version="v2.0.50727"/>
</startup>

I found some approaches using a script that calls another script, but I wanted to avoid this. So I came up with the following approach to update the workflow itself:

PowerShellProcessInstance instance = new PowerShellProcessInstance(new Version(3, 0), null, null, false);
Runspace runspace = RunspaceFactory.CreateOutOfProcessRunspace(new TypeTable(new string[0]), instance);

Source: http://stackoverflow.com/questions/22383915/how-to-powershell-2-or-3-when-creating-runspace

The PowerShellProcessInstance is a class available in System.Management.Automation, which is part of PowerShell itself. I tried various DLLs, but either they didn’t know the class or they resulted in the following error when building my .NET project:

The primary reference "System.Management.Automation" could not be resolved because it has an indirect dependency on the .NET Framework assembly "System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" which has a higher version "4.0.0.0" than the version "2.0.0.0" in the current target framework.    FODJ.FIM.Workflow.ActivityLibrary

My project is configured to build for .NET 3.5, and if I’m not mistaken .NET 3.5 uses CLR 2.0, whilst .NET 4/4.5 use CLR 4.0 (see .NET Framework Versions and Dependencies). So I guess this route isn’t going to work after all. Back to the drawing board. As I only have a handful of scripts to call like this, I decided to go back to the wrapper script approach:

The script containing the logic to be executed:


doSomething.script.ps1

#region Parameters
Param([string]$UserName,[string]$Department)
#endregion Parameters
Import-Module ActiveDirectory
Get-Aduser 
...

As you can see I prepended .script to the .ps1 extension. And here’s my wrapper script. This is the one that is called from the FIM/MIM Workflow:


doSomething.ps1

Param([string]$UserName,[string]$Department)
$script = $myinvocation.invocationName.replace(".ps1",".script.ps1")
powershell -Version 3.0 -File $script -UserName $UserName -Department $Department

There are some things to note: the Param line is just a copy-paste from the base script, and I simply specify the parameters again when calling the base script. I had been looking for a way to use unbound parameters, e.g. the calling workflow says -UserName … -Department … and the wrapper script just passes them through. That would have allowed me to have a generic wrapper script. I got pretty close to getting it to work, but I kept running into issues. In the end I just decided to go for KISS.
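For what it’s worth, here’s a rough sketch of the generic wrapper I was aiming for, relying on automatic argument splatting. I haven’t validated this in a FIM workflow context, so consider it an illustration of the idea rather than something running in production:

# Generic wrapper sketch: forward whatever arguments this script receives to the
# matching .script.ps1 under PowerShell 3.0. There is no Param block, so every
# argument (named or positional) lands in $args and is splatted through as-is.
$script = $MyInvocation.MyCommand.Path -replace '\.ps1$', '.script.ps1'
powershell -Version 3.0 -File $script @args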

Note: if you want to capture errors like the one I showed from “Import-Module ActiveDirectory”, just use the $error variable. You can use it like this; saving it to disk is just one example. Typically you would integrate this with your logging function.


$error.Clear()
Import-Module ActiveDirectory
$error | Out-File C:\Users\Public\error.txt


Azure Quick Tip: Block or Allow ICMP using Network Security Groups

Published on Tuesday, August 18, 2015

For a while now Azure has allowed administrators to restrict network communication between virtual machines in Azure. Restrictions can be configured through the use of Network Security Groups (NSGs). Those can be linked to either subnets or virtual machines. Check the following link if you want some more background information: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-nsg/

An NSG always contains some default rules. By default all outbound traffic is allowed, and inbound traffic from other subnets (not the internet) is also allowed. Typically if you ping between VMs on different subnets (same VNET) you’ll see that the machines respond as expected.

Now what if you want to restrict traffic between subnets but still allow ICMP? ICMP is great for troubleshooting connectivity. Set-AzureNetworkSecurityRule allows you to provide the Protocol parameter. In a typical firewall scenario this value would contain TCP, UDP, ICMP, … Ping uses ICMP, which is neither TCP nor UDP… Azure only seems to allow TCP, UDP and * for the protocol:

[screenshot]

Now how can we block all traffic but allow ICMP? Simple: by explicitly denying UDP and TCP but allowing *. In this example I included the allow rule, but it should be covered by the default rules anyhow.


#allow ping, block UDP/TCP
Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name BlockTCP -Type Inbound -Priority 40000 -Action Deny -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "TCP"

Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name BlockUDP -Type Inbound -Priority 40001 -Action Deny -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "UDP"

Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name AllowPing -Type Inbound -Priority 40002 -Action Allow -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "*"

If we want to work the other way round (allow UDP/TCP but block ICMP), we can turn the logic around:


#block ping, allow UDP/TCP
Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name AllowTCP -Type Inbound -Priority 40000 -Action Allow -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "TCP"

Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name AllowUDP -Type Inbound -Priority 40001 -Action Allow -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "UDP"

Get-AzureNetworkSecurityGroup -name "NSG-1" | Set-AzureNetworkSecurityRule -Name BlockPing -Type Inbound -Priority 40002 -Action Deny -SourceAddressPrefix "*"  -SourcePortRange '*' -DestinationAddressPrefix '*' -DestinationPortRange '*' -Protocol "*"

The source/destination information is pretty open as I use * for those, but that’s just an example here. It’s up to you to decide which ranges to apply this to, and you’ll probably open up some additional ports for actual traffic to be allowed. The above logic is also mentioned in the documentation I linked at the beginning of the article:

The current NSG rules only allow for protocols ‘TCP’ or ‘UDP’. There is not a specific tag for ‘ICMP’. However, ICMP traffic is allowed within a Virtual Network by default through the Inbound VNet rules that allow traffic from/to any port and protocol ‘*’ within the VNet.
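To double-check what actually ended up in the NSG after running the commands above, you can dump the rule set; a quick sketch, assuming the classic (ASM) Azure PowerShell module used throughout this post:

# Show the NSG including its rules (the default rules and the ones we just added),
# so the priorities, actions and protocols can be verified at a glance.
Get-AzureNetworkSecurityGroup -Name "NSG-1" -Detailed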

Kudos to my colleague Nichola (http://www.vnic.be) for taking the time to verify this.


FIM 2010 (NOT R2!) Upgrade to MIM 2016

Published on Tuesday, August 11, 2015

This blog post will assist you in upgrading a FIM 2010 environment to MIM 2016. To be clear: FIM 2010, not FIM 2010 R2. Disclaimer: if you “play” around like I do below, make sure you use one or more of the following:

  • A test environment
  • SQL Backups
  • VM Snapshots

Trust me, sooner or later they’ll save your life, or at least your day. After each attempt I did an SQL restore to be absolutely sure my upgrade path was OK. The installer “touches” the databases pretty quickly, even if it fails early in the process.

The upgrade process is explained on TechNet as well (Upgrading Forefront Identity Manager 2010 R2 to Microsoft Identity Manager 2016), but the guide is only partially applicable to the scenario I had in mind:

  • No information on upgrading from FIM 2010, only FIM 2010 R2 is mentioned
  • No information on transitioning to a more recent Operating System
  • No information on transitioning to a more recent database platform

In order to clarify I’ll show a topology diagram of our current setup:

[topology diagram]

Current versions:

  • Operating System: Windows 2008 R2
  • SQL: SQL Server 2008
  • FIM: FIM 2010 (build 4.0.3576.2)

Target versions:

  • Operating System: Windows 2012 R2
  • SQL: SQL Server 2012 SP1
  • FIM: MIM 2016 (RTM)

I won’t post a target diagram as in our case we decided not to change anything. We intend to upgrade FIM 2010 to MIM 2016. However, we would also like to upgrade the various supporting components, such as the underlying operating system and the SQL Server edition. The TechNet guide shows you what has to be done to perform an in-place upgrade of FIM 2010 R2 to MIM 2016. If I were to do an in-place upgrade I would end up with MIM 2016 on Server 2008 R2, and I’d rather not do an in-place upgrade of 2008 R2 to 2012 R2. That means I would have to migrate MIM 2016 to another box. Another disadvantage of upgrading in place is that you’ll have downtime during the upgrade. Well, eventually you’ll have some downtime anyway, but if you can leave the current environment intact, you can avoid the lengthy restore process if something goes wrong. And what about the database upgrade process? Depending on your environment that can take quite some time. If you want to plan your window for the upgrade, you could follow my approach as a “dry run” with the production data, without impacting your current (running) environment! If you’re curious how, read on!

I wanted to determine the required steps to get from current to target with the least amount of hassle. I’ll describe the steps I followed and the issues I encountered:

Upgrading/Transitioning the FIM Synchronization Service: Attempt #1

  1. Stop and disable all scheduled tasks that execute run profiles
  2. Stop and disable all FIM 2010 services (both Sync and Service)
  3. Backup the FIMSynchronization database on the SQL 2008 platform (see note at bottom)
  4. Restore the FIMSynchronization database on the SQL 2012 platform
  5. Enable SQL Server Service Broker for the FIMSynchronization database (see note at bottom)
  6. Transfer the logins used by the database from SQL 2008 to SQL 2012
  7. Copy the FIM Synchronization service encryption keys to the Windows 2012 R2 Server
  8. Run the MIM 2016 Synchronization Service MSI on the Windows 2012 R2 server

However that resulted in the following events and concluded with an MSI installation failure:

[screenshot]

In words: Error 25009.The Microsoft Identity Manager Synchronization Service setup wizard cannot configure the specified database. Invalid object name 'mms_management_agent'. <hr=0x80230406>

And in the Application Event log:

[screenshot]

In words: Conversion of reference attributes started.

[screenshot]

In words: Conversion of reference attributes failed.

[screenshot]

In words: Product: Microsoft Identity Manager Synchronization Service -- Error 25009.The Microsoft Identity Manager Synchronization Service setup wizard cannot configure the specified database. Invalid object name 'mms_management_agent'. <hr=0x80230406>

The same information was also found in the MSI verbose log. Some googling led me to some fixes regarding SQL access rights or the SQL compatibility level, none of which worked for me.

Upgrading/Transitioning the FIM Synchronization Service: Attempt #2

This attempt is mostly the same as the previous one, except that now I’ll be running the MIM 2016 installer directly on the FIM 2010 Synchronization Server. I’ll save you the trouble: it fails with the exact same error. As a bonus, the setup rolls back and leaves you with a server with NO FIM installed.

Upgrading/Transitioning the FIM Synchronization Service: Attempt #3

I’ll provide an overview of the steps again:

  1. Stop and disable all scheduled tasks that execute run profiles
  2. Stop and disable all FIM 2010 services (both Sync and Service)
  3. Backup the FIMSynchronization database on the SQL 2008 platform (see note at bottom)
  4. Restore the FIMSynchronization database on the SQL 2012 platform
  5. Enable SQL Server Service Broker for the FIMSynchronization database (see note at bottom)
  6. Transfer the logins used by the database from SQL 2008 to SQL 2012
  7. Install a new (temporary) Windows 2012 Server
  8. Copy the FIM Synchronization service encryption keys to the Windows 2012 Server
  9. Run the FIM 2010 R2 (4.1.2273.0) Synchronization Service MSI on the Windows 2012 server –> Success
  10. Stop and disable the FIM Synchronization Service on the Windows 2012 server
  11. Copy the FIM Synchronization service encryption keys to the Windows 2012 R2 Server
  12. Run the MIM 2016 Synchronization Service MSI on the Windows 2012 R2 server

Again this resulted in several events and concluded with an MSI installation failure:

[screenshot]

In words: Product: Microsoft Identity Manager Synchronization Service -- Error 25009.The Microsoft Identity Manager Synchronization Service setup wizard cannot configure the specified database. Incorrect syntax near 'MERGE'. You may need to set the compatibility level of the current database to a higher value to enable this feature. See help for the SET COMPATIBILITY_LEVEL option of ALTER DATABASE.

Now that’s an error that doesn’t seem too scary. It’s clearly suggesting to raise the database compatibility level so that the MERGE feature is available.

Upgrading/Transitioning the FIM Synchronization Service: Attempt #4 –> Success!

I’ll provide an overview of the steps again:

  1. Stop and disable all scheduled tasks that execute run profiles
  2. Stop and disable all FIM 2010 services (both Sync and Service)
  3. Backup the FIMSynchronization database on the SQL 2008 platform (see note at bottom)
  4. Restore the FIMSynchronization database on the SQL 2012 platform
  5. Enable SQL Server Service Broker for the FIMSynchronization database (see note at bottom)
  6. Transfer the logins used by the database from SQL 2008 to SQL 2012
  7. Don’t worry about the SQL Agent Jobs, the MIM Service setup will recreate those
  8. Install a new (temporary) Windows 2012 Server
  9. Copy the FIM Synchronization service encryption keys to the Windows 2012 Server
  10. Run the FIM 2010 R2 (4.1.2273.0) Synchronization Service MSI on the Windows 2012 server
  11. Stop and disable the FIM Synchronization Service on the Windows 2012 server
  12. Change the SQL compatibility level of the database to SQL Server 2008 (100)
  13. Copy the FIM Synchronization service encryption keys to the Windows 2012 R2 Server
  14. Run the MIM 2016 Synchronization Service MSI on the Windows 2012 R2 server –> Success!

Changing the compatibility level can easily be done using SQL Management Studio:

[screenshot]

In my case it was on SQL Server 2005 (90) and I changed it to SQL Server 2008 (100). If you prefer doing this through an SQL query:

USE [master]
GO
ALTER DATABASE [FIMSynchronization] SET COMPATIBILITY_LEVEL = 100
GO

Bonus information:

This is the command I ran to install both the FIM 2010 R2 and MIM 2016 Synchronization Instance:

Msiexec /i "Synchronization Service.msi" /qb! STORESERVER=sqlcluster.contoso.com SQLINSTANCE=fimsql SQLDB=FIMSynchronization SERVICEACCOUNT=svcsync SERVICEDOMAIN=CONTOSO SERVICEPASSWORD=PASSWORD GROUPADMINS=CONTOSO\GGFIMSyncSvcAdmins GROUPOPERATORS=CONTOSO\GGFIMSyncSvcOps GROUPACCOUNTJOINERS=CONTOSO\GGFIMSyncSvcJoiners GROUPBROWSE=CONTOSO\GGFIMSyncSvcBrowse GROUPPASSWORDSET=CONTOSO\GGFIMSyncSvcPWReset FIREWALL_CONF=1 ACCEPT_EULA="1" SQMOPTINSETTING="0" /l*v C:\MIM\LOGS\FIMSynchronizationServiceInstallUpgrade.log

No real rocket science here. However, make sure not to run /q but use /qb!, as the latter allows popups to be shown and answered by you, for instance when prompted to provide the encryption keys.
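As an extra aside for step 6 in the attempts above (transferring the SQL logins), here’s a minimal T-SQL sketch of what that could look like for Windows-authenticated service accounts. The account names are the example ones used elsewhere in this post; for SQL-authenticated logins you’d typically use Microsoft’s sp_help_revlogin script instead, so SIDs and password hashes are preserved.

-- Recreate the Windows logins used by the FIM databases on the new SQL 2012 instance
-- (example account names, adjust to your environment)...
CREATE LOGIN [CONTOSO\svcsync] FROM WINDOWS;
CREATE LOGIN [CONTOSO\svcfim] FROM WINDOWS;

-- ...and, if needed, remap the database user that came along with the restore.
USE [FIMSynchronization];
ALTER USER [CONTOSO\svcsync] WITH LOGIN = [CONTOSO\svcsync];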

Upgrading/Transitioning the FIM Service: Attempt #1 –> Success!

Now to be honest, the upgrade I feared the most proved to be the easiest. From past FIM experiences I know the FIM Service comes with a DB upgrade utility, and the setup runs it for you. I figured: why on earth would they throw away the information to upgrade from FIM 2010 to FIM 2010 R2 and cripple the tool so that it can only upgrade FIM 2010 R2 to MIM 2016?! And indeed, they did not! Here are the steps I took to upgrade my FIM Portal & Service:

  1. Stop and disable all scheduled tasks that execute run profiles => this was already the case
  2. Stop and disable all FIM 2010 services (both Sync and Service) => this was already the case
  3. Backup the FIMService database on the SQL 2008 platform (see note at bottom)
  4. Restore the FIMService database on the SQL 2012 platform
  5. Enable SQL Server Service Broker for the FIMService database (see note at bottom)
  6. Transfer the logins used by the database from SQL 2008 to SQL 2012
  7. Install a standalone SharePoint 2013 Foundation SP2
  8. Run the MIM 2016 Service and Portal MSI on the Windows 2012 R2 server –> Success!
  9. Note: the compatibility level was raised to 2008 (100) by the setup

One thing that assured me the FIM Service database was upgraded successfully was the database upgrade log. The following event indicates where you can find it:

[screenshot]

The path: c:\Program Files\Microsoft Forefront Identity Manager\2010\Service\Microsoft.IdentityManagement.DatabaseUpgrade_tracelog.txt. An extract:

Database upgrade : Started.
Database upgrade : Starting command line parsing.
Database upgrade : Completed commandline parsing.
Database upgrade : Connection string is : Data Source=sqlcluster.contoso.com\fimsql;Initial Catalog=FIMService;Integrated Security=SSPI;Pooling=true;Connection Timeout=225.
Database upgrade : Trying to connect to database server.
Database upgrade : Succesfully connected to database server.
Database upgrade : Setting the database version to -1.
Database upgrade : Starting database schema upgrade.
Schema upgrade: Starting schema upgrade
Schema upgrade : Upgrading FIM database from version: 20 to the latest version.
Schema upgrade : Starting schema upgrade from version 20 to 21.
...
Database upgrade : Out-of-box object upgrade completed.
Database ugrade : Completed successfully.
Database upgrade : Database version upgraded from: 20 to: 2004
The AppDomain's parent process is exiting.

You can clearly see that the database upgrade utility intelligently detects the current (FIM 2010) schema and upgrades it all the way to the MIM 2016 database schema.

Bonus information

This is the command I ran to install the MIM 2016 Portal and Service. Password reset/registration portals are not deployed, no reporting and no PIM components. If you just want to test your FIM Service database upgrade, you can even get away with only installing the CommonServices component.

Msiexec /i "Service and Portal.msi" /qb! ADDLOCAL=CommonServices,WebPortals SQLSERVER_SERVER=sqlcluster.contoso.com\fimsql SQLSERVER_DATABASE=FIMService EXISTINGDATABASE=1 SERVICE_ACCOUNT_NAME=svcfim SERVICE_ACCOUNT_DOMAIN=CONTOSO SERVICE_ACCOUNT_PASSWORD=PASSWORD SERVICE_ACCOUNT_EMAIL=svcfim@contoso.com MAIL_SERVER=mail.contoso.com MAIL_SERVER_USE_SSL=1 MAIL_SERVER_IS_EXCHANGE=1 POLL_EXCHANGE_ENABLED=1 SYNCHRONIZATION_SERVER=fimsync.contoso.com SYNCHRONIZATION_SERVER_ACCOUNT=CONTOSO\svcfimma SERVICEADDRESS=fimsvc.contoso.com FIREWALL_CONF=1 SHAREPOINT_URL=http://idm.contoso.com SHAREPOINTUSERS_CONF=1 ACCEPT_EULA=1 FIREWALL_CONF=1 SQMOPTINSETTING=0 /l*v c:\MIM\LOGS\FIMServiceAndPortalsInstall.log

Note: SQL Management Studio Database Backup

Something I learned in the past year or so: whenever taking an “ad hoc” SQL backup, make sure to check the “Copy-only backup” box. That way you won’t interfere with the regular backups that have been configured by your DBA/Backup Admin.

[screenshot]
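If you prefer to script that instead of clicking through SSMS, a copy-only backup boils down to a single statement; a quick sketch where the database name and path are just examples:

-- Ad-hoc backup that does not interfere with the regular backup chain
BACKUP DATABASE [FIMService]
TO DISK = N'D:\Backup\FIMService_adhoc.bak'
WITH COPY_ONLY, INIT, STATS = 10;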

Note: SQL Server Service Broker

Lately I’ve seen cases where applications are unhappy because SQL Server Service Broker is disabled for their database. In my case it was an ADFS setup, but here’s an (older) example for FIM: http://justanothertechguy.blogspot.be/2012/11/fim-2010-unable-to-start-fim-service.html Typically a database that is restored from an SQL backup has this feature disabled. I checked the Service Broker setting for the FIM databases on the SQL 2008 platform and it was enabled; on the SQL 2012 instance where I did the restore it was off. Here are the relevant commands:

Checking whether it’s on for your database:

SELECT is_broker_enabled FROM sys.databases WHERE name = 'FIMSynchronization';

[screenshot]

Enable:

ALTER DATABASE FIMSynchronization SET ENABLE_BROKER WITH NO_WAIT

If issues arise you can try this one instead. I’m not an SQL guy; all I can guess is that it handles things less gracefully:

ALTER DATABASE FIMSynchronization SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

And now it’s on:

[screenshot]

Remark

I didn’t go too deep into certain details; if you feel something is unclear, post a comment and I’ll see if I can add information where needed. The above doesn’t describe how to install the second FIM Portal/Service server or the standby MIM Synchronization Server. I expect those instructions to be fairly simple, as the database is already at the correct level. If I encounter issues you can expect a post on that as well. To conclude: there’s more to do than just running the MIM installers and being done with it. You’ll have to transfer your customizations (like custom workflow DLLs) as well. FIM/MIM Synchronization extensions are transferred for you, but be sure to test everything. Don’t assume! Happy upgrading!

Conclusion

The FIM 2010 Portal and Service can be upgraded to MIM 2016 without the need for a FIM 2010 R2 intermediate upgrade. The FIM 2010 Synchronization Service could not be upgraded directly to MIM 2016. This could be tied to something specific in our environment, or it could be common…

Update #1 (12/08/2015)

Someone asked me whether FIM can be upgraded without providing the FIM/MIM Synchronization service encryption keys. Obviously it cannot; that part has not changed. Whenever you install FIM/MIM on a new box and point it to an existing database, it will prompt for the key file. I’ve added some “copy keys” steps to my process so that you have them ready when the MSI prompts for them.


ADFS Alternate Login ID: Some or all identity references could not be translated

Published on Wednesday, August 5, 2015

First day back at work I already had the chance to get my hands dirty with an ADFS issue at a customer. The customer had an INTERNAL.contoso.com domain and an EXTERNAL.contoso.com domain, connected with a two-way forest trust. The INTERNAL domain also had an ADFS farm. Now they wanted users from both INTERNAL and EXTERNAL to be authenticated by that ADFS. Technically this is possible through the AD trust. Nothing special there; the catch was that they wanted both INTERNAL and EXTERNAL users to authenticate using @contoso.com usernames. Active Directory has no problem authenticating users with a UPN suffix different from the domain name. You can even share the UPN suffix namespace in more than one domain, but… you cannot route shared suffixes across the forest trust! In our case that would mean the ADFS instance would be able to authenticate user.internal@contoso.com but not user.external@contoso.com, as there would be no way to locate that user in the other forest.

Alternate Login ID to the rescue! Alternate Login ID is a feature of ADFS that allows you to specify an additional attribute to be used for user lookups. Most commonly “mail” is used for this. This allows people to leave the UPN, commonly a non-public domain (e.g. contoso.local), untouched, although I mostly advise changing the UPN to something public (e.g. contoso.com). The cool thing about Alternate Login ID is that you can specify one or more LookupForests! In our case the command looked like:


Set-AdfsClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID mail -LookupForests internal.contoso.com,external.contoso.com

Some more information about Alternate Login ID: TechNet: Configuring Alternate Login ID

Remark: When alternate login ID feature is enabled, AD FS will try to authenticate the end user with alternate login ID first and then fall back to use UPN if it cannot find an account that can be identified by the alternate login ID. You should make sure there are no clashes between the alternate login ID and the UPN if you want to still support the UPN login. For example, setting one’s mail attribute with the other’s UPN will block the other user from signing in with his UPN.
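To check for such clashes up front, you could compare every user’s mail value against the UPNs of all other users in both forests. A rough sketch, assuming the ActiveDirectory module and connectivity to both domains (the domain names match the example above):

# Collect users from both forests, index them by UPN, and flag every mail value
# that points at a different user's UPN (a user's own mail matching their own UPN is fine).
$all = @(Get-ADUser -Filter * -Properties mail -Server internal.contoso.com) +
       @(Get-ADUser -Filter * -Properties mail -Server external.contoso.com)

$byUpn = @{}
foreach ($u in $all) { if ($u.UserPrincipalName) { $byUpn[$u.UserPrincipalName.ToLower()] = $u } }

foreach ($u in $all) {
    if ($u.mail -and $byUpn.ContainsKey($u.mail.ToLower()) -and
        $byUpn[$u.mail.ToLower()].DistinguishedName -ne $u.DistinguishedName) {
        "{0}: mail {1} clashes with the UPN of {2}" -f $u.SamAccountName, $u.mail, $byUpn[$u.mail.ToLower()].SamAccountName
    }
}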

Now where’s the issue? We could authenticate INTERNAL users just fine, but EXTERNAL users were getting an error:

[screenshot]

In words:

The Federation Service failed to issue a token as a result of an error during processing of the WS-Trust request.

Activity ID: 00000000-0000-0000-5e95-0080000000f1

Request type: http://schemas.microsoft.com/idfx/requesttype/issue

Additional Data
Exception details:
System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.
   at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection sourceSids, Type targetType, Boolean forceSuccess)
   at System.Security.Principal.SecurityIdentifier.Translate(Type targetType)
   at System.Security.Principal.WindowsIdentity.GetName()
   at System.Security.Principal.WindowsIdentity.get_Name()
   at Microsoft.IdentityModel.Claims.WindowsClaimsIdentity.InitializeName()
   at Microsoft.IdentityModel.Claims.WindowsClaimsIdentity.get_Claims()
   at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.AddClaimsInWindowsIdentity(UserNameSecurityToken usernameToken, WindowsClaimsIdentity windowsIdentity, DateTime PasswordMustChange)
   at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.ValidateTokenInternal(SecurityToken token)
   at Microsoft.IdentityServer.Service.Tokens.MSISWindowsUserNameSecurityTokenHandler.ValidateToken(SecurityToken token)
   at Microsoft.IdentityModel.Tokens.SecurityTokenHandlerCollection.ValidateToken(SecurityToken token)
   at Microsoft.IdentityServer.Web.WSTrust.SecurityTokenServiceManager.GetEffectivePrincipal(SecurityTokenElement securityTokenElement, SecurityTokenHandlerCollection securityTokenHandlerCollection)
   at Microsoft.IdentityServer.Web.WSTrust.SecurityTokenServiceManager.Issue(RequestSecurityToken request, IList`1& identityClaimSet)

Now the weird part: just before the error I was seeing a successful login for that particular user:

[screenshot]

I decided to start my search with this part: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. That led me to all kinds of blogs/posts where people were having issues with typos in scripts or with users that didn’t exist in AD. But that wasn’t the case for me; after all, I had just seen a successful authentication! Using the first line of the stack trace, at System.Security.Principal.SecurityIdentifier.Translate(IdentityReferenceCollection sourceSids, Type targetType, Boolean forceSuccess), I took an educated guess at what the ADFS service was trying to do. And I was able to do the same using PowerShell:


$objSID = New-Object System.Security.Principal.SecurityIdentifier ("S-1-5-21-3655502699-1342072961-xxxxxxxxxx-1136") 
$objUser = $objSID.Translate( [System.Security.Principal.NTAccount]) 
$objUser.Value

And yes, I got the same error:

[screenshot]

At first sight this gave me nothing. But it was actually quite powerful: I was now able to reproduce the issue as many times as I liked, with no need to go through the logon pages, and most importantly, I could take this PowerShell code and execute it on other servers (see the sketch after the list below)! This way I could determine whether it was OS related, AD related, trust related, … I found out the following:

  • Command fails on ADFS-SRV-01
  • Command fails on ADFS-SRV-02
  • Command fails on WEB-SRV-01
  • Command runs on HyperV-SRV-01
  • Command runs on DC-INTERNAL-01
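A quick way to run that same test against a batch of servers is PowerShell remoting; a rough sketch, assuming WinRM is enabled on the targets and reusing the (partially masked) SID from above:

# Run the SID-to-name translation on several servers at once and report where it fails.
$servers = 'ADFS-SRV-01','ADFS-SRV-02','WEB-SRV-01','HyperV-SRV-01','DC-INTERNAL-01'
Invoke-Command -ComputerName $servers -ScriptBlock {
    try {
        $sid = New-Object System.Security.Principal.SecurityIdentifier ("S-1-5-21-3655502699-1342072961-xxxxxxxxxx-1136")
        "{0}: {1}" -f $env:COMPUTERNAME, $sid.Translate([System.Security.Principal.NTAccount]).Value
    } catch {
        "{0}: FAILED - {1}" -f $env:COMPUTERNAME, $_.Exception.Message
    }
}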

Now what did this teach me:

  • The command is fine and should work
  • The command runs fine on other 2012 R2 servers
  • The command runs fine on a member server (the Hyper-V server)

As I was getting nowhere with this, I decided to take a network trace on the ADFS server while executing the PowerShell command. I expected one of the typical SID translation methods (TechNet: How SIDs and Account Names Can Be Mapped in Windows) to appear. However, absolutely nothing showed up?! No outgoing traffic related to this code. Now wtf? I had found this article, ASKDS: Troubleshooting SID translation failures from the obvious to the not so obvious, but that wouldn’t help me if there was no traffic to begin with.

Suddenly an idea popped up in my head. What if the network trace wasn’t showing any SID resolving because the machine was looking locally? And why would the machine look locally? Perhaps if the machine’s SID is the same as the domain portion of the SID of the user we were looking up? But they’re in different domains… However, there’s also the machine’s local SID, the one that is typically never encountered or seen! Here’s some info on it: Mark Russinovich: The Machine SID Duplication Myth (and Why Sysprep Matters)

I didn’t take the time to find out whether I could retrieve its value with PowerShell or so; I just took PsGetSid.exe from Sysinternals. This is what the command showed me for the ADFS server:

[screenshot]

Bazinga! It seemed the local SID of all the machines that were failing the command was identical to the domain portion of the EXTERNAL domain SIDs! I asked the customer to deploy a new test server so I could reproduce the issue one more time. Indeed, the issue appeared again and the local SID was once more identical. Running sysprep on the server changed the local SID, and after joining the server to the domain again we were able to successfully execute the PowerShell commands!
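As an aside, the local machine SID can also be derived with PowerShell by taking the SID of any local account and stripping its RID; a minimal sketch, assuming WMI is available locally:

# The machine SID is the SID of any local account minus the trailing RID.
$localAccount = Get-WmiObject Win32_UserAccount -Filter "LocalAccount=TRUE" | Select-Object -First 1
($localAccount.SID -replace '-\d+$', '')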

Resolution:

The customer had been copying the same VHD over and over again without actually running sysprep on it… As the EXTERNAL domain was also created on a VM from that image, the domain controller promotion process chose that local SID as the base for the EXTERNAL domain SID. My customer chose to resolve this issue by destroying the EXTERNAL domain and setting it up again. Obviously this does not solve the fact that several servers were never sysprepped, which might cause other issues in the future…

Sysprep location:

[screenshot]

For a template you can run sysprep with generalize and the shutdown option:

[screenshot]

Each copy you boot from that template will then go through the sysprep process at first boot.

P.S. Don’t run sysprep on a machine with software/services installed. It might have a nasty outcome…