
Azure DSC and Configuration Archive Case Sensitivity

Published on Monday, June 29, 2015

Lately I’ve been working on my Azure Automation skills. More precisely, I want a script that is able to create a virtual machine and set up a new Active Directory domain controller on it. There are several ways of doing this. One way is to create a PowerShell script that is executed through the Azure script extension. Another way is through the Desired State Configuration (DSC) extension. In my opinion the latter is the best option. DSC is really great at getting your server configured with minimal scripting. If you’re unfamiliar with DSC you might experience quite a few issues in the beginning. Having a working DSC configuration is one thing, but getting it to work through the Azure DSC extension has its own challenges. Most of these so-called issues probably have to do with me being at the bottom of the DSC learning curve…

A while back I wrote a simple DSC configuration to get the time zone right (Working with PowerShell DSC and Azure VM’s based on Windows 2012). That simple example went pretty well. This time I wasn’t even getting my DSC script to properly download to the target system. How on earth could something that simple be that hard? Here’s the error I was having:

Log file location: C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.10.1.0\DscExtensionHandler.3.20150627-211133

VERBOSE: [2015-06-27T21:11:42] File lock does not exist: begin processing
VERBOSE: [2015-06-27T21:11:42] File
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\DSCWork\2-Completed.Install.dsc exists; invoking extension
handler...
VERBOSE: [2015-06-27T21:11:43] Reading handler environment from
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\HandlerEnvironment.json
VERBOSE: [2015-06-27T21:11:44] Reading handler settings from
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\RuntimeSettings\3.settings
VERBOSE: [2015-06-27T21:11:47] Applying DSC configuration:
VERBOSE: [2015-06-27T21:11:47]     Sequence Number:              3
VERBOSE: [2015-06-27T21:11:47]     Configuration Package URL:   
https://thvuystoragetest.blob.core.windows.net/windows-powershell-dsc/MyDC.ps1.zip
VERBOSE: [2015-06-27T21:11:47]     ModuleSource:                
VERBOSE: [2015-06-27T21:11:47]     Configuration Module Version:
VERBOSE: [2015-06-27T21:11:47]     Configuration Container:      MyDC.ps1
VERBOSE: [2015-06-27T21:11:47]     Configuration Function:       MyDC (2 arguments)
VERBOSE: [2015-06-27T21:11:47]     Configuration Data URL:      
https://thvuystoragetest.blob.core.windows.net/windows-powershell-dsc/MyDC-69d57a1f-2522-41d7-b5ac-3b635c63ba93.psd1
VERBOSE: [2015-06-27T21:11:47]     Certificate Thumbprint:       FC89BDBF395EFC39EA3633BBDEAE9BB7AA7C475E
VERBOSE: [2015-06-27T21:11:47] Creating Working directory:
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\DSCWork\MyDC.ps1.3
VERBOSE: [2015-06-27T21:11:48] Downloading configuration package
VERBOSE: [2015-06-27T21:11:48] Downloading
https://thvuystoragetest.blob.core.windows.net/windows-powershell-dsc/MyDC.ps1.zip?sv=2014-02-14&sr=b&sig=k38XoVn5%2Bn5P1UIMM8q
mh9bc7YBD7Q5ZNV%2B5aqvP2xs%3D&se=2015-06-27T20%3A10%3A16Z&sp=rd to
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\DSCWork\MyDC.ps1.3\MyDC.ps1.zip
VERBOSE: [2015-06-27T21:11:48] An error occurred processing the configuration package; removing
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\DSCWork\MyDC.ps1.3
VERBOSE: [2015-06-27T21:11:48] [ERROR] An error occurred downloading the Azure Blob: Exception calling "DownloadFile" with "2"
argument(s): "The remote server returned an error: (404) Not Found."
The Set-AzureVMDscExtension cmdlet grants access to the blobs only for 1 hour; have you exceeded that interval?
VERBOSE: [2015-06-27T21:11:49] Writing handler status to C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\Status\3.status
VERBOSE: [2015-06-27T21:11:49] Removing file lock


The most interesting part:

VERBOSE: [2015-06-27T21:11:48] [ERROR] An error occurred downloading the Azure Blob: Exception calling "DownloadFile" with "2"
argument(s): "The remote server returned an error: (404) Not Found."
The Set-AzureVMDscExtension cmdlet grants access to the blobs only for 1 hour; have you exceeded that interval?

I found the following URL from the log file: https://thvuystoragetest.blob.core.windows.net/windows-powershell-dsc/MyDC.ps1.zip?sv=2014-02-14&sr=b&sig=k38XoVn5%2Bn5P1UIMM8qmh9bc7YBD7Q5ZNV%2B5aqvP2xs%3D&se=2015-06-27T20%3A10%3A16Z&sp=rd

Some googling led me to some results, but nothing relevant. I took the URL and copy-pasted it into a browser:

StorageContainerXMLChrome

It showed me an XML-type response stating: BlobNotFound: The specified blob does not exist. By accident I used an open Chrome instance, as I typically use IE. If I visited this URL using IE I simply got a page not found error. That’s probably something that can be tweaked in the IE settings, but still good to know. After seeing that error page I went to the Azure management portal:

StorageContainer

I drilled down till I found my .ps1.zip file and copy-pasted its URL into a Notepad++ window:

image

As you can see the only difference is the casing of “MyDC.ps1”… The URL in the log file is constructed by the Azure DSC extension, more particularly by the following PowerShell lines:

$configurationArchive = "MyDC.ps1.zip"
$configurationName = "MyDC"
$configurationData = "C:\Users\Thomas\SkyDrive\Documenten\Work\Blog\DSC\Final\myDC.psd1"

$VM = Get-AzureVM -ServiceName $svcname -Name $vmname
$vm = Set-AzureVMDSCExtension -VM $vm `
    -ConfigurationArchive $configurationArchive `
    -ConfigurationName $configurationName `
    -ConfigurationArgument $configurationArguments `
    -ConfigurationDataPath $configurationData

$vm | Update-AzureVM

Updating my $configurationArchive to myDC.ps1.zip was all I needed to do to get this baby running.
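A quick way to check the exact, case-sensitive blob names in a container is to list them from the same Azure PowerShell prompt. A minimal sketch, assuming the current storage account has already been set and using the container name from the log above:

# List the blobs and their exact names in the DSC container
Get-AzureStorageBlob -Container "windows-powershell-dsc" | Select-Object Name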

Summary

Whenever creating storage accounts, or containers and blobs on them, make sure to watch out for case sensitivity. In my opinion an all-lowercase approach might be the best way forward.


Azure Billing Changes

Published on Friday, June 26, 2015

A bit less technical for once, but a few days ago I noticed several announcements for billing-related changes that I thought were worth mentioning. And besides that, my personal test subscription got disabled once more because I ran out of credit… So what else is there to do? ;)

Azure Billing Detailed Usage Change

Every time I talk to a customer who is new to Azure and is starting to get into IaaS, I explain how Virtual Machines are billed. Roughly there are four things to take into account:

  • Compute hours: depends on the uptime/tier
  • Storage space consumed: the bigger the VM, …
  • Storage transactions: the more disk IO the VM performs, …
  • Network IO: “upload”/”download”, where inbound traffic (into Azure) is free

Now a lot of customers want to, or foresee that they will want to, split the bill to the responsible department, project or another factor. Before the recent changes there were two ways to do this:

  • Separate subscriptions
  • Creating your VM’s on separate storage accounts/in separate cloud services

Personally I’m not too fond of the separate subscriptions idea. It will bring you overhead in terms of network connectivity and the overall picture might become more difficult to see. I’m aware that there are definitely cases where you clearly want to give a group of people “full control” of “their” stuff and just send them the bill for everything they use. But in many cases I feel having many subscriptions will become a PITA to manage. What if your billing scheme changes and, instead of per department, you need a picture per application? Do you really want to tie your subscriptions to that?

Now I’m more in favor of creating VM’s in separate storage accounts and cloud services. Still not ideal if you have to restructure, but the impact should be less. Here’s how the detailed usage looked before June:

Type              Unit                                 Granularity
Networking        Data Transfer In (GB)                Cloud Service
Networking        Data Transfer Out (GB)               Cloud Service
Storage           Standard IO – Page Blob/DISK (GB)    Storage Account
Virtual Machines  Compute Hours                        Cloud Service\Tier
Data Management   Storage Transactions (in 10,000s)    Storage Account

As you can see, Cloud Service and Storage Account are really important if you want to separate your resources. Now things have changed: both Networking and Compute now include the VM name (next to the Cloud Service):

Type              Unit                                 Granularity
Networking        Data Transfer In (GB)                Cloud Service (VM Name)
Networking        Data Transfer Out (GB)               Cloud Service (VM Name)
Storage           Standard IO – Page Blob/DISK (GB)    Storage Account
Virtual Machines  Compute Hours                        Cloud Service (VM Name)
Data Management   Storage Transactions (in 10,000s)    Storage Account

So assigning VM’s to cloud services is no longer an absolute requirement for building detailed bills. Other than that, there are two new fields:

  • Resource Group
  • Tags

Tags, as far as I know, are a V2 (Azure Resource Manager) feature. Resource groups are also available for V1 VMs. On my detailed usage overview the Resource Group column was empty, so it might be that it will only be filled in for V2 resources. Once V2 resources are commonly used we’ll be able to add one or more tags to resources like VM’s. This will greatly benefit Azure Automation and Azure Billing! You’ll be able to specify information that helps identify the VM, e.g. Environment: Dev/Test/Acceptance/Production or Department: HR/IT/Sales or …

Enterprise Agreement: MSDN subscriptions

Something that has been available for a while: MSDN subscriptions under an Enterprise Agreement. If your company has an Azure Enterprise Agreement and your developers/IT pros have an MSDN subscription, they are allowed to run machines at MSDN rates. These machines cannot belong to production! The advantage is pricing: Windows VMs run at the price of the equivalent Linux VM, and software available in the MSDN library is free (e.g. SQL). You can configure this on the EA portal: https://ea.azure.com

image

Azure Billing API

In the past there was an API available only for EA customers. Luckily the new Azure Usage API (MSDN) and Azure RateCard API (MSDN) are for all subscriptions! You can read more on these here: ScottGu: New Azure Billing APIs Available

Side note

It’s a common practice to shut down VM’s that are not being used in order to save Azure credits. The fewer hours a VM runs, the better. One thing I overlooked this month is the cost of the Azure VNET Gateway. I had been playing with a site-to-site VPN (between two Azure VNets) and this resulted in two gateways burning quite some credit. So I’d say: keep an eye on those gateways! They can cost quite a lot.


Working with PowerShell DSC and Azure VM’s based on Windows 2012

Published on Wednesday, June 17, 2015

Mostly when I work with Azure VM’s I do the actual VM creation using Azure PowerShell cmdlets. I like how you can have some template scripts that create VM’s from beginning to end. Typically I create a static IP reservation, I join them to an AD domain, I add one or more additional disks, I add the Microsoft Antimalware extension… When the VM is provisioned I log on and I’m pretty much ready to go. One of the things I noticed is that the time zone was set to UTC where I like it to be GMT+1. Obviously this only requires two clicks, but I wanted this to be done for me. Now there are various approaches: either use traditional tooling like SCCM or GPO (is there a setting/registry key?), … or do it the Azure way. As far as Azure is concerned I could create a custom VM image or use PowerShell DSC (Desired State Configuration).

I prefer DSC over a custom image. The main reason is that I can apply these DSC customizations to whatever image from the gallery I feel like applying them. If the SharePoint team wants to take the latest SharePoint image from the gallery, I can just apply my DSC over it. If there’s a more recent Windows 2012 R2 image, I can just throw my DSC against it and I’m ready to go.

The following example shows how to apply a given DSC configuration to a VM.

$configurationArchive = "DSC_SetTimeZone.ps1.zip"
$configurationName = "SetTimeZone"
$VM = Get-AzureVM -ServiceName "contoso-svc" -Name "CONTOSO-SRV"
$VM = Set-AzureVMDSCExtension -VM $vm -ConfigurationArchive $configurationArchive -ConfigurationName $configurationName
$VM | Update-AzureVM

Now I won’t go into all of the details, but here are some things I personally ran into.

Creating and Uploading the DSC configuration archive

Initially I had some trouble wrapping my head around how to get my script to run on a target machine. I had this cool DSC script I found on the internet and tweaked it a bit:


#Requires -version 4.0
Configuration SetTimeZone
{
    Param
    (
        #Target nodes to apply the configuration
        [Parameter(Mandatory = $false)]
        [ValidateNotNullorEmpty()]
        [String]$SystemTimeZone="Romance Standard Time"
    )

    Import-DSCResource -ModuleName xTimeZone

    Node localhost
    {
        xTimeZone TimeZoneExample
        {
            TimeZone = $SystemTimeZone
        }
    }
}

This script depends on the xTimeZone DSC resource. As I already knew, those DSC resources, like xTimeZone, come in waves. Would my server have the latest version? Did I have to install that out of band? It seems not. All you need to do is create a configuration archive, a ZIP file, which contains both your script and the resource it depends on. The Azure cmdlets are an easy way to do this. They’ll also make sure all the dependent DSC resources are added to the package.
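If you want to see what actually ends up in such a package, Publish-AzureVMDscConfiguration can also write the archive to a local .zip instead of uploading it straight away. A small sketch; the output path is just an example:

# Build the configuration archive locally so you can inspect its contents
Publish-AzureVMDscConfiguration -ConfigurationPath .\DSC_SetTimeZone.ps1 `
    -ConfigurationArchivePath .\DSC_SetTimeZone.ps1.zip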

Our script is in c:\users\thomas\onedrive\documenten\work\blog\DSC. A few steps further down I’ll show you where to store the DSC resources.

DSC_1

By using the following command we can create and upload the package to the “setspn” storage account:


$subscriptionID = "af2f6ce8-e4f3-abcd-abcd-34ab4ce9c7d3"
$storageAccountName = "setspn"
Set-AzureSubscription -SubscriptionId $subscriptionID -CurrentStorageAccount $storageAccountName
Publish-AzureVMDscConfiguration -ConfigurationPath C:\Users\Thomas\OneDrive\Documenten\Work\Blog\DSC\DSC_SetTimeZone.ps1

We need to execute this from an Azure PowerShell prompt. I executed this command from a Windows 8.1 machine that is running PowerShell v4.

DSC_2

It seems to be complaining that we are running this from an x86 prompt instead of an x64 prompt. But the Azure PowerShell prompt is an x86 prompt… The error in words:

Publish-AzureVMDscConfiguration : Configuration script 'C:\Users\Thomas\SkyDrive\Documenten\Work\Blog\DSC\DSC_SetTimeZo
ne.ps1' contained parse errors:
At C:\Users\Thomas\SkyDrive\Documenten\Work\Blog\DSC\DSC_SetTimeZone.ps1:2 char:1
+ Configuration SetTimeZone
+ ~~~~~~~~~~~~~
Configuration is not supported in a Windows PowerShell x86-based console. Open a Windows PowerShell x64-based console,
and then try again.
At C:\Users\Thomas\SkyDrive\Documenten\Work\Blog\DSC\DSC_SetTimeZone.ps1:3 char:1
+ {
+ ~
Unexpected token '{' in expression or statement.
At C:\Users\Thomas\SkyDrive\Documenten\Work\Blog\DSC\DSC_SetTimeZone.ps1:21 char:1
+ }
+ ~
Unexpected token '}' in expression or statement.
At line:1 char:1

The error is quite misleading. I tried the various DSC cmdlets, like Get-DSCResource, and they all failed saying that the cmdlet could not be found. So it seems I needed the Windows Management Framework (WMF) to be installed. Shame on me. Here’s some explanation regarding the prerequisites: TechNet Gallery: DSC Resource Kit (All Modules). Using the WMF 5.0 installer got me further.

DSC_2b
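With WMF installed, the DSC cmdlets become available, and you can check whether a given resource is visible from the prompt you’re working in:

# Check whether the xTimeZone resource can be found by this prompt
Get-DscResource -Name xTimeZone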

Now off to creating the package again: again an error…

DSC_3

Now it seems to complain it can’t find the DSC resources… But I installed them?! The error in words:

VERBOSE: Parsing configuration script: C:\Users\Thomas\SkyDrive\Documenten\Work\Blog\DSC\DSC_SetTimeZone.ps1
VERBOSE: Loading module from path 'C:\Program Files
(x86)\WindowsPowerShell\Modules\xTimeZoneSource\DSCResources\xTimeZone\xTimeZone.psm1'.
Publish-AzureVMDscConfiguration : Configuration script 'C:\Users\Thomas\SkyDrive\Documenten\Work\Blog\DSC\DSC_SetTimeZo
ne.ps1' contained parse errors:
At C:\Users\Thomas\SkyDrive\Documenten\Work\Blog\DSC\DSC_SetTimeZone.ps1:16 char:9
+         xTimeZone TimeZoneExample
+         ~~~~~~~~~
Undefined DSC resource 'xTimeZone'. Use Import-DSCResource to import the resource.
At line:1 char:1

After some googling I found out that the PowerShell prompt imports the modules it finds in its PSModulePath variable. As we are running from an x86 prompt, the folder that was loaded was different. Typically all DSC guides tell you to install DSC resources below C:\Program Files\WindowsPowerShell\Modules, but for the Azure PowerShell prompt you in fact need to put them in C:\Program Files (x86)\WindowsPowerShell\Modules, or you have to modify your PSModulePath variable to include the x64 location… I chose to copy the module to the x86 location:

DSC_3c
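In PowerShell terms the check and the copy boil down to something like this (default paths; adjust them if yours differ):

# Show which module folders this (x86) prompt actually searches
$env:PSModulePath -split ';'
# Copy the xTimeZone resource to the x86 module folder
Copy-Item "C:\Program Files\WindowsPowerShell\Modules\xTimeZone" `
          "C:\Program Files (x86)\WindowsPowerShell\Modules\" -Recurse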

And all seems fine now:

DSC_4

Applying the DSC to a Windows 2012 VM

The DSC script I created worked fine on a newly installed Windows 2012 R2 VM, but on a Windows 2012 VM the extension seemed to have trouble. Now that wasn’t supposed to happen… The good thing about the Azure DSC extension is that the logging is quite decent. Inside the VM you can find the log files in the following location: C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.10.1.0

The following extract comes from the DscExtensionHandler log file:

VERBOSE: [2015-05-28T21:43:17] Applying DSC configuration:
VERBOSE: [2015-05-28T21:43:17]     Sequence Number:              0
VERBOSE: [2015-05-28T21:43:17]     Configuration Package URL:   
https://setspn.blob.core.windows.net/windows-powershell-dsc/DSC_SetTimeZone.ps1.zip
VERBOSE: [2015-05-28T21:43:17]     ModuleSource:                
VERBOSE: [2015-05-28T21:43:17]     Configuration Module Version:
VERBOSE: [2015-05-28T21:43:17]     Configuration Container:      DSC_SetTimeZone.ps1
...
VERBOSE: [2015-05-28T21:44:27] [ERROR] Importing module xTimeZone failed with error - File C:\Program
Files\WindowsPowerShell\Modules\xTimeZone\DscResources\xTimeZone\xTimeZone.psm1 cannot be loaded because running scripts is
disabled on this system. For more information, see about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170.

Now that’s a pretty well-known message… Bummer. It seems the execution policy on the Windows 2012 machine is set to Restricted. Now there’s a way around that: scripts could be executed with the option “-executionpolicy bypass”. But we can’t control that, as the DSC extension is responsible for this. Kind of a bummer. The Windows 2012 R2 image seems to have RemoteSigned as its default execution policy…

Now this got me curious. Would the PowerShell script extension also suffer from this? If not, I could have a small PowerShell script execute first that alters the execution policy!

Set-ExecutionPolicy RemoteSigned

I created a PowerShell script with this line in it, saved it to disk and then used AzCopy to copy it to a container in a storage account. Executing the script:

DSC_ExecScript
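For reference, a minimal sketch of how such a script can be pushed with the custom script extension from Azure PowerShell; the cloud service, VM, container and file names are assumptions:

# Push and run the execution policy script through the custom script extension
$vm = Get-AzureVM -ServiceName "contoso-svc" -Name "CONTOSO-SRV"
$vm | Set-AzureVMCustomScriptExtension -ContainerName "scripts" `
    -FileName "SetExecutionPolicy.ps1" -Run "SetExecutionPolicy.ps1" |
    Update-AzureVM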

After executing I can confirm that the execution policy has changed:

ExecPol

The logging for this extension can be found here: C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\1.4. From the log file we can see that the PowerShell script extension runs scripts in a more robust way:

2015-05-28T21:36:58.7646763Z    [Info]:    HandlerSettings = ProtectedSettingsCertThumbprint: , ProtectedSettings: {}, PublicSettings: {FileUris: [https://storaccount.blob.core.windows.net/windows-powershell-dsc/test.ps1?sv=2014-02-14&sr=b&sig=eijcTn9I2kWuOPU1CK%2F9zQ3tAO1NIUrs8wT2gUE8z0o%3D&se=2015-05-29T21%3A07%3A00Z&sp=r], CommandToExecute: powershell -ExecutionPolicy Unrestricted -file test.ps1 }

As you can see this script is called while explicitly specifying the execution policy. Now we’ll be able to apply our DSC extension:

DSC_ExecDSC

And the log file contents:

VERBOSE: [2015-05-29T00:00:00] Import script as new module:
C:\Packages\Plugins\Microsoft.Powershell.DSC\1.10.1.0\bin\..\DSCWork\DSC_SetTimeZone.ps1.1\DSC_SetTimeZone.ps1
...
VERBOSE: [2015-05-29T00:00:12] Executing Start-DscConfiguration...
...
VERBOSE: [2015-05-29T00:00:20] [SRV2012]: LCM:  [ Start  Set      ]
VERBOSE: [2015-05-29T00:00:22] [SRV2012]: LCM:  [ Start  Resource ]  [[xTimeZone]TimeZoneExample]
VERBOSE: [2015-05-29T00:00:22] [SRV2012]: LCM:  [ Start  Test     ]  [[xTimeZone]TimeZoneExample]
VERBOSE: [2015-05-29T00:00:22] [SRV2012]: LCM:  [ End    Test     ]  [[xTimeZone]TimeZoneExample]  in 0.1100 seconds.
...

Conclusion

Configuring the time zone using DSC might be a bit overkill. But it’s an excellent exercise to get the hang of this DSC stuff. For a good resource on DSC check this tweet from me. I myself plan to create more DSC scripts in the near future. I tear VM’s up and down all the time. I would love to have a DSC that creates me a Windows AD domain, a Microsoft Identity Manager installation, a ….


Federating ADFS with the Belnet Federation

Published on Monday, June 8, 2015

The Belnet federation is a federation that a lot of Belgian educational or education-related institutions are joined to. I’m currently involved in a POC at one of these institutions. Here’s the situation we started from: they have an Active Directory domain for their employees, and are part of the Belnet federation through a Shibboleth server which is configured as an IDP against their AD. Basically this means that for certain services hosted on the Belnet federation, they can choose to log in using their AD credentials through the Shibboleth server.

Now they want to host a service themselves. They would like to provide users outside of their organization access to that service, a SharePoint farm. These users will have an account at one of the institutions federated with Belnet. After some research it became clear to us that we would need an ADFS instance to act as a protocol bridge between SAML and WS-FED, as SharePoint does not natively speak SAML. Now the next question: how do we get Belnet to trust our ADFS instance, and how do we get our ADFS instance to trust the IDP’s that are part of the Belnet federation?

These are two different problems and both need to be addressed in order for authentication to succeed. We need to find out how we can let Belnet trust our ADFS instance. But first we zoom into the part where we try to trust the IDP’s in the Belnet federation. This federation has over 20 IDP’s in it and its metadata is available at the following URL: Metadata XML file - Official Belnet federation. From my first contacts with the people responsible for this federation I heard that it would be hard to get ADFS to “talk” to this federation. They mentioned ADFS does speak SAML, but not all SAML specifications are supported. One of the things that ADFS cannot handle is creating a claims provider trust based upon a metadata file which contains multiple IDPs. And guess what this Belnet metadata file contains…

Some research led me to the concept of federation trust topologies. Suppose you have two partners who want to expose their Identity Provider so that their users can authenticate at services hosted at either partner. In the Microsoft world you typically configure the other party’s ADFS instance as a claims provider trust, while on their side your ADFS instance is configured as a relying party trust. And that’s it. But what happens if you want to federate with three parties? Now each party has to add two claims provider trusts. And what happens when a new organization joins the federation? Each organization that is already active in the federation has to exchange metadata and add the new organization. As the number of partners in the federation grows you can see that the Microsoft approach scales badly for this…

Now after reading up a bit on this subject I learned that there are two types of topologies: full mesh and proxy based. In the proxy approach each party federates with the proxy and the proxy stays in the middle for authentication requests. In the full mesh topology each party federates with every other party. As I explained above, a full mesh approach scales badly. The Belnet setup is mostly based upon Shibboleth and each Shibboleth server gets updated automatically whenever an additional IDP or SP is added to the federation. So Belnet is only responsible for distributing the federation partner information to each member. So I came up with the following idea: if I were to take the Belnet XML file and chop it into multiple IDP XML files, I could add those one by one to the ADFS configuration. I got this idea here: Technet (InCommon Federation): Use FEMMA to import IDPs

Here’s a schematic view of the Federation Metadata exchanges; it might make things a bit clearer. On the schema you’ll see the Shibboleth server, but in fact, for the SharePoint/ADFS instance it’s irrelevant.

belnet

Adding Belnet IDP’s to ADFS

Search the Belnet federation XML file for something recognizable, like part of the DNS domain (vub.ac.be) or (part of) the name of the IDP (Brussel). Once you’ve got the right entry, we need everything from this IDP that’s between the <EntityDescriptor> tags. So you should have something like this:


<EntityDescriptor entityID="https://idp.vub.ac.be/idp/shibboleth" xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:shibmd="urn:mace:shibboleth:metadata:1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> 
     
    <GivenName>Technical Support</GivenName> 
    <SurName>Technical Support</SurName> 
    <EmailAddress>support@vub.ac.be</EmailAddress> 
    </ContactPerson> 
</EntityDescriptor>

Copy this to a separate file and save it as FederationMetadata_VUB.xml
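If you’d rather not copy and paste by hand, a rough PowerShell alternative looks like this; the metadata file name and the entityID are assumptions, and it relies on the Belnet aggregate using an EntitiesDescriptor root element:

# Pull one IDP's EntityDescriptor out of the downloaded Belnet metadata
[xml]$belnet = Get-Content .\belnet-metadata.xml
$idp = $belnet.EntitiesDescriptor.EntityDescriptor |
    Where-Object { $_.entityID -eq "https://idp.vub.ac.be/idp/shibboleth" }
$idp.OuterXml | Out-File .\FederationMetadata_VUB.xml -Encoding UTF8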

Now go to the ADFS management console and add a claims provider trust.

image

When asked, provide the XML file we just created. When you’re done, change the signature hash algorithm; you can find this on the Advanced tab of the trust’s properties. This might differ from trust to trust and you can try without changing it, but if your authentication results in an error, check your ADFS event logs and if necessary change this setting.

image

The error:

image

In words:

Authentication Failed. The token used to authenticate the user is signed using a weaker signature algorithm than expected.

And that’s it. Repeat for any other IDP’s you care about. Depending on the number of IDP’s this is a task you’d want to script or not. The InCommon federation guide contains a script written in Python which provides similar functionality.
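If you prefer to stay in PowerShell rather than Python, something along these lines should work per IDP; the trust name, file path and hash algorithm are assumptions, so verify the parameters against your ADFS version:

# Create the claims provider trust from the per-IDP metadata file
Add-AdfsClaimsProviderTrust -Name "VUB" -MetadataFile "C:\Temp\FederationMetadata_VUB.xml"
# Lower the expected signature algorithm if the IDP still signs with SHA-1
Set-AdfsClaimsProviderTrust -TargetName "VUB" `
    -SignatureAlgorithm "http://www.w3.org/2000/09/xmldsig#rsa-sha1"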

Adding your ADFS as SP to the Belnet Federation

Now the first part seemed easy. We had to do some cutting and pasting, but for a smaller number of IDP’s this seems doable. Next we have to ensure all involved IDP’s trust our ADFS server. In the worst case we have to contact them one by one and exchange information. But that would mean we’re not benefitting from the Belnet federation. Our goal is to have our ADFS trusted by Belnet, which ensures that all Belnet partners trust our ADFS instance. That way we only have to exchange information with one party, which simplifies this process a lot!

First we need the Federation Metadata from the ADFS instance: https://sts.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml

Then we need to edit it a bit so that the Belnet application that manages the metadata is capable of parsing the file we give it. Therefore we’ll remove the blocks we don’t need or that the tooling at Belnet is not compatible with (a scripted approach is sketched after this list):

  • Signature block: <signature>…</signature>
  • WS-FED stuff: <RoleDescriptor xsi:type="fed:ApplicationServiceType … </RoleDescriptor>
  • Some more WS-FED stuff: <RoleDescriptor xsi:type="fed:SecurityTokenServiceType" … </RoleDescriptor>
  • SAML IDP stuff, not necessary as we’re playing SP: <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> … </IDPSSODescriptor>
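A rough sketch of stripping those blocks with PowerShell, in case you don’t feel like editing the XML by hand; the file name is an assumption, and you should verify the result before uploading it:

# Remove the Signature, RoleDescriptor and IDPSSODescriptor blocks
[xml]$md = Get-Content .\FederationMetadata.xml
foreach ($name in 'Signature','RoleDescriptor','IDPSSODescriptor') {
    foreach ($node in @($md.SelectNodes("//*[local-name()='$name']"))) {
        [void]$node.ParentNode.RemoveChild($node)
    }
}
$md.Save("$PWD\FederationMetadata_edited.xml")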

We also need to add some contact information:

There should be a block present that looks like this: <ContactPerson contactType="support"/>

Replace it with:


<Organization>
    <OrganizationName xml:lang="en" xmlns:xml="http://www.w3.org/XML/1998/namespace"> Contoso </OrganizationName>
    <OrganizationDisplayName xml:lang="en" xmlns:xml="http://www.w3.org/XML/1998/namespace"> Contoso Corp </OrganizationDisplayName>
    <OrganizationURL xml:lang="en" xmlns:xml="http://www.w3.org/XML/1998/namespace"> http://www.contoso.com </OrganizationURL>
</Organization>
<ContactPerson contactType="technical">
    <GivenName>Thomas</GivenName>
    <SurName>Vuylsteke</SurName>
    <EmailAddress>adfs.admin@contoso.com</EmailAddress>
</ContactPerson>

Now you’re ready to upload your modified metadata at Belnet: https://idpcustomer.belnet.be/idp/Authn/UserPassword

After some time you’ll be able to log on using the IDP’s you configured. Pretty cool eh! Authentication will rely on the trusts shown below:

belnetAu

Some remarks:

Scoping: once you trust several IDP’s like this, you might be interested in a way to limit the users to the ones your organization works with. The customer I implemented this for has an overview of all users in their Active Directory. So we allow the user to log on at their IDP, but we have ADFS authorization rules that only issue a permit claim when we find the user as an enabled AD user in the customer’s AD. These users are there for legacy reasons and can now be seen as some form of ghost accounts.

Certificates: the manual nature of the above procedure also means you have to keep the certificates up to date manually! If an IDP starts using another certificate, you have to update that IDP-specific information. If you change the certificates on your ADFS instance, you have to contact Belnet again and have your metadata updated. Luckily most IDP’s in the Belnet federation have expiration dates far in the future. But not all of them. Definitely a point of attention.

Just drop a comment if you want more information or if you got some feedback.


Synchronizing Time on Azure Virtual Machines

Published on Friday, June 5, 2015

I’m currently setting up a small identity infrastructure on some Azure Virtual Machines for a customer. The components we’re installing consist of some domain controllers, a FIM server, a FIM GAL Sync server and a SQL server to support the FIM services. All of those are part of the CONTOSO domain. Besides the Azure virtual machines we also have two on-premises machines, also members of the CONTOSO domain. They communicate with the other CONTOSO servers across a site-to-site VPN with Azure.

Eventually I came to the task of verifying my time synchronization setup. Throughout the years there have been small variations in recommendations. Initially I had configured time synchronization like I always do: configure a GPO that specifically targets the PDC domain controller. This GPO configures the PDC domain controller to use an NTP server for its time.

Administrative Templates > System > Windows Time Service > Global Configuration Settings:

image

Set AnnounceFlags to 5 so this domain controller advertises as a reliable time source. Besides that we also need to give the PDC domain controller a good source:

Administrative Templates > System > Windows Time Service > Time Providers

image

In the above example I’m just using time.windows.com as a source and the type is set to NTP. Just for reference, the WMI filter that makes this GPO apply only to the PDC domain controller:

image
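The filter is typically a query like Select * from Win32_ComputerSystem where DomainRole = 5, with DomainRole 5 being the PDC emulator; you can test that condition locally with:

# Returns True only on the PDC emulator (DomainRole 5)
(Get-WmiObject -Class Win32_ComputerSystem).DomainRole -eq 5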

Typically that’s all that’s needed. Keep in mind, the above was done on a 2012 R2 based domain controller/GPMC. If you use older versions you might have other values for certain settings; on 2012 R2 they are supposed to be as per current recommendations. But that’s not the point of this post. For the above to work, you should make sure that the NTP client on ALL clients, servers and domain controllers OTHER than the PDC is set to NT5DS:

w32tm /query /configuration

image
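If a machine turns out not to be using NT5DS, you can switch it back to domain hierarchy syncing manually instead of waiting for the GPO to apply (run elevated on the affected machine):

w32tm /config /syncfromflags:domhier /update
w32tm /resync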

Once the above is all set the following logic should be active:

Put simply, if you’ve got a single-domain, single-forest topology:

  • The PDC domain controller syncs from an internet/external source
  • The other domain controllers sync from the PDC domain controller
  • The clients/member servers sync from A domain controller

You can verify this by executing w32tm /query /source:

On my PDC (DC001), on a DC (DC002) and on a member server (hosted in Azure):

time1

=> VM IC Time Synchronization Provider

On my DC (DC003)(hosted on premises on VMware):

time2

=> The PDC domain controller

On my member server (hosted on premises on VMware):

time3

=> A domain controller

As you can see, that’s a bit weird. What is that VM IC Time Synchronization Provider? If I’m not mistaken, it’s a component that gets installed with Windows and is capable of interacting with the hypervisor (e.g. on-premises Hyper-V or Azure’s Hyper-V). As far as I can tell, VMware guests ignore it. Basically it’s a component that helps the guest sync its time with the physical host it runs on. Now you can imagine that if guests run on different hosts, time might start to drift slowly. In order to mitigate this, we need to ensure the time is properly synchronized using the domain hierarchy.

Luckily it seems we can easily disable this functionality. We can simply set the Enabled registry value to 0 for this provider. The good news: setting it from 0 to 1 seems to require a Windows Time Service restart, but I did some tests and setting it from 1 to 0 seems to become effective after a short period of time. The good news, part 2: setting it to 0 doesn’t seem to have side effects for on-premises VM’s either.

In my case I opted to use group policy preferences for this:

time4

The registry path: HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider, where we set the value Enabled to 0.
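If you’d rather apply the change directly instead of through Group Policy Preferences, the equivalent one-liner looks like this (run inside the affected VMs):

# Disable the Hyper-V/Azure time synchronization provider
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider" `
    -Name Enabled -Value 0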

And now we can repeat our tests again:

On my PDC (hosted in Azure):

time5

On my DC (hosted in Azure):

time6

On a member server (hosted in Azure):

time7

Summary

I’ll try to validate this with some people, and I’ll definitely update this post if I’m proven wrong, but as far as I can tell: whenever you host virtual machines in Azure that are part of a Windows Active Directory domain, make sure to disable the VM IC Time Provider component.

Imho this kind of information is definitely something that should be added to MSDN: Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines or Azure.microsoft.com: Install a replica Active Directory domain controller in an Azure virtual network



Protecting a Domain Controller in Azure with Microsoft Antimalware

Published on Wednesday, June 3, 2015

I’m getting more and more involved with customers using Azure to host some VM’s in an IaaS scenario. In some cases they like to have a domain controller from their corporate domain on Azure. I think it’s a best practice to have some form of malware protection installed. Some customers opt to use their on-premises solution, others opt to use the free Microsoft Antimalware solution. The latter comes as an extension which you can add when creating a virtual machine, or just add afterwards. One of the drawbacks is that there’s no central management: you push it out to each machine and that’s it.

Both the old and the new portal allow you to specify this during machine creation:

Old portal wizard:

image

New portal wizard:

image

However, the new portal allows you to specify additional parameters:

image

As you can see you can also specify the exclusions. For certain workloads (like SQL) this is pretty important. From past experience I know that getting the exclusions right for a given application is pretty tedious work. You have to go through various articles and compose your list. I took a look at the software installed on an Azure VM and noticed it was called System Center Endpoint Protection.

image

Second I went ahead and looked in the registry:

image

The easiest way to configure those exclusion settings is through PowerShell. The Set-AzureVMMicrosoftAntimalwareExtension cmdlet has a parameter called AntimalwareConfigFile that accepts either an XML or a JSON file. Initially I thought I’d just take the XML files from a System Center Endpoint Protection implementation and be done with it. I quickly found out that the format of this XML file is different from the templates SCEP uses. So I thought I’d do some quick find and replace, but no matter what I tried, issues kept popping up inside the guest and the XML file failed to be parsed successfully. This guide explains it pretty well, but I failed to get it working: Microsoft Antimalware for Azure Cloud Services and Virtual Machines

I preferred XML as that format allows for comment tags, which makes it easy to document certain exclusions. Now I had to resort to JSON, which is just a bunch of text in brackets and colons. Here are some sample config files based upon the files from SCEP:

A Regular Server

{
"AntimalwareEnabled": true,
"RealtimeProtectionEnabled": true,
"ScheduledScanSettings": {
"isEnabled": false,
"day": 1,
"time": 180,
"scanType": "Full"
},
"Exclusions": {
"Extensions": "",
"Paths": "%allusersprofile%\\NTUser.pol;%systemroot%\\system32\\GroupPolicy\\Machine\\registry.pol;%windir%\\Security\\database\\*.chk;%windir%\\Security\\database\\*.edb;%windir%\\Security\\database\\*.jrs;%windir%\\Security\\database\\*.log;%windir%\\Security\\database\\*.sdb;%windir%\\SoftwareDistribution\\Datastore\\Datastore.edb;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb.chk;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb*.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00001.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00002.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res1.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res2.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\tmp.edb",
"Processes": ""
}
}

A SQL Server

{
"AntimalwareEnabled": true,
"RealtimeProtectionEnabled": true,
"ScheduledScanSettings": {
"isEnabled": false,
"day": 1,
"time": 180,
"scanType": "Full"
},
"Exclusions": {
"Extensions": "",
"Paths": "%allusersprofile%\\NTUser.pol;%systemroot%\\system32\\GroupPolicy\\Machine\\registry.pol;%windir%\\Security\\database\\*.chk;%windir%\\Security\\database\\*.edb;%windir%\\Security\\database\\*.jrs;%windir%\\Security\\database\\*.log;%windir%\\Security\\database\\*.sdb;%windir%\\SoftwareDistribution\\Datastore\\Datastore.edb;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb.chk;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb*.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00001.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00002.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res1.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res2.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\tmp.edb",
"Processes": "%ProgramFiles%\\Microsoft SQL Server\\MSSQL10.MSSQLSERVER\\MSSQL\\Binn\\SQLServr.exe"
}
}

This one is almost identical to the server one, but here we exclude the SQLServr.exe process. The path to this executable might be different in your environment!

A Domain Controller

{
"AntimalwareEnabled": true,
"RealtimeProtectionEnabled": true,
"ScheduledScanSettings": {
"isEnabled": false,
"day": 1,
"time": 180,
"scanType": "Full"
},
"Exclusions": {
"Extensions": "",
"Paths": "%allusersprofile%\\NTUser.pol;%systemroot%\\system32\\GroupPolicy\\Machine\\registry.pol;%windir%\\Security\\database\\*.chk;%windir%\\Security\\database\\*.edb;%windir%\\Security\\database\\*.jrs;%windir%\\Security\\database\\*.log;%windir%\\Security\\database\\*.sdb;%windir%\\SoftwareDistribution\\Datastore\\Datastore.edb;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb.chk;%windir%\\SoftwareDistribution\\Datastore\\Logs\\edb*.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00001.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Edbres00002.jrs;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res1.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\Res2.log;%windir%\\SoftwareDistribution\\Datastore\\Logs\\tmp.edb;E:\\Windows\\ntds\\ntds.dit;E:\\Windows\\ntds\\EDB*.log;E:\\Windows\\ntds\\Edbres*.jrs;E:\\Windows\\ntds\\EDB.chk;E:\\Windows\\ntds\\TEMP.edb;E:\\Windows\\ntds\\*.pat;E:\\Windows\\SYSVOL\\domain\\DO_NOT_REMOVE_NtFrs_PreInstall_Directory;E:\\Windows\\SYSVOL\\staging;E:\\Windows\\SYSVOL\\staging areas;E:\\Windows\\SYSVOL\\sysvol;%systemroot%\\System32\\Dns\\*.log;%systemroot%\\System32\\Dns\\*.dns;%systemroot%\\System32\\Dns\\boot",
"Processes": "%systemroot%\\System32\\ntfrs.exe;%systemroot%\\System32\\dfsr.exe;%systemroot%\\System32\\dfsrs.exe"
}
}

Again a lot of familiar exclusions from the server template, but also specific exclusions for NTDS-related files and DNS-related files. Remark: one of the best practices for installing domain controllers in Azure is to relocate the AD database/log files and SYSVOL to another disk with caching set to none. So the above exclusions might be wrong! Make sure the NTDS/SYSVOL paths point at the drive that actually contains your AD files!

Special remark: the SCEP templates have a bug where they add %systemroot%\\system32\\GroupPolicy\\Registry.pol, which in fact should be %systemroot%\\system32\\GroupPolicy\\Machine\\registry.pol. I’ve given an example of that issue here: Setspn.blogspot.com: Corrupt Local GPO Files

The templates above are in JSON format. I saved the domain controller one as MicrosoftAntiMalware_DC.json and applied it like this:

$vm = Get-AzureVM -ServiceName "CoreInfra" -Name "SRVDC01"
$vm | Set-AzureVMMicrosoftAntimalwareExtension -AntimalwareConfigFile C:\Users\Thomas\Documenten\Work\MicrosoftAntiMalware_DC.json |
    Update-AzureVM

Now in the registry on the VM we can verify that our exclusions are applied:

reg3
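In case the screenshot is hard to read, you can query the same information from inside the VM; the path below is the standard Microsoft Antimalware/SCEP registry location, so verify it on your build:

# List the path exclusions the extension wrote to the registry
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft Antimalware\Exclusions\Paths"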

Some good references: