
Title: Windows Setup
Body: Windows could not parse or process unattend answer file [C:\windows\Panther\unattend.xml] for pass [specialize]. The answer file is invalid.

MDT Tutorial Part 11: Troubleshooting Part 3: Windows could not parse or process unattend answer file [C:\windows\Panther\unattend.xml] for pass [specialize].  The answer file is invalid.

Living Table of Contents

What These Guides Are:
A guide to help give you some insight into the troubleshooting process in general.

What These Guides Are Not:
A guide to fix all issues you’re going to encounter.

We’re going to role-play a bunch of scenarios and try to work through them.  Remember in math where you had to show your work?  Well, what follows is like that, which is why this post is [more than] a [little] lengthy.

Windows could not parse or process unattend answer file [C:\windows\Panther\unattend.xml] for pass [specialize].  The answer file is invalid.

Your last victory is short-lived as the same error message appears, and this time unattend.xml looks fine:

[Screenshot: the Windows Setup error dialog (Troubleshoot-010.PNG)]

Stumped, you might search for ‘Microsoft-Windows-Shell-Setup’ which might lead you here:
https://docs.microsoft.com/en-us/windows-hardware/customize/desktop/unattend/microsoft-windows-shell-setup

As you review each section carefully, the issue becomes clear: the computer name is more than 15 characters, the maximum length for a NetBIOS computer name.
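For context, here’s a trimmed sketch of the relevant portion of the answer file (attributes omitted, and the name is a made-up example); ComputerName lives in the Microsoft-Windows-Shell-Setup component of the specialize pass, per the documentation linked above:

<settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup" ...>
        <ComputerName>WAYTOOLONGMACHINENAME1</ComputerName>
    </component>
</settings>

Anything over 15 characters breaks the NetBIOS name limit, and setup rejects the answer file for the specialize pass.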

 

Copypasta Closing

Hopefully these examples will help give you an idea of the overall troubleshooting process.  Most of the time the problems you’ll encounter will be caused by a typo, order of operations or a ‘known issue’ that requires a specific process to be followed.

As you make changes to your environment, here’s what I recommend:

  • Be diligent about keeping a change log so you can easily backtrack
  • Backup your CS.INI or Bootstrap.ini before you make any changes
  • Backup your ts.xml or unattend.xml (in DeploymentShare\Control\TaskSequenceID) before you make any changes
  • Introduce small changes a few at a time with set checkpoints in between, and set milestone markers where you back up core files (e.g. cs.ini, bootstrap.ini, ts.xml, unattend.xml) to help minimize frustration while troubleshooting.

And when you do run into some turbulence, upload the relevant logs (at least smsts.log, but be prepared to submit others depending on the issue) to a file sharing service like OneDrive, post on TechNet, then give a shout to your resources on Twitter.

Good Providence to you!


Getting Real Lenovo Model Details

Our computer naming convention includes a portion of the model number to make it easier to identify trends and those with old, or new, assets.  Coming from a Dell shop where we did something similar, I was disappointed to learn that Lenovo machines didn’t populate the Model details the same way.  So instead of seeing something like ThinkPad W541, we were seeing something very cryptic: 20EFCTO1WW

Get your Decoder Ring

Thinking something was up with the built-in scripts or logic, I ran the below on a Lenovo machine, which confirmed it was what it was:


wmic path win32_computersystem get model

Model
20EFCTO1WW

For a while we kept a matrix of sorts that we’d feed into our CustomSettings.ini to ensure machines were named correctly.  We expected this pain as models were phased out and new models came in, but it was also very frustrating as the details would change mid-stream for the same model.  This led us to studying the Lenovo Product Specifications Reference or PSREF.

Not being keen on this, I learned somewhere (unsure of the actual source) that Lenovo stashes the bits we were after in Win32_ComputerSystemProduct under Version:


wmic path win32_computersystemproduct get version

Version
ThinkPad W541
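If you prefer PowerShell over wmic, the same value is exposed through the same class:

(Get-WmiObject -Class Win32_ComputerSystemProduct).Version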

Once confirmed across a few machines, I went right to work.

UserExit Script: GetAbbrModel.vbs

This is a modified version of the script we use in production but the result is the same: It gets the human-readable format of the model, trims the parts we don’t want and returns an abbreviated version of the model.  So a ThinkPad W541 ends up being returned to MDT/SCCM as W54.  You can modify it to suit, like creating a new property/variable called RealModel and assigning the script output to that, or overwriting the existing Model property via the script itself.

The script works on 99% of the machines in our environment but it does occasionally fail:

  • some unexpected data is in there: sometimes it’s really bizarre, or it mirrors Model in Win32_ComputerSystem or Name in Win32_ComputerSystemProduct.
  • most of the time it’s because the field is blank/null/empty; we typically see this on machines that were serviced (specifically a board replacement) where the tech didn’t run the utility to set the bits in the BIOS.  Accidents happen.
  • it’s running on very old hardware that should have been retired 🙂

Good Providence to you as you adjust it to suit your needs!


' //***************************************************************************
' //
' // Solution: Get Model Abbreviation for Microsoft Deployment
' // File: jgp_GetAbbrModel.vbs
' //
' // Purpose: Gets & sets the correct model abbreviation for use in computer name and other custom configurations
' //
' // ***** End Header *****
' //***************************************************************************

'//----------------------------------------------------------------------------
'//
'// Global constant and variable declarations
'//
'//----------------------------------------------------------------------------
Option Explicit

'//----------------------------------------------------------------------------
'// End declarations
'//----------------------------------------------------------------------------

'//----------------------------------------------------------------------------
'// Main routine
'//----------------------------------------------------------------------------
Function UserExit(sType, sWhen, sDetail, bSkip)
	UserExit = Success
End Function

Function GetAbbrModel()
	on error goto 0
	Dim sMake : sMake = oEnvironment.Item("Make")
	Dim sModel : sModel = oEnvironment.Item("Model")
	Dim sAbbrModel : sAbbrModel = "UNK"

	Select Case UCase(sMake)

		Case UCase("Dell")

			If InStr(1,sModel,"OptiPlex ",1) > 0 Then
				sAbbrModel = Left(Replace(sModel,"ptiPlex ","",1,-1,1),3)
			elseif InStr(1,sModel,"Latitude ",1) > 0 Then
				sAbbrModel = Left(Replace(sModel,"Latitude ","",1,-1,1),3)
			elseif InStr(1,sModel,"XPS",1) > 0 Then
				sAbbrModel = Left(Replace(sModel,"PS","",1,-1,1),3)
			end if

		Case UCase("Lenovo")
			Dim oCSP
			For Each oCSP in GetObject("winmgmts:").ExecQuery("SELECT Version,Name FROM Win32_ComputerSystemProduct")
				Dim sLenovoModel : sLenovoModel = oCSP.Version
				Dim sLenovoProductType : sLenovoProductType = oCSP.Name
				exit for
			Next

			If InStr(1,sLenovoModel,"ThinkCentre ",1) > 0 Then
				sAbbrModel = Left(Replace(sLenovoModel,"ThinkCentre ","",1,-1,1),3)
			elseif InStr(1,sLenovoModel,"ThinkStation ",1) > 0 Then
					sAbbrModel = Left(Replace(sLenovoModel,"ThinkStation ","",1,-1,1),3)
			elseif InStr(1,sLenovoModel,"ThinkPad ",1) > 0 Then
				if Instr(1,sLenovoModel,"Carbon",1) > 0 Then
					If InStr(1,sLenovoModel,"Carbon 4th",1) > 0 Then
						sAbbrModel = Left(Replace(Replace(Replace(sLenovoModel,"ThinkPad ","",1,-1,1),"arbon 4th","")," ",""),3)
					elseif InStr(1,sLenovoModel,"Carbon 3rd",1) > 0 Then
						sAbbrModel = Left(Replace(Replace(Replace(sLenovoModel,"ThinkPad ","",1,-1,1),"arbon 3rd","")," ",""),3)
					elseif InStr(1,sLenovoModel,"Carbon 2nd",1) > 0 Then
						sAbbrModel = Left(Replace(Replace(Replace(sLenovoModel,"ThinkPad ","",1,-1,1),"arbon 2nd","")," ",""),3)
					elseif InStr(1,sLenovoModel,"Carbon",1) > 0 Then
						sAbbrModel = Left(Replace(Replace(Replace(sLenovoModel,"ThinkPad ","",1,-1,1),"arbon","")," ",""),3)
					end if
				else
					sAbbrModel = Left(Replace(sLenovoModel,"ThinkPad ","",1,-1,1),3)
				end if
			else
				' Alternatively you could build & maintain (yuck) a table of product types
				Select Case UCase(Left(sLenovoProductType,4))
					Case UCase("5032")
						sAbbrModel = "M81"

					case UCase("20EF")
						sAbbrModel = "W54"
				End Select
			end if

		Case UCase("innotek GmbH")
			sAbbrModel = UCase(Left(sMake,1) & Mid(sMake,8,1) & Right(sMake,1))

		Case UCase("VMware, Inc.")
			sAbbrModel = UCase(Left(sMake,3))

		Case UCase("Microsoft Corporation")
			sAbbrModel = "HPV"

	End Select
	GetAbbrModel = sAbbrModel
End Function
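To actually consume this in MDT, the UserExit script gets wired up through CustomSettings.ini.  A minimal sketch, assuming the script sits in the deployment share’s Scripts folder and using a made-up custom property name (AbbrModel):

[Settings]
Priority=Default
Properties=AbbrModel

[Default]
UserExit=jgp_GetAbbrModel.vbs
AbbrModel=#GetAbbrModel()#

At processing time, MDT runs the UserExit script and swaps #GetAbbrModel()# for the function’s return value, so AbbrModel can then feed an OSDComputerName rule or similar.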

Preparing for Windows 10: Move Computers into Win 10 OUs

One thing that has annoyed me about MDT and SCCM is that there isn’t a built-in method to move machines into the proper OU.  As such, it required creating something from scratch, and I opted for something that didn’t require dependencies, like the ActiveDirectory module.

This isn’t the best way and this isn’t the only way – it’s just a way; one of many in fact.

Please note that this is NOT the ideal way to handle any operations that require credentials!  Keeping credentials in a script is bad practice as anyone snooping around could happen upon them and create some problems.  Instead, you should rely on web services to do this and Maik Koster has put together an excellent little care package to help get you started.

Move-ComputerToOU Prerequisites

My script has a few prerequisites:

  • The current AD site
  • A [local] Domain Controller to use (recommended)
  • The current OU of the machine to be moved
  • The new OU to move the machine into

It’s important to know that this script does not rely on the ActiveDirectory module.
One of my [many] quirks is to try to keep everything self-contained where it makes sense to do so, and I liked the idea of not having to rely on some installed component, an EXE and so on.  But to be honest, web services is the way to go for this.

Getting the Current AD Site

Better see this post for that.

Getting a Local Domain Controller

Better see this post for that.

Getting the Current OU

Better see this post for that.

Getting / Setting the New OU

If this is being executed as part of an OSD, yank those details from the MachineObjectOU Task Sequence variable via something like:

Function Get-TaskSequenceEnvironmentVariable
    {
        Param([Parameter(Mandatory=$true,Position=0)]$VarName)
        Try { return (New-Object -COMObject Microsoft.SMS.TSEnvironment).Value($VarName) }
        Catch { throw $_ }
    }
$MachineObjectOU = Get-TaskSequenceEnvironmentVariable MachineObjectOU

Otherwise, just feed it the new OU via the parameter.

Process Explanation

We first have to find the existing object in AD:


# This is the machine we want to move
$ADComputerName = $env:COMPUTERNAME

# Create Directory Entry with authentication
$de = New-Object System.DirectoryServices.DirectoryEntry($LDAPPath,"$domain\$user",$pass)

# Create Directory Searcher
$ds = New-Object System.DirectoryServices.DirectorySearcher($de)

# Filter for the machine in question
$ds.Filter = "(&(ObjectCategory=computer)(samaccountname=$ADComputerName$))"

# Optionally, set other search parameters
#$ds.SearchRoot = $de
#$ds.PageSize = 1000
#$ds.Filter = $strFilter
#$ds.SearchScope = "Subtree"
#$colPropList = "distinguishedName", "Name", "samaccountname"
#foreach ($Property in $colPropList) { $ds.PropertiesToLoad.Add($Property) | Out-Null }

# Execute the find operation
$res = $ds.FindAll()

 

Then we bind to the existing computer object


# If there's an existing asset in AD with that sam account name, it should be the first - and only - item in the array.
# So we bind to the existing computer object in AD
$CurrentComputerObject = New-Object System.DirectoryServices.DirectoryEntry($res[0].path,"$domain\$user",$pass)

# Extract the current OU
$CurrentOU = $CurrentComputerObject.Path.Substring(8+$SiteDC.Length+3+$ADComputerName.Length+1)

 

From there, set up the destination OU


# Here we set the new OU location
$DestinationOU = New-Object System.DirectoryServices.DirectoryEntry("LDAP://$SiteDC/$NewOU","$domain\$user",$pass)

 

And finally move the machine from the old/current OU to the new/destination OU


# And now we move the asset to that new OU
$Result = $CurrentComputerObject.PSBase.MoveTo($DestinationOU)

Move-ComputerToProperOU.ps1

So this is a shortened version of the script I’m using in production.  All you need to do is fill in the blanks and test it in your environment.


# There's a separate function to get the local domain controller
$SiteDC = Get-SiteDomainController
# Or you can hard code the local domain controller
$SiteDC = 'localdc.domain.fqdn'

# Build LDAP String
# I //think// there were maybe two cases where this didn't work
$LocalRootDomainNamingContext = ([ADSI]"LDAP://$SiteDC/RootDSE").rootDomainNamingContext
# So I added logic to trap that and pull what I needed from SiteDC
#$LocalRootDomainNamingContext = $SiteDC.Substring($SiteDC.IndexOf('.'))
#$LocalRootDomainNamingContext = ($LocalRootDomainNamingContext.Replace('.',',DC=')).Substring(1)

# Building the LDAP string
$LDAPPath = 'LDAP://' + $SiteDC + "/" + $LocalRootDomainNamingContext

# Set my domain for authentication
$domain = 'mydomain'

# Set the user for authentication
$user = 'myjoindomainaccount'

# Set the password for authentication
$pass = 'my sekret leet-o-1337 creds'

# This is the machine I want to find & move into the proper OU.
$ADComputerName = $env:COMPUTERNAME

# This is the new OU to move the above machine into.
$NewOU = 'OU=Laptops,OU=Win10,OU=Office,OU=CorpWorkstations,DC=domain,DC=fqdn'

# Create Directory Entry with authentication
$de = New-Object System.DirectoryServices.DirectoryEntry($LDAPPath,"$domain\$user",$pass)

# Create Directory Searcher
$ds = New-Object System.DirectoryServices.DirectorySearcher($de)

# Filter for the machine in question
$ds.Filter = "(&(ObjectCategory=computer)(samaccountname=$ADComputerName$))"

# Optionally, set other search parameters
#$ds.SearchRoot = $de
#$ds.PageSize = 1000
#$ds.Filter = $strFilter
#$ds.SearchScope = "Subtree"
#$colPropList = "distinguishedName", "Name", "samaccountname"
#foreach ($Property in $colPropList) { $ds.PropertiesToLoad.Add($Property) | Out-Null }

# Execute the find operation
$res = $ds.FindAll()

# If there's an existing asset in AD with that sam account name, it should be the first - and only - item in the array.
# So we bind to the existing computer object in AD
$CurrentComputerObject = New-Object System.DirectoryServices.DirectoryEntry($res[0].path,"$domain\$user",$pass)

# Extract the current OU
$CurrentOU = $CurrentComputerObject.Path.Substring(8+$SiteDC.Length+3+$ADComputerName.Length+1)

# Here we set the new OU location
$DestinationOU = New-Object System.DirectoryServices.DirectoryEntry("LDAP://$SiteDC/$NewOU","$domain\$user",$pass)

# And now we move the asset to that new OU
$Result = $CurrentComputerObject.PSBase.MoveTo($DestinationOU)

 

Happy Upgrading & Good Providence!

Task Sequence Fails to Start 0x80041032

Recommended reading: https://blogs.msdn.microsoft.com/steverac/2014/08/25/policy-flow-the-details/

After preparing a 1703 upgrade Task Sequence for our 1607 machines, I kicked it off on a box via Software Center and walked away once the status changed to ‘Installing’, thinking I’d come back to an upgraded machine an hour later.  To my surprise, I was still logged on and the TS was still ‘running’.  A few hours later the button was back to ‘Install’ with a status of ‘Available’.  Thinking I goofed, I tried again and the same thing happened.

I assumed it was unique to this one machine so I tried it on another and it behaved the same way.  I then ran the upgrade sequence on 5 other machines and they all exhibited the same behavior.  I knew the Task Sequence was sound because others were using it, so it was definitely something unique to my machines but what?

Since I was doing this on 1607 machines, I tried upgrading from 1511 to 1703 and 1511 to 1607, but they too failed the same way, confirming it was not Task Sequence specific but again unique to my machines.  After spending quite a few cycles on this, my original machine started failing differently: I was now seeing a ‘Retry’ button with a status of ‘Failed’.  I checked the smsts.log but it didn’t have a recent date/time stamp, so it never got that far.  Hmm…

Check the TSAgent.log

Opening the TSAgent.log file I could see some 80070002 errors about not being able to delete HKLM\Software\Microsoft\SMS\Task Sequence\Active Request Handle but the real cause was a bit further up.

[Screenshot: TSAgent.log excerpt]

The lines of interest:


Getting assignments from local WMI. TSAgent 9/1/2017 12:58:27 PM 1748 (0x06D4)
pIWBEMServices->ExecQuery (BString(L"WQL"), BString (L"select * from XXX_Policy_Policy4"), WBEM_FLAG_FORWARD_ONLY, NULL, &pWBEMInstanceEnum), HRESULT=80041032 (e:\nts_sccm_release\sms\framework\osdmessaging\libsmsmessaging.cpp,3205) TSAgent 9/1/2017 1:03:55 PM 1748 (0x06D4)
Query for assigned policies failed. 80041032 TSAgent 9/1/2017 1:03:55 PM 1748 (0x06D4)
oPolicyAssignments.RequestAssignmentsLocally(), HRESULT=80041032 (e:\cm1702_rtm\sms\framework\tscore\tspolicy.cpp,990) TSAgent 9/1/2017 1:03:55 PM 1748 (0x06D4)
Failed to get assignments from local WMI (Code 0x80041032) TSAgent 9/1/2017 1:03:55 PM 1748 (0x06D4)

The source of error code 80041032 is Windows Management (WMI), and it translates to ‘Call cancelled’, which presumably happened while running the query select * from XXX_Policy_Policy4, where XXX is the site code.

I ran a similar query on my machine to get a feel for the number of items in there:


(gwmi -Class xxx_policy_policy4 -Namespace root\xxx\Policy\machine\RequestedConfig).Count

Which ended up failing with a Quota violation error suggesting I’ve reached the WMI memory quota.

Increase WMI MemoryPerHost & MemoryAllHosts

Fortunately, there’s a super helpful TechNet Blog post about this.  Since all of my test machines were running into this, I decided to make life easier for myself and use PowerShell to accomplish the task on a few of them thinking I’d have to raise the limit once.


$PHQC = gwmi -Class __providerhostquotaconfiguration -Namespace root
$PHQC.MemoryPerHost = 805306368
# Below is optional but mentioned in the article
#$PHQC.MemoryAllHosts = 2147483648
$PHQC.Put()
Restart-Computer

After the machine came up I ran the same query again, and after 2 minutes and 38 seconds it returned over 1800 items.  Great!  I ran it again and after 5 minutes it failed with the same quota violation error.  Boo urns.  I kept raising MemoryPerHost and MemoryAllHosts to insane levels to get the query to run back to back successfully.

The good news is that I made progress suggesting I’m definitely hitting some sort of memory ceiling that has now been raised.

The bad news is why me and not others?  Hmm…

Root Cause Analysis

I checked the deployments available to that machine and wowzers, the list was super long.  I won’t embarrass myself by posting an image of it.  This helped to identify areas of improvement in the way we’re managing collections & deploying things, be it Applications, Packages, Software Updates and so on.

On my patient zero machine I ran the following to clear out the policies stored in WMI:


gwmi -namespace root\xxx\softmgmtagent -query "select * from ccm_tsexecutionrequest" | remove-wmiobject

gwmi -namespace root\xxx -query "select * from sms_maintenancetaskrequests" | remove-wmiobject

restart-service -name ccmexec
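If you’d rather not wait for the next polling interval to re-download policy, the standard machine policy retrieval trigger can be kicked manually; a sketch using the well-known schedule GUID:

Invoke-WmiMethod -Namespace root\ccm -Class SMS_Client -Name TriggerSchedule -ArgumentList '{00000000-0000-0000-0000-000000000021}'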

I then downloaded the policies and tried to image – it worked!  I decided to let that machine go and focus on some administrative cleanup in SCCM.  After tidying things up a bit, the rest of my 1607 machines started the 1703 upgrade Task Sequence without issue and the 1511 machines ran the 1607 upgrade Task Sequence as well.

As we continue to phase out Windows 7, we’ll hopefully update our methods to help avoid problems like this and perform that routine maintenance a little more frequently.

 

Upgrading Windows 10 1511 to 1607

In 2016 we began the process of moving from Windows 7 to Windows 10 1511, learning a ton along the way.  After 1607 went Current Branch for Business (CBB) we began planning for that upgrade phase, and what lies below is an overview of that process.

Initial Smoke Test

Once 1607 went CBB, we very quickly threw an upgrade Task Sequence together to see what the upgrade process looked like and what, if anything, broke.  The upgrade process went smoothly and the vast majority of applications worked, but there were a handful of things that needed to be worked out:

  • Remote Server Administration Tools (RSAT) was no longer present
  • Citrix Single Sign-On was broken
  • A bunch of Universal Windows Platform (UWP) or Modern Applications we didn’t want were back
  • Default File Associations had reverted
  • The Windows default wall paper had returned
  • User Experience Virtualization (UE-V/UX-V) wasn’t working.
  • Taskbar pinnings were incorrect; specifically, Edge and the Windows Store had returned

Still, everything seemed very doable so we had high hopes for getting it out there quickly.

Approach: Servicing vs Task Sequence

Wanting to get our feet wet with Windows as a Service (WaaS), we explored leveraging Servicing Plans to get everyone onto 1607 but quickly ran into a show stopper: How were we going to fix all of the 1607 upgrade ‘issues’ if we went down this path?

We couldn’t find an appealing solution for this, so we went ahead with the Task Sequence based upgrade approach.  This gave us greater flexibility because not only could we fix all the upgrade issues but also do a little housekeeping, like install new applications, upgrade existing applications, remove retired applications and more.  This was far simpler and more elegant than setting up a bunch of deployments for the various tasks we wanted to accomplish either before the upgrade or after.

Avoiding Resume Updating/Generating Events

One concern with Servicing was ensuring the upgrade wasn’t heavy-handed, forcing a machine to upgrade either mid-day because it was beyond the deadline, or during the night because the user left the machine on.  The upgrade would bounce the machine, which could potentially result in lost work, something most people find undesirable.  With Servicing, we couldn’t come up with a sure-fire way to check for and block the upgrade if, say, instances of work applications were detected, such as the Office suite, Acrobat and so on.

Sure, we could increase the auto-save frequency (perhaps setting it to 1 minute) and craft a technical solution to programmatically save files in the Office Suite, save Drafts and try to do some magic to save PDFs and so on.  But at the end of the day, we couldn’t account for every situation: we would never know if the person wanted to create a new file vs a new version or simply overwrite the existing one.  And most importantly, we didn’t want to have to answer for why a bunch of Partners lost work product as a result of the upgrade.

So, we decided to go the Task Sequence route.

Task Sequence Based Upgrade

Now that we knew which way we needed to go, it was just a matter of building the fixes to remediate the upgrade issues and then setting up the Task Sequence.

Upgrade Remediation

  • Remote Server Administration Tools (RSAT) – Prior to performing the OS upgrade, a script is executed to detect RSAT and, if present, set a Boolean variable that is referenced after the upgrade completes to trigger re-installation of RSAT.
  • Citrix Single Sign-On – This is a known issue – see CTX216312 for more details.
  • Universal Windows Platform (UWP) applications – Re-run our in-house script to remove the applications.
  • Default File Associations
    • Option 1: Prior to performing the OS upgrade, export HKCR and HKCU\Software\Classes then import post upgrade.
    • Option 2: Re-apply the defaults via dism from our ‘master’ file (see the sketch after this list).
  • Wallpaper – Re-apply in the Task Sequence by taking advantage of the img0 trick.
  • UE-V/UX-V – The upgrade process broke the individual components of the UE-V Scheduled Tasks, requiring a rebuild.  Once fixed on a machine, we copied the good/fixed files from C:\Windows\System32\Tasks\Microsoft\UE-V and set up the Task Sequence to:
    1. Enable UE-V during the upgrade via PowerShell
    2. Copy the fixed files into C:\Windows\System32\Tasks\Microsoft\UE-V
    3. Update the command line the Scheduled Task runs
    4. Disable the ‘Synchronize Settings at Logoff’ Scheduled Task since that was still buggy, causing clients to hang on log off.
  • Taskbar Pinnings – Prior to performing the OS upgrade, export HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Taskband then import post upgrade (also covered in the sketch after this list).
  • Critical Process Detection – CustomSettings.ini calls a user exit script that loops through a series of key executables (outlook.exe, winword.exe, etc.), running tasklist to see if a process is running and, if so, sets a Task Sequence variable that’s later evaluated.
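For the file-association and taskbar items above, the capture and re-apply steps come down to a couple of built-in commands.  A rough sketch (paths are examples, and the reg commands must run in the user’s context):

rem Build the 'master' default app associations file once on a reference machine
Dism /Online /Export-DefaultAppAssociations:C:\Temp\AppAssoc.xml
rem Re-apply it post-upgrade
Dism /Online /Import-DefaultAppAssociations:C:\Temp\AppAssoc.xml

rem Capture taskbar pinnings pre-upgrade, then restore post-upgrade
reg export "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Taskband" "%TEMP%\Taskband.reg" /y
reg import "%TEMP%\Taskband.reg"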

Since we were going the Task Sequence route, and it would be generally available in Software Center, it was decided a password prompt might help prevent accidental foot shooting.  So shortly after the Task Sequence launches, an HTA-driven password prompt is displayed that only IT should be able to successfully navigate.  This added yet another line of defense for anyone who ‘accidentally’ launched the Task Sequence, even though one already has to click through two prompts to confirm the installation.  But whatever. 🙂

Determine Current OU of Machine

I created this function ages ago as part of a larger script that needed to be executed completely unattended.

Please note that this is NOT the ideal way to handle any operations that require credentials.  Keeping credentials in a script is bad practice as anyone snooping around could happen upon them and create some problems.  Instead, you should rely on web services to do this, and Maik Koster has put together an excellent little care package to help you get started.

Get-CurrentOU Prerequisites

My script has a few prerequisites:

  • The name of the computer you’re working with
  • The current AD site
  • A [local] Domain Controller to connect to

This script does not rely on the ActiveDirectory module as I needed this to execute in environments where it wouldn’t be present.
I personally try to keep everything self contained where it makes sense to do so.  It’s one of my [many] quirks.

The Computer Name

Just feed it as a parameter or let it default to the current machine name.

Finding the Current AD Site

Better see this post for that.

Finding a Local Domain Controller

Better see this post for that.

Get-CurrentOU

Function Get-CurrentOU
    {
        Param
            (
                [Parameter(Mandatory=$true)]
                    [string]$ADComputerName = $env:COMPUTERNAME,

                [Parameter(Mandatory=$false)]
                    [string]$Site = $Global:Site,

                [Parameter(Mandatory=$false)]
                    [string]$SiteDC = $Global:SiteDC
            )

        Try
            {
                $Domain = ([ADSI]"LDAP://RootDSE").rootDomainNamingContext
                $_computerType = 'CN=Computer,CN=Schema,CN=Configuration,' + $Domain
                $path = 'LDAP://' + $SiteDC + "/" + $Domain
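                # $12 and $3 below are just Base64-encoded (Unicode) username and password strings, decoded inline; this is obfuscation, not security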
                $12 = 'YwBvAG4AdABvAHMAbwAuAGMAbwBtAFwAcwBlAHIAdgBpAGMAZQBfAGEAYwBjAG8AdQBuAHQAXwBqAG8AaQBuAF8AZABvAG0AYQBpAG4A'
                $3 = 'bQB5ACAAdQBiAGUAcgAgAHMAZQBrAHIAZQB0ACAAUAA0ADUANQB3ADAAcgBkACAAZgBvAHIAIABhAG4AIAAzADEAMwAzADcAIABoAGEAeAAwAHIAIQAhACEAMQAhAA=='
                $DirectoryEntry = New-Object System.DirectoryServices.DirectoryEntry($path,[System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($12)),[System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($3)))
                $DirectorySearcher = New-Object System.DirectoryServices.DirectorySearcher($DirectoryEntry)
                $DirectorySearcher.Filter = "(&(ObjectCategory=computer)(samaccountname=$ADComputerName$))"
                $SearchResults = $DirectorySearcher.FindAll()

                if($SearchResults.count -gt 0) { return (New-Object System.DirectoryServices.DirectoryEntry($SearchResults[0].Path,[System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($12)),[System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($3)))).Path.Substring(8+$siteDC.Length+3+$ADComputerName.length+1) }
                Else { Write-Host "ERROR: Computer object not found in AD: [$ADComputerName]"; return $false }
            }
        Catch { return $_ }
    }
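Usage is then a one-liner, once Site and SiteDC have been worked out via the posts above:

$CurrentOU = Get-CurrentOU -ADComputerName $env:COMPUTERNAME -Site $Site -SiteDC $SiteDC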

 

This has worked well for me, but you’re welcome to use a different filter instead, such as

$DirectorySearcher.Filter = "(Name=$ADComputerName)"

 

As I mentioned above, you should really explore web services instead of hardcoding passwords in scripts, but this will work in a pinch until you can get that set up.

 

Good Providence!

Finding a Local Domain Controller

I created this function ages ago as part of a larger script that needed to be executed completely unattended.

Since I started down the System Engineer / Desktop Engineer / Application Packager path a handful of years ago, I’ve seen some interesting behaviors.  For example, when joining a domain, a machine might reach out to a domain controller overseas versus one at the local site or even in the same continent.  And because things were working, those types of anomalies were never really looked into.

This only ever really came up when there were OU structure changes.  For instance, when going from XP to Windows 7, or after completing an AD remediation project.  As machines were being moved around, we noticed the Domain Controller used varied, and the time it took to replicate those changes ranged from:

  • instantaneously – suggesting the change was made on a local DC OR a DC with change notification enabled
  • upwards of an hour – suggesting a remote DC (and usually verified by logs or checking manually) or on a DC where change notification was not enabled.

So I decided to write a small function for my scripts that would ensure they were using a local Domain Controller every time.

There are a handful of ways to get a domain controller

  • LOGONSERVER
  • dsquery
  • Get-ADDomainController
  • Other third-party utilities like adfind, admod

But either they were not always reliable (as was the case with LOGONSERVER, which would frequently point to a remote DC) or they required supporting files (as is the case with dsquery and Get-ADDomainController).  There’s nothing wrong with creating packages, adding these utilities to %ScriptRoot% or even calling files on a network share.
I simply preferred something self-contained.

Find Domain Controllers via NSLOOKUP

I started exploring by using nslookup to locate service record (SRV) resource records for domain controllers, and I landed on this:

nslookup -type=srv _kerberos._tcp.dc._msdcs.$env:USERDNSDOMAIN

Problem was, it returned every domain controller in the organization.  Fortunately, you can specify an AD site name to narrow it down to that specific AD Site:

nslookup -type=srv _kerberos._tcp.$ADSiteName._sites.$env:USERDNSDOMAIN

Bingo!  This gave me exactly what I was hoping to find, and the DCs it returns are in a different order every time, which helps to round-robin things a bit.

From there it was all downhill:

  1. Put AD Sites in the script
  2. Figure out which Site the machine is in by its gateway IP
  3. Use nslookup to query for SRV records in that site
  4. Pull the data accordingly.

Yes I know – hard coding AD Sites and gateway IP’s is probably frowned upon because

What if it changes?

I don’t know about your organisation but in my [limited] experience the rationale was that AD Sites and gateway IP’s typically don’t change often enough to warrant that level of concern, so it didn’t deter me from using this method.  But I do acknowledge that it is something to remember especially during expansion.

Also, I already had all of our gateway IPs accessible in a bootstrap.ini thanks to our existing MDT infrastructure, making this approach much simpler work-wise.

Get-SiteDomainController

And here’s where I ended up.  The most useful part – to me anyway – is that it works in Windows (real Windows) at any stage of the imaging process and doesn’t require anything extra to work.  The only gotcha is that it does not work in WinPE because nslookup isn’t built-in and I don’t know why after all these years it still isn’t built-in.

Function Get-SiteDomainController
    {
        Param
            (
                [Parameter(Mandatory=$true)]
                    [string]$Site,

                [Parameter(Mandatory=$true)]
                    [string]$Domain
            )
        Try { return (@(nslookup -type=srv _kerberos._tcp.$Site._sites.$Domain | where {$_ -like "*hostname*"}))[0].SubString(20) }
        Catch { return $_ }
    }
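Calling it looks like this, feeding in the site worked out elsewhere:

$SiteDC = Get-SiteDomainController -Site $Site -Domain $env:USERDNSDOMAIN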

Hindsight

I’ve been using the above method for determining the local DC for years and it’s been super reliable for me.  It wasn’t until recently that I learned that the query I was using would not always return a domain controller, since it simply locates servers running the Kerberos KDC service for that domain.  Whoops.

Instead, I should have used this query which will always only return domain controllers:

nslookup -type=srv _kerberos._tcp.$ADSiteName._sites.dc._msdcs.$env:USERDNSDOMAIN

 

Honorable Mention: NLTEST

I would have been able to accomplish the same task with nltest via

nltest /dclist:$env:USERDNSDOMAIN

And then continuing with the gateway IP to AD Site translation to pull out just the entries I need.

nltest /dclist:$env:USERDNSDOMAIN | find /i ": $ADSiteName"

One interesting thing to note about nltest is that it seems to always return the servers in the same order.  This could be good for consistency’s sake, but it could also be a single point of failure if that first server has a problem.

The nslookup query on the other hand returns results in a different order each time it’s executed.  Again, this could also be bad, making it difficult to spot an issue with a Domain Controller, but it could be good in that it doesn’t halt whatever process is using the output of that function.

¯\_(ツ)_/¯

Anyway, since the nslookup script has been in production for some time now and works just fine, I’m not going to change it.  However, any future scripts I create that need similar functionality will likely use the updated nslookup query method.

 

Good Providence!

Determine AD Site

I created this function ages ago as part of a larger script that needed to be executed completely unattended.

For a variety of reasons, I needed a reliable way to determine which AD Site a particular machine was in.  Over the years I’ve seen some really odd behaviors that – technically – shouldn’t happen.  But as with many organizations (one hopes anyway…) things don’t always go as planned so you need that backup plan.

Get-Site Prerequisites

My script has a few prerequisites:

  • The current default gateway IP

I personally try to keep everything self contained where it makes sense to do so.  It’s one of my [many(?)] quirks.

Finding the Current Default Gateway IP

Better see this post for that.

Get-Site Explained

In our environment, the machine names contain the office location which makes it easy to figure out where the machine is physically.  However, not every machine is named ‘correctly’ (read: follows this naming convention) as they were either renamed by their owners or are running an image prior to this new naming convention.  This is why I’m leveraging a hash table array of IP’s as a backup.

It seemed tedious at first, but since I already had a list of IPs in the bootstrap.ini, it made this a little easier.  Also, Site IPs don’t change often, which means this should be a reliable method for years to come with only the occasional update.

Function Get-Site
    {
        Param([Parameter(Mandatory=$true)][string]$DefaultGatewayIP)

        Switch($env:COMPUTERNAME.Substring(0,2))
            {
                { 'HQ', 'TX', 'FL' -contains $_ } { $Site = $_ }
                Default
                    {
                        $htOfficeGateways = @{
                            "Headquarters"  = "192.168.0.1","192.168.10.1";
                            "Office1"       = "192.168.1.1","192.168.11.1";
                            "Office2"       = "192.168.2.1","192.168.12.1";
                        }

                        # For testing purposes only - uncomment one to simulate a gateway
                        #$DefaultGatewayIP = '192.168.0.1'
                        #$DefaultGatewayIP = '192.168.1.1'
                        #$DefaultGatewayIP = '192.168.2.1'

                        Foreach($Office in ($htOfficeGateways.GetEnumerator() | Where-Object { $_.Value -eq $DefaultGatewayIP } ))
                            {
                                Switch($($Office.Name))
                                    {
                                        "Headquarters" { $Site = 'HQ' }
                                        "Office1"      { $Site = 'TX' }
                                        "Office2"      { $Site = 'FL' }
                                    }
                            }
                    }
            }
        return $Site
    }
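Chained together with the gateway lookup covered in a later post (assuming both functions are loaded), it looks like:

$Site = Get-Site -DefaultGatewayIP (Get-GatewayIP)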

 

I didn’t add a check prior to returning $Site to confirm it was actually set, but that can be handled outside of the function as well.  Otherwise you could do $Site = 'DDOJSIOC' at the top and then check for that later, since that should never be a valid site … unless you’re Billy Rosewood.

 

Get Gateway IP Address

I created this function ages ago as part of a larger script that needed to be executed completely unattended.

I needed a method to determine which office a particular asset was in at the time.  I settled on using the gateway IP address since I already had a list of them from our BootStrap.ini and CustomSettings.ini.

Originally I was using this query:

[array]$arrDefaultGatewayIP = gwmi Win32_NetworkAdapterConfiguration -ErrorAction Stop | select -ExpandProperty DefaultIPGateway

But this proved problematic on machines running older versions of PowerShell so I ended up using this instead.

[array]$arrDefaultGatewayIP = gwmi Win32_NetworkAdapterConfiguration -ErrorAction Stop | Where { $_.DefaultIPGateway -ne $null } | select -ExpandProperty DefaultIPGateway

Alternatively, I suppose I could have Try’d the first and then executed the latter if PoSH Caught any exceptions.

And then I converted it to a function:

Function Get-GatewayIP

Function Get-GatewayIP
    {
        Try { return [string](gwmi Win32_NetworkAdapterConfiguration -Filter 'IPEnabled = True' -ErrorAction Stop | ? { $_.DefaultIPGateway -ne $null } | select -ExpandProperty DefaultIPGateway | Select -First 1) }
        Catch { return $_ }
    }

From there it was just a matter of validating that the return result looked like an IP before feeding it through a hash table array to determine the office location.
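That validation can be as simple as letting .NET attempt the parse; a quick sketch:

$GatewayIP = Get-GatewayIP
if([System.Net.IPAddress]::TryParse($GatewayIP,[ref]$null)) { 'Looks like an IP - safe to feed to the hash table lookup' }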

So Why is this Relevant?

My original plan was to rely on the computer name since, in our environment, machine names contain an abbreviated version of the office name – the office code for lack of a better term.  For example,

  • a machine based out of the New York office might be called SOME-NY-THING
  • another in our Nation’s Capital might be OTHER-DC-THING etc.

However some machines don’t follow this format because:

  1. they’re running an older build imaged prior to the new naming convention
  2. some overzealous/tenacious individuals have renamed their machines

I had also considered querying AD to get the machine’s current OU but since this script didn’t always run under the context of an AD account (e.g.: in WinPE) it meant baking the credentials into the script, and I really don’t like doing that any more than I have to.
(Hint: Webservices is the answer to that problem)

I’ve been using this approach for well over a year now and it’s come in handy countless times.  Hopefully you’ll find some value in this as well.

Good Providence!

Preparing for Windows 10: OU Structure

Our team has been playing around with Windows 10 in our corporate environment for months now and one thing that was apparent from day one was that something was breaking the Task Bar, rendering the Start Menu & Cortana completely useless.  (i.e.: You click on it and it doesn’t work.)  We were certain it was a GPO, but we didn’t know which GPO, or more specifically which setting within it, was creating the problem.  We started digging into it a little bit, but it wasn’t long before we realized it was almost a moot point because:

  1. Some GPOs were specific to Windows 7 & wouldn’t apply to Windows 10
  2. Some GPOs were specific to Office 2010 & wouldn’t apply to Office 2016
  3. Others were simply no longer needed like our IEM policy for IE9.
  4. Finally, and perhaps most importantly: we wanted to take this opportunity to start from scratch: re-validate all the GPOs and the settings within, and consolidate where possible.

And in order to do all that, we needed to re-evaluate our OU structure.

When it comes to OU structures, depending on the audience, it makes for very “exciting” conversation as some OU structures have a fairly simple layout:

  • Corp Computers
    • Hardware Type

While others look like a rabbit warren:

  • Continent
    • Country
      • Region (or equivalent)
        • State  (or equivalent)
          • City
            • Role/Practice Group/Functional Group
              • Hardware Type
                • Special Configurations
                  • Other
Note: I’m not knocking the latter or suggesting the former is better.  Really just depends on the specific needs of the organization.
No one size fits all; but some sizes may fit many.

When we did an image refresh some time ago – prior to our SCCM implementation – we took a ‘middle of the road’ approach with the main branches being location and hardware type.  Later, during our SCCM implementation last summer, knowing that Windows 10 was on the horizon, we revised the layout, adding the OS branch in order to properly support both environments.

Bulk Creating OU’s

Once you know your layout, for example:

  • CorpSystems
    • Location
      • OS
        • Hardware Type

It’s time to create them all.  I wasn’t about to do this by hand so I hunted around for a way to recursively create OUs.  The solution comes courtesy of a serverfault question, and I just tweaked it to work for us.

Function Create-NewOU
    {
        # http://serverfault.com/questions/624279/how-can-i-create-organizational-units-recursively-on-powershell
        Param
            (
                [parameter(Position = 0,Mandatory = $true,HelpMessage = "Name of the new OU (e.g.: Win10)")]
                [alias('OUName')]
                    [string]$NewOU,

                [parameter(Position = 1,Mandatory = $false,HelpMessage = "Location of the new OU (e.g.: OU=IT,DC=Contoso,DC=com)")]
                [alias("OUPath")]
                    [string]$Path
            )

        # For testing purposes
        #$NewOU = 'Win10'
        #$NewOU = 'OU=SpecOps,OU=IT,OU=Laptops,OU=Win10,OU=Miami,OU=Florida,OU=Eastern,OU=NorthAmerica'
        #$Path = 'DC=it,DC=contoso,DC=com'
        #$Path = (Get-ADRootDSE).defaultNamingContext

        Try
            {
                if($Path) { if($NewOU.Substring(0,3) -eq 'OU=') { $NewOU = $NewOU + ',' + $Path } else { $NewOU = 'OU=' + $NewOU + ',' + $Path } }
                else { if($NewOU.Substring(0,3) -eq 'OU=') { $NewOU = $NewOU + ',' + (Get-ADRootDSE).defaultNamingContext } else { $NewOU = 'OU=' + $NewOU + ',' + (Get-ADRootDSE).defaultNamingContext } }

                # A regex to split the distinguishedname (DN), taking escaped commas into account
                $DNRegex = '(?<![\\]),'

                # We'll need to traverse the path, level by level, let's figure out the number of possible levels
                $Depth = ($NewOU -split $DNRegex).Count

                # Step through each possible parent OU
                for($i = 1;$i -le $Depth;$i++)
                    {
                        $NextOU = ($NewOU -split $DNRegex,$i)[-1]
                        if(($NextOU.Substring(0,3) -eq "OU=") -and ([ADSI]::Exists("LDAP://$NextOU") -eq $false)) { [String[]]$MissingOUs += $NextOU }
                    }

                # Reverse the order of missing OUs; we want to create the top-most needed level first
                if($MissingOUs) { [array]::Reverse($MissingOUs) }

                # Now create the missing part of the tree, including the desired OU
                foreach($OU in $MissingOUs)
                    {
                        $NewOUName = (($OU -split $DNRegex,2)[0] -split "=")[1]
                        $NewOUPath = ($OU -split $DNRegex,2)[1]

                        write-host "Creating [$NewOUName] in [$NewOUPath]"
                        New-ADOrganizationalUnit -Name $newOUName -Path $newOUPath -Verbose
                    }
            }
        Catch { return $_ }
        return $true
    }
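A quick usage example (the OU path is made up, and note this function does rely on the ActiveDirectory module for Get-ADRootDSE and New-ADOrganizationalUnit):

Create-NewOU -OUName 'OU=Laptops,OU=Win10,OU=Miami,OU=Florida' -OUPath (Get-ADRootDSE).defaultNamingContext

Any missing levels get created top-down, so you can throw an entire new branch at it in one shot.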

 

With that done, it was on to the next bit which was creating and linking GPOs.

 

Good Providence!

Non Sequitur: Creating Dynamic Variables on the Fly in VBScript

Update 2016-06-08: WordPress keeps butchering the code so I’m removing the pretty formatting for now.

I’m only mentioning this because of something that came up recently and I thought it was kind of neat.

To keep things simple, I’m going to break this up into a few posts:

  1. Batch
  2. VBScript
  3. PowerShell

 

VBScript

About 4, if not almost 5, years ago, before I learned the ways of PowerShell, I moved up from Batch to VBScript, mostly when things got complicated.  I was still getting used to PowerShell, practicing simple things like managing AD objects and doing things in Exchange (mail exports, delegation, tracking down messages etc.), so I relied heavily on Batch and VBScript to get things done because they were languages I was quite comfortable with.

SCCM 2012 (no R2) was my first foray into the System Center ecosystem and I was excited, as it was supposed to solve many of the problems we faced with the imaging process we had then, and address nearly all of our concerns with some of the changes we would be making in a year’s time or so.  Like most organizations we have a hefty list of applications on all of our workstations that usually fall into one of the following categories:

  1. Core Apps – These are applications required by just about everyone in the organization like Microsoft Office, FileSite, EMM etc.
  2. Practice Apps – These are applications required by those in a particular practice group (IP, Litigation, Trust & Estates etc.) or a functional group (Development, Admin/HR, Accounting etc.)
  3. Extra Apps – These are usually applications that aren’t necessarily required but are often needed.

When creating the Task Sequence, we had two options for installing applications:

  1. Add the applications to the installation list, or
  2. Install applications according to a dynamic variable list.

The former is great because it allows a friendly interface for selecting, adding and organizing the applications for installation. The biggest drawback here is that each task can only support 9 application installations, which means more install application tasks and a lot of clicking.

The latter is also great because one can define Collection Variables that correspond to applications and install them on the fly.  Its biggest drawback was that it required maintaining a list of Collection Variables in potentially multiple collections and I didn’t want to spend my time renumbering because I removed or added an application since there couldn’t be any gaps in the numbers.  Augh.

Please don’t misconstrue: Both methods work and I’m not trying to knock either method!

Note: It’s possible things have changed since then and maybe there was in fact a solution addressing this exact problem.  I wasn’t an expert then – nor am I an expert now – so I improvised.

I thought to myself “Wouldn’t it be neat if I could generate sequentially numbered variables on the fly at runtime?”  Then it dawned on me: my old batch script!  I decided to write a small vbscript that I could use to populate those variables in the TS.

I ended up with something very similar to what’s below.  It was a ‘proof-of-concept’ script running in my, then, lab environment and it worked wonderfully straight through to production.

option explicit

' Lets define our array housing the core applications.
' Easy to add, delete & reorganize.
'
' NOTE: The application names here correspond to the application
' names within SCCM 2012
dim arrCoreApps : arrCoreApps=Array("Microsoft Office Professional Plus 2099",_
				    "FileSite",_
				    "DeskSite",_
				    "Nuance PDF Converter Enterprise",_
				    "Workshare Professional",_
				    "PayneGroup Metadata Assistant",_
				    "PayneGroup Numbering Assistant",_
				    "Adobe Acrobat Pro",_
				    "Some Fake App",_
				    "Some Real App"_
				    )

' With the array populated, let's build sequentially numbered variables with a static prefix
' The prefix here is CoreApps
' This is the variable that should be used in the Application Install Task Sequence step.

BuildApplicationVariables arrCoreApps,"CoreApps"

' Let's make sure the Task Sequence Variables have successfully been defined
RetrieveApplicationVariables arrCoreApps,"CoreApps"
wscript.echo

' Another example
dim arrExtraApps : arrExtraApps=Array("Java",_
				      "Google Chrome",_
				      "Mozilla Firefox"_
				      )

BuildApplicationVariables arrExtraApps,"ExtraApps"
RetrieveApplicationVariables arrExtraApps,"ExtraApps"

Sub BuildApplicationVariables(Required_Application_Array,Required_Prefix)
	dim arrApplications : arrApplications = Required_Application_Array
	dim sPrefix : sPrefix = Required_Prefix

	dim i
	For i=0 to UBound(arrApplications)
		if (i+1 < 10) Then
			Execute "wscript.echo ""Creating Variable ["" & sPrefix & 0 & i+1 & ""] Containing ["" & arrApplications(i) & ""]"""
			ExecuteGlobal sPrefix & "0" & i+1 & "=" & """" & arrApplications(i) & """"
		else
			Execute "wscript.echo ""Creating Variable [" & sPrefix & i+1 & "] Containing [" & arrApplications(i) & "]"""
			ExecuteGlobal sPrefix & i+1 & "=" & """" & arrApplications(i) & """"
		end if
	Next
End Sub

Sub RetrieveApplicationVariables(Required_Application_Array,Required_Prefix)
	dim arrApplications : arrApplications = Required_Application_Array
	dim sPrefix : sPrefix = Required_Prefix

	dim i
	For i=0 to UBound(arrApplications)
		if (i+1 < 10) Then
			Execute "wscript.echo vbtab & ""[" & sPrefix & "0" & i+1 & "] = ["" & " & sPrefix & "0" & i+1 & " & ""]"""
		else
			Execute "wscript.echo vbtab & ""[" & sPrefix & i+1 & "] = ["" & " & sPrefix & i+1 & " & ""]"""
		end if
	Next
end Sub
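One note: in the proof-of-concept above, the variables only exist inside the script.  To make them visible to an Install Application step in a real Task Sequence, they would need to be written through the TS environment COM object instead of ExecuteGlobal; a minimal sketch of that substitution:

' Write the numbered values as actual Task Sequence variables
Dim oTSEnv : Set oTSEnv = CreateObject("Microsoft.SMS.TSEnvironment")
oTSEnv(sPrefix & "0" & i+1) = arrApplications(i)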

Anyway this worked well for us in setting up those dynamic variables on the fly and it’s a piece of code I’ll keep in my back pocket just in case.

Good Providence!

UEFI Windows 10 Installation via USB

Most organizations are running Windows 7 on either legacy hardware or UEFI capable hardware but have disabled UEFI in favor of the legacy BIOS emulation and using an MBR partitioning style versus GPT.  Prior to Windows 7, most deployment tools didn’t work with UEFI and there were almost no UEFI benefits for Windows 7, which is why the legacy configuration was preferred.  But in Windows 10, there are some benefits like faster startup time, better support for resume/hibernate, security etc. that you’ll want to take advantage of.

Although not ideal for Windows 10, you could keep using legacy BIOS emulation (which will work just fine, and “be supported for years to come”) and deal with UEFI for new devices or as devices are returned to IT and prepared for redistribution.  But if you want to take advantage of the new capabilities Windows 10 on UEFI enabled devices offers, you’ll essentially have to do a hardware swap because there’s no good way to ‘convert’ as it requires:

  • changing the firmware settings on the devices
  • changing the disk from an MBR disk to a GPT disk
  • changing the partition structure

All coordinated as part of an OS refresh or an upgrade.

Now, I wouldn’t go as far as to say it’s not possible to automate the above (I love me a good challenge), but the recommended procedure is to capture the state from the old device, do a bare metal deployment on a properly configured UEFI device then restore the data onto said device.

If you’re imaging machines with MDT or SCCM and are PXE booting, all you need to do is:

  • add the x64 boot media to your task sequence
  • deploy your task sequence to a device collection that contains the machine you wish to image
  • reconfigure the BIOS for UEFI

If however you’re imaging machines by hand with physical boot media, you’ll want a properly configured USB drive to execute the installation successfully.

There are loads of blogs that talk about creating bootable USB media but the majority of them don’t speak to UEFI.  And those that do touch on UEFI, almost all of them miss that one crucial step which is what allows for a proper UEFI based installation.

What you need:

  • 4GB+ USB drive
  • a UEFI compatible system
  • some patience

Terminology

  • USB drive = a USB stick or USB thumb drive – whatever you want to call it
  • USB hard drive = an external hard drive connected via USB; not the same as above
  • Commands I’m referencing are in italics
  • Commands you have to type are in bold italics

Step 1 – Locate your USB Drive

Open an elevated command prompt & run diskpart
At the diskpart console, type: lis dis
At the diskpart console, type: lis vol

You should have a screen that looks similar to this:
[Screenshot: diskpart ‘lis dis’ and ‘lis vol’ output]

I frequently have two USB hard drives and one USB drive plugged into my machine, so when I have to re-partition the USB drive, I have to be super extra careful.  To make sure I’m not screwing up, I rely on a few checks to pick the proper device.

First: The dead giveaway lies in the ‘lis vol‘ command which shows you the ‘Type’ of device.  We know USB drives can be removed and they’re listed as ‘Removable’.  There’s only one right now, Volume 8 which is assigned drive letter E.

Second: I know that my USB drive is 8GB in size, so I’m looking at the ‘Size’ column in both the ‘lis vol‘ and ‘lis dis‘ commands to confirm I’m looking at the right device.  And from ‘lis dis‘ I see my USB drive is Disk 6.

Step 2 – Prepare USB Drive

From the diskpart console, we’re going to issue a bunch of commands to accomplish our goal.

Select the proper device: sel dis 6

Issue these seven diskpart commands to prepare the USB drive:

  1. cle
  2. con gpt
  3. cre par pri
  4. sel par 1
  5. for fs=fat32 quick
  6. assign
  7. exit

That’s it!  The second diskpart command above is the *most critical step* for properly preparing your USB drive for installing Windows on UEFI enabled hardware, and nearly all the popular sites omit that step.  Bonkers!
Feel free to close the command window now.
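If you build these drives often, the same commands can be dropped into a text file and fed to diskpart in one shot (double-check the disk number first; this is destructive):

rem usb-gpt.txt - run with: diskpart /s usb-gpt.txt
sel dis 6
cle
con gpt
cre par pri
sel par 1
for fs=fat32 quick
assign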

Step 3 – Prepare the Media

With your USB drive properly setup now, all you need to do is mount the Windows 10 ISO and copy the contents to the USB drive.

If you’re on Windows 8 or Windows 10 already, right-click the ISO and select ‘Mount’.
If you’re on Windows 7, use something like WinCDEmu to mount the ISO.

Once mounted, you can copy the contents from the ‘CD’ to the USB drive.
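On Windows 8/10 this can be scripted too; a rough sketch assuming example paths and that E: is the freshly formatted USB drive:

Mount-DiskImage -ImagePath 'C:\ISOs\Win10.iso'
$ISODrive = (Get-DiskImage -ImagePath 'C:\ISOs\Win10.iso' | Get-Volume).DriveLetter
Copy-Item -Path ($ISODrive + ':\*') -Destination 'E:\' -Recurse
Dismount-DiskImage -ImagePath 'C:\ISOs\Win10.iso'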

Step 4 – Image

At this point all that’s left to do is

  • boot your machine(s)
  • make sure your BIOS is setup for UEFI versus Legacy BIOS; or simply enable ‘Secure Boot’ which on many machines sets UEFI as the default automatically
  • boot from your USB drive
  • install Windows

 

Hopefully this has helped point you in the right direction for taking advantage of all Windows 10 on UEFI enabled hardware has to offer.

 

Good Providence!

Applying Hotfix 3143760 for Windows ADK v1511

Although I’m moving full-steam-ahead with PowerShell, I regularly fall back on batch for really simple things mostly because I’m comfortable with the ‘language.’   (A little too comfortable maybe.)

I needed to apply hotfix KB3143760 on a handful of machines so I pulled the instructions from the KB, put them into a batch file and executed from the central repository since I had already previously downloaded the files.

@echo off
rem can be amd64 or x86
Set _Arch=x86
Set _WIMPath=%ProgramFiles(x86)%\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\%_Arch%\en-us\winpe.wim
Set _MountPoint=%SystemDrive%\WinPE_%_Arch%\mount
Set _ACLFile=%SystemDrive%\WinPE_%_Arch%\AclFile

if /i [%_Arch%]==[amd64] (Set _Schema=\\path\to\schema-x64.dat)
if /i [%_Arch%]==[x86] (Set _Schema=\\path\to\schema-x86.dat)

if not exist "%_WIMPath%" echo. &amp; echo ERROR: WIM NOT FOUND&amp;&amp;goto end
if not exist "%_Schema%" echo. &amp; echo ERROR: SCHEMA NOT FOUND&amp;&amp;goto end
if not exist "%_MountPoint%" mkdir "%_MountPoint%"
if exist "%_ACLFile%" del /q "%_ACLFile%"
if not exist "%_WIMPath%.ORIG" echo f | xcopy "%_WIMPath%" "%_WIMPath%.ORIG" /V /F /H /R /K /O /Y /J
if %ERRORLEVEL% NEQ 0 echo ERROR %ERRORLEVEL% OCCURRED&amp;&amp;goto end

:mount
dism /Mount-Wim /WimFile:"%_WIMPath%" /index:1 /MountDir:"%_MountPoint%"

:backuppermissions
icacls "%_MountPoint%\Windows\System32\schema.dat" /save "%_ACLFile%"

:applyfix
takeown /F "%_MountPoint%\Windows\System32\schema.dat" /A
icacls "%_MountPoint%\Windows\System32\schema.dat" /grant BUILTIN\Administrators:(F)
xcopy "%_Schema%" "%_MountPoint%\Windows\System32\schema.dat" /V /F /Y

:restorepermissions
icacls "%_MountPoint%\Windows\System32\schema.dat" /setowner "NT SERVICE\TrustedInstaller"
icacls "%_MountPoint%\Windows\System32\\" /restore "%_ACLFile%"

echo. &amp; echo.

:confirm
Set _Write=0
set /p _UsrResp=Did everything complete successfully? (y/n):
if /i [%_UsrResp:~0,1%]==[y] (set _Write=1) else (if /i [%_UsrResp:~0,1%]==[n] (set _Write=0) else (goto confirm))

:unmount
echo. &amp; echo.
if %_Write% EQU 1 (
	echo. &amp; echo Unmounting and COMMITTING Changes
	dism /Commit-Wim /MountDir:"%_MountPoint%"
	dism /Unmount-Wim /MountDir:"%_MountPoint%" /commit
) else (
	echo. &amp; echo Unmounting and DISCARDING Changes
	dism /Unmount-Wim /MountDir:"%_MountPoint%" /discard
)
dism /Cleanup-Wim

:end
if exist "%_ACLFile%" del /q "%_ACLFile%"
Set _Write=
Set _UsrResp=
Set _MountPoint=
Set _WIMPath=
Set _Arch=
pause

 

Really simple, and it worked brilliantly.

It’s nowhere nearly as elegant or complete as Keith Garner’s solution, but it got the job done for me lickety split.

 

Good Providence and patch safely!

Practical Disk Wiping (NOT for SSDs)

##########################
DANGER WILL ROBINSON!

This post specifically speaks to traditional mechanical or spindle-based hard drives.
This post is NOT for solid-state drives (aka SSDs).
See this post for guidance on how to wipe SSDs.

I’m not a data recovery or security expert, so feel free to take this with a grain of salt.

##########################


Over the years I’ve collected quite a few mechanical/spindle drives at home.  Most of them were inherited from other systems I acquired over time, some gifts and some purchased by myself.  As needs grew and technology evolved, I upgraded to larger drives here and there, made the move to SSDs on some systems and kept the old drives in a box for future rainy day projects.  Well there haven’t been too many rainy days and I’m feeling it’s time to purge these drives.

I’ve explored some professional drive destruction services, but honestly I don’t trust some of the more affordable ones, and the ones I do trust are almost prohibitively expensive.  And by that I mean: While I don’t want someone perusing through my collection of family photos, music and what little intellectual property I may like to think I have, I also don’t know what dollar amount I’m willing to pay to keep said privacy.

I’ve always been a big fan of dd and dban, the latter of which has been my recommended go-to disk wiping solution, and was even the solution employed by one of my past employers.  But being that I live in a Windows world driven largely by MDT and SCCM,  I wanted something I could easily integrate with that environment, leveraging “ubiquitous utilities” and minimizing reliance on third-party software.

TL;DR

Leverage some built-in and easily accessible Windows utilities to secure and wipe your mechanical disks.

Diskpart

Diskpart has the ability to ‘zero out’ every sector on the disk.  It only does a single pass, so it’s not ideal, but it’s at least a start.

@&amp;quot;
sel vol $VolumeNumber
clean all
cre par pri
sel par
assign letter=$VolumeLetter
exit
&amp;quot;@ | Out-File $env:Temp\diskpart.script

diskpart /s $env:Temp\diskpart.script

]

Cipher

Cipher is something of a multi-tool because you can not only use it to encrypt directories and files, but also perform a 3 pass wipe – 0x00 on the first, 0xFF on the second & random on the third – that will remove data from available unused disk space on the entire volume.

Unfortunately, cipher doesn’t encrypt read-only files, which means you’d have to run something ahead of time to remove that attribute before encrypting.  You should also use the /B switch to abort on errors; otherwise cipher just continues past them.

# remove read-only properties
Try{gci $Volume -Recurse | ? { $_.IsReadOnly -eq $true } | Set-ItemProperty -Name IsReadOnly -Value $false }Catch {}

# encrypt
cipher /E /H /S:$Volume

# 3-pass wipe
cipher /W:$Volume

BitLocker

BitLocker won’t wipe your disk, but it will allow you to securely encrypt whatever data may be on there.

Enable-BitLocker -MountPoint $Volume -EncryptionMethod XtsAes256 -Password $(ConvertTo-SecureString "123 enter 456 something 789 random 0 here!@*@#&" -AsPlainText -Force) -PasswordProtector

Format

Format allows you to zero out every sector on the volume and also overwrite each sector with a different random number on each consecutive pass.

format $Volume /FS:NTFS /V:SecureWiped /X /P:$Pass /Y

SDelete

Dating back to the Windows XP and Server 2003 days, this tried-and-true utility’s sole purpose is to securely erase data, both file data and unallocated portions of a disk, and it offers an appreciable number of options:

  1. Remove Read-Only attribute
  2. Clean free space
  3. Perform N number of overwrite passes
  4. Recurse subdirectories
  5. Zero free space

In fact, sdelete implements the Department of Defense clearing and sanitizing standard DOD 5220.22-M.  Boom goes the dynamite.

sdelete.exe -accepteula -p $Pass -a -s -c -z $Volume

The Plan

I do most of my wiping from a Windows 10 machine, versus WinPE, so diskpart, cipher, BitLocker and format are all available to me.  Sdelete, however, requires downloading the utility or at least accessing it from the live sysinternals site.

Knowing this, I follow this wiping strategy that I feel is ‘good enough’ for me.

Please Note: If you research ‘good enough security’, you’ll see that in reality it’s really not good enough so please don’t practice that.

1 – Run diskpart’s clean all on the Drives

I mostly do this just to wipe the partitions and zero everything out.

2 – Encrypt the Drives with BitLocker

If anything’s left, it’s encrypted using the XTS-AES encryption algorithm with a randomly generated 128-character ASCII password.  (256 characters is the max, by the way…)
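For the curious, here’s a minimal sketch of how you might generate that throwaway password and feed it to Enable-BitLocker on Windows 10.  $Volume is the drive being destroyed, and the password is deliberately never recorded anywhere:

$Volume = 'F:'   # the victim drive - NOT your system drive!

# Build a random 128-character password from printable ASCII (codes 33-126)
$Password = -join (1..128 | ForEach-Object { [char](Get-Random -Minimum 33 -Maximum 127) })

# Encrypt with XTS-AES 256 (Windows 10 only; fall back to Aes256 on 7/8.x)
Enable-BitLocker -MountPoint $Volume -EncryptionMethod XtsAes256 `
    -Password (ConvertTo-SecureString $Password -AsPlainText -Force) -PasswordProtector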

3 – Wipe the Drives

When it comes to wiping drives, one pass is arguably sufficient, but I typically cite the well-known & ever-popular DoD 5220.22-M standard.  But what exactly is this alleged well-known standard?

The ODAA Process Manual v3.2 published in November of 2013 contains a copy of the ‘DSS Clearing and Sanitization Matrix‘ on page 116, which outlines how to handle various types of media for both cleaning and sanitization scenarios:

Cleaning:

  • (a) Degauss with a Type I, II, or III degausser.
  • (c) Overwrite all addressable locations with a single character utilizing an approved overwrite utility.

Sanitizing

  • (b) Degauss with a Type I, II, or III degausser.
  • (d) Overwrite with a pattern, and then its complement, and finally with another unclassified pattern (e.g., “00110101” followed by “11001010” and then followed by “10010111” [considered three cycles]).  Sanitization is not complete until three cycles are successfully completed.
  • (l) Destruction

I’m guessing the average person doesn’t have a degausser, and since I left mine in Cheboygan, I have to consider other options.  If you’re planning on donating, selling or otherwise repurposing this hardware – as I am – physical destruction of the drive isn’t an option.  This leaves me with performing a combination of ‘c’ and ‘d’ using a utility of my choosing as the document doesn’t specify what an “approved overwrite utility” is.

Because of sdelete’s reputation, it’s the most desirable utility from the get-go.  But I’m of the opinion that between cipher and format, you have another multi-pass wipe solution at your disposal.  Also, I default to a 3-pass wipe since 7, 35, 42 (or more) passes really aren’t necessary.
But if you’re paranoid, the sky’s the limit pass-wise, and you should consider destroying your drives instead.

4 – Lock the Drive

The drive has been encrypted and wiped so before I move forward with the validation/verification phase, I take the key out of the ignition by locking the drives:

Lock-BitLocker -MountPoint $Volume
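And to confirm the key really is out of the ignition:

# LockStatus should report 'Locked' before moving on to verification
Get-BitLockerVolume -MountPoint $Volume | Select-Object MountPoint, VolumeStatus, LockStatus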

5 – Verification

I don’t have access to a state-of-the-art data recovery facility, so I do my best with a handful of utilities that recover both files and partitions.  Just search for data recovery, partition recovery, undelete software and that’s pretty much what I used.

For this post I downloaded 16 data recovery utilities to see what could be recovered:

  • The large majority of them couldn’t even access the drives and failed immediately.
  • Some fared better and were able to perform block-level scans for data and found nothing.
  • A few applications allegedly found multiple files types ranging from 200MB to 4GB.
    • For example, two different apps claimed to have found several 1-2GB SWF’s, a few 1GB TIFF’s and some other media formats as well.
    • I know for a fact I didn’t have any SWF’s and if I did have a one or two TIFF’s, they were nowhere near 1GB.
    • I’m guessing the applications are using some sort of algorithm to determine file type in an effort to piece things together.
  • I restored a few of those files but I was unable to reconstruct them into anything valuable.  Please note that I’m not known for being a hex-editing sleuth.

Where Did I End Up?

If NIST Special Publication 800-88r1 is to be believed:

… a single overwrite pass with a fixed pattern such as binary zeros typically hinders recovery of data even if state of the art laboratory techniques are applied to attempt to retrieve the data.

I’m pretty confident there’s no real tangible data left considering I used a few multi-pass wipes then encrypted the drive.

Gotcha’s and Other Considerations

  • In my environment, Internet access isn’t a problem so I use sdelete from the live sysinternals site in both Windows and WinPE (MDT and SCCM).  The plus side is that I don’t have to bother adding it to the boot image, packaging it, downloading it somewhere or storing it in %DeployRoot% or %ScriptRoot%.
  • However, at the moment, sdelete will not work in an amd64 WinPE environment, which is problematic for UEFI environments booting x64 boot media.
  • The XTS-AES encryption algorithm is only supported by BitLocker on Windows 10, but you can fall back on AES256 if you’re on Windows 7 or 8/8.1.
  • If you don’t have the BitLocker cmdlets (I’m looking at you, Windows 7) you’ll have to automate it using manage-bde or do it manually via the Control Panel.

When I think of the old “security is like an onion…” adage, I believe there’s value in taking multiple approaches:

  • Use diskpart to zero out the drive.
  • Use cipher to wipe the drive a few times taking into consideration that each execution does 3 passes.
  • Perform an N pass format.
  • Encrypt the drive with BitLocker using a password of up to 256 ASCII characters (the max).
  • Use sdelete from the live sysinternals site to perform an N-pass wipe.
  • Lock the drive leaving it wiped but encrypted.
  • Destroy the drive yourself with a hammer and a handful of nails in a few strategic locations.

Below is the framework for the ‘fauxlution’ I came up with.
It does include the cipher encryption step but not BitLocker. (See above for that)

[CmdletBinding()]
Param
    (
        [Parameter(Mandatory=$false)]
            [string[]]$Volume,

        [Parameter(Mandatory=$false)]
            [int]$Pass = 7
    )

if(!($Volume)) { [array]$Volume = $(((Read-Host -Prompt "Enter volume to utterly destroy.") -split ',') -split ' ').Split('',[System.StringSplitOptions]::RemoveEmptyEntries) }
if(!($Pass)) { [int]$Pass = Read-Host -Prompt "Enter number of passes." }

Foreach($Vol in $Volume)
    {
        $Vol = $Vol.Substring(0,1) + ':'
        if($Vol -eq $env:SystemDrive) { Write-Host "ERROR: Skipping volume [$Vol] as it matches SystemDrive [$env:SystemDrive]"; continue }
        If(!(Test-Path $Vol)) { Write-Host "Error: Volume [$Vol] does not exist!"; continue }

        <# Diskpart Section #>
        "lis vol`r`nexit" | Out-File $env:Temp\diskpart-lisvol.script -Encoding utf8 -Force
        # Note: Substring(7,1) assumes a single-digit volume number in diskpart's output
        [int32]$VolNumber = 999; [int32]$VolNumber = (diskpart /s "$env:Temp\diskpart-lisvol.script" | ? { $_ -like "* $($Vol.Substring(0,1)) *" }).ToString().Trim().Substring(7,1)
        if($VolNumber -eq 999) { Write-Host "No VolNumber Found For Volume [$Vol]"; break }
        #"sel vol $VolNumber`r`nclean all`r`ncre par pri`r`nsel par 1`r`nassign letter=$($Vol.Substring(0,1))`r`nexit" | Out-File $env:Temp\diskpart-cleanall.script -Encoding utf8 -Force
        #$dpResult = diskpart /s "$env:Temp\diskpart-cleanall.script"

        <# Cipher Section #>
        #Try { gci $Vol -Recurse | ? { $_.IsReadOnly -eq $true } | Set-ItemProperty -Name IsReadOnly -Value $false } Catch {}
        #cipher /E /H /S:$Vol
        #for($i=0; $i -lt $([System.Math]::Round($Pass/3)); $i++) { cipher /W:$Vol }

        <# Format Section #>
        #format $Vol /FS:NTFS /V:SecureWiped /X /P:$Pass /Y

        <# SDelete Section #>
        if(!(Get-Command "sdelete.exe" -ErrorAction SilentlyContinue))
            {
                Try
                    {
                        $NPSDResult = New-PSDrive -Name LiveSysinternals -PSProvider FileSystem -Root \\live.sysinternals.com\tools -ErrorAction Stop
                        if(!(Test-Path LiveSysinternals:\sdelete.exe -PathType Leaf)) { Write-Host "ERROR: Unable to locate sdelete"; Remove-PSDrive $NPSDResult -Force -ea SilentlyContinue; break }
                        $OrigLocation = (Get-Location).Path
                        Set-Location LiveSysinternals:
                    }
                Catch { Write-Host "ERROR: Unable to connect to live.sysinternals.com"; break }
            }
        #sdelete.exe -accepteula -p $Pass -a -s -c -z $Vol
        if($OrigLocation) { Set-Location $OrigLocation; Remove-PSDrive $NPSDResult -Force -ea SilentlyContinue }
    }

 

Two things about this:

  1. I added a check to make sure you’re not accidentally blowing away your SystemDrive.
  2. All the damaging code is commented out to minimize accidental execution.  Only uncomment after you’ve debugged the script in your environment.

It may not be worthy for use in corporate environments or to even replace whatever your current wiping solution is.
But if you’re not doing anything, please consider starting here!

 

Good Providence!

Exploring Windows 10: CB, CBB, LTSB – Oh My!

The introduction of Windows 10 brings with it the concept of both Windows as a Service (WaaS) and Servicing Options.

  • Windows as a Service is the idea that Windows 10 may likely be the final numbered version of Windows (e.g.: don’t expect a monolithic Windows 15 upgrade in 5 years) and instead will continually evolve over time with cumulative releases or updates.
  • Servicing Options (or Windows Branches if you will) allow you to subscribe to varying levels of updates depending on your organizational needs for your particular build of Windows.

I’m not going to go deep into how you figure out which Windows Branch you want or what the benefits/drawbacks to each are because that’s too much to cover and Microsoft and others have documented that well.  Instead, I’ll summarize from other parts of the web some key considerations for each.

Windows Insider Branch (WIB)

IT users with test lab machines to spare who want to be on the cutting edge.

  • See new features before they are released and provide feedback.  Note, in some cases you may see features that are pulled prior to being released.
  • This gives you the ability to smoke test compatibility with existing applications and hardware.
  • The target audience is IT administrators & geeks on non-critical devices, because if something breaks, you don’t want to be down a day trying to fix it.

 

Current Branch (CB)

Early adopters in the organization, initial pilots and the IT machines to start preparing for broader rollout

  • CB is the broadly deployed branch of Windows 10 aimed at consumers.
  • New features and updates that make the cut for release are rolled out to this branch first.
  • Critical security updates and fixes (aka “Servicing Updates”) will still be released on the 2nd Tuesday of the month.
  • The expected cadence of new features (aka “Feature Upgrades”) is every few months but that may vary.
  • CB has all the bells and whistles of the given version of Windows such as both IE and Edge browsers, Store apps, etc.
  • You can go from CB to CBB by checking the ‘Defer Upgrades’ box under the Advanced Options of Windows Updates.

 

Current Branch for Business (CBB)

Broad deployment across the organization, following a successful roll-out/pilot of the Current Branch equivalent.
Note: This can be delayed with the enterprise management tools, etc.

  • This is the same OS as the Current Branch but the Feature Upgrade cadence is aimed at business users.
  • Follows the same critical security updates and fixes release as CB.
  • The new feature/functionality upgrades, though, will be deployed to CBB systems on a later schedule, months after CB systems receive them.
    • This can be from 4-12 months after they were released to the CB, depending on how they are deployed
      • Windows Update-connected CBB systems will defer the updates for 4 months
      • SCCM or other managed CBB systems can defer up to 12 months
  • CBB has all the bells and whistles of the given version of Windows such as both IE and Edge browsers, Store apps, etc
  • You can go from CBB to CB by unchecking the ‘Defer Upgrades’ box under the Advanced Options of Windows Updates (see the sketch below for checking this programmatically).
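If you’d rather check where a machine stands programmatically than eyeball the Settings app, the checkbox appears to be reflected in the registry on 1511 builds.  Treat the path and value name below as an assumption to verify on your own build before relying on it at scale:

# Assumption: Windows 10 1511 reflects the 'Defer Upgrades' checkbox in this value - verify first!
$Key   = 'HKLM:\SOFTWARE\Microsoft\WindowsUpdate\UX\Settings'
$Defer = (Get-ItemProperty -Path $Key -ErrorAction SilentlyContinue).DeferUpgrade

if($Defer -eq 1) { Write-Host 'Upgrades deferred: this machine behaves as CBB' }
else             { Write-Host 'Upgrades not deferred: this machine behaves as CB' }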

 

Long Term Servicing Branch (LTSB)

Very specific specialized systems; this should be a small percentage of systems within your organization.

  • This is for machines where innovation takes a back seat to the highest levels of stability, such as kiosks, ATMs and so on
  • LTSB is actually a different OS SKU than the CB/CBB and it is intended for mission-critical systems (i.e. cash registers, health care systems, air traffic control, etc) where “set it and forget it” is a requirement.
  • Receives critical security updates and fixes just as CB and CBB.
  • The new feature/functionality upgrades, though, will not be deployable to an LTSB OS until the next version of an LTSB is released, which could be anywhere from 3 to 5+ years.
  • LTSB does NOT have all the bells and whistles of the given version of Windows – it only has IE (no Edge); it doesn’t have the Store Apps or support for it.
  • You cannot go from LTSB to WIB, or LTSB to CB or LTSB to CBB.  If you want to switch out, you’ll have to go to the media and upgrade.

 

So in our organization, we’ve settled on the following recommendation:

  • The large majority of our organization, including some members of IT, will be on CBB.
  • Key members of IT and members of our ‘Workstation Stability Group’ – which doesn’t exist yet but is a body of volunteers consisting of normal users in various departments – will be on CB.
  • The real tire-kickers in IT will likely use CB day-to-day with maybe one backup machine running CBB for regression testing.  (I primarily see ‘system owners’ – people who are primarily responsible for a user facing system – with this configuration.)
  • Myself and a few others will probably live on the edge with WIB and have machines running CB & CBB for smoke testing and regression testing.

There’s a lot to consider and there’s no one-size-fits-all, but I hope this helps point you in the right direction.

 

Good Providence!