
MDT Tutorial Part 12: Wrap Up

Living Table of Contents

What These Guides Are:
A guide to help give you some insight into the MDT imaging & troubleshooting process in general.

What These Guides Are Not:
A guide to create a one-size-fits-all solution that fixes all the issues you’re ever going to encounter.

Why Bother with My Tutorials?

You totally don’t have to take my word for it!  I’m not world-renowned like others listed on the Resources page – heck, I’m not even county-renowned – so you totally should scrutinize the content and provide me with any constructive criticism you may have. 🙂

However, I’ve learned from experience, as well as from others in the circles I travel in, that although turn-key solutions are great, once implemented, those who implemented them are not fully equipped to maintain the environment.

Please do not misconstrue what I’m saying here: There is nothing wrong with the turn-key solutions out there!  It’s simply that we’re not all at the same level from a technical perspective and we all have knowledge gaps.  But it’s that combination that makes it challenging for some to support those turn-key solutions.

For me anyway I find that having a good base and some reference points better equips me for the road that lies ahead.  And when something breaks it’s an excellent opportunity to review my depth of knowledge on the subject to troubleshoot my way back into a functioning state.  But that’s me and it may not be you.  Maybe you’re just some savant and it comes naturally.

If you were brave enough to go through this process and successfully built, captured & deployed images, then you should have sufficient functional knowledge to efficiently use the turn-key solution below.

More Labs & Test Environments from Microsoft

Turn-Key Solution from Microsoft: Microsoft 365 Powered Device Lab Kit (Formerly Windows 10 Deployment and Management Lab Kit)

The Microsoft 365 powered device lab kit (formerly Windows 10 Deployment and Management Lab Kit) is a complete pre-configured virtual lab environment including evaluation versions of:

  • Windows 10 Enterprise, version 1803 (Windows 10 April 2018 Update)
  • System Center Configuration Manager, version 1802
  • Windows Assessment and Deployment Kit for Windows 10, version 1803
  • Microsoft Deployment Toolkit
  • Microsoft Application Virtualization (App-V) 5.1
  • Microsoft BitLocker Administration and Monitoring 2.5 SP1
  • Windows Server 2016
  • Microsoft SQL Server 2014
  • Connected trials of:
    • Office 365 Enterprise E5
    • Enterprise Mobility + Security

The best part is that it also includes illustrated step-by-step lab guides to take you through multiple deployment and management scenarios, including:

Servicing

  • Windows Analytics Update Compliance
  • Servicing Windows 10 with Configuration Manager
  • Servicing Office 365 ProPlus with Configuration Manager

Deployment and management

  • Modern Device Deployment
  • Modern Device Management with AutoPilot
  • Modern Device Co-Management
  • Office 365 ProPlus Deployment
  • BIOS to UEFI Conversion
  • Modern Application Management with Intune
  • Enterprise State Roaming
  • Remote Access (VPN)

Security

  • Windows Information Protection
  • Windows Defender Advanced Threat Protection
  • Windows Defender Application Guard
  • Windows Defender Application Control
  • Windows Defender Antivirus
  • Windows Defender Exploit Guard
  • Windows Hello for Business
  • Credential Guard
  • Device Encryption (BitLocker)
  • Remote Access (VPN)

Compatibility

  • Windows App Certification Kit
  • Windows Analytics Upgrade Readiness
  • Browser Compatibility
  • Desktop Bridges


This is an amazing kit because of its holistic approach to helping IT Pros transition to the ‘Modern Desktop’.  As such it’s a hefty download and the hardware requirements are steep.

Turn-Key Solution from Johan Arwidmark: Hydration Kit

Johan has been churning out Hydration Kits since as far back as ConfigMgr 2007 SP2 and MDT 2010 so he’s one of the top 5 go-to’s for this sort of thing.

From the author:

This Kit builds a complete ConfigMgr CB 1702, and ConfigMgr TP 1703, with Windows Server 2016 and SQL Server 2016 SP1 infrastructure, and some (optional) supporting servers. This kit is tested on both Hyper-V and VMware virtual platforms, but should really work on any virtualization platform that can boot from an ISO. The kit offers a complete setup of both a primary site server running ConfigMgr Current Branch v1702 (server CM01), as well as a primary site server running ConfigMgr Technical Preview Branch v1703 (server CM02). You also find guidance on upgrading the current branch platform to the latest build.

There’s plenty of documentation in the link:

https://deploymentresearch.com/Research/Post/580/Hydration-Kit-For-Windows-Server-2016-and-ConfigMgr-Current-Technical-Preview-Branch


Turn-Key Solution from Mikael Nystrom: Image Factory for Hyper-V

Mikael has been working on this for at least 2 years and is another of the top 5 go-to’s for this sort of thing.

From the author:

The image factory creates reference images using Microsoft Deployment Toolkit and PowerShell. You run the script on the MDT server. The script will connect to a Hyper-V host specified in the XML file and build one virtual machine for each task sequence in the REF folder. It will then grab the BIOS serial number from each virtual machine, inject it into customsettings.ini, start the virtual machines, run the build and capture process, turn off the virtual machines and remove them all.

There’s some great documentation here but plenty to browse through in the link:

https://github.com/DeploymentBunny/ImageFactoryV3ForHyper-V


Turn-Key Solution from Mike Galvin: Image Factory for Microsoft Deployment Toolkit

I happened upon this by accident but I like how straightforward it appears to be.

From the author:

Recently I’ve been looking into automating my image build process further using PowerShell to create an Image Factory of sorts. There are other similar scripts that I’ve found online, but I like the relatively simple and straightforward method I’ve developed below. If like me you’re looking to automate your image build process so that you can have a fully up-to-date image library after patch Tuesday each month, then this may be for you.

The link below is the original post where the script is introduced; I strongly urge you to start with it and then work your way backwards from there.

https://gal.vin/2017/08/26/image-factory/


Turn-Key Solution from Coretech: Image Factory

This unfortunately is not something I can link without burning a bridge.  This solution is only obtainable from [select (?)] Coretech training classes or simply if Kent Agerlund likes you.  It’s conceptually similar to many others here, relying on Hyper-V to spin up VMs, boot them, start a specific build & capture task sequence, run through the task sequence, then destroy the VMs.  Done.


In Closing

As you can see there are plenty of training, lab and turn-key imaging solutions out there, but to quote Browning:

Image the whole, then execute the parts
          Fancy the fabric
Quite, ere you build, ere steel strike fire from quartz,
          Ere mortar dab brick!

In other words, get the total picture of the process; and I argue there are two ways of doing that: with a telescope and a microscope.

Start with the telescope to look at all of it from thirty-thousand feet and you’ll see almost all of these require a dab of elbow grease to get going.  In fact, most make assumptions about the environment as well as a certain amount of proficiency with the various technologies being used: PowerShell, MDT, Hyper-V, SCCM, Active Directory etc.

In the same vein that I’d rather learn to drive a manual transmission on an old beater car before I jump into an STI, GT3 RS, or <insert other absurdly expensive manual transmission vehicle here>, it makes more sense to have a slightly more-than-basic understanding of how the technology works before diving in head first.

Good Providence to you!


Preparing for Windows 10: Move Computers into Win 10 OUs

One thing that has annoyed me about MDT and SCCM was that there wasn’t a built-in method to move machines into the proper OU.  As such it required creating something from scratch and I opted for something that didn’t require dependencies, like the ActiveDirectory module.

This isn’t the best way and this isn’t the only way – it’s just a way; one of many in fact.

Please note that this is NOT the ideal way to handle any operations that require credentials!  Keeping credentials in a script is bad practice as anyone snooping around could happen upon them and create some problems.  Instead, you should rely on web services to do this and Maik Koster has put together an excellent little care package to help get you started.

Move-ComputerToOU Prerequisites

My script has a few prerequisites:

  • The current AD site
  • A [local] Domain Controller to use (recommended)
  • The current OU of the machine to be moved
  • The new OU to move the machine into

It’s important to know that this script does not rely on the ActiveDirectory module.
One of my [many] quirks is to try to keep everything self-contained where it makes sense to do so, and I liked the idea of not having to rely on some installed component, an EXE and so on.  But to be honest, web services is the way to go for this.

Getting the Current AD Site

Better see this post for that.

Getting a Local Domain Controller

Better see this post for that.

Getting the Current OU

Better see this post for that.

Getting / Setting the New OU

If this is being executed as part of an OSD, yank those details from the MachineObjectOU Task Sequence variable via something like:

Function Get-TaskSequenceEnvironmentVariable
    {
        Param([Parameter(Mandatory=$true,Position=0)]$VarName)
        Try { return (New-Object -COMObject Microsoft.SMS.TSEnvironment).Value($VarName) }
        Catch { throw $_ }
    }
$MachineObjectOU = Get-TaskSequenceEnvironmentVariable MachineObjectOU

Otherwise just feed it the new OU via the parameter.

Process Explanation

We first have to find the existing object in AD:


# This is the machine we want to move
$ADComputerName = $env:COMPUTERNAME

# Create Directory Entry with authentication
# ($LDAPPath, $domain, $user and $pass are all defined in the full script further down)
$de = New-Object System.DirectoryServices.DirectoryEntry($LDAPPath,"$domain\$user",$pass)

# Create Directory Searcher
$ds = New-Object System.DirectoryServices.DirectorySearcher($de)

# Filter for the machine in question
$ds.Filter = "(&(ObjectCategory=computer)(samaccountname=$ADComputerName$))"

# Optionally, set other search parameters
#$ds.SearchRoot = $de
#$ds.PageSize = 1000
#$ds.Filter = $strFilter
#$ds.SearchScope = "Subtree"
#$colPropList = "distinguishedName", "Name", "samaccountname"
#foreach ($Property in $colPropList) { $ds.PropertiesToLoad.Add($Property) | Out-Null }

# Execute the find operation
$res = $ds.FindAll()


Then we bind to the existing computer object


# If there's an existing asset in AD with that sam account name, it should be the first - and only - item in the array.
# So we bind to the existing computer object in AD
$CurrentComputerObject = New-Object System.DirectoryServices.DirectoryEntry($res[0].path,"$domain\$user",$pass)

# Extract the current OU by stripping 'LDAP://<SiteDC>/CN=<ComputerName>,' from the front of the path
$CurrentOU = $CurrentComputerObject.Path.Substring(8+$SiteDC.Length+3+$ADComputerName.Length+1)


From there setup the destination OU


# Here we set the new OU location
$DestinationOU = New-Object System.DirectoryServices.DirectoryEntry("LDAP://$SiteDC/$NewOU","$domain\$user",$pass)


And finally move the machine from the old/current OU to the new/destination OU


# And now we move the asset to that new OU
$Result = $CurrentComputerObject.PSBase.MoveTo($DestinationOU)

Move-ComputerToProperOU.ps1

So this is a shortened version of the script I’m using in production.  All you need to do is fill in the blanks and test it in your environment.


# There's a separate function to get the local domain controller
$SiteDC = Get-SiteDomainController
# Or you can hard code the local domain controller instead:
#$SiteDC = 'localdc.domain.fqdn'

# Build LDAP String
# I //think// there were maybe two cases where this didn't work
$LocalRootDomainNamingContext = ([ADSI]"LDAP://$SiteDC/RootDSE").rootDomainNamingContext
# So I added logic to trap that and pull what I needed from SiteDC
#$LocalRootDomainNamingContext = $SiteDC.Substring($SiteDC.IndexOf('.'))
#$LocalRootDomainNamingContext = ($LocalRootDomainNamingContext.Replace('.',',DC=')).Substring(1)

# Building the LDAP string
$LDAPPath = 'LDAP://' + $SiteDC + "/" + $LocalRootDomainNamingContext

# Set my domain for authentication
$domain = 'mydomain'

# Set the user for authentication
$user = 'myjoindomainaccount'

# Set the password for authentication
$pass = 'my sekret leet-o-1337 creds'

# This is the machine I want to find & move into the proper OU.
$ADComputerName = $env:COMPUTERNAME

# This is the new OU to move the above machine into.
$NewOU = 'OU=Laptops,OU=Win10,OU=Office,OU=CorpWorkstations,DC=domain,DC=fqdn'

# Create Directory Entry with authentication
$de = New-Object System.DirectoryServices.DirectoryEntry($LDAPPath,"$domain\$user",$pass)

# Create Directory Searcher
$ds = New-Object System.DirectoryServices.DirectorySearcher($de)

# Filter for the machine in question
$ds.Filter = "(&(ObjectCategory=computer)(samaccountname=$ADComputerName$))"

# Optionally, set other search parameters
#$ds.SearchRoot = $de
#$ds.PageSize = 1000
#$ds.Filter = $strFilter
#$ds.SearchScope = "Subtree"
#$colPropList = "distinguishedName", "Name", "samaccountname"
#foreach ($Property in $colPropList) { $ds.PropertiesToLoad.Add($Property) | Out-Null }

# Execute the find operation
$res = $ds.FindAll()

# If there's an existing asset in AD with that sam account name, it should be the first - and only - item in the array.
# So we bind to the existing computer object in AD
$CurrentComputerObject = New-Object System.DirectoryServices.DirectoryEntry($res[0].path,"$domain\$user",$pass)

# Extract the current OU by stripping 'LDAP://<SiteDC>/CN=<ComputerName>,' from the front of the path
$CurrentOU = $CurrentComputerObject.Path.Substring(8+$SiteDC.Length+3+$ADComputerName.Length+1)

# Here we set the new OU location
$DestinationOU = New-Object System.DirectoryServices.DirectoryEntry("LDAP://$SiteDC/$NewOU","$domain\$user",$pass)

# And now we move the asset to that new OU
$Result = $CurrentComputerObject.PSBase.MoveTo($DestinationOU)


Happy Upgrading & Good Providence!

Upgrading Windows 10 1511 to 1607

In 2016 we began the process of moving from Windows 7 to Windows 10 1511, learning a ton along the way.  After 1607 went Current Branch for Business (CBB) we began planning for that upgrade phase, and what lies below is an overview of that process.

Initial Smoke Test

Once 1607 went CBB, we very quickly threw an upgrade Task Sequence together to see what the upgrade process looked like and what, if anything, broke.  The upgrade process went smoothly and the vast majority of applications worked, but there were a handful of things that needed to be worked out:

  • Remote Server Administration Tools (RSAT) was no longer present
  • Citrix Single Sign-On was broken
  • A bunch of Universal Windows Platform (UWP) or Modern Applications we didn’t want were back
  • Default File Associations had reverted
  • The Windows default wall paper had returned
  • User Experience Virtualization (UE-V/UX-V) wasn’t working.
  • Taskbar pinnings were incorrect; specifically, Edge and the Windows Store had returned

Still, everything seemed very doable so we had high hopes for getting it out there quickly.

Approach: Servicing vs Task Sequence

Wanting to get our feet wet with Windows as a Service (WaaS), we explored leveraging Servicing Plans to get everyone on to 1607 but quickly ran into a show stopper: How were we going to fix all of the 1607 upgrade ‘issues’ if we went down this path?

We couldn’t find an appealing solution for this, so we went ahead with the Task Sequence based upgrade approach.  This gave us greater flexibility because not only could we fix all the upgrade issues, we could also do a little housekeeping, like installing new applications, upgrading existing applications, removing retired applications and more.  This was far simpler and more elegant than setting up a bunch of deployments for the various tasks we wanted to accomplish either before the upgrade or after.

Avoiding Resume Updating/Generating Events

One concern with Servicing was ensuring the upgrade wasn’t heavy-handed, forcing a machine to upgrade either mid-day because it was beyond the deadline, or during the night because the user left the machine on.  This was because the upgrade would bounce the machine, which could potentially result in lost work – something most people find undesirable.  With Servicing, we couldn’t come up with a sure-fire way to check for and block the upgrade if, say, instances of work applications were detected, such as the Office suite, Acrobat and so on.

Sure, we could increase the auto-save frequency – perhaps setting it to 1 minute – and craft a technical solution to programmatically save files in the Office Suite, save Drafts and try to do some magic to save PDFs and so on.  But at the end of the day, we couldn’t account for every situation: we would never know if the person wanted to create a new file vs a new version or simply overwrite the existing one.  And most importantly, we didn’t want to have to answer why a bunch of Partners lost work product as a result of the upgrade.

So, we decided to go the Task Sequence route.

Task Sequence Based Upgrade

Now that we knew which way we needed to go, it was just a matter of building the fixes to remediate the upgrade issues then setting up the Task Sequence.

Upgrade Remediation

  • Remote Server Administration Tools (RSAT) – Prior to performing the OS upgrade, a script is executed to detect RSAT and, if present, set a Boolean variable which is referenced after the upgrade completes to trigger re-installation of RSAT.
  • Citrix Single Sign-On – This is a known issue – see CTX216312 for more details.
  • Universal Windows Platform (UWP) applications – Re-run our in-house script to remove the applications.
  • Default File Associations
    • Option 1: Prior to performing the OS upgrade, export HKCR and HKCU\Software\Classes then import post upgrade.
    • Option 2: Re-apply the defaults via dism from our ‘master’ file.
  • Wallpaper – Re-apply in the Task Sequence by taking advantage of the img0 trick.
  • UE-V/UX-V – The upgrade process broke the individual components of the UE-V Scheduled Tasks, requiring a rebuild.  Once fixed on a machine, we copied the good/fixed files from C:\Windows\System32\Tasks\Microsoft\UE-V and set up the Task Sequence to:
    1. Enable UE-V during the upgrade via PowerShell
    2. Copy the fixed files into C:\Windows\System32\Tasks\Microsoft\UE-V
    3. Update the command line the Scheduled Tasks run
    4. Disable the ‘Synchronize Settings at Logoff‘ Scheduled Task since it was still buggy, causing clients to hang on log off.
  • Taskbar Pinnings – Prior to performing the OS upgrade, export HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Taskband then import post upgrade.
  • Critical Process Detection – CustomSettings.ini calls a user exit script that loops through a series of key executables (outlook.exe, winword.exe etc.) running tasklist to see if a process is detected; if so, it sets a Task Sequence variable that’s later evaluated (sketched below).
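
To make that last bit more concrete, here’s the same process-detection logic sketched in PowerShell.  The production version is a VBScript user exit called from CustomSettings.ini; the process list and the ‘CriticalProcessDetected’ variable name here are illustrative only.

# Illustrative list of 'work' executables we don't want to interrupt
$KeyProcesses = 'outlook.exe','winword.exe','excel.exe','acrobat.exe'

# tasklist /FO CSV gives us parseable output with an 'Image Name' column
$Running = tasklist /FO CSV | ConvertFrom-Csv

# If any key executable is running, flag it in a Task Sequence variable for later evaluation
if($KeyProcesses | Where-Object { $Running.'Image Name' -contains $_ })
    {
        (New-Object -ComObject Microsoft.SMS.TSEnvironment).Value('CriticalProcessDetected') = 'True'
    }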

Since we were going the Task Sequence route, and it would be generally available in Software Center, it was decided a password prompt might help prevent accidental foot shooting.  So shortly after the Task Sequence launches, an HTA-driven password prompt is displayed that only IT should be able to successfully navigate.  This added yet another line of defense for anyone who ‘accidentally’ launched the Task Sequence – even though one already has to click through two prompts to confirm the installation, but whatever. 🙂

Backing up Recovery Keys to MBAM and AD During OSD

Scenario

As we prepared for our Windows 10 roll out, we had MBAM all set up and ready to go when a wise man suggested we back up the keys to AD too.  I was a little perplexed: in my mind this is redundant since that’s what MBAM is supposed to do.  Can’t we just trust MBAM to do its thing?  But then the same wise man dropped a statement that I totally agreed with:

“I don’t want to be the one to have to explain to our CIO that we have no way of unlocking some VIP’s machine.”

Neither did I.

Here’s a high level overview of how we setup MBAM during OSD.
It’s not the best way and it’s not the only way.  It’s just a way.

Prerequisites:

  1. Export your BitLocker registry settings from a properly configured machine
  2. Edit the export, set the ‘ClientWakeupFrequency‘ to something low like 5 minutes
  3. Edit the export, set the ‘StatusReportingFrequency‘ to something low like 10 minutes
  4. Package up the .REG file as part of your MBAM client installation
    • This could be a true Package, though I would recommend an Application that runs a wrapper to import the registry configuration; alternatively, create an MST or add it to the original MSI.

Task Sequence Setup

  1. Wait until the machine is in real Windows, not WinPE
  2. Install the MBAM client (obviously!)
  3. Reboot
  4. Stop the MBAM service – We need to do this so that the settings we make below take effect
  5. Set the MBAM service to start automatically without delay – Want to make sure it fires as soon as possible.
  6. Import your BitLocker registry settings you exported & edited
    • This is the real meat: Since GPO’s are not applied during OSD, your GPO policies won’t reach the machine during the imaging process.  This will ensure your policies are in play as soon as possible.
    • Most places don’t set the  ‘ClientWakeupFrequency‘ and/or ‘StatusReportingFrequency‘ values to something insanely low via GPO which is why we manually edited the .REG file.  If you left them at the default values, the keys wouldn’t get escrowed for a few hours due to the way the MBAM client works.
  7. Optional but Recommended: Switch to AES-XTS-256 by setting ‘EncryptionMethodWithXtsOs‘ in ‘HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\FVE‘ to ‘7
  8. Start the MBAM service
  9. Enable BitLocker using the MBAM Deployment Scripts
  10. Reboot the machine
  11. Continue with your normal imaging process
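
Here’s a rough PowerShell sketch of steps 4 through 8 above.  I believe the MBAM 2.5 client service is named ‘MBAMAgent’, and the .REG file path is purely illustrative – adjust both for your package.

# Step 4: Stop the MBAM service so the settings below take effect
Stop-Service -Name 'MBAMAgent'

# Step 5: Start automatically, without delay
Set-Service -Name 'MBAMAgent' -StartupType Automatic

# Step 6: Import the exported & edited BitLocker registry settings (path illustrative)
reg.exe import "$PSScriptRoot\MBAM-BitLocker-Settings.reg"

# Step 7 (optional but recommended): 7 = AES-XTS-256 for the OS drive
New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\FVE' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\FVE' -Name 'EncryptionMethodWithXtsOs' -Value 7 -PropertyType DWord -Force | Out-Null

# Step 8: Start the MBAM service back up
Start-Service -Name 'MBAMAgent'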

The Good

  • We’ve not run into machines with improper configurations.
  • Every machine is encrypted using
    • full disk encryption (versus used space only)
    • AES-XTS-256
  • Keys are quickly escrowed to both AD and MBAM.
  • It just works: Deployed with 1511, we’re moving to 1607 and IT is testing 1703.

The Bad

  • I couldn’t figure out how to perform full disk AES-XTS-256 encryption in WinPE so this has to happen when we’re in a real OS.
    • I tried setting the keys via the registry but didn’t bother editing WSF files or trying to reverse engineer what goes on in that step to see if I could make it work.
  • Encryption does NOT begin until after someone logs on.
  • Encryption takes a while (but not too long) on SSDs.

In Closing

I would really like to hear from others on this one.  Because it was working – and has been for over a year now – we really couldn’t justify dedicating bandwidth to exploring this further.  So we left it as-is.  My brain would like to see it work ‘properly’ one day, but that’ll have to wait.

Good Providence to you!

Finding a Local Domain Controller

I created this function ages ago as part of a larger script that needed to be executed completely unattended.

Since I started down the System Engineer / Desktop Engineer / Application Packager path a handful of years ago, I’ve seen some interesting behaviors.  For example, when joining a domain, a machine might reach out to a domain controller overseas versus one at the local site or even in the same continent.  And because things were working, those types of anomalies were never really looked into.

This only ever really came up when there were OU structure changes.  For instance, when going from XP to Windows 7, or after completing an AD remediation project.  As machines were being moved around, we noticed the Domain Controller used varied and the time it took to replicate those changes varied from:

  • instantaneously – suggesting the change was made on a local DC OR a DC with change notification enabled
  • upwards of an hour – suggesting a remote DC (and usually verified by logs or checking manually) or on a DC where change notification was not enabled.

So I decided to write a small function for my scripts that would ensure they were using a local Domain Controller every time.

There are a handful of ways to get a domain controller

  • LOGONSERVER
  • dsquery
  • Get-ADDomainController
  • Other third-party utilities like adfind, admod

But either they were not always reliable (as was the case with LOGONSERVER which would frequently point to a remote DC) or they required supporting files (as is the case with dsquery and Get-ADDomainController).  Nothing wrong with creating packages, adding these utilities to %ScriptRoot% or even calling files on a network share.
I simply preferred something self-contained.

Find Domain Controllers via NSLOOKUP

I started exploring by using nslookup to locate service record (SRV) resource records for domain controllers and I landed on this:

nslookup -type=srv _kerberos._tcp.dc._msdcs.$env:USERDNSDOMAIN

Problem was, it returned every domain controller in the organization.  Fortunately, you can specify an AD site name to narrow it down to that specific AD Site:

nslookup -type=srv _kerberos._tcp.$ADSiteName._sites.$env:USERDNSDOMAIN

Bingo!  This gave me exactly what I was hoping to find and the DC’s it returns are in a different order every time which helps to round-robin things a bit.

From there it was all downhill:

  1. Put AD Sites in the script
  2. Figure out which Site the machine is in by its gateway IP
  3. Use nslookup to query for SRV records in that site
  4. Pull the data accordingly.

Yes I know – hard coding AD Sites and gateway IP’s is probably frowned upon because

What if it changes?

I don’t know about your organization, but in my [limited] experience the rationale was that AD Sites and gateway IP’s typically don’t change often enough to warrant that level of concern, so it didn’t deter me from using this method.  But I do acknowledge that it is something to remember, especially during expansion.

Also, I already had all of our gateway IP’s accessible in a bootstrap.ini due to our existing MDT infrastructure, making this approach much simpler work-wise.

Get-SiteDomainController

And here’s where I ended up.  The most useful part – to me anyway – is that it works in Windows (real Windows) at any stage of the imaging process and doesn’t require anything extra to work.  The only gotcha is that it does not work in WinPE because nslookup isn’t built in – and I don’t know why, after all these years, it still isn’t.

Function Get-SiteDomainController
    {
        Param
            (
                [Parameter(Mandatory=$true)]
                    [string]$Site,

                [Parameter(Mandatory=$true)]
                    [string]$Domain
            )
        # Grab the first 'svr hostname = <fqdn>' line from the output, then trim off the first 20 characters (the label portion) to leave just the FQDN
        Try { return (@(nslookup -type=srv _kerberos._tcp.$Site._sites.$Domain | where {$_ -like "*hostname*"}))[0].SubString(20) }
        Catch { return $_ }
    }

Hindsight

I’ve been using the above method for determining the local DC for years and it’s been super reliable for me.  It wasn’t until recently that I learned that the query I was using would not always return a domain controller, since the query simply locates servers that are running the Kerberos KDC service for that domain.  Whoops.

Instead, I should have used this query, which will only ever return domain controllers:

nslookup -type=srv _kerberos._tcp.$ADSiteName._sites.dc._msdcs.$env:USERDNSDOMAIN
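
If you want to fold that into the function from earlier, it’s a one-line change.  A sketch:

Function Get-SiteDomainController
    {
        Param
            (
                [Parameter(Mandatory=$true)]
                    [string]$Site,

                [Parameter(Mandatory=$true)]
                    [string]$Domain
            )
        # Same as before, but the dc._msdcs zone only ever contains domain controllers
        Try { return (@(nslookup -type=srv _kerberos._tcp.$Site._sites.dc._msdcs.$Domain | where {$_ -like "*hostname*"}))[0].SubString(20) }
        Catch { return $_ }
    }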


Honorable Mention: NLTEST

I would have been able to accomplish the same task with nltest via

nltest /dclist:$env:USERDNSDOMAIN

And then continuing with the gateway IP to AD Site translation to pull out just the entries I need.

nltest /dclist:$env:USERDNSDOMAIN | find /i ": $ADSiteName"

One interesting thing to note about nltest is that it seems to always return the servers in the same order.  This could be good for consistency’s sake, but it could also be a single point of failure if that first server has a problem.

The nslookup query on the other hand returns results in a different order each time it’s executed.  Again, this could also be bad, making it difficult to spot an issue with a Domain Controller, but it could be good in that it doesn’t halt whatever process is using the output of that function.

¯\_(ツ)_/¯

Anyway, since the nslookup script has been in production for some time now and works just fine, I’m not going to change it.  However, any future scripts I create that need similar functionality will likely use the updated nslookup query method.


Good Providence!

Determine AD Site

I created this function ages ago as part of a larger script that needed to be executed completely unattended.

For a variety of reasons, I needed a reliable way to determine which AD Site a particular machine was in.  Over the years I’ve seen some really odd behaviors that – technically – shouldn’t happen.  But as with many organizations (one hopes anyway…) things don’t always go as planned so you need that backup plan.

Get-Site Prerequisites

My script has a few prerequisites:

  • The current default gateway IP

I personally try to keep everything self contained where it makes sense to do so.  It’s one of my [many(?)] quirks.

Finding the Current Default Gateway IP

Better see this post for that.
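
For quick reference, one common way to grab it via WMI looks something like this (a minimal sketch; the post linked above covers it properly):

# First NIC that has IP enabled and a default gateway defined
$NIC = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter 'IPEnabled = TRUE' |
    Where-Object { $_.DefaultIPGateway } | Select-Object -First 1
$DefaultGatewayIP = $NIC.DefaultIPGateway[0]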

Get-Site Explained

In our environment, the machine names contain the office location which makes it easy to figure out where the machine is physically.  However, not every machine is named ‘correctly’ (read: follows this naming convention) as they were either renamed by their owners or are running an image prior to this new naming convention.  This is why I’m leveraging a hash table array of IP’s as a backup.

It seemed tedious at first, but since I already had a list of IP’s in the bootstrap.ini, it made this a little easier.  Also, Site IP’s don’t change often, which means this should be a reliable method for years to come with only the occasional update.

Function Get-Site
    {
        Param([Parameter(Mandatory=$true)][string]$DefaultGatewayIP)

        Switch($env:COMPUTERNAME.Substring(0,2))
            {
                { 'HQ', 'TX', 'FL' -contains $_ } { $Site = $_ }
                Default
                    {
                        $htOfficeGateways = @{
                            "Headquarters"  = "192.168.0.1","192.168.10.1";
                            "Office1"       = "192.168.1.1","192.168.11.1";
                            "Office2"       = "192.168.2.1","192.168.12.1";
                        }

                        # Hard-coded gateways for testing only -- leave these commented out so the parameter is honored
                        #$DefaultGatewayIP = '192.168.0.1'
                        #$DefaultGatewayIP = '192.168.1.1'
                        #$DefaultGatewayIP = '192.168.2.1'

                        # -contains matches offices that have more than one gateway IP
                        Foreach($Office in ($htOfficeGateways.GetEnumerator() | Where-Object { $_.Value -contains $DefaultGatewayIP } ))
                            {
                                Switch($($Office.Name))
                                    {
                                        "Headquarters" { $Site = 'HQ' }
                                        "Office1"      { $Site = 'TX' }
                                        "Office2"      { $Site = 'FL' }
                                    }
                            }
                    }
            }
        return $Site
    }


I didn’t add a check prior to returning $Site to confirm it was actually set, but that can be handled outside of the function as well.  Otherwise you could do $Site = 'DDOJSIOC' at the top and then check for that later, since that should never be a valid site … unless you’re Billy Rosewood.
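
On the caller’s side, that sentinel idea might look something like this (names illustrative):

$Site = Get-Site -DefaultGatewayIP $DefaultGatewayIP
if(!$Site -or $Site -eq 'DDOJSIOC') { Write-Host 'ERROR: Unable to determine AD Site'; return }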


Non Sequitur: Creating Dynamic Variables on the Fly in VBScript

Update 2016-06-08: WordPress keeps butchering the code so I’m removing the pretty formatting for now.

I’m only mentioning this because of something that came up recently and I thought it was kind of neat.

To keep things simple, I’m going to break this up into a few posts:

  1. Batch
  2. VBScript
  3. PowerShell


VBScript

About 4, if not almost 5, years ago – before I learned the ways of PowerShell – I moved up from Batch to VBScript, mostly when things got complicated.  I was still getting used to PowerShell, practicing simple things like managing AD objects and doing things in Exchange (mail exports, delegation, tracking down messages etc.), so I relied heavily on Batch and VBScript to get things done because they were languages I was quite comfortable with.

SCCM 2012 (no R2) was my first foray into the System Center ecosystem and I was excited, as it was supposed to solve many of the problems we faced with the imaging process we had then, and address nearly all of our concerns with some of the changes we would be making in a year’s time or so.  Like most organizations we have a hefty list of applications on all of our workstations that usually fall into one of the following categories:

  1. Core Apps – These are applications required by just about everyone in the organization like Microsoft Office, FileSite, EMM etc.
  2. Practice Apps – These are applications required by those in a particular practice group (IP, Litigation, Trust & Estates etc.) or a functional group (Development, Admin/HR, Accounting etc.)
  3. Extra Apps – These are usually applications that aren’t necessarily required but are often needed.

When creating the Task Sequence, we had two options for installing applications:

  1. Add the applications to the installation list, or
  2. Install applications according to a dynamic variable list.

The former is great because it allows a friendly interface for selecting, adding and organizing the applications for installation. The biggest drawback here is that each task can only support 9 application installations, which means more install application tasks and a lot of clicking.

The latter is also great because one can define Collection Variables that correspond to applications and install them on the fly.  Its biggest drawback was that it required maintaining a list of Collection Variables in potentially multiple collections and I didn’t want to spend my time renumbering because I removed or added an application since there couldn’t be any gaps in the numbers.  Augh.

Please don’t misconstrue: Both methods work and I’m not trying to knock either method!

Note: It’s possible things have changed since then and maybe there was in fact a solution addressing this exact problem.  I wasn’t an expert then – nor am I an expert now – so I improvised.

I thought to myself “Wouldn’t it be neat if I could generate sequentially numbered variables on the fly at runtime?”  Then it dawned on me: my old batch script!  I decided to write a small VBScript that I could use to populate those variables in the TS.

I ended up with something very similar to what’s below.  It was a ‘proof-of-concept’ script running in my, then, lab environment and it worked wonderfully straight through to production.

option explicit

' Lets define our array housing the core applications.
' Easy to add, delete & reorganize.
'
' NOTE: The application names here correspond to the application
' names within SCCM 2012
dim arrCoreApps : arrCoreApps=Array("Microsoft Office Professional Plus 2099",_
				    "FileSite",_
				    "DeskSite",_
				    "Nuance PDF Converter Enterprise",_
				    "Workshare Professional",_
				    "PayneGroup Metadata Assistant",_
				    "PayneGroup Numbering Assistant",_
				    "Adobe Acrobat Pro",_
				    "Some Fake App",_
				    "Some Real App"_
				    )

' With the array populated, let's build sequentially numbered variables with a static prefix
' The prefix here is CoreApps
' This is the variable prefix that should be used in the Application Install Task Sequence step.

BuildApplicationVariables arrCoreApps,"CoreApps"

' Lets make sure the Task Sequence Variables have successfully been defined
RetrieveApplicationVariables arrCoreApps,"CoreApps"
wscript.echo

' Another example
dim arrExtraApps : arrExtraApps=Array("Java",_
				      "Google Chrome",_
				      "Mozilla Firefox"_
				      )

BuildApplicationVariables arrExtraApps,"ExtraApps"
RetrieveApplicationVariables arrExtraApps,"ExtraApps"

Sub BuildApplicationVariables(Required_Application_Array,Required_Prefix)
	dim arrApplications : arrApplications = Required_Application_Array
	dim sPrefix : sPrefix = Required_Prefix

	dim i
	For i=0 to UBound(arrApplications)
		if (i+1 < 10) Then
			Execute "wscript.echo ""Creating Variable ["" & sPrefix & 0 & i+1 & ""] Containing ["" & arrApplications(i) & ""]"""
			ExecuteGlobal sPrefix & "0" & i+1 & "=" & """" & arrApplications(i) & """"
		else
			Execute "wscript.echo ""Creating Variable [" & sPrefix & i+1 & "] Containing [" & arrApplications(i) & "]"""
			ExecuteGlobal sPrefix & i+1 & "=" & """" & arrApplications(i) & """"
		end if
	Next
End Sub

Sub RetrieveApplicationVariables(Required_Application_Array,Required_Prefix)
	dim arrApplications : arrApplications = Required_Application_Array
	dim sPrefix : sPrefix = Required_Prefix

	dim i
	For i=0 to UBound(arrApplications)
		if (i+1 < 10) Then
			Execute "wscript.echo vbtab & ""[" & sPrefix & "0" & i+1 & "] = ["" & " & sPrefix & "0" & i+1 & " & ""]"""
		else
			Execute "wscript.echo vbtab & ""[" & sPrefix & i+1 & "] = ["" & " & sPrefix & i+1 & " & ""]"""
		end if
	Next
end Sub
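
For what it’s worth, the PowerShell equivalent of this trick is nearly trivial thanks to New-Variable.  A sketch, using a shortened application list:

$CoreApps = 'Microsoft Office Professional Plus 2099','FileSite','DeskSite'

for($i = 0; $i -lt $CoreApps.Count; $i++)
    {
        # Creates CoreApps01, CoreApps02, ... just like the VBScript above
        New-Variable -Name ('CoreApps{0:D2}' -f ($i + 1)) -Value $CoreApps[$i] -Force
    }

# Confirm the variables were created
Get-Variable -Name CoreApps* | ForEach-Object { "[$($_.Name)] = [$($_.Value)]" }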

Anyway this worked well for us in setting up those dynamic variables on the fly and it’s a piece of code I’ll keep in my back pocket just in case.

Good Providence!

UEFI Windows 10 Installation via USB

Most organizations are running Windows 7 on either legacy hardware or UEFI capable hardware but have disabled UEFI in favor of the legacy BIOS emulation and using an MBR partitioning style versus GPT.  Prior to Windows 7, most deployment tools didn’t work with UEFI and there were almost no UEFI benefits for Windows 7, which is why the legacy configuration was preferred.  But in Windows 10, there are some benefits like faster startup time, better support for resume/hibernate, security etc. that you’ll want to take advantage of.

Although not ideal for Windows 10, you could keep using legacy BIOS emulation (which will work just fine, and “be supported for years to come”) and deal with UEFI for new devices or as devices are returned to IT and prepared for redistribution.  But if you want to take advantage of the new capabilities Windows 10 on UEFI enabled devices offers, you’ll essentially have to do a hardware swap because there’s no good way to ‘convert’ as it requires:

  • changing the firmware settings on the devices
  • changing the disk from an MBR disk to a GPT disk
  • changing the partition structure

All coordinated as part of an OS refresh or an upgrade.

Now, I wouldn’t go as far as to say it’s not possible to automate the above (I love me a good challenge), but the recommended procedure is to capture the state from the old device, do a bare metal deployment on a properly configured UEFI device then restore the data onto said device.

If you’re imaging machines with MDT or SCCM and are PXE booting, all you need to do is:

  • add the x64 boot media to your task sequence
  • deploy your task sequence to a device collection that contains the machine you wish to image
  • reconfigure the BIOS for UEFI

If however you’re imaging machines by hand with physical boot media, you’ll want a properly configured USB drive to execute the installation successfully.

There are loads of blogs that talk about creating bootable USB media but the majority of them don’t speak to UEFI.  And those that do touch on UEFI, almost all of them miss that one crucial step which is what allows for a proper UEFI based installation.

What you need:

  • 4GB+ USB drive
  • a UEFI compatible system
  • some patience

Terminology

  • USB drive = a USB stick or USB thumb drive – whatever you want to call it
  • USB hard drive = an external hard drive connected via USB; not the same as above
  • Commands I’m referencing are in italics
  • Commands you have to type are in bold italics

Step 1 – Locate your USB Drive

Open an elevated command prompt & run diskpart
At the diskpart console, type: lis dis
At the diskpart console, type: lis vol

You should have a screen that looks similar to this:
[Screenshot: output of diskpart’s ‘lis dis’ and ‘lis vol’ commands]

I frequently have two USB hard drives and one USB drive plugged into my machine, so when I have to re-partition the USB drive, I have to be super extra careful.  So to make sure I’m not screwing up, I rely on a few things to make sure I’m picking the proper device.

First: The dead giveaway lies in the ‘lis vol‘ command which shows you the ‘Type’ of device.  We know USB drives can be removed and they’re listed as ‘Removable’.  There’s only one right now, Volume 8 which is assigned drive letter E.

Second: I know that my USB drive is 8GB in size, so I’m looking at the ‘Size’ column in both the ‘lis vol‘ and ‘lis dis‘ commands to confirm I’m looking at the right device.  And from ‘lis dis‘ I see my USB drive is Disk 6.
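
If you’re on Windows 8 or 10, PowerShell can also take some of the guesswork out of identifying the stick.  A quick sanity check:

# Show only USB-attached disks
Get-Disk | Where-Object BusType -eq 'USB' | Select-Object Number, FriendlyName, Size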

Step 2 – Prepare USB Drive

From the diskpart console, we’re going to issue a bunch of commands to accomplish our goal.

Select the proper device: sel dis 6

Issue these seven diskpart commands to prepare the USB drive:

  1. cle
  2. con gpt
  3. cre par pri
  4. sel par 1
  5. for fs=fat32 quick
  6. assign
  7. exit

That’s it!  The second diskpart command above (con gpt) is the *most critical step* for properly preparing your USB drive for installing Windows on UEFI enabled hardware, and nearly all the popular sites omit that step.  Bonkers!
Feel free to close the command window now.
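
If you do this often, the whole prep can be scripted by feeding diskpart the same commands from PowerShell.  A sketch – disk 6 is MY USB drive, so triple-check the disk number before running this, because it will wipe that disk:

# DANGER: verify the disk number with 'lis dis' first -- this wipes the selected disk
@"
sel dis 6
cle
con gpt
cre par pri
sel par 1
for fs=fat32 quick
assign
exit
"@ | diskpart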

Step 3 – Prepare the Media

With your USB drive properly setup now, all you need to do is mount the Windows 10 ISO and copy the contents to the USB drive.

If you’re on Windows 8 or Windows 10 already, right-click the ISO and select ‘Mount’.
If you’re on Windows 7, use something like WinCDEmu to mount the ISO.

Once mounted, you can copy the contents from the ‘CD’ to the USB drive.
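
On Windows 8/10 the mount-and-copy can be scripted too.  A sketch – the ISO path and the USB drive letter (E:) are illustrative:

$ISO = 'C:\ISO\Windows10.iso'
# Mount the ISO and work out which drive letter it landed on
$Src = ((Mount-DiskImage -ImagePath $ISO -PassThru | Get-Volume).DriveLetter) + ':\'
Copy-Item -Path "$Src*" -Destination 'E:\' -Recurse
Dismount-DiskImage -ImagePath $ISO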

Step 4 – Image

At this point all that’s left to do is

  • boot your machine(s)
  • make sure your BIOS is setup for UEFI versus Legacy BIOS; or simply enable ‘Secure Boot’ which on many machines sets UEFI as the default automatically
  • boot from your USB drive
  • install Windows


Hopefully this has helped point you in the right direction for taking advantage of all Windows 10 on UEFI enabled hardware has to offer.


Good Providence!

Applying Hotfix 3143760 for Windows ADK v1511

Although I’m moving full-steam-ahead with PowerShell, I regularly fall back on batch for really simple things mostly because I’m comfortable with the ‘language.’   (A little too comfortable maybe.)

I needed to apply hotfix KB3143760 on a handful of machines so I pulled the instructions from the KB, put them into a batch file and executed from the central repository since I had already previously downloaded the files.

@echo off
rem can be amd64 or x86
Set _Arch=x86
Set _WIMPath=%ProgramFiles(x86)%\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\%_Arch%\en-us\winpe.wim
Set _MountPoint=%SystemDrive%\WinPE_%_Arch%\mount
Set _ACLFile=%SystemDrive%\WinPE_%_Arch%\AclFile

if /i [%_Arch%]==[amd64] (Set _Schema=\\path\to\schema-x64.dat)
if /i [%_Arch%]==[x86] (Set _Schema=\\path\to\schema-x86.dat)

if not exist "%_WIMPath%" echo. &amp; echo ERROR: WIM NOT FOUND&amp;&amp;goto end
if not exist "%_Schema%" echo. &amp; echo ERROR: SCHEMA NOT FOUND&amp;&amp;goto end
if not exist "%_MountPoint%" mkdir "%_MountPoint%"
if exist "%_ACLFile%" del /q "%_ACLFile%"
if not exist "%_WIMPath%.ORIG" echo f | xcopy "%_WIMPath%" "%_WIMPath%.ORIG" /V /F /H /R /K /O /Y /J
if %ERRORLEVEL% NEQ 0 echo ERROR %ERRORLEVEL% OCCURRED&amp;&amp;goto end

:mount
dism /Mount-Wim /WimFile:"%_WIMPath%" /index:1 /MountDir:"%_MountPoint%"

:backuppermissions
icacls "%_MountPoint%\Windows\System32\schema.dat" /save "%_ACLFile%"

:applyfix
takeown /F "%_MountPoint%\Windows\System32\schema.dat" /A
icacls "%_MountPoint%\Windows\System32\schema.dat" /grant BUILTIN\Administrators:(F)
xcopy "%_Schema%" "%_MountPoint%\Windows\System32\schema.dat" /V /F /Y

:restorepermissions
icacls "%_MountPoint%\Windows\System32\schema.dat" /setowner "NT SERVICE\TrustedInstaller"
icacls "%_MountPoint%\Windows\System32\\" /restore "%_ACLFile%"

echo. &amp; echo.

:confirm
Set _Write=0
set /p _UsrResp=Did everything complete successfully? (y/n):
if /i [%_UsrResp:~0,1%]==[y] (set _Write=1) else (if /i [%_UsrResp:~0,1%]==[n] (set _Write=0) else (goto confirm))

:unmount
echo. & echo.
if %_Write% EQU 1 (
	echo. & echo Unmounting and COMMITTING Changes
	dism /Commit-Wim /MountDir:"%_MountPoint%"
	dism /Unmount-Wim /MountDir:"%_MountPoint%" /commit
) else (
	echo. & echo Unmounting and DISCARDING Changes
	dism /Unmount-Wim /MountDir:"%_MountPoint%" /discard
)
dism /Cleanup-Wim

:end
if exist "%_ACLFile%" del /q "%_ACLFile%"
Set _Write=
Set _UsrResp=
Set _MountPoint=
Set _WIMPath=
Set _Arch=
pause


Really simple, and it worked brilliantly.

It’s nowhere near as elegant or complete as Keith Garner’s solution, but it got the job done for me lickety-split.


Good Providence and patch safely!

Practical Disk Wiping (NOT for SSDs)

##########################
DANGER WILL ROBINSON!

This post specifically speaks to traditional mechanical or spindle-based hard drives.
This post is NOT for solid-state drives (aka SSD)
See this post for guidance on how to wipe SSD’s.

I’m not a data recovery or security expert so feel free to take this with a grain of salt.

##########################


Over the years I’ve collected quite a few mechanical/spindle drives at home.  Most of them were inherited from other systems I acquired over time, some gifts and some purchased by myself.  As needs grew and technology evolved, I upgraded to larger drives here and there, made the move to SSDs on some systems and kept the old drives in a box for future rainy day projects.  Well there haven’t been too many rainy days and I’m feeling it’s time to purge these drives.

I’ve explored some professional drive destruction services, but honestly I don’t trust some of the more affordable ones, and the ones I do trust are almost prohibitively expensive.  And by that I mean: While I don’t want someone perusing through my collection of family photos, music and what little intellectual property I may like to think I have, I also don’t know what dollar amount I’m willing to pay to keep said privacy.

I’ve always been a big fan of dd and dban, the latter of which has been my recommended go-to disk wiping solution, and was even the solution employed by one of my past employers.  But being that I live in a Windows world driven largely by MDT and SCCM,  I wanted something I could easily integrate with that environment, leveraging “ubiquitous utilities” and minimizing reliance on third-party software.

TL;DR

Leverage some built-in and easily accessible Windows utilities to secure and wipe your mechanical disks.

Diskpart

Diskpart has the ability to ‘zero out’ every sector on the disk.  It only does a single pass so it’s not ideal, but it’s at least a start.

@&amp;quot;
sel vol $VolumeNumber
clean all
cre par pri
sel par
assign letter=$VolumeLetter
exit
&amp;quot;@ | Out-File $env:Temp\diskpart.script

diskpart /s $env:Temp\diskpart.script


Cipher

Cipher is something of a multi-tool because you can not only use it to encrypt directories and files, but also perform a 3 pass wipe – 0x00 on the first, 0xFF on the second & random on the third – that will remove data from available unused disk space on the entire volume.

Unfortunately cipher doesn’t encrypt read-only files, which means you’d have to run something ahead of time to remove that attribute before encrypting, and you should use the /B switch to catch errors, otherwise it will just continue.

# remove read-only properties
Try { gci $Volume -Recurse | ? { $_.IsReadOnly -eq $true } | Set-ItemProperty -Name IsReadOnly -Value $false } Catch {}

# encrypt
cipher /E /H /S:$Volume

# 3-pass wipe
cipher /W:$Volume

BitLocker

BitLocker won’t wipe your disk, but it will allow you to securely encrypt whatever data may be on there.

Enable-BitLocker -MountPoint $Volume -EncryptionMethod XtsAes256 -Password $(ConvertTo-SecureString "123 enter 456 something 789 random 0 here!@*@#&" -AsPlainText -Force) -PasswordProtector

Format

Format allows you to zero out every sector on the volume and also overwrite each sector with a different random number on each consecutive pass.

format $Volume /FS:NTFS /V:SecureWiped /X /P:$Pass /Y

SDelete

Dating back to the Windows XP and Server 2003 days, this tried-and-true utility’s sole purpose is to securely erase data – both file data and unallocated portions of a disk – and it offers an appreciable number of options:

  1. Remove Read-Only attribute
  2. Clean free space
  3. Perform N number of overwrite passes
  4. Recurse subdirectories
  5. Zero free space

In fact, sdelete implements the Department of Defense clearing and sanitizing standard DOD 5220.22-M.  Boom goes the dynamite.

sdelete.exe -accepteula -p $Pass -a -s -c -z $Volume

The Plan

I do most of my wiping from a Windows 10 machine, versus WinPE, so diskpart, cipher, bitlocker and format are all available to me.  Sdelete however does require downloading the utility or at least accessing it from the live sysinternals site.

Knowing this, I follow this wiping strategy that I feel is ‘good enough’ for me.

Please Note: If you research ‘good enough security’, you’ll see that in reality it’s really not good enough, so please don’t practice that.

1 – Run diskpart’s clean all on the Drives

I mostly do this just to wipe the partitions and zero everything out.

2 – Encrypt the Drives with BitLocker

If anything’s left, its encrypted using the XTS-AES encryption algorithm using a randomly generated 128 ASCII character long password.  (256 is the max by the way…)

3 – Wipe the Drives

When it comes to wiping drives, one pass is arguably sufficient, but I typically cite the well known & ever popular DoD 5220.22-M standard.  But what exactly is this alleged well-known standard?

The ODAA Process Manual v3.2 published in November of 2013 contains a copy of the ‘DSS Clearing and Sanitization Matrix‘ on page 116, which outlines how to handle various types of media for both cleaning and sanitization scenarios:

Cleaning:

  • (a) Degauss with a Type I, II, or III degausser.
  • (c) Overwrite all addressable locations with a single character utilizing an approved overwrite utility.

Sanitizing

  • (b) Degauss with a Type I, II, or III degausser.
  • (d) Overwrite with a pattern, and then its complement, and finally with another unclassified pattern (e.g., “00110101” followed by “11001010” and then followed by “10010111” [considered three cycles]).  Sanitization is not complete until three cycles are successfully completed.
  • (l) Destruction

I’m guessing the average person doesn’t have a degausser, and since I left mine in Cheboygan, I have to consider other options.  If you’re planning on donating, selling or otherwise repurposing this hardware – as I am – physical destruction of the drive isn’t an option.  This leaves me with performing a combination of ‘c’ and ‘d’ using a utility of my choosing as the document doesn’t specify what an “approved overwrite utility” is.

Because of sdelete’s reputation, it’s the most desirable utility from the get-go.  But I’m of the opinion that between cipher and format, you have another multi-pass wipe solution at your disposal.  Also, I default to a 3 pass wipe since 7, 35, 42 (or more) passes really are not necessary.
But if you’re paranoid, the sky’s the limit pass-wise, and you should consider destroying your drives.

4 – Lock the Drive

The drive has been encrypted and wiped so before I move forward with the validation/verification phase, I take the key out of the ignition by locking the drives:

Lock-BitLocker -MountPoint $Volume

5 – Verification

I don’t have access to a state-of-the-art data recovery facility, so I do my best with a handful of utilities that recover both files and partitions.  Just search for data recovery, partition recovery, undelete software and that’s pretty much what I used.

For this post I downloaded 16 data recovery utilities to see what could be recovered:

  • The large majority of them couldn’t even access the drives and failed immediately.
  • Some fared better and were able to perform block-level scans for data and found nothing.
  • A few applications allegedly found multiple files types ranging from 200MB to 4GB.
    • For example, two different apps claimed to have found several 1-2GB SWF’s, a few 1GB TIFF’s and some other media formats as well.
    • I know for a fact I didn’t have any SWF’s and if I did have a one or two TIFF’s, they were nowhere near 1GB.
    • I’m guessing the applications are using some sort of algorithm to determine file type in an effort to piece things together.
  • I restored a few of those files but I was unable to reconstruct them into anything valuable.  Please note that I’m not known for being a hex-editing sleuth.

Where Did I End Up?

If NIST Special Publication 800-88r1 is to be believed:

… a single overwrite pass with a fixed pattern such as binary zeros typically hinders recovery of data even if state of the art laboratory techniques are applied to attempt to retrieve the data.

I’m pretty confident there’s no real tangible data left considering I used a few multi-pass wipes then encrypted the drive.

Gotcha’s and Other Considerations

  • In my environment, Internet access isn’t a problem so I use sdelete from the live sysinternals site in both Windows and WinPE (MDT and SCCM).  The plus side is that I don’t have to bother adding it to the boot image, packaging it, downloading it somewhere or storing it in %DeployRoot% or %ScriptRoot%.
  • However, at the moment, sdelete will not work in an amd64 WinPE environment, which is problematic for UEFI environments booting x64 boot media.
  • The XTS-AES encryption algorithm is only supported by BitLocker on Windows 10, but you could fall back on AES256 if you’re on Windows 7 or 8/8.1.
  • If you don’t have the BitLocker cmdlets (I’m looking at you, Windows 7) you’ll have to automate it using manage-bde or do it manually via the Control Panel.

When I think of the old “security is like an onion…” adage, I believe there’s value in taking multiple approaches:

  • Use diskpart to zero out the drive.
  • Use cipher to wipe the drive a few times taking into consideration that each execution does 3 passes.
  • Perform an N pass format.
  • Encrypt the drive with BitLocker using up to a 256 (max) ASCII character long password.
  • Use sdelete from the live sysinternals site to perform an N pass wipe
  • Lock the drive leaving it wiped but encrypted.
  • Destroy the drive yourself with a hammer and a handful of nails in a few strategic locations.

Below is the framework for the ‘fauxlution’ I came up with.
It does include the cipher encryption step but not BitLocker. (See above for that)

[CmdletBinding()]
Param
     (
        [Parameter(Mandatory=$false)]
            [string[]]$Volume,

         [Parameter(Mandatory=$false)]
            [int]$Pass = 7
    )

if(!($Volume)) { [array]$Volume = $(((Read-Host -Prompt "Enter volume to utterly destroy.") -split ',') -split ' ').Split('',[System.StringSplitOptions]::RemoveEmptyEntries) }
if(!($Pass)) { [int]$Pass = Read-Host -Prompt "Enter number of passes." }

Foreach($Vol in $Volume)
    {
        # Normalize input like 'D', 'D:' or 'D:\' down to a bare letter and a 'D:' form
        $VolLetter = $Vol.Substring(0,1)
        $Vol = $VolLetter + ':'
        if($Vol -eq $env:SystemDrive) { Write-Host "ERROR: Skipping volume [$Vol] as it matches SystemDrive [$env:SystemDrive]"; continue }
        If(!(Test-Path $Vol)) { Write-Host "ERROR: Volume [$Vol] does not exist!"; continue }

        <# Diskpart Section #>
        "lis vol`r`nexit" | Out-File $env:Temp\diskpart-lisvol.script -Encoding utf8 -Force
        # 999 doubles as a 'not found' sentinel; assumes a single-digit volume number
        [int32]$VolNumber = 999; [int32]$VolNumber = (diskpart /s "$env:Temp\diskpart-lisvol.script" | ? { $_ -like "* $VolLetter *" }).ToString().Trim().Substring(7,1)
        if($VolNumber -eq 999) { Write-Host "No VolNumber Found For Volume [$Vol]"; break }
        #"sel vol $VolNumber`r`nclean all`r`ncre par pr`r`nsel par 1`r`nassign letter=$VolLetter`r`nexit" | Out-File $env:Temp\diskpart-cleanall.script -Encoding utf8 -Force
        #$dpResult = diskpart /s "$env:Temp\diskpart-cleanall.script"

        <# Cipher Section #>
        #Try { gci $Vol -Recurse | ? { $_.IsReadOnly -eq $true } | Set-ItemProperty -Name IsReadOnly -Value $false } Catch {}
        #cipher /E /H /S:$Vol
        # Each cipher /W execution makes 3 passes, hence Pass/3
        #for($i=0; $i -lt $([System.Math]::Round($Pass/3)); $i++) { cipher /W:$Vol }

        <# Format Section #>
        #format $Vol /FS:NTFS /V:SecureWiped /X /P:$Pass /Y

        <# SDelete Section #>
        if(!(Get-Command "sdelete.exe" -ErrorAction SilentlyContinue))
            {
                Try
                    {
                        $NPSDResult = New-PSDrive -Name LiveSysinternals -PSProvider FileSystem -Root \\live.sysinternals.com\tools -ErrorAction Stop
                        if(!(Test-Path LiveSysinternals:\sdelete.exe -PathType Leaf)) { Write-Host "ERROR: Unable to locate sdelete"; Remove-PSDrive $NPSDResult -Force -ea SilentlyContinue; break }
                        $OrigLocation = (Get-Location).Path
                        Set-Location LiveSysinternals:
                    }
                Catch { Write-Host "ERROR: Unable to connect to live.sysinternals.com"; break }
            }
        #sdelete.exe -accepteula -p $Pass -a -s -c -z $Vol
        if($OrigLocation) { Set-Location $OrigLocation; Remove-PSDrive $NPSDResult -Force -ea SilentlyContinue }
    }
 }
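A hypothetical invocation, assuming you saved the framework as Invoke-DriveWipe.ps1 (the name is mine, not gospel):

# Wipe volumes D: and E: with 7 passes; the script prompts for anything you omit
.\Invoke-DriveWipe.ps1 -Volume 'D:','E:' -Pass 7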

 

Two things about this:

  1. I added a check to make sure you’re not accidentally blowing away your SystemDrive.
  2. All the damaging code is commented out to minimize accidental execution.  Only uncomment it after you’ve debugged the script in your environment.

It may not be worthy of use in corporate environments, or even of replacing whatever your current wiping solution is.
But if you’re not doing anything, please consider starting here!
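And if you do want to bolt the BitLocker step onto the end on Windows 10, a minimal sketch using the built-in cmdlets might look like this (the volume letter is illustrative, and XTS-AES requires Windows 10 as noted in the Gotchas):

# Sketch: password-protect and encrypt the freshly wiped volume
$pw = Read-Host -AsSecureString -Prompt 'Enter a long (up to 256 character) password'
Enable-BitLocker -MountPoint 'D:' -EncryptionMethod XtsAes256 -PasswordProtector -Password $pw
# Once encryption finishes, leave the volume locked
Lock-BitLocker -MountPoint 'D:' -ForceDismount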

 

Good Providence!

SCCM Bloated Driver Import

This is something that’s been heavily talked about; it isn’t new.

Unlike some organizations, law firms are sometimes slower to react to things.  Since time is money (for real!) stability is of the utmost importance.  That said, we’re often slow to do certain things no matter how [seemingly] trivial they may be.

We’re still on CU1 and even though we’ve applied KB3084586, after importing drivers we still noticed quite a bit of bloat.  Sure, some Driver Packages were smaller than the source driver files, others were slightly larger (about 10% larger), a handful were 2-3x as large and a couple were as much as 10x the size.  (The Lenovo P500/P700/P900 drivers are about 2GB in size and after the import the Driver Package was 20GB – yikes!)

I opened a case with Microsoft just to confirm that applying CU2 or upgrading to v1511+ would resolve the issue once and for all, and after they did their due diligence, they confirmed it would.

How do we import drivers then?
How do we apply drivers during OSD?

Well if you don’t already know, there’s a very simple workaround:

  1. Create a Package containing your drivers
  2. Add a Run Command Line step in your OSD TS that uses that package
  3. The command line should be:
    Dism.exe /image:%OSDisk%\ /Add-Driver /Driver:. /recurse
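The /Driver:. works because a Run Command Line step that references a package executes with the package’s content directory as its working directory, and %OSDisk% is the task sequence variable holding the deployed OS volume.  If you want to sanity-check the command outside a task sequence, a hypothetical offline test might look like this (both paths are illustrative):

Dism.exe /Image:D:\ /Add-Driver /Driver:E:\Drivers\Win10\x64 /Recurse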

I won’t bother detailing these steps since Model-Technology did such a fine job documenting them; I think it’s also where I first heard about this method.
After some quick testing we implemented it and it’s been working fine.

We’re getting ready to pilot Windows 10, and we recently got some new equipment that requires adding drivers to the boot media in order for things to work (NIC, and storage on some platforms with RAID controllers).  So I began thinking about how we should proceed, knowing that applying CU2 or upgrading to v1511 would, understandably, require quite a bit of planning and approval before execution.

Jason Sandys wrote a PowerShell script that enables one to import drivers, bypassing whatever bloat bug is present when done via the GUI.  I took a look at Jason’s script and modified it heavily for our needs and environment.  We have some up-and-coming Engineers in our organization who are also learning about PowerShell, so I tried to make it easy enough for them to use when they start helping us import drivers.

I’m not suggesting this is the best method – it’s just what works for us based on how things have been done prior to my arrival.  With Windows 10 on the horizon, we may opt for a simpler directory structure.

It doesn’t matter where you run this from, but it needs access to the Configuration Manager module.

The script accepts a slew of arguments but I leave it up to you to do your own validation.

Jason’s version created really granular categories, like

  • Manufacturer
  • Architecture
  • OS
  • Model
  • etc.

We’re not knocking it at all, but while I appreciated the granularity, we felt it cluttered the category UI, so we settled on a simpler category format:

  • Manufacturer OS Architecture Model

Maybe you have some other format you want to use in addition to the above or a combination of the two.  Whatever the need, the script is flexible enough in its current form to get super granular or stay a little simpler.

KNOWN ISSUES

  • I made a last-minute addition to provide feedback via the console outside of the progress bar, and that’s not fully tested.  In several executions it worked as expected, but one of my testers reported getting a substring error during execution (most likely when the percentage lands on a whole number and has no decimal point to find; the version below floors the value instead).  We confirmed everything imported correctly, making this fall squarely into my ‘purely an eyesore’ category.
  • ALSO: Sometimes an autorun.inf or other invalid .inf is included and the import process throws an error.  This is expected for invalid driver files.  I considered adding an exclude for autorun.inf, but for now it’s just another eyesore; a possible tweak follows below.
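If the autorun.inf noise bothers you, a hedged tweak to the INF-gathering line (you’ll see it inside the script) could filter it out:

# Hypothetical exclusion: skip autorun.inf when building the list of INFs to import
$infFiles = Get-ChildItem -Path $ImportSource -Recurse -Filter "*.inf" | Where-Object { $_.Name -ne 'autorun.inf' }

On to the script: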
[CmdletBinding()]
Param
(
   [Parameter(Mandatory=$false)]
        [string]$Model,

   [Parameter(Mandatory=$false)]
        [ValidateSet("HP","Lenovo","Dell","HGttG")]
        [string]$Vendor,

   [Parameter(Mandatory=$false)]
        [ValidateSet("Win7","Win8","Win10")]
        [string]$OS,

   [Parameter(Mandatory=$false)]
        [ValidateSet("x86","x64")]
        [string]$Architecture,

   [Parameter(Mandatory=$false)]
        [string]$ImportSource,

   [Parameter(Mandatory=$false)]
        [string]$PackageSourceRoot = "\\path\to\Driver Packages",

    [Parameter(Mandatory=$false)]
        [string]$UserCategory,

    [Parameter(Mandatory=$false)]
        [bool]$GranularUserCategories = $false,

    [Parameter(Mandatory=$false)]
        [ValidateSet('yourpss.f.q.d.n','yourpss')]
        [string]$SCCMServer,

    [Parameter(Mandatory=$false)]
        [bool]$GranularModelCategories = $false
)  

If(($Model) -and ($Vendor) -and ($OS) -and ($Architecture) -and ($ImportSource))
    {
        # Create a hash table array from the command line arguments
        $Drivers = @{$Model = $Vendor,$OS,$Architecture,$ImportSource;}
    }
Else
    {
        # Format of the hash table array:
                    # 'MODELA'                     = 'Vendor','OSv1','Arch','path\to\drivers\OSv1\Arch\modela'
                    # 'MODELB MODELC'              = 'Vendor','OSv1','Arch','path\to\drivers\OSv1\Arch\modelb_modelc'
                    # 'MODELD MODELE MODELF'       = 'Vendor','OSv2','Arch','path\to\drivers\OSv2\Arch\modeld_modele_modelf'

        $Drivers = @{#"FakeModel0 FakeModel1"      = 'Dell',  'Win7', 'x64',"\\path\to\Driver Source\Dell\Win7\x64\Fake_Models_0_1";
                     #"FakeModel2 FakeModel3"      = 'HP',    'Win8', 'x86',"\\path\to\Driver Source\HP\Win7\x86\Fake_Models_2_3";
                     #"FakeModel6 FakeModel7"      = 'Lenovo','Win10','x64',"\\path\to\Driver Source\Lenovo\Win10\x64\Fake_Models_6_7";
                     #"FakeModel8 FakeModel9"      = 'Dell',  'Win7', 'x64',"\\path\to\Driver Source\Lenovo\Win7\x86\Fake_Models_8_9";
                     #"Model42"                    = 'HGttG', 'Win7', 'x64',"\\path\to\Driver Source\HGttG\Win7\x64\Fake_Models_8_9";
                     #"FOR MANUAL SCRIPT EDITING"  = 'VENDOR','OS','ARCHITECTURE',"PATH\TO\DRIVERS";
                    }
    }

# Make sure we're not in the SCCM PSDrive otherwise Get-Item/Test-path fails
if($presentLocation) { if((Get-Location).Path -ne $presentLocation) { Set-Location $presentLocation } }
Else { Set-Location "$env:windir\System32" }

If($Drivers.Count -lt 1) { Write-Error "ERROR: NO DRIVERS IN HASH TABLE ARRAY"; Exit 1 }

Foreach($Pair in ($Drivers.GetEnumerator()))
    {
        Write-Host "vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv"

        $Model = $Pair.Name
        $DriverDetails = $Pair.Value -split ','
        $Vendor = $DriverDetails[0]
        $OS = $DriverDetails[1]
        $Architecture = $DriverDetails[2]
        $ImportSource = $DriverDetails[3]
        #$packageName = "$Vendor $Model - $OS $Architecture"
        $packageName = "$Vendor $OS $Architecture $Model"
        $packageSourceLocation = "$PackageSourceRoot\$Vendor\$OS`_$Architecture`_$($Model.Replace(' ','_'))"
        $UserCategory = "$Vendor $OS $Architecture $Model"

        # Verify Driver Source exists.
        Write-Host "Checking Import Source [$ImportSource]"

        If (Get-Item $ImportSource -ErrorAction SilentlyContinue)
            {
                $presentLocation = (Get-Location)

                # Get driver files
                write-host "Building List of Drivers..."
                $infFiles = Get-ChildItem -Path $ImportSource -Recurse -Filter "*.inf"
                write-host "Found [$($infFiles.Count)] INF files"

                # Import ConfigMgr module
                if(!(Get-Module -Name ConfigurationManager)) { Import-Module (Join-Path $Env:SMS_ADMIN_UI_PATH.Substring(0,$Env:SMS_ADMIN_UI_PATH.Length-4) 'ConfigurationManager.psd1') }

                $PSD = Get-PSDrive -PSProvider CMSite

                Set-Location "$($PSD):"

                $driverPackage = Get-CMDriverPackage -Name $packageName

                If ($driverPackage)
                {
                    Write-Warning "Driver Package Directory Already Exists [$packageName]"
                }
                else
                {
                    If (Get-Item FileSystem::$packageSourceLocation -ErrorAction SilentlyContinue)
                    {
                        Write-Error "ERROR: Package Source Location Already exists [$packageSourceLocation] "
                        Set-Location $presentLocation; Exit 1
                    }
                    else
                    {
                        Write-Host "Creating New Driver Package Source Directory [$packageSourceLocation]"
                        New-Item -ItemType Directory FileSystem::$packageSourceLocation | Out-Null
                    }

                    Write-Host "Creating New Empty Driver Package for [$packageName]"
                    $driverPackage = New-CMDriverPackage -Name $packageName -Path $packageSourceLocation
                }

                if($UserCategory)
                    {
                        if($GranularUserCategories -eq $true)
                            {
                                $usrCategory = @()
                                # Create one driver category per space-delimited token in $UserCategory
                                foreach($ucat in $UserCategory -split ' ')
                                    {
                                        $tmpUserCategory = Get-CMCategory -Name $ucat
                                        if(-not $tmpUserCategory) { $usrCategory += New-CMCategory -CategoryType DriverCategories -Name $ucat }
                                        Else { $usrCategory += $tmpUserCategory }
                                    }
                            }
                        Else
                            {
                                $usrCategory = Get-CMCategory -Name $UserCategory
                                if(-not $usrCategory) { $usrCategory = New-CMCategory -CategoryType DriverCategories -Name $UserCategory }
                            }
                    }

                if($GranularModelCategories -eq $true)
                    {
                        $modelCategory = @()
                        foreach($Mdl in $Model -split ' ')
                            {
                                $tmpModelCategory = Get-CMCategory -Name $Mdl
                                if(-not $tmpModelCategory) { $modelCategory += New-CMCategory -CategoryType DriverCategories -Name $Mdl }
                                Else { $modelCategory += $tmpModelCategory }
                            }

                        $architectureCategory = Get-CMCategory -Name $Architecture
                        if(-not $architectureCategory) { $architectureCategory = New-CMCategory -CategoryType DriverCategories -Name $Architecture }

                        $osCategory = Get-CMCategory -Name $OS
                        if(-not $osCategory) { $osCategory = New-CMCategory -CategoryType DriverCategories -Name $OS }

                        $vendorCategory = Get-CMCategory -Name $Vendor
                        if(-not $vendorCategory) { $vendorCategory = New-CMCategory -CategoryType DriverCategories -Name $Vendor }
                    }

                if($UserCategory) { $categories = @((Get-CMCategory -Name $Model), (Get-CMCategory -Name $Architecture), (Get-CMCategory -Name $OS), (Get-CMCategory -Name $Vendor), (Get-CMCategory -Name $UserCategory)) }
                Else { $categories = @((Get-CMCategory -Name $Model), (Get-CMCategory -Name $Architecture), (Get-CMCategory -Name $OS), (Get-CMCategory -Name $Vendor)) }

                $totalInfCount = $infFiles.count
                $driverCounter = 0
                $driversIds = @()
                $driverSourcePaths = @()

                foreach($driverFile in $infFiles)
                    {
                        $Activity = "Importing Drivers for [$Model]"
                        $CurrentOperation = "Importing: $($driverFile.Name)"
                        $Status = "($($driverCounter + 1) of $totalInfCount)"
                        $PercentComplete = ($driverCounter++ / $totalInfCount * 100)
                        Write-Progress -Id 1 -Activity $Activity -CurrentOperation $CurrentOperation -Status $Status -PercentComplete $PercentComplete

                        # Floor the value for display; Substring/IndexOf('.') blows up when the percentage is a whole number
                        if($PercentComplete -gt 0) { $PercentComplete = [System.Math]::Floor($PercentComplete) }
                        write-host "$Status :: $Activity :: $CurrentOperation $PercentComplete%"

                        try
                            {
                                $importedDriver = (Import-CMDriver -UncFileLocation $driverFile.FullName -ImportDuplicateDriverOption AppendCategory -EnableAndAllowInstall $True -AdministrativeCategory $categories | Select *)
                                #$importedDriver = (Import-CMDriver -UncFileLocation $driverFile.FullName -ImportDuplicateDriverOption AppendCategory -EnableAndAllowInstall $True -AdministrativeCategory $categories -DriverPackage $driverPackage -UpdateDistributionPointsforDriverPackage $False | Select *)

                                <#
                                if($importedDriver)
                                    {
                                        Write-Progress -Id 1 -Activity $Activity -CurrentOperation "Adding to [$packageName] driver package [$($driverFile.Name)]:" -Status "($driverCounter of $totalInfCount)" -PercentComplete ($driverCounter / $totalInfCount * 100)
                                        Add-CMDriverToDriverPackage -DriverId $importedDriver.CI_ID -DriverPackageName $packageName
                                        write-host "importedDriver.CI_ID [$($importedDriver.CI_ID)]"
                                        if($SCCMServer) { $driverContent = Get-WmiObject -Namespace "root\SMS\Site_$PSD" -Class SMS_CIToContent -Filter "CI_ID='$($importedDriver.CI_ID)'" -ComputerName $SCCMServer }
                                        else { $driverContent = Get-WmiObject -Namespace "root\sms\site_$PSD" -Class SMS_CIToContent -Filter "CI_ID='$($importedDriver.CI_ID)'" }
                                        write-host "driverContent.ContentID [$($driverContent.ContentID)]"
                                        $driversIds += $driverContent.ContentID
                                        write-host "ContentSourcePath [$($importedDriver.ContentSourcePath)]"
                                        $driverSourcePaths += $importedDriver.ContentSourcePath
                                    }
                                #>
                                Clear-Variable importedDriver -ErrorAction SilentlyContinue
                            }
                        catch
                            {
                                Write-Host "ERROR: Failed to Import Driver for [$Model]: [$($driverFile.FullName)]" -ForegroundColor Red
                                Write-Host "`$_ is: $_" -ForegroundColor Red
                                Write-Host "`$_.exception is: $($_.exception)" -ForegroundColor Red
                                Write-Host "Exception type: $($_.exception.GetType().FullName)" -ForegroundColor Red
                                Write-Host "TargetObject: $($error[0].TargetObject)" -ForegroundColor Red
                                Write-Host "CategoryInfo: $($error[0].CategoryInfo)" -ForegroundColor Red
                                Write-Host "InvocationInfo: $($error[0].InvocationInfo)" -ForegroundColor Red
                                write-host
                            }
                    }

                Write-Progress -Id 1 -Activity $Activity -Completed

                <#
                $rawDriverPackage = Get-WmiObject -Namespace "root\sms\site_$($PSD)" -Class SMS_DriverPackage -Filter "PackageID='$($driverPackage.PackageID)'"
                $result = $rawDriverPackage.AddDriverContent($driversIds, $driverSourcePaths, $false)
                if($result.ReturnValue -ne 0)
                    {
                        Write-Error "Could not import: $($result.ReturnValue)."
                    }
                #>

                Set-Location $presentLocation
            }
        else
            {
                Write-Warning "Cannot Continue: Driver Source not found [$ImportSource]"
            }

        write-host "Finised Processing Drivers for [$Model]"
        Write-Host "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^"
        Write-Host
    }
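For a single model, a hypothetical invocation might look like this (the script name, path and model are illustrative; omit the parameters entirely and it falls back to the hash table you maintain inside the script):

.\Import-CMDrivers.ps1 -Model 'FakeModel0' -Vendor 'Lenovo' -OS 'Win10' -Architecture 'x64' -ImportSource '\\path\to\Driver Source\Lenovo\Win10\x64\FakeModel0'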

Several of my colleagues and I have used this to import loads of drivers for a slew of hardware, and it’s worked fine for us.

Again, many many thanks to Jason for the core of this script.

SCCM Redistributing Failed Content

Back in early 2014, I went through my first SCCM implementation and was excited at the prospect of finally getting exposure to Configuration Manager.  Naturally there was an incredible learning curve as I slowly wrapped my head around the solution as a whole.

We started off slow with a local Primary Site Server, local MP and local DP.  As things started coming together we stood up more DP’s in remote offices.  Over time, I noticed that packages were failing to distribute (more on that later) and it got really tiring to redistribute everything manually.  Being the lazy resourceful person I am, I looked for an automated solution.

Let me start off by saying that I fully believe in giving credit where it’s due.  As such, I will always do my best to cite the author or website the code was found on or inspired by.
The purpose of this blog is to:

  1. help me remember things
  2. help others who are up and coming
  3. help others who may be encountering similar issues
  4. share my limited knowledge and limited experiences
  5. get constructive criticism and feedback from the community

That said, I’m pretty sure I got this from David O’Brien and made some minor modifications to it.  The last time I looked at this code was July of 2014, so I don’t know if David has updated it since.

OK – on with the show.

The code is run-ready with the exception that you have to uncomment the RefreshNow property and Put method.  This is merely a precautionary safety measure.

The code isn’t super perfect or elegant, nor is it set up to trap all errors and provide remediation guidance, etc.  Function over form. 🙂

[CmdletBinding()]
Param
(
    [Parameter(Mandatory=$false)]
        [string]$Server,

    [Parameter(Mandatory=$false)]
        [string]$SiteCode
)

Try
    {

        if((-not $Server) -or (-not $SiteCode))
            {
                if(-not $Env:SMS_ADMIN_UI_PATH) { write-host "ERROR: You need to supply the Server and SiteCode since [`$Env:SMS_ADMIN_UI_PATH] isn't set." -ForegroundColor Red; break }
                if(!(Get-Module -Name ConfigurationManager)) { Import-Module (Join-Path $Env:SMS_ADMIN_UI_PATH.Substring(0,$Env:SMS_ADMIN_UI_PATH.Length-4) 'ConfigurationManager.psd1') -ErrorAction Stop }
                if(-not $Server) { $Server = (Get-PSDrive -PSProvider CMSite -ErrorAction Stop).Root }
                if(-not $SiteCode) { $SiteCode = (Get-PSDrive -PSProvider CMSite -ErrorAction Stop).Name }
            }

        $Failures = $null

        # Find all content with State = 1, State = 2 or State = 3
        # http://msdn.microsoft.com/en-us/library/cc143014.aspx
        $Failures = Get-WmiObject -ComputerName $Server -Namespace root\sms\site_$SiteCode -Query "SELECT * FROM SMS_PackageStatusDistPointsSummarizer WHERE State = 1 or State = 2 Or State = 3" -ErrorAction Stop

        if($Failures -ne $null)
	        {
		        foreach ($Failure in $Failures)
			        {
				        # Get the Distribution Points that the content couldn't distribute to
				        $DistributionPoints = Get-WmiObject -ComputerName $Server -Namespace root\sms\site_$SiteCode -Query "SELECT * FROM SMS_DistributionPoint WHERE SiteCode='$($Failure.SiteCode)' AND  PackageID='$($Failure.PackageID)'" -ErrorAction Stop

				        foreach ($DistributionPoint in $DistributionPoints)
				            {
					            # Confirm we're really looking at the correct Distribution Point
					            if ($DistributionPoint.ServerNALPath -eq $Failure.ServerNALPath)
						            {
                                        write-host "[$($Failure.ServerNALPath.Substring(12,$($Failure.ServerNALPath.IndexOf('\"')-12)))]`t[$($Failure.PackageID)]`t[$($Failure.SecureObjectID)]" -ForegroundColor Yellow

                                        # Use the RefreshNow Property to "Refresh Distribution Point"
                                        # https://msdn.microsoft.com/en-us/library/hh949735.aspx
                                        #$DistributionPoint.RefreshNow = $true
                                        #$DistributionPoint.put()
		    				        }
			    	        }
			        }
	        }
        Remove-Variable Failures -ea SilentlyContinue
    }
Catch
    {
        Write-Host "ERROR: AN ERROR WAS ENCOUNTERED" -ForegroundColor Red
        Write-Host "`$_ is: $_" -ForegroundColor Red
        Write-Host "`$_.exception is: $($_.exception)" -ForegroundColor Red
        Write-Host "Exception type: $($_.exception.GetType().FullName)" -ForegroundColor Red
        Write-Host "TargetObject: $($error[0].TargetObject)" -ForegroundColor Red
        Write-Host "CategoryInfo: $($error[0].CategoryInfo)" -ForegroundColor Red
        Write-Host "InvocationInfo: $($error[0].InvocationInfo)" -ForegroundColor Red
    }
Remove-Variable Server,SiteCode -ea SilentlyContinue
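A hypothetical run, assuming you saved this as Redistribute-FailedContent.ps1 (the name is mine) and that CM01.contoso.com / ABC stand in for your own site server and site code:

# Finds content in states 1-3 and, once RefreshNow/Put are uncommented, flags it for redistribution
.\Redistribute-FailedContent.ps1 -Server 'CM01.contoso.com' -SiteCode 'ABC'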

Let me know how it goes and if it helps you.

Good Providence!