Finding a Local Domain Controller

I created this function ages ago as part of a larger script that needed to be executed completely unattended.

Since I started down the System Engineer / Desktop Engineer / Application Packager path a handful of years ago, I’ve seen some interesting behaviors.  For example, when joining a domain, a machine might reach out to a domain controller overseas versus one at the local site or even in the same continent.  And because things were working, those types of anomalies were never really looked into.

This only ever really came up when there were OU structure changes.  For instance, when going from XP to Windows 7, or after completing an AD remediation project.  As machines were being moved around, we noticed the Domain Controller used varied and the time it took to replicate those changes varied from:

  • instantaneously – suggesting the change was made on a local DC OR a DC with change notification enabled
  • upwards of an hour – suggesting a remote DC (and usually verified by logs or checking manually) or on a DC where change notification was not enabled.

So I decided to write a small function for my scripts that would ensure they were using a local Domain Controller every time.

There are a handful of ways to get a domain controller:

  • the %LOGONSERVER% environment variable
  • dsquery
  • Get-ADDomainController
  • Other third-party utilities like adfind, admod

But either they were not always reliable (as was the case with LOGONSERVER, which would frequently point to a remote DC) or they required supporting files (as is the case with dsquery and Get-ADDomainController).  There’s nothing wrong with creating packages, adding these utilities to %ScriptRoot% or even calling files on a network share.
I simply preferred something self-contained.

Find Domain Controllers via NSLOOKUP

I started exploring by using nslookup to locate the service (SRV) resource records for domain controllers, and I landed on this:

nslookup -type=srv _kerberos._tcp.dc._msdcs.$env:USERDNSDOMAIN

The problem was that it returned every domain controller in the organization.  Fortunately, you can specify an AD site name to narrow it down to that specific AD site:

nslookup -type=srv _kerberos._tcp.$ADSiteName._sites.$env:USERDNSDOMAIN

Bingo!  This gave me exactly what I was hoping to find, and the DCs it returns are in a different order every time, which helps to round-robin things a bit.

From there it was all downhill:

  1. Put AD Sites in the script
  2. Figure out which Site the machine is in by its gateway IP
  3. Use nslookup to query for SRV records in that site
  4. Pull the data accordingly.
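Stitched together, the four steps above look something like this – a minimal sketch using hypothetical site names and gateway IPs:

```powershell
# Step 1: hard-coded site-to-gateway map (hypothetical values)
$htSiteGateways = @{ 'NYC' = '10.1.1.1'; 'LAX' = '10.2.1.1' }

# Step 2: resolve the machine's default gateway IP to an AD Site
$GatewayIP = [string](gwmi Win32_NetworkAdapterConfiguration -Filter 'IPEnabled = True' | ? { $_.DefaultIPGateway } | select -ExpandProperty DefaultIPGateway | select -First 1)
$Site = ($htSiteGateways.GetEnumerator() | ? { $_.Value -eq $GatewayIP }).Name

# Steps 3 & 4: query the SRV records for that site and keep the host name lines
nslookup -type=srv "_kerberos._tcp.$Site._sites.$env:USERDNSDOMAIN" | ? { $_ -like "*hostname*" }
```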

Yes I know – hard coding AD Sites and gateway IPs is probably frowned upon because

What if it changes?

I don’t know about your organization, but in my [limited] experience, AD Sites and gateway IPs typically don’t change often enough to warrant that level of concern, so it didn’t deter me from using this method.  But I do acknowledge that it is something to remember, especially during expansion.

Also, I already had all of our gateway IPs accessible in a bootstrap.ini thanks to our existing MDT infrastructure, which made this approach much simpler.


And here’s where I ended up.  The most useful part – to me anyway – is that it works in Windows (real Windows) at any stage of the imaging process and doesn’t require anything extra to work.  The only gotcha is that it does not work in WinPE because nslookup isn’t built in – and I don’t know why, after all these years, it still isn’t.

Function Get-SiteDomainController

Function Get-SiteDomainController
{
    Param(
        [string]$Site,
        [string]$Domain = $env:USERDNSDOMAIN
    )

    # Keep only the 'svr hostname   = ...' lines; SubString(20) trims off the label
    Try { return (@(nslookup -type=srv _kerberos._tcp.$Site._sites.$Domain | where {$_ -like "*hostname*"}))[0].SubString(20) }
    Catch { return $_ }
}


I’ve been using the above method for determining the local DC for years and it’s been super reliable for me.  It wasn’t until recently that I learned that the query I was using would not always return a domain controller, since it simply locates servers running the Kerberos KDC service for that domain.  Whoops.

Instead, I should have used this query, which will only ever return domain controllers:

nslookup -type=srv _kerberos._tcp.$ADSiteName._sites.dc._msdcs.$env:USERDNSDOMAIN
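If I were writing the function today, the only change would be swapping in that corrected query – a sketch:

```powershell
Function Get-SiteDomainController
{
    Param([string]$Site, [string]$Domain = $env:USERDNSDOMAIN)

    # Same parsing as before, but querying the DC-specific SRV records under dc._msdcs
    Try { return (@(nslookup -type=srv _kerberos._tcp.$Site._sites.dc._msdcs.$Domain | where {$_ -like "*hostname*"}))[0].SubString(20) }
    Catch { return $_ }
}
```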


Honorable Mention: NLTEST

I would have been able to accomplish the same task with nltest via:

nltest /dclist:$env:USERDNSDOMAIN

And then continuing with the gateway IP to AD Site translation to pull out just the entries I need.

nltest /dclist:$env:USERDNSDOMAIN | find /i ": $ADSiteName"
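If you’d rather stay in PowerShell than shell out to find, the same filter – plus pulling out just the DC name – might look like this, assuming nltest’s usual “dcname [flags] Site: sitename” output format:

```powershell
# Keep only the DCs in the target site and grab the first token (the DC name)
$SiteDCs = nltest /dclist:$env:USERDNSDOMAIN |
    Where-Object { $_ -match ": $ADSiteName" } |
    ForEach-Object { ($_.Trim() -split '\s+')[0] }
```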

One interesting thing to note about nltest is that it seems to always return the servers in the same order.  This could be good for consistency’s sake, but it could also be a single point of failure if that first server has a problem.

The nslookup query, on the other hand, returns results in a different order each time it’s executed.  This could be bad, making it difficult to spot an issue with a particular Domain Controller, but it could be good in that one bad DC doesn’t halt whatever process is using the output of that function.


Anyway, since the nslookup script has been in production for some time now and works just fine, I’m not going to change it.  However, any future scripts I create that need similar functionality will likely use the updated nslookup query method.


Good Providence!


Determine AD Site

I created this function ages ago as part of a larger script that needed to be executed completely unattended.

For a variety of reasons, I needed a reliable way to determine which AD Site a particular machine was in.  Over the years I’ve seen some really odd behaviors that – technically – shouldn’t happen.  But as with many organizations (one hopes anyway…) things don’t always go as planned so you need that backup plan.

Get-Site Prerequisites

My script has a few prerequisites – chiefly, a way to get the machine’s default gateway IP – and I personally try to keep everything self-contained where it makes sense to do so.  It’s one of my [many(?)] quirks.

Finding the Current Default Gateway IP

See this post for that.

Get-Site Explained

In our environment, the machine names contain the office location, which makes it easy to figure out where the machine is physically.  However, not every machine is named ‘correctly’ (read: follows this naming convention) as they were either renamed by their owners or are running an image that predates the naming convention.  This is why I’m leveraging a hash table of gateway IPs as a backup.

It seemed tedious at first, but since I already had a list of IPs in the bootstrap.ini, it was a little easier.  Also, Site IPs don’t change often, which means this should be a reliable method for years to come with only the occasional update.

Function Get-Site

Function Get-Site
{
    Param([string]$ComputerName = $env:COMPUTERNAME)

    # First attempt: pull the site code straight from the machine name (e.g.: SOME-TX-THING)
    Switch($ComputerName.Split('-'))
    {
        { 'HQ', 'TX', 'FL' -contains $_ } { $Site = $_ }
    }

    # Fallback: map the default gateway IP to an office
    if(!$Site)
    {
        $htOfficeGateways = @{
            "Headquarters"  = "","";
            "Office1"       = "","";
            "Office2"       = "","";
        }

        # Populate via your gateway function (e.g.: Get-GatewayIP); test values were hard coded here
        $DefaultGatewayIP = ''
        #$DefaultGatewayIP = ''
        #$DefaultGatewayIP = ''

        Foreach($Office in ($htOfficeGateways.GetEnumerator() | Where-Object { $_.Value -eq $DefaultGatewayIP } ))
        {
            Switch($Office.Name)
            {
                "Headquarters" { $Site = 'HQ' }
                "Office1"      { $Site = 'TX' }
                "Office2"      { $Site = 'FL' }
            }
        }
    }
    return $Site
}


I didn’t add a check prior to returning $Site to confirm it was actually set, but that can be handled outside of the function as well.  Otherwise you could do $Site = 'DDOJSIOC' at the top and check for that later, since that should never be a valid site … unless you’re Billy Rosewood.


Get Gateway IP Address

I created this function ages ago as part of a larger script that needed to be executed completely unattended.

I needed a method to determine which office a particular asset was in at the time.  I settled on using the gateway IP address since I already had a list of them from our BootStrap.ini and CustomSettings.ini.

Originally I was using this query:

[array]$arrDefaultGatewayIP = gwmi Win32_NetworkAdapterConfiguration -ErrorAction Stop | select -ExpandProperty DefaultIPGateway

But this proved problematic on machines running older versions of PowerShell, so I ended up using this instead:

[array]$arrDefaultGatewayIP = gwmi Win32_NetworkAdapterConfiguration -ErrorAction Stop | Where { $_.DefaultIPGateway -ne $null } | select -ExpandProperty DefaultIPGateway

Alternatively, I suppose I could have Try’ed the first then executed the latter if PoSH Caught any exceptions.

And converted it to a function

Function Get-GatewayIP

Function Get-GatewayIP
{
    try { return [string](gwmi Win32_NetworkAdapterConfiguration -Filter 'IPEnabled = True' -ErrorAction Stop | ? { $_.DefaultIPGateway -ne $null } | select -ExpandProperty DefaultIPGateway | Select -First 1) }
    catch { return $_ }
}

From there it was just a matter of validating that the return result looked like an IP before feeding it through a hash table to determine the office location.
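For that validation step, a cheap sanity check is letting the .NET IPAddress class try to parse the result – a minimal sketch:

```powershell
$GatewayIP = Get-GatewayIP

# Make sure what came back actually parses as an IP before using it for lookups
$ParsedIP = $null
if([System.Net.IPAddress]::TryParse($GatewayIP, [ref]$ParsedIP)) { write-host "Looks like an IP: $GatewayIP" }
else { write-host "Not an IP - probably an error record: $GatewayIP" }
```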

So Why is this Relevant?

My original plan was to rely on the computer name since, in our environment, machine names contain an abbreviated version of the office name – the office code for lack of a better term.  For example,

  • a machine based out of the New York office might be called SOME-NY-THING
  • another in our Nation’s Capital might be OTHER-DC-THING etc.

However some machines don’t follow this format because:

  1. they’re running an older build imaged prior to the new naming convention
  2. some overzealous/tenacious individuals have renamed their machines

I had also considered querying AD to get the machine’s current OU but since this script didn’t always run under the context of an AD account (e.g.: in WinPE) it meant baking the credentials into the script, and I really don’t like doing that any more than I have to.
(Hint: Webservices is the answer to that problem)

I’ve been using this approach for well over a year now and it’s come in handy countless times.  Hopefully you’ll find some value in this as well.

Good Providence!

Preparing for Windows 10: OU Structure

Our team has been playing around with Windows 10 in our corporate environment for months now, and one thing that was apparent from day one was that something was breaking the Task Bar, rendering the Start Menu & Cortana completely useless.  (i.e.: You click on them and nothing happens.)  We were certain it was a GPO, but we didn’t know which GPO, or more specifically which setting within it, was creating the problem.  We started digging into it a little bit, but it wasn’t long before we realized it was almost a moot point because:

  1. Some GPOs were specific to Windows 7 & wouldn’t apply to Windows 10
  2. Some GPOs were specific to Office 2010 & wouldn’t apply to Office 2016
  3. Others were simply no longer needed like our IEM policy for IE9.
  4. Finally, and perhaps most importantly: we wanted to take this opportunity to start from scratch – re-validate all the GPOs and the settings within, consolidating where possible.

And in order to do all that, we needed to re-evaluate our OU structure.

When it comes to OU structures, depending on the audience, it makes for very “exciting” conversation as some OU structures have a fairly simple layout:

  • Corp Computers
    • Hardware Type

While others look like a rabbit warren:

  • Continent
    • Country
      • Region (or equivalent)
        • State  (or equivalent)
          • City
            • Role/Practice Group/Functional Group
              • Hardware Type
                • Special Configurations
                  • Other
Note: I’m not knocking the latter or suggesting the former is better.  Really just depends on the specific needs of the organization.
No one size fits all; but some sizes may fit many.

When we did an image refresh some time ago – prior to our SCCM implementation – we took a ‘middle of the road’ approach with the main branches being location and hardware type.  Later, during our SCCM implementation last summer, knowing that Windows 10 was on the horizon, we revised the layout, adding the OS branch in order to properly support both environments.

Bulk Creating OU’s

Once you know your layout, for example:

  • CorpSystems
    • Location
      • OS
        • Hardware Type

It’s time to create them all.  I wasn’t about to do this by hand, so I hunted around for a way to recursively create OUs.  The solution comes courtesy of a serverfault question, and I just tweaked it to work for us.

Function Create-NewOU
{
        # http://serverfault.com/questions/624279/how-can-i-create-organizational-units-recursively-on-powershell
        Param(
                [parameter(Position = 0,Mandatory = $true,HelpMessage = "Name of the new OU (e.g.: Win10)")]
                [string]$NewOU,

                [parameter(Position = 1,Mandatory = $false,HelpMessage = "Location of the new OU (e.g.: OU=IT,DC=Contoso,DC=com)")]
                [string]$Path
        )

        # For testing purposes
        #$NewOU = 'Win10'
        #$NewOU = 'OU=SpecOps,OU=IT,OU=Laptops,OU=Win10,OU=Miami,OU=Florida,OU=Eastern,OU=NorthAmerica'
        #$Path = 'DC=it,DC=contoso,DC=com'
        #$Path = (Get-ADRootDSE).defaultNamingContext

        Try
        {
                if($Path) { if($NewOU.Substring(0,3) -eq 'OU=') { $NewOU = $NewOU + ',' + $Path } else { $NewOU = 'OU=' + $NewOU + ',' + $Path } }
                else { if($NewOU.Substring(0,3) -eq 'OU=') { $NewOU = $NewOU + ',' + (Get-ADRootDSE).defaultNamingContext } else { $NewOU = 'OU=' + $NewOU + ',' + (Get-ADRootDSE).defaultNamingContext } }

                # A regex to split the distinguishedname (DN), taking escaped commas into account
                $DNRegex = '(?<![\\]),'

                # We'll need to traverse the path level by level, so figure out the number of possible levels
                $Depth = ($NewOU -split $DNRegex).Count

                # Step through each possible parent OU
                [String[]]$MissingOUs = @()
                for($i = 1;$i -le $Depth;$i++)
                {
                        $NextOU = ($NewOU -split $DNRegex,$i)[-1]
                        if(($NextOU.Substring(0,3) -eq "OU=") -and ([ADSI]::Exists("LDAP://$NextOU") -eq $false)) { [String[]]$MissingOUs += $NextOU }
                }

                # Reverse the order of missing OUs; we want to create the top-most needed level first
                [array]::Reverse($MissingOUs)

                # Now create the missing part of the tree, including the desired OU
                foreach($OU in $MissingOUs)
                {
                        $NewOUName = (($OU -split $DNRegex,2)[0] -split "=")[1]
                        $NewOUPath = ($OU -split $DNRegex,2)[1]

                        write-host "Creating [$NewOUName] in [$NewOUPath]"
                        New-ADOrganizationalUnit -Name $NewOUName -Path $NewOUPath -Verbose
                }
        }
        Catch { return $_ }
        return $true
}
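Assuming the function above is loaded and the ActiveDirectory module is available, usage looks something like this (hypothetical OU path):

```powershell
# Creates any missing levels of the path, top-most first, then the target OU itself
Create-NewOU -NewOU 'OU=Laptops,OU=Win10,OU=Miami,OU=Florida' -Path 'DC=contoso,DC=com'
```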


With that done, it was on to the next bit which was creating and linking GPOs.


Good Providence!

Non Sequitur: Creating Dynamic Variables on the Fly in PowerShell

I’m only mentioning this because of something that came up recently and I thought it was kind of neat.

To keep things simple, I’m going to break this up into a few posts:

  1. Batch
  2. VBScript
  3. PowerShell


In the last 3 years or so I’ve tried to focus exclusively on PowerShell.  I’m no Jeff Snover, Dan Cunningham or any other PoSh guru you might know.  It’s still very new to me but I’m slowly working at it – so bear with me!

If you read my last two posts on this subject, you’ll know that I was all about creating variables dynamically for various reasons. I started each post the same:

I’m only mentioning this because of something that came up recently…

So here’s that something.

I had a repetitive task that involved doing something in a specific order so I challenged myself to create jobs and have the script wait for those jobs to finish before moving on.  I wanted a way to create a unique enough variable name for the jobs to help keep track of what was going on.  This is what really sparked that whole trip down memory lane of creating variables dynamically, on the fly, and I wanted to see if I could do it in PowerShell.

This is what I ended up with:

[array]$Colors = @('Blue','Black','Silver','Yellow','Orange')
Foreach($Color in $Colors)
{
        $varName = '$Paint_' + $Color
        write-host "varName [$varName]"

        $tmpVar = $varName + ' = "Paint this car [' + $Color + ']"'
        write-host "tmpVar [$tmpVar]"

        Invoke-Expression $tmpVar
        Invoke-Expression $varName
}
Remove-Variable varName,tmpVar -Force -ea SilentlyContinue
Get-Variable Paint_*

This allowed me to retrieve the individual jobs via the $Paint_<Color> variables, which was really useful for the task at hand as it had about 6 different moving parts, some of which were dependencies.
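To tie it back to the job scenario: a minimal sketch of what that might look like, using hypothetical $Job_<Color> variable names and a throwaway script block:

```powershell
[array]$Colors = @('Blue','Black','Silver')
Foreach($Color in $Colors)
{
        # Builds e.g. $Job_Blue = Start-Job ... and executes it, so each job gets a unique variable
        Invoke-Expression ('$Job_' + $Color + ' = Start-Job -ScriptBlock { param($c) "Painting the car [$c]" } -ArgumentList ''' + $Color + '''')
}

# Wait for every dynamically named job to finish, then collect the output
Get-Variable Job_* | ForEach-Object { $_.Value } | Wait-Job | Receive-Job
```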

Anyway, maybe you’ll find an application for this in your environment and if you do, I’d really like to hear about it.


Good Providence!

If You’re Paranoid, Remove TeamViewer

So, naturally, this is in response to the recent allegations that TeamViewer has been hacked…

While TeamViewer hasn’t admitted to having been breached, and although what they’ve suggested is completely plausible, one thing is clear: What has been reported thus far doesn’t give me the warm and fuzzy … so I’m going to play it safe for now.

I put together a script to remove TeamViewer from not only my own machines, but also from the machines of friends and family I often support.  I’ve run this on Windows 7+ and so far it works as expected.  If you run into an issue, let me know and I’ll do what I can to troubleshoot asap.


  1. If you’re not using a password manager, are still using easy-to-remember passwords, or are recycling/reusing passwords across multiple sites;
  2. If you’re not using two-factor authentication (2FA);

You really should reconsider.  Check yourself out on https://haveibeenpwned.com/ to see what accounts may have been compromised in a data breach and take the necessary precautions.

This needs to be run from an elevated PowerShell console or ISE.

# Define TeamViewer Installation directory array for use below
$arrTVInstallDirs = @()

# Define TeamViewer Uninstaller EXE's for use below
$arrTVUninstallers = @()

# Get TeamViewer Install Directories for both architectures
$arrTVInstallDirs += gci $env:ProgramFiles *TeamViewer*
if($env:PROCESSOR_ARCHITECTURE -eq 'AMD64') { $arrTVInstallDirs += gci ${env:ProgramFiles(x86)} *TeamViewer* }

# Loop through each 'TeamViewer' directory for EXE's and kill those processes
foreach($TVInstallDir in $arrTVInstallDirs)
{
        write-host "Processing TVInstallDir [$($TVInstallDir.FullName)]"
        Foreach($TVEXE in $(gci -Path $($TVInstallDir.FullName) -Recurse *.exe))
        {
                if($TVEXE.Name -eq 'uninstall.exe') { $arrTVUninstallers += $TVEXE }
                write-host "Killing Process [$($TVEXE.Name)]"
                # Stop-Process matches on the process name without the extension
                Stop-Process -Name $($TVEXE.BaseName) -Force -ErrorAction SilentlyContinue
        }
}

# Stop TeamViewer services
Foreach($TVService in $(Get-WmiObject -Class Win32_Service -Filter "Name like '%TeamViewer%'"))
{
        # Stop Service
        write-host "Stopping Service [$($TVService.Name)]"
        $TVService.StopService() | Out-Null

        # Disable Service
        write-host "Disabling Service [$($TVService.Name)]"
        If($TVService.StartMode -ne 'Disabled') { Set-Service -Name $TVService.Name -StartupType Disabled | Out-Null }

        # Delete Service
        write-host "Deleting Service [$($TVService.Name)]"
        $TVService.Delete() | Out-Null
}

# Loop through the uninstallers
Foreach($TVUninstaller in $arrTVUninstallers)
{
        $PSI = New-Object -TypeName 'System.Diagnostics.ProcessStartInfo' -ErrorAction 'Stop'
        $PSI.Arguments = '/S'
        $PSI.CreateNoWindow = $false
        $PSI.FileName = $TVUninstaller.FullName
        $PSI.UseShellExecute = $false
        $PSI.WindowStyle = 'Normal'
        $PSI.Verb = 'runas'

        $Proc = New-Object -TypeName 'System.Diagnostics.Process' -ErrorAction 'Stop'
        $Proc.StartInfo = $PSI

        write-host "Uninstalling TeamViewer [$($TVUninstaller.FullName)]"
        if($Proc.Start() -eq $true)
        {
                write-host "Uninstall started - waiting for it to finish..."
                Do { $Proc.Refresh(); Start-Sleep -Seconds 3 } while($Proc.HasExited -ne $true)
                if($Proc.ExitCode -eq 0) { write-host "Uninstall completed successfully! [$($Proc.ExitCode)]" -ForegroundColor Green }
                else { write-host "ERROR: Uninstall completed WITH ERRORS [$($Proc.ExitCode)]" -ForegroundColor Red }
        }
        else { write-host "ERROR: Failed to start uninstall [$($TVUninstaller.FullName)]" -ForegroundColor Yellow }
}


Good Providence and be safe!

Lenovo BIOS Manipulation: Getting Pretty Data

I touched on this subject previously as I worked through a strategy to reconfigure our Lenovo machines from Legacy BIOS to UEFI.

Lenovo has different engineering teams for the various hardware they have and they’ve all taken different approaches to how they expose and allow you to manipulate the BIOS via WMI.  I really like what the ThinkCentre team did with the M900:  Not only do they allow you to view and set various BIOS options but they also let you see what the valid values are for said options!  It’s a thing of beauty and a stroke of genius.

I hope one day Lenovo gets the various teams together to take the best of the best and normalize the BIOS across the board.


I wrote a small PowerShell script to query the BIOS and display it in a manner I hope anyone will appreciate.  Although it works on ThinkPads, it really shines when executed from a recent ThinkCenter, like an M900, as it will also display the valid values for said BIOS option.

$tmpBIOSSettings = @()
$BIOSSetting = @()

# Each CurrentSetting string comes back along the lines of 'Setting,CurrentValue;[Optional:Value1,Value2]'
gwmi -class Lenovo_BiosSetting -namespace root\wmi | % { if ($_.CurrentSetting -ne "") { $tmpBIOSSettings += $_.CurrentSetting } }
Foreach($tmpBIOSSetting in $tmpBIOSSettings)
{
        [string]$Setting = $tmpBIOSSetting.SubString(0,$($tmpBIOSSetting.IndexOf(',')))

        if($tmpBIOSSetting.IndexOf(';') -gt 0) { [string]$CurrentValue = $tmpBIOSSetting.SubString($($tmpBIOSSetting.IndexOf(',')+1),$tmpBIOSSetting.IndexOf(';')-$($tmpBIOSSetting.IndexOf(',')+1)) }
        else { [string]$CurrentValue = $tmpBIOSSetting.SubString($($tmpBIOSSetting.IndexOf(',')+1)) }

        if($tmpBIOSSetting.IndexOf(';') -gt 0) { [string]$OptionalValue = $tmpBIOSSetting.SubString($($tmpBIOSSetting.IndexOf(';')+1)) }
        Else { [string]$OptionalValue = 'N/A' }
        [string]$OptionalValue = $OptionalValue.Replace('[','').Replace(']','').Replace('Optional:','').Replace('Excluded from boot order:','')

        $BIOSSetting += [pscustomobject]@{Setting=$Setting;CurrentValue=$CurrentValue;OptionalValue=$OptionalValue;}
        Remove-Variable Setting,CurrentValue,OptionalValue
}
$BIOSSetting = $BIOSSetting | Sort-Object -Property Setting
$BIOSSetting

Reminder: This just displays the data and doesn’t actually set anything.  Stay tuned for a future post on that subject.


Good Providence!

Preparing for Windows 10: Switching to UEFI on Lenovo ThinkPad & ThinkCentre

I think this has been talked about elsewhere but I don’t have the direct link(s) anymore so … sorry if you think I’m stealing thunder.

You know how people say “Oh I hate that” when they don’t really hate it?  Well, I truly abhor the idea of people doing things that could be automated.  I’m not trying to put anyone out of a job here!  But our time is expensive and better suited for more important tasks like putting out the occasional fire, providing excellent customer service and contributing to IT being an agile, proactive entity in the organization.

As we prepare to pilot Windows 10, we need to go from Legacy BIOS to UEFI on our fleet of Lenovo workstations and, to help our teams on the ground make this transition as smooth as possible, I started exploring how to go about doing this.

When I initially looked at Lenovo hardware a handful of years ago now I learned that Lenovo provided some sample VBScripts to help configure the BIOS on various hardware.  I leveraged those scripts to enable TPM on our demo ThinkPads and ThinkCentres and set boot order.  Fortunately it was nothing but a bunch of WMI calls making it easy to manipulate in VBScript.  Now that I’m on the PowerShell boat, it’s even easier.  (That isn’t to say there aren’t challenges because there’s always a challenge!)


In its simplest form,  you can query the BIOS on a Lenovo via:

gwmi -class Lenovo_BiosSetting -namespace root\wmi | % { if ($_.CurrentSetting -ne "") { $_.CurrentSetting } }

And you can set a BIOS setting on a Lenovo via:

(gwmi -class Lenovo_SetBiosSetting -namespace root\wmi).SetBiosSetting("$Setting,$Value")
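One gotcha worth calling out: on these systems, changes made through Lenovo_SetBiosSetting aren’t committed until you also call the Lenovo_SaveBiosSettings method.  A minimal sketch, assuming no BIOS supervisor password is set (if one is, it gets appended to the parameter strings):

```powershell
# Change a setting, then commit it so it survives the reboot
(gwmi -class Lenovo_SetBiosSetting -namespace root\wmi).SetBiosSetting("$Setting,$Value")
(gwmi -class Lenovo_SaveBiosSettings -namespace root\wmi).SaveBiosSettings("")
```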

At the moment, we have several models of machines in different families (ThinkPad, ThinkCentre and ThinkStation) spanning anywhere from 1 to 4 generations.  To further complicate things, each of those families, and the generations within, don’t necessarily have the same BIOS options or BIOS values which makes figuring things out a little tricky.

The good news is that once you figure out what needs to change it’s easy.
The bad news is that you have to figure out what needs to change, and that includes order of operations.

Bare Bones Config

I could be mistaken, but I believe the X240s and T440s and up share similar BIOS options, which means if you get one working, you pretty much have them all working.  Still, once you think you have it sorted, I’d do a quick query to verify the settings and values match up across them all.

You’d be forgiven for thinking that you could enable UEFI on a ThinkPad system via something like:

(gwmi -class Lenovo_SetBiosSetting -namespace root\wmi).SetBiosSetting("Boot Mode,UEFI Only")
(gwmi -class Lenovo_SetBiosSetting -namespace root\wmi).SetBiosSetting("Boot Priority,UEFI First")

Turns out those options are not exposed because, well, that would make sense so of course they’re not there.  Instead you have to enable ‘Secure Boot’ which flips those bits for you:

(gwmi -class Lenovo_SetBiosSetting -namespace root\wmi).SetBiosSetting("SecureBoot,Enable")

Ok, semi smart!  So you mosey on over to your ThinkCentre, like an M900, and try to do the same, but that doesn’t work either.  Why would it – that would be too easy.

Reminds me of one of my favorite scenes in Groundhog Day.

As it turns out the ThinkCentre is the complete opposite of the ThinkPad:
You can set the ‘Boot Priority’ and ‘Boot Mode’ but you cannot set ‘Secure Boot’.

(gwmi -class Lenovo_SetBiosSetting -namespace root\wmi).SetBiosSetting("Boot Mode,UEFI Only")
(gwmi -class Lenovo_SetBiosSetting -namespace root\wmi).SetBiosSetting("Boot Priority,UEFI First")

*Le sigh*

It’s completely nonsensical but that’s what happens when you have siloed engineering teams working on different, but similar, products.

At the moment, I don’t have an answer for enabling Secure Boot on ThinkCentres, but it will likely require using SRWIN or SRDOS, and I believe it may require human intervention whereas the WMI calls do not.  If I find a solution, you’ll be the second to know. 🙂


Good Providence!

Remotely Checking for Local Administrators

I got a call from a very good friend of mine recently who was tasked with finding out which machines had users designated as local admins.  I told him I didn’t have anything in my bag-o-tricks for that but off the top of my head it would require:

  • Querying AD for a list of machines
  • Checking to see if the machine was online
  • Connecting to the asset and checking to see if users were added to the local admins group (e.g.: net localgroup administrators)

Thinking this might prove valuable here, I figured I’d try my hand at it.  I began my AD query targeting a specific OU and while contemplating how best to proceed, it occurred to me: Surely someone has already done this.
And sure enough, the Intarwebs hath provided abundantly.

I decided to go with the net localgroup administrators approach from PowerShell.org, not because it’s better than ADSI or some of the other methods out there, but only because that’s where my head was at at the time.

Please do note that my approach is not nearly as elegant (or complete) as Boe Prox’s solution on the TechNet Script Center, which is likely where I would go if I needed to do this in our environment.

I opted to feed the script an array of OUs versus doing something like:

$PCs = Get-ADComputer -Filter * -Properties Name,DistinguishedName | ? { $_.DistinguishedName -like "*OU=LocalAdmins,*" } | select Name,DistinguishedName

I don’t know that I can support one method over the other but they both work fine.

Here’s where I ended up:

[string[]]$cstm_LocalGroupName = 'Administrators'
$cstm_arrSearchBase = @('OU=OU,OU=Test,OU=Some,DC=domain,DC=tld')
Try
{
        $cstmRslt_arrLocalMembers = @()
        $cstm_Computer = @()
        $cstm_ADRetrieved = $true
        Foreach($cstm_SearchBase in $cstm_arrSearchBase) { $cstm_Computer += Get-ADComputer -SearchBase $cstm_SearchBase -SearchScope Subtree -Filter * -Properties Name,DistinguishedName -ErrorAction Stop | select Name,DistinguishedName }

        Foreach($cstm_Machine in $cstm_Computer)
        {
                # If the list wasn't pulled from AD, wrap the bare computer name in an object
                if($cstm_ADRetrieved -ne $true) { $cstm_Machine = [pscustomobject]@{Name = $cstm_Machine; DistinguishedName = 'LOCAL'} }
                # Test-ComputerOnline is a simple Test-Connection wrapper defined elsewhere
                if(Test-ComputerOnline $cstm_Machine.Name)
                {
                        Try
                        {
                                Foreach($cstm_LocalGroup in $cstm_LocalGroupName) { $cstmRslt_arrLocalMembers += Invoke-Command -ComputerName $cstm_Machine.Name -ScriptBlock {[pscustomobject]@{Computername = $env:COMPUTERNAME;DistinguishedName = $args[1];Group = $args[0];Members = $(net localgroup $args[0] | where { $_ -AND $_ -notmatch "command completed successfully" } | select -skip 4)}} -ArgumentList $cstm_LocalGroup,$($cstm_Machine.DistinguishedName) -HideComputerName -ErrorAction Stop | Select * -ExcludeProperty RunspaceID }
                                write-host "Online`t$($cstm_Machine.Name)`tSUCCESS" -ForegroundColor Green
                        }
                        Catch
                        {
                                $cstmRslt_arrLocalMembers += [pscustomobject]@{Computername = $($cstm_Machine.Name);DistinguishedName = $($cstm_Machine.DistinguishedName);Group = $cstm_LocalGroup;Members = "ERROR: $($_.exception.GetType().FullName)"}
                                write-host "Online`t$($cstm_Machine.Name)`tERROR" -ForegroundColor Red
                        }
                }
                else
                {
                        $cstmRslt_arrLocalMembers += [pscustomobject]@{Computername = $($cstm_Machine.Name);DistinguishedName = "OFFLINE";Group = $($cstm_LocalGroupName[0]);Members = "OFFLINE"}
                        write-host "Offline`t$($cstm_Machine.Name)`tOFFLINE" -ForegroundColor Gray
                }
        }
}
Catch { $_ }

Raw Results

After targeting a couple of OUs and touching a few hundred machines, I got some really unusual results.
Instead of seeing what I expected, I saw some odd entries mixed in:

Computername      : COMPUTER001
DistinguishedName : CN=COMPUTER001,OU=LocalAdmins,OU=Queens,OU=NewYork,DC=domain,DC=tld
Group             : Administrators
Members           : {Administrator, DOMAIN\IT-Engineers, DOMAIN\ServiceAccounts}

Computername      : COMPUTER002
DistinguishedName : CN=COMPUTER002,OU=LocalAdmins,OU=Houston,OU=Texas,DC=domain,DC=tld
Group             : Administrators
Members           : {Administrator, DOMAIN\IT-Engineers, DOMAIN\User002}

IsReadOnly     : False
IsFixedSize    : False
IsSynchronized : False
Keys           : {Group, Computername, DistinguishedName, Members}
Values         : {Administrators, COMPUTER003, CN=COMPUTER003,OU=LocalAdmins,OU=Fresno,OU=California,DC=domain,DC=tld, Administrator DOMAIN\IT-Engineers DOMAIN\User003 DOMAIN\ServiceAccounts}
SyncRoot       : System.Object
Count          : 4

IsReadOnly     : False
IsFixedSize    : False
IsSynchronized : False
Keys           : {Group, Computername, DistinguishedName, Members}
Values         : {Administrators, COMPUTER004, CN=COMPUTER004,OU=LocalAdmins,OU=Fresno,OU=California,DC=domain,DC=tld, Administrator DOMAIN\IT-Engineers DOMAIN\ServiceAccounts}
SyncRoot       : System.Object
Count          : 4

Computername      : COMPUTER005
DistinguishedName : OFFLINE
Group             : Administrators
Members           : OFFLINE

Turns out the machines returning those odd objects were still running PowerShell 2.0!
So the workaround:

# This works for PowerShell 2.0
Foreach($cstm_LocalGroup in $cstm_LocalGroupName)
    {
        $cstmRslt_arrLocalMembers += Invoke-Command -ComputerName $cstm_Machine.Name -ScriptBlock {
                New-Object -TypeName PSObject -Property @{
                    'Computername'      = $env:COMPUTERNAME
                    'DistinguishedName' = $args[1]
                    'Group'             = $args[0]
                    'Members'           = $(net localgroup $args[0] | where { $_ -AND $_ -notmatch "command completed successfully" } | select -skip 4)
                }
            } -ArgumentList $cstm_LocalGroup,$($cstm_Machine.DistinguishedName) -HideComputerName -ErrorAction Stop | Select * -ExcludeProperty RunspaceID
    }

# This works for PowerShell 1.0
Foreach($cstm_LocalGroup in $cstm_LocalGroupName)
    {
        $cstmRslt_arrLocalMembers += Invoke-Command -ComputerName $cstm_Machine.Name -ScriptBlock {
                $tmpPSO = New-Object -TypeName PSObject
                $tmpPSO | Add-Member -MemberType NoteProperty -Name ComputerName -Value $env:COMPUTERNAME
                $tmpPSO | Add-Member -MemberType NoteProperty -Name DistinguishedName -Value $args[1]
                $tmpPSO | Add-Member -MemberType NoteProperty -Name Group -Value $args[0]
                $tmpPSO | Add-Member -MemberType NoteProperty -Name Members -Value $(net localgroup $args[0] | where { $_ -AND $_ -notmatch "command completed successfully" } | select -skip 4)
                $tmpPSO
            } -ArgumentList $cstm_LocalGroup,$($cstm_Machine.DistinguishedName) -HideComputerName -ErrorAction Stop | Select * -ExcludeProperty RunspaceID
    }

Seeing that a chain is only as strong as its weakest link, I’m using the PowerShell 2.0 version above so it works throughout our environment.  Also, to tidy up your PowerShell ISE or shell, you can use the following to clean up the variables:

Get-Variable -Name cstm_* | Remove-Variable | Out-Null
Get-Variable -Name cstmRslt_* | Remove-Variable | Out-Null

I hope my friend finds this useful, if only from a PowerShell learning perspective, but I’m going to steer him toward the other, more complete, solutions found elsewhere.


Good Providence to ya Buddy!

Practical Disk Wiping (NOT for SSDs)


This post specifically speaks to traditional mechanical or spindle-based hard drives.
This post is NOT for solid-state drives (aka SSDs).
See this post for guidance on how to wipe SSDs.

I’m not a data recovery or security expert, so feel free to take this with a grain of salt.


Over the years I’ve collected quite a few mechanical/spindle drives at home.  Most of them were inherited from other systems I acquired over time, some gifts and some purchased by myself.  As needs grew and technology evolved, I upgraded to larger drives here and there, made the move to SSDs on some systems and kept the old drives in a box for future rainy day projects.  Well there haven’t been too many rainy days and I’m feeling it’s time to purge these drives.

I’ve explored some professional drive destruction services, but honestly I don’t trust some of the more affordable ones, and the ones I do trust are almost prohibitively expensive.  And by that I mean: While I don’t want someone perusing through my collection of family photos, music and what little intellectual property I may like to think I have, I also don’t know what dollar amount I’m willing to pay to keep said privacy.

I’ve always been a big fan of dd and dban, the latter of which has been my recommended go-to disk wiping solution, and was even the solution employed by one of my past employers.  But being that I live in a Windows world driven largely by MDT and SCCM,  I wanted something I could easily integrate with that environment, leveraging “ubiquitous utilities” and minimizing reliance on third-party software.


Leverage some built-in and easily accessible Windows utilities to secure and wipe your mechanical disks.


Diskpart has the ability to ‘zero out’ every sector on the disk.  It only does a single pass so it’s not ideal, but it’s at least a start.

@"
rem clean all operates on the disk with focus, so select the disk rather than a volume
sel disk $DiskNumber
clean all
cre par pri
sel par 1
assign letter=$VolumeLetter
exit
"@ | Out-File $env:Temp\diskpart.script

diskpart /s $env:Temp\diskpart.script



Cipher is something of a multi-tool because you can not only use it to encrypt directories and files, but also perform a 3 pass wipe – 0x00 on the first, 0xFF on the second & random on the third – that will remove data from available unused disk space on the entire volume.

Unfortunately, cipher doesn’t encrypt read-only files, which means you’d have to run something ahead of time to remove that attribute before encrypting.  You should also use the /B switch to abort on errors; otherwise cipher will just continue.

# remove read-only properties
Try{gci $Volume -Recurse | ? { $_.IsReadOnly -eq $true } | Set-ItemProperty -Name IsReadOnly -Value $false }Catch {}

# encrypt
cipher /E /H /S:$Volume

# 3-pass wipe
cipher /W:$Volume


BitLocker won’t wipe your disk, but it will allow you to securely encrypt whatever data may be on there.

Enable-BitLocker -MountPoint $Volume -EncryptionMethod XtsAes256 -Password $(ConvertTo-SecureString "123 enter 456 something 789 random 0 here!@*@#&" -AsPlainText -Force) -PasswordProtector
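Rather than hard-coding the password like the example above, you can generate the random string on the fly.  A minimal sketch (the variable names are my own, and $Volume is assumed to already be set, e.g. 'E:'):

```powershell
# Build a random 128-character password from the printable ASCII range (33-126),
# then hand it to BitLocker as a SecureString.
$charPool = [char[]](33..126)
$plainPassword = -join (1..128 | ForEach-Object { Get-Random -InputObject $charPool })
$securePassword = ConvertTo-SecureString $plainPassword -AsPlainText -Force

Enable-BitLocker -MountPoint $Volume -EncryptionMethod XtsAes256 -Password $securePassword -PasswordProtector
```

If you ever intend to unlock the drive again, stash $plainPassword somewhere safe first; otherwise losing it is rather the point.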


Format allows you to zero out every sector on the volume and also overwrite each sector with a different random number on each consecutive pass.

format $Volume /FS:NTFS /V:SecureWiped /X /P:$Pass /Y


Dating back to the Windows XP and Server 2003 days, this tried-and-true utility’s sole purpose is to securely erase data – both file data and unallocated portions of a disk – and it offers an appreciable number of options:

  1. Remove Read-Only attribute
  2. Clean free space
  3. Perform N number of overwrite passes
  4. Recurse subdirectories
  5. Zero free space

In fact, sdelete implements the Department of Defense clearing and sanitizing standard DOD 5220.22-M.  Boom goes the dynamite.

sdelete.exe -accepteula -p $Pass -a -s -c -z $Volume

The Plan

I do most of my wiping from a Windows 10 machine, versus WinPE, so diskpart, cipher, bitlocker and format are all available to me.  Sdelete, however, does require downloading the utility or at least accessing it from the live sysinternals site.

Knowing this, I follow this wiping strategy that I feel is ‘good enough’ for me.

Please Note: If you research ‘good enough security’, you’ll see that in reality it’s really not good enough so please don’t practice that.

1 – Run diskpart’s clean all on the Drives

I mostly do this just to wipe the partitions and zero everything out.

2 – Encrypt the Drives with BitLocker

If anything’s left, it’s encrypted using the XTS-AES encryption algorithm with a randomly generated 128 ASCII character long password.  (256 is the max by the way…)

3 – Wipe the Drives

When it comes to wiping drives, one pass is arguably sufficient, but I typically cite the well known & ever popular DoD 5220.22-M standard.  But what exactly is this alleged well-known standard?

The ODAA Process Manual v3.2 published in November of 2013 contains a copy of the ‘DSS Clearing and Sanitization Matrix‘ on page 116, which outlines how to handle various types of media for both cleaning and sanitization scenarios:


For clearing:

  • (a) Degauss with a Type I, II, or III degausser.
  • (c) Overwrite all addressable locations with a single character utilizing an approved overwrite utility.


For sanitizing:

  • (b) Degauss with a Type I, II, or III degausser.
  • (d) Overwrite with a pattern, and then its complement, and finally with another unclassified pattern (e.g., “00110101” followed by “11001010” and then followed by “10010111” [considered three cycles]).  Sanitization is not complete until three cycles are successfully completed.
  • (l) Destruction
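For the curious, the “complement” in (d) is just every bit flipped, which you can sanity-check in a couple of lines (a throwaway sketch, not part of any wipe):

```powershell
# The matrix's example pattern and its complement, shown in binary
$pattern    = [Convert]::ToByte('00110101', 2)        # 0x35
$complement = $pattern -bxor 0xFF                     # flip every bit
[Convert]::ToString($complement, 2).PadLeft(8, '0')   # -> 11001010
```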

I’m guessing the average person doesn’t have a degausser, and since I left mine in Cheboygan, I have to consider other options.  If you’re planning on donating, selling or otherwise repurposing this hardware – as I am – physical destruction of the drive isn’t an option.  This leaves me with performing a combination of ‘c’ and ‘d’ using a utility of my choosing as the document doesn’t specify what an “approved overwrite utility” is.

Because of sdelete’s reputation, it’s the most desirable utility from the get-go.  But I’m of the opinion that between cipher and format, you have another multi-pass wipe solution at your disposal.  Also, I default to a 3 pass wipe since 7, 35, 42 (or more) passes really are not necessary.
But if you’re paranoid, the sky’s the limit pass-wise, and you should consider destroying your drives.

4 – Lock the Drive

The drive has been encrypted and wiped so before I move forward with the validation/verification phase, I take the key out of the ignition by locking the drives:

Lock-BitLocker -MountPoint $Volume

5 – Verification

I don’t have access to a state-of-the-art data recovery facility, so I do my best with a handful of utilities that recover both files and partitions.  Just search for data recovery, partition recovery, undelete software and that’s pretty much what I used.

For this post I downloaded 16 data recovery utilities to see what could be recovered:

  • The large majority of them couldn’t even access the drives and failed immediately.
  • Some fared better and were able to perform block-level scans for data and found nothing.
  • A few applications allegedly found multiple files types ranging from 200MB to 4GB.
    • For example, two different apps claimed to have found several 1-2GB SWF’s, a few 1GB TIFF’s and some other media formats as well.
    • I know for a fact I didn’t have any SWF’s and if I did have a one or two TIFF’s, they were nowhere near 1GB.
    • I’m guessing the applications are using some sort of algorithm to determine file type in an effort to piece things together.
  • I restored a few of those files but I wasn’t able to reconstruct them into anything valuable.  Please note that I’m not known for being a hex-editing sleuth.

Where Did I End Up?

If NIST Special Publication 800-88r1 is to be believed:

… a single overwrite pass with a fixed pattern such as binary zeros typically hinders recovery of data even if state of the art laboratory techniques are applied to attempt to retrieve the data.

I’m pretty confident there’s no real tangible data left considering I used a few multi-pass wipes then encrypted the drive.

Gotcha’s and Other Considerations

  • In my environment, Internet access isn’t a problem so I use sdelete from the live sysinternals site in both Windows and WinPE (MDT and SCCM).  The plus side is that I don’t have to bother adding it to the boot image, packaging it, downloading it somewhere or storing it in %DeployRoot% or %ScriptRoot%.
  • However, at the moment, sdelete will not work in an amd64 WinPE environment which is problematic for UEFI environments booting x64 boot media.
  • The XTS-AES encryption algorithm is only supported by BitLocker on Windows 10 but you could fall back on AES256 if you’re on Windows 7 or 8/8.1.
  • If you don’t have BitLocker cmdlets (I’m looking at you Windows 7) you’ll have to automate it using manage-bde or do it manually via the Control Panel.
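For the Windows 7 case, the manage-bde rough equivalent of the Enable-BitLocker and Lock-BitLocker calls used earlier looks something like this (syntax from memory, so verify against manage-bde -? before relying on it):

```powershell
# Encrypt the volume with a password protector; -pw prompts for the password.
# (On Windows 7 the cipher strength comes from Group Policy, since XTS-AES isn't available there.)
manage-bde -on $Volume -pw

# The rough equivalent of Lock-BitLocker once encryption finishes:
manage-bde -lock $Volume
```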

When I think of the old “security is like an onion…” adage, I believe there’s value in taking multiple approaches:

  • Use diskpart to zero out the drive.
  • Use cipher to wipe the drive a few times taking into consideration that each execution does 3 passes.
  • Perform an N pass format.
  • Encrypt the drive with BitLocker using up to a 256 (max) ASCII character long password.
  • Use sdelete from the live sysinternals site to perform an N pass wipe
  • Lock the drive leaving it wiped but encrypted.
  • Destroy the drive yourself with a hammer and a handful of nails in a few strategic locations.

Below is the framework for the ‘fauxlution’ I came up with.
It does include the cipher encryption step but not BitLocker. (See above for that)


param(
        [array]$Volume,
        [int]$Pass = 7
    )

if(!($Volume)) { [array]$Volume = $(((Read-Host -Prompt "Enter volume to utterly destroy.") -split ',') -split ' ').Split('',[System.StringSplitOptions]::RemoveEmptyEntries) }
if(!($Pass)) { [int]$Pass = Read-Host -Prompt "Enter number of passes." }

Foreach($Vol in $Volume)
    {
        $Vol = $Vol.Substring(0,1) + ':'
        if($Vol -eq $env:SystemDrive) { write-host "ERROR: Skipping volume [$Vol] as it matches SystemDrive [$env:SystemDrive]"; continue }
        If(!(Test-Path $Vol)) { Write-Host "Error: Volume [$Vol] does not exist!"; continue }

        <# Diskpart Section #>
        "lis vol`r`nexit" | Out-File $env:Temp\diskpart-lisvol.script -Encoding utf8 -Force
        [int32]$VolNumber = 999; [int32]$VolNumber = (diskpart /s "$env:Temp\diskpart-lisvol.script" | ? { $_ -like "* $Vol *" }).ToString().Trim().Substring(7,1)
        if($VolNumber -eq 999) { Write-host "No VolNumber Found For Volume [$Vol]"; break }
        #"sel vol $VolNumber`r`nclean all`r`ncre par pri`r`nsel par 1`r`nassign letter=$Vol`r`nexit" | Out-File $env:Temp\diskpart-cleanall.script -Encoding utf8 -Force
        #$dpResult = diskpart /s "$env:Temp\diskpart-cleanall.script"

        <# Cipher Section #>
        #Try { gci $Vol -Recurse | ? { $_.IsReadOnly -eq $true } | Set-ItemProperty -Name IsReadOnly -Value $false } Catch {}
        #cipher /E /H /S:$Vol
        #for($i=0; $i -lt $([System.Math]::Round($Pass/3)); $i++) { cipher /W:$Vol }

        <# Format Section #>
        #format $Vol /FS:NTFS /V:SecureWiped /X /P:$Pass /Y

        <# SDelete Section #>
        if(!(Get-Command "sdelete.exe" -ErrorAction SilentlyContinue))
            {
                Try
                    {
                        $NPSDResult = New-PSDrive -Name LiveSysinternals -PSProvider FileSystem -Root \\live.sysinternals.com\tools -ErrorAction Stop
                        if(!(Test-Path LiveSysinternals:\sdelete.exe -PathType Leaf)) { Write-Host "ERROR: Unable to locate sdelete"; Remove-PSDrive $NPSDResult -Force -ea SilentlyContinue; break }
                        $OrigLocation = (Get-Location).Path
                        Set-Location LiveSysinternals:
                    }
                Catch { Write-Host "ERROR: Unable to connect to live.sysinternals.com"; break }
            }
        #sdelete.exe -accepteula -p $Pass -a -s -c -z $Vol
        if($OrigLocation) { Set-Location $OrigLocation; Remove-PSDrive $NPSDResult -Force -ea SilentlyContinue }
    }


Two things about this:

  1. I added a check to make sure you’re not accidentally blowing away your SystemDrive.
  2. All the damaging code is commented out to minimize accidental execution.  Only uncomment after you’ve debugged the script in your environment.

It may not be worthy for use in corporate environments or to even replace whatever your current wiping solution is.
But if you’re not doing anything, please consider starting here!


Good Providence!

SCCM Bloated Driver Import

This is something that’s been heavily talked about; it isn’t new.

Unlike some organizations, law firms are sometimes slower to react to things.  Since time is money (for real!) stability is of the utmost importance.  That said, we’re often slow to do certain things no matter how [seemingly] trivial it may be.

We’re still on CU1 and even though we’ve applied KB3084586, we still noticed quite a bit of bloat after importing drivers.  Sure, some Driver Packages were smaller than the source driver files, others were slightly larger (about 10% larger), a handful were 2-3x as large and a couple were as much as 10x the size.  (The Lenovo P500/P700/P900 drivers are about 2GB in size and after the import the Driver Package was 20GB – yikes!)

I opened a case with Microsoft just to confirm that applying CU2 or upgrading to v1511+ would resolve the issue once and for all and after they did their due diligence, they confirmed this.

How do we import drivers then?
How do we apply drivers during OSD?

Well if you don’t already know, there’s a very simple workaround:

  1. Create a Package containing your drivers
  2. Add a Run Command step in your OSD TS that uses that package
  3. The command line should be:
    Dism.exe /image:%OSDisk%\ /Add-Driver /Driver:. /recurse

I won’t bother detailing these steps since Model-Technology did such a fine job documenting it; I think it’s also where I first heard about this method.
After some quick testing we implemented it and it’s been working fine.

We’re getting ready to pilot Windows 10 and we got some new equipment recently that requires adding some drivers to the boot media in order for things to work (NIC, storage on some platforms with RAID controllers) so I began thinking about how we should proceed knowing that applying CU2 or upgrading to v1511 would, understandably, require quite a bit of planning and approval before execution.

Jason Sandys wrote a PowerShell script that enables one to import drivers, bypassing whatever bloat bug is present when done via the GUI.  I took a look at Jason’s script and modified it heavily for our needs and environment.  We have some up-and-coming Engineers in our organization who are also learning about PowerShell, so I tried to make it easy enough for them to use when they start helping us import drivers.

I’m not suggesting this is the best method – it’s just what works for us based on how things have been done prior to my arrival.  With Windows 10 on the horizon, we may opt for a simpler directory structure.

Doesn’t matter where you run this from but it needs access to the Configuration Manager module.
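Since the script needs the ConfigMgr cmdlets, a quick pre-flight check (using the same SMS_ADMIN_UI_PATH trick the script itself uses) saves some head-scratching:

```powershell
# Load the ConfigurationManager module from the admin console install path
# and confirm a CMSite drive exists before kicking off any imports.
if (!(Get-Module -Name ConfigurationManager)) {
    Import-Module (Join-Path $Env:SMS_ADMIN_UI_PATH.Substring(0, $Env:SMS_ADMIN_UI_PATH.Length - 4) 'ConfigurationManager.psd1')
}
Get-PSDrive -PSProvider CMSite   # should return your site code drive
```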

The script accepts a slew of arguments but I leave it up to you to do your own validation.

Jason’s version created really granular categories, like

  • Manufacturer
  • Architecture
  • OS
  • Model
  • etc.

We’re not knocking it at all, but while I appreciated the granularity, we felt it cluttered the category UI, so we settled on a simpler category format:

  • Manufacturer OS Architecture Model

Maybe you have some other format you want to use in addition to the above or a combination of the two.  Whatever the need, the script is pretty flexible in its current form: get super granular or keep it a little simpler.


  • I made a last minute addition to provide feedback via console outside of the progress bar and that’s not fully tested.  In several executions it worked as expected, but one of my testers reported getting a substring error during execution.  We confirmed everything imported correctly making this fall squarely in my ‘purely an eye-sore’ category.
    I doubt I’ll ever come back & fix this but if I do, I’ll update accordingly.
  • ALSO: Sometimes an autorun.inf or other invalid .inf is included and the import process throws an error.  This is expected for invalid driver files.  I considered adding an exclude for autorun.inf but for now it’s just another eye-sore.





param(
        [string]$Model,
        [string]$Vendor,
        [string]$OS,
        [string]$Architecture,
        [string]$ImportSource,
        [string]$PackageSourceRoot = "\\path\to\Driver Packages",
        [bool]$GranularUserCategories = $false,
        [bool]$GranularModelCategories = $false
    )

If(($Model) -and ($Vendor) -and ($OS) -and ($Architecture) -and ($ImportSource))
    {
        # Create a hash table array from the command line arguments
        $Drivers = @{$Model = $Vendor,$OS,$Architecture,$ImportSource}
    }
Else
    {
        # Format of the hash table array:
        # 'MODELA'               = 'Vendor','OSv1','Arch','path\to\drivers\OSv1\Arch\modela'
        # 'MODELB MODELC'        = 'Vendor','OSv1','Arch','path\to\drivers\OSv1\Arch\modelb_modelc'
        # 'MODELD MODELE MODELF' = 'Vendor','OSv2','Arch','path\to\drivers\OSv2\Arch\modeld_modele_modelf'
        $Drivers = @{#"FakeModel0 FakeModel1" = 'Dell',  'Win7', 'x64',"\\path\to\Driver Source\Dell\Win7\x64\Fake_Models_0_1";
                     #"FakeModel2 FakeModel3" = 'HP',    'Win8', 'x86',"\\path\to\Driver Source\HP\Win7\x86\Fake_Models_2_3";
                     #"FakeModel6 FakeModel7" = 'Lenovo','Win10','x64',"\\path\to\Driver Source\Lenovo\Win10\x64\Fake_Models_6_7";
                     #"FakeModel8 FakeModel9" = 'Dell',  'Win7', 'x64',"\\path\to\Driver Source\Lenovo\Win7\x86\Fake_Models_8_9";
                     #"Model42"               = 'HGttG', 'Win7', 'x64',"\\path\to\Driver Source\HGttG\Win7\x64\Fake_Models_8_9"
                    }
    }

# Make sure we're not in the SCCM PSDrive otherwise Get-Item/Test-Path fails
if($presentLocation) { if((Get-Location).Path -ne $presentLocation) { Set-Location $presentLocation } }
Else { Set-Location "$env:windir\System32" }

If($Drivers.Count -lt 1) { write-error "ERROR: NO DRIVERS IN HASH TABLE ARRAY"; Exit 1 }

Foreach($Pair in ($Drivers.GetEnumerator()))
    {
        Write-Host "vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv"

        $Model = $Pair.Name
        $DriverDetails = $Pair.Value -split ','
        $Vendor = $DriverDetails[0]
        $OS = $DriverDetails[1]
        $Architecture = $DriverDetails[2]
        $ImportSource = $DriverDetails[3]
        #$packageName = "$Vendor $Model - $OS $Architecture"
        $packageName = "$Vendor $OS $Architecture $Model"
        $packageSourceLocation = "$PackageSourceRoot\$Vendor\$OS`_$Architecture`_$($Model.Replace(' ','_'))"
        $UserCategory = "$Vendor $OS $Architecture $Model"

        # Verify Driver Source exists.
        Write-Host "Checking Import Source [$ImportSource]"

        If (Get-Item $ImportSource -ErrorAction SilentlyContinue)
            {
                $presentLocation = (Get-Location)

                # Get driver files
                write-host "Building List of Drivers..."
                $infFiles = Get-ChildItem -Path $ImportSource -Recurse -Filter "*.inf"
                write-host "Found [$($infFiles.Count)] INF files"

                # Import ConfigMgr module
                if(!(Get-Module -Name ConfigurationManager)) { Import-Module (Join-Path $Env:SMS_ADMIN_UI_PATH.Substring(0,$Env:SMS_ADMIN_UI_PATH.Length-4) 'ConfigurationManager.psd1') }

                $PSD = Get-PSDrive -PSProvider CMSite

                Set-Location "$($PSD):"

                $driverPackage = Get-CMDriverPackage -Name $packageName

                If ($driverPackage)
                    {
                        Write-Warning "Driver Package Directory Already Exists [$packageName]"
                    }
                Else
                    {
                        If (Get-Item FileSystem::$packageSourceLocation -ErrorAction SilentlyContinue)
                            {
                                Write-Error "ERROR: Package Source Location Already exists [$packageSourceLocation]"
                                Set-Location $presentLocation; Exit 1
                            }
                        Else
                            {
                                Write-Host "Creating New Driver Package Source Directory [$packageSourceLocation]"
                                New-Item -ItemType Directory FileSystem::$packageSourceLocation | Out-Null
                            }

                        Write-Host "Creating New Empty Driver Package for [$packageName]"
                        $driverPackage = New-CMDriverPackage -Name $packageName -Path $packageSourceLocation
                    }

                if($GranularUserCategories -eq $true)
                    {
                        $usrCategory = @()
                        foreach($usrCat in $UserCategory -split ' ')
                            {
                                $tmpUserCategory = Get-CMCategory -Name $usrCat
                                if(-not $tmpUserCategory) { $usrCategory += New-CMCategory -CategoryType DriverCategories -Name $usrCat }
                                Else { $usrCategory += $tmpUserCategory }
                            }
                    }
                Else
                    {
                        $usrCategory = Get-CMCategory -Name $UserCategory
                        if(-not $usrCategory) { $usrCategory = New-CMCategory -CategoryType DriverCategories -Name $UserCategory }
                    }

                if($GranularModelCategories -eq $true)
                    {
                        $modelCategory = @()
                        foreach($Mdl in $Model -split ' ')
                            {
                                $tmpModelCategory = Get-CMCategory -Name $Mdl
                                if(-not $tmpModelCategory) { $modelCategory += New-CMCategory -CategoryType DriverCategories -Name $Mdl }
                                Else { $modelCategory += $tmpModelCategory }
                            }
                    }

                $architectureCategory = Get-CMCategory -Name $Architecture
                if(-not $architectureCategory) { $architectureCategory = New-CMCategory -CategoryType DriverCategories -Name $Architecture }

                $osCategory = Get-CMCategory -Name $OS
                if(-not $osCategory) { $osCategory = New-CMCategory -CategoryType DriverCategories -Name $OS }

                $vendorCategory = Get-CMCategory -Name $Vendor
                if(-not $vendorCategory) { $vendorCategory = New-CMCategory -CategoryType DriverCategories -Name $Vendor }

                if($UserCategory) { $categories = @((Get-CMCategory -Name $Model), (Get-CMCategory -Name $Architecture), (Get-CMCategory -Name $OS), (Get-CMCategory -Name $Vendor), (Get-CMCategory -Name $UserCategory)) }
                Else { $categories = @((Get-CMCategory -Name $Model), (Get-CMCategory -Name $Architecture), (Get-CMCategory -Name $OS), (Get-CMCategory -Name $Vendor)) }

                $totalInfCount = $infFiles.count
                $driverCounter = 0
                $driversIds = @()
                $driverSourcePaths = @()

                foreach($driverFile in $infFiles)
                    {
                        $Activity = "Importing Drivers for [$Model]"
                        $CurrentOperation = "Importing: $($driverFile.Name)"
                        $Status = "($($driverCounter + 1) of $totalInfCount)"
                        $PercentComplete = ($driverCounter++ / $totalInfCount * 100)
                        Write-Progress -Id 1 -Activity $Activity -CurrentOperation $CurrentOperation -Status $Status -PercentComplete $PercentComplete

                        if($PercentComplete -gt 0) { $PercentComplete = $PercentComplete.ToString().Substring(0,$PercentComplete.ToString().IndexOf(".")) }
                        write-host "$Status :: $Activity :: $CurrentOperation $PercentComplete%"

                        Try
                            {
                                $importedDriver = (Import-CMDriver -UncFileLocation $driverFile.FullName -ImportDuplicateDriverOption AppendCategory -EnableAndAllowInstall $True -AdministrativeCategory $categories | Select *)
                                #$importedDriver = (Import-CMDriver -UncFileLocation $driverFile.FullName -ImportDuplicateDriverOption AppendCategory -EnableAndAllowInstall $True -AdministrativeCategory $categories -DriverPackage $driverPackage -UpdateDistributionPointsforDriverPackage $False | Select *)

                                <#
                                if($importedDriver)
                                    {
                                        Write-Progress -Id 1 -Activity $Activity -CurrentOperation "Adding to [$packageName] driver package [$($driverFile.Name)]:" -Status "($driverCounter of $totalInfCount)" -PercentComplete ($driverCounter / $totalInfCount * 100)
                                        Add-CMDriverToDriverPackage -DriverId $importedDriver.CI_ID -DriverPackageName $packageName
                                        write-host "importedDriver.CI_ID [$($importedDriver.CI_ID)]"
                                        if($SCCMServer) { $driverContent = Get-WmiObject -Namespace "root\SMS\Site_$PSD" -Class SMS_CIToContent -Filter "CI_ID='$($importedDriver.CI_ID)'" -ComputerName $SCCMServer }
                                        else { $driverContent = Get-WmiObject -Namespace "root\sms\site_$PSD" -Class SMS_CIToContent -Filter "CI_ID='$($importedDriver.CI_ID)'" }
                                        write-host "driverContent.ContentID [$($driverContent.ContentID)]"
                                        $driversIds += $driverContent.ContentID
                                        write-host "ContentSourcePath [$($importedDriver.ContentSourcePath)]"
                                        $driverSourcePaths += $importedDriver.ContentSourcePath
                                    }
                                #>
                            }
                        Catch
                            {
                                Clear-Variable importedDriver -ErrorAction SilentlyContinue
                                Write-Host "ERROR: Failed to Import Driver for [$Model]: [$($driverFile.FullName)]" -ForegroundColor Red
                                Write-Host "`$_ is: $_" -ForegroundColor Red
                                Write-Host "`$_.exception is: $($_.exception)" -ForegroundColor Red
                                Write-Host "Exception type: $($_.exception.GetType().FullName)" -ForegroundColor Red
                                Write-Host "TargetObject: $($error[0].TargetObject)" -ForegroundColor Red
                                Write-Host "CategoryInfo: $($error[0].CategoryInfo)" -ForegroundColor Red
                                Write-Host "InvocationInfo: $($error[0].InvocationInfo)" -ForegroundColor Red
                            }
                    }

                Write-Progress -Id 1 -Activity $Activity -Completed

                <#
                $rawDriverPackage = Get-WmiObject -Namespace "root\sms\site_$($PSD)" -Class SMS_DriverPackage -Filter "PackageID='$($driverPackage.PackageID)'"
                $result = $rawDriverPackage.AddDriverContent($driversIds, $driverSourcePaths, $false)
                if($result.ReturnValue -ne 0)
                    {
                        Write-Error "Could not import: $($result.ReturnValue)."
                    }
                #>

                Set-Location $presentLocation
            }
        Else
            {
                Write-Warning "Cannot Continue: Driver Source not found [$ImportSource]"
            }

        write-host "Finished Processing Drivers for [$Model]"
        Write-Host "^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^"
    }

Several of my colleagues and I have used this to import loads of drivers for a slew of hardware, and it’s worked fine for us.

Again, many many thanks to Jason for the core of this script.

SCCM Redistributing Failed Content

Back in early 2014, I went through my first SCCM implementation and was excited at the prospect of finally getting exposure to Configuration Manager.  Naturally there was an incredible learning curve as I slowly wrapped my head around the solution as a whole.

We started off slow with a local Primary Site Server, local MP and local DP.  As things started coming together we stood up more DP’s in remote offices.  Over time, I noticed that packages were failing to distribute (more on that later) and it got really tiring to redistribute everything manually.  Being the lazy resourceful person I am, I looked for an automated solution.

Let me start off by saying that I fully believe in giving credit where its due.  As such, I will always do my best to cite the author or website where the code was found or inspired from.
The purpose of this blog is to:

  1. help me remember things
  2. help others who are up and coming
  3. help others who may be encountering similar issues
  4. share my limited knowledge and limited experiences
  5. get constructive criticism and feedback from the community

That said, I’m pretty sure I got this from David O’Brien and made some minor modifications to it.  The last time I looked at this code was July of 2014 so I don’t know if David has updated it since.

OK – on with the show.

The code is run-ready with the exception that you have to uncomment the RefreshNow property and Put method.  This is merely a precautionary safety measure.

The code isn’t super perfect, elegant, setup to trap all errors and provide remediation guidance etc. – Function over form. 🙂




        if((-not $Server) -or (-not $SiteCode))
            {
                if(-not $Env:SMS_ADMIN_UI_PATH) { write-host "ERROR: You need to supply the Server and SiteCode since [`$Env:SMS_ADMIN_UI_PATH] isn't set." -ForegroundColor Red; break }
                if(!(Get-Module -Name ConfigurationManager)) { Import-Module (Join-Path $Env:SMS_ADMIN_UI_PATH.Substring(0,$Env:SMS_ADMIN_UI_PATH.Length-4) 'ConfigurationManager.psd1') -ErrorAction Stop }
                if(-not $Server) { $Server = (Get-PSDrive -PSProvider CMSite -ErrorAction Stop).Root }
                if(-not $SiteCode) { $SiteCode = (Get-PSDrive -PSProvider CMSite -ErrorAction Stop).Name }
            }

        $Failures = $null

        # Find all content with State = 1, State = 2 or State = 3
        # http://msdn.microsoft.com/en-us/library/cc143014.aspx
        $Failures = Get-WmiObject -ComputerName $Server -Namespace root\sms\site_$SiteCode -Query "SELECT * FROM SMS_PackageStatusDistPointsSummarizer WHERE State = 1 or State = 2 Or State = 3" -ErrorAction Stop

        if($Failures -ne $null)
		        foreach ($Failure in $Failures)
				        # Get the Distribution Points that the content couldn't distribute to
				        $DistributionPoints = Get-WmiObject -ComputerName $Server -Namespace root\sms\site_$SiteCode -Query "SELECT * FROM SMS_DistributionPoint WHERE SiteCode='$($Failure.SiteCode)' AND  PackageID='$($Failure.PackageID)'" -ErrorAction Stop

				        foreach ($DistributionPoint in $DistributionPoints)
					            # Confirm we're really looking at the correct Distribution Point
					            if ($DistributionPoint.ServerNALPath -eq $Failure.ServerNALPath)
                                        write-host "[$($Failure.ServerNALPath.Substring(12,$($Failure.ServerNALPath.IndexOf('\"')-12)))]`t[$($Failure.PackageID)]`t[$($Failure.SecureObjectID)]" -ForegroundColor Yellow

                                        # Use the RefreshNow Property to "Refresh Distribution Point"
                                        # https://msdn.microsoft.com/en-us/library/hh949735.aspx
                                        #$DistributionPoint.RefreshNow = $true
        Remove-Variable Failures -ea SilentlyContinue
        Write-Host "ERROR: AN ERROR WAS ENCOUNTERED" -ForegroundColor Red
        Write-Host "`$_ is: $_" -ForegroundColor Red
        Write-Host "`$_.exception is: $($_.exception)" -ForegroundColor Red
        Write-Host "Exception type: $($_.exception.GetType().FullName)" -ForegroundColor Red
        Write-Host "TargetObject: $($error[0].TargetObject)" -ForegroundColor Red
        Write-Host "CategoryInfo: $($error[0].CategoryInfo)" -ForegroundColor Red
        Write-Host "InvocationInfo: $($error[0].InvocationInfo)" -ForegroundColor Red
Remove-Variable Server,SiteCode -ea SilentlyContinue
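A quick aside on the dense Substring/IndexOf one-liner in the Write-Host above: it pulls the server name out of the ServerNALPath by skipping the 12-character `["Display=\\` prefix and reading up to the `\"` that closes the display name.  Here's that same math in isolation (shown in Python just for clarity; the NAL path below is a made-up example following the usual shape, not real output):

```python
# Extract the DP server name from a ConfigMgr-style ServerNALPath string.
# The sample path is fabricated for illustration; real paths follow the
# same ["Display=\\server\"]MSWNET:... shape.
nal_path = r'["Display=\\DP01.contoso.com\"]MSWNET:["SMS_SITE=ABC"]\\DP01.contoso.com\\'

# Same math as the script: skip the 12-character '["Display=\\' prefix,
# then take everything up to the backslash-quote that closes the name.
start = 12
end = nal_path.index('\\"')

server = nal_path[start:end]
print(server)  # DP01.contoso.com
```

If the NAL path format ever changes, this is the line of the script you'd need to revisit.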

Let me know how it goes and if it helps you.

Good Providence!