Task Sequence Fails to Start 0x80041032

Recommended reading: https://blogs.msdn.microsoft.com/steverac/2014/08/25/policy-flow-the-details/

After preparing a 1703 upgrade Task Sequence for our 1607 machines, I kicked it off on a box via Software Center and walked away once the status changed to ‘Installing’, thinking I’d come back to an upgraded machine an hour later.  To my surprise, I was still logged on and the TS was still ‘running’.  A few hours later the button was back to ‘Install’ with a status of ‘Available’.  Thinking I goofed, I tried again and the same thing happened.

I assumed it was unique to this one machine, so I tried it on another and it behaved the same way.  I then ran the upgrade sequence on 5 other machines and they all exhibited the same behavior.  I knew the Task Sequence was sound because others were using it, so it was definitely something unique to my machines, but what?

Since I was doing this on 1607 machines, I tried upgrading from 1511 to 1703 and from 1511 to 1607, but they too failed the same way, confirming it was not Task Sequence specific but, again, unique to my machines.  After spending quite a few cycles on this, my original machine started failing differently: I was now seeing a ‘Retry’ button with a status of ‘Failed’.  I checked the smsts.log but it didn’t have a recent date/time stamp, so the Task Sequence never got that far.  Hmm…

Check the TSAgent.log

Opening the TSAgent.log file, I could see some 80070002 errors about not being able to delete HKLM\Software\Microsoft\SMS\Task Sequence\Active Request Handle, but the real cause was a bit further up.
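
A quick way to check whether that handle is still hanging around on a stuck machine (a minimal sketch; the registry path is the one named in the error above, and the cmdlets are standard PowerShell):

# Check for a leftover 'Active Request Handle' from a stuck Task Sequence
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\SMS\Task Sequence' -Name 'Active Request Handle' -ErrorAction SilentlyContinue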


The lines of interest:


Getting assignments from local WMI. TSAgent 9/1/2017 12:58:27 PM 1748 (0x06D4)
pIWBEMServices->ExecQuery (BString(L"WQL"), BString (L"select * from XXX_Policy_Policy4"), WBEM_FLAG_FORWARD_ONLY, NULL, &pWBEMInstanceEnum), HRESULT=80041032 (e:\nts_sccm_release\sms\framework\osdmessaging\libsmsmessaging.cpp,3205) TSAgent 9/1/2017 1:03:55 PM 1748 (0x06D4)
Query for assigned policies failed. 80041032 TSAgent 9/1/2017 1:03:55 PM 1748 (0x06D4)
oPolicyAssignments.RequestAssignmentsLocally(), HRESULT=80041032 (e:\cm1702_rtm\sms\framework\tscore\tspolicy.cpp,990) TSAgent 9/1/2017 1:03:55 PM 1748 (0x06D4)
Failed to get assignments from local WMI (Code 0x80041032) TSAgent 9/1/2017 1:03:55 PM 1748 (0x06D4)

The source of error code 80041032 is Windows Management (WMI), and it translates to ‘Call cancelled’, which presumably happened while running the query select * from XXX_Policy_Policy4, where XXX is the site code.

I ran a similar query on my machine to get a feel for the number of items in there:


(gwmi -Class xxx_policy_policy4 -Namespace root\xxx\Policy\machine\RequestedConfig).Count

This ended up failing with a Quota violation error, suggesting I’d hit the WMI memory quota.
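
Before raising anything, it’s worth peeking at the current provider host quotas (a quick sketch; __ProviderHostQuotaConfiguration lives in the root namespace):

# Show the current WMI provider host quotas (MemoryPerHost defaults to 512 MB)
Get-WmiObject -Class __ProviderHostQuotaConfiguration -Namespace root |
    Select-Object MemoryPerHost, MemoryAllHosts, HandlesPerHost, ThreadsPerHost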

Increase WMI MemoryPerHost & MemoryAllHosts

Fortunately, there’s a super helpful TechNet Blog post about this.  Since all of my test machines were running into this, I decided to make life easier for myself and use PowerShell to accomplish the task on a few of them, thinking I’d only have to raise the limit once.


# Grab the WMI provider host quota configuration from the root namespace
$PHQC = gwmi -Class __providerhostquotaconfiguration -Namespace root
# Raise the per-host memory quota to 768 MB (default is 512 MB)
$PHQC.MemoryPerHost = 805306368
# Below is optional but mentioned in the article; raises the all-hosts quota to 2 GB
#$PHQC.MemoryAllHosts = 2147483648
$PHQC.Put()
Restart-Computer
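
If you want to push the same change to a handful of test machines in one go, a remoting variant along these lines should do it (a rough sketch; assumes PowerShell remoting is enabled, and the computer names are placeholders):

# Bump MemoryPerHost on a few test machines at once (names are placeholders)
Invoke-Command -ComputerName 'TEST01','TEST02','TEST03' -ScriptBlock {
    $phqc = Get-WmiObject -Class __ProviderHostQuotaConfiguration -Namespace root
    $phqc.MemoryPerHost = 805306368   # 768 MB
    $phqc.Put()
    Restart-Computer -Force
}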

After the machine came up I ran the same query again, and after 2 minutes and 38 seconds it returned over 1800 items.  Great!  I ran it again and after 5 minutes it failed with the same quota violation error.  Boo urns.  I kept raising MemoryPerHost and MemoryAllHosts to insane levels to get the query to run back to back successfully.
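
To reproduce that back-to-back timing test, wrapping the count in Measure-Command works well enough (a small sketch reusing the redacted xxx placeholders from earlier):

# Run the policy count twice back to back and time each pass
1..2 | ForEach-Object {
    Measure-Command {
        (gwmi -Class xxx_policy_policy4 -Namespace root\xxx\Policy\machine\RequestedConfig).Count
    } | Select-Object TotalSeconds
}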

The good news is that I was making progress, which suggested I was indeed hitting some sort of memory ceiling that had now been raised.

The bad news is the question it left behind: why me and not others?  Hmm…

Root Cause Analysis

I checked the deployments available to that machine and wowzers, the list was super long.  I won’t embarrass myself by posting an image of it, but it was very long.  This helped to identify areas of improvement in the way we’re managing collections and deploying things, be it Applications, Packages, Software Updates and so on.
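
If you’d rather pull that list with PowerShell than scroll through the console, something along these lines against the SMS Provider should get you close (a rough sketch; the site code, server name and device name are all placeholders):

# List the deployments aimed at every collection a device belongs to
$site   = 'XXX'
$server = 'SITESERVER'
$ns     = "root\sms\site_$site"

# Collections the device is a member of
$memberships = gwmi -ComputerName $server -Namespace $ns -Query `
    "select CollectionID from SMS_FullCollectionMembership where Name = 'PATIENT-ZERO'"

# Deployments targeted at each of those collections
$memberships | ForEach-Object {
    gwmi -ComputerName $server -Namespace $ns -Query `
        ("select SoftwareName, CollectionName from SMS_DeploymentSummary where CollectionID = '{0}'" -f $_.CollectionID)
} | Select-Object SoftwareName, CollectionName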

On my patient zero machine I ran the following to clear out the policies stored in WMI:


# Clear any queued Task Sequence execution requests
gwmi -Namespace root\xxx\softmgmtagent -Query "select * from ccm_tsexecutionrequest" | Remove-WmiObject

# Clear pending maintenance task requests
gwmi -Namespace root\xxx -Query "select * from sms_maintenancetaskrequests" | Remove-WmiObject

# Restart the SMS Agent Host so the client starts fresh
Restart-Service -Name CcmExec

I then downloaded the policies and tried to image – it worked!  I decided to let that machine go and focus on some administrative cleanup in SCCM.  After tidying things up a bit, the rest of my 1607 machines started the 1703 upgrade Task Sequence without issue and the 1511 machines ran the 1607 upgrade Task Sequence as well.
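
For the ‘downloaded the policies’ part, the usual trick is to kick off the machine policy cycles on the client (a minimal sketch, assuming the standard root\ccm client namespace and the well-known trigger schedule GUIDs):

# Trigger the machine policy retrieval & evaluation cycles
Invoke-WmiMethod -Namespace root\ccm -Class SMS_Client -Name TriggerSchedule `
    -ArgumentList '{00000000-0000-0000-0000-000000000021}'
Invoke-WmiMethod -Namespace root\ccm -Class SMS_Client -Name TriggerSchedule `
    -ArgumentList '{00000000-0000-0000-0000-000000000022}'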

As we continue to phase out Windows 7, we’ll hopefully update our methods to help avoid problems like this and perform that routine maintenance a little more frequently.