
PDQ Deploy during Audit

Is it possible to deploy a PDQ Deploy package to a machine that is in audit mode while building an image?

I have an image in audit mode to which I attempt to deploy a baseline package. I am using local account credentials, and the file copy mode is set to Push so that no credentials are needed for the repository. The machine (while in audit mode) passes all remote repair steps and the package deploys successfully. The problem is that once I run sysprep /generalize with my unattend.xml, I get an error during the Specialize pass, specifically in Shell Setup.

I have also tested deploying just a single app (7-Zip) rather than the whole baseline package from PDQ Deploy, and I still get the "could not parse" error at the Specialize -> Shell Setup step.

If I instead deploy a Ninite install of just 7-Zip, sysprep completes successfully and the rest of my imaging finishes.

What could be happening during the PDQ deployment that causes this error?

I noticed that the AdminArsenal directory is left behind in C:\Windows, but even deleting it doesn't fix the error.

Is this just not recommended practice? I get that a lot of people are moving to deploying via a snapin with FOG after imaging. That isn't preferable for me since it would add 20-30 minutes to my imaging process, and during the summer I image close to 2,500 machines. I want to evaluate that option in the future, but for now it makes more sense for me to update apps in audit mode and keep the imaging process fast.

We have always done app installs via Ninite and then updated them via PDQ Deploy without issues, but I would rather use the PDQ packages for the baseline install: I know how much work goes into making sure they are compatible between versions, and I also like the idea of auto-updates being turned off, etc.


Comments

4 comments
  • Just to add: I restored an identical snapshot before deploying via PDQ Deploy vs. Ninite and performed the same process apart from changing the install method.

  • So there is a process that runs as part of Sysprep that runs a script called SetupComplete.cmd. I can't recall the *exact* path at the moment, but it's in my notes at work on imaging.


    What you could do is create a .ps1 file inside that folder, and inside of SetupComplete.cmd add a line something along the lines of 

    powershell.exe -NoProfile -ExecutionPolicy Bypass -WindowStyle Hidden -File YourFile.ps1

    Your .ps1 file should contain something along the following lines:

    Invoke-Command -ComputerName yourpdqserver.fqdn -ScriptBlock { pdqdeploy.exe Deploy -Package BaselinePackageName -Targets $args[0] } -ArgumentList $env:COMPUTERNAME


    This script runs during the last part of the sysprep process, at the "We're getting things ready for you" phase of a Windows 8.1 or Windows 10 deployment. It's been too long since I've touched 7 to know the equivalent screen there, but it's after the step that produces that error.

    You could try doing that and see what results you get. I'll update this post in the morning once I'm at work with the exact location to create those folders. I believe you need to create a folder called Scripts in C:\Windows\Setup and put SetupComplete.cmd and the .ps1 file in there, but I need to verify.
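
    A minimal sketch of that layout, assuming the C:\Windows\Setup\Scripts location described above (verify it first) and keeping the placeholder YourFile.ps1 name, run from an elevated PowerShell prompt while still in audit mode:

    # Create the Scripts folder and drop in a SetupComplete.cmd that hands off to the .ps1
    $scripts = 'C:\Windows\Setup\Scripts'
    New-Item -Path $scripts -ItemType Directory -Force | Out-Null

    # SetupComplete.cmd just launches the PowerShell script sitting next to it
    $cmdLine = 'powershell.exe -NoProfile -ExecutionPolicy Bypass -WindowStyle Hidden -File "%WINDIR%\Setup\Scripts\YourFile.ps1"'
    Set-Content -Path (Join-Path $scripts 'SetupComplete.cmd') -Value $cmdLine -Encoding ASCII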

    No matter which way you do it, this way or via FOG (which is what I do, btw), you are still going to add the time it takes for PDQ to complete the installs. But as long as you have a decent disk (SSD preferable) and plenty of resources available to the PDQ Deploy server, it should handle the increased load with no issues. I routinely do 30-40 machines at a time during the summer (higher education) and have great success.

  • Thank you for your reply and insight, Stephen. I didn't want to go the after-sysprep route; however, the desire to have the baseline packages deployed by PDQ is pretty strong, so I will experiment with the method you've described and see how it works for me. I do like the idea of doing it during the script execution part of our sysprep process, before FOG would rename the machine and join the domain. This would be easier on the other employees, as they would know that once the machine is at the logon screen and joined to the domain it is ready to go.

    A couple of other questions, if you don't mind.

    I can see that you've used an environment variable for the computer name in your pdqdeploy command, and that makes sense to me. Are you changing the computer name and joining the domain before you invoke the pdqdeploy command? If so, are you doing that as part of the unattend.xml? Also, how do you get your unique computer name? Is that done by FOG and commands after imaging? I think I read that it was possible to at least change the computer name, if not join the domain, before first boot.

    Any additional advice is appreciated!

  • Not a problem! We actually use a snapin in FOG to kick off the deployment. So for us, we push our packages outside of sysprep. The snapin kicks off after FOG has renamed the machine and joined it to the domain. That's how I am able to use the environment variable, as the machine already has the correct name.
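
    In case it's useful, here's a rough sketch of what that snapin script can look like; the server name, package name, and the PDQ Deploy install path are assumptions, so substitute your own:

    # Runs on the client after FOG has renamed it and joined it to the domain,
    # so $env:COMPUTERNAME is already the final name PDQ Deploy should target.
    Invoke-Command -ComputerName 'yourpdqserver.fqdn' -ScriptBlock {
        param($Target)
        # Assumed default PDQ Deploy install path on the server; adjust if yours differs
        & 'C:\Program Files (x86)\Admin Arsenal\PDQ Deploy\PDQDeploy.exe' Deploy -Package 'Baseline' -Targets $Target
    } -ArgumentList $env:COMPUTERNAME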

    I'm thinking about how to handle things in your particular situation. PDQ Deploy has a message step, but I can't think of a way to terminate it gracefully from the deployment side, aside from killing the process that displays the message on the client side (I may test this).

    Along the same lines as the message step, though, you could use the PowerShell script itself to display the message. Something along the lines of:

    # Keep a notice up while the PDQ Deploy runner process is still running on the client
    # (process names here are placeholders; check the actual names on a target machine)
    While (Get-Process -Name 'pdqdeploy-runner' -ErrorAction SilentlyContinue) {

        # msg.exe command here

        Start-Sleep -Seconds 30
    }

    # After the while breaks
    Stop-Process -Name 'msgprocess' -ErrorAction SilentlyContinue

    Then once the script exits, whatever processes you have in place will continue. It sounds like you are using FOG as well? So you could do this inside of a snapin, the same as I do. Once the machine is on the domain, a message will be displayed on the lock screen and disappear after the deploy process on the client is terminated. How you handle it is entirely up to you. Personally, I kind of like using a message step at the top of the deployment and, at the end of the package, a PowerShell step to kill that message process.
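
    That end-of-package step can be a one-liner. A rough sketch, with a placeholder process name for whatever is displaying the message:

    # Final PowerShell step in the package: tear down the "in progress" message
    Stop-Process -Name 'YourMessageApp' -Force -ErrorAction SilentlyContinue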

    I might whip up a quick test in the lab and see what I can come up with, as this has me intrigued. I love any excuse to use PowerShell, haha.
