Important Notice: On February 29th, this community was put into read-only mode. All existing posts will remain, but customers are unable to add new posts or comment on existing ones. Please feel free to join our Community Discord for any questions and discussions.

Push Deployments at Remote Sites

We run MPLS from our main site to our remote sites, with a T1 connection from here to each of those sites. I copied the deployment files down to a server that is local to each site, re-created the deployment with a custom variable pointing to that server, and set the deployments up in "Pull" mode. The only problem is that I am seeing copy speeds of less than 100 KB/s on each of the deployments, and the link starts acting like it is becoming saturated.

I was operating under the understanding that the computers were sent a list of files to pull from the server listed in the deployment, and that they copied the files directly from those directories. Is there some step in the middle where the data passes back through the PDQ Deploy server here at our main site?



  • Hmmmm. That's interesting behavior. Pull acts just as you describe: the client running the deployment contacts the server hosting the files and "pulls" them into its own Runner directory.

    Can you tell me a little bit about how you have these file shares set up? Is it just a server somewhere with a directory shared out, or are you using DFS to keep things copied/consistent between sites? (If not, you should be, as that would save you headaches.)

    Once I better understand the layout of your deployment scenario I will probably have a couple of suggestions, but until then I won't throw out ideas that may or may not work. I'd rather have some info and then give you a solution, rather than a bundle of straws to pick from.

  • I agree, we should be using DFS.

    Right now, there are only a handful of applications that the manufacturing sites use that need regular updating, and we do not have DFS set up. I manually copied the Firefox ESR folder into a shared Software folder on each site's server, then duplicated the package, changed each copy from $(Repository) to @(sitename), checked for any paths still mapped directly to our main repo, and set them to pull.

    I am starting to plan for rolling out more functionality at these remote sites, but it will be a slow process. We are also going to be upgrading our site-to-site infrastructure soon, so hopefully these problems will mostly be eliminated eventually, but I just had to upgrade the Firefox installations and will need to update some other, larger apps in a few months' time.

  • Could you perhaps post a picture of the Firefox package? Particularly the install step, so I can see its settings? I have a suspicion, but I want to be sure.

  • Here is the main package. This one is set to push at our main site.


    Here is one of the remote installs. It is set for pull.

    It is adapted directly from the regular Firefox install in the PDQ package library.

  • So in my head I have your layout something like this: (Simplified of course)

    If I am correct about how you laid out your packages, then to me everything on PDQ's part is working as expected. I know MPLS links can be especially... finicky, so I suspect a network-related issue is causing your slow link speed. A lot depends on whether you have managed routers for your links (with MPLS they usually are managed, at least here in the States).

  • The install files at the remote sites are on a server (at that site) that has a 1 Gb connection to an enterprise-level 1 Gb switch (again, at the site), and the clients all have 1 Gb NICs. With the pull option on, regardless of the MPLS, shouldn't this network setup provide more than 80 to 100 KB/s of download throughput to the clients? Especially when only two to eight clients are running at a time? I could understand the backplane of a switch becoming saturated with twenty installs running simultaneously, but the number of simultaneous deployments is capped in PDQ Deploy's preferences anyway.

    If that is the case and the PDQ Deployment server is somehow still throttling the software package pulls, then I will plan accordingly for larger future installs, but somehow I don't believe this is the way that it is supposed to be working.

    I will try running a few more deployments today just to verify that it was not a weird network issue at the sites I was monitoring (I really only watched deployments at two of the sites); maybe I'm just looking at anecdotal data from a one-time issue.

    I will let you know. In the meantime, let me know if you think this network setup should really only be pulling at 80 KB/s per client. Is it possible that the software is misrepresenting the download speeds? I have no idea how it monitors the speeds; perhaps one of the PDQ programmers can chime in here?
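    As a back-of-the-envelope check on those numbers (nominal line rates, not measurements, and only worth anything if traffic is actually crossing the WAN): a T1 tops out at 1.544 Mbps, which works out to roughly 190 KiB/s, so sustained 80-100 KB/s is well under the T1's ceiling but nowhere near local gigabit speed:

```python
# Back-of-the-envelope link ceilings, for comparison with the observed
# ~80-100 KB/s. Line rates are nominal; real-world throughput is lower.
T1_BPS = 1.544e6      # T1 line rate, bits per second
GIGABIT_BPS = 1e9     # local gigabit LAN line rate

def to_kib_per_sec(bits_per_sec: float) -> float:
    """Convert a bits-per-second line rate to KiB/s."""
    return bits_per_sec / 8 / 1024

print(f"T1 ceiling:      ~{to_kib_per_sec(T1_BPS):.0f} KiB/s")       # ~188 KiB/s
print(f"Gigabit ceiling: ~{to_kib_per_sec(GIGABIT_BPS):.0f} KiB/s")  # ~122070 KiB/s
```

    If the combined speed across several clients never climbs above the T1 ceiling, that would hint the copies are traversing the WAN rather than staying on the local LAN, though it could just as easily be a reporting quirk.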

  • You are correct. For this setup, you should be seeing well above 100 KB/s of transfer activity. To my knowledge there is no exchange between the PDQ Deploy server and the client in a pull-type deployment, aside from reporting the speed of the copy operation. There *could* be something causing a misrepresentation of the speed, but you could easily tell by how long the deployment actually took to complete.


    For example, the Office 2016 package I've created pushes out to clients at ~80 Mbps on average on the local subnet, but I have a repository on the PDQ Deploy server that everything pushes from. I've never used pull-type deployments except for one particular one-off thing I did.

    One thing you could try, and this is pretty nuts, is to mirror a port on your switch and do a Wireshark capture of the traffic to see exactly what's going on with a machine during a deployment; that would help you narrow it down.

    Jason/Brigg, you guys have anything to add here? 

  • What is the value of your @(Kingsbury) variable?

  • @(Kingsbury) is the UNC path to the share of the local server in Kingsbury, IN. In our case, "\\servername\Storage\Software". There are three more like it.

    I am going to assume that the data was misrepresented, and try to monitor a client while a pull is happening to see what the actual speed is. Looking back at the deployments, the times seem about right for a local copy, somewhere between three and five minutes per deployment. Very strange behavior from PDQ, but I think I was probably just being a little too attentive, if there is such a thing...
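    In case it helps with that monitoring, here's a rough sketch (Python, purely for illustration) of timing a copy yourself to estimate the real throughput, independent of whatever PDQ reports. The demo uses a temporary file; in practice you'd point `src_file` at something on the site share (the path in the comment is hypothetical):

```python
import os
import shutil
import tempfile
import time

def measure_copy_throughput(src_file: str, dst_dir: str) -> float:
    """Copy src_file into dst_dir and return the throughput in MB/s."""
    size = os.path.getsize(src_file)
    dst = os.path.join(dst_dir, os.path.basename(src_file))
    start = time.perf_counter()
    shutil.copyfile(src_file, dst)
    elapsed = time.perf_counter() - start
    return (size / (1024 * 1024)) / elapsed if elapsed > 0 else float("inf")

if __name__ == "__main__":
    # Demo with a temporary 8 MB file. In practice, point src at the site
    # share, e.g. r"\\servername\Storage\Software\FirefoxSetup.exe" (hypothetical).
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "payload.bin")
        with open(src, "wb") as f:
            f.write(os.urandom(8 * 1024 * 1024))
        dst_dir = os.path.join(tmp, "dest")
        os.makedirs(dst_dir)
        print(f"{measure_copy_throughput(src, dst_dir):.1f} MB/s")
```

    Run from one of the clients against the local share, a few copies like this would tell you quickly whether the wire speed matches what the console claims.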

  • It would be really cool if you could define repositories based on subnet(s) for pull scenarios. This could be done in Preferences, so that Auto Deploy jobs could leverage it rather than having you import your packages and create a different package for each subnet. We have 16 locations, so this would get messy.

  • Indeed. It would make app versions easier to track in PDQ Inventory too: a single group for each app, instead of groups for each site and then for each app again, like we have set up now. Plus you could deploy to every machine as a single deployment and watch them all run from that deployment.

    The argument will no doubt be that you should be using DFS and that it will handle all of that for you based on server response time, but that's a larger investment in hardware and licensing compared to having just a NAS at each site.

  • DFS adds a huge amount of complexity to an environment. We moved away from it because it caused more issues than it solved. I chose PDQ because it's so simple and easy to use, and simplicity is what I'm aiming for. Most of our endpoints at the remote sites are thin clients, but we have some thick clients. I'm looking into a way to do this with DNS: I want to make an A record for each site that uses the same DNS name but points to a different IP, though I'm not sure MS DNS can do that. Once I get that figured out, I could use robocopy scripts to keep my repositories in sync.

  • You wouldn't want to create duplicate A records. You could create CNAME records and do some trickery with that.


    To keep things in sync I'd rather use a robocopy or PowerShell script in a scheduled task. Me being me, I'd lean more towards PowerShell because of what you can do with error handling, letting the script keep track of itself and alert you when there are issues with a sync.
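    For what it's worth, the kind of self-tracking sync script described above might look something like this sketch (in Python rather than PowerShell, just to illustrate the error-handling idea; the repository paths in the demo are hypothetical):

```python
import filecmp
import logging
import os
import shutil

def sync_repo(src: str, dst: str) -> list[str]:
    """One-way sync of src into dst, copying new or changed files only.

    Returns the list of source paths that failed to copy, so a scheduled
    task can alert on a non-empty result instead of failing silently.
    """
    failures = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel) if rel != "." else dst
        os.makedirs(target, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target, name)
            try:
                # Skip files whose size/mtime already match the destination.
                if os.path.exists(d) and filecmp.cmp(s, d, shallow=True):
                    continue
                shutil.copy2(s, d)
            except OSError:
                logging.exception("Failed to copy %s", s)
                failures.append(s)
    return failures

if __name__ == "__main__":
    # Hypothetical paths: main repository -> a site server's share.
    failed = sync_repo(r"\\mainserver\Software", r"\\siteserver\Storage\Software")
    if failed:
        raise SystemExit(f"{len(failed)} file(s) failed to sync")
```

    Note this sketch only adds and updates files; unlike robocopy's /MIR, it doesn't delete stale files from the destination, which you may or may not want for a software repository.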