
Best-Practice? Using PDQ across international sites

Hello,

We have 3 sites spread across the globe, connected via VPN, with the PDQ server in the main HQ. Connectivity to the remote sites is pretty poor as they are so far away, so it is characterised by high latency and low bandwidth.

The PDQ repo is sync'd across these sites via DFSR and referenced in PDQ using DFS-N. All deployment packages use the repo variable, so are pushed/pulled from the local repo in each site. 

Understandably, deployments and inventory scans in the HQ complete much quicker than in the remote sites. All sites have fast local networks, so the slow performance must be down to the time taken to send commands to workstations and retrieve the results back.

I'm probably being picky, as most deployments/inventory scans occur on a schedule. But on the occasions where I do run these manually, I'm a little frustrated by the time it takes to complete.

Is there something I'm missing? Is there a best-practice approach to remote sites? I appreciate there isn't anything PDQ can do about topology and WAN speeds. I just want to make sure I haven't missed a webinar on this topic.


Comments

2 comments
  • Hi Michael, 

    I was about to raise a different topic based on international sites.

    Do you use Central Server mode? Or just standalone with the repos synced?

    I want to use Central Server mode but would love the ability to use local repos as opposed to the one on the central server, since the latter would mean clients copying files over the WAN.

  • Hi Mark,

    We use Central Server mode, so the database and main repo are stored at HQ. In PDQ Deploy, I have set the repository path to a DFS-N share. Each site therefore resolves this path to its local repo:

    Site 1: resolves to \\site1fileserver\shares\pdqrepo$
    Site 2: resolves to \\site2fileserver\shares\pdqrepo$
    Site 3: resolves to \\site3fileserver\shares\pdqrepo$

    The repo contents are then replicated (using DFSR) to the other two sites. But you could use many other methods of keeping these repos in sync.
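
    For example, DFSR could be swapped for a scheduled robocopy mirror. This is just a sketch; the server names and log path below are hypothetical, not our actual setup:

    ```
    :: Mirror the HQ repo to one remote site's repo share (hypothetical paths).
    :: /MIR mirrors the tree including deletions, /Z makes copies restartable,
    :: and /R /W keep retry behaviour sane over a flaky VPN link.
    robocopy \\hqfileserver\shares\pdqrepo$ \\site1fileserver\shares\pdqrepo$ /MIR /Z /R:2 /W:5 /LOG:C:\logs\pdqrepo-sync.log
    ```

    You would run one of these per remote site from a scheduled task, ideally outside business hours so the mirror doesn't compete with deployments for WAN bandwidth.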

    Then in my PDQ Deploy packages, I use the $repository variable in all the paths, such as...
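
    As an illustration (the package name and installer filename here are made up, not taken from my actual packages), an install step's file path would look something like:

    ```
    Install File: $(Repository)\7-Zip\7z-x64.msi
    ```

    Because the variable expands against the DFS-N path, each endpoint ends up fetching the installer from its own site's file server rather than from HQ.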

    The only thing to watch out for is to ensure that in each package's Properties you set the 'copy mode' to "Pull", see below. This makes the endpoint "pull" the package from the repo, so it can resolve DFS-N paths to its local repo. If you used "Push" mode instead, the central server would push the files to the endpoint (over the slow WAN/VPN), which we don't want to happen.

    What is frustrating is the time it takes to send, complete and return Deploy and Inventory commands to my remote sites. I'm sure 99% of this is due to slow WAN/VPN speeds, but having a distributed architecture might go some way to improving this. I'm comparing to PRTG, a monitoring tool we use, where you deploy a remote probe at a remote site to handle all local monitoring, which then sends the results back to a central server.

    Cheers
