I upgraded to v5.5.90546 and have already applied the consolidated hotfix (k1_hotfix_5.5_ConsolidateFixForPatching_20140107.kbin).  The problem is that when the K1000 tries to replicate to my shares, it copies about 1 GB, then deletes it and starts replicating the same data over and over.  I've gone so far as deleting all patches from the K1000 and redownloading them, and I even deleted everything in the patches folder on one of my shares, as requested by first-level KACE support.  The issue is still not resolved.

I'm using a Buffalo TeraStation as my share and a Server 2008 VM as my replication machine.  All agents on my replication servers are 5.5.30275.  Everything worked fine before the upgrade.  I have a ticket open with support, but wanted to see if anyone else has had the same issue.

Answer Summary:
There is a bug in 5.5 regarding UNC paths and authentication. I tried mapping a drive from my VM to my replication share, but replication would not occur after I tried to set it up as a local drive on the K1000. The only way I was able to get the patches to replicate correctly was to remove authentication altogether and set up the share with no username and password. The issue is not expected to be resolved until release 6.1; I must have missed this bug in the release notes before I upgraded.


  • Is there a way to access the patches directly on the K1000 and copy them to my replication share manually? I'm experiencing the same problems. Ever since 5.5, my replication shares have not worked properly.

Community Chosen Answer


I experienced a very similar issue with my 12 replication shares never completing successfully once I upgraded my K1000 to v5.5. The problem continued even after I upgraded to v6.0. My replication shares would either restart completely, or the amount of files/data in the replication queue would fluctuate, increasing or decreasing randomly. Also, when attempting to run a patch job on my desktops, it would fail with an "Error (Handshake Failed)" status.

I eventually resolved this issue on my own, even though I had worked with KACE support. In my replication share settings, under Destination Share > Path, I had used a UNC path of \\ServerA\KACE_Share with a User and Password specified. The issue was resolved when I specified a local path of C:\KACE_Share (my replicating device was also storing the replicated data) with no Username and no Password (neither is needed for local paths). Once I made this change, my replication completed successfully and patching from replication shares had no errors. Specifying a UNC path was never an issue before v5.5, and KACE support made no mention of it when they reviewed my replication share settings.
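If you want to rule out a plain permissions problem before blaming the K1000, a quick sanity check is to confirm from the replication machine itself that the configured account can actually write over the UNC path. This is just a sketch run in a Windows command prompt; the server, share, and account names (\\ServerA\KACE_Share, DOMAIN\kace_svc) are examples from my setup and should be replaced with your own:

```shell
:: Connect to the share using the same credentials configured in the
:: K1000 replication share settings (prompts for the password):
net use \\ServerA\KACE_Share /user:DOMAIN\kace_svc *

:: Try a test write over UNC, then over the local path that worked for me:
echo test > \\ServerA\KACE_Share\write_test.txt
echo test > C:\KACE_Share\write_test.txt

:: Clean up the test files and the mapped connection:
del \\ServerA\KACE_Share\write_test.txt C:\KACE_Share\write_test.txt
net use \\ServerA\KACE_Share /delete
```

If the UNC write succeeds here but the K1000 still loops, that points at the 5.5 UNC/authentication bug rather than the share itself.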

I hope that writing about my experience saves someone else a lot of time. 

Answered 05/23/2014 by: Ronny
Senior White Belt




I have experienced some odd behavior with replication since upgrading to 5.5. Everything replicates fine, as it did prior to the upgrade. The issue I have is that I cannot force replication outside of the time window that was previously set; I sometimes do that to test a script or managed install. It replicates fine in the evening when the scheduled window opens, but I can no longer force replication in real time. I deleted and recreated one of my replication shares to see if that would resolve the issue, and it did not.

Answered 02/10/2014 by: rockhead44
Tenth Degree Black Belt


Try deleting the replication share altogether and recreating it.  Also, upgrade the agent on the replication machine if you haven't already.

Answered 01/22/2014 by: jknox
Red Belt

  • I'm not having any issues with software or client distributions replicating, only patches. All agents on my replication machines are version 5.5.30275.
    • It's still not a bad idea to recreate the replication share. I've seen some weird problems fixed by doing that.

      Enable debug and see what the logs say when it tries to copy the patches over. My guess is that a specific patch is causing it to fail out.