Lots of KACE customers ask us, when we configure their systems for patching, what best practice is.
Despite asking, discussing and, to be quite frank, kicking Quest, KACE continues to sit firmly on the fence when it comes to a definition.
With the introduction of the Cyber Essentials certification here in the UK and Europe, we are at least starting to see a little more guidance from the industry, though still nothing we can really use when configuring our KACE SMAs.
Compliance Definition - the state or fact of according with or meeting rules or standards
The truth of the matter is that on every KACE engagement we undertake, we ask the customer the same question.
“What does Compliance mean to you?”
Guaranteed, the answers given are always different, so when looking at the question of best practice in the light of this variance of opinion, we shouldn't really expect KACE to be setting the agenda.
But what do we do about Best Practice guidance?
Going back to the Cyber Essentials guidelines, the only real target, KPI or advice given is as follows: managed endpoints should be
"Patched within 14 days of an update being released in cases where the patch fixes a vulnerability with a severity the vendor describes as ‘critical’ or ‘high risk’."
Simple, right? So any patch the vendor rates as "critical" or "high risk" should be installed within 14 days to achieve compliance.
So how do we use that as a tool to get to a KPI that we can actually measure patch compliance with?
Well, first, I think we need some underpinning targets to ensure that our systems are connecting, checking and at least attempting to update within the 14-day window. I would suggest we start with:
- % of devices that have run a Detect in the last 7 days
- % of devices that have run a Deployment in the last 7 days
This would at least give every machine two opportunities to detect and deploy before the 14-day target, and would also highlight those devices that have not checked in, have not connected, or have failed to run successful patch routines. This is key with remote workers, as we may need to ask them to connect via VPN and leave the session open for an extended period to allow patch jobs to complete.
Disconnected systems will clearly be highlighted by the above KPIs, and that matters: calculating the PCs that run detect/deploy jobs as a percentage of your whole inventory will give a skewed figure if a number of your devices are missing in action (MIA).
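As a rough sketch of the two underpinning KPIs above (the device records and field names here are invented for illustration; real figures would come from an SMA report), the percentages could be calculated like this:

```python
from datetime import datetime, timedelta

# Hypothetical device check-in records -- illustrative only, not SMA schema.
devices = [
    {"name": "PC-001", "last_detect": datetime(2024, 6, 10), "last_deploy": datetime(2024, 6, 10)},
    {"name": "PC-002", "last_detect": datetime(2024, 5, 1),  "last_deploy": None},  # MIA device
    {"name": "PC-003", "last_detect": datetime(2024, 6, 12), "last_deploy": datetime(2024, 6, 12)},
]

def kpi_pct(devices, field, window_days=7, now=datetime(2024, 6, 14)):
    """Percentage of devices whose `field` timestamp falls inside the window."""
    cutoff = now - timedelta(days=window_days)
    hits = sum(1 for d in devices if d[field] and d[field] >= cutoff)
    return 100.0 * hits / len(devices)

detect_pct = kpi_pct(devices, "last_detect")   # devices that ran a Detect in the last 7 days
deploy_pct = kpi_pct(devices, "last_deploy")   # devices that ran a Deployment in the last 7 days
```

Note that the MIA device (no recent detect, no deploy at all) drags both percentages down, which is exactly the skew described above: you may want to report MIA devices separately before judging the detect/deploy figures.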
Next, I would think about a KPI that highlights when a device may be stuck part way through the deployment of patches, most typically when a patch requires a reboot. If you give your end-users the chance to delay or even cancel patching reboots, and the end-user does not tend to shut down at the close of every business day, you will start to see extended delays in patch deployment, which will affect your compliance. Therefore, I would think about measuring:
- % of Devices with a status of “Pending Reboot”
You will be able to target those Devices and make sure that they do not fall too far behind.
To make sure that your Patch subscriptions are correct and that you are targeting the correct machines, I would also think about a report that shows:
- Critical patches that have been released but are not installed on any device and show no detect results.
So now we have a system that ensures our Devices are checking in, running routines and using the correct patches for compliance, all we need to do is define the criteria for the compliance report itself. I would articulate that as follows:
- % of Devices showing Critical Patches pending, where the patch release date is older than 2 weeks
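That headline compliance figure could be sketched as follows (again with invented data; in practice the pending-patch rows would come from an SMA patching report, and only connected, non-MIA devices should be counted):

```python
from datetime import datetime, timedelta

# Illustrative rows only: one entry per (device, pending critical patch).
pending = [
    {"device": "PC-001", "patch_released": datetime(2024, 5, 20)},  # released > 2 weeks ago
    {"device": "PC-002", "patch_released": datetime(2024, 6, 12)},  # still inside the window
]
total_devices = 10  # the connected inventory being measured

def compliance_pct(pending, total_devices, now=datetime(2024, 6, 14), window_days=14):
    """% of devices with no critical patch pending past the 14-day window."""
    cutoff = now - timedelta(days=window_days)
    overdue = {row["device"] for row in pending if row["patch_released"] < cutoff}
    return 100.0 * (total_devices - len(overdue)) / total_devices

compliance = compliance_pct(pending, total_devices)  # 9 of 10 devices compliant -> 90.0
```

A device with a critical patch pending inside the 14-day window still counts as compliant; only patches older than the window count against the target.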
That just leaves you to set the level of compliance that you now require, as a KPI is a target, right?
For those of you requiring full compliance, 100% is the target, but for those of us that are mere mortals, maybe 95% or 90% is a good place to be, depending on the number of endpoints you have in your SMA.
With regard to the frequency of this report, I would suggest every 2 weeks will give you a comprehensive view, with the first report run 2 weeks after a "Patch Tuesday" to give you a starting point.
You could then apply similar targets to all other patches, but perhaps with a monthly frequency and a lower KPI of, say, 85% or 80%.
All that’s left for you to do is to write the reports in your SMA to support your Patching Best Practice Compliance.
Please add a comment below to give feedback and let us know how you define Patch Compliance.
Indigo Mountain is a UK-based KACE Professional Services partner, proud of a 10-year track record of successful delivery of excellent KACE training, consultancy and complementary products to the global KACE community.
Another idea I have is to see if we can report on detect/deploy success by machine retrospectively. Say you deploy either weekly or monthly: seeing, by machine, that the last 2 weeks ran fine, the week before failed, and the week before that ran fine would let you identify machines having issues regularly and address them individually. Any thoughts? Please add comments.... - Hobbsy 2 years ago