Best practices for test environment
We have a K1000 and want to test things in a controlled environment, and I'm wondering what others in the community do. It's been suggested here that we get another K1000 for testing, but I'm not sure that's the best way to go. My thought is that we just need a test lab of computers to mess around with. I know you can cause issues if you make the wrong change on the K1000. Any best-practice tips? Our K1000 admin position was cut and we're redistributing the work.
SMal and nheyne both have great suggestions.
You could use a different organization for your test group; submit a support ticket if you do not have organizations enabled. Otherwise, you can purchase a VM license if you want a separate test K1000.
Either way, my suggestion would be to use a test group (either through separate orgs or through labels), such as your IT department, and deploy to it first to test out patches, MIs, etc.
I'm a big fan of using VMs for this purpose because it's very easy to revert to a snapshot if something goes wrong. The only caveat is that you can't test things like Dell updates on VMs.
We have gotten by using the test-lab method so far: if we want to implement anything using the K1000, we use those machines first. Then, if all testing goes smoothly, we pick a school (we're a K-12 district) and deploy to only that one location next. If everything is still smooth, we consider it ready for production and take it global. I've never really understood having a second K1000, with the exception of version upgrades; other than that, I don't know what the benefit would be.
We use labels to deploy. We have a couple of each deployed model labeled "testlab" that we test against. If those all go well, we deploy to the label "ITdept" and our staff gets the update. If that goes well, we add one site, wait a day, and finally add the other sites by their labels.
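The staged rollout above (test lab, then IT, then one site, then the rest, waiting between rings) can be sketched as a simple schedule. A hedged illustration only: `build_rollout` is a made-up helper, the start date is arbitrary, and the site labels after "ITdept" are invented; on a real K1000 each ring would just be a label on a scheduled deployment:

```python
from datetime import date, timedelta

def build_rollout(start, rings, wait_days=1):
    """Return (date, label) pairs; each ring deploys wait_days after the last."""
    return [(start + timedelta(days=i * wait_days), label)
            for i, label in enumerate(rings)]

# Ring order from the post: test lab first, IT staff next, then site by site.
rings = ["testlab", "ITdept", "Site-A", "Site-B", "Site-C"]
schedule = build_rollout(date(2014, 1, 6), rings)
for day, label in schedule:
    print(day, label)
```

The value of spelling it out is the built-in bake time: every ring gets at least a day of real-world use before the blast radius grows.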
We just created a special label, "exception", and applied it to certain machines we do not want the K1000 to update. Now we push to a new label: all machines at a site minus machines with the exception label.
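That targeting rule ("all machines at a site except those with the exception label") is just a set difference. A minimal sketch with made-up machine names; on the appliance itself this would be a smart label, not Python:

```python
# Hypothetical inventory: every machine at one site, plus the machines
# carrying the "exception" (do-not-update) label.
site_machines = {"LAB-01", "LAB-02", "FRONTDESK-01", "KIOSK-01"}
exception = {"KIOSK-01"}

# Deployment targets = the site minus the exception machines.
deploy_targets = site_machines - exception
print(sorted(deploy_targets))  # ['FRONTDESK-01', 'LAB-01', 'LAB-02']
```

Keeping the exclusions in their own label means the per-site labels stay simple, and one flag on a machine pulls it out of every deployment at once.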
I have test VMs where all initial testing happens to verify Managed Installs or Scripts as successful; there's no risk of harm in that environment. Then I have a first test group in the production environment, a set of work-study/intern machines, where if something blew up it wouldn't necessarily be a four-alarm fire. Then I leak things out to progressively larger groups before targeting all machines. If I'm doing an update for an application I've done hundreds of times, I may speed things up a bit, which sometimes bites me.
Labels and a test group. We tried the lab setup with one of each type of hardware we use, but that proved unreliable because the lab machines were not being actively used the way machines "in the wild" were. Evaluating impact on performance, for example, never went well that way, and without fail, once we got to the higher-level managers and their computers slowed down (patching, software deployments), we were basically told to turn it off. Now we try things on a machine or two first (if it's hardware related), then the IT department label gets to be the first round of victims, and then we move on department by department to minimize the impact on company workflows. As others have suggested, VMs are perfect for software deployment testing; labels make that controlled environment easy to target, and we have all service desk staff use those VMs during the testing phase.
We have a pretty sophisticated setup, necessitated by the size of our IT environment. We have a dedicated test environment consisting of a vK1200 and fewer than 25 physical and virtual machines. The physical PCs are used for evaluating hardware-dependent components such as driver injection at build time, custom inventory rules, etc. We'll be moving to a fully virtualized PC environment for software package deployment testing in 2014. The virtual PCs are snapshotted so we can quickly roll back to a pristine state. Currently, software deployment testing occurs on our physical PCs, which is somewhat tedious to roll back during iterative testing. Once validation is complete (e.g., the install finishes without error and with the desired result, it installs silently for the user, and rollback/uninstall is successful), the package is promoted (export > import) to QA.
We have a multi-org, physical K1200. Our QA lab (a mock-up of a retail location) is placed in its own org so the lab admins and testers can perform integration testing via scripted test cases against their targeted systems, with no possibility of releasing the new software to production systems. Once QA has certified the package, my team takes back over and exports > imports the package to our production org, which consists of 18K+ endpoints.
The software is scheduled for release via script to a live beta group of PCs. After a few days of testing, we begin releasing to targets in selected locations via script (so the software is installed while our retail locations are closed). Once we feel we've reached the desired level of saturation, we leave the script enabled for break/fix and create a managed installation to catch up machines that may have been offline. This also forces the software package to be installed at build time.
For the most part, this process works for us.