As I've spent more and more time in recent years working in Scrum environments, I've been thinking about how this agile process might adapt to projects outside product development. And being the "AppDeploy" guy, I was naturally compelled to consider how it might apply to the world of application packaging.
For starters, when I say packaging, I am not referring to repackaging (although that could well be a part of your packaging process). Rather, I mean the process of establishing a payload you can distribute to silently install applications in the way you need them installed. That can mean a simple command line, a more complex script, a couple of executables and an answer file, or any number of other ways to get the job done. The actual setup could be from a vendor or one of your own creations.
Next, what is Scrum, you may ask? The good people at the Scrum Alliance put it best and also set up my intent to apply it to packaging: "Scrum is an Agile framework for completing complex projects. Scrum originally was formalized for software development projects, but it works well for any complex, innovative scope of work. The possibilities are endless. The Scrum framework is deceptively simple." - Scrum Alliance
Whereas a backlog traditionally consists of features and bugs, in a package development process we'll have packages and bugs. Deployment packages are clear enough, but do we have bugs to deal with? Sure! That dialog that should go away but doesn't, that forced reboot upsetting users logging in in the morning, that misconfiguration that sends the user off to a web page somewhere on first launch: these are all things you of course need to address, but are they as important as the next package in line? Maybe, and maybe not.
In Scrum, you assign those tasks with the most business value to the top of the backlog to get done first. So how do you derive business value from a list of applications you need to deploy? It will vary by organization for sure, but some things you might consider in ordering your backlog include:
- Number of users that need it to do their job
- A new application is often more important than an upgrade
- A security update may need to be the highest priority
- Are the target users executives?
- Number of users waiting for it
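To make that ordering a little less subjective, you could even sketch it as a weighted score. The factors below mirror the list above, but the weights (and the idea of scoring at all) are purely illustrative assumptions; tune them, or throw them out, for your own organization:

```python
# A minimal sketch of ordering a packaging backlog by a business-value score.
# Factor names and weights here are illustrative assumptions, not a standard.

def business_value(item):
    score = item["user_count"]          # users who need it to do their job
    score += item["waiting_count"] * 2  # users actively waiting for it
    if item["is_security_update"]:
        score += 1000                   # security fixes float to the top
    if item["is_new_app"]:
        score += 50                     # new apps often outrank upgrades
    if item["targets_executives"]:
        score += 100
    return score

backlog = [
    {"name": "Corel Draw", "user_count": 12, "waiting_count": 12,
     "is_security_update": False, "is_new_app": True, "targets_executives": False},
    {"name": "Adobe Reader update", "user_count": 900, "waiting_count": 0,
     "is_security_update": True, "is_new_app": False, "targets_executives": False},
]

# Highest business value first: the security update for 900 users wins.
backlog.sort(key=business_value, reverse=True)
print([item["name"] for item in backlog])
```

Even if you never automate this, writing the factors down as if you had to score them forces the conversation about what actually matters.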
We always want to fix a problem (bug), but if the package is still functional, we need to weigh the value of fixing it against the other tasks demanding time and resources at the top of the backlog.
Requests for new applications certainly generate new backlog items, but you can also be looking for major and minor updates to applications you've deployed in order to add to your backlog yourself. An update nobody is asking for may sit on the bottom of your backlog and never get done until you deem its business value has changed. Keep in mind that a backlog is not a task list you seek to complete and should include things that would be nice to do, even if you don't feel things will ever let up enough for you to get to them.
A user story describes the work to be done in a way that conveys who it is for, what needs to be done and why it needs to be done. The template for such a statement is: "As a [role], I can [feature] so that [reason]". For package deployment, here are a couple of examples of user stories:
As a general user, I need the latest version of Adobe Reader so that I can view PDF documents with the least exposure to potential security risks.
As a user in the graphics department, I need Corel Draw so that I can manipulate files being sent from a contractor.
To guide a user story, we also may include "acceptance criteria". This helps to set any necessary limits to how the user story is executed. If certain defaults need to be put in place, if known permission changes are required for operation, if specific file associations or other installation preference choices are needed—all of these are good examples of what may be listed with a user story as acceptance criteria.
There is also a thing called "Definition of Done" which is essentially a set of acceptance criteria that applies to all user stories. This way you need not write that all packages must be in Windows Installer format as criteria for every user story you write (if such is an organizational requirement of yours). The Definition of Done is a living document to be well-understood (and updated as necessary) by the entire team.
All that said, a user story should speak to what needs to be done and why, but not how. While you may have an organizational standard that dictates some criteria, the person executing the user story should have as much freedom as possible to come up with the fastest method to satisfy it. An answer file, a simple command line, or a repackaged custom deployment: the best approach differs depending on the application and acceptance criteria, and ideally the choice is up to the person doing the work. This has the added benefit of making the work more satisfying; rely on your team's skills and creative capacity to get the work done rather than dictating how they are to do their job (where possible).
Working this way yields rapid results. In some cases you may find that by moving quickly and not documenting everything up front, the result satisfies the user story and acceptance criteria but ultimately proves unacceptable due to something overlooked. In such a case, a new ticket should be added to the backlog to address it. Taking this iterative approach means you may need to revisit some packages, but such cases should be the exception, and overall packaging output will be faster (though admittedly at the expense of some packages potentially being deferred to the next sprint in order to get them just right). Sprint?
A sprint is normally a two- to four-week period in which an agreed-upon amount of work is to be completed. For a packaging environment, a one-week sprint may be appropriate. I recommend starting with a sprint a week longer than what you first determine to be a good length, knowing you can always shorten the period once you've made it through.
Unlike a simple task list you are always working on, this process means you bite off a sprint's worth of work, call it done, and then move on to the next sprint's worth of work. No packages or bugs will be delivered to production during that period, so a short sprint length will probably seem more palatable when starting out.
As for who does what, normal Scrum practice dictates that developers self-assign work. So rather than assigning a package to a specific admin, the admin should be permitted to look at what is not yet assigned in the sprint and take on what they are comfortable with. Your more advanced team members may still need to do more of the difficult packages, but this gives the team a chance to change things up and expand their knowledge by biting off more challenging work as desired. It is a second practice that may help to increase job satisfaction among the team.
Story Points and the Estimation Process
To determine how much can be done in a given sprint, each package and bug needs to be estimated with a metric referred to as a story point. The number is completely relative to other tasks. The values can be whatever you like, but most prefer to utilize the Fibonacci sequence (where the next number is found by adding up the two numbers before it).
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144
If you consider a typical "easy" package like Adobe Reader to be an 8, maybe you'd call a more complex application like ArcGIS a 21, and a bug requiring a simple change to an existing package a 1 or a 2. There is no correct answer, but the team should come to a consensus on the estimate. A good way to establish it is to play a little game called "planning poker" (an online version may be found at planningpoker.com). Everyone on the team discusses the task and comes up with a story point value, but doesn't share it until everyone has decided on his or her own estimate.

This works very well: if everyone picks the same value, you can move on with no need to discuss it further. However, if one person has a much higher or much lower number than the others, it generates a very valuable discussion. Why do you think WinZip is so easy, Bob? Don't you know you need to test all the file type associations and their ability to be successfully reverted to their original state upon uninstall? One person may know a better or faster way to satisfy the user story, or perhaps not everyone is interpreting it the same way; this process helps everyone get on the same page and agree on how much work the task is. Note I didn't say how long it will take to do. These numbers are intentionally NOT tied to an amount of time; they are deliberately more abstract.
An 8-point ticket may equate to 4 hours or 4 days; it really does not matter when it comes to estimation. Simply estimate the story point value for a task by comparing it to an estimate everyone understands and agrees upon (like Adobe Reader). Is it much harder or much easier? A little harder or a little easier?
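The mechanics of a planning-poker round are simple enough to sketch. The helper, team names, and estimates below are all hypothetical; the only real rules are "estimate privately on the agreed scale" and "reveal simultaneously":

```python
# A sketch of one planning-poker round: everyone estimates privately,
# then all values are revealed at once. Names and numbers are made up.

FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]

def reveal(estimates):
    """Reveal all hidden estimates at once; flag whether discussion is needed."""
    assert all(v in FIBONACCI_SCALE for v in estimates.values()), \
        "estimates must come from the agreed scale"
    values = set(estimates.values())
    if len(values) == 1:
        # Everyone agrees: record the estimate and move on.
        return "consensus", values.pop()
    # An outlier (much higher or lower) triggers the valuable discussion.
    return "discuss", sorted(estimates.items(), key=lambda kv: kv[1])

round1 = {"Alice": 8, "Bob": 2, "Carol": 8}   # Bob thinks WinZip is easy
status, detail = reveal(round1)
print(status)   # Bob's low estimate sparks a conversation, then re-vote
```

The point of the simultaneous reveal is that nobody anchors on the first number spoken aloud; the disagreement itself is the output you want.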
But how can you estimate the difficulty of an application with which nobody on the team has experience? In these cases I'd suggest looking to the ITNinja software tips library. Yes, here is the brief plug in this article: by looking the software up at ITNinja.com you have your best chance at establishing a good estimate, because there is a crowd-sourced difficulty rating, a list of tips, and insight as to whether people are repackaging it and whether it is a Windows Installer setup: all things that should help you effectively estimate a package development user story. Perhaps the answer to how to deploy it is right there waiting for you, and what you thought to be a difficult package just got a whole lot easier.
What needs to be included in the estimate? Perhaps it is part of your Definition of Done, but consider the need for documentation and testing (installation and removal) and ensure it is part of your estimation process. When your sprint is complete, these packages need to be ready to deploy—not to enter the next stage in a waterfall process. Done is done and the result of a sprint should be potentially releasable work. Maybe you won’t deploy it right away for any number of reasons, but the work should be 100% done with only the task of its actual distribution being the outstanding step in getting it out.
So what fits in a sprint? What is the capacity of a one- or two-week sprint? We calculate and adjust this from sprint to sprint based on how well we met the goal set for the previous sprint. The measurement of what fits is calculated in story points and is referred to as the team's "velocity".
For the first sprint, you'll have to take a bit of a blind stab at a starting point. Just look at the list of packages and the length of your sprint. Suppose you think you can complete 5 packages, each with a story point value of 8. That would put your velocity at 40 story points. Take the top 40 points' worth of items off the backlog and you've got yourself a sprint. This represents a working period in which you have estimated you can complete the tasks with the most business value. At the end of the sprint, you will have the most important things completed and ready to deploy, having made the best possible use of that time.
Now let's suppose the sprint ends and your team didn't get one of the packages done on time. It is up to you whether to extend the sprint to get it done or to take that incomplete package and put it at the top of the next sprint. Either way, you've determined your velocity is actually 32 and not 40. So next sprint, you put only 32 story points' worth of work in the sprint, giving the team a better chance of meeting its goal. Say you then finish early; what then? Simply end the sprint and start the next one, adjusting your velocity as your results dictate. But avoid adding to a sprint in progress if at all possible: the goal of working as a team to meet estimates is easily sabotaged by scope creep. It is important to help the team succeed. By being agile and keeping your sprints short, it should typically be acceptable for any important work to wait for inclusion in the next sprint.
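The fill-and-adjust loop is mechanical enough to sketch. The point values and package names below are illustrative, and the simple "take items off the top until the velocity is spent" rule is one reasonable reading of the process, not a prescription:

```python
# A sketch of filling a sprint to the current velocity and then adjusting
# velocity from actual results. Point values and names are illustrative.

def fill_sprint(backlog, velocity):
    """Take items off the top of the backlog until the next one won't fit."""
    sprint, total = [], 0
    for points, name in backlog:
        if total + points > velocity:
            break
        sprint.append(name)
        total += points
    return sprint, total

backlog = [(8, "Adobe Reader"), (8, "WinZip"), (8, "Office upgrade"),
           (8, "Java update"), (8, "Corel Draw"), (2, "Fix reboot bug")]

sprint, planned = fill_sprint(backlog, velocity=40)  # first sprint: blind stab
completed = 32            # suppose one 8-point package slipped
next_velocity = completed # next sprint gets only 32 points of work
print(sprint, planned, next_velocity)
```

Note that nothing here tries to explain *why* only 32 points got done; the measured result simply becomes the next sprint's budget, which is exactly how interruptions get absorbed without being estimated.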
Okay, but what about time needed for things done outside the sprint tasks? Fires need to be put out all the time. While it is true you cannot estimate unanticipated interruptions, the good news is you don't have to. By constantly adjusting velocity based on the results of the previous sprint, you naturally account for the average time the team is interrupted without having to build it into your estimates. Estimates are strictly relative measurements of difficulty; interruptions and "other stuff" are absorbed by adjusting the velocity of your sprints. A team that is constantly interrupted will simply get a lot less done than a team that is not; the results are what they are. It is not the goal of Scrum to help you get more work out of a day, but to expose how much work you really can do so it can be measured realistically.
During a sprint, the team should be focusing their time on the work needed to complete the sprint, so meetings are to be avoided or kept as brief and productive as possible. There are a few such meetings prescribed by the Scrum process that suit our purposes here:
A Scrum (or "standup") meeting is held daily and runs no longer than 15 minutes, with each member of the team stating three things: what they did yesterday, what they will be doing today, and whether any impediments are hindering their work. Don't have a license key for a product you are supposed to deliver? That would certainly be an impediment, but ideally you'd have made resolving it a requirement for getting the package into a sprint in the first place. Which brings us to another meeting key to the process:
A sprint planning meeting is held prior to starting a sprint, where you go through each estimated package and bug, making sure everyone understands what is needed and agrees it is eligible to be in the sprint. Based on the user story and acceptance criteria, do you have the media, licenses, instructions, and so on to successfully complete the work? If the answer is no, it should not be in the sprint.
At the end of a sprint, there is another meeting: the sprint retrospective. Here, the team is encouraged to discuss and record what went well and what did not go well in the sprint, with the goal of identifying changes you might make to improve the process. Is the sprint length too short or too long? Is there something that should be added to or removed from the Definition of Done? With good feedback, the team will steer the process in a direction that helps it evolve toward the best it can be for your team and organization.
When the sprint is complete, you'll have a set of tested, ready-to-deploy packages and bug fixes that can be passed to your systems management team for deployment. I realize that in many organizations the packagers and the deployment team are one and the same. When that is the case, I'd suggest a period of deployment before moving to the next sprint of package development. Maybe that is a couple of days, maybe a week; perhaps it will vary, and you can determine a set amount of time based on the number of packages generated by a sprint.
Much of this you might do already, and some of it may seem forced to better apply to an agile methodology, but take the parts you think valuable and see what you can do to evolve your package development process.
What do you get for the effort? Just having an established process with some rules has benefits of its own, but I feel the biggest benefit is a double-edged sword: visibility. You (and everyone else) will have an understanding of priorities and of progress against those priorities. The questions "when will this package be done?" and "when will this bug get fixed?" become things you can answer more definitively, and stakeholders can look in and see where the work they are interested in sits. Consider too that you'll grow able to show definitively how much work can get done in a given period, making clear the reality of what can be done with the resources assigned. Contention will move toward what should be a priority versus why you can't get everything important done yesterday. In the end, though, if you'd prefer not to have everyone understanding your priorities and the pace at which you are working through them, moving to an agile process may not be for you ;)
About the Author
Bob Kelly is the founder of AppDeploy.com/ITNinja.com and has had a number of IT-related books and papers published. He is a Scrum Alliance certified ScrumMaster and Product Owner. Bob is currently Product Management Director for Dell KACE. For more on Bob, visit http://bkelly.com