said, "Our adaptive approach is at odds with management's usual 'go that way and come back when you're there' thinking. Another issue, however, is that no matter how hard you try to be on top of things, and to communicate them effectively, you can always do better."
Hmm. So the PlanningGame didn't propagate up to the CIO? Seems like getting that to happen ought to be a priority for your superiors; without it, they're effectively preventing the CIO from making informed decisions. Your CIO was wise - this was not your problem.
On the second issue, [trying again] continuous improvement is important, but so is accommodating surprises and disappointments. Sometimes you can't do better - the game is rigged, the deck is stacked, and when you discover that's so you need to not whip yourself or your team. One of the things I admire in the PlanningGame is its use of empiricism to banish the "usual" management demands; sometimes continuous improvement means making no change when no change is called for. -- PeterMerel
The CIO understands the PlanningGame. That's why the CIO supported us. The new intermediate managers didn't understand. And whether you understand or not, it is difficult to change years of built-in company behavior. When your boss's boss thinks you promised to be done on Tuesday, your boss really does need you to be done on Tuesday. The trick is how you do it. See FixedTimeBudget.
One of the difficulties in XP is that it is prone to the TheAthleticSkier problem. My impression is that the curve of performance (ProjectVelocity) versus compliance (how well you adhere to XP practices) is very sharp, like the graph of x-squared. In heavyweight methodologies, you can actually gain ProjectVelocity when you drop from 100% compliance to 90% compliance (which makes me wonder how useful they are), or at least it stays fairly level. -- RobMandeville
I would say that you can appear to gain ProjectVelocity when you drop from 100% compliance ... -- GaryBrown?
<rant>I would say XP being so hard is a sign it's not adequate for a given situation.</rant>
In fact, I'd say this about just about any agile method.
In the few cases where we tried agile methods (on request - it wasn't our idea), customers were quickly disappointed by the lack of financial and temporal predictability. This makes me conclude that agile methods in general, not just XP, are OK for projects where predictability matters less than adaptability - which, from what I have observed so far, isn't that many projects when we talk about small, custom-made business apps.
Another issue to which nobody has given me a reasonable answer so far: how does XP handle handing over a project's codebase to another team?
I want more qualification of your statement, as my experience with agile methods directly contradicts everything, yes everything, you've written above. In my experience, customers were merely upset at the lack of financial predictability, but on the other hand, it drove home 10,000% just what kind of work goes into producing all the features the customer demanded. They quickly realized that software ain't cheap, and scaled back their features significantly, producing a leaner, and ultimately more useful, program as a result. The reason they decided to cut back instead of going somewhere else is simple: the agile methods gave remarkable temporal predictability. Applying knowledge of project velocity enabled, rather than disabled, precise estimation of feature completion.
Per the above comment on "financial and temporal predictability", I dare say that XP has some of the best predictability in the industry. What it sacrifices to get this predictability is the illusion of predictability. XP starts off saying that we really have no clue about estimates, gives us a way to learn how to estimate quickly, and lets us quickly converge on the cost in time of every feature the customer asks for. It doesn't say "We can do this project with six developers in three months" at the start, and most methods that do are just lying about it. How many projects go over time and over budget? Give the developers a couple of months to find their velocity, and XP can tell you what you can have in three months, or how long those eight major features you want will take. If you want a fixed spec and a fixed time budget, with a guarantee that the project won't go over, then you don't want a dev team: you want a plumber. -- RobMandeville
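The velocity-based estimation described above can be sketched as follows. The function names and all the numbers are invented for illustration - this is the general idea of estimating from measured velocity, not a prescribed XP procedure:

```python
import math

# Sketch of velocity-based estimation: measure story points completed
# per iteration, then project remaining work from the observed average.

def measured_velocity(points_per_iteration):
    """Average story points completed per iteration so far."""
    return sum(points_per_iteration) / len(points_per_iteration)

def iterations_needed(remaining_points, velocity):
    """Whole iterations until the remaining features are done."""
    return math.ceil(remaining_points / velocity)

# After three iterations the team completed 18, 22, and 20 points:
velocity = measured_velocity([18, 22, 20])  # 20.0
# The customer's eight major features are estimated at 130 points:
print(iterations_needed(130, velocity))     # 7
```

Note that the estimate is conditional, not absolute: it tells you how long the currently scoped features will take at the currently observed velocity, and it is re-derived every iteration as both change.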
Hm, the rant above could have been written by me (but probably wasn't, or at least I can't remember writing it, and the language doesn't quite sound like me to me).
So your statement is that, apart from customers being initially upset, temporal and financial predictability reached levels acceptable to the customer after a few iterations, while doing small, custom-made business apps. Right?
Here's the context I work in: a modest pool of developers has to work on several different projects for different customers. The pool is nevertheless too large - more than 20 people - to be managed as a single team of programmers.
The projects, although mostly alike in their nature, differ in size from a few man-weeks to maybe several man-years.
Some projects have to be finished in the shortest possible time, and have all the budget approved upfront. Others have to be stretched over a longer time, since they have a monthly or weekly budget to spend for a defined period of time.
Most customers, however, insist on fixed-price projects. Since change always happens, there is a rather heavyweight process for getting changes approved by the customer, including an estimate of the changes to the price and deadline of the project - everything else we tried put us in the unpleasant situation of not getting the customer to pay.
Since change has to be evaluated against something, we need to be heavier on documentation than agile methods prescribe or recommend. For example, a one-man-year project requires about 60-70 pages of documentation for us to feel safe.
What happened when we were required to do a project in an agile-ish way (it's a real-life example): the customer agreed to pay for all effort, but wanted to start coding right away, based on a specification he created - already not agile, but you can't argue with the customer. Due to improper requirements analysis, changes made up maybe 90% of the total effort. The changes were mostly meaningless, i.e. there were cases where we recoded the same feature four or five times, sometimes just having to switch back and forth between two functional variants because the customer couldn't decide on one of them. Most meaningful changes could easily have been found to be necessary right from the start, had we been allowed to do a proper requirements analysis. Our estimate is that the project took maybe three times longer to complete and was maybe more than five times as expensive as it should have been - as it would have been, had we been allowed to follow our usual process.
Another example: we are now working for another customer in a more agile-like way - it's not yet agile, for both customer-related and developer-related reasons. We agreed on a two-week delivery rhythm, on something like a product backlog, and on a feature list for each iteration - something like a sprint backlog. However, the customer still doesn't want to pay for any kind of testing, insisting instead that he tests and reports errors himself, and the team is not homogeneous and changes over time.

There is constant dissatisfaction on the customer's side: he often expects features to be implemented even if they were not on the last iteration's feature list, which bites both him and us in the ass. Furthermore, he isn't satisfied with not knowing in advance when everything will be ready and how much it will cost - which is simply impossible, since he doesn't want to go through the effort of gathering all requirements up front. Still, there are constraints on his side that make this way of working necessary, and he's satisfied with the quality we deliver. He also insists on development stages - a lump of features covering several iterations, for which he wants a fixed price and a deadline in advance. After each such stage, development stalls for one or two iterations, until the new stage starts.

Still, because we were allowed to do a proper initial requirements analysis, and we do so again before each stage, the project is going much more smoothly, with less overhead from pointless changes than in the previous example, where we were asked to code right away. And there's room for refactoring, so the code base looks almost neat.
None of these problems appear when we are allowed to use our standard way of working: do a workshop, gather requirements, analyse them properly to eliminate contradictions and holes, have a document containing the functional specs approved by the customer, do an estimation and an upfront design, then start coding. These initial phases may involve some haggling about scope, delivery deadline and so on, so they happen iteratively, to some extent. Design precedes coding; the design typically changes a lot during coding, but only in detail, not in the overall application structure. Design is mostly limited to the definition of the data model and of interfaces; detailed code structure is left open to the developers. With this process, we typically deliver within a 10% error margin on duration and budget, in spite of changing teams. What we discovered is that the critical stages are the early ones: if the requirements analysis or the design is sloppy, the project will be late and over budget. If design is skipped, the resulting data structure will probably be awful, and the code grown on top of it will stink, even if it works.
All in all, when we use our standard process, we make the first delivery late in the project. However, overall effort and duration are significantly lower, and predictability, both temporal and financial, is significantly higher.