Can we believe the Chaos report?

The Standish Group have been publishing their Chaos report on the state of software development annually since at least 1995, and the figures have been reproduced all over the place, especially in arguments for why Agile is a good idea. If they are to be believed, the software development industry is in serious crisis, with only 32% of projects completing successfully (in 2009).

But some people have been questioning the figures. Ask yourself: how many failed projects are you actually aware of? I haven't seen many, to be honest.

Interestingly, the Standish Group have never published their data, nor the names of the companies that completed their surveys, which raises questions about the integrity of the research. More to the point, the definition of what constitutes a failed or challenged project is seriously open to question. The University of Amsterdam and the IEEE have highlighted problems with the way the Chaos study has been conducted and have challenged its results as unrealistic.

“…Standish defines a project as a success based on how well it did with respect to its original estimates of cost, time, and functionality”. In other words, they are judging not the projects themselves, but merely the estimates made for those projects, which is something entirely different. The idea that project success can be meaningfully defined as estimation accuracy is clearly nonsense.

Scott Ambler regularly conducts surveys on a host of IT topics, especially but not limited to Agile. According to these surveys, in 2011 traditional projects were perceived to be successful about 50% of the time, and Agile or iterative projects about 70%. Yet in 2010, Agile projects were perceived to be successful just 55% of the time and challenged 35% of the time, with 10% failing. Interestingly, 33% of respondents said that none of their Agile projects had failed.

While these figures are better than those published by Standish, I think there is some way to go before we properly understand how successful our projects really are, especially in an Agile context. Are we all measuring the same thing? In my recent experience, I can think of only two project failures in the last couple of years: less than 5 percent! One was stopped part way through; the other was completed but had to be ‘switched off’ afterwards.

Perhaps we need a new definition of success, failure and what constitutes a challenged project, one that can be applied to all IT projects regardless of method. Then we could realistically assess how successful Agile methods are.

What does success mean in your organisation and what are your success rates? Answers in the comments box below please.

About aterny

Agile enthusiast and evangelist, DSDM practitioner, trainer and coach. Specialist in Agile project and programme management, governance and organisational transformation

3 Responses to Can we believe the Chaos report?

  1. Carl@Equinox IT says:

    As with all things, it's difficult to be objective over time, especially when you're personally involved. I've seen lots of projects that by the strict Chaos definition are “challenged” but, by the internal definition of the project team or its governance group, are judged to be successful all the way through (if we ignore those pesky early estimates that formed the investment case).

    Ignoring the Chaos figures because they are more a measure of estimation accuracy ignores the basic fact that those estimates drive investment decisions. Yes, you could argue that a project can guarantee “success” by massively bloating its business case numbers. But what typically plays out in organisations is consistent downward pressure: business case numbers being knocked back, project plan numbers and contingency allowances being knocked back, project managers putting pressure on team members' estimates…

    Arguably, organisations that ignore uncertainty in these ways get what they deserve when the actual cost comes in, but they may also engage in this practice expecting actual costs to be much higher anyway (which Landmark may have been doing, as it's hard to see how they could fail to learn from such consistent under-estimation).

    What Standish have done recently is of more interest. In 2013, “smallness” became an important factor: whether you're using Waterfall or Agile, you're going to be more successful when your project is small. In 2014, “value” became a consideration, and in the 2015 report you can see complexity as a factor.

    Strangely, and happily, the Standish definition of what’s important is getting pretty well aligned with Agile thinking…

  2. Alan L says:

    In the world of projects there are three (maybe four) metrics: scope, cost and schedule (and maybe quality, but that is usually treated with lower priority).
