This is one of a pair of articles. This one looks at the myths behind the claim that “Balanced Scorecard projects fail”. The second looks at the real reasons they might fail and what you can do about them. First, let us look at the detail behind the most common myths about the failure of balanced scorecard projects.
This article examines the two main types of myth and the statistics associated with them. It tracks down the original articles from which the statistics were taken and the myths created, and examines what those articles actually mean. It also asks why people spread these myths and statistics. A second article explains why balanced scorecard projects actually fail, and what you can do to ensure yours succeeds.
Myth 1: Balanced scorecard projects fail by association with other projects that do not manage change
You will find many claims that 70% of balanced scorecard projects fail. However, if these people actually searched for and read the source material (as I did) they would not make such wild claims. The original article was actually about ‘management system implementations’ failing in manufacturing organisations. Here are two articles that explore the actual source material and de-bunk that myth.
- Do 70% of balanced scorecard projects fail? (No! Read why this is a myth)
- Do 70% of balanced scorecard projects fail? No, and more reasons why…
This is a useful piece of research, because the lessons apply to any implementation of a management system. However, it does not specifically refer to a balanced scorecard management system. The article does not say that 70% of balanced scorecard projects fail. It says that 70% of management system projects fail. That is quite different: a much wider class of projects that includes HR systems, budgeting systems, strategic planning, quality systems (it was, after all, in manufacturing), as well as, possibly, performance management systems.
Also, it only applied to manufacturing organisations. When this research was carried out, was there perhaps a specific, simplistic approach to managing change in manufacturing organisations (or in the sample chosen)? It is a possibility, but the general insight is still useful.
It is very clear from the source material that these ‘management system projects’ are failing to manage the change associated with them. They are being treated as technical implementations, not cultural or behavioural change projects.
Basically, if you fail to manage a management system project as a project that requires the management of change, it will fail.
Myth 2: Balanced Scorecard projects fail by mis-naming the project
Though they might claim that their project is designing and implementing a balanced scorecard, not all performance management projects use proper balanced scorecards as conceived by Norton & Kaplan. Many “balanced scorecard” projects are implementing simple measurement systems or dashboards. They call every report that contains measures a scorecard, or a balanced scorecard, to make it sound impressive.
Here is some older McKinsey research that suggests that many “balanced scorecards” fail, but which then goes on to say that most of these ‘balanced scorecards’ fail the basic test of being balanced: they are just scorecards. See “The Performance Management Dilemma: Why do so many ‘balanced scorecards’ fail?” If these projects are failing and they are not proper balanced scorecards, then do not call them balanced scorecards. (See: not all scorecards are balanced scorecards.)
Why do these “Balanced scorecard projects fail” myths exist?
Both of these myths about balanced scorecard project failure are really about wider classes of project. They simply say that poorly designed management system projects, and performance management projects in general, fail. However, for some reason the perpetrators of these myths choose to refer to them all as ‘balanced scorecard’ projects.
These ‘myth statistics’ exist for a combination of reasons. Partly, it is simply lazy thinking: the perpetrators have never bothered to look up the source research and simply associate one problem with another.
Who benefits by putting these messages out? I think there are broadly three groups:
- People who have experience of one project and generalise to every example. The disgruntled “I told you so” person.
- People who just like making a noise and spreading myths, without having done any proper research, because it gets them attention. The “Listen to me, we are all doomed” brigade.
- People who think they have invented a better way. The “Not invented here” brigade. Boy are there a lot of them!
I have a suspicion that they are all probably hoping to undermine the balanced scorecard approach: an approach that the authors, and the people spreading these myths, have chosen neither to like nor to understand. So they have a go at the balanced scorecard by association. A combination of lack of research and “Not invented here”.
What they forget is that the strategic balanced scorecard approach is based on praxis, not dogma or untested theory. There is a real track record of successful implementations. Of course, that sort of evidence is not in their interests.
However, as the old proverb says, “Those who say it cannot be done should not get in the way of those who are doing it”.
Let us ignore these people. Let us analyse why even a proper balanced scorecard project might fail, and how you can make your balanced scorecard project succeed.