Management tools do not automatically confer strategic advantage. In principle, any commercially available modern management tool, from Total Quality Management to Lean Six Sigma, from Supply Chain Management to Price Optimization Models, is available to any and all paying customers on equal terms. Two competitors in the same industry may employ the exact same suite of management tools, yet it is a good bet that their relative performance will vary considerably over time. I don’t find this particularly surprising: generally speaking, I subscribe to the view of competitive strategy vis-à-vis productivity enhancement tools eloquently expressed by Michael Porter in his 1996 Harvard Business Review article “What is Strategy?” To wit: “Competitive strategy is about being different. It means deliberately choosing a different set of activities to deliver a unique mix of value.” That is to say, hiring a Process Re-engineering implementation team or reinventing oneself overnight as a Learning Corporation will not automatically confer sustainable advantage. Rather, it is how (and whether) those tools are integrated into a portfolio of aligned, mutually reinforcing organizational activities, distinctive from those of competitors, that will most likely make the difference.
This makes sense to me. Nonetheless, I am often astonished by the tendency among many corporate decision-makers to treat the application of some management tool with a fabulous consultant-ese moniker as a “magic bullet” that will effortlessly transform the organization overnight from a laggard into a market-driving leader. Then, as egregiously as they conferred magic powers on the tools, after a few fiscal quarters the decision-makers realize they are not getting sustainable performance improvement, decide in their infinite wisdom that the inherent inadequacy of the tools is at fault, and consign them to the trash heap of unrealized expectations.
This misguided tendency – to ascribe awesome powers to something and then discard it for the wrong reasons – brings to mind one of my favorite management lessons: a timeless exercise developed by W. Edwards Deming called the Red Bead Experiment (actually, what I call “timeless” Deming himself called “a stupid experiment you will never forget”). Deming was one of the founding fathers of Statistical Process Control, itself a prototype of the management tools that abound in our age, and something of an iconic hero to several generations of Japanese business leaders dating back to the 1950s. The phrase “you can’t improve what you can’t measure” is often attributed to Deming, though not always in the right context. A more accurate reflection of his philosophy would perhaps be “measuring the wrong thing is much worse than not measuring at all” – which brings us back to the Red Bead Experiment and its lessons for today’s managers in the use and misuse of performance management tools.
The Red Bead Experiment is quite simple. It starts with the simulation of a factory tasked with the sole objective of making white beads. The factory’s customers will only accept white beads; beads of any other color are rejected as unacceptable. In the simulated experiment we represent the operations of the factory with a sampling device that contains a total population of 80% white beads and 20% red beads. The red beads in turn represent defects caused by one or more organizational or operational flaws (such as poor design, faulty machinery, improper order communication, inadequate resource allocation, shoddy quality control and similar shortcomings).
In the first step of the experiment a manager selects an operational team consisting of six workers, two quality inspectors and a chief inspector. This team simulates the factory’s “production process” as follows: every day, each worker draws an independent sample of 50 beads from the sampling device. When a sample is drawn, each inspector separately counts the number of red beads in the draw and reports that number to the chief inspector, who records the results. This initial simulation can run for several days: by the end of, say, four days, each of the six workers will have drawn four independent samples of 50 beads, the number of red beads (i.e. “defects”) will have been recorded for each draw, and the results will be averaged to produce a consolidated “performance result” for each worker over the period.
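The production process just described is easy to sketch in code. The following is a minimal simulation, assuming each draw is an independent random sample from the fixed 80/20 population (the worker names and day count are illustrative, not part of Deming’s script):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

RED_FRACTION = 0.20   # the system contains 20% red beads, 80% white
SAMPLE_SIZE = 50      # each worker draws 50 beads per day
WORKERS = 6
DAYS = 4

def daily_draw():
    """Count the red beads ("defects") in one independent draw of 50 beads."""
    return sum(1 for _ in range(SAMPLE_SIZE) if random.random() < RED_FRACTION)

# Record each worker's red-bead count for every day, then average
# to produce the consolidated "performance result" per worker.
results = {f"worker {w + 1}": [daily_draw() for _ in range(DAYS)]
           for w in range(WORKERS)}

for name, draws in results.items():
    avg = sum(draws) / len(draws)
    print(f"{name}: draws={draws}, average defects={avg:.1f}")
```

Running this produces exactly the kind of scoreboard the chief inspector compiles: some workers look “better” and some “worse”, even though every number came from the same 80/20 urn.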
At this point the experiment calls for the manager to employ a combination of suggestions, processes, incentives, threats and so forth (which we can think of as “management tools”) to extract better performance from the workers. For example, the manager may tell one worker whose “defect score” was higher than average to use a different technique when using the sampling paddle to extract the 50 beads (“flip your wrist a bit to the right – yes, like that!”), while telling another whose draw of red beads was lower than that of the group as a whole to “keep up the good work, expand your knowledge of white beads and there will be a year-end bonus in store for you”. The experiment then repeats over several further iterations, each recording different performance results, with the manager constantly discarding and adopting performance tools in response to the results achieved.
The point of all these performance improvement devices, of course, is that they are pointless: the “system” from which the samples are drawn contains 80% white beads and 20% red beads. Actual results will simply reflect random, independent deviations from this 80/20 distribution, and over successive iterations the average of all the draws will converge towards that 80/20 split. The real underlying message is that measuring the effect of any given performance tool (whether it be based on incentive, threat, knowledge or process improvement) is useless without a grounded understanding and (where possible) measurement of the system itself. In the language of Deming’s experiment, if you want to optimize the system for minimal red bead production, first figure out how to change the 80/20 stasis at the system’s heart – and only then use appropriate management performance tools to align the activities of all the organizational resources in a self-reinforcing manner to achieve the desired strategic outcome.
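The convergence claim above can be checked numerically. A short sketch, assuming draws follow a binomial distribution with n = 50 and p = 0.2 (the iteration count of 10,000 is an arbitrary choice for illustration): the long-run average settles at 20% of 50 = 10 red beads per draw, and the draw-to-draw spread that managers mistake for worker skill is just binomial noise of about √(50 × 0.2 × 0.8) ≈ 2.83 beads.

```python
import random
import statistics

random.seed(7)  # fixed seed so the illustration is reproducible

P_RED, N = 0.20, 50  # the unchanging 80/20 system; 50 beads per draw

def draw():
    """One independent draw of 50 beads; returns the red-bead count."""
    return sum(random.random() < P_RED for _ in range(N))

# No matter what "tools" are applied between draws, the system
# keeps producing samples from the same distribution.
draws = [draw() for _ in range(10_000)]

print(f"long-run average red beads: {statistics.mean(draws):.2f}")  # ≈ 10
print(f"observed spread (std dev):  {statistics.pstdev(draws):.2f}")  # ≈ 2.83
```

With a standard deviation near 2.83, individual scores anywhere from roughly 4 to 16 red beads are statistically unremarkable – which is exactly why praising the “5” worker and retraining the “15” worker accomplishes nothing.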
Management tools have proliferated in the years since Deming’s heyday, and many of them offer the potential for real performance improvement. For example, organizations can surgically manipulate the operational levers at their disposal through performance approaches such as Supply Chain Management on the cost side and Price Optimization on the revenue side. However, translating the benefits of such approaches into sustainable competitive advantage requires something more than the mere implementation of these (or other) tools: a granular understanding of each activity underlying the organization’s supply and demand chains, an ability to disentangle and measure the impact of numerous variables on cost and revenue performance, a deep and holistic understanding of the constraints presented by different management and operational decisions, and a transparent view of the full portfolio of activities across all the silos and subsystems throughout the organization. That is no easy accomplishment – which of course is why sustainable advantage is no easy thing.