Administration officials are conducting what one called a “test run” of the metrics, comparing current numbers in a range of categories — including newly trained Afghan army recruits, Pakistani counterinsurgency missions and on-time delivery of promised U.S. resources — with baselines set earlier in the year. The results will be used to fine-tune the list before it is presented to Congress by Sept. 24.
Why worrisome? Here are five reasons this move is cause for alarm:
- 50 measurements. It’s far too many, at least at the executive and public level. The number should be closer to 6-10 if the goal is to show the big picture and enable insightful questions about interconnected metrics. If the White House is beginning with 50 in order to whittle the list down to the ones that matter, that could be useful, but that approach would lead one to believe it is…
- Measuring what it can measure rather than what matters. It’s really very simple–when we measure what’s available, we back into strategy rather than leading with it and defining metrics that reflect the strategy. The metrics are generally very easy to define (though not necessarily to report) when the strategy is clear. When it isn’t, the activity risks becoming an…
- Steering the ship by its wake. Comparing 50 metrics to a baseline set “earlier in the year” will at best give some sense of what has happened over the last several months without necessarily explaining why it happened or what will happen next. One reason for this dilemma is that the process is…
- Taking a long time. I suspect two dynamics are at play: one is probably a push for perfection rather than rough-and-dirty KPIs (something understandable in a political environment); the other is a tendency among data owners to select and massage data to present their performance in the best possible light. Both suck the relevance out of the data, and both are exacerbated by a tendency to spend too much time…
- Defining metrics out of meaning. What I mean by this is expanding and contracting definitions of terms such as “newly”, “trained”, “Afghan”, “army”, “recruits”, etc. in ways that allow wiggle room in what gets reported. When project managers tasked with reporting project data ask, “what do you mean by ‘project’?”, it’s time to return to first principles.
There are other causes for concern, but these are the ones I suspect are most predictive of where the metrics might fall short, and they also provide a roadmap for avoiding that outcome.
There is a bright spot in the article:
Gen. David H. Petraeus, head of U.S. Central Command, “personally gets a daily update — daily, mind you,” on supplies shipped to Pakistan, the U.S. defense official said. “That should give you some sense of how riveted we are on this.”
I have high regard for Petraeus based on what I have read and on what people who have worked with him have said about his intellect and acumen. If there is a leader who can get this right, he can.
Scorecards represent a highly political process, usually far more so than we acknowledge in our work and in the literature. I don’t mean “political” only in the public meaning of the term–I mean it in the sense of power and positioning that occurs in organizations every day. A scorecard is a tangible artifact of a complex and interconnected set of organizational dynamics that usually exist beneath the surface. One of our natural tendencies is to keep those dynamics beneath the surface. Doing so serves a purpose for individuals and groups that is often at odds with the purposes of the whole.
War moves faster than this. One doesn’t have to be a general to recognize that information–better, faster, more relevant–is strategically valuable and paramount to success. It’s not clear to me from the above that anyone knows yet what success looks like.
We can do better.