Behavioral metrics: endless ways to burn money and achieve no real metrics

By Martin Morse Wooster
Senior Fellow, Capital Research Center

(originally posted at Philanthropy Daily)

As readers know, I have strong reservations about reducing philanthropy to programs that can be measured. Of course metrics are fine for some grants. If the Gates Foundation wants to test whether a vaccine works, it can run a randomized clinical trial in which some people get the vaccine and others a placebo, and compare the outcomes to see if the vaccine is effective.

But it’s awfully hard to reduce programs that are designed to change human behavior to variables that can fit into a controlled experiment. Suppose you’re considering funding a rescue mission that judges its success on whether it wins souls for Christ. The moral lessons these missions teach can’t be reduced to data points.

Ricardo Hausmann, who teaches at Harvard’s Kennedy School, raises more objections to metrics in a column he wrote last month.

I am grateful to Hausmann for alerting me to a conference the White House held last year on metrics. “Data-driven, evidence-based policy can be a game changer for people and communities in need,” writes David Wilkinson, director of the White House Office of Social Innovation and Civic Participation. “When we know what works best and act on it, we achieve better results—increase reading levels, decrease homelessness, help more working families join the middle class—while making smarter use of taxpayer dollars.”

I can measure the importance of David Wilkinson and the White House office he heads very simply—I’ve never heard of either him or his office, and philanthropy is what I write about. Moreover, I cannot imagine any set of PowerPoint slides or photo-laden booklets designed by his metrics-mad office that would decrease homelessness or help the working poor climb the social ladder.

What Hausmann argues is that even if you do design a randomized clinical trial, you won’t come up with useful information. Suppose you want to see whether children learn better when given tablets. You’d have to find 300 schools, 150 of which would get tablets and 150 of which wouldn’t. Then you’d do a “baseline survey” to see what the kids know before they get the tablets. Then you’d conduct the trial and see what happens.

But what if the trial runs and the children don’t learn with the tablets? “It would be wrong to assume that you learned that tablets (or books) do not improve learning,” Hausmann writes. “What you have shown is that that particular tablet, with that particular software, used in that particular pedagogical strategy, and teaching those particular concepts did not make a difference.”

The tablet trial would only enable you to test two or three different ways to use a tablet, and it would burn up quite a lot of grant money doing so. Far better, Hausmann argues, to give all teachers tablets and let them come up with their own ways to use them. If they could easily transmit their results, investigators would get dozens of ways to use tablets instead of two or three, and might be able to identify several good ways to teach children that could be duplicated elsewhere.

Another problem with metrics is that grant recipients often transmit the results they think their funders want to see, in order to get more money. This is a very old problem in philanthropy. In Great Philanthropic Mistakes, I discuss how the Ford Foundation in the early 1960s hoped that organizations seeking its “Grey Areas” welfare reform grants would spontaneously organize around innovative new ideas. To encourage this, the foundation refused to explain what it wanted. The result was groups guessing at what Ford wanted, Ford refusing to answer, and endless delays.

From a Center for Global Development paper by Hausmann’s colleagues Matt Andrews and Lant Pritchett, together with Michael Woolcock of the World Bank, I learned that there is a term for the process I’ve just described: “isomorphic mimicry.”

“Governments and organizations pretend to reform by changing what policies or organizations look like rather than what they actually do,” the authors write. “This dynamic facilitates ‘capability traps’ in which state capability stagnates, or even deteriorates, over long periods of time, even though governments remain engaged in developmental rhetoric and continue to receive development resources.”

In other words, grantees mimic what they think the funders want even as they pursue the policies they like instead of what the funders want them to do. Andrews, Pritchett, and Woolcock stress a process they call “problem-driven iterative adaptation,” which consists of more local experiments and more freedom to try different ideas (or what they call “positive deviance”) and less eagerness to fulfill protocols designed by mandarins in Washington, London, or Turtle Bay.

“Most social interventions have millions of design possibilities and outcomes depend on complex combinations between them,” Hausmann argues, contending that randomized clinical trials are the wrong way to find out useful information.

I agree. Surely we have learned by now that massive programs designed to change human behavior nearly always fail. The best way to help the poor advance is through thousands of small experiments about what policies might work, rather than a few titanic ones.
