In his Bad Science column this week, Ben Goldacre looked at the evidence for various methods of teaching children to read. As he points out, the evidence either way – for or against phonics-type schemes – is not compelling (as compiled in a systematic review done for the Department for Education and Skills of the last UK government), and clearly better evidence is needed. However, instead of taking a scientific approach to assessing the evidence, a report named ‘So why can’t they read’ from the Centre for Policy Studies proposes running a competition, in which schools would choose which approach they use and then, er, compete for the best results. This would essentially be “a kind of Booker Prize for literacy, perhaps sponsored by one of the large corporations … Every child and all the relevant teachers in the winning schools would then be given an award at a large prizegiving party”.
It’s hard to imagine a less scientific way of deriving new evidence, and Goldacre, rightly, ridiculed it. There are many instances of trials that have been successfully carried out in schools, and though they are not simple, it is at least possible to have some degree of confidence in their results: the proposers of the competition would do well to study them.
So it was something of a relief to see that the Economist, which is running a Readers’ Award in which readers can vote on a shortlist of ideas that the Economist editors believe have the potential to radically change society, includes among the proposals randomised trials of aid and development schemes. This is an approach to be hugely supported. Though the difficulties of doing such trials will be substantial, it is really only by taking a systematic approach to developing new evidence that the new evidence can ultimately be trusted.