Tom Bennett


Category Archives: research

What’s red and green and pointless? The endless, anxious debate about the colour of marking.

What’s got two thumbs and couldn’t give a damn what colour his marking pen was?

*points two thumbs at chest*
Me!

The meme du jour of horror flicks is to have the final frame hinting at the imminent return of the hellish antagonist- see: Carrie; Saw; Freddy; Jason, ad nauseam. Well, here's another teaching myth that I thought had been staked a long time ago, but apparently keeps rising from the grave with the certainty of sunrise. Does it matter in what colour you mark students' books?
No, it doesn't.
I only mention this because someone emailed me this question recently, and I had to rub my eyes and pinch myself (not easy) to check if I was dreaming. Are people still asking this? Apparently, yes. Dracula has returned. So it’s time to dip my crossbow bolts in holy water and bless my silver candelabra, and get ready to knock the brains out of this one, although believe me, it won’t take much.

When I started teaching, this was received wisdom; it was dogma; it was part of the catechism of the Church of Progressive Teaching: do not mark in red ink, I was told. When I asked why, I was solemnly told that it was 'bad'. Which is great, because for a minute I thought they were going to be vague about it. I doubted it then, because it just seemed so counter-intuitive. How could it matter in any real sense what colour I used? But, like many axioms absorbed in the infancy of one's education, I complied, and dutifully stocked up on soothing, somehow more supportive shades of emerald. I wondered then which shade in particular was supposed to have the best effect. I think more work needs to be done.
But let’s settle this. There is no research whatsoever to support the view that marking should be done in green ink, purple, vermilion, or a thousand other shades of the spectrum, as a preference to red. Let me repeat that: there is none. So why do so many people think it’s true? I’ve investigated, so you don’t have to. Jus’ doing my job, ma’am.
Well, what there has been is an enormous amount of psychological investigation into colour preferences in an enormous number of groups. There have also been a lot of studies that have looked into the symbolic associations that people have towards certain colours. This is nothing new: people have always attached meaning to just about everything: black has connotations of mystery, magic, fear, fascism; blue connotes calm, the ocean, etc. You can add your own. There are recurring themes that reflect the cultural expectations of the group investigated. Within each group there will be understandable variations ascribed to any particular colour- I find Sunflower Yellow irritating (mainly because I associate it with one of the companies I used to work for), whereas I'm told by SENCOs that autistic children apparently fall into soporific calm at the sight of it. I have no idea.
This kind of thing can be studied quite easily- get a sample, ask them what they associate with each colour, collate the data. Or perhaps be even more clever and do it a bit more ‘blind’- don’t tell them what you’re investigating, and show them pictures of people wearing different shades and hues and ask them to express their thoughts towards them, that sort of thing. The point is that this kind of data is easy to gather, and assuredly marketing people have been doing this since dinosaurs walked the Earth and they wanted to know what colour of loincloth sold best to AB1 Hunter/ Gatherers.
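Collating that kind of survey really is trivial. Here's a toy sketch in Python (the responses are invented, obviously; no actual Hunter/Gatherers were polled):

```python
from collections import Counter

# Invented survey responses, purely to show how simply this sort of
# data aggregates once you've collected it.
responses = {
    "red":   ["danger", "anger", "passion", "danger", "warmth"],
    "green": ["calm", "nature", "envy", "calm", "growth"],
}

for colour, words in responses.items():
    # Tally each association and report the most common ones.
    print(colour, "->", Counter(words).most_common(2))
```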
Of course, the idea that the colour of your marking pen matters goes beyond even this: the idea seems to be that people associate the colour red with aggression, threat, danger and negativity. People- seemingly quite sensible ones- have expressed the opinion that they remember their own books being 'covered with red ink,' in a process that they describe as traumatic. Ergo, we should stop using red ink and get busy with shades more arboreal.
Show me the research
Well, quite. So what research is there to support this? I'm happy to report that the answer is 'bugger all'. Curious, I decided to do a bit of digging, and I was amazed to see that nothing on paper could explicitly support this theory at all. In fact, even the most recent research seems to struggle to prop it up, or barely offers any insight into the question except in the most tangential way. To give one example, a 2010 study in the European Journal of Social Psychology by Rutchick, Slepian (of the gorgeously named Tufts University) and Ferris offers the cutting edge of research into this area, and frankly, it's a hoot. It shows, or attempts to, that using red ink can prime markers (not students) to tackle papers more aggressively, more critically. Its central premise is that just seeing red (ah, such a loaded term) is enough to make markers…well, see red. Those using red pens in the research seemed to grade papers lower, and notice a higher level of mistakes than the other, non-red-ink group. They also seemed to indicate that, when confronted with word stems to complete, they usually went with more negative, aggressive ones over the altogether more harmonious, Scandinavian alternatives. Let me give you some examples; see how you do. Finish these words:
FAI_.
MIN_ _.
_LUNK.
WRO_ _.
Now, I don’t know about you, but I went straight for ‘fail’, ‘minus’, ‘flunk’ and ‘wrong’. And so did more of the red ink group. It’s results like these that convinced the researchers that they were on to something with their red-ink bad hypothesis. The blueys, incidentally, had higher proportions of ‘fair’, ‘minty’, ‘clunk’ and ‘wrote’ apparently. You can’t argue with science.
Except that you can certainly argue with this science. For a start, what's to say that the 'negative' connoting words weren't simply more common idiom? 'Minty' isn't exactly something that trips off my tongue regularly, but then, I don't sell toothpaste. And what's to say that there weren't fewer toothpaste salespeople (for example) in this group than in the other? It's these kinds of uncontrolled variables that knock the guts out of a piece of research, because without reassurances that one group isn't unfairly weighted with certain types of respondents, there's no way of knowing whether the difference you've measured comes from the ink or from the people holding the pens.
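To make that concrete, here's a toy simulation (mine, not the paper's; every number is invented) in which pen colour does precisely nothing, but one group happens to contain more pedants:

```python
import random

random.seed(1)

def mean_errors(n_markers, pedant_rate):
    # Average error count found by a group marking the same script.
    # Pedants spot more errors; pen colour appears nowhere in the model.
    totals = []
    for _ in range(n_markers):
        is_pedant = random.random() < pedant_rate
        totals.append(10 + (5 if is_pedant else 0) + random.randint(-2, 2))
    return sum(totals) / n_markers

# Suppose the red-pen group just happens to contain more pedants.
red_mean = mean_errors(50, pedant_rate=0.6)
blue_mean = mean_errors(50, pedant_rate=0.4)
print(f"red-pen group:  {red_mean:.1f} errors found on average")
print(f"blue-pen group: {blue_mean:.1f} errors found on average")
# A 'red pen effect' duly appears, manufactured entirely by who
# happened to end up in each group.
```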
What about the other part of the research- that the red inkers were more stringent/ harsher/ more negative in the number of mistakes they corrected? Again, we need to know if there was any meaningful way of making sure that one group wasn’t unfairly (or should that be unfailly?) loaded with pedants, grammar Nazis and careful essayists. I checked the paper to see how the group was selected. It says, and I quote:
‘The current findings are qualified by additional limitations, primarily concerning the participants in the studies. Due to time constraints associated with conducting the experiments in a realistic setting, little was known about the participants beyond their presence in the university environment; their age, ethnic background, level of education, and other factors were not assessed. Several of these individual differences, such as verbal ability, educational background, and field of study, could influence participants’ ability to detect errors, their propensity to mark them, and the harshness with which they make evaluations.’
You don't say. So in essence, we have a bunch of people marking with red pens, and a bunch of people marking with blue pens, but we don't know if there are any factors in each group that could produce the results that we found. But we think it's got something to do with the pens. In fact, it's not just the pens: the authors 'propose that the red pen effect is driven by increased accessibility of the concepts of errors, poor performance, and evaluative harshness.'
Or, if I can reformulate that again, 'we don't know why, out of all the possible differences between the two groups, there should be a statistical difference in such things as the number of errors recorded, etc., but we've decided it's the colour of the pen ink, because that's what we're investigating.' Ladies and gentlemen, it's research like this that makes your line manager tell you with an air of authority to ditch all your crimson ballpoints and go green in the classroom.
So what do the authors think of this inability to account for the variations in the participants? 'However, these uncontrolled differences should manifest as random variability, and thus make it more difficult to detect the effects we report.' In other words, 'We don't know if they have any factors that could affect the experiment, but because we don't know we reckon it should all work out jes' fine, thank 'ee.' Give me strength.
I often blow pretty hard about social science, for reasons just like this: although I don't necessarily believe that all social science should be circumscribed by the methodology of the natural sciences, I do think that whenever it makes a specific empirical claim about the way people think and routinely act, there should be some kind of empirical method to show that this is the case, and that the experiment can be meaningfully reproduced in other situations, tested and confirmed. If all you want your paper to do is to start a conversation, or to add to a debate, then by all means keep it anecdotal, or unrepresentative, or subjective. But when you start trying to prove something predictive, or worse, start telling others to change their behaviour on the basis of your research, then you need to turn up to the fancy dress party with more than a fig leaf, saying you've come as Adam.
Surely there’s more evidence?
Hang on, I hear you say, is that all the evidence there is? Well, yes, at least on this specific topic. Oh, to be sure, there are many studies that show that certain colours have certain connotations amongst certain people. And there are many other studies that show that certain colours can 'prime' us to respond in a variety of unconscious ways. But nothing- and I believe I'm throwing my perfumed gauntlet out here- NOTHING to suggest any clear link between the use of the red pen and…well, what is it the critics say is happening anyway? For one thing, the claims they make are maddeningly vague: red ink is 'negative'; it's 'discouraging'. Oh really? And how would you even go about measuring that kind of thing? How on Earth could you control for it? Do we ask 1000 children aged 14 to describe their feelings towards corrections in their books? And another 1000 to say how they would have felt if it had been in mauve rather than vermilion? *slaps forehead at the boneheaded nonsense of it all*
But hang on: our heroes have something to say about the claimed ‘effect’ of red ink on students:
‘To our knowledge, this demoralization has not been empirically demonstrated, and would be an important complement to the current findings.’
That might just be the understatement of the red-inked century. Let me just repeat that: 'no empirical evidence'. No studies. None. No proof whatsoever other than a gut feeling in educators that when kids look at a book covered in red ink corrections, they get a funny feeling in their tummies, and perhaps it's the red ink that's to blame? As many people have commented, it would seem perhaps more intuitively likely that people brought negative associations to the fact that they were corrections rather than the fact that it was red. After all, red can have millions of associations: danger, certainly- the red of a wound- but also thrills, passion, romance- the blood racing in your veins because you're alive, for example.
Finally, I would like to make one point: maybe I WANT kids to feel a little bit alert when they see red on their books? Maybe the connotations aren't exclusively negative, but rather represent a state of heightened alertness corresponding to increased attention paid to corrections. Maybe, just maybe. The point is that, as Ben Goldacre famously repeats, 'things are a bit more complex than that.'
And I sincerely hope that I haven’t offended anyone by typing in black ink. Click here for a more soothing draft in magenta.
Above all, I think what particularly offends me about this subject is that, were it relating to an aspirin, or a new technique for removing surgical stitches, it would be subjected to an enormous level of scrutiny before it could be released, as it were, into the wild. Not so in the field of educational science, where mutants, hybrids and sickly, runtish ideas are set free to breed and settle where they will. If this was an aspirin, it would have been laughed at; in education, it's adopted as a mantra. It would be laughable were it not so ubiquitous, or so representative of the slavish manner in which teachers are expected to assume a new position, no matter how servile, or follow any fashion, no matter how impractical, if their masters demand it.
I haven't been able to track down exactly when this idea escaped the laboratory that spawned it, or how, or by whom. I can find reports of schools in the UK (primaries at first, then secondaries), Australia (Queensland) and America (loads of places) adopting the abolition of scarlet scribing dating back to 2003, in this report from the BBC. To quote:
‘Penny Penn-Howard, head of school improvement for Sandwell Council, said: “The colour of the pen used for marking is not greatly significant except that the red pen has negative connotations and can be seen as a negative approach to improving pupils’ work. Therefore, it is quite legitimate for a school to have a consistent policy that it uses a different colour.”‘
Which is another example of why I’ll be glad to see the back of some people who claim to be employed to ‘improve schools’.  If it isn’t significant, why have a policy? How does red connote ‘negatively’ etc? Does that mean that tomato sauce and Christmas have negative associations? Blimey, someone better call the head of marketing at Coca Cola and tell them they’ve been getting it all wrong. Penny Penn-Howard has the answers.

My Stella Challenge to the red-ink flat-earthers

I'm not James Randi, and I don't have a million dollars, but I bet anyone a pint that they can't produce meaningful research that shows that the colour of the pen has any significant effect at all. I suspect my pint jar is safe from the toffee hammer for some time. In the meantime, I'll be choosing to mark in red as much as humanly possible, more in spite of the apparently still current dogma than for any other reason. Mind you, it stands out nicely against all the black ink of my students' work.
Plus of course I tend to write in human blood. I like the connotations. It shows I care.

New study shows that something is possibly true but it might not be: the Fog of Social Science.

‘Kill me.’

These are apocalyptic days for many school schemes; in the present age of neo-austerity, it seems like anything not related to life support and child protection is being pared down to the marrow. I’m not sure people are aware yet of how much is on the way out, thanks to a cartel of financial hucksters and their sub-prime lending habits that made the lifestyles of termites seem modest and restrained. Some of the things on their way out were definitely dirty bathwater: the GTC, for example. But some were babies. As the FT comments:

‘The schools resource budget, which covers day-to-day running costs, will rise in real terms by 0.4 per cent. But a rise in the number of pupils will mean current spending per pupil will be cut by 2.25 per cent…The education department’s budget for buildings, which is almost entirely spent on schools, will be cut from £7.6bn to £3.4bn – a real terms cut of 60 per cent….Michael Gove, education secretary, admits that many schools will enter a tough period.’

Which means we'll be holding wet hankies on the platform as we watch many extra-curricular schemes, clubs and so on wave at us through the steam from the train now leaving the station. This, to be fair, isn't news any more, although many in schools still have to adjust to this reality: if it can go, it will. I've been reading professional Dear John letters from LEA consultants and liaisons all week, wishing me well as they pack their belongings into red handkerchiefs tied to sticks as they set out for London with their little black cats.

One of many, many schemes teetering on the end of the gangplank is Sing Up, (click on the link while you still can), an organisation that, unsurprisingly, believes that 'Every child deserves the chance to sing every day.' While you could greedily take issue with the origins of this alleged right (is it intrinsic? Divine? Legally prescribed?), I would never antagonise such a well-meant, noble cause. If I were Educational King for a Day (it keeps me awake at night sometimes, plotting and dreaming…) this is the kind of group I would give money to; I want schools with choirs; I want schools with voice coaches and singing lessons; I want parents to set up Paparazzi Nests on Talent Nights, weeping and filming, weeping and filming. This is the world I want.

But for Sing Up, it’s the last scene in Casablanca, Braveheart, Butch and Sundance, Angels with Dirty Faces. It’s curtains; the scheme will be funded up until 2012, and after that, all is silence. (I presume that after everyone has gone home from the Olympics, Britain will dramatically revert to Blitz-sepia, rationing will be reintroduced, and Park Lane will become a gated community. I suggest you buy bottled water and plenty of tinned goods otherwise you’ll be eating your hands or something.) From looking through their website, this appears to be an event we should genuinely regret. Plus ça change.

Do not approach these men.

But where there's a cause, there's a claim. In this case, a report was released this week by the Institute of Education, which claimed that projects like Sing Up were enormously beneficial to the well-being of children.
This was reported on the BBC, presumably from a news release via agencies such as the Press Association,  and was obviously proudly trailed on the Sing Up website. Now I don’t wish to put the boot into what, to me, appears to be a fine and meaningful project. But the way in which this research has been positioned has a lot more to do with marketing and a lot less to do with authentic science. And incidentally, I’m not taking issue with the people who conducted the survey, either, and least of all with Sing Up. But it’s a perfect example of how social science is misused to justify values and interests in education.

For a start, the report was commissioned by Sing Up themselves:

‘The Institute of Education’s independent three-year study, commissioned by the Sing Up programme, is based on data collected from 9,979 children at 177 primary schools in England.’

The words 'independent' and 'commissioned by the Sing Up programme' placed together in such close proximity must indicate some new, alternative meaning of the phrase independent that I haven't yet heard of, unless they mean something else. This by itself doesn't exclude the research from the realms of credibility, but it should at the very least allow us to reposition the findings in a different context. In much the same way that homoeopaths and cigarette manufacturers are fond of quoting from research that supports their products, it trips alarms when you find out that research has been carried out by vested interests. ('Getting up early is dangerous', a new report commissioned by the National Union of Students warned today. That kind of thing.) This doesn't mean that there is actual researcher bias in this case, simply that the choice to publish or not publish becomes a political decision based on a utilitarian assessment of benefits.

Go on- I dare you.

Secondly, there's the issue of the report itself: try as I might, I can't see it anywhere. And the only link from the Sing Up website to an IOE report takes us to a paper published on their website, in which I can't find any specific reference to the Sing Up program at all. Oh, there's plenty about singing, and lots of claims for the benefits of a musical education. Which means that either I'm looking at an old report, or it hasn't been published yet. Or maybe I just can't find it. Like I say, I might be wrong, but that suggests to me that it hasn't been published in a journal and exposed to peer review and assessment by the academic community. And if that's the case, then mere mortals like myself have no purchase on the information- we rely, of course, on the weight of a community assessment to judge if such material meets the standards of rigour and academic ethics. Until that happens, it's about as authoritative as an opinion piece.

Again (and I know I'm stressing this a lot, but this isn't meant to be a criticism of the report itself, or the project, and I'm at pains to be civil), for research to be meaningful in a public sphere, it has to be subject to public scrutiny. There are a lot of people out there with PhDs. Some of them are Gillian McKeith. One of the first things I learned at university was that there are plenty of opinions out there, and none of them have a guaranteed copyright on certainty.

Then there are the claims, or at least the claims as reported.

a) Singing in school can make children feel more positive about themselves and build a sense of community. I bet it can. So can chess clubs, being in a gang and joining a cult. So can just about any other activity in the right context.

b) There is 'a clear link between singing and well-being'. Could you define clear? Can you define well-being? Pupils that sing feel better about themselves; even assuming we have overcome the definitional challenges of such a subjective term, how on earth can one draw a clear causal relationship between the two, and disentangle that relationship from a million other factors that could accompany the proposed cause and effect? Perhaps being part of a group promotes well-being, and the singing is incidental. Perhaps if you're the sort of person who likes to sing, then you'll also be the type of person who, on average, feels better about themselves. Perhaps, perhaps, perhaps. I'm still not getting a causal relationship here. (A toy version of that last 'perhaps' is sketched after this list.)


c) ‘Children who took part in the programme had a strong sense of being part of a community.’ I don’t wish to be churlish here, but the idea that people who participate in communities feel like they’re in a community doesn’t exactly sound like headline shattering stuff. But thank you, science. I look forward to your assessment of what the effect of punching myself in the pipes feels like.

d) 'A clear inference may be drawn that children with experience of Sing Up are more likely… to have a positive self-concept.' What's your point, caller? It sounds like this means that x causes y, when in fact it shows no such thing, at least by itself. They may be more likely for other reasons. Maybe y causes x, and having a positive self-concept causes people to join Glee clubs, I don't know. But that's the point. I don't know. Nobody does.

e) 'Sing Up children were up to two years ahead in their singing development compared with those of the same age who did not take part in the programme'. Sorry, I thought we had finished with tautologies. Are they seriously implying that children who are involved in singing practice actually improve at singing? You'll be telling me that people who climb ladders get higher up, next. Honestly, it's an open goal.
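As promised, here's a toy version of the 'perhaps' in point b) (an invented model, not the report's data): let a single hidden trait, call it sociability, drive both joining the choir and reported well-being, and give singing itself no effect whatsoever.

```python
import random

random.seed(2)

# Invented model: 'sociability' drives both joining the choir and
# reported well-being. Singing itself does nothing at all here.
singers, others = [], []
for _ in range(10_000):
    sociability = random.gauss(0, 1)
    sings = sociability + random.gauss(0, 1) > 0
    well_being = sociability + random.gauss(0, 1)
    (singers if sings else others).append(well_being)

print(f"mean well-being, singers:     {sum(singers) / len(singers):+.2f}")
print(f"mean well-being, non-singers: {sum(others) / len(others):+.2f}")
# A 'clear link' shows up in the data even though singing has no
# causal effect at all -- the common cause does all the work.
```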

This may sound petty, because at least on the surface, who can disagree with the idea that singing lessons are a great thing for children to be exposed to, and to be made available for as many as want or need them to flourish? I enjoyed singing at school. Others hated it, in much the same way I didn't enjoy the ritual humiliation of rugby in December, where your alleged friends would barrel into you at full tilt in a manner that would provoke charges were they to be repeated off the field. And I certainly would mourn the loss of any scheme that promoted such activities (singing, not assault).

Helen Goddard. Not an ideal role model, to be fair.

But this story nicely summarises many things that are wrong with the use of scientific research in education, and especially social science. Humanities research is commonly used to promote a myriad of causes and interests in schools, and almost always in the advocacy of a new initiative or in an attempt to convince headmasters and teachers that they should be teaching in a particular way, or running a school to a particular model. And that has led to a suffocating number of ideas and initiatives drowning the practice of teaching for decades, each one justified by a clutch of optimistic, hand-picked research and statistics.

And the problem with this is that social science research just doesn’t provide anything like the level of probability that the physical sciences, however problematically, offer. If someone asserts that water boils at 100 degrees at sea level, then I can comfortably and easily assess that theory by testing it to my heart’s delight. But if someone then claims that they have shown that children learn best with a three part lesson then I run into an enormous number of problems:

1. How do I check that their progress wasn’t down to some other factor? Isolating a causal point of origin is almost impossible in an environment as wild and complicated as human interaction, with its plethora of reasons, internal causes, external, invisible factors, and unknowns.
2. How do I create a control group to rule out the above? (A sketch of one attempt appears below.)
3. How do I know I’m not biasing my own research with my own intentions, however implicit?
4. How do I know my participants aren’t skewing the data by some form of bias on their part?

And so on. Social science does not, and never can, offer predictive powers. The pursuit of certainty in the Humanities is a fool's errand, because we can barely claim such a principle in the natural sciences. That isn't to discount social scientific research, but merely to contextualise it appropriately. As the MMR non-scandal showed, even the biological sciences can be subject to misinterpretation, especially when an arbitrary bundle of studies is offered as representative when in fact it is not. Social science is an invaluable commentary on how we live, who we are, and the exploration of meaning in the human sphere. But what it isn't, is science, at least not as Joe Public knows it.
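For what it's worth, the standard answer to problems 1, 2 and 4 is randomised assignment plus a significance test. Here's a minimal sketch of that machinery (invented scores, and note that no intervention is ever applied, so the test should, and does, find nothing):

```python
import random

random.seed(3)

# Invented pupil scores. Random assignment is the whole trick: it
# spreads the unknown factors evenly across both groups instead of
# trying to enumerate them.
pupils = [random.gauss(60, 10) for _ in range(200)]
random.shuffle(pupils)
treated, control = pupils[:100], pupils[100:]
# (In a real trial, the 'three-part lesson' would now be applied to
# the treated group only.)

def mean_diff(a, b):
    return sum(a) / len(a) - sum(b) / len(b)

observed = mean_diff(treated, control)

# Permutation test: how often does merely relabelling the same pupils
# produce a gap at least as large as the observed one?
combined = treated + control
as_big, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(combined)
    if abs(mean_diff(combined[:100], combined[100:])) >= abs(observed):
        as_big += 1
print(f"observed gap: {observed:+.2f} points, p ~ {as_big / trials:.2f}")
```

Notice what the red-pen study earlier was missing: without the random assignment step at the top, the test at the bottom can't tell an effect of the intervention from an effect of who ended up in which group.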

And that’s the shame of it: that education has been drowned in pseudo science, in the name of progress, when what it really represents is the justification of the values of the educational policy makers. The policy is decided for a thousand reasons, and then research is selected or created that justifies the decision.

If you want to say that singing programs should be exempt from deletion in the next rounds of cuts, then you should do so by dwelling on the intrinsic value of the activity itself- singing is an art form, a pleasure and one of the ways in which we express ourselves as humans. You value it or you don’t. But what you shouldn’t do is try to justify its value by reference to an extrinsic factor- ‘it improves well being’ and so on. That’s the argument of the boardroom and the abacus (‘What use is singing?’), and should have no place in our consideration of what is and isn’t a valuable part of a child’s education. (But of course I get the feeling that the values have already been decided: what does the economy need?) And we certainly shouldn’t rely on one piece of social science research to provide justification for a proposal, no matter how well intentioned. Because as teachers, I think we’ve had quite enough of that.

It’s an emergency! For God’s sake, get me a social scientist! Why misunderstanding the aims of research is crippling education.

I’m elbow deep in gizzards this week with the number of geese I’ve slaughtered in the name of prognostication. I haven’t developed an emergent tendency towards serial killing; I’ve just been trying to answer an age-old educational conundrum: do schools need more money? And answering that seemingly simple question led me to question the whole educational research racket, or at least its misappropriation by the people we trust to run the show.

My unconventional approach to divination and revelation was prompted when the government published school-by-school spending figures along with last week's league tables. Although the DfE is being coy, claiming that this publication is purely linked to the aim of greater transparency, we all know that nosey Noras will be asking if schools give value for money. Very sneaky. So how do we know if more money actually leads to better results in education anyway? A BBC report from the 14th of January looked at the evidence:

‘A recent Pisa study from the OECD, compared academic performance across a wide range of countries and offered some support for the government’s view that money is not a key factor. Another study, by Francois Leclerque for UNESCO in 2005, surveyed a wide range of other economists’ attempts to find a correlation between resources and results. Some found a positive correlation. Others found the opposite. Leclerque concluded that, whichever view you took, it was as much a matter of one’s previous belief and opinion as it was of scientific knowledge. (1)

One major study (by Hanushek and Kimko, 2000) looked at pupils' international maths scores and compared them to several different measures of school spending. It is not clear whether spending more on schools leads to better results. Their conclusion was: "The overall story is that variations in school resources do not have strong effects on test performance." (1)

So that’s all perfectly clear then. At least we have all the data we need to make a decision. Not.

Think about what’s happening here: tens of millions of pounds spent, an equivalent proportion of academic labour, the finest minds in education, all focused on one point, one question, like shining a million light bulbs onto a spot and turning it into a laser. Only to find that all you have is a very bright room, and an army of moths dive bombing the window.

If you turned that focus, funding and fervour on to a physical task, you can imagine the mountains that could be built, or abysses excavated. If it was directed to an object of material interest such as ‘how high can a house of cards be built?’ then we’d have the answer by tea time and all be driving home in our 1976 Gran Torinos with the overspend. So why the problem uncovering truths in educational research?

The answer lies in the methodology and expectations of social science itself, and its differences from the natural sciences: chemistry, physics, biology, astronomy, oceanography, etc- anything that is amenable to the scientific method of study. The social sciences- and I'll be coming back to that term later- are the attempt to replicate that method in the field of human behaviour. As the latest marketing meme-worm would say, simples.

What is the scientific method? In essence it is based on the following process:

1. Data regarding physical phenomena are collected by observation that is measurable and comparable.
2. This information is collated and a hypothesis is constructed which offers some kind of explanatory description of the events described by the data; to look at it another way, we discern a pattern in the data that offers the potential to predict or define, usually on the assumption of causality, but often with a purely descriptive intent.
3. This hypothesis is tested by experimentation. The hypothesis is then either immediately discarded with the introduction of this new data, or tested again. The more profound and extensive the testing, the less uncertain the hypothesis is claimed to be. (A toy version of this loop is sketched below.)
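Here's that toy version, using the boiling-water example that appears later (all 'measurements' are simulated, and the tolerance is an arbitrary choice of mine):

```python
import random

random.seed(4)

# 1. Collect measurable, comparable observations (simulated here).
observations = [100 + random.gauss(0, 0.3) for _ in range(20)]

# 2. Construct a hypothesis that describes the pattern in the data.
hypothesis = sum(observations) / len(observations)  # "boils near 100 C"

# 3. Test against new data; discard the hypothesis if it fails.
new_data = [100 + random.gauss(0, 0.3) for _ in range(20)]
survives = all(abs(x - hypothesis) < 1.0 for x in new_data)
print(f"hypothesis: ~{hypothesis:.2f} C; survives this round: {survives}")
# Passing makes the hypothesis less uncertain, never certain: the
# next batch of data is always free to kill it.
```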

I've simplified the process on a similar scale to describing Moby Dick as 'a big fish', so forgive my brevity. There are long-established difficulties with this method that offer challenges to both the philosopher and the scientist: have I tested enough? Is my interpretation of the data biased? Have I collected the data in an ethical manner? Have I performed relevant tests? Are there alternative explanations? Have I mistaken correlation for causality? And so on.

But scientists have one fairly large trump card to play when contesting with chippy Humanities graduates about all this: science seems to work. Your car works; your phone reliably transmits emails of funny dog pictures around the world; planes have a habit of not falling from the skies. If the scientific method isn’t perfect, it’s the closest thing we’ve got.

And of course there is a much more profound question: is anything certain? Rationalists like Descartes would say that there are things that can be ascertained by the pure light of reason itself, such as his own existence (in the much misquoted Cogito, Sum). But what about the world? Descartes’ argument for the proof of an external world is as convincing as the plot line to My Family, and most people (certainly anyone other than lonely, friendless hermits) turn to our observation of the world as the best basis for understanding how things work: broadly speaking, the empirical approach.

But Hume (certainly one of the most readable of the British Empiricists) famously drove a bus through the empirical claims to certainty, by describing all predictive statements about the world (The Sun will rise tomorrow; water boils at 100 degrees Celsius at sea level, etc.) as inductive inferences. In other words, they rely on our assumption that the future will be like the past, which of course is something we can never test. To understand the importance of this, we can look to the example of Popper’s Black Swan Problem; until the discovery of said sooty avian, any European would have said that all swans were white, and they would have had millions of observations over centuries by millions of people to back this hypothesis up. Of course, no hypotheses can ever be established beyond doubt, and any decent scientist is aware of this.

But this isn't a problem of science; it's only a problem of people who misunderstand the scientific method: it never sets out to establish foundational, necessarily true propositions; it only seeks to establish more or less probable hypotheses, nothing more but certainly nothing less. Its enormous success has led many people to become acolytes of this New God, ascribing to it the infallibility normally reserved for the theistic God or his chosen representatives. But science doesn't make these claims. It simply observes, records, considers, and reflects. And when something seems to work, it runs with it. No other method comes close to its predictive and descriptive powers, so until something better comes along, we work with it, and ignore the spoon benders and the homoeopaths who chant and caper, and believe that because empirical scientific claims lack certainty they can be contested, dismissed and replaced with their own particular and peculiar branches of witchcraft and ju-ju.

Which brings me to social science finally, and its germane offspring, educational social science. The desire to apply the methods of the natural sciences to the social sphere is entirely understandable; after all, the benefits that have been obtained from the laboratories and notebooks of the men in white coats have given long life, comfort, leisure time and most importantly, Television and Mad Men. Imagine the benefits we could glean if we turned our microscopes and astrolabes away from covalent bonds and meteorological taxonomy and towards the thing we love and value most: ourselves. Cue: psychology, anthropology, history, politics, educational theory, etc. Now all we have to do is send out the scientists, and sit back and wait for all that lovely data to be turned into the cure for sadness, the end to war, the answer to life's meaning and while you're at it, how best to teach children.

And yet, here we are, still waiting. The example I gave at the start of this article serves as just one illustration. For every study you produce that demonstrates red ink lowers pupil motivation, or brings them out in hives or something, I can show you a study that says, no, it’s green ink that does the trick. For any survey that shows the benefits of group work, there are equivalent surveys that say the same about project work, or individual work, or the Montessori method, or learning in zero gravity or whatever. It is, to be frank, maddening, especially if you’re a teacher and on the receiving end of every new initiative and research-inspired gamble that comes along. The effect is not dissimilar to being at the foot of an enormous well and wondering not if, but how many buckets of dog turds will rain on you that day, and how many soufflés you’ll be expected to make out of it. To quote Manzi:

‘Unlike physics or biology, the social sciences have not demonstrated the capacity to produce a substantial body of useful, nonobvious, and reliable predictive rules about what they study—that is, human social behavior, including the impact of proposed government programs. The missing ingredient is controlled experimentation, which is what allows science positively to settle certain kinds of debates.'(2)

And that, I think, summarises the problems teaching has with the terrifying deluge of educational research that has emerged in the twentieth century and beyond, and the apparently awful advice that has drenched the education sector for decades with its well-intentioned but essentially childish misunderstandings. When I entered the profession I met many old hands who would greet each new initiative with a pained, 'Not that again,' expression in the style of Jack Lemmon chewing tinfoil. At first I thought they were merely stubborn old misanthropes, but now I see that they were at least partially motivated by desensitisation; that they had sucked up scores of magic bullets and educational philosopher's stones catapulted at them over the decades, and had learned to wear tin helmets to deflect as many of them as possible. None of this justifies ignoring new ideas, but it's easy to understand why teachers become immune to the annual initiative.

And yet, even this is to be unfair about the nature of social scientific research and its alleged conclusions. In the field of Religious Studies, for example, I find an enormous deficit of research that claims to point to anything intrinsically predictive or definitive. Much of the research in this area is acutely aware of its limitations, possibly because of the explicit understanding that any discussion of faith matters automatically puts one in the proximity of discussions about truth and validity, opinion and subject bias. Of course, there is a lot of bogus research that deserves to be laughed at too, but it's interesting that in a field so contested one should find such care. Social science only gets itself into hot water when people take its findings as more than what social scientists would actually claim, treating them as possessing some kind of finality and certainty.

Any good piece of social science I have read relating to education is always upfront about the limitations of its method of testing; is always tentative in its assertions, and always hesitates to assert anything substantially beyond the data obtained. But I have also read a great deal of bad research that appears to think itself a branch of physics: this method, it thunders, produces this result. A key problem here is what might be called high causal density: when we attempt to ascribe a social phenomenon to a particular causal precedent, we immediately run into the problem that any one behaviour (such as improved grades or behaviour) is extremely hard to trace back to a given event; there are enormous numbers of factors that could correspond to the outcomes under examination. Thus, if I introduce a new literacy scheme in school based on memorising the Beano, and next year I see a 15% rise in pupils obtaining A*-C in English GCSE, any claim I made that the two were connected would have to wrestle with other possible claims, such as the group being observed being smarter than previous groups; or having better teachers; or being born under a wandering star, ad infinitum. This causal density is particularly noticeable in any endeavour that studies human behaviour, with its multitude of perspectives, invisible intentions and motives. Put simply, people are infuriatingly difficult to second-guess and predict.
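A back-of-the-envelope sketch of just one of those rival claims, plain cohort-to-cohort noise (all numbers invented):

```python
import random

random.seed(5)

# No scheme, no intervention: every cohort has the same underlying 55%
# chance per pupil of an A*-C grade.
def pass_rate(n_pupils=120, p=0.55):
    passes = sum(random.random() < p for _ in range(n_pupils))
    return 100 * passes / n_pupils

rates = [pass_rate() for _ in range(10)]
changes = [f"{b - a:+.0f}" for a, b in zip(rates, rates[1:])]
print("year-on-year changes in pass rate (pct points):", changes)
# Swings of several points, occasionally double digits, turn up from
# chance alone, before you even reach the rival explanations above.
```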

The position is similar to weather forecasting. We might be able, broadly speaking, to predict that winter will be colder than summer. But anything much more specific than that gets harder and harder; even the Met Office doesn't issue long-term forecasts any more; there just isn't any point. And their daily forecasts update every few hours or so; that's because the factors involved, while potentially measurable in principle, are just too complex and numerous to measure in practice. The problem is multiplied when we consider that human behaviour may not, after all, be reducible to materialist explanations, and may therefore escape causal circumscription entirely. The debate over free will is far from over; indeed, it is as alive as ever.
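The classic illustration of why 'measurable in principle' doesn't mean 'forecastable in practice' is the logistic map (a textbook toy, nothing to do with the Met Office's actual models): one deterministic rule, two starting points differing by a millionth, and the forecasts part company within a few dozen steps.

```python
# Logistic map in its chaotic regime (r = 3.9): deterministic, yet
# tiny measurement errors in the starting state swamp the forecast.
def trajectory(x, r=3.9, steps=30):
    path = []
    for _ in range(steps):
        x = r * x * (1 - x)
        path.append(x)
    return path

a = trajectory(0.400000)   # 'true' initial state
b = trajectory(0.400001)   # same state, measured slightly wrongly
for t in (4, 14, 29):
    print(f"step {t + 1:2d}: {a[t]:.4f} vs {b[t]:.4f}")
```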

This problem possibly wouldn't upset too many people (namely, that many people engaged in the field of social science have a shaky grasp of the powers and frailties of the scientific method itself, and produce papers that are riddled with subject bias, observer bias, researcher bias, and the desire to produce something that justifies their tenure and funding), except that as a concomitant to its claims to provide meaningful guidance in social affairs, the field also expects to be used- and sometimes succeeds- in driving the engine of policy making in front of it. And that, dear friends, is where people like me come into the equation.

Here are some of the things that are assumed to be axiomatic truths in the contemporary classroom:

1. Lessons should be in three parts
2. Children putting their hands up is bad
3. Red ink will somehow provoke them to become drug dealers and warlords
4. Every lesson must have a clear aim
5. Every lesson must conclude with a recap
6. Every lesson must show clear evidence of progression, in a way that can be observed by a blind man on the moon with a broken telescope.
7. Levelling children's work is better than giving them grades. Grades are Satanic.

I could go on, but there aren't enough tears in the world. These are just some of the shackles that teachers are burdened with, dogma with which they must comply. Why? Because someone, somewhere produced a study that 'proved' this. And that proof was taken to be gospel, and then passed down by well-meaning ministers, the vast majority of whom have never stepped into a classroom in a pedagogic manner, unless accompanied by cameras.

So that’s where we stand right now; social science being produced by the careless, consumed by the gullible, and transmitted down to the practitioner, who waits at the foot of the well with an umbrella. In this arena, is it any wonder that the teacher has been devolved from respected professional, reliant on judgement, wisdom and experience, to a delivery mechanism, regurgitating the current regime’s latest, fashionable values? No wonder teaching is in a bit of a mess right now. We’re not expected to be teachers; they want us to be postmen.

In this vacuum of credible knowledge, is it any wonder that teachers feel uncertain, misguided, confused about their roles, about the best way to teach, and troubled by the nagging suspicion that the best ways to teach are staring right at them?

The most certain assertions are those that make the least specific claims, and fit the greatest number of observations and data. These are the principles that teachers should be guided by, and that's why your own professional experience is at least as good a guide as the avalanche of 'best practice' and OfSTED criteria that resulted from the misappropriation of science; and in many cases, your own experience will be better. If you have years of experience and genuinely reflect on your practice, if your classes are well behaved, the children express enjoyment and the grades are good, then some would say your experiences were merely anecdotal; but I would say they were a necessary part of professional wisdom and judgement.

In fact, I would say they were better.

A priori, the social scientific method is best used as a commentary on human beings and their behaviour, not as a predictive or reductive mechanism. So the next time you read another piece of educational research hitting Breakfast TV, feel free to say, ‘Oh really? That’s interesting.’ But make sure you hold your breath. And get your umbrella and saucepan out.

1. BBC News, 'What does spending show?', http://www.bbc.co.uk/news/education-12175480
2. Jim Manzi, City Journal, http://www.city-journal.org/2010/20_3_social-science.html
3. http://playthink.wordpress.com/2010/08/03/on-the-limits-of-social-science/
4. http://www-personal.umd.umich.edu/~delittle/Encyclopedia%20entries/philosophy%20of%20social%20science.pdf

 See? I put references and everything this time. That was so people would take it more seriously. Homoeopaths are really good at this, especially when they’re referring to other homoeopaths, quack PhDs and dodgy journals run from the back of someone’s health food shop.
  

Welsh School Children ‘damned to the Hell of Broken Mirrors’ by losing league tables.


A report by the University of Bristol today claims that it has unearthed evidence that the decision in 2001 by the Welsh Assembly to do away with league tables in schools has directly led to thousands of Welsh children being condemned to 999 years in the Purgatory of Ravnak, the Soul-Flayer.

League tables, which still exist in England, were abolished in Wales after claims that they led to schools circumventing real education, and instead focussing on meaningless scams to leapfrog the league rankings; for example by introducing BTECs or other qualifications that were GCSE-equivalent but lacked academic rigour or credibility. It was also claimed at the time that, even if schools were reluctant to engage in these practices- described as 'whorish and anti-education' even by the heartless Tin Man from the Wizard of Oz, yesterday- then they were forced to participate in order not to suffer by comparison with other, less scrupulous institutions who had spotted the scam first, and fallen on it like starving rats.

But this new research claims to give the lie to that, and says that once the decision was taken to dispense with tables, a portal was opened in the space/time continuum that enabled the Damned Legion Hordes of Azazel to cross over into our world and steal the souls of every second pupil in Year 9, and two out of every three in Key Stage 4, due to their particular susceptibility to rap music and badly-spelled swearing.

‘It’s clear,’ said Grand Vizier Phillips, leader of the research. ‘There is a clear correlation between losing the tables, and feeding the furnaces of Satan. It really is a huge pity.’ When asked to respond to allegations that the report had missed the obvious differences between correlation and causation, and that the link between tables and purgatory had not been definitively demonstrated, the Grand-Vizier’s response was unequivocal: ‘A hex upon thee! Vade Retro, Satanus! The power of Christ compels thee! I hope that clears things up.’

Teachers in Wales were jubilant at the news. ‘Brilliant,’ said one, ‘For years we’d been labouring under the misapprehension that education meant more than simply getting a better result than the previous year- you know, a bit like the market model, which is premised on infinite expansion, even though we’re fairly sure that the universe might not actually be infinite. At last we can get back to doing what we do best: finding out which exam board offers the easiest syllabus and focussing on the children who are borderline C/D candidates. Fantastic. F**k the rest of them,’ he said.

Moloch the Devil, chained at the bottom of the Lake of Tears is 27,337 years old.