New study shows that something is possibly true but it might not be: the Fog of Social Science.
[Image: ‘Kill me.’]
These are apocalyptic days for many school schemes; in the present age of neo-austerity, it seems like anything not related to life support and child protection is being pared down to the marrow. I’m not sure people are aware yet of how much is on the way out, thanks to a cartel of financial hucksters and their sub-prime lending habits that made the lifestyles of termites seem modest and restrained. Some of the things on their way out were definitely dirty bathwater: the GTC, for example. But some were babies. As the FT comments:
‘The schools resource budget, which covers day-to-day running costs, will rise in real terms by 0.4 per cent. But a rise in the number of pupils will mean current spending per pupil will be cut by 2.25 per cent…The education department’s budget for buildings, which is almost entirely spent on schools, will be cut from £7.6bn to £3.4bn – a real terms cut of 60 per cent….Michael Gove, education secretary, admits that many schools will enter a tough period.’
Which means we’ll be holding wet hankies on the platform as we watch many extra-curricular schemes, clubs and so on wave at us through the steam from the train now leaving the station. This, to be fair, isn’t news any more, although many in schools still have to adjust to this reality: if it can go, it will. I’ve been reading professional Dear John letters from LEA consultants and liaisons all week, wishing me well as they pack their belongings into red handkerchiefs tied to sticks and set out for London with their little black cats.
One of many, many schemes teetering on the end of the gangplank is Sing Up (click on the link while you still can), an organisation that, unsurprisingly, believes that ‘Every child deserves the chance to sing every day.’ While you could churlishly take issue with the origins of this alleged right (is it intrinsic? Divine? Legally prescribed?), I would never antagonise such a well-meant, noble cause. If I were Educational King for a Day (it keeps me awake at night sometimes, plotting and dreaming…) this is the kind of group I would give money to; I want schools with choirs; I want schools with voice coaches and singing lessons; I want parents to set up Paparazzi Nests on Talent Nights, weeping and filming, weeping and filming. This is the world I want.
But for Sing Up, it’s the last scene in Casablanca, Braveheart, Butch and Sundance, Angels with Dirty Faces. It’s curtains; the scheme will be funded up until 2012, and after that, all is silence. (I presume that after everyone has gone home from the Olympics, Britain will dramatically revert to Blitz-sepia, rationing will be reintroduced, and Park Lane will become a gated community. I suggest you buy bottled water and plenty of tinned goods otherwise you’ll be eating your hands or something.) From looking through their website, this appears to be an event we should genuinely regret. Plus ça change.
[Image: Do not approach these men.]
But where there’s a cause, there’s a claim. In this case, a report was released this week by the Institute of Education, which claimed that projects like Sing Up were enormously beneficial to the well-being of children.
This was reported on the BBC, presumably from a news release via agencies such as the Press Association, and was obviously proudly trailed on the Sing Up website. Now I don’t wish to put the boot into what, to me, appears to be a fine and meaningful project. But the way in which this research has been positioned has a lot more to do with marketing and a lot less to do with authentic science. And incidentally, I’m not taking issue with the people who conducted the survey, either, and least of all with Sing Up. But it’s a perfect example of how social science is misused to justify values and interests in education.
For a start, the report was commissioned by Sing Up themselves:
‘The Institute of Education’s independent three-year study, commissioned by the Sing Up programme, is based on data collected from 9,979 children at 177 primary schools in England.’
The words ‘independent’ and ‘commissioned by the Sing Up programme’ placed together in such close proximity must indicate some new, alternative meaning of ‘independent’ that I haven’t yet heard of. This by itself doesn’t exclude the research from the realms of credibility, but it should at the very least allow us to reposition the findings in a different context. In much the same way that homoeopaths and cigarette manufacturers are fond of quoting from research that supports their products, it trips alarms when you find out that research has been carried out by vested interests. (‘Getting up early is dangerous,’ a new report commissioned by the National Union of Students warned today. That kind of thing.) This doesn’t mean that there is actual researcher bias in this case, simply that the choice to publish or not publish becomes a political decision based on a utilitarian assessment of benefits.
[Image: Go on- I dare you.]
Secondly, there’s the issue of the report itself: try as I might I can’t see it anywhere. And the only link from the Sing Up website to an IOE report takes us to a paper published on their website, in which I can’t find any specific reference to the Sing Up programme at all. Oh, there’s plenty about singing, and lots of claims for the benefits of a musical education. Which means that either I’m looking at an old report, or it hasn’t been published yet. Or maybe I just can’t find it. Like I say, I might be wrong, but that suggests to me that it hasn’t been published in a journal and exposed to peer review and assessment by the academic community. And if that’s the case, then mere mortals like myself have no purchase on the information- we rely, of course, on the weight of a community assessment to judge if such material meets the standards of rigour and academic ethics. Until that happens, it’s about as authoritative as an opinion piece.
Again (and I know I’m stressing this a lot, but this isn’t meant to be a criticism of the report itself, or the project, and I’m at pains to be civil), for research to be meaningful in a public sphere, it has to be subject to public scrutiny. There are a lot of people out there with PhDs. Some of them are Gillian McKeith. One of the first things I learned at university was that there are plenty of opinions out there, and none of them has a guaranteed copyright on certainty.
Then there are the claims, or at least the claims as reported.
a) ‘Singing in school can make children feel more positive about themselves and build a sense of community.’ I bet it can. So can chess clubs, being in a gang and joining a cult. So can just about any other activity in the right context.
b) There is ‘a clear link between singing and well-being’. Could you define clear? Could you define well-being? Pupils who sing feel better about themselves; but even assuming we have overcome the definitional challenges of such a subjective term, how on earth can one draw a clear causal relationship between the two, and disentangle that relationship from a million other factors that could accompany the proposed cause and effect? Perhaps being part of a group promotes well-being, and the singing is incidental. Perhaps if you’re the sort of person who likes to sing then you’ll also be the type of person who, on average, feels better about themselves. Perhaps, perhaps, perhaps. I’m still not getting a causal relationship here.
c) ‘Children who took part in the programme had a strong sense of being part of a community.’ I don’t wish to be churlish here, but the idea that people who participate in communities feel like they’re in a community doesn’t exactly sound like headline shattering stuff. But thank you, science. I look forward to your assessment of what the effect of punching myself in the pipes feels like.
d) ‘A clear inference may be drawn that children with experience of Sing Up are more likely… to have a positive self-concept.’ What’s your point, caller? It sounds like this means that x causes y, when in fact it shows no such thing, at least by itself. They may be more likely for other reasons. Maybe y causes x, and having a positive self-concept causes people to join Glee clubs, I don’t know. But that’s the point. I don’t know. Nobody does.
e) ‘Sing Up children were up to two years ahead in their singing development than those of the same age who did not take part in the programme’. Sorry, I thought we had finished with tautologies. Are they seriously implying that children who are involved in singing practice actually improve at singing? You’ll be telling me that people who climb ladders get higher up, next. Honestly, it’s an open goal.
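To see how easily claims (b) and (d) fall apart, here is a toy simulation- entirely invented, nothing to do with the actual report- in which a hidden trait (call it sociability) drives both the choice to join a singing group and self-reported well-being, while singing itself does precisely nothing:

```python
import random

random.seed(0)

# Toy model, invented for illustration: a hidden trait ('sociability')
# drives BOTH the choice to join a singing group AND self-reported
# well-being. Singing itself has zero causal effect here.
n = 10_000
singers, non_singers = [], []
for _ in range(n):
    sociability = random.gauss(0, 1)                        # hidden confounder
    joins_choir = sociability + random.gauss(0, 1) > 0
    well_being = 50 + 5 * sociability + random.gauss(0, 5)  # no singing term
    (singers if joins_choir else non_singers).append(well_being)

def mean(xs):
    return sum(xs) / len(xs)

print(f"singers:     {mean(singers):.1f}")
print(f"non-singers: {mean(non_singers):.1f}")
# The singers come out 'happier' even though singing does nothing in this
# model: a perfectly clear correlation, and no causation anywhere in sight.
```

The survey data would look exactly the same whether singing caused well-being or a third factor caused both, which is why ‘a clear link’ proves nothing on its own.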
This may sound petty, because at least on the surface, who can disagree with the idea that singing lessons are a great thing for children to be exposed to, and to be made available to as many as want or need them to flourish? I enjoyed singing at school. Others hated it, in much the same way I didn’t enjoy the ritual humiliation of rugby in December, where your alleged friends would barrel into you at full tilt in a manner that would provoke charges were they to be repeated off the field. And I certainly would mourn the loss of any scheme that promoted such activities (singing, not assault).
[Image: Helen Goddard. Not an ideal role model, to be fair.]
But this story nicely summarises many things that are wrong with the use of scientific research in education, and especially social science. Humanities research is commonly used to promote a myriad of causes and interests in schools, and almost always in the advocacy of a new initiative, or in an attempt to convince headmasters and teachers that they should be teaching in a particular way, or running a school to a particular model. And that has led to a suffocating number of ideas and initiatives drowning the practice of teaching for decades, each one justified by a clutch of optimistic, hand-picked research and statistics.
And the problem with this is that social science research just doesn’t provide anything like the level of probability that the physical sciences, however problematically, offer. If someone asserts that water boils at 100 degrees at sea level, then I can comfortably and easily assess that theory by testing it to my heart’s delight. But if someone then claims that they have shown that children learn best with a three part lesson then I run into an enormous number of problems:
1. How do I check that their progress wasn’t down to some other factor? Isolating a causal point of origin is almost impossible in an environment as wild and complicated as human interaction, with its plethora of reasons, internal causes, external, invisible factors, and unknowns.
2. How do I create a control to test the above?
3. How do I know I’m not biasing my own research with my own intentions, however implicit?
4. How do I know my participants aren’t skewing the data by some form of bias on their part?
And so on. Social science does not, and never can, offer predictive powers. The pursuit of certainty in the Humanities is a fool’s errand, because we can barely claim such a principle in the natural sciences. That isn’t to discount social scientific research, but merely to contextualise it appropriately. As the MMR non-scandal showed, even the biological sciences can be subject to misinterpretation, especially when an arbitrary bundle of studies is offered as representative when in fact it is not. Social science is an invaluable commentary on how we live, who we are, and the exploration of meaning in the human sphere. But what it isn’t, is science, at least not as Joe Public knows it.
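Question 2 above- the control- deserves a concrete illustration, because it is the one tool that actually answers question 1. A toy sketch (all numbers invented): assign pupils to a new scheme by coin flip, so that hidden factors fall evenly into both groups. The scheme below has no true effect, and randomisation duly reveals as much:

```python
import random

random.seed(1)

# Sketch of a randomised control, with invented numbers. Pupils land in
# the new scheme by coin flip, so hidden factors (ability, home life,
# teachers) fall evenly into both groups. The scheme has NO true effect
# in this toy model.
treated, control = [], []
for _ in range(10_000):
    ability = random.gauss(0, 1)                    # hidden factor
    score = 50 + 10 * ability + random.gauss(0, 5)  # scheme adds nothing
    (treated if random.random() < 0.5 else control).append(score)

def mean(xs):
    return sum(xs) / len(xs)

gap = mean(treated) - mean(control)
print(f"treated - control gap: {gap:.2f} points")
# The gap sits near zero. Without randomisation, whoever opts in drags
# their hidden factors with them, and any apparent 'effect' is confounded.
```

The trouble, of course, is that real classrooms rarely permit coin-flip assignment, which is exactly why so much educational research cannot answer the questions it is quoted as answering.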
And that’s the shame of it: that education has been drowned in pseudo science, in the name of progress, when what it really represents is the justification of the values of the educational policy makers. The policy is decided for a thousand reasons, and then research is selected or created that justifies the decision.
If you want to say that singing programs should be exempt from deletion in the next rounds of cuts, then you should do so by dwelling on the intrinsic value of the activity itself- singing is an art form, a pleasure and one of the ways in which we express ourselves as humans. You value it or you don’t. But what you shouldn’t do is try to justify its value by reference to an extrinsic factor- ‘it improves well being’ and so on. That’s the argument of the boardroom and the abacus (‘What use is singing?’), and should have no place in our consideration of what is and isn’t a valuable part of a child’s education. (But of course I get the feeling that the values have already been decided: what does the economy need?) And we certainly shouldn’t rely on one piece of social science research to provide justification for a proposal, no matter how well intentioned. Because as teachers, I think we’ve had quite enough of that.
It’s an emergency! For God’s sake, get me a social scientist! Why misunderstanding the aims of research is crippling education.
I’m elbow deep in gizzards this week with the number of geese I’ve slaughtered in the name of prognostication. I haven’t developed an emergent tendency towards serial killing; I’ve just been trying to answer an age-old educational conundrum: do schools need more money? And answering that seemingly simple question led me to question the whole educational research racket, or at least its misappropriation by the people we trust to run the show.
My unconventional approach to divination and revelation was prompted when the government published school-by-school spending figures along with last week’s league tables. Although the DfE is being coy, claiming that this publication is purely linked to the aim of greater transparency, we all know that nosey Noras will be asking if schools give value for money. Very sneaky. So how do we know if more money actually leads to better results in education anyway? A BBC report from the 14th of January looked at the evidence:
‘A recent Pisa study from the OECD, compared academic performance across a wide range of countries and offered some support for the government’s view that money is not a key factor. Another study, by Francois Leclerque for UNESCO in 2005, surveyed a wide range of other economists’ attempts to find a correlation between resources and results. Some found a positive correlation. Others found the opposite. Leclerque concluded that, whichever view you took, it was as much a matter of one’s previous belief and opinion as it was of scientific knowledge. (1)
One major study (by Hanushek and Kimko, 2000) looked at pupils’ international maths scores and compared them to several different measures of school spending. It is not clear whether spending more on schools leads to better results. Their conclusion was: “The overall story is that variations in school resources do not have strong effects on test performance.” (1)
So that’s all perfectly clear then. At least we have all the data we need to make a decision. Not.
Think about what’s happening here: tens of millions of pounds spent, an equivalent proportion of academic labour, the finest minds in education, all focused on one point, one question, like shining a million light bulbs onto a spot and turning it into a laser. Only to find that all you have is a very bright room, and an army of moths dive bombing the window.
If you turned that focus, funding and fervour on to a physical task, you can imagine the mountains that could be built, or abysses excavated. If it was directed to an object of material interest such as ‘how high can a house of cards be built?’ then we’d have the answer by tea time and all be driving home in our 1976 Gran Torinos with the overspend. So why the problem uncovering truths in educational research?
The answer lies in the methodology and expectations of social science itself, and its differences from the Natural Sciences: chemistry, physics, biology, astronomy, oceanography, etc- anything that is amenable to the scientific method of study. The social sciences- and I’ll be coming back to that term later- are the attempt to replicate that method in the field of human behaviour. As the latest marketing meme-worm would say, simples.
What is the scientific method? In essence it is based on the following process:
1. Data regarding physical phenomena are collected by observation that is measurable and comparable.
2. This information is collated and a hypothesis is constructed which offers some kind of explanatory description of the events described by the data; to look at it another way, we discern a pattern in the data that offers the potential to predict or define, usually on the assumption of causality, but often with a purely descriptive intent.
3. This hypothesis is tested by experimentation. The hypothesis is then either immediately discarded with the introduction of this new data, or tested again. The more profound and extensive the testing, the less uncertain the hypothesis is claimed to be.
I’ve simplified the process on a similar scale to describing Moby Dick as ‘a big fish’ so forgive my brevity. There are long established difficulties with this method that offer challenges to both the philosopher and the scientist: have I tested enough? Is my interpretation of the data biased? Have I collected the data in an ethical manner? Have I performed relevant tests? Are there alternative explanations? Have I mistaken correlation for causality? And so on.
But scientists have one fairly large trump card to play when contesting with chippy Humanities graduates about all this: science seems to work. Your car works; your phone reliably transmits emails of funny dog pictures around the world; planes have a habit of not falling from the skies. If the scientific method isn’t perfect, it’s the closest thing we’ve got.
And of course there is a much more profound question: is anything certain? Rationalists like Descartes would say that there are things that can be ascertained by the pure light of reason itself, such as his own existence (in the much misquoted Cogito, Sum). But what about the world? Descartes’ argument for the proof of an external world is as convincing as the plot line to My Family, and most people (certainly anyone other than lonely, friendless hermits) turn to our observation of the world as the best basis for understanding how things work: broadly speaking, the empirical approach.
But Hume (certainly one of the most readable of the British Empiricists) famously drove a bus through the empirical claims to certainty, by describing all predictive statements about the world (The Sun will rise tomorrow; water boils at 100 degrees Celsius at sea level, etc.) as inductive inferences. In other words, they rely on our assumption that the future will be like the past, which of course is something we can never test. To understand the importance of this, we can look to the example of Popper’s Black Swan Problem; until the discovery of said sooty avian, any European would have said that all swans were white, and they would have had millions of observations over centuries by millions of people to back this hypothesis up. Of course, no hypotheses can ever be established beyond doubt, and any decent scientist is aware of this.
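The asymmetry Hume and Popper point to can be caricatured in a few lines of code (the swan-spotting function is a made-up stand-in for data collection, not anything real):

```python
import random

random.seed(2)

# A cartoon of falsification: 'all swans are white' can survive any number
# of confirming observations, but a single black swan kills it.
# observe_swan is a made-up stand-in for data collection.
def observe_swan():
    return "black" if random.random() < 0.001 else "white"

hypothesis_stands = True
observations = 0
while hypothesis_stands and observations < 100_000:
    observations += 1
    if observe_swan() != "white":
        hypothesis_stands = False   # one counterexample is enough

print(f"observations made: {observations}, hypothesis survives: {hypothesis_stands}")
# However long the run of white swans, the hypothesis was never proven;
# it was only ever 'not yet falsified'.
```

A million confirmations and one refutation are not symmetrical: the confirmations merely fail to disprove, while the refutation settles the matter.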
But this isn’t a problem of science; it’s only a problem for people who misunderstand the scientific method: it never sets out to establish foundational, necessarily true propositions; it only seeks to establish more or less probable hypotheses, nothing more but certainly nothing less. Its enormous success has led many people to become acolytes of this New God, ascribing to it the infallibility normally reserved for the theistic God or his chosen representatives. But science doesn’t make these claims. It simply observes, records, considers, and reflects. And when something seems to work, it runs with it. No other method comes close to its predictive and descriptive powers, so until something better comes along, we work with it, and ignore the spoon benders and the homoeopaths who chant and caper, and believe that because empirical scientific claims lack certainty, they can be contested, dismissed and replaced with their own particular and peculiar branches of witchcraft and ju-ju.
Which brings me to social science finally, and its germane offspring, educational social science. The desire to apply the methods of the natural sciences to the social sphere is entirely understandable; after all, the benefits that have been obtained from the laboratories and notebooks of the men in white coats have given long life, comfort, leisure time and most importantly, Television and Mad Men. Imagine the benefits we could glean if we turned our microscopes and astrolabes away from covalent bonds and meteorological taxonomy and towards the thing we love and value most: ourselves. Cue: psychology, anthropology, history, politics, educational theory, etc. Now all we have to do is send out the scientists, and sit back and wait for all that lovely data to be turned into the cure for sadness, the end to war, the answer to life’s meaning and while you’re at it, how best to teach children.
And yet, here we are, still waiting. The example I gave at the start of this article serves as just one illustration. For every study you produce that demonstrates red ink lowers pupil motivation, or brings them out in hives or something, I can show you a study that says, no, it’s green ink that does the trick. For any survey that shows the benefits of group work, there are equivalent surveys that say the same about project work, or individual work, or the Montessori method, or learning in zero gravity or whatever. It is, to be frank, maddening, especially if you’re a teacher and on the receiving end of every new initiative and research-inspired gamble that comes along. The effect is not dissimilar to being at the foot of an enormous well and wondering not if, but how many buckets of dog turds will rain on you that day, and how many soufflés you’ll be expected to make out of it. To quote Manzi:
‘Unlike physics or biology, the social sciences have not demonstrated the capacity to produce a substantial body of useful, nonobvious, and reliable predictive rules about what they study—that is, human social behavior, including the impact of proposed government programs. The missing ingredient is controlled experimentation, which is what allows science positively to settle certain kinds of debates.'(2)
And that, I think, summarises the problems teaching has with the terrifying deluge of educational research that has emerged in the twentieth century and beyond, and the apparently awful advice that has drenched the education sector for decades with its well-intentioned but essentially childish misunderstandings. When I entered the profession I met many old hands who would greet each new initiative with a pained, ‘Not that again,’ expression in the style of Jack Lemmon chewing tinfoil. At first I thought they were merely stubborn old misanthropes, but now I see that they were at least partially motivated by desensitisation; they had sucked up scores of magic bullets and educational philosopher’s stones catapulted at them over the decades, and had learned to wear tin helmets to deflect as many of them as possible. None of this justifies ignoring new ideas, but it’s easy to understand why teachers become immune to the annual initiative.
And yet, even this is to be unfair about the nature of social scientific research and its alleged conclusions. In the field of Religious Studies, for example, I find an enormous deficit of research that claims to point to anything intrinsically predictive or definitive. Much of the research in this area is acutely aware of its limitations, possibly because of the explicit understanding that any discussion of faith matters automatically puts one in the proximity of discussions about truth and validity, opinion and subject bias. Of course, there is a lot of bogus research that deserves to be laughed at too, but it’s interesting that in a field so contested one should find such care. Social science only gets itself into hot water when people take its findings to be more than social scientists would actually claim- as if they possessed some kind of finality and certainty.
Any good piece of social science I have read relating to education is always upfront about the limitations of its method of testing; is always tentative in its assertions, and always hesitates to assert anything substantially beyond the data obtained. But I have also read a great deal of bad research that appears to think itself a branch of physics: this method, it thunders, produces this result. A key problem here is what might be called high causal density: when we attempt to ascribe a social phenomenon to a particular causal precedent, we immediately run into the problem that any one behaviour (such as improved grades or behaviour) is extremely hard to trace back to a given event; there are enormous numbers of factors that could correspond to the outcomes under examination. Thus, if I introduce a new literacy scheme in school based on memorising the Beano, and next year I see a 15% rise in pupils obtaining A*-C in English GCSE, any claim I made that the two were connected would have to wrestle with other possible claims, such as the group being observed being smarter than previous groups; or having better teachers; or being born under a wandering star, ad infinitum. This causal density is particularly noticeable in any endeavour that studies human behaviour, with its multitude of perspectives, invisible intentions and motives. Put simply, people are infuriatingly difficult to second guess and predict.
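The Beano problem can be made vivid with a toy simulation (all numbers invented): twenty years of pass rates in which nothing changes except cohort-to-cohort noise, and yet a large year-on-year ‘improvement’ still appears, ready to be credited to whatever initiative launched that year:

```python
import random

random.seed(3)

# Toy illustration of high causal density: 20 years of pass rates in which
# NOTHING changes except cohort-to-cohort noise (ability, staff turnover,
# exam variation, all lumped into one random term). Numbers are invented.
rates = [60 + random.gauss(0, 6) for _ in range(20)]

biggest_jump = max(later - earlier for earlier, later in zip(rates, rates[1:]))
print(f"largest year-on-year rise: {biggest_jump:.1f} percentage points")
# A sizeable 'rise' appears by chance alone- ready to be claimed by
# whichever Beano-memorisation scheme happened to launch that year.
```

Run it and some year will always look like a triumph; the noise guarantees it, and no intervention was needed at all.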
The position is similar to weather forecasting. We might be able, broadly speaking, to predict that Winter will be colder than Summer. But anything much more specific than that gets harder and harder; even the Met Office doesn’t issue long term forecasts any more; there just isn’t any point. And their daily forecasts update every few hours or so; that’s because the factors involved, while potentially measurable in principle, are just too complex and numerous to be handled in practice. The problem is multiplied when we consider that human behaviour may not, after all, be reducible to materialist explanations, and may therefore escape causal circumscription entirely. The debate over free will is far from over; indeed, it is as alive as ever.
This problem possibly wouldn’t upset too many people (namely that many people engaged in the field of social science have a shaky grasp of the powers and frailties of the scientific method itself, and produce papers that are riddled with subject bias, observer bias, researcher bias, and the desire to produce something that justifies their tenure and funding), except that as a concomitant to its claims to provide meaningful guidance in social affairs, it also expects to be used- and sometimes succeeds- in driving the engine of policy making in front of it. And that, dear friends, is where people like me come into the equation.
Here are some of the things that are assumed to be axiomatic truths in the contemporary classroom:
1. Lessons should be in three parts
2. Children putting their hands up is bad
3. Red ink will somehow provoke them to become drug dealers and warlords
4. Every lesson must have a clear aim
5. Every lesson must conclude with a recap
6. Every lesson must show clear evidence of progression, in a way that can be observed by a blind man on the moon with a broken telescope.
7. Levelling children’s work is better than giving them grades. Grades are Satanic
I could go on, but there aren’t enough tears in the world. These are just some of the shackles that teachers are burdened with, dogma with which they must comply. Why? Because someone, somewhere produced a study that ‘proved’ this. And that proof was taken to be gospel, and then passed down by well-meaning ministers, the vast majority of whom have never stepped into a classroom in a pedagogic manner, unless accompanied by cameras.
So that’s where we stand right now; social science being produced by the careless, consumed by the gullible, and transmitted down to the practitioner, who waits at the foot of the well with an umbrella. In this arena, is it any wonder that the teacher has been devolved from respected professional, reliant on judgement, wisdom and experience, to a delivery mechanism, regurgitating the current regime’s latest, fashionable values? No wonder teaching is in a bit of a mess right now. We’re not expected to be teachers; they want us to be postmen.
In this vacuum of credible knowledge, is it any wonder that teachers feel uncertain, misguided, confused about their roles, about the best way to teach, and troubled by the nagging suspicion that the best ways to teach are staring right at them?
The most certain assertions are those that make the least specific claims, and fit the greatest number of observations and data. These are the principles that teachers should be guided by, and that’s why your own professional experience is at least as good a guide as the avalanche of ‘best practice’ and OfSTED criteria that resulted from the misappropriation of science; and in many cases, your own experience will be better. If you have years of experience and genuinely reflect on your practice, if your classes are well behaved, the children express enjoyment and the grades are good, then some would say your experiences were merely anecdotal; but I would say they were a necessary part of professional wisdom and judgement.
In fact, I would say they were better.
A priori, the social scientific method is best used as a commentary on human beings and their behaviour, not as a predictive or reductive mechanism. So the next time you read another piece of educational research hitting Breakfast TV, feel free to say, ‘Oh really? That’s interesting.’ But make sure you hold your breath. And get your umbrella and saucepan out.
1. BBC News, ‘What does spending show?’ http://www.bbc.co.uk/news/education-12175480
2. Jim Manzi, http://www.city-journal.org/2010/20_3_social-science.html
3. http://playthink.wordpress.com/2010/08/03/on-the-limits-of-social-science/
4. http://www-personal.umd.umich.edu/~delittle/Encyclopedia%20entries/philosophy%20of%20social%20science.pdf
See? I put references and everything this time. That was so people would take it more seriously. Homoeopaths are really good at this, especially when they’re referring to other homoeopaths, quack PhDs and dodgy journals run from the back of someone’s health food shop.
Welsh School Children ‘damned to the Hell of Broken Mirrors’ by losing league tables.
A report by the University of Bristol today claims that it has unearthed evidence that the decision in 2001 by the Welsh Assembly to do away with league tables in schools has directly led to thousands of Welsh children being condemned to 999 years in the Purgatory of Ravnak, the Soul-Flayer.
League tables, which still exist in England, were abolished in Wales after claims that they led schools to circumvent real education, and instead focus on meaningless scams to leapfrog the league rankings; for example by introducing BTECs or other qualifications that were GCSE-equivalent but lacked academic rigour or credibility. It was also claimed at the time that, even if schools were reluctant to engage in these practices- described as ‘whorish and anti-education’ even by the heartless Tin Man from the Wizard of Oz, yesterday- then they were forced to participate in order not to suffer by comparison with other, less scrupulous institutions who had spotted it first, and fallen on it like starving rats.
But this new research claims to give the lie to that, and says that once the decision was taken to dispense with tables, a portal was opened in the space/time continuum that enabled the Damned Legion Hordes of Azazel to cross over into our world and steal the souls of every second pupil in Year 9, and two out of every three in Key Stage 4, due to their particular susceptibility to rap music and badly-spelled swearing.
‘It’s clear,’ said Grand Vizier Phillips, leader of the research. ‘There is a clear correlation between losing the tables, and feeding the furnaces of Satan. It really is a huge pity.’ When asked to respond to allegations that the report had missed the obvious differences between correlation and causation, and that the link between tables and purgatory had not been definitively demonstrated, the Grand Vizier’s response was unequivocal: ‘A hex upon thee! Vade Retro, Satanus! The power of Christ compels thee! I hope that clears things up.’
Teachers in Wales were jubilant at the news. ‘Brilliant,’ said one, ‘For years we’d been labouring under the misapprehension that education meant more than simply getting a better result than the previous year- you know, a bit like the market model, which is premised on infinite expansion, even though we’re fairly sure that the universe might not actually be infinite. At last we can get back to doing what we do best: finding out which exam board offers the easiest syllabus and focussing on the children who are borderline C/D candidates. Fantastic. F**k the rest of them,’ he said.
Moloch the Devil, chained at the bottom of the Lake of Tears is 27,337 years old.