Following Louise Tickle’s piece in the Guardian, there has been widespread speculation regarding my motivation for wanting 11+ test marks to be made public. As Judge Kennedy pointed out, selective education is not on trial, and I am not, as has been suggested, trying to get my 16-year-old son into year 7 at the local grammar. That would be silly.
This is purely about whether information should or should not be disclosed.
The reasons the Information Commissioner gave for non-disclosure are:
- It is in the public interest to develop tutor-proof tests. However, the Commons Education Select Committee found that “the Government has yet to demonstrate how an admissions system could be designed in a manner which would be immune to gaming, or down to the ability to pay.” Giving evidence, the chief scientific adviser for education, Tim Leunig, said, “Buckinghamshire had attempted to come up with one [a tutor-proof test] but it had not worked.” Schools minister Nick Gibb has likened this objective to the quest for the holy grail.
- The Commissioner says that CEM’s rival is not subject to Freedom of Information law, thus placing CEM at a commercial disadvantage. GLA are indeed a private company, but the information is held by schools, so if, for example, I thought GLA were selling snake oil I’d simply ask the schools for it.
- The revenue from these tests helps fund the Durham Vice-Chancellor’s £470,000 basic salary.
The Information Commissioner’s exact words put a bit more lipstick on this particular pig, but they still boil down to withholding information because Durham are making money from something. Exactly what that something is has me confused. Each time I mentioned “tutor-proof tests” I was reprimanded by the judge, who told me Durham’s unique selling point is that their tests are seen to be more resistant to tutoring, and that they work hard to design tests that are as resistant to tutoring as possible.
To understand why releasing the information is important requires an understanding of standardisation, although, ironically, if the information were released everything would be crystal clear.
As any statistician will explain, a standardised score measures the individual relative to the population. What 11+ providers don’t tell you is that, behind the scenes, they interpret “population” as just a single year’s cohort. Give me a set of test marks, a desired pass mark and the number of candidates you want to “pass”, and I can place the cut-off down to the individual. It is disingenuous to abuse statistical maths in order to imply there is such a thing as a “grammar school standard”, but given the amount of trouble I’ve had explaining this I made a five-minute film here that explains it graphically.
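The quota-fitting claim can be sketched in a few lines of Python. This is a hypothetical illustration, not CEM’s actual procedure: the function name, the 15-point spread and the cohort numbers are all my own inventions.

```python
import random

def standardise_to_quota(raw_marks, pass_mark, n_passes):
    """Linearly rescale raw marks so that exactly n_passes candidates
    land at or above pass_mark (illustrative only, not CEM's method)."""
    ranked = sorted(raw_marks, reverse=True)
    cut_raw = ranked[n_passes - 1]          # raw mark of the last "passer"
    mean = sum(raw_marks) / len(raw_marks)
    sd = (sum((x - mean) ** 2 for x in raw_marks) / len(raw_marks)) ** 0.5
    a = 15 / sd                             # arbitrary choice of spread
    b = pass_mark - a * cut_raw             # pin the cut to the pass mark
    return [a * x + b for x in raw_marks]

random.seed(1)
cohort = [random.gauss(90, 20) for _ in range(1000)]  # one year's raw marks
scores = standardise_to_quota(cohort, 121, 300)
print(sum(s >= 121 for s in scores))                  # exactly 300 "pass"
```

Whatever the cohort’s absolute attainment, the same 300 candidates clear the 121 mark; change `n_passes` and the “standard” obligingly moves with it.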
Reasons for disclosure
If there is a downturn in overall admission numbers then, like the self-levelling suspension on those funny old Citroëns, the “grammar school standard” is automatically lowered. That’s good for the grammars, which remain full and therefore fully funded, but no school is an island. In Kent, which is fully selective, this is achieved by taking pupils who in other years would have attended the neighbouring secondary moderns. Those schools are left with dwindling numbers and loss of funding. The heads can’t complain about moving goal posts because they don’t even get to see the goal posts (the raw marks).
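The self-levelling effect can be demonstrated with a toy simulation (the cohort sizes and the 180 places are made-up numbers): with a fixed number of places, the raw mark of the last admitted candidate falls automatically when the cohort shrinks.

```python
import random

def raw_cut(cohort, places):
    """Raw mark of the last candidate admitted when places are fixed."""
    return sorted(cohort, reverse=True)[places - 1]

random.seed(2)
# Two cohorts drawn from the same ability distribution; only the size differs.
big_year = [random.gauss(90, 20) for _ in range(1500)]
lean_year = [random.gauss(90, 20) for _ in range(750)]

places = 180
print(raw_cut(big_year, places), raw_cut(lean_year, places))
# The lean year's cut comes out well below the big year's: same places,
# fewer candidates, so the "grammar school standard" quietly drops.
```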
The law states, “Admission authorities must ensure that their arrangements will not disadvantage unfairly, either directly or indirectly, a child from a particular social or racial group.” GLA have told the Schools Adjudicator that Verbal Reasoning tests are biased against EFL children, so it would be in the public interest to understand what weighting each authority gives to each discipline (Maths, Verbal Reasoning, Non-Verbal Reasoning). That way, any social or racial group who feel they are unfairly disadvantaged can challenge these arrangements, but these groups don’t get to see the goal posts either.
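To see why the weighting matters, here is a hypothetical composite score. The candidate’s scores and both weighting schemes are invented; the real weightings are exactly what is being withheld.

```python
# Hypothetical per-discipline standardised scores for one candidate
# with a strong Verbal Reasoning result.
scores = {"verbal": 118, "maths": 105, "non_verbal": 104}

# Two invented weighting schemes; neither is any authority's real scheme.
vr_heavy = {"verbal": 0.50, "maths": 0.25, "non_verbal": 0.25}
balanced = {"verbal": 1 / 3, "maths": 1 / 3, "non_verbal": 1 / 3}

for name, weights in [("VR-heavy", vr_heavy), ("balanced", balanced)]:
    composite = sum(weights[d] * scores[d] for d in scores)
    print(name, round(composite, 2))  # VR-heavy 111.25, balanced 109.0
```

A VR-heavy weighting lifts this candidate from 109 to 111.25; an EFL child with the mirror-image profile is pushed down by the same amount, and without the weightings nobody can tell whether that is happening.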
Reading School’s 2015 entry set a cut-off mark of 110. The standardised results show that one boy scored 110.00 whilst another scored only 109.99. To quote the school’s published admissions criteria, the second was “deemed not to benefit from the style of education provided at Reading School.” The criteria continue: “The following criterion will be used to allocate places as between borderline candidates who cannot be distinguished in terms of academic ability. With regard to the historic links with the Borough of Reading, eligible boys living nearer to the School will be accorded a higher priority in the allocation of day-boy places.” This tie-breaker has never been used! If a child in Newbury scores 0.01 higher than one in Newtown, he is given the place. Admissions law says, ‘Parents should be able to look at a set of arrangements and understand easily how places for that school will be allocated’, but the basis on which the school are ‘distinguishing’ between these two candidates is completely hidden.
When the location of the itinerant goal posts is a closely guarded secret, there isn’t much incentive for the referee to pay attention. One school ‘standardised’ six candidates, who sat a late test, as an entirely separate population. As standardisation selects a proportion of the population regardless of absolute attainment, this blunder guaranteed two of these candidates a place. The candidate originally ranked 100th (for 100 places) in the main test scored 113/150 but was pushed down two places by applicants with raw scores of 87/150 and 76/150. Standardisation might come across as a bit of a black art, but if you check the raw marks this gaffe stands out like a sore thumb. The parents of the two children denied a place as a result of this mistake would have had no trouble whatsoever picking up on it, if only the information were not routinely withheld.
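The blunder is easy to reproduce. Standardise a tiny group of six late sitters as their own population (the raw marks below are invented for illustration) and the top of that group receives scores above the main cut, whatever their absolute attainment:

```python
def standardise(raw_marks, mean_target=100, sd_target=15):
    """Standardise within whatever 'population' you hand in."""
    n = len(raw_marks)
    mean = sum(raw_marks) / n
    sd = (sum((x - mean) ** 2 for x in raw_marks) / n) ** 0.5
    return [mean_target + sd_target * (x - mean) / sd for x in raw_marks]

# Hypothetical late-sitting group of six, treated as a population of its own:
late = [87, 76, 60, 55, 50, 45]
print([round(s) for s in standardise(late)])  # -> [125, 114, 98, 93, 88, 83]
```

Raw marks of 87 and 76 come out at 125 and 114, comfortably clear of a cut that a 113/150 in the main cohort only just reached; that is exactly the pattern described above.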
Durham explained that the presence of negative scores in the 2016 Bucks standardised results was a result of, “… adjusting the standard deviation to account for a fixed qualifying score of 121 and a year-on-year increase in the number of candidates.” If that’s not moving goal posts, I don’t know what is. The results for the 2016 Verbal Reasoning test consisted of over six thousand different standardised scores. I’d like to understand how slightly over half an hour of multiple-choice questions can differentiate candidates to such a degree. I can conceive of some particularly ingenious mathematical ways to generate such arbitrary precision, but it’s as sensible as making a sundial so big you can read the time to the nearest second. (Who needs a space elevator when you have a Durham™ sundial?)
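A back-of-envelope check on that arithmetic: a multiple-choice paper with Q questions can only produce Q + 1 distinct raw marks, so any standardisation that depends on the raw mark alone can only produce Q + 1 distinct scores. Six thousand distinct values must come from somewhere else, such as fine-grained age adjustments. (Q = 50 below is my guess at the paper length, not a published figure; the linear rescale is likewise illustrative.)

```python
Q = 50  # assumed number of questions; the real paper length isn't stated
possible_raw_marks = range(Q + 1)

# Any standardisation that is a function of the raw mark alone, such as a
# linear rescale, maps those Q + 1 inputs to at most Q + 1 distinct outputs.
standardised = {100 + 15 * (raw - 25) / 10 for raw in possible_raw_marks}
print(len(standardised))  # 51, nowhere near six thousand
```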
In Coombs v ICO (2016), Judge Hamilton considered that “… the appellant had provided evidence that the claimed USP of tutorproofing was highly questionable and that the public interest warranted close examination of this claim which could only be achieved through the disclosure of the disputed material.”
There is general public interest in public authorities being transparent in their decision making. Durham’s income from the 11+ in 2016 was £815,801, but this is only part of the picture. These tests are used to decide which children benefit from attending one of 163 grammar schools. Estimating the annual budget at £5m per school, the best part of a billion pounds of public funding each year depends on the outcome of 11+ tests. Transparency, specifically over whether the allocation of school places is based on sound decision making, is in the public interest.
The hearing on 25 January lasted about six hours and was adjourned without even concluding whether the exemption on which the refusal was based, section 43(2) of the Freedom of Information Act, was engaged in the first place. Over the weekend I wrote to all parties conceding that Durham do appear to be profiting commercially from something, so we can get on to the next stage and decide which is more important: Durham’s profits or opening up the can of worms.