The UK is excluded from
the results of the latest Programme for International Student
Assessment. Donald Hirsch explains why and considers what we
can learn from the results.
The exclusion of the United Kingdom from this week's report
on the Pisa 2003 survey is a bitter setback for a Government
that has embraced the idea of international benchmarking of
our educational performance far more enthusiastically than its
predecessors.
The disappointment arises partly because it makes it harder
for Tony Blair to trumpet the UK's strong results in Pisa's
first round, carried out in 2000. It is also because ministers
and their advisers have genuinely bought into performance
monitoring as a driver of improvement. Perversely, our
obsession with assessing performance has probably contributed
to our exclusion from the survey results.
The Programme for International Student Assessment, run by
the Organisation for Economic Co-operation and Development, is
the most important regular global survey of student
performance. It tests the extent to which 15-year-olds can
muster reading, maths and science skills to tackle tasks they
will confront in adult life.
It is repeated every three years but, in the latest survey,
the UK failed to meet the technical requirements for inclusion
in the main results, because more than a third of schools and
nearly a quarter of students selected for the survey did not
take part. The main performance results for the UK as a whole
are therefore not being published, although Scotland's results
are included, showing that it continues to perform above the
OECD average.
What went wrong? The Department for Education and Skills
has commissioned a survey to find out, but several things are
already clear. Unlike in some other countries where education
ministries more or less ordered schools to take part, heads in
the UK's relatively autonomous schools felt able to say no to
the survey.
They seem to have become increasingly reluctant to take
part in international surveys, given the heavy pressure of
UK-based testing. Many British 15-year-olds, unlike those in
other countries, were approaching a high-stakes national exam
at the time of the Pisa study in March 2003.
Two-thirds of those eligible were Year 11s taking GCSEs.
After too few schools took part in March, the UK was
given a chance to test students in more schools in July. But
by then many Year 11s had disappeared on jaunts to Newquay or
beyond. There is no way of knowing whether the students who
did take the test are representative of students as a whole.
The irony is that the Pisa results overall suggest that
education systems tend to do better if they give schools
managerial autonomy and monitor student performance through
regular assessment. The very things that the OECD has praised
in countries such as the UK are making it harder to collect
international data here. It is no coincidence that the one
country failing to meet the sampling requirements in the
earlier Pisa 2000 survey was the Netherlands, another country
where school autonomy and accountability are both emphasised.
The DfES can now learn from the Dutch authorities, which
made a great effort to convince their schools of the benefits
of Pisa, and succeeded in getting the required response to the
2003 survey. The UK has 15 months to achieve the same in time
for Pisa 2006.
In the meantime, there is still much that we can learn from
the results of other countries published this week. In 2003,
the main assessment was of mathematics. Here are four
important lessons:
* Levelling up - to narrow the gap between the best and
worst - is an achievable goal. In general in Pisa, high
overall performance does not come at the expense of equality.
Of the four top-scoring OECD countries in mathematics in
2003, Finland has the narrowest gap between the best and worst
students of any country in the survey.
Korea and the Netherlands have greater-than-average
equality, and Japan an average distribution.
While country scores have not generally changed much since
2000, the biggest improver in mathematics, Poland, is a case
study in levelling up. Having been one of the poorest
performers in earlier international surveys, it is now almost
at the OECD average, and on a par with the United States.
The recent improvement has been achieved entirely through a
rise in the performance of Poland's lower achievers - the
better students were already scoring close to their
equivalents elsewhere in the OECD, and their scores have not
changed.
In the late 1990s, Poland reformed its secondary school
system, creating an extra year of comprehensive general
education at age 15, rather than dividing up students between
school streams after an extended primary education. This has
clearly helped under-achieving 15-year-olds to catch up with
their peers in other countries.
* The most successful secondary school systems are not
selective. The Pisa 2000 results showed that countries such as
Germany that divide secondary children into schools of
different status not only perform unimpressively overall but also tend
to end up with the biggest differences in achievement between
advantaged and disadvantaged pupils.
Following the storm unleashed in Germany's education system
by its below-average Pisa 2000 results, the country has shown
a modest improvement overall, with results now around average.
This is more likely to reflect a greater focus by teachers on
performance than the fundamental reforms provoked by
Pisa, which are only just getting under way. However, Germany's
divided schools continue to produce wide social differences.
* The gender gap in mathematics is today more important at
a psychological than at a cognitive level. While boys continue
to score better than girls in many countries, the difference
is now very small. In contrast, girls questioned in Pisa
showed substantially less interest, less confidence and
greater anxiety in relation to mathematics than boys.
Figures on gender differences are among the selected UK
results that the OECD has published, and these indicate that
in this country there is no significant gender gap in maths
performance, but a wider-than-average gap in attitudes. This
underlines the need to focus on building girls'
confidence in the mathematical abilities that they are now
clearly displaying.
* Narrow tests of achievement are not the only measures of
the outcomes of secondary education. The most innovative
aspect of Pisa 2003 was to try for the first time to measure
general skills internationally, by testing students on their
problem-solving abilities.
Based on tasks such as deciding how to troubleshoot a
malfunctioning fridge using an instruction manual, or planning
a travel itinerary under complex constraints, this module
demonstrated that it is possible to make robust comparisons of
such skills across cultures.
Although in most cases country rankings for problem solving
were similar to those for reading, mathematics and science,
this was not always so.
Analysis of the results confirms that Pisa as a whole is
measuring something other than simple mastery of the
curriculum, the focus of the more traditional Trends in
International Mathematics and Science Study being published next
week.
For secondary heads with an interest in new approaches to
assessing outcomes in 14-19 education, this may be the most
important reason for participating in Pisa next time.
Donald Hirsch is an international consultant on education
policy