The Report on the 2010 Pan-Canadian Assessment of Mathematics, Science, and Reading (PCAP) landed on our staffroom table this week. The overall results were very good news, but as is usually the case when these system-wide testing results are released, the media sifted through the mounds of data to focus the public's attention on some bad news. This time around, it was, among other things, the growing performance gap between boys and girls, particularly in reading.
CEA by no means takes the issue of increasing gender performance gaps lightly, but varying literacy rates and gender issues are hardly new in education, and the public needs to understand that there are many boys who are excelling in their studies and many girls who are not. In his Education Canada article, "Failing Boys: Beyond Crisis, Moral Panic, and Limiting Stereotypes," University of Western Ontario's Wayne Martino explains the dangers of constantly reinforcing and exaggerating gender differences.
As typically happens with the media dissection of the PISA scores, negative headlines send some Ministries of Education searching for someone to blame, as was the case in Quebec with decreased reading scores and in Manitoba with overall lower scores. But consider the often-heard comment from math teachers that one of the biggest challenges they face is students' difficulty reading and understanding written problems. If reading scores told the whole story, how is it that Quebec's reading scores were down while its math scores ranked among the top in Canada?
As I have stated in the past, we don't use a single measure or "test" to diagnose a medical issue. When a person coughs, we don't jump to the conclusion that the person has a serious lung disease; we insist on multiple tests to ensure a proper diagnosis. In education, however, a single test is too often treated as if it reveals every problem and weakness. It is long overdue that, when it comes to diagnosing challenges, strengths, and weaknesses in education, we move away from the overly simplistic and incorrect "one test says it all" mindset. Parents, educators, and students deserve better than this.
As Jodene Dunleavy articulated in her Education Canada article, Ranking Our Responses to PISA 2009:
“I’d like to put some of the blame for public reaction to PISA scores on the OECD, itself. It’s easy to feel intimidated by the volume of figures and explanations that flow from each assessment. But this alone cannot explain the overwhelming amount of attention paid to a single, league-style table ranking the 65 participating countries on combined reading, mathematic, and scientific literacy scores. Witnessing how results get taken up in the public domain, it is hard not to feel that the PISA country rankings have become the Olympics of the education world.”
So around our water cooler, many questions about PCAP arose: Are we asking the right questions on these performance assessments of school systems? PCAP, just like PISA, measures how well students are doing in math, reading, and science, but it doesn't attempt to take approaches to learning, student engagement, and teaching environments into account when comparing provinces.
It's encouraging that there is considerable debate in Europe about the need to have PISA measure creativity, but what else should we be measuring? What about measuring student engagement? Equity? And a breakdown by subpopulation groups, not just boys and girls?
We think more could and should be measured. Do you think so?