IF YOU ARE A PARENT, you likely recall your baby’s wellness visits with the family doctor. During these frequent visits, the focus was on monitoring physical growth and development to identify any issues that might require attention. By plotting children’s individual growth curves and comparing them with standardized charts, physicians can determine whether satisfactory growth is occurring and when intervention is needed.
Just as health practitioners monitor physical growth with charts, educators can monitor learning growth with universal screeners. A universal screener is a short assessment administered to all students in a classroom that tests sub-skills predictive of a more complex skill. In the case of literacy screeners, the sub-skills of phonological awareness and alphabet knowledge are assessed because they are essential for decoding (Biel et al., 2022). Similarly, numeracy screeners include counting, number relations, and basic arithmetic items because they are components of early numeracy (Devlin et al., 2022).
To discover any learning gaps and ascertain progress, universal screeners are often conducted three times over the course of a school year. Initial screener use provides educators with a baseline of students’ abilities that can be used to guide instruction and flag students who may require additional support. A key tracking feature of universal screeners is that target scores are connected with a child’s grade or age. Students who meet target scores are progressing as anticipated, whereas students who are close to or below target scores may be struggling with foundational skills. For students who do not meet target scores, instructional support or intervention is recommended. Tracking students’ progress over time allows teachers, schools, or districts to evaluate the effectiveness of interventions, ultimately determining whether learning gaps have been reduced or closed.
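To make this tracking logic concrete, here is a minimal sketch in Python of how screener scores might be compared against grade-level targets across the three administrations. The target scores, "close to target" buffer, and sample student scores are hypothetical illustrations, not actual PNSA values or scoring rules.

```python
# Hypothetical grade-level target scores; students at or above the target
# are progressing as anticipated.
TARGETS = {1: 20, 2: 28, 3: 34}

# Scores within this buffer below the target count as "close to target."
BUFFER = 3

def screen(score: int, grade: int) -> str:
    """Classify one screener score against the grade's target."""
    target = TARGETS[grade]
    if score >= target:
        return "on track"
    if score >= target - BUFFER:
        return "monitor"      # close to target: watch closely, support as needed
    return "intervene"        # below target: instructional support recommended

# Three administrations across the school year show whether a gap is closing.
scores = {"fall": 18, "winter": 26, "spring": 29}   # one hypothetical Grade 2 student
for term, score in scores.items():
    print(f"{term}: {score} -> {screen(score, grade=2)}")
# fall: 18 -> intervene, winter: 26 -> monitor, spring: 29 -> on track
```

Tracked over a year, a trajectory like this one (intervene, then monitor, then on track) is the kind of evidence a teacher, school, or district can use to judge whether an intervention has reduced or closed a learning gap.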
As an educator or school administrator, you are probably aware of the importance of literacy screeners for identifying and supporting children who may be at risk for reading difficulties. You may also be familiar with specific literacy screeners and interventions used in classrooms. Unfortunately, information about numeracy development is not as readily available as it is for literacy, since the field of mathematics cognition research is still fairly new. However, strides in understanding mathematics development and growing interest in supporting early mathematics learning have led to the creation of evidence-based universal numeracy screeners. This article features one such numeracy screener, the Early Math Assessment at School (EMA@School), which is licensed by Alberta Education as the Provincial Numeracy Screening Assessment (PNSA). PNSA data was used by the Grande Prairie Public School Division in Alberta to target students for intervention and assess whether the interventions worked as intended to remediate identified students.
Math learning is cumulative (increasingly complex skills build on one another), so it is important to lay a strong foundation in early mathematics (Sarama & Clements, 2009).
Moreover, when young children begin school, they vary widely in their mathematical understanding and skills, meaning an achievement gap already exists in Kindergarten (Duncan et al., 2007; Jordan et al., 2009). If this math achievement gap is not addressed early, children with less mathematical proficiency will continue to fall behind their peers. Numeracy screeners are a practical tool for identifying students who require extra instruction and intervention to grasp foundational skills.
Data from Alberta suggests that early achievement gaps may have been exacerbated during the COVID-19 pandemic (Child and Youth Well-Being Panel, 2021). In response, Alberta Education has implemented literacy and numeracy screeners for children in Grades 1–3 to help students get back on track. While there is an abundance of evidence-based literacy screeners available for classroom use, comparable numeracy screeners are lacking. For this reason, Alberta Education contacted the Mathematical Cognition Lab (MCL) at Carleton University in the spring of 2021 to discuss the creation of a provincial numeracy screener. Based on their expertise in mathematical development, the MCL constructed a grade-specific numeracy screener for students in each of the primary grades. The screener consists of items assessing number knowledge, number relations, and number operations, because these related subdomains each tap into early mathematical knowledge yet independently predict mathematics learning (Devlin et al., 2022). Although many tasks are common across grades (with questions reflecting grade-specific knowledge), there are some differences between grades. For students in the younger grades, the screener places a stronger emphasis on number knowledge and number relations (e.g. counting, number naming, comparing numbers), while for older grades there is more of a focus on number operations (e.g. arithmetic fluency, principles of addition).
During the 2021/2022 school year, classroom teachers administered the Provincial Numeracy Screening Assessment (PNSA) to over 50,000 primary students. The Grade 1 PNSA involved both one-on-one testing (5 minutes per student) and whole-class testing (15 minutes). For Grades 2–3, the PNSA was implemented in a whole-class setting during a 20- to 30-minute session. Target scores for the PNSA were established so the tool could be used to identify students who were at risk for low achievement in mathematics. Alberta Education developed intervention lessons to accompany the PNSA, including activities for each numeracy sub-skill that encompass concrete-to-representational-to-abstract instructional processes, explicit mathematical vocabulary, and mathematical symbols. The Alberta government provided funding to school divisions to both administer the PNSA and provide needed interventions for students.
In September 2022, the Grande Prairie Public School Division (GPPSD) in Alberta administered the PNSA to students in Grades 2 to 3. Grade 1 students completed the PNSA in January 2023 to allow for some initial mathematics instruction and acclimatization to school prior to screening. To meet the needs of students in the division, the Numeracy Coordinator designed a comprehensive early numeracy intervention approach.
Figure 2. A math mat used to capture student learning during early numeracy intervention.
Once students demonstrate strong understanding in most sub-skills, they are discharged from the intervention program. Students who exhibit little to no growth have lessons adapted to meet their needs. In the case where growth is limited even after lesson adaptations, intervention work is used as evidence that students may require formal psychoeducational assessments for learning disabilities.
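The adaptation-and-discharge process just described is essentially a decision procedure. Here is a minimal sketch of it in Python; the "most sub-skills" threshold and the parameter names are hypothetical stand-ins for the division's actual criteria, which the article does not specify.

```python
# Hedged sketch of the intervention exit/escalation logic (threshold and
# names are illustrative assumptions, not GPPSD's documented rules).
def next_step(strong_subskills: int, total_subskills: int,
              growth_observed: bool, lessons_adapted: bool) -> str:
    """Suggest the next step for a student in the intervention cycle."""
    if strong_subskills / total_subskills > 0.5:
        return "discharge from intervention"        # strong in most sub-skills
    if growth_observed:
        return "continue current lessons"
    if not lessons_adapted:
        return "adapt lessons to the student's needs"
    # Limited growth even after adapted lessons: intervention records become
    # evidence supporting a referral for formal psychoeducational assessment.
    return "gather evidence for psychoeducational assessment referral"

print(next_step(5, 6, growth_observed=True, lessons_adapted=False))
# -> discharge from intervention
```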
Overall, the intervention is being well received by the school community. One educational assistant commented on the success of the program: “Because we see the students daily, in small groups, we can target the help they need more individually, giving them the opportunity to ask questions and learn in a small-group setting at their level.” From a classroom perspective, a Grade 2 teacher noted that “the kids come back from intervention with more confidence and willingness to take risks!” To date, intervention tracking data indicates that 431 students (across 15 schools) have received targeted support. Of those students, 330 (77 percent) have advanced to meet target scores in most numeracy sub-skills, earning a discharge from the targeted support. The remaining 101 students, although considered “still at risk” after the six-week cycle, made significant gains, specifically in the sub-skills of number line and computations.
As the GPPSD continues to enhance their early numeracy intervention design, they are focusing on fostering greater collaboration between classroom instruction and the early intervention program to reinforce the learning in both environments. There is no question that numeracy screeners are a powerful tool for helping educators focus on foundational learning needed for future mathematics and life success.
For more information about numeracy screeners and early numeracy intervention, check out these resources:
Assessment and Instruction for Mathematics (AIM) Collective website: www.aimcollective.ca
Fuchs, L.S., Newman-Gonchar, et al. (2021). Assisting students struggling with mathematics: Intervention in the elementary grades (WWC 2021006). National Center for Education Evaluation and Regional Assistance (NCEE), Institute of Education Sciences, U.S. Department of Education. http://whatworks.ed.gov/
Youmans, A., & Colgan, L. (Eds.). (in press). Beyond 1, 2, 3: Strengthening early math education in Canada. Canadian Scholars Press.
Biel, C., Conner, C., et al. (2022). How does the science of reading inform early literacy screening? Virginia State Literacy Association. https://literacy.virginia.edu/sites/g/files/jsddwu1006/files/2022-03/How%20Does%20the%20Science%20of%20Reading%20Inform%20Early%20Literacy%20Screening9888e091cc0c17d238d1c54ce31de7afc4bbc396863e07e1d942a4505c5a17a0.pdf
Child and Youth Well-Being Panel. (2021). Child and youth well-being review: Final report. Government of Alberta. https://open.alberta.ca/publications/child-and-youth-well-being-review-final-report#summary
Devlin, D., Moeller, K., & Sella, F. (2022). The structure of early numeracy: Evidence from multi-factorial models. Trends in Neuroscience and Education, 26. doi:10.1016/j.tine.2022.100171
Duncan, G. J., Dowsett, C. J., et al. (2007). School readiness and later achievement. Developmental Psychology, 43(6), 1428–1446.
Jordan, N. C., Kaplan, et al. (2009). Early mathematics matters: Kindergarten number competence and later mathematics outcomes. Developmental Psychology, 45(3), 850–867. doi:10.1037/a0014939
Sarama, J., & Clements, D. H. (2009). Early childhood mathematics education research. Taylor & Francis.
First published in Education Canada, September 2023
CONCERNS WITH ACADEMIC DISHONESTY have intensified with the advance of artificial intelligence (AI) technologies. Now, students can enter essay questions into bot-technology, like ChatGPT, to generate text-based responses that can appear to be authentic student work. While these AI bots cannot generate novel or creative ideas, they can synthesize existing knowledge and organize it into logical arguments.
We are now entering what we would call the third epoch of academic integrity. The first relates to the period preceding digital technology, the second coincides with the gradual use of Information Communication Technology (ICT), and the current epoch includes advanced and responsive ICT including AI applications. In many respects, these AI applications have ushered in a new age of plagiarism and cheating (Xiao et al., 2022). So, what should educators do next?
Cheating and artificial intelligence
Estimates of cheating vary widely across national contexts and sectors. For example, more than 50 percent of high school students in the United States reported some form of cheating that could include copying an internet document to submit as part of an assignment and/or cheating during a test (Eaton & Hughes, 2022). Cheating in Canada is also reported by more than half of high school students, with higher percentages (73 percent) reported for written assignments (Eaton & Hughes). In both Canada and the U.S., the incidence rates for undergraduate students are significantly lower (approximately five percent), but are still a noteworthy issue. What is less known is how the recent launch of ChatGPT by OpenAI will impact cheating in both compulsory and higher education settings within and outside of Canada. Perhaps in recognition of this potential issue, OpenAI’s terms of use state that “you must be at least 13 years old to use the Services. If you are under 18, you must have your parent or legal guardian’s permission to use the Services” (OpenAI, 2023).
Identifying cheating that uses ChatGPT remains a formidable challenge for popular plagiarism-detection tools. For example, one study found that 50 essays generated using ChatGPT were sophisticated enough to evade traditional plagiarism-checking software (Khalil & Er, 2023). In other studies, ChatGPT achieved the mean grade on the English reading comprehension national high school exam in the Netherlands (de Winter, 2023) and passed law school exams (Choi et al., 2023). Given that ChatGPT reached 100 million active users in January 2023, just two months after its launch, it is understandable why some have argued AI applications such as ChatGPT will precipitate a "tsunami effect" of changes to contemporary schooling (García-Peñalvo, 2023).
Current policy responses
Not surprisingly, there are opposing views on how to respond to ChatGPT and other AI language models. Some argue educators should embrace AI as a useful tool for teaching and learning, provided the application(s) is cited correctly (Willems, 2023). Others assert that additional training and resources are needed so that educators can better detect cheating (Abdelaal et al., 2019). Still others suggest that the educational challenges posed by AI must ultimately lead to assessment reforms that prevent students from using AI to complete their assignments (Cotton & Cotton, 2023). Even with likely further advances in cheating-detection software, schools at all levels need to rethink their pedagogical and assessment approaches to respond to a continually evolving information world, one in which computers and technology are increasingly capable of synthesizing and organizing information.
Interestingly, some educators are actively exploring how to incorporate AI into their teaching and assessment methods. Fyfe (2022) describes a “pedagogical experiment” in which he asked students to generate content from a version of GPT-2 and intentionally weave this content throughout their final essay. Students were then asked to confront the availability of AI as a writing tool and reflect on the ethical use of emergent AI language models. This example suggests AI could be used to not only support student learning of core content, but extend critical digital literacy skills, too.
To put a finer point on this, learning taxonomies can clarify how integrating AI into teaching and learning can raise the level at which students engage with their learning. Take for instance a simple learning taxonomy like I.C.E. (Fostaty-Young & Wilson, 1995), where the "I" represents a student's capacity to remember and work with basic content ideas (e.g. facts, figures, knowledge); the "C" represents a student's ability to make connections between ideas (e.g. to organize ideas into a logical argument, to compare and contrast, to synthesize); and the "E" represents a student's capacity to make extensions. The "extensions" level of learning, which has also been referred to as "higher-order thinking," is where novel, critical, and creative outputs occur. At this point, AI is unable to achieve extensions; thus this becomes the role and function of students: to understand the ideas presented by texts, teachers, and AI bots and use them to establish novel extensions.
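As a rough illustration of the taxonomy's structure, the sketch below encodes the three I.C.E. levels as a simple data structure and tags a few prompts by level. The prompts and tags are our own hypothetical examples (not drawn from Fostaty-Young and Wilson), meant only to show where AI can assist and where extensions remain the student's role.

```python
# The three I.C.E. levels and what each asks of the learner.
ICE = {
    "Ideas":       "recall and work with basic content (facts, figures, knowledge)",
    "Connections": "relate ideas (organize arguments, compare and contrast, synthesize)",
    "Extensions":  "produce novel, critical, creative outputs (higher-order thinking)",
}

# Hypothetical prompts tagged by level; current AI handles Ideas and
# Connections well, while Extensions remain the learner's contribution.
prompts = [
    ("List the key events of the period.", "Ideas"),
    ("Compare two historians' accounts of those events.", "Connections"),
    ("Propose and defend an original interpretation of the period.", "Extensions"),
]

for prompt, level in prompts:
    print(f"[{level}] {prompt} -> {ICE[level]}")
```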
The challenge, of course, is that not all curriculum expectations require extension-level learning. Sometimes students need to learn and demonstrate their learning of basic ideas and connections. So, the question remains, how can AI and assessment work together to support all types of learning? Phrased differently, how can a teacher ensure their teaching and assessment practices are not susceptible to academic integrity issues?
Rethinking assessment with artificial intelligence in mind
There is little doubt that the emergence of ChatGPT represents the “tip of the iceberg” in terms of the use of AI in society and in education. In preparation for its growing presence in education, we provide six key practices to deter the misuse of AI in assessment and evaluation processes.
These six key practices have already proven to support more effective learning and assessment. Importantly, their continued use may either work with AI where appropriate, or deter the use of AI when necessary. For example, while we do not devalue the importance of learning goals that include foundational knowledge and conceptual understanding, the presence of AI creates an opportunity to identify more complex learning goals. These goals may build on teaching and learning that uses AI but then requires learners to evaluate or create extensions in their learning. Similarly, clarity of criteria helps students focus their learning, and the co-creation of criteria with students can lead to discussions regarding those aspects of an assignment or task that may use AI to supplement the work. Feedback cycles better reflect the processes we actually use to complete complex tasks, and the use of peer, self, and teacher feedback improves the quality of work and learning. While AI may be incorporated within early drafts, the revision process will require additional learner effort. Collectively, performance and authentic assessments require a high level of student engagement to demonstrate a number of integrated learning outcomes. As above, AI may supplement some of the foundational aspects of the work and/or task, but the final product will be illustrative of higher-order and critical thinking skills. Lastly, collaborative grading has a number of benefits, including greater assessment consistency, reduced bias, and, we would argue, a greater potential for detection of inappropriate use of AI and/or plagiarism.
Taken together, these practices not only make clear the role of AI in teaching, learning, and assessment, but also encourage students to be more agentic in the learning and assessment process. Effective learning requires students to engage actively, collaboratively, and orally in their learning and to demonstrate their learning through effective assessment. Assessment practices that are embedded within the learning process (formative assessment) will help reduce academic integrity concerns while encouraging more authentic and alternative assessments. The current debate around the presence of AI technologies such as ChatGPT must quickly shift from one of concerns about assessment integrity to one about how we use these technologies in our classrooms to enable our students to demonstrate more complex and valued learning outcomes. In this respect, AI provides the necessary impetus to spur more forward-thinking assessment practices and policies within provincial and national education systems.
Abdelaal, E., Gamage, S. W., & Mills, J. E. (2019). Artificial intelligence is a tool for cheating academic integrity. Proceedings of the AAEE2019 Conference. ResearchGate.
Choi, J. H., Hickman, K. E., et al. (2023). ChatGPT goes to law school. Minnesota Legal Studies Research Paper No. 23-03. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4335905
Cotton, D. R. E., & Cotton, P. A. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. EdArXiv Preprints. https://edarxiv.org/mrz8h
de Winter, J. C. F. (2023). Can ChatGPT pass high school exams on English language comprehension? ResearchGate. https://www.researchgate.net/publication/366659237_Can_ChatGPT_pass_high_school_exams_on_English_Language_Comprehension
Eaton, S. E., & Hughes, J. C. (2022). Academic Integrity in Canada. Springer. https://library.oapen.org/bitstream/handle/20.500.12657/53333/1/978-3-030-83255-1.pdf#page=99
Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society. doi.org/10.1007/s00146-022-01397-z
García-Peñalvo, F. J. (2023). The perception of Artificial Intelligence in educational contexts after the launch of ChatGPT: disruption or panic? Ediciones Universidad de Salamanca. https://repositorio.grial.eu/handle/grial/2838
Khalil, M., & Er, K. (2023). Will ChatGPT get you caught? Rethinking of plagiarism detection. arXiv. doi.org/10.48550/arXiv.2302.04335
OpenAI. (2023). Terms of use. https://openai.com/policies/terms-of-use
Willems, J. (2023). ChatGPT at universities – the least of our concerns. SSRN Journal. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4334162
Xiao, Y., Chatterjee, S., & Gehringer, E. (2022). A new era of plagiarism: The danger of cheating using AI. Proceedings of the 20th International Conference on Information Technology Based Higher Education and Training (ITHET). https://ieeexplore.ieee.org/abstract/document/10031827
Although statistics vary across provinces, Canadian schools in general were closed for a total of 51 weeks during the pandemic – placing the nation in the highest bracket globally for school closures (UNESCO, n.d.). Unsurprisingly, provincial policymakers across Canada continue to be concerned about the negative short- and long-term impacts of the disruptions created by these closures on students’ learning and have been focusing their attention on improving achievement in traditional content areas such as reading, writing, mathematics, and science. The dominant political and popular media discourse is that students have fallen behind and need to “catch up” in these foundational subject areas. Certainly, international research suggests that this concern is well-founded, and that students’ learning has been significantly disrupted during the pandemic.
Learning loss within and outside of Canada
International research is beginning to document the learning losses students experienced due to school closures, shifts toward online and hybrid learning, and other impacts associated with successive waves of the pandemic. Although these studies are relatively sparse, findings from a limited number of Western nations such as the Netherlands (Engzell et al., 2021), Germany (Depping et al., 2021), Belgium (Maldonado & De Witte, 2021), and the U.S. (Bailey et al., 2021) suggest that learning essentially stalled during the pandemic. These studies also suggest that the pandemic may have exacerbated existing inequalities, with lower socio-economic status (SES) students falling even further behind their more affluent peers. Collectively, the emergent literature suggests that learning and the academic resilience of students globally have been particularly threatened during the pandemic (Volante & Klinger, 2022a).
Unfortunately, Canadian large-scale assessment research, which is used to draw reliable and comparative measures of student achievement and system-level judgments, has been particularly constrained during the pandemic. Indeed, the administration of international, national, and provincial assessments has been adversely impacted, with numerous cancellations during the initial waves of the pandemic. Further, those assessment programs that did occur met with high levels of non-participation, impacting sampling designs. These challenges have made it difficult to make provincial comparisons of student achievement. Those studies that do exist are confined to select geographical contexts such as Toronto (Toronto District School Board, 2021) or offer predicted losses extrapolated from summer learning research (Aurini & Davies, 2021). Collectively, the available research in Canada has been unable to quantify, with any level of certainty, the pandemic's impact on students' achievement.
Nevertheless, Canadian education systems, including higher education settings, are reporting important gaps in student learning, suggesting that learning losses have occurred for students in K–12 and beyond. International organizations such as the Organisation for Economic Co-operation and Development (2020) and the United Nations Educational, Scientific and Cultural Organization (2022) have also reported that lower-SES students and their families have been unable to secure the resources needed to succeed in online and hybrid learning environments amidst the turmoil created by the pandemic. These challenges are also well documented in popular media stories across Canada and reflected in the policy interventions adopted by various provincial governments to try to support our most vulnerable student groups. Nonetheless, the relative success of these efforts and interventions has not been measured.
Policy trends across Canada
One of our recent studies provided a pan-Canadian analysis of educational policy developments from January 2020 to December 2021 that were specifically related to academic resilience in the wake of the initial waves of the pandemic. Not surprisingly, our findings suggested greater attention was devoted to academic issues – namely learning outcomes in cognitive domains – with relatively fewer policies and resources to support mental health and general physical wellness (Volante et al., 2022c). Our analysis also suggested a general lack of policy differentiation in terms of how specific resources and supports were to be directed within provincial educational jurisdictions to help support at-risk students. Without such differentiation, we have argued, the potential of these resources will not be fully realized, and they will undoubtedly fail to stem the growing disparities between low- and high-SES student populations that have been amplified by the pandemic.
Collectively, our policy research underscores the importance of reconsidering how provincial education systems operate to achieve positive outcomes for students and how these outcomes might be "measured" and evaluated. Although a great deal of work is already underway by provincial testing bodies, large-scale assessment measures currently do not offer a multifaceted picture of student development. Conversely, international achievement measures such as those administered by the OECD and/or the International Association for the Evaluation of Educational Achievement (IEA) provide background questionnaires that attempt to capture student-, school-, and system-level factors that may be related to student outcomes. As an example, these international measures increasingly include factors and outcomes that could be classified as non-cognitive skills, drawing attention to the importance of non-cognitive and mental/physical wellness outcomes.
Challenging the dominant discourse
One would be hard-pressed to find any educational stakeholder group that does not recognize the importance of student achievement in traditional subject areas such as reading, writing, mathematics, and science. Nevertheless, the pandemic has highlighted how achievement in traditional cognitive domains offers a necessary, but incomplete, picture of the pressing challenges that face Canadian youth. As Volante, Klinger, and Barrett (2021) noted in a previous Education Canada article, Canadian children reported disturbing trends in relation to mental health and general wellness. Similarly, the promotion of non-cognitive skills such as growth mindset represents an increasingly important set of key attributes that contribute to resilient students, schools, and education systems in general (Volante & Klinger, 2022b).
Thus, provincial policymakers are faced with an important dilemma: they must develop a comprehensive vision of student learning and well-being that emphasizes cognitive (i.e. reading, writing, mathematics, science achievement), non-cognitive (i.e. learning habits, self-beliefs, growth mindset), and general wellness outcomes in the face of dominant historical and political ideologies that have focused almost exclusively on standards-based education reform. Indeed, standards-based reform and achievement of the "three R's" (reading, writing, arithmetic) have largely driven large-scale reform agendas in much of the Western world for more than half a century (Volante et al., 2022d). In spite of the concerns and evidence that have arisen with respect to the impact of the pandemic, every provincial jurisdiction in Canada continues to adhere to a standards-based reform model that emphasizes a hierarchy of subject areas and achievement outcomes. The importance of other critical factors and outcomes may be acknowledged, but it receives little if any formal attention, and there is little effort to build on the information being collected by international assessments that now include such measures.
Rethinking large-scale reform
It is often written that adversity is a catalyst for growth and change. Certainly, the last several years have likely presented the most formidable adversity that many students, families, and teachers may face in their lifetimes. Rather than return to status quo approaches that emphasize a narrow set of achievement outcomes, this critical epoch in our collective history offers an opportunity to rethink our approaches to large-scale education reform to provide a more nuanced recognition of the skills and attributes required to face the challenges of the future. Certainly, any student, parent, or teacher will tell you that more than academic content was lost during the pandemic – capturing and addressing the multifaceted complexity of this “loss” requires a new conception of what quality education looks like in a post-COVID world. Failing to recognize the latter could undoubtedly result in students catching up in academic content, only to fall behind in the non-cognitive skills they require for further success. It is time for us to look for ways to link provincial, national, and international assessments and surveys in order to obtain the data needed to examine the complexity of learning that supports the whole child.
This research is supported by the Social Sciences and Humanities Research Council of Canada (SSHRC).
Aurini, J., & Davies, S. (2021). COVID-19 school closures and educational achievement gaps in Canada: Lessons from Ontario summer learning research. Canadian Review of Sociology, 58(2), 165–185. doi.org/10.1111/cars.12334
Bailey, D. H., Duncan, G. J., Murnane, R. J., & Yeung, N. A. (2021). Achievement gaps in the wake of COVID-19. Educational Researcher, 50(5), 266–275. doi.org/10.3102/0013189X211011237
Depping, D., Lücken, M., et al. (2021). Kompetenzstände Hamburger Schülerinnen vor und während der Corona-Pandemie [Competence levels of Hamburg pupils before and during the Corona pandemic]. DDS – Die Deutsche Schule, Beiheft, 17, 51–79. www.pedocs.de/volltexte/2021/21514/pdf/DDS_Beiheft_17_2021_Depping_et_al_Kompetenzstaende_Hamburger.pdf
Engzell, P., Frey, A., & Verhagen, M. D. (2021). Learning loss due to school closures during the COVID-19 pandemic. Proceedings of the National Academy of Sciences of the United States of America, 118(17), 1–7. www.pnas.org/content/pnas/118/17/e2022376118.full.pdf
Maldonado, J. E., & De Witte, K. (2021). The effect of school closures on standardised student test outcomes. British Educational Research Journal. doi.org/10.1002/berj.3754
Organisation for Economic Co-operation and Development (2020). The impact of COVID-19 on student equity and inclusion: supporting vulnerable students during school closures and school re-openings. OECD Publishing. https://oecd.org/education/strength-through-diversity/OECD%20COVID-19%20Brief%20Vulnerable%20Students.pdf
UNESCO. (n.d.). Dashboards on the Global Monitoring of School Closures Caused by the COVID-19 Pandemic. https://covid19.uis.unesco.org/global-monitoring-school-closures-covid19
Volante, L., & Klinger, D. A. (2022a). PISA, global reference societies, and policy borrowing: The promises and pitfalls of academic resilience. Policy Futures in Education. https://journals.sagepub.com/doi/10.1177/14782103211069002
Volante, L., & Klinger, D. A. (2022b, January 27–28). Assessing non-cognitive skills to promote equity and academic resilience [Paper presentation]. Advancing Assessment and Evaluation Virtual Conference. https://aaec2022.netlify.app/_main.pdf
Volante, L., Klinger, D. A., & Barrett, J. (2021). Academic resilience in a post-COVID world: a multi-level approach to capacity building. Education Canada, 61(3), 32–34.
Volante, L., Lara, C., Klinger, D. A., & Siegel, M. (2022c). Academic resilience during the COVID-19 pandemic: a triarchic analysis of education policy developments across Canada. Canadian Journal of Education, 45(4), 1112–1140.
Volante, L., Schnepf, S., & Klinger, D. A. (Eds.) (2022d). Cross-national achievement surveys for monitoring educational outcomes: Policies, practices, and political reforms within the European Union. Publications Office of the European Union. https://data.europa.eu/doi/10.2760/406165
First published in Education Canada, April 2023
The close coupling of content standards with standardized testing brought about by Margaret Thatcher's U.K. government in the late 1980s ushered in a new form of school accountability that has become the dominant education reform model used by industrialized governments around the world (Volante, 2012). Student performance on large-scale assessment measures is intended to hold school administrators and teachers accountable while also providing the "data" to spur system- and school-level improvements. Indeed, every single Canadian province and territory administers and reports achievement in relation to these external provincial measures and also participates, to varying degrees, in prominent international tests such as the Organisation for Economic Co-operation and Development's (OECD) Programme for International Student Assessment (PISA).
The OECD, and PISA in particular, has increasingly exerted a pronounced influence on the governance of education systems both nationally and internationally, forcing policymakers to grapple with consistent and recurring challenges, such as achievement gaps between different segments of their national and provincial student populations (Volante et al., 2018). One key achievement gap that is often reported is the difference between high and low socio-economic status (SES) groups. The OECD provides national profiles – which can also be disaggregated at the provincial level – to indicate the differences in student achievement that exist between the most and least socioeconomically disadvantaged students. Countries that possess a higher relative share of low-SES students who achieve well are said to have a more academically resilient population.
As previously suggested, academic resilience is the notion that some students achieve favourable outcomes despite coming from lower-SES backgrounds. Yet, to the average person, the word "resilient" means something quite different. Indeed, the Oxford dictionary defines resilience as "the ability of people or things to recover quickly after something unpleasant, such as shock, injury, etc." Clearly, the general notion of resilience is much broader than what is typically captured and often widely reported when discussing students and education systems. At the same time, the unprecedented and generational challenges presented by COVID-19 have provided an important impetus to reconsider how we support students in contemporary schools. It is highly likely that the pandemic has created even greater inequities with respect to students' access to learning resources and supports due to socio-economic factors. Further, these inequities will impact more than just academic outcomes.
The COVID-19 pandemic has underscored the growing necessity of broader notions of academic resilience that recognize important mental health as well as physical well-being concerns in children and adolescent populations – elements of resilience that are typically not captured by large-scale assessment measures. Rarely does a day go by without public recognition of the daily struggles students, particularly those from poorer households, are facing given the upheaval caused by school closures, social isolation, and familial economic losses – to name but a few factors. Certainly, federal resources such as the recently released Guide to Student Mental Health During COVID-19 (Health Canada, 2020) underscore some of the growing challenges students are facing during the pandemic.
Canadian children may be facing an impending epidemic of mental health and general wellness struggles when the virus eventually subsides. For example, a pan-Canadian survey of the impact of the COVID pandemic on physical activity found less than 5 percent of children 5–11 years old and 0.6 percent of youth 12–17 years old were meeting required guidelines (Moore et al., 2020). Similarly, a recent study by the Hospital for Sick Children in Ontario found a staggering 67–70 percent of children/adolescents experienced deterioration in at least one of six mental health domains during the COVID-19 pandemic: depression, anxiety, irritability, attention, hyperactivity, and obsessions/compulsions (Cost et al., 2021). What steps should be taken by policymakers, district leaders and educators, and teacher education institutions to help alleviate these challenges, both in the short and long term?
There are scant examples within Canada where policymakers report on the overall mental health and/or physical well-being of their student populations. Although international and provincial metrics of student proficiency in such content areas as reading, mathematics, and science abound, measures of health and wellness are typically not reported in a consistent manner or given the same status in policy communities.
Perhaps the Health Behaviour in School-Aged Children (HBSC) survey can serve as a model for provincial/territorial education systems. The HBSC is a cross-national survey conducted in collaboration with the World Health Organization (WHO) that is administered every four years and focuses on the health and well-being of young people (Public Health Agency of Canada, 2020). This survey is administered in Canada to 11-, 13-, and 15-year-olds, and includes much broader aspects of health than those reported by large-scale assessments such as PISA. Provincial governments could develop a similar annual survey to provide more timely comparative data to inform policy directions during and after the pandemic. Ultimately, we need to provide and recognize markers of mental health and physical well-being with the same reverence that has been traditionally ascribed to student achievement measures.
In addition to policy reform considerations, building capacity for more healthy schools will ultimately depend on effective leadership and teaching practices. On a national level, Physical and Health Education Canada's 2021–2024 strategic plan outlines the organization's aim to emerge from COVID-19 with clearly defined intentions targeting pan-Canadian education efforts to improve the well-being of children and youth (Physical and Health Education Canada, 2021). The proposed efforts are wide-ranging and build on current (e.g. Schonert-Reichl & Williams, 2020) and former (e.g. Ontario Ministry of Education, 2017) provincial-territorial healthy schools policy and practice priorities targeting student well-being (i.e. development of national competencies, innovations, testing, sharing of best practices, and professional development). For their part, school districts across Canada will need to devote the necessary resources and provide appropriate professional development opportunities so that teachers are equipped to better identify and intervene in the worsening physical and mental health crisis that is facing Canadian education systems.
Now more than ever, congruent efforts to expand universal screening measures will need to be deployed to address these worrisome trends. Screening in elementary and secondary schools would primarily involve the completion of student questionnaires (American Psychological Association, 2020) – albeit with notable adaptations to account for the unique challenges encountered during distance learning and social isolation. Emerging from this pandemic era of education, measures considerate of academic, personal, physical, cultural, and social circumstances should be considered to promote greater understanding of the relationships between student success and student well-being. Such surveys in provincial and territorial education systems could complement the school climate surveys that many schools and districts already use, but with the necessary specificity to provide more granular data for specific student interventions. Just as governments around the world have echoed the importance of contact tracing to tackle the pandemic, district leaders and teachers will need timely data to help direct their resources and efforts to where they are needed most.
Lastly, any discussion on addressing mental health and physical well-being issues must include considerations for the education of future teachers. Pre-service education programs across Canada will need to continually evolve to ensure aspiring teachers are equipped with the latest pedagogical approaches in both face-to-face and distance learning environments. Beyond instructional time devoted to traditional subject areas (e.g. language arts, mathematics, science), this means a greater recognition of health and physical literacy, which are regarded as desired outcomes of health and physical education teaching and as important system and school health promotion goals (Physical and Health Education Canada, 2021).
Indeed, the COVID-19 pandemic has illustrated with brute force that our traditional hierarchy of subjects, content knowledge, and associated skills are insufficient to "measure" the effectiveness of schools if we expect our students to thrive in a post-COVID world. Collectively, capacity-building efforts geared toward provincial policy reforms, districts and schools, and teacher education institutions represent a viable multi-level approach to strengthening the resilience of student populations. As one interesting example of a response to this growing need, New Zealand is developing a well-being curriculum that will be integrated across other curriculum streams.
Given the novelty of the current circumstances facing teachers and school-aged children across Canada, there will be a need to research and document the relative impact of different school structures and pedagogical approaches being utilized in online, blended, and socially distanced classroom learning environments. Understanding how these different structures and strategies interact and impact the most at-risk student populations will require an iterative process where recent research findings inform teaching and teaching informs subsequent research. This cyclical process is essential to establish a “best-practice” literature that policymakers and school leaders can draw upon to support their students in rapidly evolving school environments.
The effectiveness of these structures and approaches, and the impact of policies and programs utilized during the COVID-19 pandemic, must be rigorously researched and judged against a broader range of success criteria. Unfortunately, most of the current research in many international contexts appears to be focused on “learning loss” – which is essentially the examination of average drops in standardized test scores in different education systems during the pandemic (Kaffenberger, 2021). Yet virtually every school-based practitioner would acknowledge and echo the significant mental health and physical well-being “losses” that students are also experiencing. Certainly, it is possible for our education systems to attend to both the academic and mental health and physical wellness issues of Canadian youth to help build resilient schools.
First published in Education Canada, September 2021
American Psychological Association (2020, September 22). Student mental health during and after COVID-19: How can schools identify youth who need support? www.apa.org/topics/covid-19/student-mental-health
Caldwell et al. (2020). Physical literacy, physical activity, and health indicators in school-aged children. International Journal of Environmental Research and Public Health, 17. www.mdpi.com/1660-4601/17/15/5367
Cost et al. (2021). Mostly worse, occasionally better: Impact of COVID-19 pandemic on the mental health of Canadian children and adolescents. European Child Adolescent Psychiatry. https://pubmed.ncbi.nlm.nih.gov/33638005/
Ontario Ministry of Education. (2017). What we heard: Well-being in our schools, strength in our society. Government of Ontario. www.edu.gov.on.ca/eng/about/wb_what_we_heard_en.pdf
Health Canada (2020). Guide to Student Mental Health During COVID-19. Government of Canada. www.mentalhealthcommission.ca/sites/default/files/2020-09/covid_19_tip_sheet_student_mental_health_eng.pdf
Kaffenberger, M. (2021). Modelling the long-run learning impact of the Covid-19 learning shock: Actions to (more than) mitigate loss. International Journal of Educational Development, 81. www.sciencedirect.com/science/article/pii/S0738059320304855
Moore et al. (2020). Impact of the COVID-19 virus outbreak on movement and play behaviours of Canadian children and youth: A national survey. International Journal of Behavioral Nutrition and Physical Activity, 17. https://doi.org/10.1186/s12966-020-00987-8
Physical and Health Education Canada. (2021). 2021-2024 PHE Canada Strategic Plan: A clear path forward. https://phecanada.ca/about/strategic-plan
Public Health Agency of Canada. (2020, November). Health Behaviour in School-Aged Children. Government of Canada. www.canada.ca/en/public-health/services/health-promotion/childhood-adolescence/programs-initiatives/school-health/health-behaviour-school-aged-children.html
Schonert-Reichl, K., & Williams, J. (2020). Assessment of Schoolwide Well-Being & Social-Emotional Learning. Well-Being BC. www.wellbeingbc.ca/images/school-toolkit/Well-Being-BC—Assesment-Tool—FULL-Workbook.pdf
Volante, L. (2012). Educational reform, standards, and school leadership. In L. Volante (Ed.), School Leadership in the Context of Standards-Based Reform: International Perspectives (pp. 3–20). Springer.
Volante, L. (Ed.). (2018). The PISA Effect on Global Educational Governance. Routledge.
Over the past two decades, classroom assessment for formative purposes has taken centre stage in curriculum policies, assessment standards, and professional learning conversations across Canada. Educators have increasingly embraced and implemented formative assessment approaches under the umbrella of assessment for learning. This endorsement of formative assessment is unsurprising as it has been shown to improve student achievement, metacognition, and motivation (Hattie, 2013) and to aid in promoting more equitable outcomes for lower-achieving students (Black & Wiliam, 2009). As a result, assessment is now more integrated within teaching and learning in Canadian classrooms than ever before, fostering an assessment culture that prioritizes ongoing feedback and the growth mindset (Shepard, 2019).
In this article, we ask: Is the ongoing pandemic and related disruptions to Canadian education threatening the positive assessment culture we’ve worked so hard to create? Classroom teachers have been thrust into online or blended learning contexts, often with little notice and preparation, forcing them to reimagine and transform their instructional and assessment practices in real time. While summative assessment remains a required component of schooling, many teachers are challenged by how to adapt formative assessment practices for online and blended learning contexts. With screens now interfacing so much of our interactions with students, the teaching profession faces pressing questions such as: How can we effectively engage assessment for learning with our students when learning is mediated by technology? How do we maintain the spirit of formative assessment when we don’t “see” or “hear” our students as much as we used to, if at all? and How do we avoid reverting to an emphasis on summative assessment in our online and blended classrooms?
Indeed, emerging research confirms these concerns. A recent report by Doucet and his colleagues (2020) highlights five key assessment-related challenges currently experienced by educators around the world.
While there is cause for optimism that these global challenges in K–12 education will dissipate, it is likely that current conditions will persist for some time and that elements of online or blended learning will take on greater precedence in future classrooms. As we collectively pivot and adapt our approaches to assessment in online and blended learning contexts, it is critical that classroom teachers, school and system leaders, policymakers, researchers, and teacher educators come together to rethink how we assess in online and blended K–12 learning. The changes we make now will not only serve our current students but also inform how we integrate technology in assessment after the pandemic subsides. In this vein, we offer three foundational tenets to help us move forward together to continue to foster a productive assessment culture – whether in online, blended, or face-to-face classrooms.
In rethinking how we assess online, it is essential to remember that we need not start from scratch. Instead, we can look beyond the surface of tried-and-true assessments to their underlying first principles and focus on: the learning we need to assess from our students (purpose), how students may demonstrate their learning (process), and what it is that we might do with that assessment information (use). In keeping an assessment's purpose, process, and use top of mind, we are better positioned to incorporate technological tools that enable the assessment – whether in a face-to-face, blended, or online learning context. For instance, technology has now made it easy to capture how an idea or a product has evolved over time. Students can save multiple iterations of their work easily and with minimal burden, and easily share their work with others for feedback. Adopting these new technological options serves to strengthen the validity of the assessment by generating richer and more numerous observations of the learning, allowing for better triangulation of student assessment data. While there is no shortage of technological tools and applications that support assessment for learning in K–12 contexts – which can be overwhelming in and of itself – emphasizing first principles allows us to confidently select the tool that best aligns with our assessment's purpose, process, and use.
The shift to online and blended learning has created new professional challenges for educators and led to new stresses for students and families. Now more than ever, we must keep students’ needs, interests, and well-being at the centre of all teaching and assessment activities. Whether face-to-face, blended, or online, we can use assessment for learning to build relationships with our students and support their sense of inclusion. Leveraging one of the greatest strengths of assessment for learning – its capacity to build community – is essential in this time of prolonged isolation. Engaging students in peer feedback processes through group work, collaborative problem-solving activities, breakout rooms, or discussion boards can be a productive place to start. In addition, ongoing teacher-student conversations provide opportunities to celebrate successes, provide feedback, and show our students care and compassion. This supports not only their growth as learners but also their development as individuals. Further, allowing multiple opportunities for students to engage in self-assessment and reflection can serve to support their self-regulation and mental health. And importantly, aside from providing feedback on learning itself, assessment for learning can enable teachers’ ongoing communication with students and their parents/guardians to ensure students have access to the necessary infrastructure to support their learning and address potential equity or social-emotional issues students might be facing.
As we experience and reflect on the sudden and widespread shift to online and blended classrooms, we must continue to learn together about how assessment supports our teaching and our students’ learning and well-being. In the decade prior to the pandemic, educators were increasingly exploring and using various new technologies in the classroom to support teaching, learning, and assessment. However, the pandemic has forced our hand as a profession, requiring widespread adoption of technology in all aspects of our teaching practice, including assessment (Doucet et al., 2020). So, while systematic professional learning about assessment was already essential prior to 2020, the global pandemic has magnified the need to help classroom teachers develop new strategies and leverage technology to support both formative and summative assessment in online and blended contexts. As a result, it is critical that educators across classrooms, schools, boards, regions, and provinces engage in various forms of professional learning and inquiry – whether through self-directed learning, collaborative professional inquiry, professional webinars, social media networks, or formal coursework. We are all learning at a rapid pace that has been forced upon us by circumstances beyond our control, but we can use this opportunity to grow and develop as individuals and as a profession. We particularly encourage a system-wide approach to professional learning within boards and engagement with online professional learning networks such as the Canadian Assessment for Learning Network1 (CAfLN) so that educators may generate relevant and appropriate insights to their local contexts.
While education is constantly evolving and changing, the global pandemic has intensified the need to adapt how we teach and assess our students to better support their learning, development, and well-being. As a profession, we have been forced to change, expand, and redefine the assessments we were doing face to face into online and blended learning contexts. We must acknowledge the steep learning curve we are experiencing as a profession and prioritize open and honest communication among all stakeholders involved – students, teachers, school leaders, system leaders, policymakers, parents/guardians, other professionals, researchers, and teacher educators. We must also pause to celebrate our successes and progress to date in forging new territory in K–12 assessment amid a challenging time. Moreover, we must continue to allocate time, resources, and supports as we continue to learn and grow in our understanding and practice of assessment.
The pandemic has altered many things in our world, but it has not eradicated what we know about the value and importance of assessment for learning and our shared desire to sustain a productive assessment culture in schools and classrooms. Nor has it changed the spirit of assessment, which is captured by the etymology of the word assess itself: to “sit beside” our learners and support their learning. At the end of the day, we need to continue to come together as an education community to use research-based practices to collectively navigate online assessment and promote a positive assessment culture that transcends context.
1 Canadian Assessment for Learning Network: www.cafln.ca
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21, 5–31.
Doucet, A., Netolicky, D., Timmers, K., et al. (2020). Thinking about pedagogy in an unfolding pandemic: An independent report on approaches to distance learning during COVID19 school closures to inform Education International and UNESCO. Education International. https://issuu.com/educationinternational/docs/2020_research_covid-19_eng
Hattie, J. (2013). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.
Shepard, L. (2019). Classroom assessment to support teaching and learning. The ANNALS of the American Academy of Political and Social Science, 683(1), 183–200.
The stress that teachers experience has many sources. Teachers often report feeling undervalued, underprepared, unsupported, overworked, isolated, and marginalized. A Canadian Teachers’ Federation (2014) survey found that eight in ten teachers feel their stress levels have increased over the previous five years. Reasons cited for elevated stress levels include an inadequate amount of preparation time, limited opportunities for planning and collaboration with colleagues, lack of professional development opportunities, and insufficient support with curriculum implementation. Stress impacts teacher well-being, social emotional competence, and the ability of teachers to provide the emotional and instructional support that is characteristic of safe, caring, learning environments.1
When educators experience burnout, the emotional exhaustion that sets in can negatively impact teacher instruction and elevate student stress levels, leading to further mental health issues in the classroom.2 Teacher burnout can cause students to perceive the classroom as negative, which can lead to increased behavioural problems – and it is problem behaviours that teachers cite as a major source of job dissatisfaction, turnover, and lowered expectations. In Canada, stress and burnout have contributed to the high number (25 to 30 percent) of teachers who leave the field entirely within the first five years of their career.3
A district-wide model for supporting teacher well-being
The OECD (2016) has conceptualized an integrated School as Learning Organization model4 with seven key dimensions, described in Figure 1 below. The Surrey School District (SD 36) has developed and implemented a district-wide shared vision for learning – Learning by Design – that puts into action each of the OECD dimensions.
A key aspect of Learning by Design is our commitment to supporting ongoing professional learning through research, innovation, and collaboration as part of our four Priority Practices.
Below we describe a collection of district-wide initiatives, aligned with our four Priority Practices, that promote educator well-being, self-efficacy, and connectedness. These initiatives were underpinned by research into best practices, and designed and implemented by multi-department teams. Within our systems change approach, we have formalized research, monitoring, and evaluation activities in place to ensure program challenges are identified and addressed.
Strategies for supporting teacher well-being
Social Emotional Learning for educators
When educators demonstrate social emotional competencies (SEC), it translates into positive impacts on teacher-student relationships, a healthier classroom climate, greater student Social Emotional Learning (SEL) and academic achievement, and implementation of more effective SEL and classroom management strategies. Research finds that teachers high in SEC tend to have lower levels of workplace stress and a greater sense of personal accomplishment and satisfaction in their careers, and are often better able to provide emotional and instructional support to students.5
Professional development grounded in SEL not only gives educators skills to improve their own mental well-being and emotional resiliency; it also equips them to bring those skills to the learners in their schools.
The Surrey Schools’ SEL for Educators (SEL4E) initiative has been offering a series of workshops that provide educators with tools to develop and increase their SEC.
Educators who took part in this initiative reported that a better understanding of SEL practices gave them the confidence to bounce back from a challenging day, that the skills they learned will support them in overcoming challenges in their career, and that they can identify, assess, and implement strategies that will support their SEC and resiliency.
A district of our size faces barriers to offering SEL professional development opportunities on a wide scale. While it’s encouraging that many school staff request to take part in SEL4E, the number of requests often exceeds what our resources will allow. The district is committed to increasing capacity and providing more collaborative and experiential learning opportunities.
Supporting an SEL climate in schools
Educational research finds that system-wide approaches can contribute to improved social emotional competencies for student populations and school staff.6 The Surrey SEL Initiative is a school-wide systems approach to integrating academic, social, and emotional learning (SEL) across the district as a means to promote equitable outcomes for all students, while also promoting teacher wellness and resiliency.
To embed SEL practices at the level of a school community, our approach incorporates capacity building, collaboration, and reflection on the pedagogical practices teachers adopt and implement in their classrooms. It uses resources from the CASEL Guide to School-wide Social and Emotional Learning to support our schools in assessing the SEL climate of the student population and staff, and to guide the planning and monitoring of implemented SEL programs and activities.
Teachers and administrators form a school-based SEL Team and participate in a collaborative process, supported by the District SEL Team. Each school receives release time for one teacher (SEL Lead) one day per week to support the implementation of quality SEL practices. The SEL Lead works side-by-side with classroom teachers in their school to co-plan and co-facilitate the implementation of SEL-based curriculum to enhance learners’ skills development.
This model builds on relationships that already exist within a staff. While staff turnover is always a concern, we have found that the nature of this work is taking root and proving sustainable beyond any one individual. Evaluation efforts are currently underway to better understand the impacts of the Surrey SEL Initiative and the kinds of district supports that are needed at all levels of the school system.
Comprehensive mentoring support
Mentor 36 is a joint initiative between Surrey Schools (SD 36) and the Surrey Teachers’ Association. It aims to foster a sustained culture of collaborative mentorship at every site in Surrey, supporting professional growth and a sense of belonging for Surrey teachers through strength-based, non-evaluative learning opportunities.
Currently, Mentor 36 has 91 elementary mentors and 126 elementary mentees, as well as 136 secondary mentors and 110 secondary mentees. Feedback data revealed that the majority of mentees (59 percent) were comfortable sharing their vulnerabilities and discussing instructional strategies with their mentors. About two-thirds of mentors (67 percent) felt they had developed a safe and trusting mentoring relationship, as evidenced by their mentees reaching out and connecting, feeling comfortable and safe to ask for support, and discussing classroom issues.
At a Mentor Learning Session, mentors created a collaborative drawing depicting the benefits of teacher mentorship.
Collaboration time and efficacy building as strategies to address stress
A review of best practices by the Centre for the Use of Research Evidence in Education7 finds that collaboration that enables co-learning, co-development, and joint work for educators is linked to improved professional knowledge, skills and practices, and increased expectations for student learning. Increased collaboration and communication between teachers often both reduces feelings of isolation and improves teachers’ knowledge and skills. These in turn lead to lower teacher burnout and greater feelings of capability to meet challenges in the classroom.8
Two of Surrey’s programs to support students also support teacher wellness by providing opportunities to collaborate and share learning to better meet the needs of specific students:
Since the 2012-2013 academic year, the Inner-City Early Learning initiative has provided early literacy and numeracy support for “at-promise” students in Kindergarten and Grade One who may be demonstrating challenges in literacy and/or numeracy development. Specialist teachers in literacy and numeracy work collaboratively with classroom teachers in 26 inner-city schools.
Grade One students collaborate to build (with onset-rime trains) and record words with the support of their Early Literacy Teacher.
These early literacy and numeracy supports have provided a success story for our inner-city schools. One challenge is that only three of the original 25 early literacy and early numeracy teachers from the 2012-2013 start-up are still with the program. This rate of turnover impacts the professional capacity building of the department as a whole, and it is more difficult to maintain connections and trust when relationships between early learning support staff and classroom teachers have to begin anew each school year. Despite these challenges, this initiative continues to successfully support some of the district’s most vulnerable learners.
Knowing Our Learners (KOL) is a collaborative initiative that offers School Teams (teachers and administrators) the opportunity to enhance instructional and assessment practices with the support of Curriculum and Instructional Helping Teachers. This year, 175 teachers in 55 schools participated in the program, which focuses on knowing our learners’ stories, strengths, and challenges, and on using this knowledge to design effective learning environments that keep social and emotional well-being, quality assessment, and evidence of learning at the heart of every child’s learning experience.
KOL activities led to teachers feeling supported and more aware of ways to align their learning intentions with student needs and to make informed decisions about teaching strategies and interventions based on research and other supports. Beyond the impacts KOL sessions had on teacher practices, the initiative was grounded in teacher-to-teacher relationships, open dialogue, and peer reflections. KOL was effective because it was “situated in relationships.” As one participant commented, KOL sessions “made me feel working with [my colleague] made me more [sensitive to] what is working and what’s not. I felt it improved our capacity to know how each other worked to help students.”
Knowing Our Learners Honeycomb Activity: Teachers wrote down strengths and stretches of their own Core Competencies and connected them with those of their students.
Sustainability challenges and the road ahead
While many Surrey schools have embraced the initiatives discussed above, maintaining motivation and engagement year over year can be challenging. Additionally, while we have found school-wide buy-in at many sites, in some instances only pockets of teachers have engaged in these collaborative opportunities. We are working to overcome these challenges by embedding rigorous research and evidence collection activities throughout each initiative, in order to make adjustments for future planning of district-led initiatives. Our process and outcomes-based program evaluations are formative in nature, grounded in best practices in evaluation design, and include the stories and reflections of school staff.
The district also ensures that participation in its professional learning opportunities is predicated on school staff either forming or being part of a team at their school site (e.g. SEL School-based Teams). This can still pose a challenge when team members bring to the initiative different goals and competencies on which they wish to focus. We address these differences by connecting Helping Teachers and teacher mentors with specific school sites to help facilitate team discussions and bring clarity around the team’s goals and activities.
With a district focus on social and emotional learning, we are addressing not only the health and well-being of students, but of our teachers as well. We believe that healthy and capable teachers foster healthy classrooms and give students the best possible chances for success. Surrey Schools’ Learning by Design framework reflects our systems approach to cultivating well-being. By building teachers’ SEL competencies, we are able to address teacher burnout and stress, support teacher wellness, elevate teacher autonomy and voice, and build resiliency through cross-departmental collaborations.
The primary authors wish to recognize contributions from: Gloria Sarmento (Director of Instruction, Building Professional Capacity Department), Taunya Shaw (District Helping Teacher, Social and Emotional Learning), Courtney Jones (District Helping Teacher, Inner City Early Learning Support), and Sharon Lau (District Helping Teacher, Mentoring)
Illustration: iStock
Photos: Courtesy authors
First published in Education Canada, September 2020
1 M. T. Greenberg, J. L. Brown, and R. M. Abenavoli, Teacher Stress and Health: Effects on teachers, students, and schools, (Edna Bennett Pierce Prevention Research Center and Pennsylvania State University, 2016).
2 K. A. Schonert-Reichl, “Social and Emotional Learning and Teachers,” The Future of Children 27, no. 1 (2017): 137–155.
3 P. A. Jennings and M. T. Greenberg, “The Prosocial Classroom: Teacher social and emotional competence in relation to student and classroom outcomes,” Review of Educational Research 79 (2009): 491-525.
4 OECD, What Makes A School a Learning Organisation? A guide for policy makers, school leaders and teachers, (Paris: OECD, 2016).
5 K. A. R. Richards, C. Levesque-Bristol, T. J. Templin, and K. C. Graber, “The Impact of Resilience on Role Stressors and Burnout in Elementary and Secondary Teachers,” Social Psychology of Education 19 (2016): 511-536.
6 G. G. Bear, S. A. Whitcomb, M. J. Elias, and J. C. Blank, “SEL and Schoolwide Positive Behavioral Interventions and Supports,” in Handbook of Social and Emotional Learning: Research and practice (New York, NY: Guilford Press, 2015).
7 CUREE, Understanding What Enables High-Quality Education: A report on the research evidence (London: Pearson Education, 2016).
8 Richards et al., “The Impact of Resilience on Role Stressors and Burnout in Elementary and Secondary Teachers.”
In reference to the article Straight to the Source: Student self-assessment of learning skills and work habits
Does your group still have burning questions or comments? Encourage them to send their questions to Stefan Merchant, lead author for this article.
Email: stefan.merchant@queensu.ca
Or, send us your message and we’ll make sure one of our experts gets in touch.
Work habits or learning skills, a section on virtually all Canadian report cards, are well suited to student self-assessment. Not only does self-assessment give teachers access to students’ thought processes about their work, but it is also positively associated with better self-regulation, motivation, and achievement.
I started my teaching career in British Columbia. As a novice teacher, I was especially concerned that my grades be accurate and defensible. When it came time to complete my first set of report cards, I had a spreadsheet containing student scores on every test, quiz, and assignment they had completed that semester. Using a weighting formula that had been communicated to students and parents, I calculated everyone’s grade. I knew some kids would be disappointed, and others elated, but in either case, I could defend my grading decision with rigorously collected evidence. However, there were other assessment criteria I had to report on for which I had collected no evidence. This section of the report card was called “Work Habits” and was supposed to reflect… Now that I think about it, I am not sure what it was supposed to reflect. I considered student effort, and the number of missing or late assignments when rating students’ work habits, but I had no idea if this is what I was supposed to be doing, or if my colleagues considered the same things when they assigned work habits grades. My experience is a common one.
Teachers in all Canadian provinces assess and report aspects of student performance beyond academic achievement. This portion of the report card has different titles in different provinces such as “Learning Skills and Work Habits” in Ontario and “Cross-curricular Competencies” in Quebec, but it is always there. My own research shows that teachers often struggle to complete these assessments and frequently have little evidence on which to base their ratings of students’ work habits. As a result, teachers rely upon their holistic judgment of the student. As an assessment researcher, I should be appalled by this practice, but instead I am sympathetic. For one, I did the same thing when I taught. For another, assessing constructs like collaboration, responsibility, and organization is hard to do.
Assessing work habits is so difficult because they are not easy to observe. When a teacher assesses writing quality, they have a concrete student product in front of them to evaluate. If they want a second opinion, they can show the writing to a colleague, or leave the work and reread it later. But how can a teacher make a reliable judgment of the effort put into the work? Teachers must observe 20 or 30 students at the same time, making it impossible to determine how much time a student puts into a task. Even if a teacher were to focus on a single student for an entire lesson, how can she discern daydreaming from deep thought? The problem is further compounded if the student worked on a task outside of the classroom. Does completed homework reflect an engaged, conscientious student or a helicopter parent?
When trying to assess skills such as self-regulation, researchers most often rely on self-report instruments. These are typically questionnaires completed by the student. Self-report instruments are useful measurement tools because they access respondents’ internal thought processes. This characteristic makes them ideal for classroom assessment of work habits. Many teachers recognize that students have important things to say about their learning and ask students to self-assess their work habits. For example, research conducted in Ontario suggests that about half of high school teachers ask students to complete self-assessments of their work habits. Teachers use these self-assessments to prompt students’ reflection on their learning and improve their metacognition. However, they do not consider the results of the self-assessment when assigning the work habits ratings on report cards – despite believing their students complete their self-assessments honestly.
Student self-assessment of work habits is an encouraging trend, but this practice could be even more widespread. Not only does self-assessment give teachers access to students’ latent thought processes about their work, but it is also positively associated with better self-regulation, motivation, and achievement. However, for student self-assessments of work habits to be effective, it is necessary to implement practices such as co-constructing definitions and expectations. How can a student give a reasonable self-assessment of their collaborative skills if they do not have a firm grasp of what collaboration means? Co-creating definitions of the skills being assessed not only improves students’ self-assessments, but also ensures the teacher and students share a common understanding of that skill. One way of achieving a shared understanding is to create a rubric for each skill with students. The rubric breaks down components of the skill and describes differences in skill levels. When students are able to use the rubric, they develop an understanding of what separates different ratings. For instance, how is excellent collaboration different from good collaboration?
Another tactic known to improve the effectiveness of self-assessments is to have teachers give students guidance on how to complete the self-assessments. Researchers have consistently found that when students are given training on how to self-assess, not only are their assessments more accurate, but the learning benefits are greater. The benefits are even greater when teachers provide their own assessment of the work habits and discuss their assessment, and the self-assessment, with the student. These discussions are critical to helping students become better assessors of their own skills. Students (especially younger ones) are often not accurate reporters of their own skills. Most students tend to overestimate their abilities, and this is especially true for weaker students. Paradoxically, the strongest students are the ones most likely to give themselves low ratings. If we want students to develop an accurate self-concept, they need to be privy not only to the teacher’s ratings, but also the rationale behind them. As a former teacher, I recognize finding time to have these discussions is difficult, but doing so will improve not only students’ ability to self-assess but also their work habits.
Lastly, the students’ self-assessments should appear on the report card. Not only does doing so give them meaningful input into the report card, but it also allows the parents to see the student’s self-assessment. This has the potential to lead to fruitful discussions between parents and children about their work habits.
If you are a teacher, I encourage you to start implementing student self-assessment of work habits now. The information you gain about your students and their self-concept will lead to rich discussions about their learning and work habits and will also enhance your relationship with them. School administrators can highlight teachers who use self-assessment of work habits as role models and support teacher development in this area through training and professional learning communities. I urge superintendents to consider what assessment policies, procedures, and training can help improve teachers’ assessment of work habits. The types of skills that fall under work habits (e.g. responsibility, organization, collaboration, initiative, perseverance) lead not only to better learning, but also to better jobs, relationships, and health. Further, these skills are closely aligned with the broader aims of education espoused by school districts and ministries of education. Given the critical importance of these skills for our students as individuals, and for our society as a whole, it is vital that we help teachers develop the capacity to improve students’ work habits. Helping them understand how to effectively implement student self-assessment of work habits would be an excellent first step.
Download the pro-learning session 1.2 – Assessing Students’ Work Habits
Illustration: Dave Donald
First published in Education Canada, March 2019
As Canadian education systems contemplate how they will reliably and validly measure the learning that goes on in the increasingly complex classrooms within their jurisdiction, it is important that insights, ideas, experiences and research from all regions in the country be examined and shared.
All Canadian provincial jurisdictions develop and administer large-scale assessment programs in education. A variety of factors must be considered at the development stage of the program as a jurisdiction decides what information needs to be collected, how it should be collected and why. In Canada, despite many similarities between provincial large-scale assessment programs, different approaches have been taken recently as jurisdictions make changes, or consider making changes, to these programs. These changes reflect ongoing discussion about the purpose of large-scale assessment.
Generally, in Canada, students are tested in mathematics and language arts in early and late elementary grades as well as in high school. Provincial ministry or department websites cite various purposes for these tests. Data is used to identify areas of need at all levels: provincial, board, school, classroom, and individual students. At the high school level, marks from large-scale examinations contribute to a student’s final course mark in some provinces, while other provinces administer a literacy and/or numeracy test as a graduation requirement. Data is also used to report to the public on the effectiveness of the system, with aggregated results made available to school boards or regions and often posted publicly. Individual student results have been made available to schools, students and/or parents.
Over the last few years, many provinces have implemented or are publicly considering implementing significant changes in their large-scale assessment programs. Although these changes are different in nature, they all reflect a desire on the part of jurisdictions to clarify the purpose and improve the value of large-scale assessment as part of education reform initiatives.
Examples of changes at the elementary level include Alberta’s Student Learning Assessment in Grade 3, which, since 2015, can be administered at any time during the school year and is not used for accountability purposes. The Alberta Achievement Tests in Grades 6 and 9, however, continue to be written at the end of the year and provide data to “report to Albertans how well students have achieved provincial standards at given points in their schooling.”1
As of 2018, the Foundation Skills Assessment in British Columbia includes collaboration and self-reflection activities in addition to traditional written questions.2 At the high school level, B.C.’s end-of-course examination program is being phased out and replaced with a literacy and a numeracy assessment that are requirements for graduation.3
In Nova Scotia, the Grade 12 end-of-course examinations in literacy and in mathematics have been moved to the Grade 10 level.4
In March 2018, the Ontario Ministry of Education released a report of an independent review of the province’s assessment, evaluation, and reporting practices. The review included a close look at the Education Quality and Accountability Office (EQAO), the agency responsible for developing the province’s large-scale assessments. In a letter published at the beginning of the report, the assessment review committee summarizes: “We propose a system of assessment that prioritizes classroom assessments to support each student’s learning and development, engage parents/guardians meaningfully in knowing about their child’s achievements and progress, and enable educators to develop and share their professional practices.”5 The report recommends increased focus on high-quality classroom diagnostic, formative and summative assessment to provide information about individual students, and suggests that schools and teachers should no longer use large-scale individual student data for diagnostic purposes: “ … student reports should clarify this is a snapshot of performance on a system assessment and is not intended for diagnostic or evaluative purposes.”6 It goes on to recommend the discontinuation of large-scale assessment in Grade 3 and the development of a new high school test which would no longer be a graduation requirement. It is recommended that large-scale assessment data from testing all students continue to be collected to identify the needs of groups of students who require further support as well as to report to the public about system performance. A shift in the role of large-scale assessment in Ontario is recommended:
“We propose large-scale provincial assessments that provide public information about the performance of Ontario’s education system overall and to inform future improvements to benefit all students to succeed, including identifying inequities in outcomes for groups of students whose diverse experiences and needs require further attention.”7
These examples of change in various provinces, despite their differences, all indicate a desire to clarify the role of large-scale assessment and to create new developmentally and pedagogically appropriate ways of assessing students in a large-scale format.
As the value of large-scale assessment is widely discussed and as provincial governments implement, or consider implementing, educational reforms, two opposing sides seem to have emerged. On the one hand are those who argue that provincial assessments serve important purposes, including holding school systems to account and providing information about where supports should be placed to improve student achievement. Education department or ministry business plans cite provincial assessment data as a critical measure of the success of important initiatives. School improvement can be measured using provincial data. The public can be informed about how well the education system is doing by reporting aggregated provincial data. Finally, provincial tests provide data on how individual students are doing in relation to provincial standards.
On the other side are those who argue that teachers know their students best and that large-scale assessment data is only a snapshot in time that may or may not reflect individual student performance. Furthermore, large-scale assessments take time and attention away from the real business of classroom teaching, and teachers spend too much time preparing for and administering tests which do not reflect classroom practice. Finally, teachers feel pressure to improve scores, and they may not see any link between improving scores and improving student learning.
Two surveys of public attitudes towards education suggest that this polarization is unnecessary. In Public Education in Canada: Facts, Trends and Attitudes, a 2007 nationwide survey of attitudes towards education by the Canadian Education Association (CEA), 77 percent of Canadians agreed that high school students should be assessed using province-wide tests.8 In Public Attitudes Towards Education in Ontario, a recent OISE survey of Ontarians’ attitudes towards education, 66 percent agreed that each secondary student should be assessed using a province-wide test.9 This survey shows somewhat less support for testing students at the elementary level than at the high school level, although a majority still support testing at this level, with 49 percent agreeing that “every student should be tested” and 19 percent that “a sample of students should be tested.”
While many Canadians see value in provincial large-scale testing, they also value teachers’ work. Seventy percent of Canadians agreed that “teachers are doing a good job,” and 60 percent agreed that high school grades should mainly reflect teachers’ assessments. In the OISE survey, just over half of Ontarians reported being somewhat satisfied or satisfied with the job elementary teachers are doing, and half were somewhat satisfied or satisfied with the job high school teachers are doing. Interestingly, 20 percent responded that they are neither satisfied nor dissatisfied with the job teachers are doing. Fifty-five percent of Ontarians agreed that “high school students’ final grades should mainly reflect their teachers’ assessments, not the results of province-wide tests.” Once again, 20 percent neither agreed nor disagreed.
In general, Canadians see value in large-scale testing, and at the same time they value teachers’ professional judgments in determining student achievement. Each has an important role to play, and they are not necessarily mutually exclusive. The main role of large-scale assessment is to provide consistent province-wide data that can be tracked over time and, at the high school level, to provide a province-wide measure of each student’s achievement in key areas. The main role of teachers’ classroom assessment practices is to provide detailed achievement information that can be used to plan instructional strategies for individual students over the course of an academic year or term. As Lorna Earl writes: “Large-scale assessments and classroom assessments done by teachers both make important contributions to continuous improvement in education. It is important that we continue to support both approaches and ensure that both forms of assessment provide high-quality information that the public can have confidence in and value as fair representations of students’ learning.”10 Similarly, in the OISE report the authors summarize: “… although most want EQAO testing retained as a way of monitoring outcomes, there is little support for ‘high stakes’ province-wide testing that would determine the advancement of individual students. In other words, both province-wide testing and teacher assessments are valued for different reasons.”
Canada’s educational jurisdictions are attempting to clarify the purpose of large-scale assessment programs in different ways. As one province implements new large-scale assessments as a graduation requirement, another considers phasing out an older assessment with a similar requirement. As one province implements significant changes to its Grade 3 assessment program, another considers phasing out its 20-year-old Grade 3 assessment program entirely. It is likely that most provinces are having internal conversations about the purpose of their programs and discussing potential avenues for change.
As jurisdictions consider making changes, it is important not to lose sight of the fact that the value of large-scale assessment data increases over time. Data collected yearly over 20 years is rich in information, since trends can be identified, monitored and acted upon only after several years of data is available. For this reason, changes in a large-scale assessment program must be carefully planned and must take into consideration its value over the long term.
It is also important to note that the cost of developing a large-scale assessment is not tied to the number of students who take the tests. The resources required for the development of high-quality assessment tools are the same for all jurisdictions, regardless of their size. Requirements include subject-matter and psychometric expertise as well as robust item banking, field testing and standard-setting procedures. It takes two years or longer to develop a high-quality test, whether it is administered to 10,000 students or 130,000 students. Furthermore, this development process must be repeated regularly to create new questions for each administration of the test.
As provinces consider changes, best practices in large-scale assessment should be identified so that all Canadian students can benefit from innovative assessment practices tailored to their individual needs, and so that residents of all jurisdictions can rely on high-quality data about their education systems. As Canadian education systems contemplate how they will reliably and validly measure the learning that goes on in the increasingly complex classrooms within their jurisdiction, it is important that insights, ideas, experiences and research from all regions of the country be examined and shared.
Photo: iStock
First published in Education Canada, March 2019
Notes
1 https://education.alberta.ca
2 https://curriculum.gov.bc.ca/assessment-reporting/new-foundation-skills-assessment
3 https://curriculum.gov.bc.ca/provincial-assessment/graduation/literacy
5 C. Campbell, J. Clinton, M. Fullan, et al., Ontario, A Learning Province: Findings and recommendations from the independent review of assessment and reporting (Province of Ontario, March 2018), 2. https://www.oise.utoronto.ca/preview/lhae/UserFiles/File/OntarioLearningProvince2018.pdf
6 Ibid, 70.
7 Ibid, 3.
8 Public Education in Canada: Facts, Trends and Attitudes 2007 (Canadian Education Association, 2007), 8.
www.edcan.ca/articles/public-education-in-canada-facts-trends-and-attitudes-2007
9 Doug Hart and Arlo Kempf, Public Attitudes Towards Education in Ontario 2018: The 20th OISE survey of educational issues (The Ontario Institute for Studies in Education/University of Toronto, 2018), 30. www.oise.utoronto.ca/oise/UserFiles/Media/Media_Relations/OISE-Public-Attitudes-Report-2018_final.pdf
10 Public Education in Canada, 8.
In reference to the article Grading across Canada: Policies, practices, and perils
By Christopher DeLuca, Liying Cheng, and Louis Volante
Does your group still have burning questions or comments? Encourage them to send their questions to Dr. Christopher DeLuca, lead author for this article.
Email: cdeluca@queensu.ca
Or, send us your message and we’ll make sure one of our experts gets in touch.
Comparative judgment (CJ) is an assessment methodology based on comparing two pieces of work at a time. CJ can be used both as a professional development tool to sharpen assessment skills and develop shared standards, and as a way of identifying exemplars of quality that allow students to better understand learning goals and expectations.
Learning is unpredictable, and students do not learn everything they are taught; therefore simply providing learning opportunities in school is not by itself sufficient. Assessment must be embedded within educational settings to bridge teaching and learning. Teachers who fail to assess what pupils do cannot determine whether they are contributing to or impeding pupils’ progress. Assessment data must be elicited, interpreted, and used to adapt classroom practice to better meet students’ needs.
We know that teachers make inferences based on what happens in classroom activities. My colleague, Professor Inga-Britt Skogh, and I were curious to find out what teachers focus on while assessing student progress in the context of Swedish technology education. Our study involved six teachers and a class of 11-year-old students who were undertaking an open-ended design scenario. The students designed a model robot to help them complete various tasks they needed help with at home. They identified problems such as recording NHL games, walking the dog, completing homework, scanning and submitting homework, and baking cupcakes. During classroom activities, the students built Web-based synchronous e-portfolios of their learning and product development, using text, photos, video, and sketches on their iPads. In order to unpack what teachers emphasized as criteria for success, we decided to use a methodology called comparative judgment.1
Comparative judgment is an assessment methodology in which judges compare two pieces of student work and identify which one of them is better, without saying how much better it is. Their decision is based on the quality of the work.
To identify the motives behind the teachers’ choices, the research team asked them to describe the reasons for each choice by speaking into an MP3 recorder while doing the pairwise comparisons. These think-aloud protocols were recorded and transcribed, providing valuable insights into the rationale for each choice. The comparisons yielded a judge consistency above .9, and the qualitative data revealed what the assessors agreed upon: the importance of seeing the narrative of the portfolio/design process. The teachers – our judges – were also invited to a session where we interviewed them.
Comparative judgment has been used in different settings, such as psychology and perfume making, and also quite recently (in the last 10–15 years) in educational settings. Comparative judgment stems from the work of Louis Thurstone who, in the 1920s, tried to find methods for measuring things that are difficult to measure – such as attitudes and opinions, for example how serious a crime is considered to be. Thurstone argued that while people find it hard to say how serious a crime is, they can compare one crime to another relatively easily and reliably in terms of which crime they think is more serious. He explained that when two phenomena are placed in comparison with one another, individuals can use their knowledge to identify, with high fidelity, which one has the superior qualities. He showed that by repeatedly comparing pairs of items, a highly reliable ranking could be made of all the items assessed. Based on his studies, he formulated the Law of Comparative Judgment, which in short means that people are more reliable when comparing two stimuli, such as two crimes, than when giving an absolute value to a single stimulus.2 Laming built on Thurstone’s work and argued that all assessment is a comparison of one thing to something else.3
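For readers who want the quantitative core of Thurstone’s idea, the simplest (“Case V”) form of the Law of Comparative Judgment can be stated compactly. The notation below is a standard textbook presentation, not something given in this article: the probability that item i is judged better than item j depends only on the distance between their latent quality values,

$$P(i \succ j) = \Phi\left(\frac{q_i - q_j}{\sigma\sqrt{2}}\right),$$

where $q_i$ and $q_j$ are the latent qualities, $\sigma$ is the (assumed common) dispersion of individual judgments, and $\Phi$ is the standard normal distribution function. Estimating these latent values from many pairwise judgments is what comparative judgment software does behind the scenes.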
Comparative judgment is an iterative process in which assessors are presented with a series of pairs of objects and select the better of the two. Rather than awarding an absolute score, they make an instinctive holistic judgment based on their expertise, previous experience, and the quality of the object; a ranking of all the objects emerges from many such comparisons.
This iterative process may be undertaken manually: you pick, for example, two random essays from your pile of student work, compare them, pick one as the winner, and repeat the process until the essays have been compared often enough to produce a stable ranking, much as in a Swiss-system tournament. This manual process is cumbersome, especially when you want to work with others. It can be facilitated with comparative judgment software, which presents pieces of student work two at a time and uses a statistical model to convert the accumulated decisions into scores and a ranking. Such software generates quantitative data of high reliability, usually above .8, and also makes it easy to include multiple assessors, as in the sketch below.
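To make that scoring step concrete, here is a minimal sketch – written for this article, not taken from any comparative judgment product – of how pairwise “this one is better” decisions can be turned into a ranked list. It fits a simple Bradley–Terry model, a logistic cousin of Thurstone’s normal model; the function names and the example data are hypothetical.

```python
# A minimal sketch of the scoring step in comparative judgment:
# pairwise "A beats B" decisions are converted into per-item scores
# by fitting a simple Bradley-Terry model with gradient ascent.

import math
import random
from collections import defaultdict

def bradley_terry(items, judgments, steps=2000, lr=0.01):
    """Estimate a latent quality score per item from pairwise wins.

    items: list of item ids
    judgments: list of (winner, loser) pairs from the comparisons
    Returns a dict mapping item id -> score (higher = judged better).
    """
    score = {item: 0.0 for item in items}
    for _ in range(steps):
        grad = defaultdict(float)
        for winner, loser in judgments:
            # Probability the model currently assigns to the observed outcome
            p_win = 1.0 / (1.0 + math.exp(score[loser] - score[winner]))
            grad[winner] += 1.0 - p_win   # push the winner's score up
            grad[loser] -= 1.0 - p_win    # push the loser's score down
        for item in items:
            score[item] += lr * grad[item]
    return score

# Hypothetical use: essays A-E judged in random pairs, as in a manual
# "pick two from the pile" process. The judge is simulated here from a
# pretend ground-truth ordering.
essays = ["A", "B", "C", "D", "E"]
true_order = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}
judgments = []
for _ in range(60):
    x, y = random.sample(essays, 2)
    winner, loser = (x, y) if true_order[x] > true_order[y] else (y, x)
    judgments.append((winner, loser))

scores = bradley_terry(essays, judgments)
ranking = sorted(essays, key=scores.get, reverse=True)
print("Estimated ranking:", ranking)  # typically recovers A, B, C, D, E
```

One design point worth noting: because only differences between scores matter in this kind of model, the judgments need to overlap across pairs but need not cover every possible pairing, which is why comparative judgment systems can reach high reliability with a modest number of judgments per item.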
Research studies on comparative judgment, examining both validity and reliability, have been conducted in Ireland, England, Belgium, Sweden, and the U.S. In these studies, comparative judgment has been used primarily to assess creative work in, for example, technology education and essay writing. The high reliability achieved reflects a professional consensus in the group of assessors. The software systems allow assessors to leave comments explaining why they judged one example to be better than the other. These comments can be used to identify criteria that describe what teachers consider to be important competencies in the subject, and they can also be given as feedback to teachers and students to support learning.
The comparative judgment process can be undertaken wherever is convenient, something that I and my American friends Dr. Scott Bartholomew and Dr. Greg Strimel, from Purdue University, took advantage of when we wanted to investigate differences and commonalities among teachers’ assessment practices in open-ended design scenarios across nations.
Teachers and educational researchers from the U.S., U.K. and Sweden were invited to assess an open-ended design scenario in engineering/technology education (a pill dispenser for a fictional forgetful client) made by 760 high school students. The judges assessed 175 portfolios and 175 products with comparative judgment via the cloud-based software Compare Assess. We undertook the whole study via the Internet, and the judges did their assessments from their home couches or wherever they liked.4
I strongly believe that teachers do everything in their power to move their pupils forward in their learning journeys. However, the direction of forward movement is not always obvious! Still, it is crucial for teachers to be clear about what they expect of their students. Such clarity benefits all students, and especially low achievers, and thus it may dramatically reduce the gap between low and high achievers.
Clarifying the learning intentions, consequences, and results of an assessment increases validity and reliability. But this clarity can be hard to achieve without spoiling the joy of learning. Furthermore, students’ perceptions of learning intentions may not match teachers’ expectations. Addressing this discrepancy is crucial to reducing the gap between low and high achievers, since low achievers generally find it more difficult to interpret what their teachers consider as criteria for success.
How do we overcome the discrepancy between teachers’ intention and students’ comprehension, and at the same time promote thinking by encouraging pupils to express themselves, reflect upon their own and others’ ideas, and expand their horizons? The Irish Technology Education Research Group (TERG) approached this challenge in the technology teacher education program at the University of Limerick by letting students peer assess one another’s work using comparative judgment. Specifically, students were asked to compare two pieces of work, choose which one was better, and provide peer feedback comments in an iterative process. Feedback was matched to each exemplar and given back to the students, who were then given time to consider the feedback and develop their work before handing it in to the teacher for final assessment. The research team was overwhelmed by the students’ positive response to this intervention: students reported that the iterative process of comparative judgment was valuable for improving their understanding of the nature of technology – much better than rubrics, by their own account.
The TERG study5 also reported how valuable students found providing and receiving peer feedback via comparative judgment. Follow-up of students’ progress suggested that low-achieving students benefitted more than high-achieving students from seeing exemplars of other students’ work, as the low-achieving students made the greatest leap.
There are different ways to share learning intentions. Using comparative judgment to identify exemplars of quality work is one example. More traditionally, a teacher can post learning intentions on the blackboard and then have the pupils copy them into their workbooks, where they will likely never be reviewed again.
One popular approach involves the use of rubrics. However, rubrics are often written in teacher-friendly language, such that students and teachers may interpret them in different ways. Furthermore, I wonder why rubrics are so often divided into three columns. Is learning always a three-step process? I therefore prefer exemplars to rubrics. This preference is not just based on what I like – the advantage of exemplars is considerable, as they articulate learning intentions in a richer way.
Using exemplars is like wine tasting, where you actually taste and discuss the wine. Rubrics, by comparison, are like reading a review of a wine without smelling or tasting it. By sharing exemplars from different contexts, educators can help students explore the true construct more deeply. Annotated exemplars give students an understanding of what quality looks like, especially when exemplars of different quality can be contrasted. Exemplars of student work may also promote discussion among learners. Using exemplars to explicate expectations and criteria for success for students is not cheating; instead, it is a way to invite students into a discussion of quality. Exemplars are valuable for learning, especially when used as part of instruction and in open-ended and problem-solving tasks, and have been found to reduce cognitive load.6 They have the greatest impact on learners at a lower level of mastery, and the effect on learning decreases as expertise grows. Therefore, evidence suggests that students gain the most when exemplars are presented at the beginning of the learning journey.
Using comparative judgment software systems is one way of working with exemplars. However, the software is not required. Figure 1 is from a Japanese secondary classroom. Technology teachers used these exemplars in dialogue with their pupils to articulate different levels of quality in electronics work, comparing the three exemplars to each other and to the students’ own work.
Figure 2 is from an arts classroom in Sweden. The teacher has illustrated the national criteria for grading with sunflowers of different quality. I showed these exemplars at a workshop on formative assessment and at first the participating teachers all agreed – but then suddenly a man raised his hand and objected to the shared consensus that the sunflower at the top was of highest quality. He informed us that he was more into abstract art, and therefore thought the sunflower at the bottom should be rated highest. Then the discussion about quality in artwork really took off; I wish I could have recorded it. The discussion ended in an agreement that is summed up by Winnie the Pooh when he says, “It’s best to know what you’re looking for before you look for it.” With this particular exemplar, the purpose of the task should be clarified. The sad part of this story is that this was the first time these teachers had had the opportunity to discuss this in depth with their peers.
Knowing where they are going makes it easier for students to get there, especially when they know what next step to take and in which direction. Conversely, pupils who are left on their own, trying to decode the mystery path of learning without guidance and opportunities for reflection, may lose interest and opportunities. When students are able to consider exemplars in dialogue with their peers, they may gain a richer understanding of what quality work looks like – just as teachers do when discussing exemplars with their professional peers.
I believe that teachers can develop their assessment literacy and their nose for quality by being exposed to exemplars via the comparative judgment process and by being “forced” to justify their choices and discuss them with others within the profession. And why not start this journey during their teacher education program, by reviewing authentic exemplars and practicing how to provide feedback? How often did you get a chance to see student work during your teacher training, and how often have you had the opportunity to share exemplars with peers? My experience tells me it is not common.
Comparative judgment via digital software is also a fairly easy way to invite others within the profession into your classroom practices. The teachers I have worked with in Sweden were particularly fond of seeing work from students other than their own, as it expanded their horizons. The interviews in the Hartell and Skogh study7 showed that teachers felt the comparative judgment method answered their need to collaborate with other teachers in the assessment process. Comparative judgment is useful both for training and for the ongoing refinement of teachers’ assessment practices. For example, you can investigate whether your standards have changed by blending last year’s students’ essays with the ones you have now, and then checking your comparative judgment outcomes against how you graded the work. To discover how your standards compare to your peers’, you can invite others to participate, then share and discuss your results together. A school in Oxford used this model to build consensus about the quality of student work. The project was initiated by the school head, not for accountability purposes but with the aim of strengthening teachers’ assessment practices to enhance equity for their pupils.
It is easy to get carried away with new approaches, and even though there are multiple applications of comparative judgment, appropriate use should always be kept in mind. Begin with the decisions to be made, then choose what data to collect and present. Depending on what a teacher wants his or her students to learn, the teacher must choose appropriate tasks and exemplars.
The foremost value I see in comparative judgment and exemplars is their ability to serve as a catalyst for discussion. Similar to how wine connoisseurs taste and discuss wine, I see the potential of comparative judgment to foster teachers’ assessment literacy and self-efficacy. Comparative judgment is a useful tool to unpack teachers’ assessment practices, to uncover epistemological values and constructs, and to explicate criteria for success in a much deeper way. Above all, I see great potential in it as a way to invite learners into the mystery of learning.
Original illustrations: iStock
First published in Education Canada, March 2019
1 A. Pollitt, “The Method of Adaptive Comparative Judgment,” Assessment in Education: Principles, Policy & Practice 19, no. 3 (2012): 281–300.
2 L. L. Thurstone, “A Law of Comparative Judgment,” Psychological Review 34 (1927).
3 D. Laming, Human Judgment: The eye of the beholder (London: Thomson Learning, 2004).
4 See e.g. S. R. Bartholomew, E. Yoshikawa-Ruesch, E. Hartell, and G. J. Strimel, “Design Values, Preferences, Similarities, and Differences across Three Global Regions,” in PATT 36. Research and Practice in Technology Education: Perspectives on human capacity and development, eds. Seery, Buckley, Canty and Phelan (Athlone, Ireland: TERG, 2018), 432–440.
5 N. Seery, J. Buckley, T. Delahunty, and D. Canty, “Integrating Learners into the Assessment Process Using Adaptive Comparative Judgment with an Ipsative Approach to Identifying Competence Based Gains Relative to Student Ability Levels,” International Journal of Technology and Design Education (2018); N. Seery, D. Canty, and P. Phelan, “The Validity and Value of Peer Assessment Using Adaptive Comparative Judgment in Design Driven Practical Education,” International Journal of Technology and Design Education 22, no. 2 (2012): 205–226.
6 J. Sweller, “Cognitive Load During Problem Solving: Effects on learning,” Cognitive Science 12, no. 2 (1988): 257–285.
7 E. Hartell and I.-B. Skogh, “Criteria for Success: A study of primary technology teachers’ assessment of digital portfolios,” Australasian Journal of Technology Education 2, no. 1 (2015).
How can we define and grade quality in writing? Ken Draayer has wrestled with this question for 30 years, and concludes that it cannot be captured by a rubric or a list of features. He demonstrates how checking off a series of requirements, though it might earn you 100 percent on the marking scheme, does not add up to good prose.
From accountability reforms beginning in the 1990s in Ontario, I learned that grading is the measurement of student, school and system results to strengthen educational management. From Robert Pirsig’s Zen and the Art of Motorcycle Maintenance I learned that grading is a search for quality and how to care for it.
I had not read Pirsig when I began grading English papers at a private school in the 1970s and had to define “quality” in student work. My undergrad university training and subsequent experience in journalism suggested it was some combination of subject knowledge, coherent organization and audience appeal. On the evening of my first grading experience, however, these categories proved unhelpful as they collided with actual student writing.
I decided to grade by approximating quality-as-a-whole using the stairs leading up to the bedrooms in the house – the further up the stairs, the higher the quality. The top stair, I decided, represented 90 percent. The steps below declined by 10 percent each until I reached a personal floor for failure at 40 percent – my compassion at the bottom level countered by severity at the top. I had never scored 100 percent on an essay; why should my students?
My approximate staircase kept me on task to completion. I then introduced the common categories – content, structure, language – and fudged those more precise 68s, 76s and 83s until I was ready to hand the papers back – though perhaps not ready to respond to “I don’t understand this mark (and neither will my parents)” and “I’ve never got less than 80 in English.” Then and after, the experience of grading remained unsettling, always hedged round by doubts.
In Pirsig’s novel, grading is an ethical dilemma. Phaedrus, a teacher of rhetoric, strives for a shared understanding of quality with his students and confesses his doubts about grades and their actual relationship to quality. At one point, he drops grades entirely. Rules about content, structure and language, he tells his students, were imposed on writing after it was done. Teachers who prescribed, and students who wrote by prescriptions, produced writing that “had a certain syrup, as Gertrude Stein once said, but didn’t pour.” Quality was the goal, but “when you try to say what quality is, apart from the things that have it, it all goes poof.”
In my 30 years teaching in high schools and in pre-service courses at Brock University’s Faculty of Education, I would sometimes read six assignments to set my “spidey sense” about quality before marking. Other times I would read and try to maintain consistency and focus by commenting on quality, one category at a time. Like Pirsig, I wrestled with the supposed relationship between quality and my grades.
And what about grades in math and the sciences, where 100 percent on a test or assignment is entirely possible, and a final of 98.5 not unheard of? In the humanities, we seem to be stuck with imperfection. Why does my 72 percent always require turgid explanations, and elicit wrinkled brows and “whatever”? Why should English marks be such a dead weight on averages needed for university or college applications?
I decided to thwart this notion that quality, in English, was so infected by innumerable and inscrutable sins that perfect marks were out of the question. Writing for Indirections – a journal for English teachers in Ontario – I described my new “publishable” grading method, introducing the prospect of 100-percent essay results to my students.
I commissioned from a local trophy shop a rubber “publishable” stamp and inkpad and explained to my students that, while I would return papers with estimated marks, I would read them wearing an Editor’s visor, looking for that “something,” that quality that attracts readers to read and editors to say, “Let’s publish this!”
The “publishable” deal was this: If you agreed to one or two editorial conferences with me and – in the interests of understanding quality – to apply revisions intended to make the piece more clear and readable, I would then guarantee a final mark of 100 percent to reflect our mutual striving for quality. Real writers don’t work alone. They have editors. It’s a team thing.
The first recipient of the “publishable” stamp was a piece on hacking. The writer was keen and knowledgeable about his theme, but not so much about his writing. But my “spidey sense” again said quality was there and, confident of its eventual improvement, I gave it a 40 percent and exchanged my identity as marker for that of editor – which, if you think about it, is a significant shift in the politics of the classroom. The final product, though not a silk purse, got 100 percent. And I got my first accomplishment as editor.
Perhaps this sounds like a disingenuous game (mark inflation!). But it did produce interest and thoughtfulness about how quality comes about in writing. In some iterations I added that students could request the publishable stamp. I created student editor/writer partnerships and suggested they should share the resulting grade, both being responsible for final results. I recall adding once, for self-protection, that I would only stamp six assignments “publishable” in any one batch because of the increased time imposed on me and on the student writers. I’d like to say that the method had such appeal that I was inundated with editorial work and giving out 100s by the 100s. But I was not. Human nature saved me. The line-up to do drafts and revisions was short.
These variations on the publishable method changed my writing instruction for good. One final variation I had not entirely foreseen. I had a set of papers I’d been putting off marking. One night I put the stack beside the computer and decided, in my Pirsig-ness about Quality, to set aside my red pen and take up the Editor’s visor for every writer. As I read each paper I typed editorial responses to purpose, to the ideas, to coherence, to language – to any aspect, in fact, where I thought I might coax out more quality. I especially conveyed my enthusiasm for their enthusiasm, or suggested a little more enthusiasm, and I remember thinking, when done, what a delight this had been for me – but I had no marks!
Next day, returning the papers, I struck a bargain. Would it seem fair, I asked my students, given the extensive editorial comments and the opportunity to improve results, to grant me some consideration on time management and accept an unexplained mark on the final draft? No categories. Just a quality-as-a-whole mark? And they agreed. After all, what could be more fair?
Grading that supposes strict measurement by rule and precept imposes its own game (less worth playing), in which standards and rubrics presume quality and imply that teaching professionals need not search for it with their students. Under the weight of measurement, quality goes poof.
Grading practices reflect the personal knowledge, intelligence, and ethical sense of teachers. Students know this. Uniform standards and rubrics sweep aside the personal discussion and experimentation with this necessary but complex part of teaching and learning. In its place we have Ontario’s current “Achievement Chart,” the mother of all rubrics, in which, over four levels, there are simply limited, some, sufficient, or thorough quantities of skill or knowledge. Bad-a-boom, bad-da-bing: quantity, not quality. The Ministry of Education could be forgiven for thinking this now defines teacher practice.
Here’s the kind of grading and teaching modeled by accountability: Early versions of the Grade 10 Test of Literacy contained instructions for an Opinion Piece. Students were given a theme (e.g. the Welland Canal) and a list of related facts. The task: write a three-paragraph response organizing selected facts – a kind of Lego approach to writing. A teacher in our board, experienced in EQAO marking, delivered a workshop revealing the Opinion Piece marking rubric and its use by trained markers in Toronto.
On the basis of this workshop, a local teacher, so armed and busy teaching to the test to raise his school’s results, devised the following strategy to constrain the Toronto markers, using their rubric, to award his students 100 percent on the opinion piece. His instruction to them was:
Using grading for accountability and neglecting any notion of quality, thousands of Ontario students were declared “literate” and the system of test and measure was declared a success. But a wise Curriculum Superintendent once reminded me of the old homily: “You can’t fatten a hog by weighing it.”
First published in Education Canada, March 2019
In this edition of Education Canada we look at our assessment practices, with a special focus on the thorny issue of grading. What are grades for and what do they actually tell us? How accurate are they, really? What are the alternatives? And what is the effect of grading on student learning?
Having a bright son with ADHD opened my eyes to some of the real difficulties with grading. He was admittedly tough to assess because of the inconsistencies in his performance. But what were we to make, for example, of the fact that he scored just shy of 80 percent on a Macbeth essay, yet failed the essay assignment? How was that even possible? Well, the assignment included a lot of preparatory and presentation requirements (the bane of any student with ADHD), and the value given to these materials actually outweighed the essay itself. His attention to these details was sketchy. According to the grading scheme, the fail was legit. But it did not reflect either his understanding of the play or his writing ability.
So what a grade should actually measure is one of the first big questions in grading – and the more complex the learning task, the more grading becomes a tricky exercise in judgment. Ken Draayer recounts his struggle to define and measure quality in composition, and to encourage students to strive for improvement. Swedish researcher Eva Hartel discusses the value of comparative judgment and exemplars in helping to arrive at a shared understanding of quality work. Chris DeLuca and his colleagues examine grading practices across Canada, including the complex factors that go into assigning a grade. Another sticky wicket is the fact that grade-based college/university admission requirements make it difficult to change traditional grading practices at the secondary school level. David Burns and his colleagues share their learning from a pilot project in Burnaby, B.C., using portfolio-based university admission as an alternative to grades. Our web exclusive articles consider the use of student self-assessment of “work habits” (Stefan Merchant) and the relevance of knowledge acquisition in the internet age (Myron Dueck).
Whether used as a learning tool or as admission criteria to an elite program of study, assessment and grading practices have a significant impact on our students and on our education systems. This issue challenges us to rethink how we can evaluate learning in a fair and equitable way for all students.
First published in Education Canada, March 2019
Grades are a powerful gatekeeper within our educational system, yet little is known about the consistency of grades across classes, schools, and districts or how grades are constructed, interpreted, and used. In this article, the authors examine grading policies and practices across Canada, looking specifically at current grading policies, what drives teachers’ grading decisions, and the influence of provincial large-scale testing.
There is no denying that grades have a significant impact on the lives of students. From positive outcomes such as boosted self-confidence, admission to university and college programs, and access to funding, to negative outcomes including bullying, lowered self-esteem, and limited career choices, grades not only represent learning but are connected to important social consequences. For some students, the difference between 84 percent and 85 percent on a final grade could mean getting into their desired university and chosen career path; for another, grades could mean the chance to immigrate to Canada, or not. Grades have been, and continue to be, a powerful gatekeeper within our educational system, and across educational systems globally. And yet, little is known about the consistency of grades across classes, schools, and districts or how grades are constructed, interpreted, and used.
Experts point to the inherent subjectivity in grades, and the ample room for error and difference across teachers. In efforts to reduce this subjectivity, provinces, school districts, and schools implement grading policies to promote more consistent grading practices. Policy-based grading encourages alignment between what is taught (i.e. curriculum expectations) and what is assessed. Policy-based grading also provides teachers with explicit criteria to help them distinguish an A from a B or a Level 3 from a Level 2. In this article, we explore grading policies and practices, and their potential perils, across Canada. Drawing on recently published research, we look specifically at what current grading policies are signalling to teachers, what drives teachers’ grading decisions, and how provincial large-scale testing influences students’ grades.
Grading is a longstanding tradition in education, dating back to the imperial exams administered in ancient China. Grading became more formalized in 1792, when William Farish, a tutor at Cambridge University, established it as a quantitative method for efficiently teaching and tracking students. Grades have since become the primary method for summarizing and communicating student achievement.
Grading is the process of collecting and evaluating evidence of student achievement, performance, and learning skills. As any teacher knows, grading is a complex practice that often requires negotiating evidence in relation to curriculum expectations and students’ unique learning progressions. Because grades are used to make public statements to students, parents, and principals about achievement, and are often used for higher-stakes consequences including access to specialized programs and learning supports or admission to university or college programs, grading is an important professional practice with significant implications. Teachers across Canada are expected to follow provincial policies when determining student grades, a practice known as “policy-based grading.”1 However, due to the decentralized nature of educational policies in Canada, research suggests that significant variability in grading practices exists from one jurisdiction to the next.2 This variability is due in part to different priorities within policies across regions and to how policies are interpreted and used by teachers and administrators at classroom and school levels. In examining policies across Canada, we found several important similarities and differences in grading policies:3
Measurement experts suggest that grades should only reflect student achievement of learning expectations. However, when assigning grades, teachers typically include both achievement (e.g. exams, quizzes, class presentations) and non-achievement factors (e.g. attendance, effort, independence), or what is often called “learning skills.” For example, a study by Resh6 found that teachers weighed effort nearly as much (17 percent) as student performance (18 percent). Other researchers have shown that teachers sometimes assign greater weight to non-achievement evidence in their grade construction.7 The effect of including both achievement and non-achievement factors in a single score is that you cannot distinguish what the student knows about the content from the student’s learning skills and behaviour. The result is that grades then provide less valid information for remediation, acceptance for programs, or accurate self-perceptions. Further, research has demonstrated that teachers adopt different weightings of achievement and non-achievement factors based on the contexts and use of grades. For example, teachers’ consideration of student effort appears to be correlated with student ability and behaviour, particularly for low-ability students: teachers give better grades to low-ability students and borderline cases (i.e. students at risk of failing) if they are well-behaved and put effort into their work.
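To make the conflation concrete, here is a minimal sketch in Python. The 60/40 weighting and the scores are invented for illustration (they are not figures from the studies cited above); the point is simply that two very different students can end up with the same blended grade.

```python
# Illustrative only: the weights and scores below are hypothetical.
def final_grade(achievement, learning_skills, achievement_weight=0.6):
    """Blend an achievement score and a learning-skills score (both 0-100)."""
    return achievement_weight * achievement + (1 - achievement_weight) * learning_skills

# Strong content knowledge, weak learning skills...
print(final_grade(achievement=90, learning_skills=45))  # 72.0
# ...versus weak content knowledge, strong learning skills:
print(final_grade(achievement=60, learning_skills=90))  # 72.0 -- indistinguishable
```

The single number 72 is identical in both cases, which is exactly why a blended grade provides less valid information for remediation or program admission.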
In deciding what to prioritize when determining grades and when faced with grading dilemmas, teachers tend to return to the question, “What would be most fair for the individual student and for the class as a whole?” In one of our recent studies that involved talking with Ontario teachers about their grading practices and challenges, we found that teachers consistently aim to provide “fair” grades to their students; however, “fairness” held different meanings depending upon the teacher and the grading context. What might be fair in one situation might not be fair in another, or to different stakeholders. Often, fairness meant balancing what was best for an individual student in relation to what was consistent and fair for all students in the class.
Through our analysis, fairness was viewed as the overarching value that helps teachers navigate grading tensions that arise in relation to four common themes: 1) context and classroom management, 2) learning values: grades as academic enablers, 3) policy and external pressures, and 4) consequences of grade use. For example, teachers reported that “bumping up” a grade to allow a student to be admitted into their desired university or college program was fairer than increasing a grade if there were no immediate consequences.
Provincial and territorial large-scale assessment programs tend to have “high-stakes” consequences for students, but not for teachers or administrators, across Canada.8 For example, a quick scan of these programs suggests they account for a significant percentage of a secondary student’s final grade. Indeed, between 10 and 50 percent of a student’s final grade in certain provinces is based on student performance on provincial large-scale assessment programs in the form of exit examinations.9 In some cases, a passing grade on these large-scale assessments also serves as a requirement for graduation or admittance to post-secondary institutions, as is the case in Ontario, Quebec, and New Brunswick. Thus, it is fair to assert that large-scale assessments exert a pronounced influence on teachers’ own grading practices, in that educators across Canada – particularly those in secondary schools – will want general alignment between their classroom grades and students’ achievement on large-scale assessments. In some respects, the relationship between large-scale assessments and classroom grades may be used as a proxy for concurrent validity: a high correspondence between the results of a particular test (in this case, a provincial large-scale assessment) and an established measure of the same or similar learning expectations (in this case, teachers’ grades in the same tested subject) strengthens the perceived accuracy of both. The agreement or lack thereof between large-scale and classroom assessment is bound to create tension and discussion about the utility and rigour of each method of assessment.
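As a rough sketch of how that proxy might be computed, one could correlate classroom grades with provincial assessment scores for the same students; a coefficient near 1.0 is read as each measure corroborating the other. The data below are invented for illustration.

```python
import numpy as np

# Hypothetical paired results for seven students.
classroom_grades = np.array([62, 71, 75, 80, 84, 88, 91])
provincial_scores = np.array([58, 69, 72, 83, 80, 90, 94])

# Pearson correlation as a crude proxy for concurrent validity.
r = np.corrcoef(classroom_grades, provincial_scores)[0, 1]
print(f"r = {r:.2f}")  # a high r suggests the two measures broadly agree
```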
The results of large-scale assessments also provide an important accountability and/or gatekeeping function across Canadian school systems. Given that these measures are routinely given priority status as more “reliable” and “valid” indicators of student achievement, teachers and administrators may adopt and promote inappropriate test preparation practices, such as teaching to the test, in order to have more favourable student, classroom, and school results. The latter presents an interesting dichotomy with respect to the previous point related to concurrent validity, in that teaching to the test artificially inflates students’ large-scale assessment scores and inadvertently may present teachers’ grading practices as less accurate or rigorous. Perhaps more disconcerting is that teaching to the test inflates student performance at the expense of authentic forms of learning that allow for transfer of knowledge and skills.
Ultimately, large-scale assessment programs across Canada present opportunities and challenges for existing grading policies and practices, leading to intended and unintended consequences. Understanding the evolving nature and impact of these large-scale assessment programs on teachers’ pedagogical approaches and grading practices requires sustained longitudinal studies. Certainly, the literature abounds with international jurisdictions that have largely succumbed to a testing-focused education model that has undermined teachers’ classroom assessment literacy. Ironically, those systems tend to fare quite poorly on international measures of student achievement such as the Programme for International Student Assessment (PISA), which assesses reading, mathematics, and science literacy every three years across more than 70 educational jurisdictions around the world.
What all this means is that grades – despite their influence, power and potential consequences – are complex indicators of student learning (both achievement and non-achievement) with variability in policies and practices across Canada. While such variability is not necessarily a negative quality, as it could enable fairer treatment and more valid reporting in relation to unique student learning progressions and classroom contexts, it does challenge our ability to consistently compare students when making selection, admission, and ranking decisions. Grading is one of the most high-stakes classroom assessment practices, sitting at the critical intersection of teaching, learning, and assessment and representing the most public professional statement teachers make about student learning. The more aware teachers are of the complexity of grades, the more likely their grading will be a valid, reliable and, most importantly, fair practice that benefits student learning.
First published in Education Canada, March 2019
1 B. Noonan, “Interpretation Panels and Collaborative Research,” Brock Education 12 (2002): 89-100.
2 M. Simon, S. Chitpin, and R. Yahya, “Pre-service Teachers’ Thinking About Student Assessment Issues,” International Journal of Education 2, no. 2 (2010): 1-20.
3 C. DeLuca, H. Braund, A. Valiquette, and L. Cheng, “Grading Policies and Practices in Canada: A landscape study,” Canadian Journal of Educational Administration and Policy 184 (2017): 4-22.
4 AOL refers to Assessment of Learning, AFL to Assessment for Learning, and AAL to Assessment as Learning.
5 B. Noonan, “Interpretation Panels.”
6 N. Resh, “Justice in Grades Allocation: Teachers’ perspective,” Social Psychology of Education 12 (2009): 315–325.
7 Y. Sun and L. Cheng, “Teachers’ Grading Practices: Meanings and values assigned,” Assessment in Education: Principles, Policy & Practice 21, no. 3 (2014): 326–343.
8 L. Volante and S. Ben Jaafar, “Educational Assessment in Canada,” Assessment in Education: Principles, Policy & Practice 15, no. 2 (2008): 201-210.
9 D. Klinger, C. DeLuca, and T. Miller, “The Evolving Culture of Large-scale Assessments in Canadian Education,” Canadian Journal of Educational Administration and Policy 76 (2008): 1-34.
Joe Feldman provides a vision for equitable grading with a focus on coherence and mastery learning. Drawing on research and interweaving voices of teachers, researchers, school administrators and students, the author defines grading for equity using three pillars: equitable grading is accurate, bias-resistant, and motivational. Linking theory and practice, the author provides a practical guide using research-informed examples to convince readers that commonly used assessment practices are ineffective and should be replaced with equitable grading practices to improve learning for all students, particularly those who are underserved or vulnerable.
The author provides a historical account of traditional grading practices and challenges readers to consider how shifting to equitable grading practices leads to an improved representation of student learning. Some recommendations for equitable grading practices discussed in the book include: use a 4-point grading scale, weight more recent performances, promote productive group work and high-quality work without a group grade, exclude behaviours from the grade (e.g., lateness, effort, participation), provide non-grade consequences for cheating, use alternatives for late work, reframe homework, allow retakes and opportunities to improve grades, use rubrics to calibrate learning intentions, promote students’ self-regulation and agency through student trackers and goal setting, and more. Zero-grades, averaging, and extra credit, by contrast, are practices Feldman argues should be dropped. Using mathematical comparisons, as well as sample gradebooks, the author dispels myths and demonstrates how formative and summative assessment divisions are not fixed and that arriving at a final grade requires coherent and equitable grading practices, including a teacher’s professional judgment.
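The flavour of those mathematical comparisons can be sketched in a few lines (the scores and weights here are invented, not Feldman’s own examples): under straight averaging on a 100-point scale, a single zero overwhelms later strong work, whereas a 4-point scale with a minimum floor, or a weighting that favours recent performance, keeps the final grade closer to what the student eventually demonstrated.

```python
scores = [0, 85, 90, 95]                    # one missed task, then strong work

# Straight averaging on a 100-point scale: the zero dominates.
print(sum(scores) / len(scores))            # 67.5

# The same trajectory on a 0-4 scale with a floor of 1
# (a hypothetical conversion, in the spirit of the 4-point recommendation).
scores_4pt = [1.0, 3.4, 3.6, 3.8]
print(sum(scores_4pt) / len(scores_4pt))    # 2.95 -- close to the later work

# Weighting more recent performance instead of averaging equally.
weights = [0.1, 0.2, 0.3, 0.4]              # invented recency weights
print(sum(w * s for w, s in zip(weights, scores)))  # 82.0
```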
Each chapter builds on the last, providing teachers with a valuable guidebook and arguments for changing practice and moving towards a standards-based grading model. Approaches that are mathematically sound, that prioritize knowledge and understanding, that support hope and a growth mindset, and that give students clarity about how to succeed can motivate students to improve their learning. Each chapter concludes with a summary of key concepts and thought-provoking questions, making this a perfect book to discuss with a group of colleagues. The book also has a supporting website with additional resources and examples of equitable grading practices: https://gradingforequity.org
Corwin, 2019.
ISBN: 9781506391571
Competency-based assessment runs into a roadblock when university admission is driven by traditional grades. This article describes a project at Kwantlen Polytechnic University experimenting with basing university admissions on student portfolios that demonstrate competencies.
If I had to summarize the problem with conventional grading systems (letter or percentage grades derived from classroom-based and standardized testing), it would be that educators tend to overstate their significance. When we say that “she got an A in English,” we usually mean to say “she is a good or excellent student” in that area. That is, of course, not something that letters are able to tell us.
In most grading systems I have encountered, grades in the A range represent unusually good achievement. In other words, if everyone gets an A, people start asking questions about how you mark. Letter grades rarely come, however, with data about the population to which that student has been compared (either explicitly or implicitly). We are making a relative claim without a frame of reference. Since high school English teachers are typically free to set their own assignments, it is also fair to say that this grade comes from an unknown number of unknown assessments. One of those assessments, depending on one’s province, might be a standardized exam, but that is not the only data point.
Grade 12 English, in any given provincial iteration, furthermore represents only one set of outcomes that one might associate with competency in spoken and written English, rhetoric, literature, and so forth. It obviously does not assess one’s ability to articulate an idea in any other language, nor does it typically include a robust non-European focus.
Even more confusingly, many teachers persist in assigning marks to non-curricular performances like attendance, participation and the like. A student who knows literally everything she needs to know about English literature might still, by dint of her poor attendance, score badly in her English course.
So, much like the warning labels on cigarette boxes or the side effects listed in advertisements for medication, conventional letter grades should come with a warning reminding us of their limitations.
This is why I, as a university teacher of primarily first-year students, am excited to see British Columbia’s next generation of Grade 12 graduates. A new, competency-based system will supplement existing letter and number grades, and will offer more ambitious opportunities to build portfolios, demonstrate competencies, and solve problems. Fortunately, Surrey Schools (one of our local districts) has already begun to incorporate many of these practices.
This is also why, a few years ago, it became clear to my research team (myself and a rotating cast of rising undergraduate stars) that we needed to do something. University admission policies – as any parent, student or high school teacher will tell you – drive a great deal of behaviour. Traditional admission policy incentivizes attention to conventional letter grading assessments, rather than the more authentic, but qualitative, demonstrations of achievement to which people naturally gravitate. So Surrey Schools could bravely press forward with the new curriculum, but if they were left to do so without their post-secondary partners, the results would be sadly predictable. As supportive of the new curriculum as I am, if my child came home and told me she was doing extensive extracurricular work on her portfolio, I would tell her to study for her exams first. She isn’t getting into university with her portfolio (outside of a few disciplines, such as design).
Why, though, isn’t she getting into university with her portfolio? Well, as far as I could tell, no one had tried. After a few emails to my academic leaders, and a healthy dose of literature review, the Surrey Portfolio Pathway Partnership got started. We wanted a way to build an admission system that used authentic student assignments to carry the K-12 changes into the post-secondary sector. We wanted a viable blueprint for competency-based admission. We didn’t need a huge group of students; we just needed enough diversity of achievement to see what a portfolio might look like in a few different academic contexts. We began with the collection of about two dozen portfolios created in the existing curriculum’s career planning course. What we saw in that analysis was that, in order to be useful in university admission, student portfolios would need to be structured and supported to a much greater extent.
Knowing this, we formed a closer partnership with Surrey Schools and asked them for the names of 5-10 students who had interesting and creative ideas – irrespective of their formal grades. By special permission of our university’s senate, we were able to offer these students admission to the university on the basis of their competencies, rather than their grades (to this day, I have not seen their grades). What competencies? Well, that was up to them.
My research team split the group up so that each student would have an undergraduate mentor to help them build a competency-based portfolio over the course of their final months in high school. Pretend, we said, that we did not offer you admission. What would you show us to prove you are ready for university?
Over the next six months we guided the group through the collection and editing of the work they thought would show us their preparedness for university. The interests of the group were diverse – including nursing, criminology, poetry, and science teaching. The assignments they chose were similarly eclectic. We received hand-drawn geographical diagrams, speeches posted on YouTube, essays, standardized test results, worksheets and more.
We then mapped two layers of learning outcome information onto those assignments: the Grade 12 curricular competencies (learning outcomes), and the outcomes of a group of popular first-year undergraduate courses in our university. We listed every instance in which a connection could be made between an assignment and these outcomes. The result was a web of about 1,400 connections. That is, there were 1,400 instances in which we saw an assignment and judged that it at least partially demonstrated a given outcome. This does not mean that every portfolio will be so richly interwoven, or even that we are correct in asserting the connections we did. What it does mean is that, for each of these students, the single letter grade we would usually see misses potentially hundreds of meaningful data points that their high school teachers had already seen.
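A minimal sketch of that mapping might look like the following; the assignment names and outcome codes are hypothetical stand-ins for the Grade 12 curricular competencies and first-year course outcomes the project actually used.

```python
# Each (assignment, outcome) pair counts as one "connection" in the web.
portfolio = {
    "persuasive_essay": ["ENG12.comprehend", "ENG12.communicate", "UNIV.writing"],
    "youtube_speech":   ["ENG12.communicate", "UNIV.oral_presentation"],
    "env_diagrams":     ["SCI12.modelling", "UNIV.earth_science"],
}

connections = [(a, o) for a, outcomes in portfolio.items() for o in outcomes]
print(len(connections))  # 7 in this toy example; about 1,400 across the pilot
```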
One student – let’s call her Olivia – is looking to study in the liberal arts. She submitted English essay writing, creative writing, a written speech, a reflection on her work experience at a part-time job, a package of math assignments, and hand-drawn diagrams of environmental phenomena. When we compared this work to a sampling of our first-year undergraduate objectives, we found they partially demonstrated achievement in the expected areas of English, creative writing and math. We also found connections to the mental health topics in first-year health courses, the portfolio work in first-year interdisciplinary courses, and other connections to global education, geography, education, and earth science.
I don’t know Olivia’s Grade 12 grades – but let us pretend for the purposes of argument that I do know those grades. Under conventional admissions policy, she would be admitted to my university on the basis of either her English grade, or an average of that grade and a few others. The institution gains or loses all those interesting ideas, and Olivia gains or loses all those life opportunities, while a package of broader and more meaningful assessment data sits literally down the street.
It is as if we had said, “I know you have achieved quite a bit in English, creative writing, mathematics, geography, interdisciplinary studies, global education and earth science… but I only need to see your English 12 grade, please.”
The reason we do this has always been twofold, though. First, it has historically been difficult to collect assignments like these into a portfolio that can be sent with ease around the country. A single-page transcript, however, is easily sent anywhere. Second, whatever institution receives a portfolio needs to engage in a costly and time-consuming review of the material.
The first justification is simply no longer relevant. The age of the paper transcript was once characterized by very high costs for both computing and data transfer. Neither is the case today. Many individual high school students carry the computing and data transfer technology they need with them to school every day, and the industrial-scale servers used for cloud computing work far harder to provide us all with up-to-date photo and video libraries than they would to collect even a large proportion of a student’s high school work.
The second justification for conventional grading and admissions is far more pertinent. If we send portfolios from high schools to post-secondary institutions, we are seemingly saddling those institutions with an enormous new responsibility – reading and assessing all those portfolios. I was in a meeting a few years ago in which we discussed how much that would cost. It wasn’t comforting, and we couldn’t imagine a way to make it all work.
But this project has led me to conclude I had entered that meeting with a false premise. Post-secondary institutions will never know as much about student achievement as high school teachers do. Even with unrealistic budget increases, including seconding professors to admissions offices, secondary educators will still have a better longitudinal look at student achievement, and will have a wider and deeper range of performances to draw on. The question, then, isn’t how a university could read all those portfolios, but rather how we can build a better way to communicate what is in them.
Since secondary teachers are already evaluating student outcomes (in B.C., curricular competencies), it would be relatively unproblematic to shift the recording of that achievement to a new mode. Rather than taking assessments on a range of outcomes and then aggregating all that assessment into a single letter, why not leave the assessment at the level of assignments and outcomes? Why not say that Olivia has met the following Grade 12 competencies? She could then attach her portfolio work as evidence of those competencies should she wish to share it. Her future university could then examine which Grade 12 competencies its students should achieve in order to be strong candidates for undergraduate study and could receive those competency certifications (as assessed by her Grade 12 teachers), much like grades are received today.
Such a system of competency certification would enable students to use an incredibly wide range of possible assignments to prove their achievement. Anything, in principle, would be fair game if it could demonstrate to the teachers who know a student best that the competencies have been demonstrated. This would also sidestep the need to have post-secondary institutions review each and every portfolio. The portfolios could be linked to the competencies, but the competencies would be certified as a layer above the work itself. (See Figure 1.) Everyone would, in other words, get the more detailed analysis Olivia received.
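One way to picture such a certification layer is a record per competency, certified by the assessing teacher, with portfolio artifacts attached only as optional evidence. The sketch below is a hypothetical rendering of the proposal, not a system that exists; all field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Certification:
    competency: str                 # e.g. a Grade 12 curricular competency code
    certified_by: str               # the teacher who assessed it
    evidence: list = field(default_factory=list)  # optional portfolio links

transcript = [
    Certification("ENG12.communicate", "Ms. Khan", ["youtube_speech.mp4"]),
    Certification("MATH12.reasoning", "Mr. Singh"),
]

# An admissions office queries certified competencies, not the artifacts.
required = {"ENG12.communicate", "MATH12.reasoning"}
certified = {c.competency for c in transcript}
print(required <= certified)        # True -> meets this program's requirements
```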
While we are working on a series of more technical explanations and proposals, the arguments I have offered here hint at what I think the future of assessment looks like. I can say with more clarity that a person can do X and not Y than I can say that a person achieved A and not A-. It is more practically meaningful to say that a person can do X, than it is to say that a person is an A student. A system that allows students to carry their portfolios, but that does not result in the creation of massive administrative overhead, seems possible.
If we are going to close or open the door to future opportunity, we owe it to students like Olivia that we see the full range of her competencies as assessed by the teachers who work with her most closely. The Surrey Portfolio Pathway Partnership provides a small glimpse through a doorway to one possible future for assessment and admission. We intend to push it.
First published in Education Canada, March 2019
Quebec students finish at the head of the class when it comes to mathematics. On the Pan-Canadian Assessment Program (PCAP) tests of Grade 8 students, written in June 2016 and released in early May 2018, it happened once again.1 Students from Quebec finished first in Mathematics (541), 40 points above the Canadian mean score and a gain of 26 points over the past six years.
Quebec’s position as our national leader in mathematics achievement has solidified on every comparative test over the past 30 years. How and why Quebec students continue to dominate and, in effect, pull up Canada’s international math rankings deserves far more public discussion. Each release of math results generates a flurry of interest, but relatively little in-depth analysis of the contributing factors.
Since the first International Assessment of Educational Progress (IAEP) back in 1988, and in the next four national and international mathematics tests up to 2000, Quebec’s students generally outperformed students from other Canadian provinces at Grades 4, 8 and 11.2 That pattern has continued right up to the present and was demonstrated impressively on the most recent Programme for International Student Assessment (PISA 2015), where Quebec 15-year-olds scored 544, ranking among the top countries in the world.
One enterprising venture, launched in 2000 by the B.C. Ministry of Education under Deputy Minister Charles Ungerleider, did tackle the question by comparing British Columbia’s and Quebec’s mathematics curricula. That comparative research project identified significant curricular differences between the two provinces, but the resulting B.C. reform initiative ran aground on what University of Victoria researchers Helen Raptis and Laurie Baxter aptly described as the “jagged shores of top-down educational reform.”3
The reasons for Quebec’s dominance in K-12 mathematics performance are coming into sharper relief. The initial B.C. Ministry of Education research project exposed and explained the curricular and pedagogical factors, but subject specialists, including both university mathematics specialists and mathematics education professors, have gradually filled in the missing pieces. Mathematics education faculty with experience in Quebec and elsewhere help to complete the picture.
The scope and sequence of the math curriculum is clearer, demonstrating an acceptance of the need for integration and progression of skills. “The way math is presented makes the difference,”4 says Genevieve Boulet, a Mathematics Education professor at Mount St. Vincent University who has prior experience preparing mathematics teachers at Quebec’s University of Sherbrooke.
The Quebec Ministry of Education curriculum, adopted in 1980, set the pattern. In teacher education and in the classroom, much more emphasis was placed upon building sound foundations before progressing to problem solving. Quebec’s Grade 4 objectives made explicit reference to the ability to develop speed and accuracy in mental and written calculation and to multiply larger numbers as well as to perform reverse operations. Curriculum guidelines emphasize subject mastery, particularly in algebra, and tend, in Grade 11, to be more explicit about making connections with previously learned material.
Fewer topics tend to be covered at each grade level, but in more depth than in B.C. and other Canadian provinces. In Grade 4, students are generally introduced, right away, to numbers/operations, and the curriculum unit on measurement focuses on mastering three topics – length, area, and volume – instead of a smattering of six or seven topics. Secondary school in Quebec begins in Grade 7 (secondaire I) and ends in Grade 11 (secondaire V) and, given the organizational model, that means students are more likely to be taught by mathematics subject specialists. Senior mathematics courses, such as Mathematics 536 (Advanced), Mathematics 526 (Transitional) and Mathematics 514 (Basic), were once explicitly focused on “cognitive growth and the development of basic skills,” covering a range of topics at different depths.5 Recent curriculum changes, instituted in 2017 under the “Diversified Basic Education” program, presented the renamed courses as three streams, each preparing students for different pathways, aligned with post-secondary CEGEP programs. The revised Quebec Program of Study cast Mathematics within a broader “Areas of Learning” model, but the prescribed knowledge and provincial examination questions remained consistent with past practice.6
Teacher preparation programs in Quebec universities are four years long, providing students with double the amount of time to master mathematics as part of their teaching repertoire – a particular advantage for elementary teachers. In Quebec faculties of education, prospective elementary school math teachers must take as many as 225 hours of university courses in math education; in other provinces, they receive as little as 39 hours.7
Teacher-guided or didactic instruction has been one of the Quebec teaching program’s strengths. Annie Savard, a McGill University education professor, points out that Quebec teachers have a clearer understanding of “didactic” instruction, a concept championed in France and French-speaking countries.8 They are taught to differentiate between teaching and learning. “Knowing the content of the course isn’t enough,” Savard says. “You need what we call didactic [teaching]. You need to unpack the content to make it accessible to students.” Four-year programs afford education professors more time to expose teacher candidates to the latest research on cognitive psychology, which challenges the efficacy of child-led exploratory approaches to the subject.9
Students in Quebec still write provincial examinations, and achieving a pass in mathematics is a requirement to secure a graduation (Secondaire V) diploma. Back in 1992, Quebec mathematics examinations were a core component of a very extensive set of ministry examinations, numbering two dozen, and administered in Grades 9 (Sec III), 10 (Sec IV), and 11 (Sec V). Since 2011-12, most Canadian provinces, except Quebec, have moved to either eliminate Grade 12 graduation examinations, reduce their weighting, or make them optional. In the case of B.C., the Grade 12 provincial was cancelled in 2012-13, and in Alberta the equivalent examination now carries a much-reduced weighting in final grades. In June of 2018, Quebec continued to hold final provincial exams, albeit fewer of them, limited largely to Mathematics and the two official languages. Retaining exams has a way of keeping students focused to the end of the year; removing them has been linked to both grade inflation and the lowering of standards.10
Academic achievement in mathematics has remained a system-wide priority and, despite recent initiatives to improve graduation rates, there is much less emphasis in Quebec on pushing every student through to high school graduation. From 1980 to the early 2000s, the Quebec mathematics curriculum was explicitly designed to prepare students for mastery of the subject, either to “prepare for further study” or to instill a “mathematical way of thinking” – reflecting the focus on subject matter. The comparable B.C. curriculum for 1987, for example, stated that mathematics was aimed at enabling students to “function in the workplace.” Already, by the 1980s, the teaching of B.C. mathematics was seen to encompass sound reasoning, problem-solving ability, communications skills, and the use of technology.11 This curriculum fragmentation never really came to dominate the Quebec secondary mathematics program.
Quebec’s education system remains that of “a province unlike the others.” Since the first IAEP study on the achievement of 13-year-olds, ministry officials have been keenly aware that the three provinces with the best student results, Quebec, Alberta and B.C., all had the lowest graduation rates. Raising the passing grade from 50 to 60 across Quebec in 1986-87 had a direct impact upon high school completion rates. But student achievement indicators, particularly in mathematics, still drove education policy and, until recently, unlike other provinces, student preparedness remained a higher priority than raising graduation rates.12
SCHOOL SYSTEMS are, after all, products of the societies in which they reside. While Canadian provinces outside Quebec are greatly influenced by North American pedagogy and curricula, Quebec schooling is the creature of a largely French educational milieu.13 Teaching philosophy, methods and curriculum continue to be driven more by the French tradition, exemplified in mastery of subject knowledge, didactic pedagogy, and a uniquely different conception of student intellectual development. Socio-historical factors weigh far more heavily than is recognized in explaining why Quebec continues to set the pace in Mathematics achievement.
First published in Education Canada, December 2018
1 Council of Ministers of Education Canada, Pan-Canadian Assessment Program, PCAP 2016: Report on the Pan-Canadian Assessment of Reading, Mathematics and Science (Toronto: CMEC, May 2018), Table 2.1, 36.
2 Anna Stokke, What to Do about Canada’s Declining Math Scores, C.D. Howe Institute Commentary No. 427 (Toronto: C.D. Howe Institute, May 2015).
3 Helen Raptis and Laurie Baxter, “Analysis of an Abandoned Reform Initiative: The case of mathematics in British Columbia,” Canadian Journal of Educational Administration and Policy 49 (January 26, 2006).
4 Genevieve Boulet, Mount Saint Vincent University, Personal Interview, May 3, 2018. See also “Nova Scotia math curriculum ‘doesn’t make any sense’: education expert,” CBC News Nova Scotia (May 2, 2018).
5 Program of Study, Mathematics, Mathematics 536 (Quebec, 1997), 2.
6 Program of Study, Mathematics, Mathematics, Science and Technology (Quebec, 2017), 4-16 and 31-64.
7 Kate Hammer and Caroline Alphonso, “Tests Show Provincial Differences in Math, Reading, Science Education,” The Globe and Mail (October 7, 2014).
8 “It Adds Up: The reason students’ math scores are higher in Quebec than the rest of Canada,” The National Post (Canadian Press) (September 6, 2017).
9 See Daniel Ansari, “The Computing Brain,” in Mind, Brain and Education: Neuroscience implications for the classroom, ed. D. Souza (Bloomington, Indiana: Solution Tree Press, 2010) 201-227; and Daniel T. Willingham, “Is It True That Some People Just Can’t Do Math?” American Educator (Winter 2009-2010): 1-7.
10 Jim Dueck, Education’s Flashpoints: Upside down or set up to fail (Lanham, MD: Rowman & Littlefield, 2015), 100-103.
11 Charlie Smith, “Battling B.C.’s Math Education Crisis,” the Georgia Straight (October 31, 2012).
12 Robert Maheu, “Education Indicators in Quebec,” Canadian Journal of Education 20, No. 1 (1995): 56-64.
13 Chad Gaffield, “Children’s Lives and Academic Achievement in Canada and the United States,” Comparative Education Review 38, No. 1 (February, 1994): 53-58.
For further background on the Quebec socio-cultural context, see Norman Henchey and Donald Burgess, Between Past and Future: Quebec education in transition (Calgary: Detselig Enterprises, 1987).
Evidence suggests that new teachers are not confident taking on formative and differentiated approaches to assessment. What supports could help them refine their assessment skills?
TAKE A MOMENT to picture your classroom. Imagine you are planning an upcoming unit for your students. Would you start by designing a summative evaluation, then backward plan your lessons? Or would you first create your formative assessments and let the information you gather from these tasks guide your subsequent lessons, learning activities, and final assignments? Would you perhaps review the curriculum expectations with your students and ask them to design personal learning plans or co-plan an inquiry for the unit? Or maybe none of these approaches would work for you and your students.
While there is considerable latitude in how you implement assessment policies within your own classroom to support teaching and learning, research shows that how you approach your assessment decisions has tremendous impact on the learning culture in your classroom. A teacher’s approach to classroom assessment not only influences what students learn but also how they learn.1
Our aim in this article is to reflect on the experiences that shape teachers’ approaches to classroom assessment, in particular those early in a teacher’s career: not only to help teachers become more aware of their own classroom assessment practices, but to outline how teacher education and in-service mentorship can support early career teachers in effectively interpreting and implementing assessment policies that meaningfully support student learning.
Previous measures of teachers’ classroom assessment literacy have tended to diminish the influence of classroom context, instead focusing on teachers’ assessment knowledge (e.g. norm vs. criterion assessments) and/or specific skills (e.g. test construction). Through this approach, assessment literacy was understood as a set of learnable skills that teachers were required to know and use. By overlooking the importance of the classroom context, teachers could be scored, compared, and ranked through a multiple-choice test on their classroom assessment knowledge and practices. However, such a de-contextualized measure of teachers’ knowledge and skills does not accurately capture teachers’ preparedness for classroom assessment practices.
In contrast, recognition of the significance of the classroom context deters the scoring and ranking of teachers’ knowledge and skills, as an assessment practice appropriate in one context may not be in another. For instance, the construction of multiple-choice questions may be appropriate for a teacher in one grade or subject, but may not be used by another teacher, yet both could have sound assessment practices for their context. Furthermore, a teacher with multiple classes of the same course may value producing reliable assessments that can be used across sections, while a teacher with a range of dissimilar courses may value producing assessments that reflect the specific learning progress of each class.
Teachers’ classroom assessment practices are also shaped by their own teaching and learning experiences.
For a new teacher, few things are as daunting as the first days of school. Pre-planned routines can devolve into trial-and-error, and a well-crafted philosophy of education can gravitate towards just trying to get through the day. While these feelings generally dissipate over time, they may profoundly impact early career teachers’ approaches to assessment.2
Compared to teacher candidates, early career teachers with less than five years’ classroom experience are more than three times as likely to focus on adhering to reporting mandates set out by assessment policies. Unlike teacher candidates and later career teachers, who both tend to support differentiated approaches to assessment, early career teachers are more than three times as likely to endorse an equal assessment protocol for all students (in which all students receive the same assessment tasks) and almost four times as likely to value producing consistent assessment tasks (utilizing similar assessments across courses and/or years).
Early career teachers’ orientation toward a more standardized and summative assessment approach may be fuelled by their need to simply survive the first few years of teaching, and is likely further intensified by the current accountability climate of Canadian schools. Importantly, as teachers pass the five-year mark and develop more extensive classroom experience, their approaches to assessment begin to gravitate towards more formative and differentiated approaches. Given that this shift towards standardized and summative approaches appears only within early career teachers, it is important to consider the supports that could be provided to help teachers early in their career enact a more balanced approach to assessment.
Teacher education programs play a central role in the development of teachers’ approaches to assessment. These programs are typically the first instance in which teachers are explicitly exposed to theories of teaching, learning, and assessment, and are also when they first venture forth into the classroom as a teacher. While a plethora of experiences shape teachers’ approaches to assessment (such as coursework, instructor pedagogy, and practicum experiences), stand-alone assessment courses are the dominant source of assessment education across Canadian teacher education programs.
Within stand-alone assessment courses, teacher candidates are expected to acquire knowledge and skills related to classroom assessment practices. What is rarely addressed is how to utilize their assessment knowledge and skills to navigate the principles of teaching, learning, and assessment that permeate our educational system (e.g. outcome-based accountability, transparent and equitable practices3). While some of these alignment issues are likely addressed in curriculum courses and during practicum placements, the role of assessment education should be to support teachers’ capacity to align their assessment knowledge and skills to their approach to assessment in order to navigate these underlying principles. If this doesn’t occur, teachers may start their careers without a firm understanding of how their approaches to assessment can be used as a bridge between the knowledge and skills they have developed and underlying principles of teaching, learning, and assessment.
For the past 40 years, formal mentorship via teacher candidates’ practicum experiences has dominated our models of teacher education.4 Upon certification and securing a teaching position, depending on school board and province, some teachers have the opportunity for formal early career mentorship, whether from their administrator or a more established peer teacher (e.g. the teacher induction program in Ontario), but these opportunities do not necessarily maintain a consistent focus on classroom assessment. However, informal mentorship can provide crucial supports for early career teachers to better equip them to confidently take on a range of assessment strategies.
As teachers move beyond the first five years of teaching, subtle yet important shifts in their approaches to assessment occur. The most apparent is that the prioritization of more standardized summative assessments diminishes in favour of differentiated and formative approaches that support students throughout learning. Furthermore, within their formative assessment practices, experienced teachers are far better able to distinguish and prioritize assessment for and as learning practices, a distinction that appears more ambiguous for teacher candidates and early career teachers. Based on these findings, it appears that experienced teachers are better able to use fluid assessment practices that suit individual students’ needs, rather than being driven by accountability mandates that tend to emphasize summative assessment results.5
Given the changing nature of teachers’ approaches to assessment over their career, more established teachers could play an important role in mentoring beginning teachers as they negotiate current accountability mandates and assessment responsibilities in the service of student learning. For example, more experienced teachers could help early career teachers understand the alignment between the knowledge and skills they developed during teacher education and the expectations set out by classroom assessment policies. This mentorship could equip teachers to effectively interpret and implement assessment policies in ways that are meaningful to teachers’ practice and effective in the service of student learning.
IN PREPARING TEACHERS for the realities of current and future classrooms, there is a need to focus on the drivers of teachers’ classroom assessment decisions. To effectively navigate the pressures of classroom assessment and support student learning, early career teachers need ongoing support tailored to their career stage. With this support, we hope that early career teachers’ pronounced shift towards a standardized and summative approach to classroom assessment can be moderated toward a more balanced approach that equally values formative and differentiated practices aimed at using assessment to support and promote student learning.
Acknowledgements
We would like to thank Dr. Lorraine Godden and Alice Johnston for their feedback throughout the writing process.
It is worthwhile, particularly for teachers new to the profession, to critically reflect on what influences shape their assessment decisions.
The Approaches to Classroom Assessment Inventory is a professional learning tool to help teachers identify and develop their approaches to assessment through scenario-based questions and a personalized assessment profile.
First published in Education Canada, September 2018
1 A. Coombs, C. DeLuca, D. LaPointe-McEwan, and A. Chalas, “Changing Approaches to Classroom Assessment: An empirical study across teacher career stages,” Teaching and Teacher Education 71 (2018): 134-144.
2 Ibid.
3 Ontario Ministry of Education, Growing Success: Assessment, evaluation, and reporting – improving student learning (Toronto, ON: Queen’s Printer for Ontario, 2010); Manitoba Education, Citizenship & Youth, Rethinking Classroom Assessment with Purpose in Mind: Assessment for learning, assessment as learning, and assessment of learning (2006).
4 A. J. Hobson, P. Ashby, A. Malderez, and P. D. Tomlinson, “Mentoring Beginning Teachers: What we know and what we don’t,” Teaching and Teacher Education 25, no. 1 (2009): 207-216.
5 Coombs et al., “Changing Approaches to Classroom Assessment.”
One year into his teaching career, a recent graduate reflects on the value – and limitations – of his BEd program.
It’s the spring of 2013 and I’m sitting down with my supervisor in my co-operating teacher’s office. On the last day of my first field experience, I’m incredibly anxious to receive my evaluation. My supervisor starts with something like, “How do you think you’ve done this week?” I begin by explaining how professionalism is one of my core values but she immediately cuts me off. “And that’s the key: professionalism. And you haven’t acted very professional thus far.” Due to a misunderstanding on my part (more on that later), I was absent from first period and had not notified the school. In an instant, all my accomplishments from my placement are disregarded and reduced to this single incident.
I believe my supervisor’s reaction is a fair example of how many – though not necessarily most – student teachers are treated during their teacher training program. While my perspective is that of only one person, I believe it is valuable to share, as it can be very difficult to obtain a forthright account of student-teacher placements. After five years in an education department and a year as a classroom teacher, I have concluded that there are a number of areas in which both the coursework and practicum components of teacher preparation programs could be improved.
A number of teachers will proclaim that the classes in teacher training programs are useless. In truth, like any class at any level of education, you get out what you put in; if you have a genuine interest in the subject and bring passion to your projects, then you will learn much from the course. However, even the most riveting topics may seem a waste of time if the course format is the traditional lecture/note-taking session. Professors should be acting as models for the latest, and most effective, practices in teaching; it is unfortunate that, in part or in whole, many undergraduate courses in education departments still transmit content this way.
For instance, one of my first methods courses was on Canadian history. This was a course offered by the Education Department and the instructor was a professor of Education. As such, one might expect a model for high quality social sciences teaching. This was not the case; the professor literally read slides to the class while we took notes. That’s all it was: no skill building, no pedagogical practice, just learn this stuff and repeat it back on a test. Since I could read on my laptop in 30 minutes what the instructor covered in 60, and since the syllabus specified that attendance was not marked, I simply stopped attending, coming only for the three required exams.
Of course, actions have consequences, and I was prepared to face them. The professor asked to meet with me regarding my attendance and he explained to me how, as a teacher, I’ll often have to do things I don’t like and attend meetings which I would rather not attend. I’m not sure that this was meant to encourage me to go into the teaching profession.
To my professor, I posed the question: “May I speak to something constructive? What if, in future years, the course included more class discussions, team projects, or interactive segments?” The response was that students would probably just fool around or have side conversations during any non-lecture time. Of course, this is precisely what our professors of education must coach us on: how to engage our students in a fun and interactive way while holding their attention and keeping them on task. It is unfortunate that the professor did not feel comfortable modeling these strategies with the next generation of teachers.
Is it fair to take my experience from a single class and generalize it across all education courses? Of course not. However, the experience left me with three takeaways. I would hope that, even viewed in a vacuum, these recommendations seem reasonable and proactive ways of enhancing teacher preparation:
A field experience should be the best part of every teacher training program; it’s a chance to be in the classroom and practice doing the job that you’ve been training for. I remember being incredibly excited for my first placement; every morning I would get up early, choose a stellar tie, and have breakfast at the café near my host school. It was a cool experience because all the science teachers were doing their practicum at the same time, at the same school. This meant we could go out for lunch together and talk about our experiences.
While the first placement is meant to be strictly observation, my co-operating teacher (CT) let me teach a couple of lessons. At this point, I was learning super basic teacher stuff (wait time after asking a question, choosing high-quality photos for lessons, etc.). I felt quite accomplished by the end of the week.
On the last day, the schedule worked out that I did not have a class to observe. I thought this would be a perfect opportunity to get some planning done as I had volunteered to teach another lesson that day. At this point in my training, it never even occurred to me that I should be in the school even if I didn’t have a class; I thought nothing of taking time to plan my lesson from home.
I arrived at school for second period and went to my CT’s class. She asked me where I had been during first period. From her tone, I knew that something was wrong. I offered the truth: that I was planning today’s lesson from home. She scolded me and I was a bit down for the rest of the day – particularly because my supervisor would be there that afternoon for my evaluation.
Unfortunately, my lesson didn’t count in my evaluation as this was an observation placement. Essentially, the entire meeting for my evaluation was criticism of the non-constructive variety. Without hyperbole, not a single positive aspect of my field experience was mentioned. Nevertheless, I was moving on to the next practicum.
Why was my placement considered a success rather than a failure? After comparing notes with my colleagues, it became apparent to me that the evaluation was scored arbitrarily. The numbers didn’t seem to match what was said in the meeting nor the comments written in the report; this seemed to be the case with my colleagues’ evaluations as well.
Professors… should be acting as models for the latest, and most effective, practices in teaching.
A field experience is extremely challenging to judge because it is so personal to the student teacher. It is even more complicated to judge the student-teaching system as a whole because every supervisor and CT brings their own personality to the role. I have had wonderful supervisors and CTs, but I chose the story of my first placement to showcase the incredible power that the supervisor holds in the system. Unlike in coursework, where a student is judged on strict criteria, my university allows supervisors and CTs near-absolute discretion to pass or fail a student teacher. I was fortunate that my supervisor decided on a pass. However, I know many of my former colleagues were not so fortunate. In one incident, a supervisor wanted the student teacher to pass their practicum but the CT refused.
Why is the evaluation of the student teacher so reliant on the discretion of the supervisor and CT? As a current teacher, I wouldn’t be able to use my discretion to decide whether a student passes a class or not; I would be obliged to consider that student’s marks and the criteria for a pass.
Having said this, the expectations held by CTs are not their fault, because no one has trained them to be CTs. Program coordinators may assume that, because CTs are teachers, they do not need any training to train others. Unfortunately, all they are provided with is a piece of paper listing what to look for in a good student teacher. This means that there are often hidden expectations which may not come to light until your CT has already written your evaluation. Sometimes those expectations are quite unreasonable. For instance, a number of my colleagues found that, by the final placement, some CTs consider a student teacher’s role as a learner to be essentially over. Instead, they expect the student teacher to show what they can do as an independent teacher; requests for guidance or advice may be met with scorn, or with the idea that, if you have to ask, maybe you’re not ready to be a teacher.
On my first day as a full-time teacher in a small, rural high school, I stood in front of about 20 twelve-year-olds, ready to introduce myself as their Math and Science teacher. I thought about what I’d learned during four years of preparation, and there was no doubt in my mind that the most important place to start was by building a relationship with my students. We did talk a bit about what we would learn that year and classroom expectations. Yet most of the time was spent discussing what students did over the summer, what books they read, what video games they played. As I was new to the school, I let them ask questions about me on a personal level (with discretion).
As the weeks and months went on, I realized that I was fairly competent in the soft skills required of teaching: relationship building, classroom management, lesson planning, etc. The greatest learning curve was keeping up with the course itself and those nitty-gritty things like pacing a chapter, finding the best way to engage students in certain topics, and balancing the time spent explaining concepts against the time students spend working independently or in teams.
There were several instances in the Math and Science courses where I felt that I was learning the material the day before I was meant to teach it. This was not because the subjects were overly complicated; they were simply facets which I had not explored when I was in secondary school myself. This made lesson planning, particularly, more challenging and stressful at times.
Moreover, at university, it was understood that the provincial curriculum documents were the bible for teachers. In practice, however, the end-of-year exam tends to line up more closely with the textbooks and workbooks chosen by the school board. In many ways, this makes sense; students’ class work should prepare them for what to expect on the exam. For instance, the curriculum may say that a student must be able to construct a histogram by the end of Grade 8, but the skill is actually evaluated at the end of Grade 7; the only way to know this is if a teacher is familiar with the workbook being used for that course. This may be a critical piece missing from teacher training programs: providing future teachers with the opportunity to work with the authentic classroom tools that their students are expected to use. This practice should extend beyond field placements.
Having been a full-time teacher for a year, I am a big fan of the Western Quebec School Board’s (WQSB) Teacher Induction Program (TIP). The program is provided to all teachers who are new to the school board, regardless of how many years of experience they bring. This is a great opportunity for teachers of varying levels of experience to learn from each other while reinforcing the mantra that teachers are lifelong learners.
Part of what makes this program worthwhile for me is that I am able to develop professional skills of my choosing with the aid of a mentor-coach who helps me set SMART goals (specific, measurable, attainable, relevant, timely). In addition, I don’t have to worry about being a burden on my mentor-coach, because they are given additional compensation for their time.
Every future teacher deserves a meaningful preparation program – one that allows those who are new to the profession to feel empowered and ready to lead a classroom of students on an adventure of discovery. Ultimately, as research-focused institutions, universities are well aware of what the best practices in education are. Now, the key is to implement those best practices in creating the teachers of the future.
First published in Education Canada, September 2018
Assessment is top of mind in schools these days. Teachers are required to use assessments continuously in their practice, from initial Kindergarten readiness assessments to high school exit exams to accountability-driven provincial testing programs. Increasingly, teachers across grades are also expected to engage students in ongoing assessments during learning periods to provide regular formative feedback about their progress in relation to provincial standards.1 Clearly, teacher assessment fluency has become a fundamental skill for teaching in today’s schools. So, what does it mean for a teacher to be fluent in assessment? And, more importantly, what does it look like in practice?
Fluency – the ability to communicate with ease in a language – derives from the Latin root fluere, “to flow.” For teachers, then, we conceptualize assessment fluency as the ability to integrate and use assessments within the flow of teaching and learning. Assessment fluency combines teaching, curriculum, and assessment to effectively support and report on student learning in classrooms.
Despite its importance, research suggests that many teachers do not feel sufficiently prepared to effectively use assessment in their classrooms to optimize student learning.2 In this article, we present eight key dimensions of assessment fluency as a framework for enhancing classroom assessment practices.
In the past two decades, defining assessment fluency has been a focus of research and policy worldwide.3 Since the 1990 publication of the influential Standards for Teacher Competence in Educational Assessment of Students, a number of documents have been developed to articulate standards for classroom assessment and to guide teacher practice.
By analyzing changes over time in 15 of these documents (from Canada, the U.K., the U.S., Europe, Australia, and New Zealand), we were able to identify eight key dimensions for teacher assessment fluency.4 These dimensions are:
Each of the eight dimensions reflects key considerations when assessing student learning, and combined they represent a framework for understanding and supporting assessment fluency. This framework serves as a tool to help teachers unleash the power of assessment fluency in practice. In the accompanying table, we identify core topics related to each of the eight dimensions of assessment fluency, then describe what each looks like in practice. We also provide key professional learning goals to support teachers’ development across the dimensions.
The Assessment Fluency Framework can be used to support teachers’ development of assessment fluency at individual, school, and district levels. Individual teachers can use the framework to identify personal areas of strength and areas for development. For example, with respect to the Assessment Purposes dimension of the framework, a teacher might reflect on the following questions:
Teachers can then use the learning goals identified in the framework to target areas for personal learning.
Similarly, school and district administrators can use the Assessment Fluency Framework to determine professional learning strengths and goals in assessment across classrooms in schools and across schools in districts. Goals can then be incorporated into school and district improvement plans, providing the basis for professional learning opportunities for teachers. These opportunities might include school-based collaborative inquiries focused on specific aspects of classroom assessment or district-wide professional development sessions for teachers with assessment experts.
Becoming assessment fluent is a career-long pursuit requiring sustained and intentional professional learning. This framework provides a tool for engaging in focused learning toward teachers’ effective use of assessment information in their classrooms. Supporting teacher learning in assessment across schools and districts will ultimately ensure that students gain the benefit of assessment-driven teaching.
Download the Assessment Fluency Framework
Want to learn more about your approach to classroom assessment?
Try the Approaches to Classroom Assessment Inventory (ACAI) at http://educ.queensu.ca/acai and generate your personal assessment profile. Based on your profile, you can identify and select personal learning goals to enhance your assessment fluency.
En Bref: Assessment is an essential skill for teaching in today’s schools. This article presents eight dimensions intended to support and strengthen educators’ assessment fluency and to improve classroom assessment practices. Core topics and learning goals for each dimension are identified to guide teachers’ professional learning in assessment.
First published in Education Canada, March 2017
1 C. DeLuca, L. Volante and L. Earl, “Assessment for Learning across Canada,” Education Canada 55, no. 2 (2015): 48-52.
2 C. A. Mertler, “Teachers’ Assessment Knowledge and their Perceptions of the Impact of Classroom Assessment Professional Development,” Improving Schools 12, no. 2 (2009): 101–113; L. Volante and X. Fazio, “Exploring Teacher Candidates’ Assessment Literacy: Implications for teacher education reform and professional development,” Canadian Journal of Education 30 (2007): 749-770.
3 C. M. Gotch and B. F. French, “A Systematic Review of Assessment Literacy Measures,” Educational Measurement: Issues and Practice 33, no. 2 (2014): 14-18; S. Brookhart, “Educational Assessment Knowledge and Skills for Teachers,” Educational Measurement: Issues and Practice 30, no. 1 (2011): 3-12.
4 C. DeLuca, D. LaPointe-McEwan, and U. Luhanga, “Teacher Assessment Literacy: A review of international standards and measures,” Educational Assessment, Evaluation and Accountability 28, no. 3 (2016): 251-272.