A sense of emerging mastery is one of the factors that motivate, and thus engage, teachers, but the only hard data they have to gauge their success is both inadequate to fully represent their goals and deferred until after the end of the unit, term or year.
Summative data is necessary for credentialing and accountability, and it provides useful information for improving curriculum and for policy development, both of which are part of mastering the craft of teaching. It is therefore an important part of a balanced assessment program that can help a teacher, or a school system, learn from experience. However, because it comes “after the fact” of learning, it has little value for supporting student learning and little for sustaining teacher engagement.
Moreover, because most summative data is used in aggregated form, information about individual students is lost. There may be some minor disaggregation (e.g., by gender or school), but summative data is generally useful only in revealing overall trends. If it is broken down into groups that are too small (e.g., individual classes), the standard error of measurement tends to become so great that although the data remains “valid,” it is no longer “reliable.” Thus, in addition to being deferred, summative data simply does not relate strongly to any individual. It is a conceptual abstraction with little emotional or motivational impact.
Unfortunately, summative data is what gets the most attention. Somehow it has gained an unwarranted reputation for objectivity and certainty. This is perhaps the biggest problem with it; we treat it with too much naive respect, forgetting that it comes from instruments that may or may not be well designed and that it has no meaning until it is interpreted, which may or may not be done well. As Mark Twain remarked, “There are three kinds of lies: lies, damned lies and statistics,” so let’s not forget that all those precise numbers are a house built on sand.
What students need, and what teachers would find most informative, is an ongoing dashboard of information about learning as it is occurring. That’s why there is so much emphasis on formative assessment these days. Feedback (aka formative assessment) trumps evaluation (aka summative assessment) if your interest is in supporting learning rather than merely sorting students.
The strength of formative assessment is its immediacy, but its weakness is a lack of precision and the complex task of understanding what it means. The evidence drawn from ongoing observation of student behaviour is best viewed not “scientifically” but through what Eliot Eisner has called “connoisseurship” or “the enlightened eye;” that is, through professional wisdom. Of course, simply being certified as a teacher does not automatically impart the enlightened eye necessary to divine the meaning within the evidence of classroom life. One has to develop this professional capacity through experience and earn the trust of students and parents in one’s ability to “see” what is going on for students and to use this “insight” to support learning. Many, probably most, teachers do, but some do not.
Formative assessment is complex, but no more so than summative assessment, and it is far more important in the teaching and learning nexus, not only for students but also for teachers. Perhaps the best source of feedback for teachers themselves is their students. The student voice, subjectively biased as it necessarily is, may offer the greatest hope for monitoring one’s emergent mastery as a teacher, and thus for providing the motivation that carries one through the exuberantly arduous turmoil of teaching. In terms of teacher engagement, this is the data that counts.