Mapping Assessment


Let’s face it. Assessment can be vexing as a word, a concept, and a practice. Some educators fixate on it to the exclusion of good teaching practices: “This is all well and good, but how do we assess it?” It’s as if they see no point in creating optimal conditions for learning that engage students if, in the end, those conditions don’t produce a learning outcome that can be easily quantified. Such teachers may even privilege assessment as the goal of teaching rather than the creation of optimal conditions for learning. At other times, we as educators get confused by the language of assessment and hence talk by, over, or past our colleagues and students because we each assume the other is using the term in the same way we are. Some may use the word to mean evaluation, grades, and scores, while others mean feedback, a conversation, or taking stock of our teaching. Even when assessment is understood to take different forms, it might be simply dichotomized as summative or formative and reduced to merely describing different types of tasks and how they get dealt with in our grading. And then there is the plethora of assessment practices to contend with: rubrics, portfolios, tests, conferences, exit tickets, surveys, projects, and so on. To be sure, we need as a profession to better understand the complexities, forms, purposes, practices, goals, and meanings of assessment. What might it look like to map the broad terrain of assessment?

 

I propose that we think of assessment as occurring on two dimensions. The first dimension (let’s set this on a horizontal continuum) is the degree of evaluation in which we engage. At the far end of this continuum (we’ll place it on the right), we are highly evaluative, desiring scores and measures that quantify outcomes in a fairly precise way. Here, we judge work against clearly defined criteria that we apply to see just how close to the mark a student gets. Such evaluation can produce ranks and comparisons. At the other end of this continuum (we’ll place it on the left), we seek to understand students where they are, making sense of their actions and responding through our grounded interpretation. Here, rather than come with predetermined criteria, we open ourselves to the possibilities and variations in both learning styles and outcomes that a close examination of our students’ learning might provide.

 


The second dimension (let’s set this on a vertical continuum) is the extent to which our assessments are integrated into our instruction and part of the ongoing learning of the classroom. At one end (we’ll place it at the top), we have assessment that is highly embedded in our teaching and students’ learning. That means we don’t stop or pause our instruction in order to assess but instead embed assessment as a regular part of our practice. At the other end of the continuum (placed at the bottom), we have assessment that is set apart from instruction and student learning. Here, we declare a formal end to our instruction and move into a deliberate assessment phase that we hope will reveal something about students’ learning. A basic graph of these two dimensions produces four quadrants that we might use to map the terrain of assessment (see Figure 1).

Figure 1:  Mapping Assessment on Two Dimensions


With this map of the terrain in hand, we can begin to place our various assessment practices in the appropriate quadrant. Let’s begin with Quadrant D since this represents the traditional way schools have thought about assessment. This quadrant is characterized by highly evaluative practices that are set apart from our teaching practice. The teaching is completed, and now it is time for students to show their learning in a manner we can evaluate. Here we have practices such as tests and formal summative assessments. At the farthest extreme of the lower right-hand corner we have district, state, or provincial tests, national exams, or tests administered by outside agencies.


Continuing in the “set apart” range, let’s consider the assessment practices of Quadrant C. These practices once again are not an embedded part of our teaching, but they differ from those in Quadrant D in that they are “low judgment” and more interpretive in nature. Examples here would be the practice of “Looking at Student Work” (LASW), examination of teachers’ documentation of learning, or video analysis. Typically, these practices are done as part of a professional learning group and aided by the use of protocols. The goal here is not to evaluate students or score their work but to look for learning in whatever form it might take. Another assessment practice that fits in this quadrant is clinical interviewing, in which we remove students from class to engage in one-on-one interviews that help us learn more about their learning. Such interviews can be evaluative, if one has in mind a ranking or comparison of students, or interpretive, as in Piagetian-style interviews.

 

Moving to the upper part of our map, let us consider Quadrant B. Here we have assessment that is low in its judgment and embedded in our instruction. This is the assessment that happens in class and informs our instruction on the spot. We might call this kind of assessment part of our formative and responsive practice as teachers. We read the class, we gather clues and evidence, and we check for understanding and misconceptions, all with an eye toward modifying our instruction. This is perhaps one of the key skills teachers acquire over time and one that beginning teachers strive to master. One way to help develop these skills is by spending time engaging in Quadrant C practices. When we look carefully at documentation of learning in a low-stress, set-apart context, we begin to develop the eyes with which to see learning as it unfolds in front of us.


Finally, we arrive at Quadrant A, in which our assessment practices are embedded in our teaching and students’ learning but also evaluative in nature. Being evaluative should not be viewed negatively. A swimmer wants their coach to evaluate their stroke and consistency, for instance. Building class criteria for what quality work looks like can be extremely useful. Nor does evaluation always have to come from the outside. As a learner, one can set goals for oneself that can be used to evaluate progress. One practice in this quadrant is the feedback we provide to students on their performance. To give effective feedback, some criteria have to be applied. Providing good feedback on students’ writing requires that we understand what constitutes quality writing, how far the student is from that goal, and what immediate actions might allow them to make some progress. Rubrics and success criteria, as well as student and peer assessment practices, also fit in this quadrant.

 

The point of this mapping is not to label any of these sets of practices as good or bad but to map the terrain, to provide a bird’s-eye view, if you will, of what assessment can mean in different contexts. All of these assessment practices have their place and purpose. Another point of a map is to help us navigate, to know where we are and where we might go or want to be. Where do your current assessment practices fit on the map? Are you and your colleagues spending most of your time in Quadrant D, as many teachers do? What other quadrants need to be explored?
