Where’s the E in assessment?

It’s a standard kind of thing lots of our lecturers do: weekly tests to keep students on their toes and keep them thinking. In my case, it’s a final-year module on web services – eleven weeks divided into five main topics, with fortnightly “objective” tests delivered to 20-30 students.

In this post, I want to consider this particular type of assessment and see how the use of technology can impact upon it.

What

A small number of multiple-choice questions are used, sometimes in conjunction with code samples, to test basic understanding.  Over the last couple of years the questions have been delivered in a variety of formats including:

  I. physical, paper-based tests in class with emailed feedback
  II. in-class tests with paired students, with (non-E) voting systems, and immediate feedback
  III. downloadable question sheets, uploadable answers, emailed feedback
  IV. online MCQ testing with immediate online feedback

In the first two formats, there is very little E.  The third relies on the VLE for communicating information, while the last is the most typical form of e-assessment, relying on the use of an independent MCQ platform.
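
To make the last format concrete, here is a minimal sketch of how a code-sample MCQ might be represented and auto-marked with immediate feedback. The question, data structures and feedback text are invented for illustration – they are not taken from the module or from any particular MCQ platform.

    # Hypothetical sketch of a code-sample MCQ with immediate feedback (format IV).
    # The question, options and feedback strings are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class MCQ:
        stem: str        # question text, including a short code/HTTP sample
        options: dict    # option letter -> option text
        correct: str     # letter of the correct option
        feedback: dict   # option letter -> immediate feedback text

    question = MCQ(
        stem=("A client sends:\n"
              "    GET /orders/42 HTTP/1.1\n"
              "but order 42 does not exist. Which status code should the service return?"),
        options={"A": "200 OK", "B": "404 Not Found", "C": "500 Internal Server Error"},
        correct="B",
        feedback={
            "A": "200 implies the resource was found and returned.",
            "B": "Correct - the resource identified by the URI does not exist.",
            "C": "500 signals a server-side failure, not a missing resource.",
        },
    )

    def mark(q, answer):
        """Return a score and the immediate feedback for one submitted answer."""
        score = 1 if answer == q.correct else 0
        return f"Score: {score}/1. {q.feedback.get(answer, 'Unrecognised option.')}"

    print(mark(question, "A"))   # Score: 0/1. 200 implies the resource was found ...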

Why Test

If asked why I do regular testing (or when asked – by Octel), my justification or explicit objectives would be based on a subset of something like Chickering and Gamson’s principles, such as:

  A. time on task – giving students something to aim for and ensuring engagement with the basic material
  B. high expectations – showing students the kinds of questions we would expect them to be able to answer
  C. prompt feedback – letting students know how well they are doing and whether they need to be changing what/how they are studying

And routine testing can help meet these objectives. However, there is a risk that this approach does not deepen knowledge and understanding. Instead it might just direct students into learning for the test – a very superficial approach.

An almost equally important question to “why test?” would be “why use technology to support testing?” While some may say that tech-supported testing offers a richer testing environment (as shown by the use of video to present alternative routes through a real-life scenario), in practice many of my issues around e-testing are more to do with practicalities than pedagogy. It is all too easy to embed simple MCQ questions into online material to give an impression of interaction, without doing anything with the information.

Why Not Test

So how do we avoid the pitfalls of superficial online testing? It’s interesting to use the 12 REAP principles to reflect more on my practice, to make sense of what I have tried and to think about where else I could go. Although principles 1 (good performance), 2 (time and effort) and 3 (quality feedback) match A-C above and could be viewed as already covered, there is clearly much more thinking that could be done.

First up, it is possible to argue that the MCQ testing in itself is not a challenging or interesting learning task, something that REAP promotes (principle 2). Fortunately, in the module under discussion the MCQ testing is not done in isolation. Alongside the formative testing, there is a parallel stream of (summatively) assessed practical tasks which provides more challenge. Making clearer links between these tasks and/or synchronising the timing could reinforce the value of the formative tests and encourage a deeper approach to learning. More detailed feedback could also provide an opportunity for the testing to impact learning (principles 4 and 5) as measured or guided by the other summative tasks, provided the learner engages in reflection (principle 7).

A superficial approach can also be avoided by supporting social interaction around the formative testing, promoting peer-supported, self-directed collaborative learning. This is implicit in the classroom approach (II), which uses low-tech, “Strictly Come Dancing” style, colour-coded response cards shared by pairs of students. The pairing works well in promoting discussion to select the correct answer, and the relatively low number of scorecards provides an easy way of assessing overall performance and providing feedback. The fact that the feedback is given in a face-to-face environment provides more opportunity for, and encourages, dialogue (principle 6).

Challenges and Opportunities of e-Testing

The different ways of engaging in testing (online/offline, open/closed book, synchronous/asynchronous) emphasise different REAP principles, which might find favour with different teachers. Interestingly, when students are asked which method they favour there appears to be less variation: they consistently prefer option III – the open-book, asynchronous, VLE-facilitated tests.

While involving learners in decision-making about assessment practice is one of the REAP principles (9), the preferred student option feels less authentic than the more interactive face-to-face option (II), and less demanding than the full-blown online MCQ with personalised feedback (IV). However, constraints on time (for option II) and institutional support (for option IV) mean that option III is pragmatically more manageable for the number of students involved.

Despite the challenges of adopting a more varied testing format, REAP-inspired reflection does suggest a number of refinements to the testing process, in particular for entirely online students. One way to increase student reflection, dialogue and the development of learning groups (principle 10) might be to start with individual tests, using the results to select mixed-ability learning groups. The groups could then be tasked with debating and submitting just a single set of agreed answers for each group. If gamification is seen as a motivating factor, group results could be published via a leaderboard.
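
As a rough sketch of how that might work for an online cohort (the names, group size and scores below are hypothetical, and this is not part of the module as delivered), the individual test results could drive both the group formation and a simple leaderboard:

    # Hypothetical sketch: form mixed-ability groups from individual test scores
    # and publish agreed group results as a simple leaderboard. Names and numbers invented.
    def mixed_ability_groups(scores, group_size=4):
        """Deal students, ranked best to worst, into groups so each group spans the ability range."""
        ranked = sorted(scores, key=scores.get, reverse=True)
        n_groups = max(1, len(ranked) // group_size)
        groups = [[] for _ in range(n_groups)]
        for i, student in enumerate(ranked):
            groups[i % n_groups].append(student)      # round-robin deal
        return groups

    def leaderboard(group_scores):
        """Format agreed group scores, highest first, for publishing to students."""
        ordered = sorted(group_scores.items(), key=lambda kv: kv[1], reverse=True)
        return [f"{rank}. {name}: {score}" for rank, (name, score) in enumerate(ordered, 1)]

    individual = {"Ann": 9, "Bob": 4, "Cai": 7, "Dee": 2, "Eli": 8, "Fay": 5, "Gus": 6, "Hal": 3}
    print(mixed_ability_groups(individual, group_size=4))
    print("\n".join(leaderboard({"Group 1": 14, "Group 2": 17})))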

The role of technology in testing

In thinking about testing, my first question was where the “e-” is in this type of assessment. Or, more importantly, what makes it an e-assessment? And, by the way, does being an e-assessment mean it is not possible to undertake it without any technology support? But on reflection, the lines are blurred and the why is clearly more important than the how. Technology shouldn’t be the deciding factor in whether we want to do paired or group testing – but it sure helps scale things up from 30 students to 130.

And thinking about technology as an enabler opens up the possibility of re-engineering other assessment opportunities. Rather than just relying on students commenting on other people’s project suggestions in a forum, why not build a more structured online peer review element into the proposal stage … now there’s a (not very novel) idEa!
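
A structured allocation could be as simple as a shuffled circular rota, so reviewing effort is even and nobody reviews their own proposal. The sketch below is purely illustrative, with invented names and parameters, rather than anything already built into the module:

    # Hypothetical sketch of allocating structured peer review at the proposal stage.
    import random

    def allocate_reviews(students, reviews_each=2, seed=0):
        """Map each reviewer to the peers whose proposals they will review."""
        rota = students[:]
        random.Random(seed).shuffle(rota)             # reproducible shuffle
        n = len(rota)
        return {rota[i]: [rota[(i + k) % n] for k in range(1, reviews_each + 1)]
                for i in range(n)}

    # Each student gives (and receives) exactly two reviews, never of their own work.
    print(allocate_reviews(["Ann", "Bob", "Cai", "Dee", "Eli"]))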

2 thoughts on “Where’s the E in assessment?”

  1. Rose Heaney

    I think you have summed things up well here Guy.

    I work with Bioscience staff who make extensive use of formative & summative MCQs on Moodle (MCQ is shorthand for a variety of question formats, by the way) and find that it is efficient from a teaching point of view (200+ student cohorts) but also pays dividends for learners in terms of some of the principles you refer to above. I think another advantage of the ‘e-’ in this context is that you can build up a rich bank of questions over time – I suppose it is possible offline too, but harder to manage. You are also able to analyse responses to online tests quickly and make improvements for future iterations. In terms of peer assessment in this context, have you heard of Peerwise – a system that enables students to create their own questions and share them with each other? Writing questions is a more demanding task than answering, but it requires a higher degree of motivation than is always evident in the average student. I heard someone from UCL describe how students there are using it, but when I’ve mooted it with staff at my institution they are less confident of its likely take-up.

    1. guy75 Post author

      Rose

      Thanks as ever for informative and thought-provoking comments. I haven’t come across Peerwise, but it’s an interesting approach. I think it would be really interesting to use this as a framework for identifying the questions people find hard, and the common misconceptions. I wonder if making it a group activity would be better for increasing motivation and generating more of those tricky, plausible distractors!

      As for managing banks of questions, this is supposed to be one of the big pay-offs, but not having good institutional support, or not doing it often enough myself, generates some level of resistance!
