I am busy with a project at the moment, on which a bit of a sticking point has developed around the assessments. Some of the assessment questions being put forward are, well, inappropriate.
One could argue (and I often do) that the most valid form of assessment is the performance management process. One's team leader and colleagues are best positioned to assess whether one's learning has resulted in any changed behaviours, increased skill, etc.
But there are times when compliance requirements must override such intangible evidence, and quantifiable evidence must be recorded. This generates the need for assessment.
Matters are not helped by the fact that the word that has passed into common parlance (thanks to the terminology used in the various applications) is "Quiz". It's hard to place much credence in an assessment that is termed a quiz, but it is what it is, so we'll move past that point.
The expectations of SMEs and sponsoring clients (and now I'm not just referring to my current project) seem often to be for multiple choice questions that are lifted straight from the text of the learning resource.
If it is possible for a person with no prior knowledge of a concept to take the assessment and pass, then I'd question the validity of the assessment. If the question is followed by a reference to the page on which the answer may be found, even more so! At the very least, the questions must call for inference. Primary school children do comprehension tests on extracts from literature, where the answers may be found verbatim in the text. Once they get to secondary school, children are expected to be able to draw conclusions and to conjecture based on the given text. Why, when we're dealing with work-based adults, do we feel the need to revert to the verbatim approach?
We should be able to say with confidence that passing an assessment is indicative of something. Based on the fact that Joe Bloggs has passed this assessment, I should be able to assume that he has a good working knowledge of whatever-it-is. If this is not the case, then surely the assessment is a waste of time and resources.
When there is a compliance requirement for an assessment, this usually means that there is a regulatory body in the background. Let's imagine that a representative of this body is visiting our site. Proudly we tell him/her that 97% of our staff have passed the assessment on, say, first aid in the workplace. The rep asks to see the assessment.
Q1. Do you know where the first aid kit is kept?
(Correct answer: yes)
Q2. The position in which we place a person is known as:
(a) The pike position
(b) The first aid position
(c) The missionary position
(d) The recovery position
(Correct answer: d)
I can't imagine that this will fill the rep with any sense of confidence that, were he/she to take a tumble during that visit, any number of employees would be able to provide appropriate first aid. In fact, perhaps it is worth mentioning that I have a certificate that says I am a qualified first aider, yet I faint at the sight of blood - surely the assessment should have taken this small impediment into consideration?
I would also argue that a straightforward multiple choice assessment has limited validity. There are all manner of urban legends that tell of small children/monkeys/random generators that pass such assessments. All these aside, there are limits as to the challenge that such questions can present. Although it is worth bearing in mind that, with the technology at our disposal, there is much that we can do with multiple choice - instead of four text options from which to select, the user might be faced with images, video/audio clips or cartoon strips.
The trouble with assessment questions that go beyond "select the correct answer" is that they are likely to require some human intervention. Even if the answers are a couple of lines of text, someone will have to "mark" them, since the automated options are a bit thin and unreliable on this front. And this is where the rope hits the rudder. Quite often, elearning resources are requisitioned to obviate the need for human involvement in learning and development. I have lost count of the number of times I have suggested a project-based learning-cum-assessment to be submitted to the line manager within a specified timeframe, only to be assured that (a) the learners wouldn't do it and (b) the line managers would never agree to it.
Let's look at it this way:
- Do you really need a formal assessment? If not, skip it!
- Why do you need it? Your justification of the requirement will shape the assessment.
- How will it benefit the learner? And yes, this is the point... unless of course, this is purely a box ticking exercise.
- What do you want it to prove? What conclusions should I be able to draw from each person's results on the assessment?
And above all, it must be possible to fail. If you have created an assessment that anybody can pass then what you have is not an assessment at all. It is an attendance register.