Via Trainingzone I downloaded a white paper from Questionmark with this title, authored by Eric Shepherd, John Kleeman, Joan Phaup, Kay Fair, Martin Belton. I found this paragraph significant:
Over-engineering low-stakes assessments can result in unnecessary costs and wasted time. Under-engineering high-stakes assessments can undermine people's confidence, organization's processes, and undermine the face validity of the assessment.

The paper distinguishes between formative, summative and diagnostic assessments, defines four different assessment types (exam, quiz, survey and test) and categorises the consequences of passing, failing or completing each of these respectively. Delivery considerations are listed in relation to each of the four types.
Different types of assessment environments are considered. In the spirit of the subject matter, it seems, symbols are allocated to the different environments to indicate their suitability for delivery of the four assessment types. For example, professionally controlled centres score an A+ for very high-stakes exams, but a lowly F for quizzes and surveys, while the best overall scores seem to belong to training rooms, with scores ranging from a B- for very high-stakes exams to an A for surveys and tests.
Attention is given to the creation of the right environment and the deployment of software to ensure security, touching on authoring, communication, scheduling, monitoring and browsers.
I found some of the terminology a bit odd ("reduce forgetting" as opposed to the more positive "improve retention"), but that might simply be my own pedantry rearing its head. The section numbering is also inconsistently applied. Nevertheless, it makes for interesting reading.
Some years ago, I worked at a college that offered the City & Guilds IT courses. These courses all culminated in an exam with a very rigorous marking system from which the tutors were not permitted to deviate. A pass or fail meant the difference between receiving or not receiving the coveted certificate. It was heart-rending to have to fail a learner who had evidently grasped the concepts, but whose single error happened to have been on one of the mandatory points, while passing another whose multiple errors were on non-mandatory points and fell within the permitted quota (it was also very difficult to try to explain this to learners in the former category!).
During my time there, the e-Quals were due to have been introduced, with some form of online assessment replacing the traditional hard copy exam papers. However, there must have been some complications associated with this process, since the e-Qual failed to land (apologies Jack Higgins!) during my time there. I have no idea whether this transition has now taken place and if so, how successful a step it proved to be.
Nowadays my involvement with assessment tends to be restricted to quizzes and surveys, so I do not feel qualified to judge the accuracy or usefulness of the content of the white paper. I would be interested to hear the views of those who are closer to the coalface.