Authors
Lester Gilbert, Veronica Gale, Gary Wills, Bill Warburton
Publication date
2009
Publisher
University of Southampton
Description
Commissioned by the Joint Information Systems Committee (JISC) in 2008, the ‘Report on Summative e-Assessment Quality (REAQ)’ project surveyed quality assurance (QA) activities commonly undertaken in summative e-assessment by UK Higher Education (HE) practitioners and others. The project focused on what denotes high quality in summative e-assessment for the interviewees and the steps they take to meet their own standards. An expert panel guided the project.

What denotes high quality summative e-assessment
Expert opinion focused, in this order of priority, on:
• Psychometrics (reliability, validity),
• Pedagogy (mapping to intended learning outcomes), and
• Practical issues (security, accessibility).

What ‘high quality’ meant to our interviewees depended on the role they played in the process of creating and using e-assessments. They listed the following matters, in order of how often they were mentioned:
• Using the medium to give an extra dimension to assessment, including creating e-assessments that are authentic to the skills being tested;
• Issues around delivery, including security, infrastructure reliability, and accessibility;
• Fairness and ease of use;
• Supporting academic, managerial, and organisational goals;
• Addressing the intended learning outcomes; and
• Validity and reliability, mainly in their ‘non-psychometric’ senses. Interviewees with the role of learning technologist (or similar roles designed to aid academics in the use of e-assessment) used these terms in their psychometric senses.

Interviewees focused on the e-assessment issues that were foremost in their minds. As processes to deliver e-assessment are rarely embedded in …