These guidelines are not intended to stifle valuable and innovative methods of assessment, but to give departments a means of choosing the most appropriate method of moderation.
The purpose of moderation is to ensure fairness, accuracy and consistency in marking, and the provision of results which are an accurate reflection of performance and can be relied upon by students and staff within the institution, as well as by other individuals and external organisations (for example, employers and accrediting bodies).
It is the process by which the University ensures:
• the consistency of marking for particular assignments and exams within modules, and consistency of assessment for all students taking a module; and
• that marking within a module is appropriate and conforms to University-wide grade and mark descriptors.
It is an integral element of the marking process, which takes place after initial marks have been awarded to individual papers and serves the purpose of reviewing marking standards for the module as a whole. This is achieved by two processes:
(i) second marking (of all scripts or more usually, a sample); and
(ii) review of the array of marks to enable consideration of the overall standard and to permit comparison of the marking standards applied by different markers and for different elements of assessment.
Normally a second marker is used to check the consistency of a first marker’s marking.
Where all papers are subject to second marking, this may properly lead to negotiated variations in the marks for individual papers. Where second marking of a sample takes place, this may lead to mark variations for categories of papers. For instance, it may be judged that marker A’s scripts marked at 48 would be marked at 52 by fellow markers and that all papers in this category should accordingly be raised. Where, exceptionally, moderation discloses that a particular marker is inconsistent with other markers or internally inconsistent, it may be necessary to institute a general re-mark of affected papers.
Where, exceptionally, extensive re-marking is required as a consequence of rigorous moderation, this would be a matter for consideration in relation to a request to extend the relevant feedback deadline.
Moderation is required for all assessments weighted at more than 3 CATS (see note in the table below under “Single marking only”). This will include essays, assignments, dissertations and examinations. Where an assessment requiring moderation involves presentations, practical work or performance it should be observed by either one or two markers and should normally be recorded to permit moderation and later scrutiny by the external examiner. Where marks are attributed to contributions to a group exercise, the material on which this assessment is based must be retained in a durable form (e.g. written reports, video recordings etc.) in order to permit moderation.
• Assessment weighted at 3 CATS or fewer in modules in which other elements of assessment are moderated.
• Observed Structured Clinical Examinations (OSCEs) in the Warwick Medical School, which are subject to audit by the General Medical Council, are exempted from moderation.
Previous agreements with departments about exemptions from moderation will cease from 2014/15 onwards.
| Method | Process | Conditions of use | Possible outcome |
| --- | --- | --- | --- |
| Single marking only | Marked by 1st marker – no second marker or moderation. | Assessment weighted at 3 CATS or fewer. | This “3-CATS rule” is not intended for situations where all elements of assessment in a module are 3 CATS or fewer, and where it would result in the single marking of all assessment for that particular module. In these cases moderation should take place for some of the elements of assessment, using an appropriate moderation process. |
| Double marking (informed) | 1st marker marks and comments; 2nd marker marks the whole cohort with sight of the 1st marker’s marks and comments. The mark is either confirmed or amended after discussion between markers. | | May result in adjustments to marks for individual papers. |
| Double marking (blind) | 1st marker marks and comments; 2nd marker marks blind (i.e. without access to the 1st marker’s marks and comments). The final mark is agreed after discussion between markers. [Simple averaging of marks is not moderation.] | | May result in adjustments to marks for individual papers. |
| Standardisation and moderation | Answer key marking, OR a two-stage process in which marks are agreed for a sample of papers to establish standards prior to the main marking exercise; a further sample is checked against the standards at the end of the exercise. | Suitable where marking is performed by a team. | May require reconsideration and adjustment of marks on a categorical basis (e.g. papers marked by a particular marker, or in a particular mark band). |
| Centralised moderation | Following first marking, a sample of papers from each marker is read and marked by a nominated moderator (who may also be a first marker in the case of team marking) – see notes below on sample size. | Suitable where marking is performed by a team, or where a single marker is responsible for first marking. | May require reconsideration and adjustment of marks on a categorical basis (e.g. papers marked by a particular marker, or in a particular mark band). May trigger categorical re-marking. |
| Cross-moderation | Following first marking, samples of scripts are exchanged between markers for second marking – see notes below on sample size. | Suitable where marking is performed by a team. | May require categorical reconsideration and adjustment of marks. May trigger categorical re-marking. |
| Consideration of array of marks | The moderator compares the array of marks for (i) comparability with marking in other modules; (ii) comparability of marks awarded by different markers; (iii) comparability of marks for different elements of the assessment. | A number of markers, and/or a number of different elements of assessment. Should be used alongside other forms of moderation. | May trigger categorical reconsideration and adjustment. May trigger categorical re-marking. |
| Marking assistance | Request for assistance (e.g. double marking) from the moderator or a fellow marker. | A paper presenting particular difficulties in assessment. | An extended process to produce the first mark. |
| Sampling (an element in moderation) | Selection of papers for moderation. A sample should contain a random selection of all grades present. Sample size should usually be at least 20% of the total or a minimum of 20 papers, whichever is the lower (see notes). For cohorts of 200 or above this should be a minimum of 30 papers. | See the standardisation and moderation, centralised moderation and cross-moderation methods above. Sampling will not be appropriate for a new or inexperienced first marker. | Sample size should be appropriate to allow moderation of all classifications, all elements of the assessment (where candidates have a choice of tasks) and the work of all markers. Sample sizes may need to be larger than 20% or 20 papers to encompass all factors in large cohorts. |
| Moderation of fails | All fails should be considered by a moderator. | | Confirmation or adjustment of the first mark on the individual paper. |
| Moderation of marking for new and/or inexperienced markers | Use of a more rigorous moderation method than might normally be required – for example, a larger sample. | Where new members of staff, PGR students or other inexperienced staff are involved in marking, a more rigorous moderation method should be selected in the first instance. | |
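The sampling thresholds in the table can be expressed as a short calculation. The sketch below follows one reading of the guidance (the lower of 20% of the cohort or 20 papers, rising to a minimum of 30 papers for cohorts of 200 or more); the function name is illustrative and not part of the guidance, and departments may of course sample more generously.

```python
import math

def moderation_sample_size(cohort_size: int) -> int:
    """Minimum moderation sample under one reading of the guidance:
    the lower of 20% of the cohort or 20 papers, with a minimum of
    30 papers for cohorts of 200 or above."""
    # Lower of 20% of the cohort (rounded up) or 20 papers.
    base = min(math.ceil(0.2 * cohort_size), 20)
    # Large cohorts (200+) require at least 30 papers.
    if cohort_size >= 200:
        base = max(base, 30)
    # The sample can never exceed the cohort itself.
    return min(base, cohort_size)
```

For example, a cohort of 50 yields a sample of 10 papers (20%), a cohort of 100 yields 20 (the cap), and a cohort of 250 yields 30 (the large-cohort minimum). As the table notes, the actual sample may need to be larger to cover all classifications, markers and assessment elements.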
The QAA Code states that the role of the External Examiner is to look at the marking process and not to look at individual marks. The External Examiner should be engaged after the internal moderation process has been completed and should not be treated as a second (or third) marker.
The External Examiner should be presented with a complete set of marks and a sample set of scripts/assignments after the completion of the internal moderation process. The External Examiner should be provided with an explanation of the marking/moderation process and this process should be visible to the External Examiner on the basis of the papers sent. The External Examiner’s role is to audit/validate the marking and moderation process.
As long as the conditions of use of the methods of moderation described in the table above have been met, the module convenor should agree with the examiners which is the most appropriate method of moderation to be used. For example, an undergraduate dissertation could use double marking (informed) or double marking (blind), but is highly unlikely to meet the conditions for standardisation or sampling.
If scaling of marks is to be applied it must be carried out in accordance with the Guidance on Scaling.
Adjustment of marks in the process of moderation should be within the 17-point marking scale, if that scale is being used. The adjustment of marks for a category of assessments should only be used where marker and moderator are satisfied that the mark outcomes will be appropriate for all candidates. If the issue identified by the moderator is that a particular mark (say 48) spans a range of quality and that better papers in this range should be upgraded, then it would be necessary to reconsider all papers within this category on an individual basis. Aggregate marks for modules may fall outside the 17-point mark scale, and in the process of moderation it is permissible to amend the aggregate mark for a module.
When feedback is given before an exam board, it should be clearly communicated to students that the marks they are receiving are not final. Some departments address this issue by giving feedback before the exam board, but not actual marks. As a general principle, definitive feedback is only possible after the moderation process has taken place, but the timing of an exam board should not delay students receiving feedback within the University’s turnaround time policy.
In designing placement learning assessments, departments are asked to consider the way in which moderation will take place. Please see further guidance at the Placement Learning Assessment webpage.
It is recommended that management information tools, and other support as appropriate, be developed to allow moderators to fulfil their role of reviewing the array of marks in a module. Such a tool might provide key statistical data, such as maximum and minimum marks and standard deviation, and would permit comparisons between different markers and between different elements of assessment. SITS currently allows for a quick analysis of uploaded marks, and further support and training for staff in relation to this function might be provided to departments.
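As an illustration of the kind of summary such a tool might produce, the sketch below computes the basic statistics mentioned above for each marker. The marker names and marks are invented for the example; this is not a description of SITS functionality.

```python
from statistics import mean, stdev

# Hypothetical mark data keyed by marker, for illustration only.
marks_by_marker = {
    "Marker A": [48, 52, 55, 61, 58, 49],
    "Marker B": [62, 64, 59, 67, 70, 63],
}

def summarise(marks):
    """Key statistics a moderator might review for one marker
    or one element of assessment."""
    return {
        "n": len(marks),
        "min": min(marks),
        "max": max(marks),
        "mean": round(mean(marks), 1),
        "sd": round(stdev(marks), 1),
    }

for marker, marks in marks_by_marker.items():
    print(marker, summarise(marks))
```

Placing the per-marker summaries side by side makes systematic differences visible – for instance, a marker whose mean sits several marks below colleagues marking the same element, which might prompt the categorical reconsideration described above.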
At departmental level, moderation methods in use should be communicated to staff and students in module and/or course handbooks, and in departmental assessment strategy documents. Scaling methods should be clearly described.
Download a full copy of the moderation guidance:
Moderation Guidance pdf