Published 22 August 2008, doi:10.1136/bmj.a1282
Cite this as: BMJ 2008;337:a1282
Chris Ricketts, director of assessment 1, Julian Archer, NIHR academic lecturer in medical education 1
1 Peninsula College of Medicine and Dentistry, Plymouth PL4 8AA
Correspondence to: J Archer julian.archer@pms.ac.uk
Chris Ricketts and Julian Archer argue that a national test is the only fair way to compare medical students, but Ian Noble (doi: 10.1136/bmj.a1279) believes that it will reduce the quality of education
The General Medical Council's consultation on student assessment1 and the inquiry into Modernising Medical Careers2 have prompted interest in national examinations for medical students or newly qualified doctors. We believe that national examinations are the only fair way to rank medical students because they offer a unique opportunity for standardisation, consensus, and pooling of resources.
The UK already has a system for ranking medical students as part of the application process for their first postgraduate position. Students are ranked 1, 2, 3, or 4 depending on their performance within their medical school. In 2007 this rank provided 45 marks of the total application score of 100 (45 being the maximum mark and 30 the minimum allocated according to each rank) and therefore had a major effect on every student's chance of getting his or her preferred post. Each medical school uses its own internally devised assessments to rank students.
The current system is inherently unfair because it erroneously assumes that all medical schools have the same spread of candidate capability and that their assessment data are of equal value or validity for ranking. However, recent studies of medical school final examinations in the UK have shown several important differences in their qualifying assessments3 and standard setting,4 and the value of the current ranking system in the application process has recently been down-weighted. These differences may persist through a doctor's working life because graduates from different medical schools show significantly different performance in subsequent postgraduate examinations.5 6
Fair ranking requires good reliability. Reliability requires standardisation and structure.7 Standardisation is best achieved by all candidates experiencing the same assessment tasks rather than the current plethora of non-standardised local assessments. National examinations would provide a common set of assessment tasks for every student, a prerequisite for fair ranking.
Any national examination with high stakes would need to be designed and delivered to current standards of best practice in test procedures.8 This includes a robust and defensible approach to definition of the test, implementation, standard setting, and quality assurance. These criteria are best achieved by consensus of stakeholders, including employers,9 and pooling rare assessment expertise, as is currently done in the United States and Canada. However, such an approach is impossible in the UK while resources for assessment are divided among medical schools. It costs as much to set a high quality test for 100 students as for 10 000. Pooling of resources to create one national examination could therefore reduce costs or make better use of available funds.
A national examination has other advantages. Firstly, it would support the establishment of a common curriculum. This will be increasingly important with the establishment of private medical schools. Secondly, because national examinations are independent, they can remove any local bias. Some students perform particularly well in one high profile area: the subsequent "halo" effect10 11 may bias their local ranking. Thirdly, a national examination would provide prospective students with a more robust comparison of how medical schools perform. Applicants to seemingly expensive medical school courses will increasingly demand better data to inform their choices. The performance of each medical school's graduates in a national examination would be important information. Fourthly, with the movement of doctors globally, especially freely within the European Economic Area, a national examination would allow direct comparison of all graduates and doctors wanting to enter practice.
Finally, a national qualifying examination is likely to improve patient care. Although much of the evidence for the impact of national examinations on patient outcomes is from postgraduate certification examinations,12 there is also evidence that performance in earlier licensing examinations affects patient care.13
A further question is whether a national examination should automatically be a qualifying examination. A national examination has clear benefits and a unique role in ranking students. The main difference between a ranking and a qualifying examination is that the first provides information about candidates' relative ability whereas the second also provides a pass or fail decision. Several nationally delivered examinations primarily provide ranking or grading information without pass or fail decisions—for example, A levels and medical school admission tests. The main purpose of national ranking of medical students is to aid recruitment to the first postgraduate training post. So a national examination may not need to be a qualifying examination.
However, relying on national ranking but local qualification may produce anomalies. A student may qualify from one medical school but be ranked lower nationally than a student who has failed to qualify from another. Using a national examination for both ranking and qualification is therefore fair to medical students, standardises the minimum qualification level, and is likely to improve patient care.