Lisa Fairfax poses the following issue:
As I stare at my 100+ Business Associations exams, I ask myself why I have yet to give a multiple choice exam. Since I have been in academia, my exams have been a combination of short and long essay questions. At a meeting over the weekend, when I asked a couple of law professors what type of exams they gave, I was surprised to learn that almost everyone gave or had given a multiple choice exam in some form. That is, some people gave half or some portion of their exam as multiple choice, while others gave an entire multiple choice exam. Of course, everyone agrees that creating a multiple choice exam takes significantly more time than creating an essay exam, and hence the ease of grading that such exams provide requires an investment of time on the front end. However, the brief survey over the weekend did have me wondering whether more people have begun to gravitate toward multiple choice exams in lieu of the essay.
I first used a multiple choice exam when I was teaching at Illinois. The Associate Dean had assigned me two sections of Business Associations in the same semester, with a total enrollment of 230 students. Unable to face that many bluebooks, I worked up a set of objective questions and haven't looked back since.
In terms of student preferences, my evaluations over the years have been unhelpful. About half the students who express an opinion say they prefer objective exams, about half prefer essay. So that doesn't get us very far.
How about pedagogical validity?
It's been well said that:
There has been a long-standing debate on academic testing--which is better, essay testing or multiple choice testing? Anyone who has been in education as long as we have knows that whenever a question is phrased that way, the answer is "neither," followed by a long string of qualifiers.
In the first place, I think it's easier to justify the use of objective exams in the second and third year than in the first year of law school. This is so because:
The first and most important qualifier in determining which question type is preferable is the instructional objective being tested. To coin a phrase, "you can't weigh a duck with a yardstick." The measure you choose must match the purpose of the measurement, if not physically, at least conceptually. So if your purpose is to measure the student's grasp of basic facts, the type of test items you choose should require just that and nothing more. If, on the other hand, you want to measure a student's ability to communicate facts concisely, you need to choose a totally different task for the test.
In the first year, we need to test the student's ability to "think like a lawyer." By the second and third year, however, if students haven't learned how to think like a lawyer, they probably never will. At that point, we are teaching - and testing - knowledge more than basic reasoning and communication skills. It's the same argument I give for not using the Socratic method in upper-division classes, by the way.
There's another set of reasons I like objective questions as a testing tool:
... essay items allow the student to throw in everything he knows about a subject; and while these items may be more sensitive to breadth of learning, they are more vulnerable to padding or skirting the question. ... [Accordingly,] students may try to "pad" their way through an essay item without knowing the subject.
Frankly, I get very tired of trying to slog through bloated essay answers that never seem to get to the point.
Finally, I think objective questions have certain technical advantages:
Since multiple choice items require less time to answer than do essay items, more items can be included in an exam; therefore, more areas can be tested. The wider the sample range, the more valid and reliable a single test can be.
It's much easier to cover a wide range of topics with objective questions, especially since I find that only certain topics in a course justify - or can support - a 60-minute essay, which means that with essays you end up testing the same topics every year.
A second consideration is test reliability. Multiple choice tests, in general, are more reliable than essay tests. The greater reliability is primarily a function of the required grading procedures. Essay items are subject to variability due to errors or influences on grading as well as achievement level differences among the test takers, while the variability of multiple choice items comes mostly from differences among test takers. It is possible to train essay graders to be more consistent and reliable, but such training occurs infrequently.
When I give essay exams, I always feel that grading them - like sausage making and law making - is something you don't want to watch too closely. Trying to explain to a student why they came up one point shy of an A on an essay exam is always a lot tougher than it is with an objective exam.
Finally, multiple choice items lend themselves to statistical analysis for evaluation and improvement purposes. Measurement and evaluation centers can assess the effectiveness of test items or provide instructors with information for conducting their own analyses. Although comparative analyses can be done on essay items, the procedures must be done by hand and are far less reliable.
Every year, I determine the ten least effective questions in that year's set and throw them out, never to be used again. As a result, over time, one develops a testing instrument in which one can have a high degree of confidence.
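For readers curious what "least effective" might mean in practice, here is a minimal sketch of one common form of item analysis: the point-biserial discrimination index, which asks how well each question separates stronger from weaker test takers. The data layout, variable names, and the "retire the bottom ten" cutoff are illustrative assumptions, not a description of any particular measurement center's procedure.

```python
# A rough sketch of item analysis, assuming exam results are available as a
# 0/1 matrix of students x questions. Low (or negative) discrimination values
# suggest a question does not distinguish strong from weak students.
import numpy as np

def item_discrimination(responses: np.ndarray) -> np.ndarray:
    """Point-biserial correlation of each item with the rest of the exam.

    responses: shape (n_students, n_items), 1 = correct, 0 = incorrect.
    """
    totals = responses.sum(axis=1)
    discs = []
    for j in range(responses.shape[1]):
        # Correlate item j against the total score excluding item j,
        # so an item is not credited for correlating with itself.
        item = responses[:, j]
        rest = totals - item
        if item.std() == 0 or rest.std() == 0:
            discs.append(0.0)  # everyone got it right (or wrong): no information
        else:
            discs.append(np.corrcoef(item, rest)[0, 1])
    return np.array(discs)

# Illustrative use: simulate 100 students on 40 questions, then flag the
# ten least discriminating items as candidates for retirement.
rng = np.random.default_rng(0)
ability = rng.normal(size=(100, 1))
difficulty = rng.normal(size=(1, 40))
responses = (ability - difficulty + rng.normal(size=(100, 40)) > 0).astype(int)

disc = item_discrimination(responses)
worst_ten = np.argsort(disc)[:10]
print("Candidates for retirement:", sorted(worst_ten.tolist()))
```

On real exam data, one would of course look at the flagged questions before discarding them; a low index sometimes signals an ambiguous stem or a miskeyed answer rather than a genuinely bad question.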
A final advantage of objective questions is that they level the playing field for the growing number of international students in my class, who must be graded on the same scale but whose communication skills are, understandably, likely to be less effective than those of my American students.
Of course, I freely admit that all this might be just rationalizing. But, then again, sometimes self-interest and doing the right thing are one and the same.