Chaotic Multiple Choice Test

Contributors: Sus Lundgren
Last modification: May 15, 2017
Source: Lundgren (2014)[1]
Pattern formats: OPR Alexandrian

In order to reduce guessing in multiple choice question tests, and to reduce the effort of test construction, construct the test so that the ratio of correct answers is comparatively high (e.g. 50%) and the correct answers are distributed unevenly (that is, a question may have zero, one, or more than one correct answer).


Problem

Constructing Multiple Choice Questions (MCQs) for classroom use poses two basic problems:

– Students – especially test-wise ones – may gain extra points by guessing.

– Writing distractors – i.e. incorrect yet plausible alternative answers – is difficult and time-consuming (especially if each question is to have as many as four or five alternatives).


Forces

One cluster of forces relates to guessing and to penalties for guessing. If penalties are not used, students have nothing to lose and will gain extra points by guessing, as pointed out by Scharf and Baldwin[2]. If penalties are used, guessing becomes a matter of calculating the odds[3]. These odds improve if one or more of the distractors can be spotted, either by knowing the subject – the aim of the test – or by applying meta-analysis skills to how the questions and alternatives are formulated (cf. Biggs, pp. 180-181[4]). If the test is badly constructed (often due to poor distractors), the latter may well be a suitable strategy that does not require a lot of tiresome studying. If penalties are used, there is a risk that insecure students are too cautious and score less than they ought to, since they may refrain from answering some questions for fear of the penalty. Gamblers’ results may vary a lot, depending on chance.
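
A back-of-the-envelope expected-value calculation makes these odds concrete. The sketch below (plain Python; the one-correct-in-four setup and the 1/3 penalty are illustrative assumptions, not values from the pattern) shows why guessing always pays without penalties, and how spotting a single distractor tips the odds back in the guesser’s favour even when a penalty is used.

    # Expected score per question when guessing blindly on a conventional
    # MCQ with one correct answer among n alternatives (values illustrative).
    def expected_guess_score(n_alternatives: int, penalty: float) -> float:
        """P(correct) * 1 point + P(incorrect) * (-penalty)."""
        p_correct = 1 / n_alternatives
        return p_correct * 1.0 - (1 - p_correct) * penalty

    print(expected_guess_score(4, penalty=0.0))  # 0.25: guessing always pays
    print(expected_guess_score(4, penalty=1/3))  # 0.0: penalty cancels guessing
    print(expected_guess_score(3, penalty=1/3))  # ~0.11: one distractor spotted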

Another cluster of forces relates to the time the teacher can and wants to spend on constructing the particular test. MCQs are easy to assess: a hundred tests can be graded in a few hours. However, constructing them takes time, especially writing the distractors. Haladyna[5] concludes that ‘Distractors are the most difficult part of the test item to write’, and similarly McDonald[6] states that ‘good distractors are hard to write’. Moreover, it is the quality of the distractors that often determines the quality of the test; if the distractors are implausible, some of them will be easy to rule out, improving the odds of guessing the right answer. The two issues of reducing guessing and limiting construction time are therefore linked.


Context

If one wants to simplify the task of writing an MCQ test, thereby saving construction time, as well as reduce the risk of students gaining extra points by guessing, the Chaotic Multiple Choice Test may be suitable. As with any MCQ test, it is fast to assess.


Solution

The solution to the above issues builds on four design choices in combination:

– the correct answers are unevenly distributed throughout the test, so that a question may have zero, one, several, or all of its alternatives correct

– there is a penalty for picking an incorrect answer

– the ratio of correct answers is comparatively high, circa 50%

– students get points for each correct answer they find, rather than one point per correctly answered question.

By distributing correct answers unevenly, one muddles the odds for the test-wise student. Not knowing whether a certain question has zero, one or more correct answers eliminates a number of guessing strategies, such as ruling out two options that say essentially the same thing. Similarly, using penalties reduces guessing. However, none of these strategies simplifies the construction of the test. Allowing more correct answers does, but it improves the odds when guessing and should therefore be combined with penalties. Combined with an uneven distribution, the effect is twofold: guessing is reduced even further, and constructing the test is simplified, since the uneven distribution allows some slack – if it is hard to write distractors for a certain question, it may have more correct answers, and vice versa.
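
As a concrete illustration of the combined scheme, the following sketch (plain Python; the 0.5 penalty and the one-point-per-answer values are assumptions for illustration, not prescribed by the pattern) scores a single question by awarding one point per correct alternative marked and deducting a penalty per distractor marked.

    # Score one question: +1 per correct alternative marked, minus a penalty
    # per distractor marked. The 0.5 penalty is an illustrative assumption.
    def score_question(marked: set, correct: set, penalty: float = 0.5) -> float:
        hits = len(marked & correct)      # correct alternatives found
        misses = len(marked - correct)    # distractors picked
        return hits - penalty * misses

    correct = {"a", "c", "d"}                        # 3 of 5 alternatives correct
    print(score_question({"a", "c"}, correct))       # 2.0: two found, no penalty
    print(score_question({"a", "b", "c"}, correct))  # 1.5: one distractor picked
    print(score_question(set(), correct))            # 0.0: skipping costs nothing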


Preparations

It can be a good idea to go through the material beforehand, writing down possible questions, their correct answers and possible distractors. This will indicate how many questions the test should contain, how many alternatives each question should have, and what ratio of correct answers to distractors may be suitable. Decide upon the following parameters (a worked example follows the list):

– the number of questions

– how many alternative answers each question should have

– the ratio between correct and incorrect answers – note that when the ratio of correct answers increases, so does the need for penalties

– how high the penalty for selecting a distractor should be.
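
A minimal sketch of how these four parameters interact, assuming illustrative values (20 questions, four alternatives each, a 50% ratio and a 0.5-point penalty; none of these values are prescribed by the pattern):

    # Illustrative parameter choices; no value here is prescribed by the pattern.
    n_questions = 20
    n_alternatives = 4     # alternatives per question
    correct_ratio = 0.5    # circa 50% of all alternatives are correct
    penalty = 0.5          # points deducted per distractor picked

    total = n_questions * n_alternatives
    n_correct = round(total * correct_ratio)
    print(f"{n_correct} correct answers, {total - n_correct} distractors to write")
    # -> 40 correct answers, 40 distractors to write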


A couple of days before the test, students should be introduced to this somewhat unusual approach so that they do not make the mistake of marking exactly one answer per question, as in a standard MCQ. Here, it is important to demonstrate what a distractor can be like, e.g. one where only half of the statement is true.


Making the test

Then prepare the test. First, randomly distribute the order of correct and incorrect answers, making sure that you get the intended number of each. Then populate the test with questions and with correct and incorrect answers in that order. Write instructions stating that the distribution is uneven, what the ratio of correct answers is, and what the penalty is. Lastly, clarify that students do not have to pick as many alternatives as there are correct answers.
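
One straightforward way to carry out the distribution step is sketched below (plain Python; the parameter values continue the illustrative example above). The point is that the author knows how many correct answers each question needs before writing any content.

    import random

    n_questions, n_alternatives, n_correct = 20, 4, 40  # as in the earlier sketch

    # One boolean slot per alternative: True = correct answer, False = distractor.
    slots = [True] * n_correct + [False] * (n_questions * n_alternatives - n_correct)
    random.shuffle(slots)

    # Deal the shuffled slots out question by question, then write content to fit.
    for q in range(n_questions):
        pattern = slots[q * n_alternatives:(q + 1) * n_alternatives]
        print(f"Q{q + 1}: write {sum(pattern)} correct answer(s), "
              f"{n_alternatives - sum(pattern)} distractor(s)")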


Support

Source

The source for the pattern Chaotic Multiple Choice Test is the design narrative ‘Adding a twist to the multiple choice test’.[7][8]


Theoretical justification

Strictly speaking, this test is a form of the Multiple True-False (MTF) format, since it features several correct and several incorrect answers to each question, albeit evenly distributed in the format as described by Haladyna[9]. Haladyna also describes the Alternate Choice format[10], where each question has only two alternatives, one correct and one distractor, stating that one of its advantages is that it is easy to write, since one ‘…only has to think about a right answer and one plausible distractor.’[11] In both cases Haladyna comments that the 50% chance of guessing the right answer needs to be countered, suggesting an adjusted grading scale. Similarly, Osterlind[12] describes the very similar True-False tests, stating that they are criticized for their 50% chance of guessing the correct answer.

So, why use an uneven distribution rather than simply adjusting the grading scale? Firstly, an adjusted grading scale in these 50/50 cases is in effect compressed to the 50-100% band, which can leave very small differences between grades, especially with few questions or with a grading scale that has many pass grades. That can in turn be countered by having many questions, but then the time for constructing the test increases again. Using penalties seems the better option in that respect.
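
To see how little room the compressed scale leaves, consider a minimal sketch (plain Python; the four pass grades and the ten-question test are illustrative assumptions): rescaling so that the 50% guessing baseline maps to zero squeezes each grade band into 12.5 raw percentage points.

    # Rescale raw scores so that the 50% guessing baseline maps to zero.
    def adjusted(raw: float, chance: float = 0.5) -> float:
        return max(0.0, (raw - chance) / (1 - chance))

    # Assuming four pass grades, each grade band spans only 12.5 raw
    # percentage points -- on a ten-question test, barely one question.
    for raw in (0.5, 0.625, 0.75, 0.875, 1.0):
        print(f"raw {raw:.1%} -> adjusted {adjusted(raw):.0%}")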

As for guessing, Scharf and Baldwin[2] discuss the mathematics of MCQs, i.e. the odds and possible outcomes, comparing three formats: no penalties, penalizing incorrect answers, and penalizing incorrect or missing answers. They state that the first encourages guessing and that the last has the least justification, whereas penalizing incorrect answers ‘on average, penalizes blind guessing, although a partial knowledge lessens the negative impact on a student’s final mark’ (p. 17). According to Biggs[13], one of the problems with the ordinary multiple choice test is that simple strategies can be applied, such as avoiding jargon-ridden alternatives in favour of long alternatives. Scharf and Baldwin[2] similarly comment that the odds can be improved by ruling out some of the alternatives. Using meta-analysis strategies rather than actually learning the content can be countered by taking great care in writing the distractors, but as Haladyna[5] states, ‘Distractors are the most difficult part of the test item to write. A distractor is an unquestionably wrong answer. Each distractor must be plausible to test takers…’ This suggests that allowing a higher ratio of correct answers would simplify test construction.


References

  1. Lundgren, S. (2014). Pattern: Chaotic Multiple Choice Test. In Mor, Y., Mellar, H., Warburton, S., & Winters, N. (Eds.), Practical design patterns for teaching and learning with technology (pp. 301-304). Rotterdam, The Netherlands: Sense Publishers.
  2. Scharf, E. M., & Baldwin, L. P. (2007). Assessing multiple choice question (MCQ) tests – a mathematical perspective. Active Learning in Higher Education, 8(1), 31–47.
  3. McKeachie, W. J. (2002). Teaching tips: Strategies, research and theory for college and university teachers (p. 81). Boston, MA: Houghton Mifflin Company.
  4. Biggs, J. (2003). Teaching for quality learning at university: What the student does (2nd ed.). Maidenhead, UK: Open University Press.
  5. Haladyna, T. M. (2004). Developing and validating multiple-choice test items (p. 69). London: Routledge.
  6. McDonald, M. (2002). Systematic assessment of learning outcomes: Developing multiple-choice exams (p. 95). Burlington, MA: Jones & Bartlett Learning.
  7. Lundgren, S. (2014). Design Narrative: Adding a Twist to the Multiple Choice Test. In Mor, Y., Mellar, H., Warburton, S., & Winters, N. (Eds.), Practical design patterns for teaching and learning with technology (pp. 251-254). Rotterdam, The Netherlands: Sense Publishers.
  8. Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.
  9. Haladyna, T. M. (2004). Developing and validating multiple-choice test items (pp. 81-84). London: Routledge.
  10. Haladyna, T. M. (2004). Developing and validating multiple-choice test items (pp. 75-77). London: Routledge.
  11. Haladyna, T. M. (2004). Developing and validating multiple-choice test items (p. 76). London: Routledge.
  12. Osterlind, S. J. (1997). Constructing test items: Multiple-choice, constructed-response, performance and other formats (2nd ed., p. 223). Boston, MA: Kluwer Academic Publishers.
  13. Biggs, J. (2003). Teaching for quality learning at university: What the student does (2nd ed., pp. 180-181). Maidenhead, UK: Open University Press.