C) Multiple Choice can be done right


Following up on Monday’s post for my fellow Instructional Designers about how to score assessments, today let’s look at how to make the multiple choice kind consistent and halfway decent.

Now, some people claim that multiple choice test questions are always a bad idea. But just because a tool can be misused doesn’t make the tool itself bad. I work in Learning & Development (the field formerly known as Corporate Training), and I find that multiple choice assessments can be very valuable when done right. And they happen to be the standard in most forms of testing and surveying anyway.

So since we use them, let’s at least use them correctly, shall we?

Here’s a quick summary of my two decades of experience on how to do that:
  • Have 3 answer options. No fewer, no more (and NO True/False). There’s plenty of science about why this is best, but most people like to ignore it.
  • Link every question to a specific learning objective or behavioral outcome. Know exactly what you are measuring, why, and how.
  • Expect the question and correct answer to take 20% of our design time, and writing effective distractors that actually test what we want to measure to take the other 80%. Don’t skimp on this; it will show.
  • “All of the above” and “None of the above” are a sign that we’re cutting corners. Take a break, come back, do better. We don’t need to do this in one sitting.
  • Be clear and brief. If we can subtract a word and still make sense, do it. The fewer words there are, the better we measure our actual objective instead of the learner’s reading comprehension. Subject Matter Experts will tend to add language, but we are the Learning & Development Experts, so it’s our job to advocate for our Learners by subtracting it.
  • Randomize both the questions and the answers whenever appropriate, and draw questions from a larger bank (or multiple question banks, if we want to get fancy). Make it obviously more work to cheat than to learn, and people won’t try to cheat. And if they do, they’ll end up learning the content even better as a byproduct, and at that point who really cares?
  • Have a simple scoring standard and communicate it clearly. If it’s an 80% pass/fail threshold, give the Learner a multiple of 5 questions (10, 15, 20…). If it’s a 75% threshold, use a multiple of 4 (8, 12, 16…). Tell Learners upfront how many questions they can miss and still pass; this reduces their anxiety and our support needs.
  • Don’t repeat. If we have knowledge checks, quizzes, pre-assessments, or other not-the-final-exam-yet kinds of evaluations, don’t use the same (or worse yet, very similar sounding) questions in the graded exam. It is disorienting to Learners, and adds risk without adding any return.
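That threshold arithmetic is easy to get wrong when picking a question count, so here’s a quick sanity-check sketch in Python. (The `allowed_misses` helper is purely hypothetical, not part of any authoring tool; it just demonstrates why multiples of 5 suit an 80% threshold and multiples of 4 suit 75%.)

```python
from fractions import Fraction

def allowed_misses(num_questions: int, threshold: Fraction):
    """Return how many questions a Learner may miss and still pass,
    or None if the threshold doesn't divide this count evenly."""
    misses = num_questions * (1 - threshold)  # exact fraction, no float rounding
    return int(misses) if misses.denominator == 1 else None

# 80% threshold: multiples of 5 give a whole number of allowed misses.
print(allowed_misses(10, Fraction(4, 5)))  # miss up to 2 of 10
print(allowed_misses(12, Fraction(4, 5)))  # 12 questions -> awkward 2.4, so None

# 75% threshold: multiples of 4 work cleanly.
print(allowed_misses(8, Fraction(3, 4)))   # miss up to 2 of 8
```

If the helper returns None, the question count forces a fractional pass mark, and Learners end up doing mental gymnastics instead of taking the test.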

Write your test first
Additionally, I’d recommend that we always write all assessments first. Only once these measurements are approved by stakeholders and vetted by SMEs should we bother creating the rest of the content that supports them. Yes, it feels a little backwards to people, but in practice it’s just easier that way. It focuses the content on business outcomes and prevents scope creep in our project.

If anyone bugs us about teaching to the test, we can remind them that valuable supplemental resources can be made available via other means, separate from any assessment components. This makes it easy for Learners to differentiate “nice to know” from “need to know”: if it’s needed, it’s already on the test.

Now for a sample! Please use the comments box below to post your best answer. Feel free to add additional feedback as you see fit.

This blog post has been:
A) Extremely valuable. I can implement it immediately.
B) Okay, but a bit long. I only skimmed it, really.
C) Not very helpful to me. I was expecting something different and/or thought this was just bad advice.
