Training Question Standards, p1

Whenever I talk about using questions to assess understanding of training content, I get some familiar pushback:
  • “Why are you only talking about multiple choice? Those aren’t nearly as effective as open-ended!” I agree completely. Yet multiple choice questions do exist in the world, and I’d like to talk about those and fill-in-the-blanks first. We have to start somewhere, so I’m starting with the most common forms of assessment questions that currently exist in eLearning.
  • “Why are you only talking about in-course questions? The real assessment is against performance!” Yes, I do agree. But before we send people back out into the world to assess their performance, we need to know that they understood the content properly. If they didn’t, the performance impact will likely be negative, not positive.
  • “Why are you locking down how questions work? This is an art, not a science!” Absolutely, great questions are a bit of an art. And as any accomplished artist will tell you, they worked to become good at what they do. No matter their innate talent, they likely studied, and they definitely practiced. That’s all I’m getting at here.

Got other objections? Please use the comments below so I can speak to those too :)

Design Sequence

Alright. So, here’s a general Instructional Design Sequence that I’ve used more often than not in the last 20 years in my work in Learning & Development:
  1. determine metrics driving training & define target audience
  2. write well-formed learning objectives that align to metrics & audience
  3. write assessment with at least one question per learning objective
  4. test that the assessment is mathematically valid against target audience & metrics (see the item-analysis sketch after this list)
  5. write content that helps audience meet learning objectives
  6. test content & assessment against target audience
  7. refine, document, release, & promote/assign training to target audience
  8. revisit metrics, audience definition, promotion/assignment triggers, objectives, assessments, and content in review cadence as documented
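
For step 4, here’s what the math can look like in practice. The sketch below computes two classical item-analysis numbers per question: difficulty (the share of learners who got it right) and an upper-lower discrimination index (how much better the strongest learners do on a question than the weakest). The function names and the 0/1 response format are my own assumptions for illustration, not taken from any particular authoring tool.

```python
# responses: one row per learner, one 1/0 entry per question
# (1 = answered correctly). Names and data shape are illustrative.

def item_difficulty(responses, q):
    """Share of learners who answered question q correctly."""
    return sum(row[q] for row in responses) / len(responses)

def discrimination_index(responses, q, fraction=0.27):
    """Upper-lower discrimination: how much better the top-scoring
    learners do on question q than the bottom-scoring group."""
    ranked = sorted(responses, key=sum, reverse=True)
    n = max(1, round(len(ranked) * fraction))
    upper, lower = ranked[:n], ranked[-n:]
    return (sum(row[q] for row in upper) - sum(row[q] for row in lower)) / n

# Example: five learners, three questions.
responses = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
]
for q in range(3):
    print(f"Q{q+1}: difficulty={item_difficulty(responses, q):.2f}, "
          f"discrimination={discrimination_index(responses, q):.2f}")
```

A question that strong and weak learners answer correctly at the same rate (discrimination near zero) isn’t telling you anything about mastery, and is a candidate for rewriting when you refine in step 7.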

Learning Objectives

The best Learning Objectives are ones that can be measured by the person’s ACTIONS. Understanding and Awareness are always weaker targets: when a person does understand or is aware, how can anyone tell? A measurable action likely corresponds to a metric that the organization is already tracking. If so, find it and align your objective to it. If not, consider dropping the objective altogether. Yes, really. If we don’t get things right in the objective, nothing that builds upon it will work. There is also the option to introduce a new metric that the business was not tracking previously. That’s a slippery slope; though it can work in theory, I’ve yet to see it last more than a few months in practice.

Question Design

Always use as few words as possible while keeping your question, answers, and feedback clear. Avoid anything ambiguous or open to multiple meanings or interpretations. We’re not testing reading comprehension, so let’s eliminate anything unnecessarily tricky. This isn’t easy, but the better you know your audience, the easier it gets.

Always ensure that each question maps to at least one learning objective. Otherwise, what ARE you testing for? I like to restate the Learning Objective in full in a hidden or commented-out area in the question itself. Whenever these lose parity, I know that either the question needs to change or the LO does. Later in my career I took to tracking these things in text files, as with the Digital Learning Asset Framework, but for some reason I can’t fathom, plaintext files seem to intimidate people. I think they’re about as unintimidating as a little file can be! But because the wide majority of Instructional Designers just don’t want to use them, I’ve largely stopped recommending them. Still use them myself, though ;)
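
To make the plaintext idea concrete, here’s a minimal sketch of the kind of parity check I mean. The Q:/LO: record layout is an invented stand-in, not the Digital Learning Asset Framework’s actual notation; the point is only that plain text makes this trivial to automate.

```python
# The "Q:" / "LO:" record layout below is an invented stand-in, not
# the Digital Learning Asset Framework's actual notation.

questions = """\
Q: How many planets are in our solar system?
LO: Recall the current count of planets in our solar system.

Q: Name a planet visible to the naked eye.
LO:
"""

def check_lo_parity(text):
    """Flag any question record that has lost its learning objective."""
    for record in text.strip().split("\n\n"):
        fields = dict(line.split(":", 1) for line in record.splitlines())
        if not fields.get("LO", "").strip():
            print("Missing LO for:", fields.get("Q", "").strip())

check_lo_parity(questions)
# -> Missing LO for: Name a planet visible to the naked eye.
```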

Please make inclusivity your default. We don’t need to pick fights on behalf of the structurally disadvantaged, but we should at least be alert to stereotypes and avoid reinforcing them. How? First, plan to test your content with people who don’t look like you, and listen to them. Some handy tricks I use in writing questions are reversing status roles, changing gender pronouns, and challenging other common biases. Your organization may or may not have a strong appetite for this, so be ready to show that your inclusivity does not detract from the learning objective and that it makes the learning more effective for the people who need it. Of course, if you doubt this, you can always test for it.

Using characters in scenarios who look like the target audience in real life makes those scenarios feel more trustworthy. Common names that work across genders and cultures are also preferred.

Examples: Pat, Sam, Alex, Jo, Sascha

For each assessment, use instructions to clearly communicate the rules and create the expectation of success for the learner.

Example: Up next, a chance to show what you know! This assessment has 5 questions, and you’ll need to get at least 4 correct to pass. If you don’t, you will be able to retake it, but next time you will see different questions and/or a different sequence of answers. Don’t worry, there are no trick questions. Ready?

For multiple choice questions, keep to just three choices/selectors whenever there is one right answer. And please do not ever use “all of the above” or “none of the above”. Again, we are not testing reading comprehension; we are testing content comprehension. We can always do better than lazy choices like these.
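
Here’s a minimal sketch of what that looks like as data: one correct answer, two plausible distractors, and the presentation order shuffled on every attempt so a retake sees a different sequence, as promised in the instructions above. The dictionary shape is illustrative, not any authoring tool’s native format.

```python
import random

# Dictionary shape is illustrative, not any authoring tool's format.
question = {
    "prompt": "Which season shows the most dramatic leaf color change?",
    "correct": "Fall",
    "distractors": ["Spring", "Summer"],
}

def render(q):
    """Return the prompt and the three choices in a fresh random order."""
    choices = [q["correct"], *q["distractors"]]
    random.shuffle(choices)
    return q["prompt"], choices

prompt, choices = render(question)
print(prompt)
for label, choice in zip("abc", choices):
    print(f"  {label}) {choice}")
```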

For fill-in-the-blank questions, do not test spelling or syntax unless that is critical to the specific learning objective. And if you’re wondering if it is, then it likely isn’t.

Example: Sam loves the colors of the trees in _____ [Fall] (testing syntax differentiation between season and action)

Also, we need to include ALL VARIANTS or we risk losing the Learner’s trust. So plan to add multiple correct answers and multiple spellings, or don’t use this question type.

Example: Sam loves the colors of the trees in _____ [Fall, Autumn]
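
A minimal sketch of variant matching, normalizing case and surrounding whitespace so we don’t fail someone for typing style (the normalization rules here are my own choice; tighten or loosen them per your learning objective):

```python
def is_correct(response, variants):
    """True if the response matches any accepted variant, ignoring
    case and surrounding whitespace."""
    return response.strip().lower() in {v.lower() for v in variants}

print(is_correct("  autumn ", ["Fall", "Autumn"]))  # True
print(is_correct("Winter", ["Fall", "Autumn"]))     # False
```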

Note the location(s) of the content that gives the answer in the question’s CL (content location) field; the FI (feedback for incorrect answers) field holds what’s shown on a wrong answer, as in the example below.

Example: How many planets are in our solar system? [8, eight]
CL: our solar system, history of astronomy
FI: That answer may have been right at one time, but as of this writing it is not. Please review the “Our Solar System” section.

Don’t give the correct answer in the incorrect feedback, please just don’t. Instead, rephrase the question or give feedback that shows the best location where the correct answer can be found.

Why? Because this is not a proctored exam. Nobody’s watching.

One very effective strategy your Learners will tend to employ is to attempt the assessment first, usually failing. If the feedback along the way reveals all the right answers, they now largely have an answer key. Also, depending on the tool used to create your eLearning, it may be possible to see all the feedback with a simple right-click and View Source, or by inspecting the page in the browser’s developer panel.

No matter what policies our organizations may have against cheating, if we make it too easy for people to cheat, they will cheat. Answer keys tend to circulate and are hard to kill. What I recommend is making it so that no matter what people do to cheat, they also learn. Then it doesn’t really matter whether they tried to cheat or not. And generally, if it’s more work to cheat than it is to learn, people will take the easier path.

Pass/Fail Structure

Never hardcode navigational language indicating pass/fail state. This language must be dynamic and conditional on the person’s responses, or they could encounter a message that is untrue, causing confusion that necessitates support intervention. I can’t tell you the number of times I’ve seen Instructional Designers get this wrong. It is as inexcusable as it is avoidable. Just don’t do it.
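
As a sketch of what “dynamic and conditional” means in practice, here the results text is computed from the person’s actual score rather than typed into a results slide; the 4-of-5 threshold simply mirrors the example instructions above.

```python
def result_message(correct, total, passing=4):
    """Build the results text from the person's actual score."""
    if correct >= passing:
        return f"You passed with {correct} of {total} correct. Nice work!"
    return (f"You got {correct} of {total}; you need {passing} to pass. "
            "You can retake the assessment whenever you're ready.")

print(result_message(5, 5))  # passing message
print(result_message(3, 5))  # retake message
```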

Last, but not least, create your own secured answer key for every assessment. This is basic, but most Instructional Designers fail to do it. Why? Because it’s a hassle: you have to keep it updated, it takes time, and by that stage there’s always someone pressuring you to make the deadline.

We need to push back on this. Whenever we deliver an assessment without an answer key, we are not creating a solution for the organization, we’re creating a ticking time bomb. This is going to blow up and hurt everyone at some point. I’ve seen it happen, and I’ve even seen entire departments get fired over this kind of thing. Don’t be that person.

How do you notate a plaintext answer key? How do you secure it? I’ll post about that soon.

In the meantime, if you have any comments or ideas or stories to share, please do so in the comments below! Thanks.
