Here’s another rough draft/excerpt from the book I’m trying not to write, tentatively titled “Ground Floor Learning”
In the Learning & Development industry, we usually categorize the training we make based on how it was delivered: “This is an online class, that’s a face-to-face one, this one is on Zoom so we’ll call that one virtual.” Our language for these things is not great because it’s built on delivery modality, something that will always iterate and vary as media does. It’s not pinned down or organized, it never will be, and people like it that way.
What makes much more sense to me is to speak in terms of what we can know about the training experience or training asset we delivered. I break this down in terms of measures.
When we call something learning but fail to take any measure of it, we are essentially bluffing. Whether or not anyone calls that bluff, we need to know that we are bluffing. We may have conveyed information, but we can never know whether learning happened.
When the only measure is taken at the end of a training experience, it is still not possible to say that any learning happened just because training did. Why? Because learning is change, and you can’t measure change by taking a single snapshot anywhere in the process.
The before and the after: this is the basis of all testing, and of all the empirical sciences. Now we can finally start to talk about learning with some degree of honesty. This is not to say that taking a before and after measure is enough, or that doing so will automatically result in learning. But this is the minimum bar that must be cleared to move from the realm of pseudo-science to actual science. _This is the ground floor._
Pre-tests to baseline success
What did people know, or have reasonable confidence they could find out if they needed to, before the training? The answer to this is required to get up to the ground floor of learning. For example, if you take a second-year grade school student and a second-year grad school student and try to teach them the same content, you will get very different results, and those results have nothing to do with you or your content. It’s the audience that makes the training successful or not, and the easy way to account for that is to assess before providing training. We wouldn’t want to judge our training (or ourselves) based on an audience full of idiots, nor can we take credit when geniuses easily understand our content. Gauging where someone’s understanding and experience lie before supplying training allows us to measure their learning, not just their knowledge.
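The pre/post idea can be made concrete with a small sketch. Assuming test scores on a 0–100 scale (my assumption, not the book’s), one common way to express before-and-after change is a normalized gain: the share of a learner’s remaining headroom that was actually closed. The function name and scale here are illustrative, not a prescribed method:

```python
def learning_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized gain: the fraction of available headroom actually closed.

    A learner who starts at 90 and ends at 95 closed half of what was
    left to learn; a learner who starts at 10 and ends at 15 barely moved.
    """
    headroom = max_score - pre
    if headroom == 0:
        return 0.0  # a perfect pre-test leaves nothing to measure
    return (post - pre) / headroom

# Two learners with the same +5 raw change, very different gains:
print(learning_gain(90, 95))  # 0.5 -> closed half the remaining gap
print(learning_gain(10, 15))  # ~0.056 -> same raw change, little gain
```

The point mirrors the text: the raw post-test number alone can’t distinguish these two learners, but the pre-test baseline can.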
Weights of Measures: how people feel vs what people learn
It feels good to feel good. Learning can feel good too, though often it doesn’t. That’s because learning is a form of growth, and growth usually comes with some amount of pain. So while how people feel is a valid indicator of how they feel, it is not a valid indicator of whether they learned, or whether they learned what we were trying to train them on. It’s still good to ask for this “level one feedback,” as it’s known in the industry, but it’s good for entirely different reasons. The reason to ask how people feel is to influence how they feel. It is well documented that reflecting on how we feel about an experience after the fact changes how we feel about it. Yes, I know it shouldn’t work that way. But it does, and it’s totally in bounds to manipulate people like this. We do it all the time, and it’s mostly harmless. What isn’t harmless is treating data about how people feel about learning as if it were a measure of the learning. They are different, and the trouble starts when we confuse the two.
Your thoughts are welcome! The comments below are for you to tell me what you get from this. Here I do want to hear how you feel about what you just read. I’m trying this out. Any feedback is helpful. Thanks!