How do you solve a problem like measuring learning? In schools, the traditional answer has been to give students a test. I’m a firm believer that tests have a time and place within education, but not necessarily in all the times and places that they’re often used. They’re great for giving students the opportunity to see what they know and what they need to work on, giving teachers the chance to see what they need to go over again, and giving schools a feel for how their students are doing. Tests are even great for giving students a deadline to focus on – I know I work better with clear deadlines, especially just before those deadlines. But how good is a test as a measure of learning? As Goodhart’s law warns, when a measure becomes a target it ceases to be a good measure: tests often cause people to study only the test material, neglecting other opportunities for growth. So, tests are good for measuring a bit of what a person has learnt, but not everything. And what about adults in the workplace? If you are a company trying to understand how well your workers are learning, can you just give them a test? Is that a good way to understand if they’re learning?
How to know whether adults are learning in the workplace was a question I encountered during my PhD when working with energy companies. In the energy sector, an accident can mean somebody loses their life, so a lot of resources, time, and money are invested in safety and in learning from incidents. Accidents and near misses are investigated to understand how something similar could be avoided in the future. These insights are generally summarised and sent out in an email blast so that all workers can learn from the mistakes of others. But how could a company know whether people are actually learning from these incident summaries? Would a test be useful in this scenario?
It could definitely have a place, but it’s likely that any test would only make people study the details of that particular incident, which isn’t really the point. Ideally, workers should be learning by reading about an incident, its causes, and then thinking about which bits apply to them. As anyone who has ever seen Bloom’s Taxonomy of Learning can tell you, applying ideas from one place to another is a more difficult type of learning than simply memorising facts (I acknowledge that not all tests are about memorising facts, but it’s tough to write a really good one!). So a test could be useful, but it’s probably not enough on its own.
I’ve now finished my PhD and I still don’t have the ultimate answer for how to measure learning in workplaces. But there are two ideas from outside education that I think could be helpful. The first is the idea of an index. Environmental researchers work with indexes a lot, to put numbers to things that are difficult to quantify. A habitat suitability index, for example, strives to come up with a single number that represents how suitable a location is as a habitat for a particular species, and therefore how likely you are to find that species there. This is done by quantifying different metrics based on location variables, such as geographic surroundings and climate, and on the species’ needs, such as the availability of food and shelter.
Something similar could be useful for learning. Instead of placing all your faith in a single number, like a test score, choose a handful of different metrics and create a learning index. Inputs could include the amount of time dedicated to discussion, the motivation of workers, or the amount of jargon in training slides. Just as in the life sciences, companies could put a number to each of these to get an index of how conducive to learning their materials and methods are, and therefore how likely it is that learning is happening in the workforce.
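To make the idea concrete, here is a minimal sketch of what such a learning index could look like in code. The metric names, value ranges, and weights are all hypothetical, invented purely for illustration – any real index would need metrics chosen for the specific workplace.

```python
# Hypothetical learning index: a weighted average of normalised metrics.
# Metric names, ranges, and weights below are invented for illustration.

def normalise(value, worst, best):
    """Map a raw metric onto a 0-1 scale, where 1 is most conducive to learning."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))  # clamp values outside the expected range

def learning_index(metrics, weights):
    """Combine normalised metric scores into a single 0-1 index."""
    total_weight = sum(weights[name] for name in metrics)
    return sum(weights[name] * score for name, score in metrics.items()) / total_weight

# Raw observations, each normalised onto 0-1:
metrics = {
    "discussion_minutes": normalise(25, worst=0, best=60),   # time spent discussing incidents
    "motivation_survey":  normalise(3.8, worst=1, best=5),   # average survey response
    "jargon_per_slide":   normalise(4, worst=10, best=0),    # fewer jargon terms is better
}
weights = {"discussion_minutes": 0.5, "motivation_survey": 0.3, "jargon_per_slide": 0.2}

print(round(learning_index(metrics, weights), 2))  # a single number between 0 and 1
```

Note that `normalise` lets a “lower is better” metric (like jargon) point in the same direction as the others, so every input contributes to the index on the same 0–1 scale.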
The second idea that I think has some use in this context is bricolage. What’s important for learning will depend a lot on what it is you’re trying to learn and why. As a company, you want to limit time and energy spent collecting, quantifying, and analysing lots of different sorts of data to measure learning (it’s a full-time job for many researchers!). Bricolage is the idea of selectively bringing together the materials you already have to solve a problem. Rather than designing the perfect solution to a problem, you create something from what you have. As my US friends would say, you MacGyver it.
I think this could be a really useful approach for learning. Think about what’s important for creating a suitable learning environment, think about what data you already have that gives you insight into those important factors, and create an easily updated baseline. Especially in a world where every click and hover on your computer can be recorded, this method becomes increasingly feasible. There are likely to be important things that are not captured in the numbers you have available, but at least it’s scalable and gives you a starting point for where to look a bit deeper. The trick is not to fall into the trap of believing your data tells the complete story and represents reality.
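As a sketch of what this bricolage might look like in practice: suppose a company already has logs of how often incident-summary emails are opened and scores from occasional safety quizzes. The data sources and field names here are entirely hypothetical, but the pattern – stitching together whatever numbers already exist into a rough per-worker baseline – is the point.

```python
# Bricolage sketch: build a rough learning baseline from data a company may
# already have. All data sources and values here are hypothetical examples.

from statistics import mean

# Pretend exports from existing systems:
email_opens = {"alice": 12, "bob": 3, "carol": 9}        # incident-summary emails opened
quiz_scores = {"alice": 0.9, "bob": 0.6, "carol": 0.8}   # scores from occasional quizzes

def baseline(opens, scores, emails_sent=12):
    """Per-worker snapshot averaging whatever 0-1 signals are at hand."""
    snapshot = {}
    for worker in opens:
        open_rate = opens[worker] / emails_sent  # fraction of summaries opened
        snapshot[worker] = round(mean([open_rate, scores[worker]]), 2)
    return snapshot

print(baseline(email_opens, quiz_scores))
```

Nothing here claims to measure learning directly; it is a cheap, easily rerun snapshot that flags who or what might deserve a closer look.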
Measuring intangible things like learning is not a problem unique to education. Outside of education, several approaches are used that, with some modification, could offer insight into what is happening and let you adjust accordingly. Maybe one of these approaches – or a combination of them! – can have the final say on whether people are learning.