Ever hang out with a scientist just to learn something new? Whenever I get the chance, I listen to scientists at Caltech talk about their research. This adds new knowledge to my mental database, and with some reflection, I discover new connections to the world of learning and development.
Historically, many ideas start out as observations that then lead to theories. At times, theories are hard to prove because the tools and technologies needed for thorough testing and validation are not yet available. A good example is how we understand viruses. Just a few years ago, it was difficult to demonstrate how viruses mutate. Today, high-powered microscopes let us see the intricacies of how viruses behave.
Another example is the Hubble telescope. This mighty eye in space helps scientists to see billions of galaxies and stars, which they couldn’t observe before. This aids them in building and proving theories.
My point? Proving a theory requires the right measuring tools.
Follow Your Intuition
Microlearning is a new concept and theory to many of us. Intuitively, we know microlearning works. Intuition tells us that learners and workers want shorter study time and prefer practical answers that help them in their jobs.
Even though our intuition is correct, it's difficult to find a method to measure the impact of microlearning. If we could measure that impact, we would have proof that it works. Here are some ideas I have tried and tested; I hope you can use a few of them.
Looking for Application Points
Microlearning aims to help learners and workers apply answers to problems at work; it is not about memorizing content. Naturally, we need to design or use tools that measure application at work, or what I call "application points."
Traditional training aims for recall and practice. This approach creates a learning bubble. Its measurement tools are memory tests and surveys, which measure static conditions. In microlearning, we are trying to measure dynamic application points. We live in a dynamic world, not a static one.
A large group of manufacturing facilities was installing a massive new software system to digitize data. Engineers, frontline team members, technicians, customer service teams, and others had to use a portion of the software to digitize data. Each department went through group in-person training and webinars to launch the project, and a post-launch session was conducted six months later to follow up on the change. The problem was that no learning or reinforcement was provided during the six intervening months.
Capturing Application Points
In this case, we experimented with identifying the application points and capturing the impacts of microlearning.
Measure the Usage of the Microlearning Answers
Easy-to-access, searchable microlearning instant answers were deployed. Managers, supervisors, technical teams, and end users readily knew where to find the microlearning lessons. Microlearning was positioned in the flow of work, very close to the use of the software application, as pop-ups, suggested lists, troubleshooting tips, and FAQs.
Within six months, we captured the usage data, including number of clicks, length of time, and repeat use of the microlearning answers.
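For readers who want to see what this kind of usage tracking might look like in practice, here is a minimal sketch. The log format, field names, and answer titles are all hypothetical assumptions, not the system described in the case study; the idea is simply to aggregate clicks, time spent, and repeat use per microlearning answer.

```python
from collections import defaultdict

# Hypothetical usage log: one record each time a worker opens a
# microlearning answer (user id, answer id, seconds spent).
usage_log = [
    {"user": "u1", "answer": "reset-password", "seconds": 45},
    {"user": "u2", "answer": "reset-password", "seconds": 30},
    {"user": "u1", "answer": "reset-password", "seconds": 20},
    {"user": "u3", "answer": "export-report", "seconds": 90},
]

def summarize_usage(log):
    """Aggregate clicks, total time, and repeat use per answer."""
    stats = defaultdict(lambda: {"clicks": 0, "seconds": 0, "users": set()})
    repeats = defaultdict(set)   # users who opened an answer more than once
    seen = set()                 # (user, answer) pairs already encountered
    for rec in log:
        s = stats[rec["answer"]]
        s["clicks"] += 1
        s["seconds"] += rec["seconds"]
        s["users"].add(rec["user"])
        key = (rec["user"], rec["answer"])
        if key in seen:
            repeats[rec["answer"]].add(rec["user"])
        seen.add(key)
    return {
        answer: {
            "clicks": s["clicks"],
            "total_seconds": s["seconds"],
            "unique_users": len(s["users"]),
            "repeat_users": len(repeats[answer]),
        }
        for answer, s in stats.items()
    }

summary = summarize_usage(usage_log)
print(summary["reset-password"])
# → {'clicks': 3, 'total_seconds': 95, 'unique_users': 2, 'repeat_users': 1}
```

A real deployment would pull these records from the learning platform's analytics export rather than an in-memory list, but the same three signals (clicks, time, repeat use) are what the leaders in this case study reviewed.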
After reviewing the data, the leaders and managers concluded that microlearning helped in "filling the gaps" of application. Furthermore, "there would be more errors, complaints, and slow adoption if the microlearning answers were absent."
Measure Using Frequent Snapshot Surveys
Part of probing the impacts of application points in microlearning is to frequently ask team members to provide impact reports. We used a very short snapshot survey asking these questions:
Although the responses were far fewer than from the usage measurements, they were very insightful. Leaders were able to identify instances at work where users were passing along alerts and Slack messages with comments like "this worked for me, try it."
Measure the Reduction of Customer Support Calls
Part of the software launch was assigning a few call support team members to respond to issues with the new software. After six months, these were the findings:
Microlearning is ideally used in parallel with actual work, for example, the launch of the software. Application points provide good opportunities to gather data, and the measurement differs from traditional testing or evaluation in training programs. The results of microlearning show up at the application points, the areas where users and workers apply the answers to solve specific work issues.
So, the next time you look up at the stars, think about ways you can incorporate microlearning measurements with application points in your projects. As Jack Horkheimer would say, “Keep looking up!”
3-Minute eLearning Book by Ray Jimenez, Ph.D.
Ray Jimenez, PhD
“Helping Learners Learn Their Way”