When it comes to rolling out new learning solutions, it all starts swimmingly, right? You’ve identified a knowledge gap – perhaps a course on mental health in the workplace – and you get great attendance.
All trainees successfully complete the two-hour session and return glowing happy sheets, full of capitalised praise. But once everyone is back at work, and just before the back-patting ensues, you’re asked (by the COO, no less): where’s the proof that it worked? What’s our return on investment?
Ah, well, there was a need and we swiftly addressed it. Plus, everyone attended, didn’t they? Job done.
Not quite. At least not for discerning stakeholders who expect more than Level One of the Kirkpatrick Model (learner reaction). There’s arguably value in gathering immediate reactions: it helps you to understand how engaged learners are, assess the trainer and facilities, and highlight early issues in a multi-stage programme.
But this data alone leaves you far from the comprehensive evaluation and proof of success needed. As Paul Matthews puts it, ‘The quality of the wedding ceremony does not predict the quality of the subsequent marriage.’
L&D is increasingly expected to demonstrate tangible impact on business outcomes, and to deliver this you need to elevate your evidence game. The challenge is, how? Especially when expectations differ across the board and there’s little consensus on what constitutes results.
We’ve come up with four key principles to keep in mind when seeking to demonstrate value that satisfies stakeholders. Plus, how to handle a few specific asks across financial, behavioural and cultural returns.
Let’s rewind a bit – to maximise your chances of demonstrating significant ROI, you first need to investigate whether a learning solution is really the answer. This requires adopting a performance consultant’s lens instead of just taking orders, so you can properly assess the situation. How did the need arise and what data supports it? Is the requested scale warranted and necessary? You might discover that a communication or operational solution is better placed, or at least needs to form part of the approach.
Once you’re clear on the driver and what success looks like, you can confidently launch a results-focused plan.
Action Mapping (AM) can be really valuable here (and speaks to the next principle). Rather than an all-encompassing information dump, this approach encourages starting with the required business change and a measurable goal.
What actions are vital to prompting change and, for these to happen, what knowledge is essential? It also helps to identify potential barriers to be addressed, such as a lack of skills or a lack of incentive. AM also goes beyond this into consideration of practice activities, and is a great way of starting out with an ROI anchor.
When comprehensive evaluation is relevant and valuable, you need to build it in from the very beginning. An early conversation with your key stakeholders should include what they consider to be proof of impact – what data, metrics or changes are they anticipating and looking for? These priorities are integral to project planning and will help you to start with the end in mind. Awareness of the results required informs your assessment choices (regular quizzes, line manager observations etc.) and timescales. When, and for how long, will you be checking in? When are results expected?
If you design learning with the desired outcome(s) at the fore, you can map out routes to ROI – in effect, a toolkit of key performance indicators, data sources, qualitative measures, whatever you need to determine results. It also highlights what to benchmark pre-launch so you can showcase the difference your solution has made.
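For the benchmarking piece, the comparison itself is simple arithmetic – the discipline lies in capturing the baseline before launch. As a minimal sketch (the KPI and figures here are hypothetical, purely for illustration):

```python
def percent_change(baseline: float, current: float) -> float:
    """Percentage change of a KPI against its pre-launch baseline."""
    return (current - baseline) / baseline * 100

# Hypothetical KPI: monthly stress-related absence days, captured before
# launch (baseline) and again after the programme has bedded in.
baseline_absence = 48.0
post_launch_absence = 36.0

print(percent_change(baseline_absence, post_launch_absence))  # → -25.0
```

A negative result here is the good news story – a 25% drop against the pre-launch benchmark – which is exactly the kind of difference the toolkit of KPIs is there to surface.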
We can all, consciously and unconsciously, default to seeking out information which supports what we already assume and believe. And naturally, we want to succeed and find evidence which implies we have.
Starting out with a hypothesis and confidence in your solution is positive, but you have to remain open to finding evidence to the contrary. If left unchecked, confirmation bias can harm your ROI gathering and potentially your long-term credibility. Your data, correlations and findings should stand up to thorough interrogation and attempt to represent the full story, with an emphasis on quality over quantity. If there are limitations or influencing factors at play, these should be acknowledged, explored and expressed.
The Phillips ROI model is (in part) focused on how to isolate the training impact amongst contributing elements – not always a straightforward task, but a necessary one nonetheless. That is, if your ambition is to be a trusted business partner with a growth mindset, who sees missteps, and improving on them, as integral to the journey of getting it right.
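Once the benefit has been isolated and monetised, the Phillips-style calculation itself is straightforward: net programme benefits as a percentage of programme costs, often reported alongside the benefit-cost ratio. As a minimal sketch (the figures are invented for illustration):

```python
def roi_percent(benefits: float, costs: float) -> float:
    """Phillips-style ROI: net programme benefits as a percentage of costs."""
    return (benefits - costs) / costs * 100

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """Benefit-cost ratio (BCR): monetised benefits per unit of cost."""
    return benefits / costs

# Illustrative only: a programme costing £20,000 credited with £50,000
# of isolated, monetised benefit.
print(roi_percent(50_000, 20_000))         # → 150.0 (i.e. 150% ROI)
print(benefit_cost_ratio(50_000, 20_000))  # → 2.5
```

The arithmetic is the easy part; as above, the credibility of the result rests entirely on how rigorously the benefit figure was isolated from everything else going on in the business.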
Chances are, the expected ROI won’t be a one-off, nice and neat tick box after the initial delivery. Stakeholders may be interested in sustainable, long-term results – the gift that keeps on giving. To demonstrate ROI on a continued basis, you need to consider (as part of designing the solution) whether creating ongoing opportunities for application and refreshment is necessary and, if so, what they look like. Further touchpoints may add to the overall costs, but they can also be integral to releasing the full benefit of the programme.
Some experimentation may be required, especially with a brand spanking new initiative, around how regular nudges and analysis of results should be. A popular strategy with learning on this scale is a blended campaign – where two or more different learning approaches (media, modes etc.) work symbiotically, be it to deliver separate aspects or to reinforce one another. This offers myriad efficient ways to engage with the subject, combat the limitations of any single method, and keep learning interesting.
Whatever route you take, just be mindful: if demonstrating continued ROI and retention is crucial, you’ll need a plan for how to maintain and measure training impact over time.
We hope you have enjoyed this article – keep a lookout for part 2, which will cover ROI challenges within L&D.