In part 1 we looked at the four key principles of demonstrating ROI to stakeholders. In part 2 we will look at common challenges you may face and possible strategies and solutions. We also have a final thought for you about how your ROI may be affected by your learning platform.
ROI Challenges with Learning and Development
Expectation: “I’m ultimately interested in the financial benefit.”
Strategy: Assuming profit is integral to the objectives and outcomes, find out the particulars here. For example, if they’re after a specific or ballpark number or percentage, especially one that’s a deal breaker, you need to know. It will allow you to manage expectations from the off and factor this into the design, scale and duration.
It’s valuable to unpick their conclusion – how has it been calculated? What needs to happen to achieve it? Is it purely aspirational without a solid foundation, or is it grounded in a sound financial approach?
This might prompt in-depth cost analysis, spanning less obvious and hidden spends such as time spent building or attending, and potentially the trickiest – lost revenue due to time in training (such as in a sales role). If monetary expectations just aren’t feasible given the context or available resources, at least your inevitably tricky conversation will be steeped in evidence and can set out what is actually achievable.
Also, put some work into justifying the required costs once calculated – why is an external subject matter expert essential to the objective? Why won’t a one-hour classroom session deliver the same results as five e-learning courses online? Your workings might be subject to intense scrutiny so put them through their paces before presenting.
It’s not easy to (even approximately) calculate how much profit your initiative will generate in advance – learning transfer and application is far from an exact science. But it will help to:
- Get as close as possible to the problem you’re trying to solve and start with an understanding of the cost of not fixing it. Is the problem losing money?
- Start with a small, measurable goal – it will be easier to prove causation, not just correlation, and scale up from a simple trial which shows the initiative has legs.
- Benchmark – does industry research suggest realistic targets, inspire approaches or detail results from similar initiatives?
- Agree on an ROI formula with stakeholders – ensure you’re aligned on the best way to calculate and what’s to be included. A simple formula is subtracting the investment cost from the resulting profit, dividing by the investment cost, and multiplying by 100 to get your ROI percentage.
((Profit – Investment Cost) / Investment Cost) x 100 = ROI%
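To make the arithmetic concrete, here’s a minimal Python sketch of that calculation – note the division by the investment cost, which expresses the return as a percentage of what you spent. The figures are purely illustrative:

```python
def roi_percent(profit: float, investment_cost: float) -> float:
    """ROI as a percentage: ((profit - investment cost) / investment cost) * 100."""
    if investment_cost <= 0:
        raise ValueError("Investment cost must be positive")
    return (profit - investment_cost) / investment_cost * 100

# Illustrative figures: £12,000 of attributable profit from an £8,000 programme.
print(roi_percent(12_000, 8_000))  # → 50.0 (a 50% return on the spend)
```

A 50% ROI here means every pound invested returned £1.50 in total – useful framing when a stakeholder has a “deal breaker” percentage in mind.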
Expectation: “I want to see that the solution is clearly responsible for the outcome.”
Strategy: A/B testing, control group, pilots – these are now your best friends. If time allows, you can try and isolate impact via one of these routes:
A/B testing – run two different variations side-by-side with two otherwise similar groups (the closer the better). Group A has one experience (training initiative, approach, resource etc.) and Group B has another. Collect your data meticulously, then compare and analyse the findings. Ideally, one variation will outperform the other so it’s clear that your input is responsible – providing you with evidence which warrants a larger-scale roll-out. If not, it’s back to the drawing board for some focused ‘why’ time.
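As a rough sketch of what that comparison looks like in practice, the snippet below compares hypothetical post-training assessment scores for two variations and uses a simple Welch’s t-statistic to gauge whether the gap is bigger than the noise within each group (the scores are invented for illustration):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical post-training assessment scores (out of 100) for two variations.
group_a = [72, 68, 75, 80, 71, 77, 74, 69]  # variation A
group_b = [81, 79, 85, 78, 88, 83, 80, 84]  # variation B

diff = mean(group_b) - mean(group_a)

# Welch's t-statistic: difference in means divided by its standard error.
# A value well above ~2 suggests the gap is unlikely to be pure chance.
se = sqrt(stdev(group_a) ** 2 / len(group_a) + stdev(group_b) ** 2 / len(group_b))
t_stat = diff / se

print(f"A mean: {mean(group_a):.1f}, B mean: {mean(group_b):.1f}")
print(f"Difference: {diff:.1f} points, t ≈ {t_stat:.2f}")
```

With samples this small, a formal test (and a proper p-value) is worth running before you commit to a full roll-out, but even this back-of-envelope check helps you avoid over-reading a lucky cohort.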
Control Group – again, select two similar groups, put only one group through the training and measure the results. Did the training have a tangible impact? What’s changed? The control group (the one that’s just carrying on as usual without intervention) acts as your baseline.
Piloting – no groups this time, just a limited sample of learners from your target population, undergoing a brief trial. This type of test has its advantages; it’s handy when time is squeezed, and the intervention is a one-off and/or for a small demographic – it will give you some indication of success. But in terms of whether it will work once scaled up and capturing the long-term effect…it leaves a lot to be desired.
It all comes back to design consideration. What areas/aspects are most valuable to test in advance? What’s the contribution and drawback(s) of each test type in that context? What do you need to confirm in order to confidently move forward?
Arguably, and especially for large-scale and/or expensive projects, all initiatives should include some variety of testing in the given context – launching without any indication of effectiveness is optimistic at best. You might be setting yourself up to fail on several fronts (including ROI) if you’re not following a logical, evidenced thread.
Expectation: “How can we demonstrate improved soft skills or behavioural change?”
Strategy: This one can be a real head-scratcher. We’re often talking about qualities which have no physical form and can’t directly be quantified: emotional intelligence, awareness of unconscious bias, effective decision making (to name but a few). But it can be successfully approached:
Back to Business
Proof stems from the business reason(s) for the training. If you’re training managers in how to handle difficult conversations, then presumably data has exposed the need for it – exit interviews citing poor manager communication or empathy, for example. So, you’ll want to monitor the same attrition data for intervention impact. You could also survey their direct reports before training and then at several stages after to see if the impact is felt and has longevity. The managers themselves could give feedback and examples of how they’ve applied their new skills. In other words, the training requirement was born from a gap – has its delivery closed it?
There’s bound to be more of a lean on qualitative data, but we can also create connections between business outcomes and the intangible. For instance, recognising the role that skills like communication and collaboration play in delivering results and developing others. The spotlight is firmly on agility and resilience at present. We need to join the dots between ability in these areas and business success.
What does improved agility look like in your organisation – quicker or more seamless change management? More flexibility in responding to customer requests? Think about the metrics you’re intending to affect with this training and the supporting data which would evidence them.
Expectation: “I’d like to quantify the impact on our customers.”
Strategy: Another beauty to unpack. What will this impact look like and on what scale? Then – are you tracking the areas you need at the level of detail required? Do you have the capabilities and/or software in place?
As in all scenarios where impact is sought, it’s useful to benchmark – if you know where you’re starting from, you can monitor the difference – that’s at least part of the evidenced value battle (we know causation is another).
If you’re changing a few things at once, or there are other variables at play (e.g. a new sales process or management), then the plot thickens and it might be best to wait for the status quo to re-establish.
So, what do you have at your disposal for understanding how your customers really feel? Their continued business for one – renewing contracts, appetite for throwing more work your way or exploring further avenues of collaboration.
Ad-hoc feedback, wash-ups at the end of projects, structured surveys – you can add new aspects to these conversations to capture their experience, specific to the change. A lot of this will depend on how willing they are to engage in giving feedback beyond just getting the job done (telling in and of itself). Embedded, longer-term clients will likely be more interested in longer feedback sessions, whereas new clients might resent lengthy questionnaires every five minutes. Know your audience – you can always just ask what they’d prefer.
- Impact on your CSAT (Customer Satisfaction Score) might be relevant, especially if it’s the problem you’re targeting. It’s a no-fuss, quick exercise for clients which should result in high response, even when regularly utilised. However, if you’re after the detail or something specific, this approach alone will leave you wanting.
- Your NPS (Net Promoter Score) is another source to consider – which signifies how willing your customers are to recommend your organisation to others – typically on a sliding scale. If the initiative is high-risk or leading into uncharted territory, you might want to start small and measure often, even convincing a small group of loyal customers to be regularly and thoroughly transparent with you during the transition.
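For reference, NPS is conventionally calculated from 0–10 responses as the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6). A short Python sketch, using invented survey responses:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

# Hypothetical survey responses on the standard 0-10 scale.
responses = [10, 9, 9, 8, 7, 10, 6, 9, 4, 10]
print(nps(responses))  # → 40.0
```

Tracking this figure before and after the initiative (ideally on the same customer cohort) gives you a like-for-like before/after comparison rather than a one-off snapshot.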
Consider ROI when choosing your learning solution
Demonstrating return on investment from learning – whether using e-learning or classroom techniques – can be tricky. But by challenging and determining stakeholder expectations early on, it will inform a lot of your decisions when designing a learning solution that will satisfy not only your learners, but also your leadership team.
It may take a little trial and error sometimes, but if you consider the four key principles outlined in part 1, you’ll be off to a great start and be ready for battle.
If you’re looking to improve your learning solutions and would like a chat about what works best for your organisation, feel free to get in touch with Gemma to get the conversation started.