Traditionally in business, assessing the value of anything usually comes down to pecuniary outcomes, with figures used to answer the question: ‘is it worth the investment?’ But when it comes to understanding and appreciating the value of a well-planned L&D strategy, sometimes money isn’t the answer.
When measuring many aspects of business performance, the traditional evaluation method is to assess Return on Investment. ROI is a financial account that, in essence, answers the question ‘was the investment worth it?’
This question suits new products or services that a company rolls out: you want to know whether it’s worth continuing to promote them, and ROI is the ideal evaluation. You can measure the money invested in the new venture and, assuming it’s a chargeable service, work out how much it brings in. But in L&D you’re dealing with people, and people’s skills, goals and aspirations are much harder to put a price on.
That hasn’t stopped people from trying, of course. For many years ROI has been used to assess the value and effectiveness of a company’s L&D strategy, with questions focussing on ‘did the money we spent on training lead to improved outcomes, and how much are those outcomes worth?’
The trouble comes when trying to define what an improved outcome is, and how to measure it in monetary terms. How do you calculate what better leadership is worth to a company? The answer is that you can’t, not accurately at any rate.
What’s the alternative?
When it comes to assessing the value of an L&D plan, ROI probably isn’t going to be the answer, or at least not the whole answer. The alternative technique that many experts in the field support is the use of Return on Expectation.
Whereas ROI often happens at the end of the intervention, ROE starts at the beginning, identifying the target outcomes collaboratively. These outcomes can still be financial but generally are not; instead they focus on less tangible measures, such as behavioural impacts, which are often referred to as “soft” but are still of critical importance to the business.
The approach focusses less on monetary output and instead looks at the alignment between what was desired or expected and the actual result. For example, did attendance at a communications course lead to improved communication in the workplace?
If the answer is yes, then this is usually a fair indicator that the intervention was worth the investment.
As stated above, ROE requires collaboration in advance of the intervention. The trouble is that this demands a more intensive assessment from the very beginning: before the training even begins, all the individuals involved (e.g. the delegate, their line manager, the sponsor and perhaps the training provider) need to explore what they want the outcomes of the training to be.
This can be a little tricky to do, particularly for behavioural skills development, where an individual may not be able to articulate what they want to get out of a course until they’ve attended it. Another difficulty is that the different people involved will likely have different views on what the outcomes should be: a line manager, for example, may feel that an appropriate KPI is higher work output, while the individual may view improved personal awareness as a positive outcome.
It is therefore vital, when considering ROE, that there is a coherent discussion about what the expectations are and which outcomes will be measured for the purpose of evaluation. Similarly, there needs to be a comprehensive discussion about what ‘met expectation’ looks like; again, differing views may make it hard to decide unanimously whether an expectation has been met, particularly if you are asking a number of different line managers to make the assessment.
Factors such as these likely contribute to the reluctance in many cases to use ROE as an evaluation platform. At first glance it can appear time consuming and complex to implement, but this isn’t necessarily the case, and it certainly shouldn’t be a reason not to do it; with the right strategies and support it can be an extremely valuable tool, and having the right evaluation tools in place can make all the difference to a business.
Why does evaluation even matter?
For some organisations, training is simply a formality they need to adhere to in order to remain compliant with regulations; this mandatory training is often a tick-box exercise aimed at relieving the organisation of accountability in case of incident. In this instance, evaluation probably isn’t high on the priority list: the key measure is that the right information has been provided to the employee at the right time.
Fortunately this is the exception rather than the rule, and the majority of organisations recognise L&D as an integral part of their business strategy. Thus in order to make sure they are investing in the right type of training for their workforce, facilitating development of their people and simultaneously driving change in the business, evaluation is essential.
It is the only way to know whether the strategy is having the desired effect. However, more important than that, evaluation is absolutely critical in generating change, and creating more efficient practices; if you don’t know how well you’re performing in a certain area, how can you make changes to improve?
In L&D, evaluation tools are often dramatically underutilised. They may be used competently to assess whether a training intervention is being delivered well, and to investigate whether an employee is performing better as a result, but the missing link in many cases is using this information to understand how an organisation can create a bigger impact with less investment.
By using the information gained from evaluation, an organisation can make better decisions about its L&D strategy, which, used effectively, will help deliver higher impact for lower investment. If, for example, you are seeing significant behavioural changes in your staff as a result of one-off experiential sessions, compared with moderate change after extensive classroom-based theoretical sessions, that immediately tells you which method is having the bigger impact. It is then a simple matter of maths to estimate which method is the better investment.
It’s important, though, to remember that investment isn’t all about money. Staff time is the most important resource most companies have, and it needs to be factored into the equation. Many businesses neglect this factor in their calculations, but those with a desire for development and competitive advantage will treat committing to a time investment as a positive statement of intent. If a one-off session costs more but takes less time away from the business, and its effects are put into practice more quickly, then that is surely a better investment than a lower-cost approach that takes significantly more time from the business and is slower to be applied.
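The ‘simple matter of maths’ above can be sketched out. The figures, the £40-per-hour staff cost and the 0–10 impact scores below are purely illustrative assumptions, not data from any real programme, but they show how folding staff time into the cost side can reverse which format looks like the better investment:

```python
# Hypothetical comparison of two training formats, factoring in both the
# course fee and the cost of staff time taken away from the business.
# All numbers are illustrative assumptions.

STAFF_HOURLY_RATE = 40.0  # assumed average cost of one hour of staff time (GBP)

def total_investment(course_fee, staff_hours):
    """Money spent plus the value of the staff time committed."""
    return course_fee + staff_hours * STAFF_HOURLY_RATE

# "impact" here stands in for an agreed 0-10 rating of observed behavioural change
formats = {
    "one-off experiential session": {"fee": 2000.0, "hours": 8,  "impact": 8.0},
    "extended classroom course":    {"fee": 1200.0, "hours": 40, "impact": 5.0},
}

for name, f in formats.items():
    cost = total_investment(f["fee"], f["hours"])
    per_thousand = 1000 * f["impact"] / cost
    print(f"{name}: total investment £{cost:.0f}, "
          f"impact per £1k invested = {per_thousand:.2f}")
```

On these assumed figures the experiential session has the higher course fee (£2,000 vs £1,200) but, once 40 hours of staff time are priced in, the classroom course becomes the more expensive option overall and delivers less impact per pound.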
Being able to conduct effective evaluations is vital to the success of any business, and whether it’s being applied in the L&D context or elsewhere, knowing how you and your team are performing gives you the power to drive progression and growth in all areas, and that’s why evaluation is so important.
About the author
Lyndon Wingrove is director of capabilities and consulting at Thales L&D.