The perceived time required to complete ROI exercises, the difficulty of isolating the impact of a specific initiative from other influences and the seeming impossibility of quantifying "soft" outputs are common reasons for putting evaluation exercises on the back burner. Yet such worries can largely be overcome, whilst the level 4 (or "results") outputs in Kirkpatrick's approach can be determined in a single initial step.
Recent thinking on training evaluation has moved away from attempting to hunt down a "magic number", or simply taking a snapshot of cost-benefit returns at a particular moment in time. Rather, emphasis has been placed on understanding what benefits a programme or other initiative delivers over a period of time - the financial, performance and intangible returns - as well as on giving qualified examples of the outcomes and insights into both how and why it is having a direct impact.
Nor should quantifying results stop at considering sponsor expectations, important though these may be, not least because the actual benefits of a particular initiative may not be known at the outset, or may never be made public (objectives in 1:1 coaching often fall into this category).
A simple extension of this thinking can show how the returns contribute to an organisation's key performance indicators and business drivers - in other words, how they impact the bottom line. This has been an important direction in the development of our own approach, albeit an optional element.
Our approach is the result of developing and refining our thinking through several client prototype projects. It began as a response to earlier research into coaching implementations, which revealed that very few organisations could confidently identify the impact of their investments.
We have drawn on a popular analysis model commonly used in management consultancy, and have integrated it with popular strategic planning techniques such as value chain analysis and balanced scorecard contribution. In developing and prototyping the approach, we have kept in mind the need for something that can be learned easily by anyone, is universally relevant and does not make large demands on individuals' time or resources - or take several weeks' effort to produce results!
We've also sought to address many fallacies found in some approaches to impact assessment, such as learning analytics. These are described in our white paper, which may be viewed by clicking here: 'Fallacies and Fixes'.
The core of our approach is to interpret and correlate a range of data sources to establish both the "what?" and the "why?" of a programme's impact, as well as to provide answers to possible devil's advocate questions. These sources may include published performance data, savings which can be costed with confidence, insights drawn from anecdotal information, and sponsor and staff perceptions. The time taken to gather and analyse data is deliberately kept brief, focusing on what can be collected and analysed quickly. The approach doesn't require control group comparisons, or for an initiative to have had pre-defined outcome objectives. Attention is also given to what data is already available, the business processes learning initiatives may relate to, and the availability of individuals to participate in a study.
The approach is fully consistent with Kirkpatrick's thinking, but focuses on minimising both the evaluator's time and that of others. Very few organisations want to have to prove the ROI of an ROI exercise, after all!
Return on investment studies can pay for themselves many times over, allowing crucial decisions to be made on what to continue and what to change (often small tweaks can have a large impact), as well as providing quantified insights into what will be effective for future initiatives. The impacts of these decisions can then be valued to project future returns and savings. Being able to reassure sponsors with more than a good feeling allows funding decisions to be made with confidence. Proving the return on key human capital investments doesn't have to remain an untold story.
Please click here to find out more about how we may be able to help you.
Note: Reference: Donald L. Kirkpatrick, "Evaluating Training Programs: The Four Levels" (second edition), Berrett-Koehler Publishers Inc., 1998. To find out more about Kirkpatrick's "four levels" model and how it relates to our approach, please click here: 'Kirkpatrick Revisited'.
Note: References for statistics and survey-based statements quoted on this page: People Management, March 2008; NESS Survey, Learning and Skills Council, June 2006; Ford and Weissbein, Performance Improvement Quarterly, 1997; PDI, Coaching at Work, March 2008; Human Capital Evaluation, CIPD, Summer 2007.
Note: The term 'Return on Investment', or 'ROI', is usually applied to measuring the pure financial benefits of carrying out a particular course of action, such as a coaching programme. In the case of human capital initiatives, absolute measures aren't always possible or even desirable, whilst it is usually highly relevant to take account of intangible returns such as improved staff motivation and changed behaviours.
We therefore use the term in a general as well as an absolute sense, and may suggest that measuring return on expectations (ROE), achievement of ROI thresholds, or examining opportunity costs is most appropriate. Further information on this topic is given in our article, 'Measuring Return on People'.
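To illustrate the "absolute" financial sense of ROI mentioned above, the classic calculation expresses net benefits as a percentage of costs, and an "ROI threshold" is simply a hurdle rate that this percentage must reach. The sketch below is a minimal, hypothetical example - the figures and threshold are invented for illustration, not drawn from any client study.

```python
# Minimal sketch of the classic ROI formula and a threshold check.
# All figures are hypothetical examples.

def roi_percent(benefits: float, costs: float) -> float:
    """Net benefits as a percentage of costs: (benefits - costs) / costs * 100."""
    return (benefits - costs) / costs * 100

# Example: a programme costing 40,000 yields 100,000 in costed savings.
roi = roi_percent(benefits=100_000, costs=40_000)
print(f"ROI: {roi:.0f}%")  # prints "ROI: 150%"

# An "ROI threshold" is a hurdle rate the result must reach.
THRESHOLD = 100  # hypothetical hurdle, in percent
print("Threshold met" if roi >= THRESHOLD else "Threshold not met")
```

Note that this captures only the costable element of a programme's return; as discussed above, intangible returns such as improved motivation sit outside any such formula.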
Note: References to "learning and development" and "training" on this web page should also be taken to include coaching, mentoring and any type of "soft skills" intervention. The terms "programme" and "initiative" may be taken to refer to an individual course, a full schedule of training modules, or any intervention, initiative or development method (e.g. action learning sets, e-learning and remote learning).