Best Practices
Learning, Measurement & Analytics

How can we achieve effective measures of learning processes? What data can be analyzed in a learning process, and how can we leverage that information to increase students' knowledge and achieve continuous improvement of learning experiences? How can we calculate the ROI of e-learning projects? To get answers to these questions, we interviewed the following ATD 2015 speakers: Wendy Kirkpatrick and Patti Phillips. Jim Kirkpatrick also participates in this report.


ATD 2015 is the premier event for training and talent development professionals, with more than 10,500 attendees from over 80 countries. The conference allows attendees to share insights and best practices with colleagues, network with the smartest people in the profession, and learn from world-renowned thought leaders. The event takes place in Orlando, Florida (United States), May 17–20, 2015.


Wendy Kirkpatrick and Patti Phillips will participate in the ATD 2015 track 'Learning, Measurement & Analytics Speakers Spotlight'.


Dr. Patti Phillips is president and CEO of the ROI Institute, Inc., the leading source of ROI competency building, implementation support, networking, and research. She helps organizations implement the ROI Methodology in over 60 countries. Patti serves as a Principal Research Fellow for The Conference Board and as a board member of the Center for Talent Reporting. She also serves on the faculty of the UN System Staff College in Turin, Italy, and of The University of Southern Mississippi’s PhD in Human Capital Development program. Her work has been featured on CNBC, EuroNews, and in over a dozen business journals.


Dr. Jim Kirkpatrick is a Senior Consultant for Kirkpatrick Partners, a company founded by his wife, Wendy Kirkpatrick, who serves as its President. Jim is a thought leader in training evaluation and the creator of the New World Kirkpatrick Model. Wendy Kirkpatrick is a global driving force behind the use and implementation of the Kirkpatrick Model, leading companies to measurable success through training and evaluation. Jim and Wendy have written three books, including Training on Trial, and have also served as the subject matter experts for the United States Office of Personnel Management’s Training Evaluation Field Guide: Demonstrating the Value of Training at Every Level.


How can we achieve effective measures of learning processes?


Jim and Wendy Kirkpatrick, owners of Kirkpatrick Partners: Training professionals often follow tradition when measuring their learning processes, using methods and tools that are already in place without considering what might be effective. Use usefulness and credibility as your guide in determining what to evaluate, and to what degree.

Useful information is that which is gathered to assist the learning function in ascertaining whether training programs impart the desired level of knowledge and skill to participants, and whether they are suitably enjoyable. Other useful information may relate to the relevance of the material to the participants, and the degree to which they understand the importance of applying it on the job. This data primarily relates to Kirkpatrick Levels 1 and 2.

Information for validating the credibility of the learning function, and of the learning programs themselves, is that which is presented to stakeholders such as executives and the managers of the training participants. Stakeholders are likely more interested in the successful outcomes that occur after the training than in the training program itself. Measures of interest to them typically include performance improvement (Kirkpatrick Level 3) and subsequent business results (Kirkpatrick Level 4), such as reduced errors, increased output, higher sales, or better profitability.
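To make this usefulness/credibility split concrete, here is a minimal Python sketch that groups example measures by Kirkpatrick level and by primary audience. The measure names and the audience mapping are illustrative assumptions drawn from the description above, not an official Kirkpatrick taxonomy.

    # Illustrative grouping of evaluation measures by Kirkpatrick level.
    # The entries are examples drawn from the text above, not a prescribed list.
    KIRKPATRICK_MEASURES = {
        1: {"name": "Reaction", "audience": "learning function",
            "examples": ["enjoyment", "relevance", "perceived importance"]},
        2: {"name": "Learning", "audience": "learning function",
            "examples": ["knowledge gained", "skill demonstrated"]},
        3: {"name": "Behavior", "audience": "stakeholders",
            "examples": ["on-the-job performance improvement"]},
        4: {"name": "Results", "audience": "stakeholders",
            "examples": ["reduced errors", "increased output", "higher sales"]},
    }

    def measures_for(audience):
        """Return the example measures most relevant to a given audience."""
        return [m for level in KIRKPATRICK_MEASURES.values()
                if level["audience"] == audience
                for m in level["examples"]]

    print(measures_for("stakeholders"))
    # -> ['on-the-job performance improvement', 'reduced errors',
    #     'increased output', 'higher sales']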


Patti Phillips, Ph.D., President & CEO, ROI Institute, Inc.: Effective measures of learning begin with a framework that follows a logical sequence. Within this framework are measures important to all stakeholders. The five-level evaluation framework shown below ensures that the evaluation of learning programs results in output important to all decision makers.

© ROI Institute, Inc. The Phillips five-level framework is copyright of ROI Institute, Inc. Used with permission.

The framework also serves as the basis for aligning programs with business needs. This begins with clarifying stakeholder needs at the highest level. In an ideal world, before a learning program is put into place, the learning and development function identifies opportunities for the organization to make money, save money, avoid costs, or do some greater good in an effort to achieve the organization's mission and strategy.

Once the ultimate goal is clear, identify the specific business needs that align with the higher-level opportunities. Business needs are specific measures of output, quality, cost, time, customer satisfaction, job satisfaction, work habits, and innovation. For example, a measure of output may be an increase in sales to new customers; a measure of quality may be a reduction in the amount of rework or in warranty claims.

Once the business measures are identified, the next step is to identify performance needs: the actions or behaviors that need to change in order to improve the business measures. At this point, discuss potential solutions and identify the knowledge, skill, or information that will enable successful change in behaviors and actions given the various solutions. From there, identify the best way to deliver that knowledge, skill, or information. This alignment process ensures that the learning and development function implements programs that target the specific measures that matter to the organization.

Program objectives are an output of the needs assessment process. Objectives are the basis for program design and delivery; they are also the basis for the program evaluation. The figure below demonstrates the alignment between needs, objectives, and evaluation.

© ROI Institute, Inc. The business alignment model is copyright of ROI Institute, Inc. Used with permission.

Alignment between needs, objectives, and evaluation, along with a systematic process and a set of standards, ensures consistency in the evaluation implementation and reliability in the results. Together, these elements serve as the foundation for delivering effective measures of learning.
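As a concrete illustration of this alignment process, the following Python sketch pairs a hypothetical need at each level with the objective derived from it and the measure used to evaluate it. The rows, field names, and sample entries are invented for illustration and are not ROI Institute material.

    # Hypothetical sketch of the needs -> objectives -> evaluation alignment.
    from dataclasses import dataclass

    @dataclass
    class AlignmentRow:
        level: int        # evaluation level this row pairs with
        need: str         # what the needs assessment uncovered
        objective: str    # program objective derived from that need
        measure: str      # what is evaluated to judge success

    alignment = [
        AlignmentRow(5, "payoff: save money", "deliver a positive ROI", "ROI calculation"),
        AlignmentRow(4, "business: too much rework", "reduce rework by 20%", "rework rate"),
        AlignmentRow(3, "performance: checklist unused", "use the checklist on every job", "observation"),
        AlignmentRow(2, "learning: checklist unknown", "demonstrate the checklist steps", "skill practice"),
        AlignmentRow(1, "preference: must be job-relevant", "rate relevance 4 out of 5", "end-of-course survey"),
    ]

    # Needs are assessed from the top down; evaluation later works back up.
    for row in alignment:
        print(f"Level {row.level}: {row.need} -> {row.objective} -> {row.measure}")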


What data can be analyzed in a learning process, and how can you leverage that information to increase students' knowledge and achieve continuous improvement of learning experiences?


Jim and Wendy Kirkpatrick, owners of Kirkpatrick Partners: The New World Kirkpatrick Model (Figure 2) is a good guide for determining what data to collect and analyze throughout the learning and performance process. While the question asks just about learning, it is important to note that training and knowledge, in and of themselves, have little or no business value unless what is learned gets applied on the job, and that application contributes positively to key organizational outcomes.

Build formative evaluation elements into your instructional design plans. This involves monitoring and evaluating the components of Level 1 Reaction and Level 2 Learning during the training program itself, when you can still make adjustments and maximize outcomes.

Formative evaluation can be as simple as asking the class, “How might you use this information in your work?” Other techniques include having participants write on a note card or sticky note any concerns they have about applying what they have learned, or having table groups discuss how they will apply what they have learned. Any activities or interaction during a training program can provide data for the instructor to react to during the program, or gather for later reporting. What data is collected depends upon what is useful to the training department, and what is needed to establish credibility in the eyes of the stakeholders.

After training, summative evaluation methods, such as surveys and interviews, can include questions such as, “How well did the training prepare you to perform new skills on the job?” and “To what degree are you clear about what is now expected of you when you get back to your job?” This data can be used to ensure that future training focuses on what the participants need to know and practice to perform well on the job.


Patti Phillips, Ph.D., President & CEO, ROI Institute, Inc.: There are a variety of types of data that can be analyzed in a learning process. All too often, however, the focus is on the activity of learning rather than the results achieved through the learning. The concept of levels of evaluation has been around for decades. In the 1950s, Don Kirkpatrick, through his doctoral dissertation, developed a concept that he called the four steps of training evaluation. Little significant work toward implementation took place until the early 1970s, when Jack Phillips, working in the aerospace industry, was asked to demonstrate the return on investment (ROI) of a cooperative education program. Because there was little information available at the time to support his efforts, he built upon the four steps by bringing in a new theory, cost-benefit analysis, so that he could generate the economic output required by his executives. He identified critical measures grouped along his five levels and, to ensure reliability in his results, developed a model and a set of standards.

In 1983, Jack put the concept of these five levels of evaluation, and his process, on the map through his Handbook of Training Evaluation and Measurement Methods (1983, Gulf Publishing). Between Jack’s work, Don Kirkpatrick’s early work, and Kirkpatrick’s book Evaluating Training Programs: The Four Levels (1994, Berrett-Koehler), the Kirkpatrick Four Levels (formerly the four steps) and the Phillips five-level framework, process model, and standards have become a mainstay in the way we evaluate programs and categorize results. Given this, the five types of data we can analyze to determine the effectiveness of a learning process are:

  • Level 1: Reaction and Planned Action.

Data at this level give learning professionals the first indicator of a program's potential success. Participants must be willing to apply what they learn, and it is through the design and delivery of a program that learning professionals can influence participant buy-in. Critical measures captured at this level include relevance, importance, intent to use, amount of new information, and willingness to recommend the program to others. Success with these measures is influenced by participant perception of the courseware, as well as the facilitation.

  • Level 2: Learning.

Learning data are important because some learning occurs in every type of program: whether it’s sales training, job skills training, or leadership development, new knowledge, skill, and information are obtained through the learning process. While participants may indicate a willingness to apply what they learn in a program, if they lack the knowledge, skill, and information, they are unable to do so. We capture learning data in an effort to understand the extent to which people are ready to apply what they have learned. Together, Levels 1 and 2 tell us the extent to which people are willing and able to apply the knowledge, skill, and information they acquire through programs.

  • Level 3: Application.

Level 3 Application data include not only behavior change, but also actions and system performance. Application data include measures such as frequency of use, effectiveness with use, percentage of content applied, and specific behavior changes. In addition to collecting data around the actual application of knowledge and skill, we capture data that describe barriers and enablers to application. These data tell us the extent to which the organization, as a system, is supporting the transfer of learning, and they enable us to influence others in the system who may be inhibiting that transfer.

  • Level 4: Business Impact.

Level 4 data, Business Impact, answer a fundamental question: “So what?” While at Level 3 the focus is on changing behavior and applying knowledge, Level 4 data help us understand what improvements in business measures occur as a result of that change in behavior. As I like to say in my workshops, “We don’t invest hundreds of thousands of dollars just to change behavior so that people can all act the same. We do it because there is a business reason for doing it.” Those business reasons are categorized as either hard data or soft data. Hard data include output, quality, cost, and time. Soft data include customer satisfaction, job satisfaction, work habits, and innovation. Within each category there are specific measures that we take depending on the needs of the organization. In addition to collecting data around the change or improvement of the business measure, at Level 4 we also answer a critical question: “How do you know it was your program that caused the results?” This is one of the elements of the evaluation process that Phillips interjected in the 1970s and that was missing in the concept developed by Don Kirkpatrick. Learning and development professionals must be able to answer this question if their efforts are to be viewed as direct contributors to the forward motion of their organization.

  • Level 5: ROI.

Level 5, ROI, goes beyond impact data. At Level 4, improvements in business measures are considered intangible measures, meaning measures not converted to money; tangible measures are those that are converted to money. To calculate ROI, we must convert the business measures improved by the program to monetary value. Additionally, at Level 5 we bring in a new data set: the costs of the program, and these must be the fully loaded costs. Mistakenly, many people assume that the cost of a learning program is its budget. That assumption, however, gives us a false sense of security, because in some cases various cost items of a program, labor costs for example, are budgeted outside the learning function. I remember working with a client and asking to see the program costs. I was shocked to see just how low the cost was, given the nature of the program. When I asked the client about the labor cost, she said they did not count it because it was not in their budget. The misconception that budget represents investment is unfortunate: when stakeholders make a decision to cut a program, they are considering the additional costs associated with it, not just the budgeted items. At Level 5, we capture the fully loaded costs of a program, regardless of where those costs lie in the budgeting process; a worked calculation follows below.
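To make the Level 5 arithmetic concrete, here is a worked sketch in Python using the commonly published ROI formula, net program benefits divided by fully loaded costs; the dollar amounts are invented for illustration.

    # Worked Level 5 sketch with hypothetical numbers.
    # ROI (%) = (monetary benefits - fully loaded costs) / fully loaded costs * 100
    monetary_benefits = 750_000    # Level 4 improvements converted to money
    fully_loaded_costs = 425_000   # development, delivery, participant labor, etc.

    bcr = monetary_benefits / fully_loaded_costs
    roi_pct = (monetary_benefits - fully_loaded_costs) / fully_loaded_costs * 100

    print(f"Benefit-cost ratio: {bcr:.2f}")  # 1.76
    print(f"ROI: {roi_pct:.0f}%")            # 76%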

These different types of data give us a variety of ways to measure the success of our programs. They represent all of the data that stakeholders would need in order to make decisions about their programs. If a stakeholder is interested in behavior change, the data are available. If the stakeholder is interested in knowledge acquisition, the data are available. And if a stakeholder is interested in the economic contribution a program brings to the organization, the data are available. What’s important to keep in mind is that while we can answer critical questions at each level individually, it’s when we report data in total (i.e., all levels in logical order) that the story becomes compelling.


What do you recommend for calculating the ROI of e-learning projects?


Jim and Wendy Kirkpatrick, owners of Kirkpatrick Partners: You can follow the same principles to show the value of any initiative, regardless of the medium. However, the Kirkpatrick methodology discourages the calculation of a traditional return on investment. ROI is strictly a financial approach that attempts to isolate the impact of the training program itself, which, as noted previously, produces little or no organizational value in and of itself. It does this at the expense of all the other factors that go into maximizing performance and impact, such as coaching, on-the-job learning, and employee motivation. In so doing, it separates the training department from the business it serves, which is counter to the partnership approach required for training and the business to accomplish their collective goals. Finally, the ROI number, expressed rigidly in dollars, typically is not relevant or familiar to most executives outside the financial arena and is often misunderstood or disbelieved, further separating training from the business.

A better way to evaluate the value of a program is to use a business partnership approach. At the beginning of a major initiative, discuss which business goals are being targeted for improvement (Level 4 Results), what areas of performance need to improve on the job to accomplish them (Level 3 Behavior) and what training, if any, will increase that performance (Level 2 Learning). Notably, more often than not, training is not the primary intervention that will increase performance. In this case, training professionals can apply their expertise to partnering with the business to design on-the-job support programs, such as mentoring, coaching and accountability systems.

Once the program goals are established, then a plan for what needs to occur, and the roles and responsibilities of all parties, can be defined. Agreed-upon metrics can be set, and methods to gather the required data can be designed at the same time as the training program and any other collateral activities. Remember that rather than trying to quickly and easily measure, “How did the training work?”, the Kirkpatrick approach is to first ask and answer, “Is it working?” along the way. In order to do this, regular checkpoints should be defined so that preliminary program findings can be reviewed and, if necessary, the plan adjusted or remedies employed to ensure that the program produces the desired outcome.

With a plan in place at the onset, implementing a program that yields desired results is straightforward and creates a positive team atmosphere. Instead of simply measuring what happened after the program is complete, you will be driving performance and results, maximizing the outcomes the initiative was designed to yield in the first place. Findings will be reported in the ways that stakeholders requested them, so there is little chance of misunderstanding or dissatisfaction. Specifically, these findings are composed of quantitative and qualitative data that tell a story of value, and the relative contribution of the most significant success factors rather than just one. This is how training can demonstrate unquestionable value to the business and leave you with a formula to replicate for future mission-critical programs.


Patti Phillips, Ph.D., President & CEO, ROI Institute, Inc.: Calculating the ROI of e-learning requires the same steps as calculating the ROI of any other type of learning program, project, or initiative. The five levels of evaluation are just as relevant to e-learning as they are to any other type of initiative. All too often, people assume that the ROI of an e-learning program is developed differently than that of face-to-face learning events, but that’s not true. The difference between face-to-face and e-learning is delivery.

For example, if you have a project management course that you deliver in a face-to-face, brick-and-mortar setting, you’ll likely have business objectives that revolve around time savings and cost savings. If you offer the same program via e-learning, those time-savings and cost-savings objectives are the same; the difference is in the cost of delivery. In the early days of e-learning, when organizations adopted it strictly for cost-savings reasons, we found when comparing face-to-face delivery and e-learning delivery of the same program that the ROI for face-to-face was generally higher, even though the face-to-face programs tended to be more expensive. This was because the face-to-face program produced greater improvement in the business measures than the e-learning version did.

ROI is based on two inputs: benefits and costs. Benefits reside in the numerator and costs reside in the denominator. The more benefits you add to the numerator, the higher the ROI; the fewer costs you add to the denominator, the higher the ROI. The key with e-learning is to ensure not only that you manage the cost of delivery, that is, the “e” in e-learning, but also that the program generates the benefits necessary to overcome the cost and achieve the targeted ROI. While e-learning may be the direction the learning process is going, the question of value is being asked routinely. Many of the organizations that subscribe to e-learning content, such as a library of courses, are starting to ask, “Why?” and “Are we really getting value by offering this e-learning content, or do we need to think through a different approach?” By approaching the measurement and evaluation of e-learning the same way as other types of learning investments, learning professionals can ensure they have the most powerful program, using the most efficient and engaging platform and generating the highest return on investment.
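A small hypothetical comparison in Python illustrates the numerator/denominator point, and echoes the earlier observation that cheaper delivery does not automatically yield the higher ROI; all figures below are invented.

    # Hypothetical comparison: ROI depends on both benefits (numerator)
    # and fully loaded costs (denominator), not on delivery cost alone.
    def roi_pct(benefits, costs):
        """ROI (%) = net benefits / fully loaded costs * 100."""
        return (benefits - costs) / costs * 100

    face_to_face = roi_pct(benefits=900_000, costs=500_000)  # bigger impact, higher cost
    e_learning = roi_pct(benefits=300_000, costs=200_000)    # cheaper delivery, smaller impact

    print(f"Face-to-face ROI: {face_to_face:.0f}%")  # 80%
    print(f"E-learning ROI:   {e_learning:.0f}%")    # 50%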


May 2015