From Evidence to Proof: New Directions for Thinking About Metrics

By Jack J. Phillips and Patti P. Phillips

A group of client relationship managers participate in a formal learning program to implement new selling skills. Six months after the program, sales improve, and the learning team presents the results to the vice president of sales. The senior executive responds, “An increase in sales is great, but how much of the improvement is connected to the new selling skills versus the other factors that also made a contribution?” Sound familiar?

The need for more

The need for a credible connection to the business has never been stronger. An improvement in business measures following a learning program no longer earns learning and development automatic accolades; improvement can come from many factors. The key to showing learning and development’s contribution is to provide senior management with what they want—proof that your program is connected to the value you claim. They want you to isolate the effects of your programs.

This article dispels the myths and mysteries surrounding this process and shows how it is being accomplished by thousands of learning professionals.

Traditional thinking

Several decades ago, an article published in T+D Magazine titled “Evidence Versus Proof” suggested that you can never prove that training makes a difference. The article argued that the best you can do is provide evidence of training’s contribution by collecting a variety of levels of data.

While evidence is important, in today’s economic environment multiple levels of data are not enough to show the full contribution of a program. To suggest to an executive who is funding the program that “we may be making a difference, but we’re not sure” is a quick way to have a budget cut, a program curtailed, or perhaps all or part of the learning function outsourced. Proactive demonstration of a connection between learning programs and the outcomes claimed is a must.

New thinking

Historically, managers and executives expected little information with regard to learning’s value contribution. Training was a good thing, no questions asked. As time passed, executives began to ask for evidence of contribution. This was the “show me anything” generation. These executives were happy knowing that participants were happy and that skills were being developed.

Times have changed. The request for value has evolved from “show me” to “show me the data,” to “show me the money,” and now to “show me the real money.” The real money is the amount of improvement connected to a particular program. And yes, even these days there is an intense focus on showing the ROI.

The results of a 2009 Fortune 500 CEO survey reported in the August 2009 issue of T+D show that 74 percent of responding top executives want to see ROI from learning and development, yet only 4 percent of those CEOs are seeing it now. The study also shows that 96 percent of the CEOs want to see a connection to business impact, while only 8 percent actually receive these data.

Presentation of impact data and the connection to the learning and development program must be clear. Otherwise, credibility, support, commitment, and funds are up for grabs.

To ensure that results are credible, it is important to always isolate the effects of the program or project, at least for Level 4 and 5 analyses. Learning and development professionals are stepping up to this challenge, not by the dozens or hundreds, but by the thousands.

These individuals are accomplishing this step with increasing reliability. When traditional methods of isolating the effects of learning (such as the experimental-versus-control-group design) do not work, other methods will, and those methods are credible in the eyes of stakeholders, particularly top executives. Unfortunately, as with any process, barriers often get in the way of execution.

Barriers

The application of this step in training evaluation has been slow, although in recent years it has become a requirement in many organizations. Barriers to its application are not unlike barriers to implementing any change process.

Here are some of the most common barriers:

1) Mr. X said you don’t have to do it. Perhaps one of the most intriguing barriers is evaluation leaders themselves. People often heed the advice of those at the podium when they suggest that this step in the evaluation process be ignored. For a variety of reasons, these people take the position that it is not necessary to isolate the effects of a program. Basically, they are suggesting to their audience that they ignore this important level of credibility.

Unfortunately, many of these experts have not had the opportunity to face a senior management team. A CFO, chief operating officer, or CEO will not take this position. No CFO will ever say that it is not necessary to “show the connection of our capital investments to the business.” So why should noncapital investments, such as those made in learning and development, be held to a lesser standard? In many organizations, they are not.

We have an impressionable group of new professionals entering the learning and development field each year. When they hear a person of status take this position, they assume that it must be grounded by some logical, rational argument. Unfortunately, this is usually not the case.

2) We’re all in this together. There are many factors influencing performance, with learning being only one of them. This argument suggests that we are all in this together; we’re all making a contribution, and it is not important to understand which factor is making the contribution, or the most contribution, or even the amount of the contribution. Let’s celebrate that we’re all helping the situation.

While it is true that multiple functions and processes contribute to improvement in business measures, try telling the senior executives who must allocate resources and budgets that we’re all in this together. They will (and often do, when credible data are lacking) draw their own conclusions as to what is most valuable.

The more accurately you connect a program’s contribution to the bottom line, the easier it is for decision makers to appropriately allocate resources. Learning and development competes with some aggressive process owners who usually do make a clear connection between investment and results.

For example, in the sales scenario described earlier, the marketing team will certainly claim some of the sales improvement, suggesting that formal learning didn’t make much difference, if any. IT professionals will state that technology drives the sales improvement through faster access to data. The compensation team will suggest that rewards and incentives (for example, a new bonus structure) contribute to increased sales.

Competition for funding is plentiful, and many of learning and development’s competitors take the initiative to show how much they are contributing. Some go so far as to suggest that the learning and development function does not contribute at all. Yes, in the end we’re all in this together, but when resources are allocated, the expectations are clear. So why not take a proactive approach and give credit where credit is due, recognizing the contribution of others while showing your programs’ contributions to the business?

This particular barrier is fading quickly, particularly as groups face management teams applying the most intense budget scrutiny we have seen in decades. If you cannot show your contribution in terms that executives understand, you will lose support, influence, commitment, and, yes, funding. No one wants that. Executives want to see what is contributing, how it is contributing, and by how much.

If we don’t bring it up, they will assume that we don’t know, that we can’t do it, or that we don’t have a clue. None of those assumptions is good.

3) It can’t be done. Some people suggest that if you cannot use the classic experimental-versus-control-group design or some type of regression analysis, then you cannot isolate the effects of the learning. We disagree. When our favorite research-based techniques do not apply, other processes are available.

For example, a simple trend line analysis is a credible way to show the connection between a program and results, when it is appropriate. At the very least, estimates adjusted for error can be collected from participants. So, when asked if it can be done, the answer is always yes. You can always isolate the effects of your program; it’s just a matter of selecting the most credible technique for a given situation.
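To make the trend-line idea concrete, here is a minimal sketch in Python. It fits a straight line to pre-program monthly sales, projects that trend across the post-program months, and treats the gap between actual and projected sales as the portion attributable to the program. The figures and the six-month windows are hypothetical, chosen only to illustrate the technique.

```python
# Minimal sketch of trend-line analysis for isolating a program's effect.
# All figures are hypothetical; a real analysis would use the organization's own data
# and first confirm that the pre-program trend is stable and expected to continue.
import numpy as np

pre_program = np.array([100.0, 102.0, 104.5, 106.0, 108.5, 110.0])   # monthly sales before the program
post_program = np.array([118.0, 121.0, 124.0, 127.5, 130.0, 133.0])  # monthly sales after the program

# Fit a straight line to the pre-program months.
months_pre = np.arange(len(pre_program))
slope, intercept = np.polyfit(months_pre, pre_program, deg=1)

# Project that trend across the post-program months.
months_post = np.arange(len(pre_program), len(pre_program) + len(post_program))
projected = slope * months_post + intercept

# The improvement above the projected trend is the portion attributed to the program.
attributed = post_program - projected
print(f"Projected sales without the program: {projected.round(1)}")
print(f"Improvement attributed to the program, by month: {attributed.round(1)}")
print(f"Total attributed improvement: {attributed.sum():.1f}")
```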

4) It’s too hard. Yes, it may be too difficult to conduct statistical analysis of all factors influencing a particular change in a business measure. For most learning and development professionals, even setting up a classic experimental-versus-control-group design may be beyond their capability. While statistics and experimental design are credible and important techniques, they are not the dominant methods. The dominant method is collecting estimates of contribution from the most credible source of information and adjusting for error. In many cases, that source is the participant.

5) Estimates are not credible. Estimates are used only if no other approach is possible, and they can be credible. Imagine a group of sales team members who have been involved in a selling skills program; within six months, sales increase 15 percent. At the same time, a special promotion is implemented, commissions are increased, and new technology supplies information to the sales team faster, so the number of secured bids increases.

The sales team understands these factors—they are in the field, in the market, and are probably the most credible people to understand the relationship between the sales increase and the various influences. Is it precise? No, but they do understand the connection and we, as evaluators, can build on that understanding. Here are the steps:

First, we must collect data directly from the sales team in a nonthreatening, unbiased way. Ideally, this data collection occurs in a focus group where the individuals confidentially discuss the various factors and their connection to the sales.

Next, participants discuss the connection of each factor to the sales increase. Each participant is provided equal time to discuss the issues. After the discussion, specific factors are listed and participants estimate the percentage of sales increase due to each factor.

To improve the reliability of the estimate, participants indicate their confidence in the allocation on a scale of 0 to 100 percent, with 0 suggesting no confidence and 100 percent indicating certainty. This percentage serves as a discount factor or error adjustment. For example, if a sales team member allocates 30 percent of the improvement to the formal learning program and is 80 percent confident in that allocation, the adjusted allocation is 24 percent (30 percent × 80 percent = 24 percent). So we claim that at least 24 percent of the improvement is directly connected to learning and development.
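As a minimal illustration of this arithmetic, the Python sketch below applies the same adjustment to a small set of hypothetical focus-group responses and averages the results; the allocations, confidence levels, and the 15 percent sales increase are assumptions made only for the example.

```python
# Minimal sketch of the error adjustment described above, using hypothetical data.
# Each participant allocates a share of the improvement to the learning program and
# states a confidence level; allocation x confidence gives the adjusted (conservative)
# estimate, which is then averaged across the group.

# Hypothetical responses: (allocation to the learning program, confidence in that allocation)
responses = [
    (0.30, 0.80),  # 30% allocated, 80% confident -> 24% adjusted
    (0.40, 0.70),  # 40% allocated, 70% confident -> 28% adjusted
    (0.25, 0.90),  # 25% allocated, 90% confident -> 22.5% adjusted
]

adjusted = [allocation * confidence for allocation, confidence in responses]
average_adjusted = sum(adjusted) / len(adjusted)

sales_increase = 0.15  # the 15 percent improvement from the example above
attributed_to_learning = sales_increase * average_adjusted

print(f"Adjusted allocations: {[round(a, 3) for a in adjusted]}")
print(f"Average adjusted allocation: {average_adjusted:.1%}")
print(f"Portion of the 15% sales increase attributed to learning: {attributed_to_learning:.2%}")
```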

Remember, what makes this estimate meaningful is that it comes from the most credible source of data—the sales team. Team members generated the sales improvement (the fact). They understand the causes of that improvement (the other factors). The discussion is conducted by a neutral facilitator, and the information is collected in a nonthreatening, unbiased way. Participants discuss the other factors that contributed to sales to understand the cause-and-effect relationship, allocate a portion of the improvement to the learning solution, and, finally, adjust the estimate for error.

We often compare these estimates to results from more research-based processes, such as the experimental-versus-control-group design. The estimates are often more conservative than the contribution calculated from those methods. In addition, our clients routinely find this approach to be CEO- and CFO-friendly. Executives understand the challenge and appreciate the effort.

Estimating the contribution of learning and development can be accomplished every time a program is evaluated. The process is not difficult. This credible technique is acceptable to the management team, and the research shows that it is very conservative. While other techniques should be considered first, they are often inappropriate for a specific setting.

There is wisdom in crowds. A significant amount of research describes the power of estimates from ordinary people, with much of it reported in the popular press (James Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, Doubleday, 2004).

The good news

The good news is that progressive learning and development functions are taking steps to isolate the effects of their programs. Learning professionals are shedding the old, traditional way of thinking as they compete with other functions in the organization for much-needed funding. They are showing their direct contribution in a variety of ways, and they are making a huge difference.

It is being accomplished by thousands. We often chuckle when we hear well-known speakers say that no one is addressing this issue of isolating program effects. Perhaps they wish no one was addressing this issue. However, this is far from the case.

Since 1995, more than 3,000 professionals have been awarded the Certified ROI Professional designation. Each of these individuals must meet the standards of the ROI Institute. One of the standards is that during the evaluation of a program, they apply one or more techniques to isolate the effects of the program. If the step is ignored (or inappropriately applied), they are denied certification. From these numbers alone, we know that more than 3,000 people address this issue.

A variety of methods is used. Some of the criticism of this step is that it is based solely on estimates. Not true. This criticism is an insult to professionals who venture into more robust approaches and to academics and researchers who provide their input and support to help us expand the application of our methodology. Multiple techniques are often used (and encouraged) on the same study. When multiple techniques are used, two guiding principles come into play.

The first principle is to use the most credible method. This is often a judgment call, but given the situation and the scenario, a decision is made as to which technique is most credible from the perspective of the senior management team. If two methods are equally credible, another principle comes into play: use the method that generates the lowest ROI. This conservative standard enhances the credibility of the results and the buy-in from sponsors.
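To show how that conservative rule might play out, here is a minimal sketch with hypothetical figures. It assumes the usual ROI formula, net program benefits divided by program costs expressed as a percentage; the cost, the benefit amounts, and the method names are made up for the example.

```python
# Minimal sketch of the conservative selection rule: when two isolation methods are
# equally credible, report the one that yields the lower ROI. All figures are hypothetical.

def roi_percent(benefits: float, costs: float) -> float:
    """ROI (%) = (program benefits - program costs) / program costs * 100."""
    return (benefits - costs) / costs * 100

program_costs = 50_000.0  # hypothetical fully loaded program cost

# Monetary benefits attributed to the program under two equally credible isolation methods.
benefits_by_method = {
    "trend-line analysis": 140_000.0,                     # hypothetical
    "participant estimates (error-adjusted)": 120_000.0,  # hypothetical
}

rois = {name: roi_percent(benefits, program_costs) for name, benefits in benefits_by_method.items()}
conservative = min(rois, key=rois.get)

for name, value in rois.items():
    print(f"{name}: ROI = {value:.0f}%")
print(f"Report the more conservative result: {conservative} ({rois[conservative]:.0f}%)")
```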

It’s feasible and credible. When the decision is made to always address this issue, it is amazing what happens. Users of the ROI Methodology provide feedback, particularly when estimates are used. Two findings often surprise them:

1. Participants will react to this issue favorably and will give it their sincere attention and effort. This process recognizes participants as the experts. By definition, we go to them only when we have concluded that they are the most credible people to provide this data. They appreciate the recognition because not everyone perceives them as experts, so they take the process seriously.

2. When this information is presented to the management team, there is rarely pushback. Senior managers “get it.” They understand what the process means and recognize the difficulty of isolating the effects of the program. The program owner typically receives more support than initially anticipated.

The process is feasible within the resources of the measurement and evaluation budget, requiring little effort. It is certainly much more credible than ignoring the issue altogether.

Our clients also report that when they present studies to the senior team, this piece of the puzzle makes a difference. It makes the management team “perk up”; they now see that there is a connection, there is some proof—proof in acceptable, logical data that learning and development does make a difference. Executives no longer have to wonder, and program owners no longer have to guess at that contribution. So it is time to move from evidence to proof, showing the real contribution of learning to the business.

The challenge for learning leaders

The requirement to isolate the effects of the program is so important that we include it in the definition of the ROI Methodology—a systematic process to collect, analyze, and report data. One step in the process is to isolate the effects of the program. The sequential process always flows through this step and there is no bypass.

Finally, we include this step in the standards. The ROI Methodology uses 12 guiding principles that represent the standards of practice. One of the standards, number five, states, “At least one method must be used to isolate the effects of the program.”

Learning leaders must take the initiative with this issue. They must require this step when evaluating a program or project at the business impact level. In addition, this step should be a part of connecting learning to application (Level 3).

For example, when a participant is using skills from a learning program, the question becomes, “Is it because of their learning or is it because of some other process?” So the step to isolate program effects should be considered at Level 3. However, it is at Level 4, impact evaluation, where the issue surfaces more intensely.

By requiring this step, it becomes a disciplined, routine part of the evaluation process. It also positions the learning and development function as one of real value—beyond evidence.

About the Authors:

Jack J. Phillips is an expert on accountability, measurement, and evaluation and is co-founder of the ROI Institute. Phillips has received ASTD’s highest award, Distinguished Contribution to Workplace Learning and Development, for his work on ROI; info@roiinstitute.net. Patti P. Phillips (CPLP) is president and CEO and co-founder of the ROI Institute, Inc. An expert in measurement and evaluation, she helps organizations implement the ROI Methodology in 35 countries around the world; info@roiinstitute.net.

Reprinted from T+D Magazine
