Archives for August 2011

Heightened Union Activity Putting HR on Notice

Relief over failed attempts to revive the Employee Free Choice Act may be short-lived as employers face new pressures posed by a resurgent labor movement and a more union-friendly National Labor Relations Board.

Recent actions by the NLRB, including a June 22 proposal that would speed up the union election process, have some private-sector employers bracing themselves for a renewed organizing push. While union membership has been steadily declining since its peak in the 1950s, recent battles over collective bargaining rights in Wisconsin and other states and a more labor-friendly Obama administration appear to have re-energized labor activists.

“Over the next several months you’re going to see a renewed emphasis on organizing,” says Mike Asensio, a partner in the law firm Baker & Hostetler in Columbus, Ohio. “This will put a big burden on employers to get involved in labor relations issues on a day-to-day basis rather than waiting until there is a threat.”

Asensio says that the recent NLRB proposal, which would shorten the time between the filing of a union election petition and the election itself, will “deprive employees of an opportunity to hear an opposing or contrarian view and that will be extremely detrimental to employees.” A typical organizing campaign lasts about 40 days, but the proposal could compress that window to as few as 10.

Asensio has dubbed the plan “EFCA light,” referring to the 2008 federal legislation that would have bypassed secret-ballot elections, the way unions are formed under current law, whenever organizers collected signed authorization cards from a majority of workers. Efforts to resuscitate the bill have so far foundered.

While private-sector employers are being advised to be watchful of any signs of union activity, fears over a private-sector organizing push are premature, according to Josh Goldstein, a spokesman for the AFL-CIO, a federation of labor unions representing about 12 million workers.

The proposed rule is “a very modest change to a seriously broken labor law,” Goldstein says. “I think the effect on organizing is yet to be seen. I don’t think anyone expects this to have the same effect as EFCA. It’s a very small tweak in removing a barrier to a process that doesn’t work for employees or management.”

Goldstein says that employers have been lulled by a largely inactive NLRB under the Bush administration, making even the most modest proposals by the current board seem “like the sky is falling.”

“What people are seeing now is an NLRB that’s actually doing its job after 10 years of not doing it,” he says. “From the first board decisions, corporate interest groups have been up in arms that an independent government agency is actually doing its job. Is it a drastic change? Possibly.”

In addition to the NLRB proposal to accelerate elections, the Labor Department issued a June 22 proposal to clarify when employers must publicly disclose agreements made with labor relations consultants advising them during an organizing campaign. The move to regulate what the Labor Department calls “persuader activities” is widely viewed as another boon for unions.

Business leaders have also decried the NLRB’s recent unfair labor practice complaint against aerospace giant Boeing Co. Charging that the Chicago-based manufacturer was retaliating against its largest union by moving production of its 787 Dreamliner to a nonunion plant, the NLRB recommended in April that Boeing shift the work to a union facility in Washington state.

That same month, the NLRB filed lawsuits against two of four states with constitutional amendments barring private-sector employees from organizing using the card-check method, which was a key component of EFCA.

Given these recent developments, Kevin McCormick, a partner with the law firm Whiteford Taylor & Preston in Baltimore, urges HR practitioners to review their policies to ensure compliance with the National Labor Relations Act and to stay informed about laws, regulations and union tactics, but most of all, to stay connected with their employees.

If these proposals take effect, “employers won’t know what hit them,” McCormick says. “The train will be coming ’round the bend faster than they can imagine, and they won’t have time to get their side of the story out. This would be a very significant blow for employers.”

McCormick says that HR can have a huge impact by making sure that employers are ready. “Companies get organized because someone’s asleep at the switch,” he says. “Make sure you have open-door policies, that employees are engaged and don’t see work as drudgery. Above all, listen to them.”

There is no doubt that organized labor is becoming “palpably more aggressive,” according to Maria Anastas, a shareholder in the San Francisco office of the law firm Ogletree, Deakins, Nash, Smoak & Stewart.

“I try to impress on my clients in HR that you can’t underestimate the powerful connection these organizations make with the rank and file,” she says. “It absolutely affects how you connect with your employees.”

A still-flagging economy and concerns about layoffs and cutbacks in benefits have given organized labor a chance to be heard, she says.

“People feel they’re working harder for less,” Anastas says. “The union message resonates with a lot of employees.”

Reprinted from Workforce Management Online 

Training Transfer: From Insight to Action

One of the issues training and development professionals continually face is whether our participants have successfully transferred learned skills to the job. In other words, have they taken the insight they gained in training and turned it into action that is meaningful to them, their teams, and their company?

Working for (and with) Fortune 500 global training functions, I have developed a set of approaches that, when implemented, can significantly increase training transfer. Here are a few for your consideration.

Tip #1: Practice, Practice, Practice

The old adage that practice makes perfect holds true in our programs. When designing courses for maximum transfer, build in a minimum of 3 minutes of practice for every 1 minute of content delivered. Highly transferable training gives participants large amounts of time to practice what we want them to do on the job.

Our participants need to exercise those newly acquired skills so “muscle memory” takes over when they really need it. Another benefit is that our programs provide a safe haven for trial and error with new techniques. New skills that are not adequately practiced stand less chance of being transferred.

In one leadership program for first-time managers, redesigning the course from 80 percent lecture to 80 percent practice increased transfer from 22 percent to 71 percent. Same content, just intently practiced. The new design used minimal PowerPoint slides and only 30 minutes of lecture over a two-day period. Set 3 to 1 as your minimum design ratio; I like even more than that, but consider 3 to 1 the floor.
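To make the ratio concrete, here is a minimal sketch (in Python, since the article itself contains no code) of how a designer might audit a draft agenda against the 3-to-1 minimum. The agenda, segment labels, and function are invented for illustration, not taken from any program described above.

```python
# Illustrative sketch: audit a course agenda against the 3-to-1
# practice-to-content minimum suggested above. The agenda and the
# "content"/"practice" labels are hypothetical.

MIN_RATIO = 3  # at least 3 minutes of practice per minute of content

def practice_ratio(segments):
    """Return (content_minutes, practice_minutes, ratio) for an agenda.

    `segments` is a list of (kind, minutes) tuples, where kind is
    "content" (lecture, demonstration) or "practice".
    """
    content = sum(m for kind, m in segments if kind == "content")
    practice = sum(m for kind, m in segments if kind == "practice")
    ratio = practice / content if content else float("inf")
    return content, practice, ratio

agenda = [
    ("content", 10),   # brief framing lecture
    ("practice", 35),  # role-play in pairs
    ("content", 5),    # debrief of key points
    ("practice", 40),  # second round with coaching
]

content, practice, ratio = practice_ratio(agenda)
print(f"{content} min content, {practice} min practice ({ratio:.1f}:1)")
if ratio < MIN_RATIO:
    print("Below the 3:1 minimum; convert some lecture time to practice.")
```

Run against this sample agenda, the check reports a 5.0:1 ratio, comfortably above the minimum.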

Tip #2: Let Them Know What You Expect

Show expected on-the-job performance before, during, and after content and practice. Left to their own devices, participants will form their own judgments about how to implement learned skills, which may differ from what we envisioned. Simply explain the behaviors the company wants adopted, and do this early and often. Don’t just spotlight what participants will learn; spotlight what they can do with these key learnings. This simple practice has a remarkable success rate.

For example, one company was teaching effective meeting management to its supervisors to use in their morning briefings. At the beginning of class, the division head always described the components of an effective meeting and when to use them. The company had coined the term “Gold Standard Meetings” (GSM) to represent the application of the new skill. Signs posted throughout the classroom carried the Gold Standard Meeting title and listed the skills to be used when running these meetings.

The GSM skills were reinforced at the end of each class and were part of the vice president’s closing remarks.

Tip #3: Incorporate Reflective Practice

Reflective practice enables participants to learn from their experience. It is used after participants have completed activities such as simulations or practice sessions. Reflection is a way of helping participants better understand what they know and do; by reconsidering their experience, they deepen their knowledge and skill. Reflection places an emphasis on learning through questioning and investigation, leading to deeper understanding and increased transfer to the job.

I place reflective questions at logical points within the learning material (classroom or e-learning) and have participants answer each question by reflecting on what was just taught. At times, I have them share their answers with others in class. You can do this multiple times throughout the program—then have participants review their reflective practice answers in preparation for developing their action plans.

Tip #4: Do Action Planning

Action planning is a set of clearly written statements describing—in measurable terms—the specific actions the participant intends to apply on the job as a result of training. In preparing this, the participant is drawing up a personal transfer plan before leaving training, thinking about how, where, and when to match the new skills to concrete situations on the job.

This goal-setting strategy enhances the likelihood of transfer. Once back at work and confronted with e-mails, phone calls, meetings, and problems, the participant’s intention to adopt is negatively impacted—few participants take time in the two or three days after training to think about how they will use what they have just learned and practiced.

Provide the goal-planning sheet in the participant materials, along with notes for the instructor, explaining the purpose of the activity, how to introduce it, how much time to allocate, and criteria for acceptable action items. Allocate time at the end of the program to write goals. As a rule of thumb, participants need 15 to 30 minutes to write two goals.
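As one way to picture the “measurable terms” criterion, here is a hypothetical sketch of the fields an acceptable action item might capture. The structure and the acceptance check are illustrative assumptions, not a template from the article.

```python
# Hypothetical action-plan item: each statement names a specific
# on-the-job action, the measure it should move, a measurable target,
# and a review date. Field names are illustrative.

from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    action: str   # specific behavior to apply on the job
    measure: str  # the business measure it should move
    target: str   # how much improvement, in measurable terms
    due: date     # when the participant will review progress

    def is_acceptable(self) -> bool:
        # Rough check: every descriptive field must be filled in.
        return all(s.strip() for s in (self.action, self.measure, self.target))

goal = ActionItem(
    action="Open each client call with an agreed agenda",
    measure="Average meeting length",
    target="Reduce by 10 minutes within one quarter",
    due=date(2011, 11, 30),
)
print(goal.is_acceptable())  # True
```

An instructor reviewing goals against criteria like these can quickly flag vague items (“communicate better”) that name no measure or target.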

Tip #5: Hold Teleconferences and/or Webcasts to Inform Managers About the Course

Providing managers with a thorough understanding of the course allows them to send the right employees at the right time. It also helps them conduct pre-course conversations and begins their involvement in supporting their employees post-training. In these sessions, consider covering the following:

· Business purpose of the course

· What their employees will learn

· How the knowledge and skills will benefit their employees and their organization (team results)

· Who should attend

· When they should attend

· Why they should attend

· What the manager can do to support the employees before and after the course

These sessions are typically 30-minute teleconferences or Webcasts and can be recorded and made available to managers who could not attend or need to review.

While there is no magic bullet that will always turn insight to action, we can help the process along—at a very reasonable cost. In the end, success isn’t about how many people we train, but how many we’ve moved to action.

About the Author:

Dave Basarab is an experienced evaluator and author who has led strategic training initiatives for companies such as NCR, Motorola, Pitney Bowes, and Ingersoll Rand. He recently launched his new book, “Predictive Evaluation,” a ground-breaking approach to training and evaluation and a follow-up to his previous book, “The Training Evaluation Process.”  For more information, visit http://www.evaluatetraining.com.

Reprinted from Training Magazine

An Engine for Growth: Talent Function Helps Drive Turnaround

In January 2009, Yahoo!’s board brought in CEO Carol Bartz to get the Internet company off a rocky path. In June of that year, Bartz recruited Susan Burnett as senior vice president of talent and organization development to develop the company’s discouraged, shrinking workforce. At that point, the stock had spent nine years below its 2000 peak: shares closed at an all-time high of $118.75 on Jan. 3, 2000, but after the dot-com bubble burst, the company settled at a low of $4.05 on Sept. 26, 2001.

In late 2009 Bartz and Burnett did something every executive at Yahoo! had neglected to do since the company’s inception in 1995 — they envisioned a strategy for growth.

“Before I started at Yahoo!, Carol interviewed me, and I asked her, ‘What does success look like?’ She said, ‘Number one thing is improve the quality of management and leadership here because that will improve the quality of our results.’ I wrote that down. It’s what I’ve executed against since day one, and it’s what has gotten us on a steady path of transformation,” Burnett said.

Founded early in the Internet boom and grown aggressively, Yahoo! had stored away enough cash to weather tough times at first. But when the company began mass layoffs to boost its earnings, Burnett had to create a platform to promote employee engagement, performance management, career development, succession and learning technology strategy — plans that didn’t exist before.

“The last two years have really been retrofitting the foundation of Yahoo! from a technology business perspective and also from a cultural, behavioral perspective,” said David Windley, chief human resources officer at Yahoo! “We grew with many different properties, but we didn’t have a baseline … It’s almost as if we woke up one day, and on the surface we knew we were a big company, and you would think we’d have the advantages of a big company, but the reality is, we were a set of small companies and there was no leverage.

“What we’ve had to do over the last two years is redesign the underlying platforms, the technology of the company, from a business perspective. Then from an HR, cultural perspective we’ve had to align with the new culture, which is a much [more] influential, interdependent company versus people operating under silos.”

Burnett wasn’t a stranger to development. As managing director of talent development at Deloitte, she collaborated with the head of strategy to build a new talent development strategy for the firm, one with an integrated learning and development process to be delivered at Deloitte University. As chief talent management officer at Gap Inc. she built the company’s first succession and career development system and refined the leadership pipeline by defining competencies and experiences needed to produce business and personal success at each level in the organization.

Burnett’s passion for learning and employee growth came from her 22 years at Hewlett-Packard (HP), where she held a variety of HR and line management roles including a position in HP’s corporate training division. Almost half of Burnett’s career at HP was in line positions in marketing, where she led a variety of global marketing services and sales support functions.

“When people ask me about my career goals, I have to say I never had this goal,” Burnett said. “My goal was to learn, and my goal was to contribute to the business. That was it. What interested me was unlocking the potential of people. At HP I would work with the learning organization, my HR partners and say, ‘Look, here’s the transformation I’m trying to drive in marketing, here’s what I need globally: I need to increase the size of deal and shorten time to deal — what solutions can we put together for that?’ They’d look at me and say we don’t have anything for that. I would end up having to build it myself. That’s where I began to ask myself how learning becomes an engine for business growth, business transformation, not a transactional process.”

In her last role at HP as chief learning officer, Burnett was responsible for organization effectiveness and pulled together more than 75 training and organization development groups globally from pre-merger Compaq to create a loose federation of employees committed to developing a competitive workforce. Burnett owned the learning budget and was able to use the dollars as leverage to get a focus on strategy. Since leaving HP, Burnett has controlled her company’s learning and development budgets but has had to learn to do much more with a great deal less.

Burnett’s learning budget at Deloitte was $100 million; her budget at Yahoo! is approximately $35 million. Although it’s modest compared to what she’s used to working with, Burnett is grateful for what she has, given that it has nearly tripled since she arrived two years ago. The reason for its escalation is simple — she has proven results. As Burnett stands alongside peers asking the executive committee for more money to accelerate her initiatives, she can support her request with evidence of business growth.

“Marketing wants more money for the brand, products needs more money for investment in small markets and acquisitions we want to do,” Burnett said. “We got invested in, and we’re the only department. Carol said to me, ‘We saw when we invested in you, it made a difference, so accelerate development programs and make a difference across the company.’”

The first plan Burnett put in place at Yahoo! was an internal development program, Leading Yahoos, an organization development and learning initiative for all leaders — a targeted 2,000 employees. The goal is to engage leadership teams in a development experience that increases effectiveness at setting measurable goals and metrics for results, creating a personal leadership brand and a development plan based on feedback, coaching for accountability, leading the new beliefs that will enable breakthroughs and leading alignment up, down and across the organization.

“The most powerful parts of this program are how we’ve developed team beliefs and an alignment of objectives on how to change the culture and behavior of teams to execute against goals, goals we didn’t have in place before, as a team,” Windley said. “This isn’t theoretical, it’s not sitting around the table and discussing nice values we’d like to have, it’s practical. To accomplish Yahoo! business objectives, this is how teams need to behave. Using this strategy, beliefs tend to stick more simply because they’re actionable.”

Lessons from Leading Yahoos Are Sticking

Some 2010 scoreboard results from employees who have completed Leading Yahoos — 1,472 Yahoo! leaders in total — indicate that 98.1 percent of participants would recommend the program to their peers, 97.1 percent apply new skills and concepts learned from the program on the job, and 90.2 percent believe what they have learned will enhance their performance. Further, Leading Yahoos participants post higher employee engagement scores than peers who have yet to complete the program: 9 percent higher on career development, 6 percent higher on performance and accountability, and 5 percent higher on decision making and manager effectiveness.

“The program is customized to drive the strategic changes in our functions and in our geographies,” Burnett said. “Blake Irving, head of products, is using Leading Yahoos to drive the product changes he wants to drive as he develops the vision and design of Yahoo!’s global consumer and advertiser portfolio. The collaboration this brings to his team is imperative to Yahoo!’s success. You drive transformational change through human beings interacting with each other and building trust and confidence in the strategy and new direction.”

Prior to Burnett and Bartz stepping into their roles, there was no transparency. There were no written company goals, metrics, quarterly business reviews or analyst meetings. All of those things have been implemented under Bartz’s leadership.

“Half of this is the discipline of leadership and good management that didn’t exist before,” Burnett said. “When employees see positive results, both for the organization and their own engagement, being broadcasted over and over again, they know there’s good direction, there’s a vision, there’s a mission, there’s a brand proposition, that we’re winning.”

In May, Yahoo! shares tumbled when Alibaba, a Chinese Internet group, transferred its online payments unit, Alipay, to a local Chinese company controlled by Jack Ma, Alibaba’s chairman. Yahoo! owns 43 percent of Alibaba through a variable interest entity, and Yahoo! investors had placed considerable hope in Alipay’s future growth as a reason to own the stock. Further, Bartz’s decisions to slash costs through layoffs and a search partnership with Microsoft left both employees and investors questioning her integrity and heart.

Despite the criticism, Bartz has improved financial performance and established Yahoo! as a top online destination for news, sports, finance and entertainment, according to comScore Media Metrix survey data. Yahoo! operates one of the world’s largest private cloud infrastructures, handling more than 11 billion page visits per month and 100 billion events a day. In March 2011 it was the third most visited site in the U.S., with 4.7 billion visits and an average of 26.3 visits per consumer.

Burnett credits Yahoo!’s recent growth to the organization’s talent management strategy, which enables and accelerates company goals to strengthen leadership, build effective organizations and increase engagement by building winning teams.

“The learning philosophy I’ve created is to stay current and be competitive, learn all the time and know what’s going on in the world around you,” Burnett said. “The economy and world of work is very different now. It’s turbulent. Staying competitive with your skill set and knowledge and being a performer is really the message for success. It will be front and center through our career development messaging, and it’s front and center in our learning portal. If you think about the employer brand of Yahoo!, people come here for growth; they come here to impact because they know they can impact. It’s our job to provide that room for growth, and we’re finally doing that.”

Reprinted from Chief Learning Officer magazine

From Evidence to Proof: New Directions for Thinking About Metrics

A group of client relationship managers participate in a formal learning program to implement new selling skills. Six months after the program, sales improve, and the learning team presents the results to the vice president of sales. The senior executive responds, “An increase in sales is great, but how much of the improvement is connected to the new selling skills versus the other factors that also made a contribution?” Sound familiar?

The need for more

The need for a credible connection to the business has never been stronger. No longer does an improvement in business measures following a learning program automatically earn learning and development the accolades. Improvement can come from many factors. The key to showing the contribution of learning and development is to provide senior management with what they want—proof that your program is connected to the value you claim. They want you to isolate the effects of your programs.

This article examines the myths and mysteries surrounding this process and shows how it is being accomplished by thousands of learning professionals.

Traditional thinking

Several decades ago, an article published in T+D Magazine titled “Evidence Versus Proof” argued that you can never prove that training makes a difference; the best you can do, it suggested, is provide evidence of training’s contribution by collecting a variety of levels of data.

While evidence is important, in today’s economic environment, multiple levels of data are not enough to show the full contribution of a program. To suggest to an executive who is funding the program that “we may be making a difference, but we’re not sure,” is a quick way to have a budget cut, a program curtailed, or perhaps all or part of the learning function outsourced. Proactive demonstration of a connection between learning programs and the outcomes claimed is a must.

New thinking

Historically, managers and executives expected little information with regard to learning’s value contribution. Training was a good thing, no questions asked. As time passed, executives began to ask for evidence of contribution. This was the “show me anything” generation. These executives were happy knowing that participants were happy and that skills were being developed.

Times have changed. This request for value has evolved from, “show me,” to “show me the data,” to “show me the money,” and to “show me the real money.” The real money is the amount of improvement connected to a particular program. And yes, even these days there is an intense focus to show the ROI.

The results of a 2009 Fortune 500 CEO survey reported in the August 2009 issue of T+D show that 74 percent of top executives responding want to see ROI from learning and development, yet only 4 percent are seeing it now. The study also shows that 96 percent of the CEOs want to see connection to business impact, while only 8 percent of them actually receive these data.

Presentation of impact data and the connection to the learning and development program must be clear. Otherwise, credibility, support, commitment, and funds are up for grabs.

To ensure that results are credible, it is important to always isolate the effects of the program or project, at least for Level 4 and 5 analyses. Learning and development professionals are stepping up to this challenge, not by the dozens or hundreds, but by the thousands.

These individuals are accomplishing this step with increasing reliability. When the traditional methods of isolating the effects of learning (such as experimental versus control groups) do not work, other methods will, and those methods are credible in the eyes of stakeholders, particularly top executives. Unfortunately, as with any process, barriers often get in the way of execution.

Barriers

The application of this step in training evaluation has been slow, although in recent years it has become a requirement in many organizations. Barriers to its application are not unlike barriers to implementing any change process.

Here are some of the most common barriers:

1) Mr. X said you don’t have to do it. Perhaps one of the most intriguing barriers is evaluation leaders themselves. People often heed the advice of those at the podium when they suggest that this step in the evaluation process be ignored. For a variety of reasons, these people take the position that it is not necessary to isolate the effects of a program. Basically, they are suggesting to their audience that they ignore this important level of credibility.

Unfortunately, many of these experts have not had the opportunity to face a senior management team. A CFO, chief operating officer, or CEO will not take this position. No CFO will ever say that it is not necessary to “show the connection of our capital investments to the business.” So why should noncapital investments, such as those made in learning and development, be held to a lesser standard? In many organizations, they are not.

We have an impressionable group of new professionals entering the learning and development field each year. When they hear a person of status take this position, they assume that it must be grounded by some logical, rational argument. Unfortunately, this is usually not the case.

2) We’re all in this together. There are many factors influencing performance, with learning being only one of them. This argument suggests that we are all in this together; we’re all making a contribution, and it is not important to understand which factor is making the contribution, or the most contribution, or even the amount of the contribution. Let’s celebrate that we’re all helping the situation.

While it is true that multiple functions and processes contribute to improvement in business measures, try telling senior executives who are allocating resources and budgets that we’re all in this together. They will (and often do, when there is a lack of credible data) draw their own conclusions as to what is most valuable.

The more accurately you connect a program’s contribution to the bottom line, the easier it is for decision makers to appropriately allocate resources. Learning and development competes with some aggressive process owners who usually do make a clear connection between investment and results.

For example, in the sales scenario described earlier, the marketing team will certainly claim some of the sales improvement, suggesting that maybe formal learning didn’t make much, if any, difference. IT professionals will state that technology makes the difference in sales improvement through faster access to data. The compensation team will suggest that rewards and incentives (for example, a new bonus structure) contribute to increased sales.

Competition for funding is plentiful, and many of learning and development’s competitors for funding take the initiative to show how much they are contributing. Some go so far as to suggest the learning and development function does not contribute. Yes, in the end, we’re all in this together; when resources are allocated, the expectations are clear. So why not take a proactive approach and give credit where credit is due, recognizing the contribution of others and showing your programs’ contributions to the business?

3) No one wants this. This particular barrier is fading quickly, particularly as groups face management teams applying the most intense budget scrutiny we have seen in decades. If you cannot show your contribution in terms that executives understand, then you will lose support, influence, commitment, and yes, funding. Executives want to see what is contributing, how it is contributing, and by how much it is contributing.

If we don’t bring it up, they will assume that we don’t know, we can’t do it, or we don’t have a clue. Any way you look at it, that’s not good.

4) It can’t be done. Some people suggest that if you cannot use a classic experimental-versus-control-group design or some type of regression analysis, then you cannot isolate the effects of the learning. We disagree. When our favorite research-based techniques do not apply, other processes are available.

For example, a simple trend-line analysis, where appropriate, is a credible way to show the connection between a program and results. At the very least, estimates adjusted for error can be collected from participants. So, when asked whether it can be done, the answer is always yes. You can always isolate the effects of your program; it is just a matter of selecting the most credible technique for a given situation.
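For readers who want to see the mechanics, the following is a minimal sketch of a trend-line analysis under invented numbers: fit the pre-program trend, project it past the program date, and credit the program only with the gap between actual and projected results. The sales figures and six-month windows are assumptions for illustration.

```python
# Trend-line isolation sketch: only the improvement above the
# pre-existing trend is attributed to the program. Data are invented.

import numpy as np

pre_months = np.arange(6)                             # months 0-5, before training
pre_sales = np.array([100, 103, 105, 108, 110, 113])  # existing upward trend

slope, intercept = np.polyfit(pre_months, pre_sales, 1)

post_months = np.arange(6, 12)                        # months 6-11, after training
projected = slope * post_months + intercept           # what the trend alone predicts
actual = np.array([120, 124, 127, 131, 134, 138])     # observed post-program sales

program_effect = actual - projected
print(f"Improvement attributable to the program: {program_effect.sum():.1f} units")
```

The method is credible only when the pre-program trend is stable and no other major influence enters the post-program period, which is why selecting the technique to fit the situation matters.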

5) It’s too hard. Yes, it may be too difficult to conduct a statistical analysis of all the factors influencing a particular change in a business measure. For most learning and development professionals, even setting up a classical experimental-versus-control-group design may be beyond their capability. While statistics and experimental design are credible and important techniques, they are not the dominant methods. The dominant method is collecting estimates of contribution from the most credible source of information and adjusting for error. In many cases, that source is the participant.

6) Estimates are not credible. Estimates are used only when no other approaches are possible, but they can be credible. Imagine a group of sales team members who have been through a selling skills program; within six months, sales increase 15 percent. At the same time, a special promotion is implemented, commissions are increased, and new technology supplies information to the sales team faster, so the number of secured bids increases.

The sales team understands these factors—they are in the field, in the market, and are probably the most credible people to understand the relationship between the sales increase and the various influences. Is it precise? No, but they do understand the connection and we, as evaluators, can build on that understanding. Here are the steps:

First, we must collect data directly from the sales team in a nonthreatening, unbiased way. Ideally, this data collection occurs in a focus group where the individuals confidentially discuss the various factors and their connection to the sales.

Next, participants discuss the connection of each factor to the sales increase. Each participant is provided equal time to discuss the issues. After the discussion, specific factors are listed and participants estimate the percentage of sales increase due to each factor.

To improve the reliability of the estimate, participants indicate their confidence in the allocation on a scale of 0 to 100 percent, with 0 suggesting no confidence and 100 percent suggesting certainty. This percentage serves as a discount factor or error adjustment. For example, if a sales team member allocates 30 percent of the improvement to the formal learning program and is 80 percent confident in that allocation, the adjusted allocation is 24 percent (30 percent × 80 percent = 24 percent). So we claim that at least 24 percent of the improvement is directly connected to learning and development.
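The same arithmetic extends to every factor in the allocation. Below is a small sketch using invented factors and percentages; only the 30 percent allocation at 80 percent confidence comes from the example above.

```python
# Error-adjusted participant estimates: each allocation is discounted
# by the estimator's stated confidence. Factors and numbers are
# illustrative, except the learning-program example from the text.

def adjusted_contribution(allocation_pct, confidence_pct):
    """Discount an estimated allocation by the estimator's confidence."""
    return allocation_pct * confidence_pct / 100.0

factors = {
    # factor: (allocated % of improvement, confidence %)
    "formal learning program": (30, 80),
    "new bonus structure":     (25, 70),
    "special promotion":       (20, 60),
    "faster access to data":   (15, 75),
}

for name, (alloc, conf) in factors.items():
    print(f"{name}: {adjusted_contribution(alloc, conf):.0f}% (adjusted)")
# formal learning program: 24% (adjusted), matching 30% x 80% = 24%.
```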

Remember, what makes this estimate meaningful is that it comes from the most credible source of data—the sales team. Team members generated the sales improvement (the fact). They understand the causes of that improvement (the other factors). In a discussion led by a neutral facilitator, they considered the other factors that contributed to sales to understand the cause-and-effect relationship, allocated a portion of the improvement to the learning solution, and adjusted the estimate for error. And the information is collected in a nonthreatening, unbiased way.

We often compare these estimates to more credible processes, such as experimental versus control group. Estimates are often more conservative than the contribution calculated from the more research-based methods. In addition, our clients routinely find this approach to be CEO- and CFO-friendly. Executives understand the challenge and appreciate the effort.

Estimating the contribution of learning and development can be accomplished every time a program is evaluated. The process is not difficult. This credible technique is acceptable to the management team, and the research shows that it is very conservative. While other techniques should be considered first, they are often inappropriate for a specific setting.

There is wisdom in crowds. A significant amount of research describes the power of estimates from average people, with much of it reported in the popular press (James Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, Doubleday, 2004).

The good news

The good news is that progressive learning and development functions are taking steps to isolate the effects of their programs. Learning professionals are shedding the old, traditional way of thinking as they compete with other functions in the organization for much-needed funding. They are showing their direct contribution in a variety of ways, and they are making a huge difference.

It is being accomplished by thousands. We often chuckle when we hear well-known speakers say that no one is addressing this issue of isolating program effects. Perhaps they wish no one was addressing this issue. However, this is far from the case.

Since 1995, more than 3,000 professionals have been awarded the Certified ROI Professional designation. Each of these individuals must meet the standards of the ROI Institute. One of the standards is that during the evaluation of a program, they apply one or more techniques to isolate the effects of the program. If the step is ignored (or inappropriately applied), they are denied certification. From these numbers alone, we know that more than 3,000 people address this issue.

A variety of methods is used. Some of the criticism of this step is that it is based solely on estimates. Not true. This criticism is an insult to professionals who venture into more robust approaches and to academics and researchers who provide their input and support to help us expand the application of our methodology. Multiple techniques are often used (and encouraged) on the same study. When multiple techniques are used, two guiding principles come into play.

The first principle is that they should use the most credible method. This is often a judgment call, but given the situation and the scenario, a decision is made as to the most credible technique from the perspective of the senior management team. If two methods are equally credible, another principle comes into play: use the method that generates the lowest ROI. This conservative standard enhances the credibility of results and the buy-in from sponsors.
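As a brief illustration of that conservative rule, the sketch below computes ROI as net benefits divided by costs, the formula the ROI Methodology uses, for two isolation methods and reports the lower figure. The monetary values and method labels are invented.

```python
# Conservative reporting sketch: when two equally credible isolation
# methods yield different monetary benefits, report the lower ROI.
# Figures are invented for illustration.

def roi_percent(benefits, costs):
    """ROI as net program benefits over program costs, in percent."""
    return (benefits - costs) / costs * 100.0

costs = 50_000
benefits_by_method = {
    "trend-line analysis": 140_000,
    "adjusted participant estimates": 120_000,
}

rois = {m: roi_percent(b, costs) for m, b in benefits_by_method.items()}
method, lowest = min(rois.items(), key=lambda kv: kv[1])
print(f"Report {lowest:.0f}% ROI (from {method}, the more conservative result).")
```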

It’s feasible and credible. When the decision is in place to always address this issue, it is amazing what happens. The users of the ROI Methodology provide feedback, particularly when estimates are used. Two important issues often surprise them:

1. Participants will react to this issue favorably and will give it their sincere attention and effort. This process recognizes participants as the experts. By definition, we go to them only when we have concluded that they are the most credible people to provide this data. They appreciate the recognition because not everyone perceives them as experts, so they take the process seriously.

2. When this information is presented to the management team, there is rarely pushback. Senior managers “get it.” They understand what the process means and recognize the difficulty of isolating the effects of the program. The program owner typically receives more support than initially anticipated.

The process is feasible within the resources of the measurement and evaluation budget, requiring little effort. It is certainly much more credible than ignoring the issue altogether.

Our clients also report that when they present studies to the senior team, this piece of the puzzle makes a difference. It makes the management team “perk up”; they now see that there is a connection, there is some proof—proof in acceptable, logical data that learning and development does make a difference. Executives no longer have to wonder, and program owners no longer have to guess at that contribution. So it is time to move from evidence to proof, showing the real contribution of learning to the business.

The challenge for learning leaders

The requirement to isolate the effects of the program is so important that we include it in the definition of the ROI Methodology—a systematic process to collect, analyze, and report data. One step in the process is to isolate the effects of the program. The sequential process always flows through this step and there is no bypass.

Finally, we include this step as a standard. The ROI Methodology uses 12 guiding principles that represent the standards of practice. One of them, standard number five, states, “At least one method must be used to isolate the effects of the program.”

Learning leaders must take the initiative with this issue. They must require this step when evaluating a program or project at the business impact level. In addition, this step should be a part of connecting learning to application (Level 3).

For example, when a participant is using skills from a learning program, the question becomes, “Is it because of their learning or is it because of some other process?” So the step to isolate program effects should be considered at Level 3. However, it is at Level 4, impact evaluation, where the issue surfaces more intensely.

By requiring this step, it becomes a disciplined, routine part of the evaluation process. It also positions the learning and development function as one of real value—beyond evidence.

About the Authors:

Jack J. Phillips is an expert on accountability, measurement, and evaluation and is co-founder of the ROI Institute. Phillips has received ASTD’s highest award, Distinguished Contribution to Workplace Learning and Development, for his work on ROI; info@roiinstitute.net. Patti P. Phillips (CPLP) is president and CEO and co-founder of the ROI Institute, Inc. An expert in measurement and evaluation, she helps organizations implement the ROI Methodology in 35 countries around the world; info@roiinstitute.net.

Reprinted from T&D Magazine
