Changed The Reporting. Now Change the Accounting

The current perception of L&D is captured in a report by the Center for American Progress bemoaning the way the training investment is currently reported:

“not on its own but lumped into selling, general, and administrative expenses, or SG&A, a measure that includes items such as company lunches and paper clips.”

Agreed.  They go on to stand up for L&D by showing the problem with this approach.

“Companies’ expenditures on worker training and skills show up not as a valuable investment similar to R&D but as an increase in general overhead, a measure that managers have shown a proclivity for cutting and whose reduction is often cheered by investors.”

They then end it with what I think was meant to be a compliment.

“This treatment of human capital ignores the findings of numerous studies:”

wait for it…

“Investments in human capital enhance productivity and are more valuable to a firm than general overhead expenses.”

L&D is a better use of a dollar than buying paper clips. Stop all the ROI studies; we have our answer. Maybe we should start doing PSAs that proclaim, “for the cost of just one executive expense account, we can train an entire salesforce for that expansion region.” Or, “if everyone would just print out only the documents they needed, we could save enough in paper supplies to onboard 1000 new hires.” L&D has a lot of work to do. Time to get to it.

Data as Author Pt. 2

Be careful about your data narratives, someone might actually believe them.

Billy Valentine, data whisperer

Numbers are nice, but it is the narrative, the story, that matters. As an outside advisor to learning leaders for over two decades, I know that while the data may raise questions, it is the narrative that raises eyebrows. Data can always be found to prop up one’s gut or one’s desired POV. Words must be carefully chosen and implications properly couched.

I have written a few times about the annual State of the Industry numbers and the narrative spun by ATD and LinkedIn Research. Spoiler alert: not a fan and not a believer. But if you want to put your learning organization’s numbers in a positive light there is no doubt you can find a stat that looks nice sitting next to the leading companies.

Data doesn’t lie. 

And we don’t have to either. But we do tell the truth in our own way.  For example:

A company has 1,000 employees at the level of manager or above. Last year L&D ran a pilot with 40 participants in a new leadership program. This year they ran the course multiple times, putting 120 future leaders through the program. The following data points are ALL accurate.

  • To date, only 16% of the company’s leaders have received learning support. (makes the case for additional investment)
  • We trained 3X as many leaders this year as last. (demonstrates commitment to future leaders)
  • Our leadership program has directly impacted (participant + direct reports) almost 2,000 employees. (assumes 10-12 direct reports per manager)
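
For the spreadsheet-inclined, here is a quick sketch showing how all three framings fall out of the same four inputs (the direct-report multiplier is the stated assumption):

```python
# Three accurate framings of the same leadership program data.
total_leaders = 1000      # managers and above
pilot_cohort = 40         # last year's pilot
this_year = 120           # participants this year
avg_directs = 11          # assumed 10-12 direct reports per manager

trained = pilot_cohort + this_year
print(f"Coverage: {trained / total_leaders:.0%} of leaders trained")             # 16%
print(f"Growth: {this_year / pilot_cohort:.0f}X as many leaders as last year")   # 3X
print(f"Reach: ~{trained * (1 + avg_directs):,} employees impacted")             # ~1,920
```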

So when you write your narrative, be careful. The narrative can often say more about the author than it does about the data. Don’t overstate (tempting), know the limits of the data (it will have biases, just like you), don’t overextend from data to assumption (without acknowledging it), and for heaven’s sake know the difference between causation and correlation.

FOMO vs. NPS

Missing Out on L&D

I thought the following excerpt from my book (v3 coming later this month) would be appropriate as we enter budget season. Two decades ago, when Ed Trolley and I started doing executive interviews in support of our training organization assessments, we were sure to ask a rating question for learning and development. As time went on it became the net promoter score question. Over time, and many assessments, we continued to improve our data collection to provide a deeper understanding of the potential, and realized, value that the learning organization delivered. During this time we identified what we think is an even more powerful question.

Perceived Value Score (PVS) question:

“On a scale from “extremely impacted” [10] to “would not notice” [1], how would your business be affected if all of our L&D solutions went away?”

Startups are constantly seeking “product-market fit”. As Marc Andreessen, founder of Netscape and leading startup investor, simply defined it, “Product-market fit means being in a good market with a product that can satisfy that market.” The fear of loss, captured by the PVS data, has been identified as a strong indicator of startup success. Sean Ellis, founder of Growth Hackers, a community collecting actionable information on how to produce high growth for organizations, sets the bar for PMF at 40%: if forty percent of your users aren’t rating your product/service top box, then your product or organization doesn’t have it.
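
As a minimal sketch of what scoring PVS responses against an Ellis-style 40% bar could look like (the responses, and the mapping of “top box” to scores of 9 and 10, are my assumptions):

```python
# Score a batch of PVS responses (1 = "would not notice", 10 = "extremely impacted")
# against an Ellis-style 40% product-market-fit bar.
responses = [10, 9, 7, 10, 4, 8, 9, 10, 6, 9]  # hypothetical survey data

top_box = sum(1 for r in responses if r >= 9) / len(responses)
verdict = "clears" if top_box >= 0.40 else "misses"
print(f"Top-box share: {top_box:.0%} -> {verdict} the 40% bar")
```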

This gauge of how essential learning and development is perceived to be by business leaders can be a compelling endorsement or call-to-action. The coronavirus shock has dramatically changed how learning is provided over the last six months. What was done face-to-face is now provided virtually or not at all. New skills, such as managing at a distance, were needed in short order. So now perhaps the question is less about business leaders’ ability to imagine a world without L&D and more about today’s reality.

What is your learning organization’s PVS?

Data as Star/Data as Author Pt. 1

This is a two-part post with some ideas about the stories that data tells (author) and using data to tell a story (star).

Data as Star

When I was running strategy for iZS, Zurich Scudder’s VC attempt during the first dot-com run, I read an HBR article (the physical, ridiculously priced magazine) talking about companies producing two things: a product and a data stream. Today’s environment has sensitized those of us who are paying attention to the fact that companies know a s*%$ ton about us. So with all this data flying around, why are we still married to old-school, direct data collection? Or limited by the lack thereof? We need to get smarter to solve for two problems:

  • What we are asking our measures to measure has changed, but the measure has not.
  • Measures we need don’t exist.

In order to use data as the star of our story, it must exist and it has to be believable (source, methodology, red-face test). Other than that, data in support of a pre-constructed narrative is too often a creative endeavor. We use L&D spend benchmarks as a way of showing how our company is leading the pack in our industry. We use the same metric, but compare it to the highest performers in the world (where the company is a bit more middle of the pack), when we submit our annual budget. While data-authored narratives can also be fraught with biases, assumptions, and intent, at least they start with and are anchored by the data.

Two data scientists delivering a talk to a group of startups in Delaware reminded the audience that data is like a bikini: very intriguing but revealing nothing. Loved those guys.

I have a job, kinda.

I worked with a company that did a large volume of formation documents.  These are the documents that need to be completed and filed with states and the Federal government when you start a company.  They reached out to me to look for additional revenue streams, partnerships, or any other ideas that would help strengthen their already leading position.

Of the many ideas that my team and the internal team developed, the one I loved was a data play. At the time, ADP had just begun releasing their employment data index, a set of publicly announced numbers that suggested the current state of employment. You may ask, “don’t we already have a number from the Bureau of Labor Statistics (BLS) for that?” We do. It sucks. If you go underneath the BLS numbers you realize the number’s number is up. Don’t take my word for it, just google it. One of the many voices you will come across is Dr. Robert Shapiro’s from EconVue.

Dr. Shapiro is no slouch, having served as Under Secretary of Commerce for Economic Affairs for four years. Here is how he describes why people at the outset of the pandemic were deemed not “available” for work and therefore excluded from the unemployment calculation.

“In fact, millions of Americans were not “available” for work in April because they were caring for children whose schools were closed—and millions of people didn’t look for new jobs because the avalanche of layoffs made a job search pointless. They fit a textbook definition of individuals whom the BLS excludes from the ranks of the labor force, and so do not count as unemployed.”

So tell me again why we don’t need a data point that simply counts the number of paychecks ADP issues for its clients every two weeks. Seems closer to the source. Still not perfect, but important.

So our data play for the formation king was just as simple as ADP’s. We already knew how many new businesses were being formed. We knew what percentage of the overall formations it represented (more than representative). Hell, we even knew what type of business was being formed (barbershop, restaurant, welding). So why not create an entrepreneurial index that showed how active the startup spirit was and in which sectors those entrepreneurs were seeking their fortunes? This information could be useful for commercial real estate, business banking, tax incentives, and more. Again, imperfect but important. They chose other options, but this post is also me pitching the idea to them again (call me).

How will the rapidly growing gig economy make the current measures even less calibrated to reality? We have to look for new and better measures for important things that must be measured properly. If we are to use unemployment data as a proxy for how well we are doing, as workers and as an economy, then it had better be as close to right as possible.

The economy goes as pickup sales go 

This proxy is real and has been around for a long time. DataTrek is a believer that, “pickup truck sales can be a helpful indicator since the trucks are used in a vast array of businesses, and are typically a discretionary purchase.” More here. To paraphrase Forum Corporation co-founder, Richard Whiteley, quoting some ancient wisdom from someone…

“We cannot see the wind.  We know it exists by watching the leaves.” – Unknown

“True dat.” – Me

So the pickup proxy may be a bit of a stretch, but it has a history (relatively accurate, directionally) and broad nodding acceptance. While these correlations may seem silly, finding one that works can provide insight, and even advantage. Say, for example, you are the first to see the relationship between two data sets, Home Depot revenue and housing sales. Whenever housing sales go up, revenue goes down, implying that people are buying rather than building. Or revenue rises when home sales go down, implying more fixing up and less moving. Either way (or both), there is an insight into consumer behavior, and as long as there is a time lag between dataset results there is time to exploit it.
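
If you want to hunt for that kind of lead-lag relationship yourself, a minimal sketch is to slide one series against the other and look for the lag with the strongest correlation (both series below are invented):

```python
import numpy as np

# Hypothetical monthly series: housing sales and Home Depot-style revenue.
housing = np.array([100, 105, 98, 110, 115, 108, 120, 118, 112, 125])
revenue = np.array([50, 49, 52, 48, 47, 49, 45, 46, 48, 44])

def lagged_corr(x, y, lag):
    """Correlation of x at time t with y at time t + lag."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# A strong correlation at a nonzero lag is the exploitable window.
for lag in range(4):
    print(f"lag {lag}: r = {lagged_corr(housing, revenue, lag):+.2f}")
```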

So in the case of the health of the economy, what other leaves could we be watching with our all-access pass to the data-sphere? How about a healthy neighborhood index? It could consist of:

  • Credit card receipts for all the businesses in the ‘hood (local business revenue)
  • Utility late payments (resident disposable cash)
  • Major crime statistics (gun violence, home invasions, assaults)
  • Building permits issued (new development)
  • Employment by local businesses (growth, stability)
  • Aggregate savings growth
  • Property maintenance complaints filed
  • Utility events in the last time period (power outage, water quality, etc)
  • Solar installations (environmental consciousness)
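
A minimal sketch of how such an index might be assembled: normalize each signal to a common scale, flip the ones where higher is worse, and average them (all values, ranges, and the equal weighting are hypothetical):

```python
# Hypothetical neighborhood health index: normalize each signal to 0-1,
# invert the "higher is worse" ones, then take a simple average.
signals = {
    # name: (raw value, min, max, higher_is_better)
    "local_business_revenue": (1.2e6, 0, 2e6, True),
    "utility_late_payments":  (140, 0, 500, False),
    "major_crimes":           (12, 0, 100, False),
    "building_permits":       (35, 0, 60, True),
    "solar_installations":    (22, 0, 50, True),
}

def score(value, lo, hi, good):
    s = (value - lo) / (hi - lo)
    return s if good else 1 - s

index = sum(score(*v) for v in signals.values()) / len(signals)
print(f"Neighborhood health index: {index:.2f}")  # 0 = struggling, 1 = thriving
```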

None of these ask residents how healthy their neighborhood is. NPS is a great proxy, but redemption of referral codes is better. Leaves don’t lie. People can say that being green or putting money away for retirement is very important to them, but counting solar panels, recycling bins, and 401k balances is the truth.

L&D some days feels paralyzed by not having the direct answer to the question, “did the learning make a difference?” Too many levels removed. Too many variables in motion. So we say we don’t have the data. No data, no credit. But it is a binary, directional question. Precision is not required, just a solid proxy that meets our requirements, especially the red-face one. Look for meaningful correlations. Not the “umbrella sales go up every time it rains” kind.

Longer sales calls could be an indication of better customer engagement skills. Lower T&E expenses could indicate stronger adoption of Zoom. Increased sticky-note purchases could be the basis for a corporate innovation index, combined with cross-department meetings (pulled from the attendee lists in our work calendars) and TED.com views from the work internet (IT has this, just like they have your entire browser history).

If you want or need data to be the star of your show, it is out there.  Sometimes you just need to not look directly at it. Some people say that selling is not convincing but rather helping a customer do the right thing.  Data can do both.  It can convince and help. How it is used is up to those of us who use it.

“Use your powers for good, you will.” – Yoda?

Part 2 – What’s My Backstory? [coming soon]

Catching Lightning on the Back-of-an-Envelope

“How far away is it?” Depending on the reason for the question, the precision of the answer has a different value.  “No more than a mile” may be specific enough for you to make the decision between walking and grabbing an Uber.  Every tenth of a mile may make a huge difference if you are wondering if you have enough gas to get to the next service station.  Different uses for the results of a query help to define the valuable level of precision. I love the hacks, shortcuts, and rules-of-thumb that relieve me from spending energy on precision that is not valued.

One Mississippi… Two Mississippi…

I will let the National Weather Service explain one of the most well-known guesstimates. It is also one where the precision of the answer matches the precision the question requires.

“Since you see lightning immediately and it takes the sound of thunder about 5 seconds to travel a mile, you can calculate the distance between you and the lightning. If you count the number of seconds between the flash of lightning and the sound of thunder, and then divide by 5, you’ll get the distance in miles to the lightning: 5 seconds = 1 mile, 15 seconds = 3 miles, 0 seconds = very close.”

So it is not a terribly precise measure. It actually takes only a little over 4.8 seconds for sound to travel a mile. Based on that difference alone, a perfect count would still be off by half a football field for every “mile” counted. And that is not even including the many variations of “Mississippi” in spoken timekeeping. But who cares? No one cares!

The questions that this guesstimate seeks to inform are equally imprecise. In answering “how far away is the storm?”, fifty yards is hardly relevant. The actions taken to prepare for a storm that is three miles away are identical to those for one that is only two and a half miles away. “Is the storm moving towards us?”, the other question frequently informed by this data, is equally imprecise. It is simply a directional measure: did I count fewer Mississippis this time versus the time before? Having more precision adds little value to answering the question.
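
For the curious, the arithmetic behind “no one cares” is easy to check; this sketch uses the 4.8 seconds-per-mile figure from above (the true figure varies with air temperature):

```python
# How far off is the "divide by 5" rule against a more exact figure?
SECONDS_PER_MILE = 4.8  # approximate; ~4.7 at 20 C, closer to 4.8 in cold air

for count in (5, 10, 15):
    rule_miles = count / 5
    actual_miles = count / SECONDS_PER_MILE
    error_yards = (actual_miles - rule_miles) * 1760
    print(f"{count:>2}s count: rule {rule_miles:.1f} mi, "
          f"actual {actual_miles:.2f} mi, off by ~{error_yards:.0f} yd")
```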

The ROI Misalignment

I am thinking about this alignment as I read yet another article promising to deliver a straightforward method for capturing ROI. I appreciate the noble quest but wonder if it is really needed. What are the questions seeking input? What precision is valuable to these questions?

Binary, one-time questions are what come to mind first. “Is this a worthwhile investment?” is a simple yes-or-no question that does not require a highly specific percentage to be calculated. Confidence that I am going to at least get my money’s worth may be the hurdle to be cleared. The difference between 125% and 140% is negligible for confidence building.

There is a slide that used to be part of the standard startup pitch deck that made me cringe. The slide’s objective was to reduce the perceived risk associated with competition, prove the size of the market, and get potential investors excited about the startup’s potential. We called it the 1% slide. Often it was little more than a large pie chart showing the multibillion-dollar market size with a small 1% slice. The slide’s commentary always included some variation of, “and if we are only able to capture 1% of the market, we are still a $400 million business.” Translated: “even if we suck, you win.”

So rather than saying a certain learning initiative has a certain return, perhaps all we need to do is show that the return clears some hurdle. The equivalent of the 1% slide. A blog on what is called the kaizen method sums it up this way:

“It might not seem like much, but those 1% improvements start compounding on each other.”

What L&D needs is a simple back-of-the-envelope calculation that allows it to confidently say to our business sponsors, “even if the initiative only moves the [profit/revenue] needle 1%, we still show a positive return on your investment.” In my next blog I will lay out my back-of-the-envelope (BOTE) calculation. Spoiler alert: even if we suck, the business wins. I look forward to your feedback and suggestions.
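
To preview the shape of that calculation, here is a minimal sketch; every number in it is a placeholder, not the BOTE model itself:

```python
# Back-of-the-envelope hurdle check: does a 1% revenue lift cover the
# cost of the initiative? All figures are hypothetical placeholders.
annual_revenue = 50_000_000     # business unit revenue
needle_move = 0.01              # the "even if we suck" 1% assumption
program_cost = 250_000          # design + delivery + participant time

incremental = annual_revenue * needle_move
roi = (incremental - program_cost) / program_cost
print(f"1% lift = ${incremental:,.0f}; ROI at that hurdle = {roi:.0%}")
```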

Bruce Lee on L&D Data


“I am but a finger pointing to the moon. Don’t look at me; look at the moon.”

Love me some Bruce Lee.

I began this weekend’s mental wanderings with a thought that maybe, just maybe, when it comes to discussions on data, the issue may be a “finger/moon” issue. For those not schooled in the ways of Enter the Dragon, allow me to bring you up to speed. In the movie, Sensei Lee is instructing his pupil on kicking, exhorting his student to show “more emotional content.” Maybe there are future Bruce Lee blog posts coming. Maybe ATD should re-issue this movie with associated CPEs. When he sits down to hammer home the lessons of the day, he explains to his student that,

“the finger is useful because of what it points us toward, not as an object of study for its own sake.”

Thanks to FakeBuddhaQuotes.com for the perfect summary. Upon reflection, I am now convinced we have a finger/moon situation going on. And here is why we should care. The peaceful Essence of Buddhism blog gives readers the big three reasons not to just look at the finger:

  1. You’ll miss the moon
  2. You think the finger is the moon
  3. You don’t know what is naturally bright (has enlightenment) vs what is naturally dark (lacks enlightenment)

We will leave #3 to others to ponder.  But #1 and #2 need some more time in the dohyō.

The moon is beautiful. Don’t miss it!

How do you know how fast you are going in your car?

How does your car know? Sensor on hub? Sensor on axle? GPS movement? Transferred on a tension wire? Onboard calculation?

Without knowing about “the moon” you can’t validate or invalidate a reading. You can’t know the impact on the odometer, speedometer, and other measures of changing out the axle for a thicker one or getting the big rimless tires. And by understanding the moon, you are able to draw the connections and queries that lead to actionable insight.

This is where most data conversations get awkward. Most people don’t know the source data, and so the conversation starts to sound like an interrogation. But it is just genuine curiosity. There is a lot of recent talk about the importance of curiosity; feel free to get curious and go find some of those great articles. Scorecards are great, but if you don’t trust where the numbers are coming from or don’t understand the calculations used, you can’t understand why an initiative may or may not move the needle. Key data points often go undefined, and it is only through this curiosity that the questions that need answers get the needed attention.

Let’s look at a simple question like, “How many FTEs did your company have last year?” Answering this is not as straightforward as it may seem. For example, how does your FTE answer treat elements such as working days per year (220? less, due to vacation policy?) and hours per day (8? 6.5?)? Answers to these questions open a range of >23%. In a 10,000-person company, that is 2,300 jobs that can make a big impact on any metric. Now the big moonbeam here is that FTE is part of the calculation of a ton of numbers. Wherever you see the lovely phrase “per employee”, there FTE is, somewhere in the Excel spreadsheet…giving you the finger.
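
A quick sketch of that swing, using the day and hour ranges above:

```python
# How much can "FTE" move based on definition alone?
lo_hours = 220 * 6.5   # short day, standard year -> 1,430 hours per FTE
hi_hours = 220 * 8.0   # full day, standard year  -> 1,760 hours per FTE

spread = hi_hours / lo_hours - 1
print(f"Definition spread: {spread:.0%}")                                 # ~23%
print(f"In a 10,000-person company: ~{10_000 * spread:,.0f} jobs of ambiguity")
```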

The key to avoiding #1 is simple: get curious, ask questions, and you will be rewarded. And if you are wondering what the answer to the speedometer question is, click here. Warning: the answer is really cool and, while it feels a little bit overcomplicated (showing off?), it is still awesome.

“Don’t look at me; look at the moon.”

Ok so let me start this round out by assuming the following:

  • You are not/no longer suffering from #1
  • You are “nice”
  • Your boss is only watching the finger (all she has time for? understanding of?)

Taking #1 off the table saves us a bunch of time. Many New Orleans restaurants/bars have a sign somewhere in their establishment that simply says, “Be nice or leave.”  I agree.  If you want to game the numbers go ahead.  Most good thieves have a deep understanding of the numbers so a slight hat tip to them. But since embezzlement and theft are not nice, they are out.

The last one…I will simply say this: I get it. If the finger is my $scorecard$ then yes, I will look at the finger. Ignoring this dynamic is not going to help. We are all grown-ups and can talk about this stuff, right? I wish business execs all wore their scorecards like handkerchiefs. I could instantly find business alignment and have an idea of the economics on the business leader’s side. A high-impact learning event delivered in an area that moves a business sponsor’s personal scorecard is more valuable than one for a non-scorecard business unit. Eye of the beholder and all; it just is.

And then there is 70/20/10

Please reconsider the value of this metric today. Blindly measuring blend (delivered, available) is not valuable. Like Malcolm Gladwell’s 10,000 hours, we love clear finish lines. However, this part of the finger is my nomination for the most gamed stat in the L&D organization. That what likely started as a slide to justify the costs of a digital library conversion became an industry gold standard for a hot minute is pretty amazing. Someone should map the acceptance of the 70/20/10 concept (google search?) against Skillsoft’s stock price. We all get lazy, and when everyone is yelling 70/20/10 you know where your safe place is. Sometimes we need to remember that there is a moon out there.

As for re-imagining the stat, with the moon on my mind, here are my thoughts. It should still be blend, but from a learner’s perspective. So a typical employee persona (please tell me you have these for your org) sees learning arrive through all these channels at some percentage. The right mix is the one that drives results; just be prepared to defend your mix. By starting from the learner, not the media, we can now follow a valuable path of questioning:

  • How is this mix impacting employee experience?
  • How can mix be improved through scheduling?
  • How does this media mix compare with non-business related learner behaviors?
  • Every channel (online, in flow, etc.) should have channel objectives, quant and qual. How are we doing against those channel objectives?
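
As a starting point, the learner-side mix itself is easy to compute once learning events are logged by channel; here is a minimal sketch with invented event data:

```python
from collections import Counter

# Hypothetical learning events logged for one employee persona.
events = ["formal_course", "elearning", "coaching", "in_flow", "elearning",
          "in_flow", "in_flow", "formal_course", "in_flow", "elearning"]

# The persona's actual blend, by share of learning events.
mix = Counter(events)
total = len(events)
for channel, n in mix.most_common():
    print(f"{channel:>14}: {n / total:.0%}")
```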

Sensei Lee would say to stay curious about the moon and remember to ask why of the finger.  Crazy uncle Elon would ask if we are prepared for Mars.  I would say that all we need to do is to get the boss curious about the moon.

Goofus Data


When I was a kid, one of the only exciting things about going to the dentist was the chance to catch up on my Highlights magazine reading. The children’s magazine is famous for a monthly feature titled “Goofus and Gallant” which showed the behaviors of good children versus those of not-so-good kids.

I was reminded of these cartoons as I sat, frustrated once again, listening to the media and politicians discuss Covid data. If you wanted to put together some real-life Goofus examples for dealing with data, you don’t have to look any further than the local or network news. From “garbage in, garbage out” to mistaking the data as the end and not an input to a deeper insight, Goofus seems to be hard at work daily.

Don’t have unclear/inconsistent reporting standards.

What is the definition of a Covid death? When do numbers get reported (even on Sunday?)

Don’t focus on the wrong data.  

Infection count is only useful or important in the context of audience size or tests conducted.

Don’t look at daily data if the system operates on a different time scale.

We know there is a lag between action and impact with Covid. Would a rolling 14-day average be more useful for planning and trend analysis? (A minimal sketch follows this list of don’ts.)

Don’t lose the message in averages.

Pull out a few early states, or remove the elephant that is New York and watch how the chart of the country’s battle changes.

Don’t use the wrong units.

Percentages can be a marketer’s friend (100% growth of a small number sounds better than the actual number), but sometimes they are also the best way to understand the data. Percentage (%) of beds in use, versus number (#) of hospitalizations, is more readily understandable when ICU beds are a key capacity constraint.
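
On the rolling-average point above, a minimal sketch with invented daily counts:

```python
# Smooth noisy daily counts with a trailing 14-day average.
daily = [120, 90, 340, 80, 60, 310, 290, 150, 95, 400, 88, 70, 330, 300,
         160, 100, 410, 92, 75, 345, 310]  # hypothetical reported cases

window = 14
rolling = [sum(daily[i - window + 1:i + 1]) / window
           for i in range(window - 1, len(daily))]
print([round(r) for r in rolling])  # the trend, without the day-to-day noise
```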

Those are just some of my daily irritants. And don’t get me started on false positive rates or how an exponential function works (just watch this old shampoo commercial: https://youtu.be/mcskckuosxQ).

Do you work with data?  What would you add?

The Tail Will Wag the Dog

David Vance recently did a webinar regarding the pending legislation requiring public companies to report human capital metrics. I cannot state strongly enough the potential implications of this move, a move which I feel equally strongly is being largely ignored by my L&D colleagues. I do not have a crystal ball, but simply applying the dynamics of other publicly reported numbers may help to clarify.

Publicly Reported Numbers Get C-Suite Attention

L&D has long asked for it, but is it ready for its close-up? L&D’s current data reporting may make the industry feel good, but will it stand up to the scrutiny given to financial data reporting? As a CLO, can you sit with your CFO and defend the numbers, the methodology of collection, and the actions taken as a result of them? Financial numbers (margins, expense, key ratios) are never good enough and always come with a plan for improvement.

Transparency

Having the numbers out there without context is going to create some interesting dynamics. The L&D metrics for a company are highly contextual; this is something I have long argued as a mitigating force against the use of benchmarks. Most companies are a cohort of one. The growth goals, competitive environment, geographic challenges, and legacy paradigms are just a few of the things that can make a company’s metrics right for them and them alone. Without this context, L&D may face pressure from new external sources and ill-informed senior internal ones.

Teaching the Test

If the metrics become the face of L&D, the natural response is to game them. This is no different than sales organizations that pull sales forward to make a quarter look better or an operations department that delays a purchase to manage costs. When what gets delivered is in pursuit of two masters (performance and metrics) and one is highly visible, which one do you think wins?

Short-term Thinking

There are many who decry the behavior of public companies driven by a quarter-by-quarter mentality. We know that performance development occurs over time. How will our approaches to leadership, diversity, and upskilling change when they are held to the 90-day window of reporting? And that is not to mention what happens if the metrics are simply wrong. We all know that vanity metrics are a constant, although comforting, threat to true performance development.

What do you think will happen when L&D goes from opt-in, self-reported numbers given to a friendly industry organization to a federal requirement? Is your organization ready?

The Math of Upskilling

The case for learning versus hiring has long been a topic of discussion. With the recent job market as tight as ever, the conversation continues. Just this week Josh Bersin (or as I call him, “JB”, not because I know him that well, just because it sounds cool) released the highlights of a study done with three firms that concluded,

“It can cost as much as 6-times more to hire from the outside than to build from within.” – JB

I can take issue with the phrase “as much as”, since I have a dog who can be obedient “as much as” half the time. Or perhaps the sample size: only three companies, in different industries. Or maybe that the study used highly paid jobs ($150k+ salary) to joust at. But none of that will stop the industry from using this stat widely. This may be fine at L&D conferences, but try it with a CFO and you had better be prepared with the math.

So that you know I am not picking on JB (who I think is the Seth Godin of Human Capital), the issue I have is with reports that don’t stay loyal to the kind of math that has credibility with finance folks. For some, simply citing a case study with a recognizable company may be enough. For me it is not. And for my own learning, this blog is my attempt to take Jane Bozarth’s work-out-loud approach and show my work.

“And showing what we’re doing—narrating our work in a public way—helps make learning more explicit.” – the other JB

The Case for Upskilling

We start with a simple comparison of costs to determine value. If the result is positive, then reskilling wins. If not, hire away.

The value of reskilling (V) = Cost of New Hire (CN) – Cost of Reskilling (CR)

Seems simple enough, but the devil is in the details. So let’s break it down further.

V = [Cjn + Chn + Cpn + Co + Cs] – [Cjx + Chx + Cpx + Cu]

The cost of new hire (CN) equals:

  • Cost of job opening (Cjn) plus
  • Cost of new hire (Chn) plus
  • Cost of lost productivity (Cpn) plus
  • Cost of onboarding (Co) plus
  • Cost of redundancy/severance (Cs)

The cost of reskilling (CR) equals:

  • Cost of job opening (Cjx) plus
  • Cost of transfer hire (Chx) plus
  • Cost of lost productivity (Cpx) plus
  • Cost of upskilling (Cu)
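
As a minimal sketch of the comparison in code (every component cost below is a hypothetical placeholder; my real assumptions come in the next post):

```python
# V = Cost of New Hire (CN) - Cost of Reskilling (CR); positive favors reskilling.
# All component costs are hypothetical placeholders.
new_hire = {
    "Cjn": 5_000,    # cost of job opening
    "Chn": 30_000,   # cost of new hire (recruiting fees, etc.)
    "Cpn": 40_000,   # lost productivity while ramping up
    "Co":  10_000,   # onboarding
    "Cs":  25_000,   # redundancy/severance for the displaced employee
}
reskill = {
    "Cjx": 2_000,    # cost of internal job opening
    "Chx": 5_000,    # cost of transfer hire
    "Cpx": 20_000,   # lost productivity during transition
    "Cu":  15_000,   # upskilling program
}

V = sum(new_hire.values()) - sum(reskill.values())
print(f"Value of reskilling: ${V:,}  ->  {'reskill' if V > 0 else 'hire'}")
```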

This approach leaves some very real variables out:

  • Calculation does not include fully loaded employee costs (benefits, occupancy, equipment, etc.). This is assumed to be a wash between CR and CN.
  • Does not include quantifiable costs associated with loss of investor confidence due to layoffs, which would likely show up in stock price.
  • Does not include quantifiable costs associated with loss of employee/candidate confidence due to layoffs, such as unplanned attrition, longer time to hire, and reduction in candidate quality.
  • Does not include the 2X-3X higher turnover rate for new hires used by JB for his calculation.

Please let me know what I have missed and how this calculation can be more valid and useful. In my next post I will further break down each of these costs, insert some assumptions (cost of onboarding/upskilling, recruiter fees, time to productivity, etc.), and share the Excel spreadsheet plus the results it spits out.

 

L&D is a Master of VR

When Ed and David released Running Training Like a Business (RTLAB), it was clear to many that the industry needed a new way of looking at not just how and what we were training employees, but why. The book aspired to take the industry discussion up a level, away from the micro of courses, design methodologies, and technology to the macro and meta. The book encouraged a turn inward, away from the course and curricula, towards the creator: the L&D organization itself. What the factory was designed for predetermined what the output was. Transforming the organization would transform the output and the value it produced.

In 2010, when I started writing the Learning Hacks blog as a way to capture my musings on L&D, I began with a post entitled “The Spark That Started It All” (the working title can still be seen in the post’s URL). It expressed my disappointment that many of the challenges described in RTLAB, over a decade prior, remained unaddressed. In my book Running Training Like a Startup I cite one of my favorite Ed Trolley quotes, a quote that was validated in many of the assessments we did for clients around the world.

“Business leaders have low expectations of training. And they are being met.”

-Ed Trolley

Yesterday, Harvard Business Review released an article entitled “Where Companies Go Wrong with Learning and Development” that put things in clear perspective. In it, Steve Glaveski highlights recent studies that show:

  • 75% of 1,500 managers surveyed from across 50 organizations were dissatisfied with their company’s Learning & Development (L&D) function;
  • 70% of employees report that they don’t have mastery of the skills needed to do their jobs;
  • Only 12% of employees apply new skills learned in L&D programs to their jobs; and
  • Only 25% of respondents to a recent McKinsey survey believe that training measurably improved performance.

Glaveski nets it out this way, “Not only is the majority of training in today’s companies ineffective, but the purpose, timing, and content of training is flawed.”  I don’t disagree.

While the L&D community holds conferences dominated by sessions on how to create compelling PowerPoint title slides, the use of chatbots, and incorporating podcasting into a curriculum, the businesses they support keep moving and changing, desperate for employees who can perform. In the late 90s I was tasked to lead a project for Microsoft. At the time they were under intense scrutiny for monopolistic practices. It was also a time when Fred Reichheld (who would later create the Net Promoter Score) released The Loyalty Effect, debunking the marketer’s “top-box” approach to assessing satisfaction. I won’t go into it here, but when retention does not drop off as satisfaction goes down, there are other market forces at play. High switching costs, tie-ups, and lack of alternatives can be some of those drivers. The retention results don’t reflect the satisfaction of customers (it may make things worse because they feel trapped), but they do give the provider an extremely distorted view of how it is performing.

More than 20 years after the release of RTLAB, the data on L&D’s customer satisfaction continues to come in. While the L&D industry focuses on budget amounts, spend per employee, and other “vanity metrics”, the HBR article clearly shows it is long overdue for the learning organizations delivering leadership training to take a leadership role. For the L&D groups supporting innovation initiatives to innovate. For the industry, as a whole, to take off the goggles and stop living in its virtual reality world.