Catching Lightning on the Back-of-an-Envelope

“How far away is it?” Depending on the reason for the question, the precision of the answer has a different value.  “No more than a mile” may be specific enough for you to make the decision between walking and grabbing an Uber.  Every tenth of a mile may make a huge difference if you are wondering if you have enough gas to get to the next service station.  Different uses for the results of a query help to define the valuable level of precision. I love the hacks, shortcuts, and rules-of-thumb that relieve me from spending energy on precision that is not valued.

One Mississippi… Two Mississippi…

I will let the National Weather Service explain one of the most well-known guesstimates. It is also one where the level of precision in the answer matches the precision of the question being asked.

“Since you see lightning immediately and it takes the sound of thunder about 5 seconds to travel a mile, you can calculate the distance between you and the lightning. If you count the number of seconds between the flash of lightning and the sound of thunder, and then divide by 5, you’ll get the distance in miles to the lightning: 5 seconds = 1 mile, 15 seconds = 3 miles, 0 seconds = very close.”

So it is not a terribly precise measure. It actually takes only a little over 4.8 seconds for sound to travel a mile. Based on that difference alone, a perfect count would still be off by half a football field for every “mile” counted. And that is not even including the many variations of “Mississippi” in spoken timekeeping. But who cares? No one cares!
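The gap between the rule of thumb and the physics is tiny arithmetic. A quick sketch, using the 4.8-seconds-per-mile figure above (the exact value varies with temperature, and the counts are just examples):

```python
# Back-of-the-envelope flash-to-bang estimate vs. a slightly better figure.
SECONDS_PER_MILE_RULE = 5.0    # the "count and divide by 5" rule
SECONDS_PER_MILE_SOUND = 4.8   # closer to sound's actual travel time

def storm_distance_miles(seconds_counted, seconds_per_mile=SECONDS_PER_MILE_RULE):
    """Distance to the lightning, given the flash-to-thunder count."""
    return seconds_counted / seconds_per_mile

count = 15  # fifteen Mississippis
rule_estimate = storm_distance_miles(count)                           # 3.0 miles
better_estimate = storm_distance_miles(count, SECONDS_PER_MILE_SOUND)  # 3.125 miles

# The rule is off by a few hundred feet -- irrelevant for deciding
# whether to head indoors.
error_feet = (better_estimate - rule_estimate) * 5280
```

Either number sends you inside; the extra precision buys nothing.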

The questions that this guesstimate seeks to inform are equally imprecise. In answering, “how far away is the storm?” fifty yards is hardly relevant. The actions taken to prepare for a storm that is 3 miles away are identical to those for one that is only two and one-half miles away. “Is the storm moving towards us?”, the other question frequently informed by this data, is equally imprecise. It is simply a directional measure. Did I count fewer Mississippis this time versus the time before? Having more precision adds little value to answering the question.

The ROI Misalignment

I am thinking about this alignment as I read yet another article promising to deliver a straightforward method for capturing ROI. I appreciate the noble quest but wonder if it is really needed. What are the questions seeking input? What precision is valuable to these questions?

Binary, one-time questions are what come to mind first. “Is this a worthwhile investment?” This is a simple yes-or-no question that does not require a highly specific percentage to be calculated. Confidence that I am going to at least get my money’s worth may be the hurdle to be cleared. The difference between 125% and 140% is negligible for confidence building.

There is a slide that used to be part of the standard startup pitch deck that made me cringe. The slide’s objective was to reduce the perceived risk associated with competition, prove the size of the market, and get potential investors excited about the startup’s potential. We called it the 1% slide. Often it was little more than a large pie chart showing the multibillion-dollar market size with a small 1% slice. The slide’s commentary always included some variation of, “and if we are only able to capture 1% of the market we are still a $400 million business.” Translated, “even if we suck, you win.”

So what if, rather than claiming that a certain learning initiative has a certain return, all we need to do is show that the return clears some hurdle? The equivalent of the 1% slide. A blog on the kaizen method sums it up this way:

“It might not seem like much, but those 1% improvements start compounding on each other.”

What L&D needs is a simple back-of-the-envelope calculation that allows it to confidently say to our business sponsors, “even if the initiative only moves the [profit/revenue] needle 1% we still show a positive return on your investment.” The next release of the book, set for early September, will detail some straightforward math to support this. Spoiler alert: even if we suck, the business wins. I look forward to your feedback and suggestions.
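The envelope math can be sketched in a few lines. Every figure below is a hypothetical placeholder, not the book’s actual model:

```python
# Hedged sketch: does a 1% move of the needle cover the program's cost?
# All numbers are invented for illustration.

def clears_one_percent_hurdle(annual_metric, program_cost, lift=0.01):
    """True if a `lift` (default 1%) improvement in the business metric
    exceeds the total investment in the learning initiative."""
    return annual_metric * lift >= program_cost

business_unit_profit = 50_000_000   # hypothetical sponsor's annual profit
initiative_cost = 350_000           # hypothetical fully loaded program cost

# Even if we suck: a 1% profit lift is $500,000, which clears $350,000.
clears_one_percent_hurdle(business_unit_profit, initiative_cost)  # True
```

The output is a yes/no, which is exactly the precision the sponsor’s question values.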

 

Bruce Lee on L&D Data


“I am but a finger pointing to the moon. Don’t look at me; look at the moon.”

Love me some Bruce Lee.

I began this weekend’s mental wanderings with a thought that maybe, just maybe, when it comes to discussions on data, the issue may be a “finger/moon” issue. For those not schooled in the ways of Enter the Dragon, allow me to bring you up to speed. In the movie, Sensei Lee is instructing his pupil on kicking. He is exhorting his student to show “more emotional content.” Maybe there are future Bruce Lee blog posts coming. Maybe ATD should re-issue this movie with associated CPEs. When he sits down to hammer home the lessons of the day, he explains to his student that,

“the finger is useful because of what it points us toward, not as an object of study for its own sake.”

Thanks to FakeBuddhaQuotes.com for the perfect summary. Upon reflection, I am now convinced we have a finger/moon situation going on. And here is why we should care. The peaceful Essence of Buddhism Blog gives readers the big three reasons not to just look at the finger:

  1. You’ll miss the moon
  2. You’ll think the finger is the moon
  3. You don’t know what is naturally bright (has enlightenment) vs what is naturally dark (lacks enlightenment)

We will leave #3 to others to ponder.  But #1 and #2 need some more time in the dohyō.

The moon is beautiful. Don’t miss it!

How do you know how fast you are going in your car?

How does your car know? Sensor on hub? Sensor on axle? GPS movement? Transferred on a tension wire? Onboard calculation?

Without knowing about “the moon” you can’t validate or invalidate a reading. You can’t know the impact of swapping the axle for a thicker one, or of putting on big rimless tires, on the odometer, speedometer, and other measures. And by understanding the moon, you are able to draw the connections and queries that lead to actionable insight.

This is where most data conversations get awkward. Most people don’t know the source data, and so the conversation starts to sound like an interrogation. But it is just genuine curiosity. There is a lot of recent talk about the importance of curiosity. Feel free to get curious and go find some of these great articles. Scorecards are great, but if you don’t trust where the numbers are coming from, or don’t understand the calculations used, you can’t understand why an initiative may or may not move the needle. The definitions of key data points are often left undefined, and it is only through this curiosity that the questions that need answers get the needed attention.

Let’s look at a simple question like, “How many FTE did your company have last year?” Answering this question is not as straightforward as it may seem. For example, how does your FTE answer treat elements such as working days per year? (220? Fewer, due to vacation policy?) Hours per day? (8? 6.5?) Answers to these questions open a range of more than 23%. In a 10,000-person company, that is 2,300 jobs that can make a big impact on any metric. Now the big moonbeam here is that FTE is part of the calculation of a ton of numbers. Wherever you see the lovely phrase “per employee,” there FTE is, somewhere in the Excel spreadsheet…giving you the finger.
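The swing in that range is pure arithmetic. A quick sketch using the day and hour assumptions mentioned above, and the 10,000-person company from the example:

```python
# How different "FTE" definitions move headcount-based metrics.

def fte_hours(working_days, hours_per_day):
    """Annual working hours implied by one FTE definition."""
    return working_days * hours_per_day

generous = fte_hours(220, 8.0)   # 1,760 hours per FTE
lean = fte_hours(220, 6.5)       # 1,430 hours per FTE

spread = generous / lean - 1            # ~0.23, i.e. a >23% range
equivalent_jobs = round(10_000 * spread)  # ~2,300 jobs of wiggle room
```

The hours-per-day choice alone opens the 23% gap; layering on the vacation-policy question only widens it.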

The key to avoiding #1 is simple. Get curious. Ask questions. You will be rewarded. And if you are wondering what the answer to the speedometer question is, click here. Warning: the answer is really cool, and while it feels a little overcomplicated (showing off?), it is still awesome.

“Don’t look at me; look at the moon.”

Ok so let me start this round out by assuming the following:

  • You are not/no longer suffering from #1
  • You are “nice”
  • Your boss is only watching the finger (all she has time for? understanding of?)

Taking #1 off the table saves us a bunch of time. Many New Orleans restaurants/bars have a sign somewhere in their establishment that simply says, “Be nice or leave.”  I agree.  If you want to game the numbers go ahead.  Most good thieves have a deep understanding of the numbers so a slight hat tip to them. But since embezzlement and theft are not nice, they are out.

The last one…I will simply say this. I get it. If the finger is my $scorecard$ then yes, I will look at the finger. Ignoring this dynamic is not going to help. We are all grown-ups and can talk about this stuff, right? I wish business execs all wore their scorecards like handkerchiefs. I could instantly find business alignment and have an idea of the economics on the business leader’s side. A high-impact learning event delivered in an area that moves a business sponsor’s personal scorecard is more valuable than one for a non-scorecard business unit. Eye of the beholder and all, it just is.

And then there is 70/20/10

Please reconsider the value of this metric today. Blindly measuring blend (delivered, available) is not valuable. Like Malcolm Gladwell’s 10,000 hours, we love clear finish lines. However, this part of the finger is my nomination for the most-gamed stat in the L&D organization. That what likely started as a slide to justify the costs of a digital library conversion became, for a hot minute, an industry gold standard is pretty amazing. Someone should map the acceptance of the 70/20/10 concept (Google search trends?) against Skillsoft’s stock price. We all get lazy, and when everyone is yelling 70/20/10 you know where your safe place is. Sometimes we need to remember that there is a moon out there.

As for re-imagining the stat, with the moon on my mind, here are my thoughts. It should still be blend, but from a learner’s perspective: a typical employee persona (please tell me you have these for your org) sees learning from all of these channels at these percentages. The right mix is the one that drives results; just be prepared to defend your mix. By starting from the learner, not the media, we can follow a valuable path of questioning:

  • How is this mix impacting employee experience?
  • How can mix be improved through scheduling?
  • How does this media mix compare with non-business related learner behaviors?
  • Every channel (online, in flow, etc.) should have channel objectives, quantitative and qualitative. How are we doing against those objectives?
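One hedged way to start measuring blend from the learner’s side rather than the catalog’s: tally where a persona’s actual learning time goes. The channel names and hours below are invented for illustration only:

```python
from collections import defaultdict

# Hypothetical learning-time log for one employee persona:
# (channel, hours) pairs -- invented numbers.
learning_events = [
    ("on_the_job", 40), ("coaching", 12), ("online_course", 6),
    ("on_the_job", 25), ("classroom", 8), ("online_course", 4),
]

def learner_mix(events):
    """Percentage of learning time per channel, from the learner's view."""
    totals = defaultdict(float)
    for channel, hours in events:
        totals[channel] += hours
    all_hours = sum(totals.values())
    return {ch: round(100 * h / all_hours, 1) for ch, h in totals.items()}

mix = learner_mix(learning_events)
# Compare the observed mix against each channel's stated objectives,
# not against a blanket 70/20/10 target.
```

Whatever mix falls out is the starting point for the questions above, not a finish line to hit.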

Sensei Lee would say to stay curious about the moon and remember to ask why of the finger.  Crazy uncle Elon would ask if we are prepared for Mars.  I would say that all we need to do is to get the boss curious about the moon.

Rotting Boards

L&D Advisory boards need a fresh look.

The current geo-political landscape aside, the topic of governance, with its advice, consent, and oversight functions, remains at the forefront of discussions for L&D. In the book Running Training Like a Startup I cite a report done by NIIT and CorpU to find out how learning organizations have adopted the core principles of van Adelsberg and Trolley’s book. Two findings from the report jumped out at me in the area of governance. The study found that:

  • The use of governing boards with C-level executives to ensure strategic alignment is fine in theory but has little effectiveness in practice. Sixty percent (60%) of the companies have them to some degree, but only 10% deemed them highly effective.
  • The use of more tactical advisory boards, while slightly less prevalent (59%), is more useful in that 17% of the respondents deem them highly effective. 

While this study is now aged, my recent discussions with clients have confirmed that this remains a sub-optimized tool for learning organizations. While many have a documented governance structure, far fewer are gaining the benefit. Most of L&D seems to acknowledge the generally accepted three-tier model for governance: executives form the strategic level, with business heads and line managers operating in increasingly tactical functions at the lower levels.

Earlier this year, Training Industry Magazine published a “Learning Governance Framework Cheat Sheet” that had been developed by Kaplan. While fairly common sense, lots of “uh-huh” and not a lot of “ohhh,” one pithy item on the checklist stuck with me, as much for the pithiness as for the “how?” questions that remained unaddressed.

“Construct a governance structure that is inclusive, agile, and commercially pragmatic with senior leader advocacy.”

I am currently working to expand the governance section of the book based on recent experience and research.  But, for those that are currently operating advisory boards at any level here are a few questions worth considering.  These are taken and adapted from a set that Jerry Colonna, executive coach to founders and author of Reboot, uses to assess the boards of startups.

  1. When the shit hits the fan, which of your advisory board members would you turn to and why?
  2. If your advisory board was your executive team, what experiences or temperaments are missing?
  3. What skills would you like to see on your advisory board?
  4. How do non-advisory board executives and managers view your current advisory board?

In today’s fast-moving business environment, L&D cannot afford to squander this critical element of driving unmistakable value for the companies it serves. How are you using advisory boards, and what challenges are you facing in getting the most out of them?

 

A Model Proposal

In Running Training Like a Business, we introduced a framework called the Dynamic Business Scorecard (DBS). The DBS was developed by Bill Fonvielle (@wfonviel). Bill saw that, while the whole world was enamored with Kaplan and Norton’s work on the balanced scorecard, metrics were most useful when the drivers and interconnectedness of the various elements were made visible. Because the DBS measures the drivers of results, not simply the results themselves, diagnosis and communication became more straightforward affairs. People knew how they impacted the overall results, and when something went wrong, root cause was easier to determine.

We believe that the new leaders in learning will be characterized as the experimenters: those organizations that try, fail, and try some more; those that acknowledge that, in this period of rapid change, the best way to serve their company is to embrace a process of rapid releases and a quick, iterative feedback cycle. Not just for the learning but for the learning organization itself. For this reason, we believe the framework by which we view learning organizations must support this as well.

Many of today’s fastest growing startups have addressed this by abandoning multi-year business plans in favor of the Business Model Canvas (BMC). The BMC was developed by Alexander Osterwalder, a Swiss business theorist, consultant and author of Business Model Generation.

The canvas is a powerful tool for focusing, creating a common language, making the system visible, and driving stakeholder alignment. The BMC is also flexible, serving as an assessment tool, an action checklist, a communication tool, and an ongoing scorecard. The BMC is a living document, continuously evolving in this fast-moving world of business. Taking the learnings from the BMC, we believe a new framework for thinking about a company’s learning organization is in order.

The Learning Model Canvas

Similar to the BMC, our Learning Model Canvas is anchored by the value proposition we described in our post on the new role of the CLO. The model contains five core areas and is best developed and described from the center (Products & Services) out. For each area, we have provided a brief description and offered a few quick questions that may help you better understand the current state of your organization’s model.

RTLABv2 Model

Products & Services

This area brings the value proposition to life. What you are offering drives the delivery of the value proposition to your customers and defines what your service delivery model looks like. This area covers not just the products and services delivered but also the management of them, including requirements for specifications, quality, and impact.

  • What unique value do we offer our customers?
  • Are our products filling a need of our customer?
  • Do our products deliver that value?
  • How are we managing our products for maximum value?

Customer

A learning organization’s customers fall across multiple segments: learners, managers, and executives, each with its own set of requirements. By focusing on the expectations of each of these customer segments, and the experience they are having, a learning organization can inform its value proposition, optimize its channels to learners, and improve its products to drive better business results for the whole company.

  • Who are we serving?
  • What do they want?
  • What is their experience?
  • How valuable are we to our customer?

Service Delivery Model

This area describes the learning organization that meets the requirements defined by the value proposition and the customer expectations. It includes what needs to be done, how it will be done, and who will do it. It also challenges us to ask what new activities, like curation or social media facilitation, not currently encompassed by the learning organization, need to be included based on what our customers expect.

This area requires us to re-evaluate what we do in-house versus what we outsource. When the way key activities are accomplished is rapidly changing, companies often push the burden of that change onto partners rather than taking it on themselves. A global technology company we worked with had a goal of turning over a percentage of its engineers every year. This allowed them to bring in knowledge of current programming technologies, instead of using outside partners to scale up and down and to gain access to expertise and experience. Using partners, rather than driving turnover, is a much better long-term approach.

  • Do I have the right team to deliver our value proposition?
  • Are we operating at our highest level?
  • Do we have the tools to achieve our mission?
  • Are we using our partnerships to highest value?

Strategy & Governance

Alignment and management of the learning organization are described in this area of the Learning Model Canvas. Through steering committees, KPI dashboards, and communication strategies, this area ensures that the company’s needs are being met in the most efficient and effective ways possible.

  • How is the vision moving the company towards its goals?
  • Have I engaged the right executives?

Investment & Return

This final area captures the value being delivered and the total investment the larger enterprise is making in order to deliver on the value proposition. Frequently, organizations look only at the direct line items related to learning (payroll, travel, third-party spend) and fail to capture the investment made in the form of SME time, productivity lost while away from the job, and more. In one organization, we found that while the ratio of training staff to employees reflected an efficient organization, the company was investing more than the entire training department’s payroll in subject matter experts’ time alone.

  • What are we spending?
  • What are we getting for it?
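A sketch of that fuller investment tally, with every figure below an invented placeholder; the point is that indirect lines like SME time can dwarf the visible budget:

```python
# Hypothetical investment tally for a learning organization.
# Direct lines are what usually gets reported; the indirect lines
# are the ones frequently missed. All numbers are invented.

direct = {
    "training_payroll": 1_200_000,
    "travel": 150_000,
    "third_party_spend": 400_000,
}
indirect = {
    # 200 SMEs x 80 hours each x $100/hour fully loaded
    "sme_time": 200 * 80 * 100,
    "learner_time_away_from_job": 900_000,
}

total_investment = sum(direct.values()) + sum(indirect.values())
# In this sketch, SME time alone (1,600,000) exceeds the training
# payroll line (1,200,000), mirroring the example above.
```

Only with the full denominator can “what are we getting for it?” be answered honestly.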

Still Dynamic

Similar to the BMC, a learning organization’s Learning Model Canvas is meant to evolve as the needs of its customers change. Ongoing conversations with customers, as well as feedback from steering committees and KPIs, must continuously be used to ensure the value proposition continues to deliver unmistakable value.