It’s that time of year where people are dusting off their strategic plans, hosting their parties and strategy workshops, and making sure the KPIs and metrics on their scorecards are the ones they want to be watching in 2021.
But most people really aren’t that religious about measurement systems, or tightly aligning specific actions with the needle they are most likely to move. The goal of “becoming data-driven” usually isn’t accompanied by the discipline and perseverance to make it happen, even though the payoffs are huge.
And none of us are immune to bad metrics, even when those things are really important. Sometimes, a metric is just too emotionally enticing to give up.
I use one bad metric myself, and no matter how bad I know it is, I keep using it to evaluate (one dimension of) my personal value. PSA: It is never good to tie your worth as a human to a metric (any metric). Gen Z may have more luck than us Gen Xers on this one.
My bad metric, the one I can’t emotionally detach from, is number of citations on Google Scholar. And the reason why I’m thinking about it today is because… I just achieved my 2020 goal of adding more citations than I added in 2019!
Here’s why this metric is so terribly bad:
Number of citations is a lagging indicator, and the lag can often be 3-5 years.
By the time the needle moves, it’s hard to figure out exactly what happened to make it move.
There are very few actions I can personally take to make that needle move.
Any actions I do take will be indirect. I can make people more aware of my papers, but can’t force anyone to cite one… so the actions I can take will influence reach, but not citations.
There’s an interesting social dimension to past number of citations. The more citations you have on a paper, the more likely you are to attract additional citations; similarly, the more citations you have, the better SEO you get on sites like Google Scholar. It’s a “fit get fitter” scenario.
I can’t monitor this metric on a weekly or monthly basis. If it dips, I won’t be able to respond by taking an action to restore growth.
I haven’t even thought about using this metric as a signal for when I should take action, because of all the problems I listed above.
I didn’t do anything to achieve my 2020 goal. I just helplessly watched that number creep up, kept my fingers crossed, and (now) celebrated on December 19th when I (just barely) went over the wire before New Year’s.
The calendar is arbitrary anyway. What if I achieved the goal on January 5? Would I feel unaccomplished? Probably yes (this is pathetic)!
Ultimately, I am not in control of citations. I should have picked an intermediary metric that I am in control of… but that’s really difficult, and I’m not in academia any more so I really don’t even need to pay attention to this (another giant problem! Why am I still even paying attention? Attention is expensive!)
My Holiday wish for you is: Select your metrics carefully! Pick ones that are (ideally):
Not limited to lagging indicators with extraordinarily long lags
Monitorable on (at least) a weekly or monthly basis
Designed so that if you aren’t achieving your target level, you can immediately figure out where the problem is happening, and even know how to dig down deeper to figure out why that problem is happening
Triggers for action: every metric that’s not where you want it to be should be linked to a thing you can do — that’s in your control — that you know has a pretty good chance of moving that needle in the direction you want
When your metrics aren’t revealing, actionable, or in your control, you’ve just set yourself up for a special kind of paralysis for the entire year.
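The selection criteria above can be sketched as a quick self-audit. This is a minimal, hypothetical example (the `Metric` class and its fields are my own illustration, not an established tool) that scores a metric against the four criteria — and shows how badly the citation-count metric fails:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    lag_months: int        # delay between taking an action and the needle moving
    cadence_days: int      # how often you can observe a fresh value
    linked_action: str     # what you'd do if it dips ("" if nothing)
    in_your_control: bool  # can your own actions move it directly?

def critique(m: Metric) -> list[str]:
    """Return the ways a metric violates the selection criteria above."""
    problems = []
    if m.lag_months > 12:
        problems.append("lagging indicator with an extraordinarily long lag")
    if m.cadence_days > 31:
        problems.append("not monitorable on a weekly or monthly basis")
    if not m.linked_action:
        problems.append("no action is triggered when it's off target")
    if not m.in_your_control:
        problems.append("outside your control")
    return problems

citations = Metric("Google Scholar citations", lag_months=48,
                   cadence_days=90, linked_action="", in_your_control=False)
print(critique(citations))  # all four criteria fail
```

A metric that passes all four checks (say, weekly reach of your published work, with a concrete action attached) would come back with an empty list.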
I read well over a hundred books a year, and review many for Quality Management Journal and Software Quality Professional. Today, I’d like to bring you my TOP 10 PICKS out of all the books I read in 2019. First, let me affirm that I loved all of these books — it was really difficult to rank them. The criteria I used were:
Is the topic related to quality or improvement? The book had to focus on making people, process, or technology better in some way. (So even though Greg Satell’s Cascades provided an amazing treatment of how to start movements, which is helpful for innovation, it wasn’t as closely related to the themes of quality and improvement I was targeting.)
Did the book have an impact on me? In particular, did it transform my thinking in some way?
Finally, how big is the audience that would be interested in this book? (Although some of my picks are amazing for niche audiences, they will be less amazing for people who are not part of that group; they were ranked lower.)
Did I read it in 2019? (Unfortunately, this ruled out several amazing books I read at the end of 2018, like Siva Vaidhyanathan’s Antisocial Media.)
The biggest obstacle in agile transformation is getting teams to internalize the core values, and apply them as a matter of habit. This is why you see so many organizations do “fake agile” — do things like introduce daily stand-ups, declare themselves agile, and wonder why the success isn’t pouring in. Scott goes back to the first principles of the Agile Manifesto from 2001 to help leaders and teams become genuinely agile.
#9 – Risk-Based Thinking (Muschara)
Muschara, T. (2018). Risk-Based Thinking: Managing the Uncertainty of Human Error in Operations. Routledge/Taylor & Francis: Oxon and New York. 287 pages.
Risk-based thinking is one of the key tenets of ISO 9001:2015, which became the authoritative version in September 2018. Although clause 8.5.3 from ISO 9001:2008 indirectly mentioned risk, it was not a driver for identifying and executing preventive actions. The new emphasis on risk depends upon the organizational context (clause 4.1) and the needs and expectations of “interested parties” or stakeholders (clause 4.2).
Unfortunately, the ISO 9001 revision does not provide guidance for how to incorporate risk-based thinking into operations, which is where Muschara’s new book fills the gap. It’s detailed and complex, but practical (and includes immediately actionable elements) throughout. For anyone struggling with the new focus of ISO 9001:2015, this book will help you bring theory into practice.
#8 – The Successful Software Manager (Fung)
Fung, H. (2019). The Successful Software Manager. Packt Publishing, Birmingham UK, 433 pp.
There are lots of books on the market that provide technical guidance to software engineers and quality assurance specialists, but little information to help them figure out how (and whether) to make the transition from developer to manager. Herman Fung’s new release fills this gap in a complete, methodical, and inspiring way. This book will benefit any developer or technical specialist who wants to know what software management entails and how they can adapt to this role effectively. It’s the book I wish I had 20 years ago.
#7 – New Power (Heimans & Timms)
Heimans, J. & Timms, H. (2018). New Power: How Power Works in Our Hyperconnected World – and How to Make it Work For You. Doubleday, New York, 325 pp.
As we change technology, the technology changes us. This book is an engaging treatise on how to navigate the power dynamics of our social media-infused world. It provides insight on how to use, and think in terms of, “platform culture”.
#6 – A Practical Guide to the Safety Profession (Maldonado)
Maldonado, J. (2019). A Practical Guide to the Safety Profession: The Relentless Pursuit (CRC Focus). CRC Press: Taylor & Francis, Boca Raton FL, 154 pp.
One of the best ways to learn about a role or responsibility is to hear stories from people who have previously served in those roles. With that in mind, if you’re looking for a way to help make safety management “real” — or to help new safety managers in your organization quickly and easily focus on the most important elements of the job — this book should be your go-to reference. In contrast with other books that focus on the interrelated concepts in quality, safety, and environmental management, this book gets the reader engaged by presenting one key story per chapter. Each story takes an honest, revealing look at safety. This book is short, sweet, and high-impact for those who need a quick introduction to the life of an occupational health and safety manager.
#5 – Data Quality (Mahanti)
Mahanti, R. (2018). Data Quality: Dimensions, Measurement, Strategy, Management and Governance. ASQ Quality Press, Milwaukee WI, 526 pp.
I can now confidently say — if you need a book on data quality, you only need ONE book on data quality. Mahanti, who is one of the Associate Editors of Software Quality Professional, has done a masterful job compiling, organizing, and explaining all aspects of data quality. She takes a cross-industry perspective, producing a handbook that is applicable for solving quality challenges associated with any kind of data.
Throughout the book, examples and stories are emphasized. Explanations supplement most concepts and topics in a way that it is easy to relate your own challenges to the lessons within the book. In short, this is the best data quality book on the market, and will provide immediately actionable guidance for software engineers, development managers, senior leaders, and executives who want to improve their capabilities through data quality.
#4 – The Innovator’s Book (McKeown)
McKeown, M. (2020). The Innovator’s Book: Rules for Rebels, Mavericks and Innovators (Concise Advice). LID Publishing, 128 pp.
Want to inspire your teams to keep innovation at the front of their brains? If so, you need a coffee table book, and preferably one where the insights come from actual research. That’s what you’ve got with Max’s new book. (And yes, it’s “not published yet” — I got an early copy. Still meets my criteria for 2019 recommendations.)
#3 – The Seventh Level (Slavin)
Slavin, A. (2019). The Seventh Level: Transform Your Business Through Meaningful Engagement with Customers and Employees. Lioncrest Publishing, New York, 250 pp.
For starters, Amanda is a powerhouse who’s had some amazing marketing and branding successes early in her career. It makes sense, then, that she’s been able to encapsulate the lessons learned into this book that will help you achieve better customer engagement. How? By thinking about engagement in terms of different levels, from Disengagement to Literate Thinking. By helping your customers take smaller steps along this seven-step path, you can make engagement a reality.
#2 – Principle Based Organizational Structure (Meyer)
Meyer, D. (2019). Principle-Based Organizational Structure: A Handbook to Help You Engineer Entrepreneurial Thinking and Teamwork into Organizations of Any Size. NDMA, 420 pp.
This is my odds-on impact favorite of the year. It takes all the best practices I’ve learned over the past two decades about designing an organization for laser focus on strategy execution — and packages them up into a step-by-step method for assessing and improving organizational design. This book can help you fix broken organizations… and most organizations are broken in some way.
#1 – Story 10x (Margolis)
Margolis, M. (2019). Story 10x: Turn the Impossible Into the Inevitable. Storied, 208 pp.
You have great ideas, but nobody else can see what you see. Right?? Michael’s book will help you cut through the fog — build a story that connects with the right people at the right time. It’s not like those other “build a narrative” books — it’s like a concentrated power pellet, immediately actionable and compelling. This is my utility favorite of the year… and it changed the way I think about how I present my own ideas.
Years ago I consulted for an organization that had an enticing mission, a dynamic and highly qualified workforce of around 200 people, and an innovative roadmap that was poised to make an impact — estimated to be ~$350-500M (yes really, that big). But there was one huge problem.
As engineers, the leadership could readily provide information about uptime and Service Level Agreements (SLAs). But they had no idea whether they were on track to meet strategic goals — or even whether they would be able to deliver key operations projects — at all! We recommended that they focus on developing metrics, and provided some guidelines for the types of metrics that might help them deliver their products and services — and satisfy their demanding customers.
Unfortunately, we made a critical mistake.
They were overachievers. When we came back six months later, they had nearly a thousand metrics. (A couple of the guys, beaming with pride, didn’t quite know how to interpret our non-smiling faces.)
“So tell us… what are your top three goals for the year, and are you on track to meet them?” we asked.
They looked at each other… then at us. They looked down at their papers. They glanced at each other again. It was in that moment they realized the difference between KPIs and metrics.
KPIs are KEY Performance Indicators. They have meaning. They are important. They are significant. And they relate to the overall goals of your business.
One KPI is associated with one or more metrics. Metrics are numbers, counts, percentages, or other values that provide insight about what’s happened in the past (descriptive metrics), what is happening right now (diagnostic metrics), what will happen (predictive metrics or forecasts), or what should happen (prescriptive metrics or recommendations).
For the human brain to be able to detect and respond to patterns in organizational performance, limit the number of KPIs!
A good rule of thumb is to select 3-5 KPIs (but never more than 8 or 9!) per logical division of your organization. A logical division can be a functional area (finance, IT, call center), a product line, a program or collection of projects, or a collection of strategic initiatives.
Or, use KPIs and metrics to describe product performance, process performance, customer satisfaction, customer engagement, workforce capability, workforce capacity, leadership performance, governance performance, financial performance, market performance, and how well you are executing on the action plans that drive your strategic initiatives (strategy performance). These logical divisions come from the Baldrige Excellence Framework.
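Here’s a minimal sketch of that rule of thumb in practice. The scorecard below and its KPI names are purely illustrative (hypothetical divisions and measures, not a prescription), but the check itself is the point: a scorecard should make it trivially easy to see when a division has accumulated too many KPIs.

```python
# Rule of thumb from above: 3-5 KPIs per logical division, never more than 8 or 9.
MAX_KPIS = 5

# Hypothetical scorecard: each logical division gets a handful of KPIs.
scorecard = {
    "finance":     ["operating margin", "days sales outstanding", "budget variance"],
    "IT":          ["system uptime", "mean time to restore", "ticket backlog"],
    "call center": ["first-call resolution", "average handle time",
                    "customer satisfaction", "agent utilization"],
}

def check_scorecard(scorecard: dict[str, list[str]], limit: int = MAX_KPIS) -> list[str]:
    """Flag divisions whose KPI count exceeds the limit."""
    return [division for division, kpis in scorecard.items() if len(kpis) > limit]

print(check_scorecard(scorecard))  # [] -- every division is within the limit
```

If a division shows up in that list, that’s the signal to take a critical look at which of its measures are truly KEY and demote the rest to supporting metrics.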
Similarly, try to limit the number of projects and initiatives in each functional area — and across your organization. Work gets done more easily when people understand how all the parts of your organization relate to one another.
What happened to the organization from the story, you might ask? Within a year, they had boiled down their metrics into 8 functional areas, were working on 4 strategic initiatives, and had no more than 5 KPIs per functional area. They found it really easy to monitor the state of their business, and respond in an agile and capable way. (They were still collecting lots more metrics, but they only had to dig into them on occasion.)
Remember… metrics are helpful, but:
KPIs are KEY!!
You don’t have thousands of keys to your house… and you don’t want thousands of KPIs. Take a critical look at what’s most important to your business, and organize that information in a way that’s accessible. You’ll find it easier to manage everything — strategic initiatives, projects, and operations.
I love horse racing. More specifically, I love betting on the horses. Why? Because it’s a complex exercise in data science, requiring you to integrate (what feels like) hundreds of different kinds of performance measures — and environmental factors (like weather) — to predict which horse will come in first, second, third, and maybe even fourth (if you’re betting a superfecta). And, you can win actual money!
I spent most of the day yesterday handicapping for Kentucky Derby 2015, before stopping at the track to place my bets for today. As I was going through the handicapping process, I realized that I’m essentially following the analysis process that we use as Examiners when we review applications for the Malcolm Baldrige National Quality Award (MBNQA). We apply “LeTCI” — pronounced like “let’s see” — to determine whether an organization has constructed a robust, reliable, and relevant assessment program to evaluate their business and their results. (And if they haven’t, LeTCI can provide some guidance on how to continuously improve to get there).
LeTCI stands for “Levels, Trends, Comparisons, and Integration”. In Baldrige parlance, here’s what we mean by each of those:
Levels: This refers to categorical or quantitative values that “place or position an organization’s results and performance on a meaningful measurement scale. Performance levels permit evaluation relative to past performance, projections, goals, and appropriate comparisons.”  Your measured levels refer to where you’re at now — your current performance.
Trends: These describe the direction and/or rate of your performance improvements, including the slope of the trend data (if appropriate) and the breadth of your performance results.  “A minimum of three data points is generally needed to begin to ascertain a trend.” 
Comparisons: This “refers to establishing the value of results by their relationship to similar or equivalent measures. Comparisons can be made to results of competitors, industry averages, or best-in-class organizations. The maturity of the organization should help determine what comparisons are most relevant.”  This also includes performance relative to benchmarks.
Integration: This refers to “the extent to which your results measures address important customer, product, market, process, and action plan performance requirements” and “whether your results are harmonized across processes and work units to support organization-wide goals.” 
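The first three elements lend themselves to a quick computational sketch. This is my own illustration under assumed data (a made-up quarterly on-time delivery series and benchmark), not a Baldrige artifact; note that Integration is deliberately left out because it’s a qualitative judgment about how results harmonize across the organization:

```python
# Sketch of a LeTCI-style check on one results measure, using assumed data.
def letci(results: list[float], benchmark: float) -> dict:
    level = results[-1]                # Levels: current performance
    if len(results) >= 3:              # Trends: need at least three data points
        # average change per period across the series
        trend = (results[-1] - results[0]) / (len(results) - 1)
    else:
        trend = None                   # too few points to ascertain a trend
    comparison = level - benchmark     # Comparisons: vs. a relevant benchmark
    return {"level": level, "trend": trend, "vs_benchmark": comparison}

# e.g. quarterly on-time delivery rate (%) against an assumed industry average of 90
print(letci([88.0, 91.0, 94.0], benchmark=90.0))
# {'level': 94.0, 'trend': 3.0, 'vs_benchmark': 4.0}
```

The Integration step is where you’d ask what all of these numbers mean together, across work units and goals — which, as the handicapping example below shows, is where intuition does its work.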
Here’s a snapshot of my Kentucky Derby handicapping process, using LeTCI. (I also do it for other horse races, but the Derby has got to be one of the most challenging prediction tasks of the year.) Derby prediction is fascinating because all of the horses are excellent, for the most part — and what you’re trying to do is determine on this particular day, against these particular competitors, how likely is a horse to win? Although my handicapping process is much more complex than what I lay out below, this should give you a sense of the process that I use, and how it relates to the Baldrige LeTCI approach:
Levels: First, I have to check out the current performance levels of each contender in the Derby. What’s the horse’s current Beyer speed score or Bris score (that is, are they fast enough to win this race)? What are the recent exercise times? If a horse isn’t running 5 furlongs in under a minute, then I wonder (for example) if they can handle the Derby pace. Has this horse raced on this particular track, or with this particular jockey? I can also check out the racing pedigree of the horse through metrics like “dosage”.
Trends: Next, I look at a few key trends. Have the horse’s past races been preparing him for the longer distance of the Derby? Ideally, I want to see that the two prior races were a mile and a sixteenth, and a mile and an eighth. Is their Beyer speed score increasing, at least over the past three races? Depending on the weather for Louisville, has this horse shown a liking for either fast or muddy tracks? Has the horse won a race recently?
Comparisons: Is the horse paired with a jockey he has been successful with in the past? I spend a lot of time comparing the horses to each other as well. A horse doesn’t have to beat track records to win… he just has to beat the other horses. Even a slow horse will win if the other horses are slower. Additionally, you have to compare the horse’s performance to baselines provided by the other horses throughout the duration of the race. Does your horse tend to get out in front, and then burn out? Or does he stalk the other horses and then launch an attack in the end, pulling out in front as a closer? You have to compare the performance of the horse to the performance of the other horses longitudinally — because the relative performance will change as the race progresses.
Integration: What kind of story do all of these metrics tell together? That’s the real trick of handicapping horse races… the part where you have to bring everything together in to a cohesive, coherent way. This is also the part where you have to apply intuition. Do I really think this horse is ready to pull off a victory today, at this particular track, against these contenders and embedded in the wild and festive Derby environment (which a horse may not have experienced yet)?
And what does this mean for organizational metrics? To me, it means that when I’m formulating and evaluating business metrics I should take a perspective that’s much more like handicapping a major horse race — because assessing performance is intricately tied to capabilities, context, the environment, and what’s bound to happen now, in the near future.
Today’s gem comes from my former student Andy, who has heard me get excited about quality tools and continuous improvement – and the R statistical software – a LOT over the past few years! Even though he graduated in the spring of 2012, he’s still applying quality solutions to his own life – and this was a very unexpected place for me to find such a thing! I can’t hold back my own personal excitement for improvement and the pursuit of excellence, even as my standards for excellence evolve, and it’s so heartwarming to see how this has influenced Andy’s life.
“Studies show that being able to see your energy usage makes it easier to reduce it.”
This is the driver for their new Google PowerMeter project, which envisions a future where you access energy informatics through your desktop. The project, an initiative of Google.org (the philanthropic research arm of Google), provides this as their pitch:
“How much does it cost to leave your TV on all day? What about turning your air conditioning 1 degree cooler? Which uses more power every month — your fridge or your dishwasher? Is your household more or less energy efficient than similar homes in your neighborhood? … At Google we’re committed to helping enable a future where access to personal energy information helps everyone make smarter energy choices. To get started, we’re working on a tool called Google PowerMeter which will show consumers their electricity consumption in near real-time in a secure iGoogle Gadget. We think PowerMeter will offer more useful and actionable feedback than complicated monthly paper bills that provide little detail on consumption or how to save energy.”
Indeed, it is hard to imagine how world trade could have grown so fast—quintupling in the last two decades—without the “intermodal shipping container,” to use the technical term. The invention of a standard-size steel box that can be easily moved from a truck to a ship to a railroad car, without ever passing through human hands, cut down on the work and vastly increased the speed of shipping. It represented an entirely new system, not just a new product. The dark side is that these steel containers are by definition black boxes, invisible to casual inspection, and the more of them authorities open for inspection, the more they undermine the smooth functioning of the system.
Although some people like to debate whether shipping containers were an incremental improvement or a breakthrough innovation, I’d like to note that a single process improvement step generated a multitude of benefits because the inspection step was eliminated. Inspection happened naturally the old way, without planning it explicitly; workers had to unpack all the boxes and crates from one truck and load them onto another truck, or a ship. It would be difficult to overlook a nuclear warhead or a few tons of pot.
To make the system work, the concept of what was being transported was abstracted away from the problem, making the shipping container a black box. If all parties are trustworthy and not using the system for a purpose other than what was intended, this is no problem. But once people start using the system for unintended purposes, everything changes.
This reflects what happens in software development as well: you code an application, abstracting away the complex aspects of the problem and attaching unit tests to those nuggets. You don’t have to inspect the code within the nuggets because either you’ve already fully tested them, or you don’t care – and either way, you don’t expect what’s in the nugget to change. Similarly, the shipping industry did not plan that the containers would be used to ship illegal cargo – that wasn’t one of the expectations of what could be within the black box. The lesson (to me)? Degree of abstraction within a system, and the level of inspection of a system, are related. When your expectations of what constitutes your components change, you need to revisit whether you need inspection (and how much).
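To make the software analogy concrete, here’s a tiny sketch (the function is a hypothetical example of a “nugget,” not from any particular codebase). The unit test is the inspection step: once it passes, callers treat the function as a black box — exactly like a sealed container — until expectations about its contents change.

```python
# A "nugget": the complex detail is abstracted behind a simple interface.
def dedupe_preserving_order(items):
    """Hypothetical nugget: remove duplicates, keeping first-seen order."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

# The unit test is the inspection; after it passes, we stop looking inside
# the box and trust the interface.
assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]
```

If the expectations change — say, the function must now handle unhashable items — the seal is broken and inspection (new tests, a fresh look inside) is needed again.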