It’s that time of year when people are dusting off their strategic plans, hosting their parties and strategy workshops, and making sure the KPIs and metrics on their scorecards are the ones they want to be watching in 2021.
But most people aren’t that religious about measurement systems, or about tightly aligning specific actions with the needle they are most likely to move. The goal of “becoming data-driven” usually isn’t accompanied by the discipline and perseverance to make it happen, even though the payoffs are huge.
And none of us is immune to bad metrics, even when getting them right really matters. Sometimes, a metric is just too emotionally enticing to give up.
I use one bad metric myself, and no matter how bad I know it is, I keep using it to evaluate (one dimension of) my personal value. PSA: It is never good to tie your worth as a human to a metric (any metric). Gen Z may have more luck than us Gen Xers on this one.
My bad metric, the one I can’t emotionally detach from, is number of citations on Google Scholar. And the reason why I’m thinking about it today is because… I just achieved my 2020 goal of adding more citations than I added in 2019!
Here’s why this metric is so terribly bad:
Number of citations is a lagging indicator, and the lag can often be 3-5 years.
By the time the needle moves, it’s hard to figure out exactly what happened to make it move.
There are very few actions I can personally take to make that needle move.
Any actions I do take will be indirect. I can make people more aware of my papers, but can’t force anyone to cite one… so the actions I can take will influence reach, but not citations.
There’s an interesting social dimension to citation counts. The more citations a paper has, the more likely it is to attract additional citations; similarly, the more total citations you have, the more visible you become on sites like Google Scholar. It’s a “fit get fitter” scenario.
I can’t monitor this metric on a weekly or monthly basis. If it dips, I won’t be able to respond by taking an action to restore growth.
I haven’t even thought about using this metric as a signal for when I should take action, because of all the problems I listed above.
I didn’t do anything to achieve my 2020 goal. I just helplessly watched that number creep up, kept my fingers crossed, and (now) celebrated on December 19th when I (just barely) went over the wire before New Year’s.
The calendar is arbitrary anyway. What if I achieved the goal on January 5? Would I feel unaccomplished? Probably yes (this is pathetic)!
Ultimately, I am not in control of citations. I should have picked an intermediate metric that I am in control of… but that’s really difficult, and I’m not in academia any more, so I don’t even need to pay attention to this. (Another giant problem! Why am I still paying attention? Attention is expensive!)
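That “fit get fitter” dynamic, by the way, is just preferential attachment, and a toy simulation shows how quickly it concentrates citations on a few papers. Everything here is invented for illustration — `simulate_citations` and its parameters are my own, not real citation data:

```python
import random

def simulate_citations(n_papers=20, n_new_citations=1000, seed=42):
    """Toy 'fit get fitter' (preferential attachment) simulation.

    Each new citation picks a paper with probability proportional to
    its current citation count plus one (so every paper has a chance).
    """
    rng = random.Random(seed)
    counts = [0] * n_papers
    for _ in range(n_new_citations):
        weights = [c + 1 for c in counts]  # richer papers get picked more often
        paper = rng.choices(range(n_papers), weights=weights)[0]
        counts[paper] += 1
    return sorted(counts, reverse=True)

counts = simulate_citations()
# The top few papers end up with a disproportionate share of citations.
print(counts[:5], counts[-5:])
```

Run it a few times with different seeds and the same pattern emerges: small early advantages compound, which is exactly why the metric is so hard to move deliberately.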
My holiday wish for you is this: select your metrics carefully! Pick ones that are (ideally):
Not limited to lagging indicators with extraordinarily long lags
Monitorable on (at least) a weekly or monthly basis
Designed so that if you aren’t achieving your target level, you can immediately figure out where the problem is happening, and even know how to dig down deeper to figure out why that problem is happening
Triggers for action: every metric that’s not where you want it to be should be linked to a thing you can do — that’s in your control — that you know has a pretty good chance of moving that needle in the direction you want
When your metrics aren’t revealing, actionable, or in your control, you’ve just set yourself up for a special kind of paralysis for the entire year.
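If it helps, that wish-list can be turned into a quick self-audit. This is a minimal sketch — the `Metric` fields and thresholds are my own illustration, not any standard framework:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    name: str
    lag_months: int               # how long before actions show up in the number
    review_cadence_days: int      # how often you can meaningfully check it
    diagnosable: bool             # can you trace a dip to a specific cause?
    linked_action: Optional[str]  # an action in your control that moves it

def audit(m: Metric) -> list:
    """Return the ways a metric fails the wish-list (empty list = healthy)."""
    problems = []
    if m.lag_months > 12:
        problems.append("lagging indicator with a very long lag")
    if m.review_cadence_days > 31:
        problems.append("cannot be monitored weekly or monthly")
    if not m.diagnosable:
        problems.append("dips cannot be traced to a cause")
    if m.linked_action is None:
        problems.append("no in-your-control action is linked to it")
    return problems

citations = Metric("Google Scholar citations", lag_months=48,
                   review_cadence_days=365, diagnosable=False,
                   linked_action=None)
print(audit(citations))  # fails on all four criteria
```

My citation metric fails every check, which is the point: a metric that flunks this kind of audit belongs on a vanity dashboard, not a scorecard.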
I read well over a hundred books a year, and review many for Quality Management Journal and Software Quality Professional. Today, I’d like to bring you my TOP 10 PICKS out of all the books I read in 2019. First, let me affirm that I loved all of these books — it was really difficult to rank them. The criteria I used were:
Is the topic related to quality or improvement? The book had to focus on making people, process, or technology better in some way. (So even though Greg Satell’s Cascades provided an amazing treatment of how to start movements, which is helpful for innovation, it wasn’t as closely related to the themes of quality and improvement I was targeting.)
Did the book have an impact on me? In particular, did it transform my thinking in some way?
Finally, how big is the audience that would be interested in this book? (Although some of my picks are amazing for niche audiences, they will be less amazing for people who are not part of that group; they were ranked lower.)
Did I read it in 2019? (Unfortunately, this disqualified several amazing books I read at the end of 2018, like Siva Vaidhyanathan’s Antisocial Media.)
The biggest obstacle in agile transformation is getting teams to internalize the core values and apply them as a matter of habit. This is why you see so many organizations doing “fake agile”: introducing daily stand-ups, declaring themselves agile, and wondering why the success isn’t pouring in. Scott goes back to the first principles of the 2001 Agile Manifesto to help leaders and teams become genuinely agile.
#9 – Risk-Based Thinking (Muschara)
Muschara, T. (2018). Risk-Based Thinking: Managing the Uncertainty of Human Error in Operations. Routledge/Taylor & Francis, Oxon and New York, 287 pp.
Risk-based thinking is one of the key tenets of ISO 9001:2015, which became the authoritative version in September 2018. Although clause 8.5.3 from ISO 9001:2008 indirectly mentioned risk, it was not a driver for identifying and executing preventive actions. The new emphasis on risk depends upon the organizational context (clause 4.1) and the needs and expectations of “interested parties” or stakeholders (clause 4.2).
Unfortunately, the ISO 9001 revision does not provide guidance for how to incorporate risk-based thinking into operations, which is where Muschara’s new book fills the gap. It’s detailed and complex, but practical (and includes immediately actionable elements) throughout. For anyone struggling with the new focus of ISO 9001:2015, this book will help you bring theory into practice.
#8 – The Successful Software Manager (Fung)
Fung, H. (2019). The Successful Software Manager. Packt Publishing, Birmingham UK, 433 pp.
There are lots of books on the market that provide technical guidance to software engineers and quality assurance specialists, but little information to help them figure out how (and whether) to make the transition from developer to manager. Herman Fung’s new release fills this gap in a complete, methodical, and inspiring way. This book will benefit any developer or technical specialist who wants to know what software management entails and how they can adapt to this role effectively. It’s the book I wish I had 20 years ago.
#7 – New Power (Heimans & Timms)
Heimans, J. & Timms, H. (2018). New Power: How Power Works in Our Hyperconnected World – and How to Make it Work For You. Doubleday, New York, 325 pp.
As we change technology, the technology changes us. This book is an engaging treatise on how to navigate the power dynamics of our social media-infused world. It provides insight on how to use, and think in terms of, “platform culture”.
#6 – A Practical Guide to the Safety Profession (Maldonado)
Maldonado, J. (2019). A Practical Guide to the Safety Profession: The Relentless Pursuit (CRC Focus). CRC Press: Taylor & Francis, Boca Raton FL, 154 pp.
One of the best ways to learn about a role or responsibility is to hear stories from people who have previously served in those roles. With that in mind, if you’re looking for a way to help make safety management “real” — or to help new safety managers in your organization quickly and easily focus on the most important elements of the job — this book should be your go-to reference. In contrast with other books that focus on the interrelated concepts in quality, safety, and environmental management, this book gets the reader engaged by presenting one key story per chapter. Each story takes an honest, revealing look at safety. This book is short, sweet, and high-impact for those who need a quick introduction to the life of an occupational health and safety manager.
#5 – Data Quality (Mahanti)
Mahanti, R. (2018). Data Quality: Dimensions, Measurement, Strategy, Management and Governance. ASQ Quality Press, Milwaukee WI, 526 pp.
I can now confidently say — if you need a book on data quality, you only need ONE book on data quality. Mahanti, who is one of the Associate Editors of Software Quality Professional, has done a masterful job compiling, organizing, and explaining all aspects of data quality. She takes a cross-industry perspective, producing a handbook that is applicable for solving quality challenges associated with any kind of data.
Throughout the book, examples and stories are emphasized. Explanations accompany most concepts and topics in a way that makes it easy to relate your own challenges to the lessons in the book. In short, this is the best data quality book on the market, and it will provide immediately actionable guidance for software engineers, development managers, senior leaders, and executives who want to improve their capabilities through data quality.
#4 – The Innovator’s Book (McKeown)
McKeown, M. (2020). The Innovator’s Book: Rules for Rebels, Mavericks and Innovators (Concise Advice). LID Publishing, 128 pp.
Want to inspire your teams to keep innovation at the front of their brains? If so, you need a coffee table book, and preferably one where the insights come from actual research. That’s what you’ve got with Max’s new book. (And yes, it’s “not published yet” — I got an early copy. Still meets my criteria for 2019 recommendations.)
#3 – The Seventh Level (Slavin)
Slavin, A. (2019). The Seventh Level: Transform Your Business Through Meaningful Engagement with Customers and Employees. Lioncrest Publishing, New York, 250 pp.
For starters, Amanda is a powerhouse who’s had some amazing marketing and branding successes early in her career. It makes sense, then, that she’s been able to encapsulate the lessons learned into this book, which will help you achieve better customer engagement. How? By thinking about engagement in terms of different levels, from Disengagement to Literate Thinking. By helping your customers take smaller steps along this seven-step path, you can make engagement a reality.
#2 – Principle-Based Organizational Structure (Meyer)
Meyer, D. (2019). Principle-Based Organizational Structure: A Handbook to Help You Engineer Entrepreneurial Thinking and Teamwork into Organizations of Any Size. NDMA, 420 pp.
This is my odds-on impact favorite of the year. It takes all the best practices I’ve learned over the past two decades about designing an organization for laser focus on strategy execution — and packages them up into a step-by-step method for assessing and improving organizational design. This book can help you fix broken organizations… and most organizations are broken in some way.
#1 – Story 10x (Margolis)
Margolis, M. (2019). Story 10x: Turn the Impossible Into the Inevitable. Storied, 208 pp.
You have great ideas, but nobody else can see what you see. Right?? Michael’s book will help you cut through the fog — build a story that connects with the right people at the right time. It’s not like those other “build a narrative” books — it’s like a concentrated power pellet, immediately actionable and compelling. This is my utility favorite of the year… and it changed the way I think about how I present my own ideas.
Years ago I consulted for an organization that had an enticing mission, a dynamic and highly qualified workforce of around 200 people, and an innovative roadmap that was poised to make an impact — estimated to be ~$350-500M (yes really, that big). But there was one huge problem.
As engineers, the leadership could readily provide information about uptime and Service Level Agreements (SLAs). But they had no idea whether they were on track to meet strategic goals — or even whether they would be able to deliver key operations projects — at all! We recommended that they focus on developing metrics, and provided some guidelines for the types of metrics that might help them deliver their products and services — and satisfy their demanding customers.
Unfortunately, we made a critical mistake.
They were overachievers. When we came back six months later, they had nearly a thousand metrics. (A couple of the guys, beaming with pride, didn’t quite know how to interpret our non-smiling faces.)
“So tell us… what are your top three goals for the year, and are you on track to meet them?” we asked.
They looked at each other… then at us. They looked down at their papers. They glanced at each other again. It was in that moment they realized the difference between KPIs and metrics.
KPIs are KEY Performance Indicators. They have meaning. They are important. They are significant. And they relate to the overall goals of your business.
One KPI is associated with one or more metrics. Metrics are numbers, counts, percentages, or other values that provide insight about what’s happened in the past (descriptive metrics), what is happening right now (diagnostic metrics), what will happen (predictive metrics or forecasts), or what should happen (prescriptive metrics or recommendations).
For the human brain to be able to detect and respond to patterns in organizational performance, limit the number of KPIs!
A good rule of thumb is to select 3-5 KPIs (but never more than 8 or 9!) per logical division of your organization. A logical division can be a functional area (finance, IT, call center), a product line, a program or collection of projects, or a collection of strategic initiatives.
Or, use KPIs and metrics to describe product performance, process performance, customer satisfaction, customer engagement, workforce capability, workforce capacity, leadership performance, governance performance, financial performance, market performance, and how well you are executing on the action plans that drive your strategic initiatives (strategy performance). These logical divisions come from the Baldrige Excellence Framework.
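The rule of thumb above — 3-5 KPIs (never more than 8 or 9) per logical division, each KPI backed by one or more metrics — can be sketched as a simple validation. The division names, KPIs, and metrics below are invented examples:

```python
# Map each logical division to its KPIs, and each KPI to one or more metrics.
scorecard = {
    "call center": {
        "first-contact resolution rate": ["resolved on first call %",
                                          "escalation count"],
        "customer satisfaction": ["post-call survey score"],
        "cost per contact": ["total cost", "contact volume"],
    },
    "finance": {
        "days sales outstanding": ["open invoice age"],
        "budget variance": ["planned vs. actual spend %"],
    },
}

def check_kpi_counts(scorecard, ideal=(3, 5), hard_max=9):
    """Flag divisions outside the 3-5 ideal range or over the hard maximum."""
    warnings = []
    for division, kpis in scorecard.items():
        n = len(kpis)
        if n > hard_max:
            warnings.append(f"{division}: {n} KPIs exceeds the hard max of {hard_max}")
        elif not (ideal[0] <= n <= ideal[1]):
            warnings.append(f"{division}: {n} KPIs is outside the ideal {ideal[0]}-{ideal[1]}")
    return warnings

print(check_kpi_counts(scorecard))
```

Notice that the check only constrains KPIs, not metrics: you can keep collecting lots of supporting metrics (like the organization in the story did), as long as the keys stay few.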
Similarly, try to limit the number of projects and initiatives in each functional area — and across your organization. Work gets done more easily when people understand how all the parts of your organization relate to one another.
What happened to the organization from the story, you might ask? Within a year, they had boiled down their metrics into 8 functional areas, were working on 4 strategic initiatives, and had no more than 5 KPIs per functional area. They found it really easy to monitor the state of their business, and to respond in an agile and capable way. (They were still collecting lots more metrics, but they only had to dig into them on occasion.)
Remember… metrics are helpful, but:
KPIs are KEY!!
You don’t have thousands of keys to your house… and you don’t want thousands of KPIs. Take a critical look at what’s most important to your business, and organize that information in a way that’s accessible. You’ll find it easier to manage everything — strategic initiatives, projects, and operations.
Over the past 15 years, I’ve helped several organizations with continuous improvement initiatives at the strategic, executive level. A handful of themes keep appearing and reappearing, so the purpose of this post is to call out a few of them and provide some insights into how to deal with them!
These come up when you are engaged in strategic planning and when you are planning operations (to ensure that processes and procedures ultimately satisfy strategic goals), and are especially prominent when you’re trying to develop or use Key Performance Indicators (KPIs) and other metrics or analytics.
1) How do you measure innovation? Before you pick metrics, recognize that the answer to this question depends on how you articulate the strategic goals for your innovation outcomes. Do you want to:
Keep up with changing technology?
Develop a new product/technology?
Lead your industry in developing best practices?
Pioneer new business models?
Improve quality of life for a particular group of people?
All of these will be measured in different ways! And it’s OK to not strategically innovate in one area or another… for example, you might not want to innovate your business model if technology development is your forte. Innovation is one of those things where you really don’t want to be everything to everyone… by design.
2) Do you distinguish between improving productivity and generating impact?
Improving quality (the ability to satisfy stated and implied needs) is good. Improving productivity (that is, what you can produce given the resources that you use) is also good. Reducing defects, reducing waste, and reducing variation (sometimes) are all very good things to do, and to report on.
But who really cares about any improvements at all unless they have impact? It’s always necessary to tie your KPIs, which are often measures of outcomes, to metrics or analytics that can tell the story about why a particular improvement was useful — in the short term, and (hopefully also) in the long term.
You also have to balance productivity and impact. For example, maybe you run an ultra-efficient 24/7 Help Desk. Your effectiveness is exemplary… when someone submits a request, it’s always satisfied within 8 hours. But you discover that no tickets come in between Friday at 5pm and Monday at 8am. So all that time you spend staffing that Help Desk on the weekend? It’s non-value-added time, and could be eliminated to improve your productivity… but won’t influence your impact at all.
We just worked on a project where we consciously had to think about how all of the following interact… and you should too:
Organizational Productivity: did your improvement help increase the capacity or capability for part of your organization? If so, then it could contribute to technical productivity or business productivity.
Technical Productivity: did the improvement remove a technical barrier to getting work done, or make it faster or less error-prone?
Business Productivity: did the improvement help you get the needs of the business satisfied faster or better?
Business Impact: Did the improvements that yielded organizational productivity benefits, technical productivity benefits, or business productivity benefits make a difference at the strategic level? (This answers the “so what” question. So you improved your throughput by 83%… so what? Who really cares, and why does this matter to them? Long-term, why does this awesome thing you did really matter?)
Educational/Workforce Development Impact: Were the lessons learned captured, fed back into the organization’s processes to close the loop on learning, or maybe even used to educate people who may become part of your workforce pipeline?
All of the categories above are interrelated. I don’t think you can have a comprehensive, innovation-focused analytics approach unless you address all of these.
3) Do you distinguish between participation and engagement?
Participation means you showed up. Engagement means you got involved, you stayed involved, your mission was advanced, or maybe you used this experience to help society. Too often, I see organizations that want to improve engagement, and all the metrics they select are really good at characterizing participation.
I’m writing a paper on this topic right now, but in the meantime (if you want to get a REALLY good sense of the difference between participation and engagement), read The Participatory Museum by Nina Simon. Yes, it is “about museums” — and yes, I know you’re in business or industry — and YES, this book really will provide you with amazing management insights. So read it!
I love horse racing. More specifically, I love betting on the horses. Why? Because it’s a complex exercise in data science, requiring you to integrate (what feels like) hundreds of different kinds of performance measures — and environmental factors (like weather) — to predict which horse will come in first, second, third, and maybe even fourth (if you’re betting a superfecta). And, you can win actual money!
I spent most of the day yesterday handicapping for Kentucky Derby 2015, before stopping at the track to place my bets for today. As I was going through the handicapping process, I realized that I’m essentially following the analysis process that we use as Examiners when we review applications for the Malcolm Baldrige National Quality Award (MBNQA). We apply “LeTCI” — pronounced like “let’s see” — to determine whether an organization has constructed a robust, reliable, and relevant assessment program to evaluate their business and their results. (And if they haven’t, LeTCI can provide some guidance on how to continuously improve to get there).
LeTCI stands for “Levels, Trends, Comparisons, and Integration”. In Baldrige parlance, here’s what we mean by each of those:
Levels: This refers to categorical or quantitative values that “place or position an organization’s results and performance on a meaningful measurement scale. Performance levels permit evaluation relative to past performance, projections, goals, and appropriate comparisons.”  Your measured levels refer to where you’re at now — your current performance.
Trends: These describe the direction and/or rate of your performance improvements, including the slope of the trend data (if appropriate) and the breadth of your performance results.  “A minimum of three data points is generally needed to begin to ascertain a trend.” 
Comparisons: This “refers to establishing the value of results by their relationship to similar or equivalent measures. Comparisons can be made to results of competitors, industry averages, or best-in-class organizations. The maturity of the organization should help determine what comparisons are most relevant.”  This also includes performance relative to benchmarks.
Integration: This refers to “the extent to which your results measures address important customer, product, market, process, and action plan performance requirements” and “whether your results are harmonized across processes and work units to support organization-wide goals.” 
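The first three dimensions can be sketched as a minimal assessment of a results series. The thresholds and labels below are my own illustration, not Baldrige scoring guidance:

```python
def assess_letci(results, benchmark):
    """Toy LeTCI-style look at a results series (most recent value last).

    Returns the current level, a trend direction (which needs at least
    3 data points, per the Baldrige guidance), and a comparison against
    a benchmark. Integration -- whether the measure ties to
    organization-wide goals -- still needs human judgment.
    """
    level = results[-1]
    if len(results) < 3:
        trend = "insufficient data (need at least 3 points)"
    else:
        deltas = [b - a for a, b in zip(results, results[1:])]
        avg = sum(deltas) / len(deltas)
        trend = "improving" if avg > 0 else "declining" if avg < 0 else "flat"
    comparison = "above benchmark" if level > benchmark else "at/below benchmark"
    return {"level": level, "trend": trend, "comparison": comparison}

# On-time delivery %, quarterly, vs. a made-up industry benchmark of 90.
print(assess_letci([86, 88, 91, 93], benchmark=90))
```

Even this crude sketch captures the core discipline: a single number (the level) means little until you know its direction and what it looks like next to a relevant comparison.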
Here’s a snapshot of my Kentucky Derby handicapping process, using LeTCI. (I also do it for other horse races, but the Derby has got to be one of the most challenging prediction tasks of the year.) Derby prediction is fascinating because all of the horses are excellent, for the most part — and what you’re trying to do is determine on this particular day, against these particular competitors, how likely is a horse to win? Although my handicapping process is much more complex than what I lay out below, this should give you a sense of the process that I use, and how it relates to the Baldrige LeTCI approach:
Levels: First, I have to check out the current performance levels of each contender in the Derby. What’s the horse’s current Beyer speed score or Bris score (that is, are they fast enough to win this race)? What are the recent exercise times? If a horse isn’t running 5 furlongs in under a minute, then I wonder (for example) if they can handle the Derby pace. Has this horse raced on this particular track, or with this particular jockey? I can also check out the racing pedigree of the horse through metrics like “dosage”.
Trends: Next, I look at a few key trends. Have the horse’s past races been preparing him for the longer distance of the Derby? Ideally, I want to see that the two prior races were a mile and a sixteenth, and a mile and an eighth. Is their Beyer speed score increasing, at least over the past three races? Depending on the weather for Louisville, has this horse shown a liking for either fast or muddy tracks? Has the horse won a race recently?
Comparisons: Is the horse paired with a jockey he has been successful with in the past? I spend a lot of time comparing the horses to each other as well. A horse doesn’t have to beat track records to win… he just has to beat the other horses. Even a slow horse will win if the other horses are slower. Additionally, you have to compare the horse’s performance to baselines provided by the other horses throughout the duration of the race. Does your horse tend to get out in front, and then burn out? Or does he stalk the other horses and then launch an attack in the end, pulling out in front as a closer? You have to compare the performance of the horse to the performance of the other horses longitudinally — because the relative performance will change as the race progresses.
Integration: What kind of story do all of these metrics tell together? That’s the real trick of handicapping horse races… the part where you have to bring everything together in a cohesive, coherent way. This is also the part where you have to apply intuition. Do I really think this horse is ready to pull off a victory today, at this particular track, against these contenders, and embedded in the wild and festive Derby environment (which a horse may not have experienced yet)?
And what does this mean for organizational metrics? To me, it means that when I’m formulating and evaluating business metrics, I should take a perspective that’s much more like handicapping a major horse race — because assessing performance is intricately tied to capabilities, context, the environment, and what’s likely to happen in the near future.
The Center for Environmental Journalism (CEJ) recently posted an interview with Roger Pielke, Jr., an authority on (as CEJ calls it) “the nexus of science and technology in decision making”. The interview seeks to provide a perspective on how journalists can more accurately address climate change in the context of public policy over the next several years.
I was really intrigued by this part:
Reporters could help clarify understandings by asking climate scientists: “What behavior of the climate system over the next 5-10 years would cause you to question the IPCC consensus?” This would give people some metrics against which to evaluate future behavior as it evolves.
Similarly, you could ask partisans in the political debate “What science would cause you to change your political position on the issue?” This would allow people to judge how much dependence partisans put on science and what science would change their views. I would be surprised if many people would give a concrete answer to this!!
For the first question, Pielke is recommending that we take an approach conceptually resembling statistical process control to help us evaluate the magnitude and potential impacts of climate change. (Could we actually apply such techniques? It would be an interesting research question. It makes me think of studies like Khoo & Ariffin (2006), for example, who propose a method based on Shewhart x-bar charts to detect process shifts with a higher level of sensitivity — which could, perhaps, be tuned for a particular policy problem.) For the second question, I’m reminded of “willingness to pay,” “willingness to recommend,” and other related marketing metrics. I’m sure one of these established approaches could be extended to the policy domain (if it hasn’t been done already).
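To make the SPC idea concrete, here is a bare-bones Shewhart-style sketch of what such monitoring might look like. This is generic textbook control charting, not the Khoo & Ariffin method, and all the numbers are invented:

```python
import statistics

def xbar_limits(baseline_means, sigma_factor=3):
    """Control limits from baseline subgroup means (classic 3-sigma rule).

    A real x-bar chart derives its limits from subgroup ranges via the
    A2 constant; using the standard deviation of the baseline means is
    a simplification for illustration.
    """
    center = statistics.mean(baseline_means)
    spread = statistics.stdev(baseline_means)
    return (center - sigma_factor * spread,
            center,
            center + sigma_factor * spread)

def out_of_control(new_means, limits):
    """Indices of points beyond the control limits (a possible shift)."""
    lcl, _, ucl = limits
    return [i for i, x in enumerate(new_means) if x < lcl or x > ucl]

# A stable baseline period, then new observations containing a shift.
limits = xbar_limits([14.9, 15.1, 15.0, 14.8, 15.2, 15.0])
print(out_of_control([15.1, 14.9, 16.3], limits))  # flags index 2 (16.3)
```

Pielke’s suggested question to climate scientists amounts to asking them to declare their control limits in advance — what observed behavior would count as a signal rather than noise.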