KPIs vs Metrics: What’s the Difference? And Why Does it Matter?

Years ago I consulted for an organization that had an enticing mission, a dynamic and highly qualified workforce of around 200 people, and an innovative roadmap that was poised to make an impact — estimated to be ~$350-500M (yes really, that big). But there was one huge problem.

As engineers, the leadership could readily provide information about uptime and Service Level Agreements (SLAs). But they had no idea whether they were on track to meet strategic goals — or even whether they would be able to deliver key operations projects — at all! We recommended that they focus on developing metrics, and provided some guidelines for the types of metrics that might help them deliver their products and services — and satisfy their demanding customers.

Unfortunately, we made a critical mistake.

They were overachievers. When we came back six months later, they had nearly a thousand metrics. (A couple of the guys, beaming with pride, didn’t quite know how to interpret our non-smiling faces.)

“So tell us… what are your top three goals for the year, and are you on track to meet them?” we asked.

They looked at each other… then they looked at us. They looked down at their papers. They looked at each other again. It was in that moment they realized the difference between KPIs and metrics.

  • KPIs are KEY Performance Indicators. They have meaning. They are important. They are significant. And they relate to the overall goals of your business.
  • One KPI is associated with one or more metrics. Metrics are numbers, counts, percentages, or other values that provide insight about what’s happened in the past (descriptive metrics), what is happening right now (diagnostic metrics), what will happen (predictive metrics or forecasts), or what should happen (prescriptive metrics or recommendations).
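
If it helps to picture the one-to-many relationship between a KPI and its supporting metrics, here is a minimal Python sketch of that structure. The KPI name, goal, metric names, and values are purely illustrative assumptions (they are not from the story above); the point is simply that each KPI ties to a business goal and is backed by one or more metrics, each of which has a type.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class MetricType(Enum):
    DESCRIPTIVE = "what happened in the past"
    DIAGNOSTIC = "what is happening right now"
    PREDICTIVE = "what will happen (forecast)"
    PRESCRIPTIVE = "what should happen (recommendation)"

@dataclass
class Metric:
    name: str
    metric_type: MetricType
    value: float

@dataclass
class KPI:
    """A KEY performance indicator: tied to a business goal, backed by one or more metrics."""
    name: str
    goal: str  # the strategic goal this KPI supports
    metrics: List[Metric] = field(default_factory=list)

# Purely illustrative example: one KPI backed by two supporting metrics.
first_call_resolution = KPI(
    name="First-call resolution rate",
    goal="Keep our most demanding customers satisfied",
    metrics=[
        Metric("Calls resolved on first contact (%)", MetricType.DESCRIPTIVE, 78.5),
        Metric("Forecasted resolution rate, next quarter (%)", MetricType.PREDICTIVE, 82.0),
    ],
)
print(f"{first_call_resolution.name}: {len(first_call_resolution.metrics)} supporting metrics")
```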

For the human brain to be able to detect and respond to patterns in organizational performance, limit the number of KPIs!

A good rule of thumb is to select 3-5 KPIs (but never more than 8 or 9!) per logical division of your organization. A logical division can be a functional area (finance, IT, call center), a product line, a program or collection of projects, or a collection of strategic initiatives.

Or, use KPIs and metrics to describe product performance, process performance, customer satisfaction, customer engagement, workforce capability, workforce capacity, leadership performance, governance performance, financial performance, market performance, and how well you are executing on the action plans that drive your strategic initiatives (strategy performance). These logical divisions come from the Baldrige Excellence Framework.

Similarly, try to limit the number of projects and initiatives in each functional area — and across your organization. Work gets done more easily when people understand how all the parts of your organization relate to one another.

What happened to the organization from the story, you might ask? Within a year, they had boiled down their metrics into 8 functional areas, were working on 4 strategic initiatives, and had no more than 5 KPIs per functional area. They found it really easy to monitor the state of their business, and respond in an agile and capable way. (They were still collecting lots more metrics, but they only had to dig into them on occasion.)

Remember… metrics are helpful, but:

KPIs are KEY!!

You don’t have thousands of keys to your house… and you don’t want thousands of KPIs. Take a critical look at what’s most important to your business, and organize that information in a way that’s accessible. You’ll find it easier to manage everything — strategic initiatives, projects, and operations.

The Poison of Performance Appraisals – Part II

(Image Credit: Doug Buckley of http://hyperactive.to)

In May 2011, I wrote The Poison of Performance Appraisals – Part I, where I reflected on the concept of performance reviews as one of Deming’s Seven Deadly Diseases. This morning, I was reading a recent post by the President of the Association for Manufacturing Excellence, Paul Kuchuris. The article, entitled Employee Engagement: What’s in it for You? (originally brought to my attention via a tweet from @Baldrige_Barb), helped clarify for me the link between performance reviews and the inspiration that leads to engagement.

Kuchuris says:

Reinforce success: If you want positive behavior, you must reinforce positive behavior. Research has revealed that recognition is the greatest performance-enhancing tool. It does not have to be grandiose. A simple pat on the back as you do your rounds is fine. If performance is not quite right, you need to reinforce the effort and counsel on how it can be improved.

This immediately made me think of Barbara Fredrickson’s “Broaden and Build” theory of positive emotions, which asserts that the best way to improve performance is to build on experiences that are supportive and feel good:

The broaden-and-build theory of positive emotions suggests that positive emotions (viz. enjoyment/happiness/joy, and perhaps interest/anticipation)[1] broaden one’s awareness and encourage novel, varied, and exploratory thoughts and actions. Over time, this broadened behavioral repertoire builds skills and resources. For example, curiosity about a landscape becomes valuable navigational knowledge; pleasant interactions with a stranger become a supportive friendship; aimless physical play becomes exercise and physical excellence.

This is in contrast to negative emotions, which prompt narrow, immediate survival-oriented behaviors. For example, the negative emotion of anxiety leads to the specific fight-or-flight response for immediate survival. On the other hand, positive emotions do not have any immediate survival value, because they take one’s mind off immediate needs and stressors. However, over time, the skills and resources built by broadened behavior enhance survival.[1]

Performance appraisals will only be poisonous if they stir these narrow, immediate survival-oriented behaviors and emotions. So why don’t we appraise people based on strengths only, and help them shift their contributions to better leverage those strengths – and continually expand and broaden their capabilities? This would also help people develop into more authentic versions of themselves, following their natural strengths and interests instead of continually confronting work that is out of alignment with who they are.

Let’s focus on what we WANT TO SEE, not on what we DON’T WANT TO SEE… and see what happens 🙂

The Rubric as a General Purpose Quality Tool

According to dictionary.com, one of the definitions for rubric is “any established mode of conduct; protocol.” But the context you’ve probably heard this word in is education – where a grading rubric or a scoring rubric is used to evaluate a complex artifact like a student essay.

In my opinion, it’s time to move the concept of the rubric from the classroom into the mainstream, because it can be applied as a very practical general purpose quality tool! (Hear that, Nancy Tague? I think you should write about rubrics in your next edition of the very excellent book The Quality Toolbox. Let me know if you’d like me to help make this happen.)

A rubric is basically a grid with 1) levels of performance indicated along the top row, and 2) criteria or dimensions of performance listed down the leftmost column. Each cell of the grid contains a descriptive statement that explains how the level of performance in that column might be achieved for a specific dimension:

For example, here’s a rubric that one group constructed to evaluate the quality of the mind maps that they were producing. The performance levels are organized from high performance in the top left (smiley face giving a thumbs up) to low performance in the top right (smiley face that looks like he’s about to pass out):

The dimensions of performance are neatness and presentation, use of images/symbols, and use of color. The descriptive statements in each cell provide specific examples of how the performance level might be achieved, e.g. “has failed to include color in the mind map” is an indicator of a low performance level for the dimension of “use of color” – which is very understandable!
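
Because a rubric is just a grid of descriptive statements indexed by dimension and performance level, it maps naturally onto a simple data structure. Here is a minimal sketch in Python, modeled loosely on the mind map rubric described above; the level names, point values, and most of the cell text are my own illustrative assumptions rather than the group’s actual wording.

```python
# A rubric as a nested dict: dimension -> performance level -> descriptive statement.
# Level names and most cell text are assumed for illustration.
MIND_MAP_RUBRIC = {
    "Use of color": {
        "Strong": "Uses color purposefully to group and distinguish branches",
        "Developing": "Uses some color, but not consistently or meaningfully",
        "Weak": "Has failed to include color in the mind map",
    },
    "Use of images/symbols": {
        "Strong": "Images and symbols reinforce key ideas throughout",
        "Developing": "A few images or symbols are used",
        "Weak": "No images or symbols are used",
    },
    "Neatness and presentation": {
        "Strong": "Layout is clear, legible, and easy to follow",
        "Developing": "Mostly legible, with some cluttered areas",
        "Weak": "Difficult to read or follow",
    },
}

LEVEL_POINTS = {"Strong": 3, "Developing": 2, "Weak": 1}  # an assumed point scale

def score(ratings: dict) -> int:
    """Total the points for one evaluation, e.g. {'Use of color': 'Developing', ...}."""
    return sum(LEVEL_POINTS[level] for level in ratings.values())

ratings = {
    "Use of color": "Weak",
    "Use of images/symbols": "Developing",
    "Neatness and presentation": "Strong",
}
print(score(ratings))  # 6 out of a possible 9
```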

The concept of the rubric as a performance assessment tool is relatively new! Griffin (2009), in a brief history of the rubric, notes that since its introduction in 1981, “the scoring rubric has evolved into a more precise, technical, scientific-looking document. It carries a tone of certainty, authority, and exactitude.” However, she notes, the utility of a rubric will depend upon the thought and consideration that goes into its construction. “A rubric is a product of many minds working collaboratively to create new knowledge. It will, almost by definition, be more thoughtful, valid, unbiased and useful than any one of us could have conceived of being as we worked in isolation.”

Advantages of applying a well-developed rubric include:

  • Provides a common language for sharing expectations and feedback
  • Helps to clarify and distinguish the differences between various performance levels
  • Helps to focus an individual or group’s ATTENTION on relevant aspects of each desired quality characteristic or skill area
  • Provides a mechanism to more easily identify strengths and opportunities for improvement
  • Helps lend objectivity to an evaluation process that might otherwise be subjective

Disadvantages:

  • Different rubrics may need to be devised for each type of activity or artifact being evaluated
  • Not all evaluators will apply the rubric in exactly the same way – there is still a subjective element at work – so people may need to be trained in its use, or the rubric may work best in a group consensus setting where inter-rater variability can be discussed and resolved
  • Creating a rubric can be time consuming
  • The rubric may limit exploration of solutions or modes of presentation that do not conform to the rubric

Using Rubrics for Quality Improvement

Rubrics are already applied in the world of quality, although I’ve never heard them go by that name. The process scoring guidelines for the Baldrige Criteria are essentially rubrics (although the extra dimension of ADLI and LeTCI has to be considered in the mind of the examiner). The International Team Excellence Award (ITEA) criteria in the Team Excellence Framework (TEF) also form a rubric in conjunction with the performance levels of missing, unclear, meets expectations, and exceeds expectations.

I see a lot of ways in which rubrics can be developed and applied in the quality community to help us establish best practices for some of our most common project artifacts, such as Project Charters. Nancy Tague includes a Project Charter Checklist in The Quality Toolbox to help us create better and more complete charters… but what if we added a second dimension, which includes performance levels, and turned this checklist into a rubric? Any checklist could be transformed into a rubric. Furthermore, to develop a good rubric, we can brainstorm and rank all of the potential criteria in the left-hand column, using a Pareto chart to separate the vital few criteria from the trivial many (see the sketch below).
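
Turning a brainstormed list into the left-hand column of a rubric is largely a counting exercise: tally how strongly the team ranks each candidate criterion, sort in descending order, and keep the vital few that account for most of the weight, Pareto-style. Here is a minimal sketch of that cut; the candidate criteria and vote counts are invented for illustration, and the 80% cutoff is just the usual rule of thumb.

```python
# Hypothetical brainstormed criteria for a Project Charter rubric, with team votes.
votes = {
    "Problem statement is specific and measurable": 14,
    "Scope (in/out) is explicit": 11,
    "Business case ties to a strategic goal": 9,
    "Team roles and sponsor are named": 7,
    "Milestones and timeline are realistic": 4,
    "Formatting follows the template": 2,
    "Acronyms are spelled out": 1,
}

def vital_few(counts: dict, cutoff: float = 0.8) -> list:
    """Return the highest-voted criteria covering roughly the first `cutoff` share of votes."""
    total = sum(counts.values())
    running, selected = 0, []
    for criterion, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        if running / total >= cutoff:
            break
        selected.append(criterion)
        running += n
    return selected

print(vital_few(votes))  # these vital few become the rows (criteria) of the rubric
```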

Are any of you already using rubrics for purposes outside training or education? I would love to start a list of resources to share with the quality community.


Reference: Griffin, M. (2009). What is a rubric? Assessment Update, 21(6), Nov/Dec 2009.

Note: There is a comprehensive site containing many examples of rubrics at http://www.web.virginia.edu/iaas/assess/tools/rubrics.shtm – however, they won’t open in Google Chrome.

What is Quality Consciousness?

For the past few months, I’ve been working on an article to describe and define quality consciousness. Someone recently told me that there have been a lot of people asking about this concept lately (which I find really cool because as far as I know, I’m the only one actively studying it under this banner), and that I should blog about what quality consciousness is ahead of the publication. (That said, if you’re also researching quality consciousness, let me know in the comments section below! Let’s play with this idea together.)

So here’s a synopsis of the story of quality consciousness:

  • The existential question that motivated this line of inquiry: If ISO 8402:1994 says that quality is the “totality of characteristics of an entity that bear upon its ability to satisfy stated and implied needs,” then what if that entity is YOU? What is the totality of characteristics of YOU that bear upon YOUR ABILITY to satisfy the stated and implied needs of your stakeholders?
  • The term “quality consciousness” was first used, from what I can find, in a 1947 keynote by C.R. Sheaffer to the first convention of the American Society for Quality Control (ASQC), the predecessor to ASQ. To answer the question “what does top management expect from quality control [people and organizations]?”, he notes that a change in quality consciousness is expected. Attitudes must shift from an acceptance of what’s good enough to the constant pursuit of making things better. People must be able to take pride in their high-quality work. (from Borawski, 2006)
  • Consciousness, according to the Random House dictionary, is 1) awareness of one’s own thoughts, feelings, and surroundings, 2) the full activity and engagement of the senses, and 3) the thoughts and feelings of individuals and groups.
  • Based on this definition, I believe that quality consciousness can be summed up by the “3 A’s” – Awareness, Alignment, and Attention. Quality consciousness implies awareness of yourself and the environment around you (including what constitutes quality and high performance for people, processes and products – most importantly, YOU). It also suggests that you must achieve alignment of your consciousness with the consciousness of the organization, which will aid in full activity and engagement of the senses. Your attention must be selectively focused onto what you can accomplish in the present moment according to that alignment (which implies that you are able to effectively filter the rapid and voluminous streams of information coming at you).
  • From reviewing the literature, I find that there are four elements that contribute to developing awareness, finding alignment, and focusing attention. These are Action, Reflection, Interaction, and Education. I’ll go into more detail in the article on how these are all related.
  • I think that quality consciousness is exactly what Deming was after… and that it’s the moral of the story of his 14 points. But whereas the unit of analysis for his 14 points was the organizational level, we need to internalize those points within ourselves. What if Deming’s 14 points were geared towards YOU developing your quality consciousness… what do you think he would have said differently?
  • The absence of focus on developing a quality consciousness is, I believe, the distinguishing factor between companies that have implemented the Toyota Production System successfully (i.e., Toyota) and companies that have implemented it with limited results (e.g., pretty much everyone else).
  • A personal path for developing quality consciousness might include asking yourself the following questions: What do YOU need to expand your awareness? To enhance your mood and affect so that you’re aware of the vast landscape of innovative potentials available to you (e.g. http://qualityandinnovation.com/2011/09/29/why-positive-psychology-is-essential-for-quality/)? What do YOU need to align yourself with your organization? What do YOU need to be able to focus your attention on the most productive thing you can do at any given moment – resulting in effortless action, optimal flow and productivity, and positive affect that will cycle back to expanding your awareness even more?

Borawski, P. (2006). The state of quality: 1947 and 2006. Journal for Quality and Participation, Winter 2006, pp. 19-24.

The Poison of Performance Appraisals – Part I

(Image Credit: Doug Buckley of Hyperactive Multimedia at http://www.hyperactive.to)

I call this Part I not because I know what Part II will be… but because I know there will be at least one followup to this post sometime in the future.

Last weekend I finished preparing the report that will be used for my Annual Performance Appraisal (or whatever they’re calling it). To do this, I had to reflect on (and in some cases, remember) all the contributions I made to my job and to society between May 15, 2010 and May 14, 2011. Our department head will use this report, and presumably his interactions with me over the year, to determine whether I’m worthy and where I need to improve. As a result, I’ve been thinking about the nature of performance appraisals over the past few days.

One of Deming’s “Seven Deadly Diseases” is the practice of performance appraisals, which naturally includes merit reviews and annual reviews. Such reviews are ubiquitous in most industries, and often, pay raises are tied to the results. Humorously, at my last job, the HR department routinely assured the workforce that our performance reviews were NOT in any way related to our pay increases – even though as a manager, I was aware of many cases where the information did indeed influence the numbers (consciously or subconsciously).

I believe, from reading Deming’s books and a whole host of journal articles on Deming-related topics, that he had two reasons for feeling this way: 1) performance appraisals are anathema to “driving out fear,” one of his 14 points, and 2) the practice of performance appraisal assumes the duality of manager vs. employee; (s)he who wields power vs. (s)he who does not; the active evaluator vs. the passive evaluated. By instituting a performance review process, we are establishing a power structure whether we want to or not. Progressive managers might use the performance review as an opportunity for two-way dialogue and a cooperative exploration of strengths and opportunities for improvement. Progressive organizations might use a 360-degree approach, a la Jones & Bearley, but the underlying dynamic is the same: I’m telling you what I think about you and that’s my evaluation. I’m not familiar with any managers or organizations who can pull this off with impartiality and avoid the many sources of bias that can creep into the process.

What if I’m just a really bad observer? What if I’m observing you based on the wrong criteria – or worse, criteria that I just don’t have the experience to honestly evaluate you against? According to quantum physics, there’s no such thing as a detached observer. The act of observation makes the observer a participant in the dynamics – and thus outcomes – of the system. There’s no way to sit outside the system and just watch without affecting what happens.

So what’s the solution? Here’s my idea for today, and I don’t think it’s that revolutionary: 1) Each individual should actively take responsibility for his or her own performance to standards and performance improvement (yet another one of Deming’s 14 points), and 2) Teams should engage in continuous dialogue about individual and collective performance, meaning that everyone is responsible for making sure the following questions are asked repeatedly by every member of the team:

  • What are we trying to accomplish together?
  • How are we doing? Are we making progress?
  • If so, what’s working and what should we keep doing? If not, what can we do to fix it?
  • Then commit to doing whatever it is you decided to do.

Like I said, not really revolutionary. The only new idea here is that the individual should aggressively manage his or her own contribution to the objectives of the team or organization, seek out others’ opinions of his or her strengths and weaknesses, seek out quality standards to measure himself or herself against, and take responsibility for helping the team ask the important questions above. If the individual is responsibly managing his or her own performance and demonstrating continuous improvement, the system of performance appraisals becomes moot.

It turns into a pull system rather than the push system that it is now.

The obvious problem? Not everyone cares about their own individual continuous performance improvement. As a manager for over a decade, I’ve had employees who were intellectually lazy, physically lazy, intellectually weak and trying hard to cover it up, or just in the job for the paycheck so they would do anything to avoid doing real work. Performance appraisals serve the purpose, in my opinion, of forcing dialogue with these employees who otherwise might not talk about their performance at all. Sometimes it works, sometimes it doesn’t. In many cases, managers and employees trudge through the performance review process on autopilot. It becomes a drawn-out paper exercise; an infrequent opportunity to shield a company from liability in case a disgruntled employee feels they’ve been wrongly terminated.