Tag Archives: quality tools

Quality Tools in Daily Life

Image Credit: Lucy Glover Photography


This past month, ASQ asked the Influential Voices: “How do you incorporate quality tools into your daily life?” That’s a topic I’ve covered here often, and from many different perspectives.

Another simple way I apply principles from quality management to my day-to-day life is by structuring my problem-solving plans in terms of DMAIC, DMADV, or Root Cause Analysis (RCA). Sometimes, more than one methodology can be useful. How do you choose which methodology to use? Here’s how I do it:

  • DMAIC (Define, Measure, Analyze, Improve, Control) is applied to process improvement. The process has to exist already… and it’s already performing to specifications. But you want to make it even better. Applying the measurement and analysis tools associated with DMAIC can help.
  • DMADV (Define, Measure, Analyze, Design, Verify) is applied to new process design. The process doesn’t exist yet… and you need to create it so that it satisfies needs. This approach helps you articulate and implement innovative possibilities.
  • Root Cause Analysis is also applied to process improvement. The process exists already… but something’s wrong! Quality standards or performance standards are not being met, and we need to figure out why so we can fix it. Applying the basic quality tools associated with RCA can help.
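
This decision rule is simple enough to sketch as code. Here's a minimal Python illustration – the function name and its flags are hypothetical, invented purely for demonstration:

```python
# Hypothetical helper illustrating the methodology-selection logic above.
def choose_methodology(process_exists: bool, meets_standards: bool = True) -> str:
    """Pick a problem-solving methodology based on the state of the process."""
    if not process_exists:
        return "DMADV"                 # design a new process to satisfy needs
    if not meets_standards:
        return "Root Cause Analysis"   # something's wrong; figure out why
    return "DMAIC"                     # process works, but could be even better

print(choose_methodology(process_exists=False))                        # DMADV
print(choose_methodology(process_exists=True, meets_standards=False))  # Root Cause Analysis
print(choose_methodology(process_exists=True))                         # DMAIC
```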

Here’s an example of how we’ve applied this approach.

One of the members of my household is frustrated by the way the dishwasher is loaded. He thinks the process can be substantially improved, so that we can fit more dishes in at any given time (thus conserving water and dish detergent) and relieving his frustration. We applied RCA tools (FMEA and Pareto charts) to determine that it was a training issue… one person in the house needed to be trained on the appropriate process for queueing up the dishes and loading them. DMAIC was applied to make sure that this training occurred, and that there was a control plan in place to ensure that the lessons learned were consistently retained. This lengthened the cycle time (a good thing!) from one load a day to one load every three days, and eliminated almost all of the frustration. 🙂

Quality and Innovation in the Counterculture

Inside the Temple of Grace at Burning Man 2014. Image Credit: John David Tupper (photographerinfocus.com)


This week, I was the guest blogger at the American Society for Quality’s “View from the Q” where I shared some anecdotes about encountering quality tools and concepts at Burning Man this past August.

Check it out and learn what’s so great about “MOOP”.

A Day at “Quality Camp”!

On Saturday, March 24, I led the “Quality Tools for Problem Solving” sessions at the 2012 JMU Expanding Your Horizons conference, affiliated with the national program of the same name. In just 50 minutes, we learned how to use affinity diagrams, checksheets, and Pareto Charts to identify the “vital few” causes or reasons for each team’s problem of choice.

The mission of the conference is to stimulate interest in science, math, engineering, and technology (the STEM fields) among girls in junior high and early high school. My sessions drew about 30 young participants from 5th through 9th grade, plus about 10 parents who very enthusiastically formed parents-only teams (including one that explored the question “Our children drive us crazy! What can they improve on to enhance their relationships with parents?”)

It was a great day. Everyone was engaged, several of the students commented on how easy it was to generate solid, data-driven conclusions, and a couple parents even remarked on what a great process this would be to solve problems more effectively at work!

Our 55-minute session was scheduled like this:

  • 10 minutes intro
  • 6 minutes to choose a team name and a problem to focus on
  • 6 minutes to brainstorm as many causes or reasons for the problem as possible, and write each idea on a yellow post-it note
  • 6 minutes to categorize the yellow post-it notes into 6 themes, write those themes on colored post-it notes, and create an affinity diagram out of all the post-it notes
  • 6 minutes to prepare a one-page checksheet for all participants to record their “Top 2 Choices”
  • 6 minutes for everyone to record their “votes” on each of the teams’ checksheets
  • 10 minutes for the teams to prepare their Pareto Charts by hand
  • 1 minute for each team to informally share their results!
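
The heart of the exercise – tallying the checksheet votes and ranking the themes for the Pareto chart – can be sketched in a few lines of Python. The themes and votes below are invented for illustration; each entry represents one of a participant's “Top 2 Choices”:

```python
from collections import Counter

# Hypothetical tally from one team's checksheet (each participant marked
# their "Top 2 Choices" among the affinity-diagram themes).
votes = ["Homework", "Chores", "Screen time", "Homework", "Chores",
         "Homework", "Sleep", "Chores", "Homework", "Screen time"]

counts = Counter(votes)
ranked = counts.most_common()   # themes sorted most-voted first, as on a Pareto chart
total = sum(counts.values())

cumulative = 0
for theme, n in ranked:
    cumulative += n
    print(f"{theme:12s} {n:2d} votes  cumulative {100 * cumulative / total:5.1f}%")
```

The printed table is exactly what the teams drew by hand: bars from tallest to shortest, with a running cumulative percentage for the overlaid line.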

Several parents asked if they could use the same exercise at their workplace and I enthusiastically agreed. This applies to everyone! Feel free to use the exercise and attached worksheets – just be sure to cite the URL and let us know how you’re making use of the materials. Have fun!

The exercise is at http://nicoleradziwill.com/courses/affinity-checksheet-pareto-exercise.pdf

The worksheets we used are at http://nicoleradziwill.com/courses/eyh-worksheets.jpg (better copies will be posted in a couple days)

Pareto Charts in R

A Pareto Chart is a sorted bar chart that displays the frequency (or count) of occurrences that fall in different categories, from greatest frequency on the left to least frequency on the right, with an overlaid line chart that plots the cumulative percentage of occurrences. The vertical axis on the left of the chart shows frequency (or count), and the vertical axis on the right of the chart shows the cumulative percentage. A Pareto Chart is typically used to visualize:

  • Primary types or sources of defects
  • Most frequent reasons for customer complaints
  • Amount of some variable (e.g. money, energy usage, time) that can be attributed to or classified according to a certain category

The Pareto Chart is typically used to separate the “vital few” from the “trivial many” using the Pareto principle, also called the 80/20 Rule, which asserts that approximately 80% of effects come from 20% of causes for many systems. Pareto analysis can thus be used to find, for example, the most critical types or sources of defects, the most common complaints that customers have, or the most essential categories within which to focus problem-solving efforts.

To find out how to implement a Pareto Chart in R, download this PDF: Radziwill_Pareto 
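
The PDF above covers the R implementation; as a language-neutral sketch, here is the arithmetic behind the chart in Python. The defect categories and counts are made up for illustration:

```python
from collections import Counter

# Hypothetical defect log; categories and counts are invented.
defects = (["Scratch"] * 42 + ["Dent"] * 21 + ["Misalignment"] * 9 +
           ["Discoloration"] * 5 + ["Other"] * 3)

counts = Counter(defects).most_common()   # bars, sorted left to right
total = sum(n for _, n in counts)

cumulative_pct = []
running = 0
for category, n in counts:
    running += n
    cumulative_pct.append(round(100 * running / total, 1))
    print(f"{category:14s} {n:3d}  {cumulative_pct[-1]:5.1f}%")

# The bars are `counts`; the overlaid line is `cumulative_pct`.
```

Note how the first two categories already account for most of the total – the 80/20 pattern the chart is designed to expose.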

The Rubric as a General Purpose Quality Tool

According to dictionary.com, one of the definitions for rubric is “any established mode of conduct; protocol.” But the context you’ve probably heard this word in is education – where a grading rubric or a scoring rubric is used to evaluate a complex artifact like a student essay.

In my opinion, it’s time to move the concept of the rubric from the classroom into the mainstream, because it can be applied as a very practical general purpose quality tool! (Hear that, Nancy Tague? I think you should write about rubrics in your next edition of the very excellent book The Quality Toolbox. Let me know if you’d like me to help make this happen.)

A rubric is basically a grid with 1) levels of performance indicated along the top row, and 2) criteria or dimensions of performance listed down the leftmost column. Each cell of the grid contains a descriptive statement that explains how the level of performance in that column might be achieved for a specific dimension:

For example, here’s a rubric that one group constructed to evaluate the quality of the mind maps that they were producing. The performance levels are organized from high performance in the top left (smiley face giving a thumbs up) to low performance in the top right (smiley face that looks like he’s about to pass out):

The dimensions of performance are neatness and presentation, use of images/symbols, and use of color. The descriptive statements in each cell provide specific examples of how the performance level might be achieved, e.g. “has failed to include color in the mind map” is an indicator of a low performance level for the dimension of “use of color” – which is very understandable!
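
A rubric like this maps naturally onto a simple data structure. Below is a hypothetical Python sketch – the cell wording loosely paraphrases the mind-map example, and the point values in the scoring scheme are my own invention:

```python
# A rubric modeled as a grid: dimensions of performance as rows,
# performance levels as columns. All wording here is illustrative.
rubric = {
    "use of color": {
        "high": "color used consistently to organize and connect ideas",
        "medium": "some color, applied inconsistently",
        "low": "has failed to include color in the mind map",
    },
    "use of images/symbols": {
        "high": "images or symbols reinforce nearly every branch",
        "medium": "a few images or symbols appear",
        "low": "text only, with no images or symbols",
    },
}

def score(ratings: dict) -> float:
    """Average the ratings (high=2, medium=1, low=0) across dimensions."""
    points = {"high": 2, "medium": 1, "low": 0}
    return sum(points[level] for level in ratings.values()) / len(ratings)

example = {"use of color": "low", "use of images/symbols": "high"}
print(score(example))  # 1.0
```

The dictionary is the grid; evaluating an artifact means choosing one column per row, which is what lends the process its objectivity.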

The concept of the rubric as a performance assessment tool is relatively new! Griffin (2009), in a brief history of the rubric, notes that since its introduction in 1981, “the scoring rubric has evolved into a more precise, technical, scientific-looking document. It carries a tone of certainty, authority, and exactitude.” However, she notes, the utility of a rubric will depend upon the thought and consideration that goes into its construction. “A rubric is a product of many minds working collaboratively to create new knowledge. It will, almost by definition, be more thoughtful, valid, unbiased and useful than any one of us could have conceived of being as we worked in isolation.”

Advantages of applying a well-developed rubric include:

  • Provides a common language for sharing expectations and feedback
  • Helps to clarify and distinguish the differences between various performance levels
  • Helps to focus an individual’s or group’s attention on relevant aspects of each desired quality characteristic or skill area
  • Provides a mechanism to more easily identify strengths and opportunities for improvement
  • Helps lend objectivity to an evaluation process that might otherwise be subjective

Disadvantages:

  • Different rubrics may need to be devised for the different activities or artifacts that are to be evaluated using the rubric
  • Not all evaluators will apply the rubric in exactly the same way – there is a subjective element at work here – so people may need to be trained in the use of a rubric, or it may be more effective in a group consensus context where inter-rater variability can be discussed and resolved interactively
  • Creating a rubric can be time consuming
  • The rubric may limit exploration of solutions or modes of presentation that do not conform to the rubric

Using Rubrics for Quality Improvement

Rubrics are already applied in the world of quality, although I’ve never heard them go by that name. The process scoring guidelines for the Baldrige Criteria are essentially rubrics (although the extra dimension of ADLI and LeTCI has to be considered in the mind of the examiner). The International Team Excellence Award (ITEA) criteria in the Team Excellence Framework (TEF) also form a rubric, in conjunction with the performance levels of “missing,” “unclear,” “meets expectations,” and “exceeds expectations.”

I see a lot of ways in which rubrics can be developed and applied in the quality community to help us establish best practices for some of our most common project artifacts, such as Project Charters. Nancy Tague includes a Project Charter Checklist in The Quality Toolbox to help us create better and more complete charters… but what if we added a second dimension, which includes performance levels, and turned this checklist into a rubric? Any checklist could be transformed into a rubric. Furthermore, to develop a good rubric, we can brainstorm and rank all of the potential criteria in the left-hand column, using a Pareto chart to separate the vital few criteria from the trivial many.
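
To make the checklist-to-rubric idea concrete, here's a hypothetical Python sketch. The checklist criteria are invented (a real Project Charter checklist would supply its own), and the performance levels borrow from the ITEA set mentioned above:

```python
# Sketch: turn a flat checklist (criterion -> done / not done) into a
# rubric skeleton by adding a performance-level dimension.
checklist = ["problem statement", "scope", "goal statement", "team roles"]
levels = ["missing", "unclear", "meets expectations", "exceeds expectations"]

# Each checklist item becomes a row with one empty cell per performance
# level; the descriptive statements get filled in by the team.
rubric = {criterion: {level: "" for level in levels} for criterion in checklist}

print(len(rubric), "criteria x", len(levels), "levels")  # 4 criteria x 4 levels
```

Filling in those empty cells collaboratively is where the real value lies, per Griffin's point that a rubric is “a product of many minds working collaboratively.”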

Are any of you already using rubrics for purposes outside training or education? I would love to start a list of resources to share with the quality community.


Reference: Griffin, M. (2009). What is a rubric? Assessment Update, 21(6).

Note: There is a comprehensive site containing many examples of rubrics at http://www.web.virginia.edu/iaas/assess/tools/rubrics.shtm – however, they won’t open in Google Chrome.