Category Archives: Quality Systems

Quality 4.0: Reveal Hidden Insights with Data Sci & Machine Learning (Webinar)


What’s Quality 4.0, why is it important, and how can you use it to gain competitive advantage? Did you know you can benefit from Quality 4.0 even if you’re not a manufacturing organization? That’s right. I’ll tell you more next week.

Sign up for my 50-minute webinar at 2pm ET on Tuesday, October 16, 2018 — hosted by Dirk Dusharme and Mike Richman at Quality Digest. This won’t be your traditional “futures” talk to let you know about all of the exciting technology on the horizon… I’ve actually been doing and teaching data science, and applying machine learning to practical problems in quality improvement, for over a decade.

Come to this webinar if:

  1. You have a LOT of data and you don’t know where to begin
  2. You’re kind of behind… you still use paper and Excel and you’re hoping you don’t miss the opportunities here
  3. You’re a data scientist and you want to find out about quality and process improvement
  4. You’re a quality professional and you want to find out more about data science
  5. You’re a quality engineer and you want some professional preparation for what’s on the horizon
  6. You want to be sure you get on our Quality 4.0 mailing list to receive valuable information assets over the next couple of years to help you identify and capture opportunities

Register Here! See you on Tuesday. If you can’t make it, we’ll also be at the ASQ Quality 4.0 Summit in Dallas next month sharing more information about the convergence of quality and Big Data.

Happy 10th Birthday!

10 years ago today, this blog published its first post: “How Do I Do a Lean Six Sigma (LSS) Project?” Looking back, it seems like a pretty simple place to have started. I didn’t know whether it would even be useful to anyone, but I was committed to making my personal PDSA cycles high-impact: I was going to share the things I learned and found valuable. (As it turns out, many people did appreciate the early posts, even though it took a few years for that to become evident!)

Since then, hundreds more have followed to help people understand more about quality and process improvement, in theory and in practice. I started writing because I was in the middle of my PhD dissertation in the Quality Systems program at Indiana State, and I was discovering so many interesting nuggets of information that I wanted to share them with the world – particularly with practitioners, who might not have much time (or even the inclination) to sift through the research. In addition, I was using data science (and some machine learning, although at the time it was much more difficult to implement) to explore quality-related problems, and I could see the earliest signs that this new paradigm for problem solving might help fuel data-driven decision making in the workplace… if only we could make the advanced techniques easy for people in busy jobs to use and apply.

We’re not there yet, but as ASQ and other organizations recognize Quality 4.0 as a focus area, we’re much closer. As a result, I’ve made it my mission to help bring insights from research to practitioners, to make these new innovations real. If you are developing or demonstrating any new innovative techniques that relate to making people, processes, or products better, easier, faster, or less expensive — or reducing risks and building individual and organizational capabilities — let me know!

I’ve also learned a lot in the past decade, most of which I’ve spent helping undergraduate students develop and refine their data-driven decision making skills, and more recently at Intelex (provider of integrated environment, health & safety, and quality management EHSQ software to enterprises and smaller organizations). Here are some of the big lessons:

  1. People are complex. They have multidimensional lives, and work should support and enrich those lives. Any organization that cares about performance — internally and in the market — should examine how it can create complete and meaningful experiences. This applies not only to customers, but to employees and partners and suppliers. It also applies to anyone an organization has the power and potential to impact, no matter how small.
  2. Everybody wants to do a good job (and be recognized for it). How can we create environments where each person is empowered to contribute in all the areas where they have talent and interest? How can these same environments be designed with empathy as a core capability?
  3. Your data are your most valuable assets. It sounds trite, but data are becoming as valuable as warehouses, inventory, and equipment. I was involved in a project a few years ago in which we digitized data that had been collected over three years — and by analyzing it, we uncovered improvement opportunities that, when implemented, saved thousands of dollars a week. We could not have done that if the data had remained scratched in pencil on thousands of sheets of well-worn legal paper.
  4. Nothing beats domain expertise (especially where data science is concerned). I’ve analyzed terabytes of data over the past decade, and in many cases, the secrets are subtle. Any time you’re using data to make decisions, be sure to engage the people with practical, on-the-ground experience in the area you’re studying.
  5. Self-awareness must be cultivated. The older you get, and the more experience you gain, the more you know what you don’t know. Many of my junior colleagues (and yours) haven’t reached this point yet, and will need some help from senior colleagues to gain this awareness. At the same time, those of you who are senior have valuable lessons to learn from your junior colleagues, too! Quality improvement is grounded in personal and organizational learning, and processes should help people help each other uncover blind spots and work through them — without fear.

 

Most of all, I discovered that what really matters is learning. We can spend time supporting human and organizational performance, developing and refining processes that have quality baked in, and making sure that products meet all their specifications. But what’s going on under the surface is more profound: people are learning about themselves, they are learning about how to transform inputs into outputs in a way that adds value, and they are learning about each other and their environment. Our processes just encapsulate that organizational knowledge that we develop as we learn.

Quality 4.0: Let’s Get Digital

Want to find out what Quality 4.0 really is — and start realizing the benefits for your organization? Check out this month’s issue of ASQ’s Quality Progress, where my new article (“Let’s Get Digital”) does just that. Quality 4.0 — which we’re working to bring to the practice of quality management and quality engineering at Intelex — asks how we can leverage connected, intelligent, automated (C-I-A) technologies to increase efficiency, effectiveness, and satisfaction: “As connected, intelligent and automated systems are more widely adopted, we can once again expect a renaissance in quality tools and methods. The progression can be summarized through four themes:

  • Quality as inspection: In the early days, quality assurance relied on inspecting bad quality out of the total items produced. Walter A. Shewhart’s methods for statistical process control helped operators determine whether variation was due to random or special causes.
  • Quality as design: Inspired by W. Edwards Deming’s recommendation to cease dependence on inspection, more holistic methods emerged for designing quality into processes to prevent quality problems before they occurred.
  • Quality as empowerment: TQM and Six Sigma advocate a holistic approach to quality, making it everyone’s responsibility and empowering individuals to contribute to continuous improvement.
  • Quality as discovery: In an adaptive, intelligent environment, quality depends on how quickly we can discover and aggregate new data sources, how effectively we can discover root causes and how well we can discover new insights about ourselves, our products and our organizations.”
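To make the first theme concrete, here is a minimal sketch of the control-chart logic at the heart of Shewhart’s methods: points outside the limits suggest special-cause variation worth investigating, while points inside reflect common-cause (random) variation. The data and plain 3-sigma limits below are illustrative only; real SPC practice adds moving ranges, rational subgroups, and run rules.

```python
# Minimal Shewhart-style control check (illustrative data, 3-sigma limits).

def control_limits(baseline):
    """Estimate the center line and 3-sigma limits from baseline measurements."""
    n = len(baseline)
    mean = sum(baseline) / n
    sigma = (sum((x - mean) ** 2 for x in baseline) / (n - 1)) ** 0.5
    return mean - 3 * sigma, mean, mean + 3 * sigma

def classify(value, lcl, ucl):
    """Flag a measurement as common-cause or potential special-cause variation."""
    return "special cause? investigate" if not (lcl <= value <= ucl) else "common cause"

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
lcl, center, ucl = control_limits(baseline)
for x in [10.1, 9.9, 11.4]:
    print(f"{x:>5}: {classify(x, lcl, ucl)}")
```

The point of the sketch is the decision rule itself: inspection-era quality is fundamentally about separating noise you should tolerate from signals you should act on.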

Read more at http://asq.org/quality-progress/2018/10/basic-quality/lets-get-digital.html, or download the PDF: http://asq.org/quality-progress/2018/10/basic-quality/lets-get-digital.pdf

Why FEMA is Monitoring Waffle House this Weekend

This article originally appeared on the Intelex Community on 9/14/2018 at https://community.intelex.com/explore/posts/why-fema-monitoring-waffle-house-weekend

Sometimes the most informative metrics show up in the strangest of places.

Case in point: with a hurricane making landfall today in North Carolina, and the prospect for catastrophic flooding over the weekend and into next week, emergency managers are mobilizing for action – and if you’re in the path of the storm, you may be doing the same. Have you started monitoring the Waffle House Index? The US Federal Emergency Management Agency (FEMA) has.

Originally devised by W. Craig Fugate, the former FEMA administrator, the Waffle House Index is based on the observation that the popular 24-hour breakfast chain has historically been unusually well prepared for disasters. Part of its business model is to be the place emergency personnel can rely on for coffee and nourishment – a valuable role when power crews, rescue teams, and debris removal workers are putting in long, hard hours.

To do this, Waffle House gives all employees disaster training, stocks every restaurant with a generator, and maintains a reduced menu designed specifically for the aftermath of a disaster. Over time, this preparation led to a more formal partnership between the two organizations: FEMA first responders are known to set up initial operations in Waffle House locations, and Waffle House now reports the status of each location to FEMA after a disaster to facilitate data collection.

The Waffle House Index is a red, yellow, or green marker placed on a map wherever a Waffle House location is found. Under normal conditions, the marker is green. If the restaurant has shifted into emergency operations and is offering its limited menu, the marker is yellow. A red marker means the Waffle House is closed – either the site itself is damaged or destroyed, emergency staff cannot reach it, the generators are down or out of fuel, or there is a food shortage. When FEMA sees one or more reds, it knows an area is in particularly bad shape – and will need help.
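If you like to think in code, the rule above fits in a few lines. This is a purely illustrative model – the data structures and the site reports are hypothetical, not FEMA’s or Waffle House’s actual feed:

```python
# Illustrative model of the Waffle House Index rule described above.
# Hypothetical data structures -- not FEMA's or Waffle House's real systems.
from enum import Enum

class Index(Enum):
    GREEN = "full menu, normal operations"
    YELLOW = "limited menu: emergency operations"
    RED = "closed: damage, no access, no power/fuel, or no food"

def index_for(report):
    """Map a location's reported state to its marker color."""
    if not report["open"]:
        return Index.RED
    if report["limited_menu"]:
        return Index.YELLOW
    return Index.GREEN

reports = [
    {"site": "Site A", "open": False, "limited_menu": False},
    {"site": "Site B", "open": True, "limited_menu": True},
]
for r in reports:
    print(r["site"], "->", index_for(r).name)

# FEMA's aggregate signal: any red marker means the area likely needs help.
print("area needs help:", any(index_for(r) is Index.RED for r in reports))
```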

What can you learn about risk-based thinking from the Waffle House Index? Three things. First, you can (and should) look outside your organization for risk indicators that might help you make better (and faster) decisions, particularly when those risks materialize. Second, you should explore crowdsourced risk data as a source of up-to-date information.

And finally – if Waffle House is closed, there’s a serious problem.

 

Additional Reading: McKnight, B., & Linnenluecke, M. K. (2016). How firm responses to natural disasters strengthen community resilience: A stakeholder-based perspective. Organization & Environment, 29(3), 290-307.

Walter, L. (2011, July 6) What do waffles have to do with risk management? EHS Today. Available from https://www.ehstoday.com/fire_emergencyresponse/disaster-planning/waffles-risk-management-0706

Risk-Based Thinking: In ISO 9001 and Beyond (Interview)

On August 31, Quality Digest interviewed me on Quality Digest Live in advance of the webinar on Risk-Based Thinking that we held (sponsored by Intelex) on September 6. You can see it here on YouTube (13:42)! I answer the questions:

  • Is risk-based thinking different than enterprise risk management (ERM) or operations risk management (ORM)?
  • Who is risk-based thinking for?
  • Are there good and bad risks? Is opportunity really the “flip side” of risk?
  • Can focusing on risk inhibit innovation?

I’ll also be capturing the information from the webinar in a series of reports later this month that will be available to everyone. Stay tuned!

Practical Poka-Yoke

[Note: I’ve been away from the blog for several months now in the middle of very significant changes in my life. That’s about to change! In the next post, I’ll tell you about what happened and what my plans are for the future. In the meantime, I wanted to share something that happened to me today.]

A couple of hours ago, I went to the ATM.

I don’t use cash often, so I hadn’t been to an ATM in several months. Regardless, I’m fully accustomed to the pattern: put card in, enter secret code, tell the machine what I want, get my money, take my card. This time, I was really surprised by how long it was taking for my money to pop out.

Maybe there’s a problem with the connectivity? Maybe I should check back later? I sat in my car thinking about what the best plan of action would be… and then I decided to read the screen. (Who needs to read the screen? We all know what’s supposed to happen… so much so that I was once able to use an ATM entirely in Icelandic.)

PLEASE TAKE YOUR CARD TO DISPENSE FUNDS, it said.

This is one of the simplest and greatest examples of poka-yoke (or “mistake-proofing”) I’ve ever seen. I had to take my card out and put it away before I could get my money! I was highly motivated to get the money (I mean, that’s the specific thing I came to the ATM to get) so of course I’m going to do whatever is required to accomplish my goal. The machine was forcing me to take my card — preventing the mistake of me accidentally leaving my card in the machine — which could be problematic for both me and the bank.

Why have I never seen this before? Why don’t other ATMs do this? I went on an intellectual fishing expedition and found out that no, the idea is not new… Lockton et al. (2010) described it like this:

A major opportunity for error with historic ATMs came from a user leaving his or her ATM card in the machine’s slot after the procedure of dispensing cash or other account activity was complete (Rogers et al., 1996, Rogers and Fisk, 1997). This was primarily because the cash was dispensed before the card was returned (i.e. a different sequence for Plan 3 in the HTA of Fig. 3), leading to a postcompletion error—“errors such as leaving the original document behind in a photocopier… [or] forgetting to replace the gas cap after filling the tank” (Byrne and Bovair, 1997). Postcompletion error is an error of omission (Matthews et al., 2000); the user’s main goal (Plan 0 in Fig. 3) of getting cash was completed so the further “hanging postcompletion action” (Chung and Byrne, 2008) of retrieving the card was easily forgotten.

The obvious design solution was, as Chung and Byrne (2008) put it, “to place the hanging postcompletion action ‘on the critical path’ to reduce or eliminate [its] omission” and this is what the majority of current ATMs feature (Freed and Remington, 2000): an interlock forcing function (Norman, 1988) or control poka-yoke (Shingo, 1986), requiring the user to remove the card before the cash is dispensed. Zimmerman and Bridger (2000) found that a ‘card-returned-then-cash-dispensed’ ATM dialogue design was at least 22% more efficient (in withdrawal time) and resulted in 100% fewer lost cards (i.e. none) compared with a ‘cash-dispensed-then-card-returned’ dialogue design.
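For the programmers in the audience, here is a toy sketch of the interlock forcing function the quote describes: dispensing is blocked until the postcompletion action (taking the card) is already done. This is an illustrative state machine of the design pattern, not any real ATM’s software:

```python
# Toy model of an interlock forcing function (control poka-yoke): the user's
# main goal (cash) cannot complete until the easily-forgotten postcompletion
# action (removing the card) is on the critical path. Illustrative only.

class ATM:
    def __init__(self):
        self.card_inserted = False
        self.authorized = False

    def insert_card_and_authorize(self):
        self.card_inserted = True
        self.authorized = True

    def remove_card(self):
        self.card_inserted = False

    def dispense_cash(self, amount):
        if not self.authorized:
            raise RuntimeError("not authorized")
        if self.card_inserted:
            # Interlock: refuse to dispense until the card has been taken.
            raise RuntimeError("PLEASE TAKE YOUR CARD TO DISPENSE FUNDS")
        print(f"dispensing ${amount}")

atm = ATM()
atm.insert_card_and_authorize()
try:
    atm.dispense_cash(60)   # fails: card is still in the slot
except RuntimeError as e:
    print(e)
atm.remove_card()
atm.dispense_cash(60)       # succeeds: postcompletion step already done
```

The design choice is exactly the one Zimmerman and Bridger measured: reordering the dialogue costs nothing, yet it eliminates the error of omission entirely.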

I don’t think the most compelling message here is really about design or ATMs; it’s about the value of hidden gems tucked into research papers. There is a long lag between recording genius ideas and making them broadly available to help people. One of my goals over the next few years is to help as many of these nuggets as possible reach the mainstream. If you’ve got findings that you think would benefit the entire quality community (or quality management systems and software), get in touch… I want to hear from you!

 

Reference:

Lockton, D., Harrison, D., & Stanton, N. A. (2010). The Design with Intent Method: A design tool for influencing user behaviour. Applied Ergonomics, 41(3), 382-392.

Value Propositions for Quality 4.0

In previous articles, we introduced Quality 4.0, the pursuit of performance excellence as an integral part of an organization’s digital transformation. It’s one aspect of Industry 4.0 transformation towards intelligent automation: smart, hyperconnected(*) agents deployed in environments where humans and machines cooperate and leverage data to achieve shared goals.

Automation is a spectrum: an operator can specify a process that a computer or intelligent agent executes, the computer can make decisions for an operator to approve or adjust, or the computer can make and execute all decisions. Similarly, machine intelligence is a spectrum: an algorithm can provide advice, take action with approvals or adjustments, or take action on its own. We have to decide what value is generated when we introduce various degrees of intelligence and automation in our organizations.
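One illustrative way to make these two spectrums concrete is to name the levels explicitly. The labels below are my own informal sketch of the description above, not a standard taxonomy:

```python
# Informal encoding of the automation and intelligence spectrums described
# above. Level names are illustrative labels, not a standard taxonomy.
from enum import IntEnum

class Automation(IntEnum):
    OPERATOR_SPECIFIES = 1   # human defines the process, machine executes it
    MACHINE_PROPOSES = 2     # machine decides, human approves or adjusts
    MACHINE_DECIDES = 3      # machine makes and executes all decisions

class Intelligence(IntEnum):
    ADVISES = 1              # algorithm offers advice only
    ACTS_WITH_APPROVAL = 2   # algorithm acts after human sign-off
    ACTS_AUTONOMOUSLY = 3    # algorithm acts on its own

def human_in_the_loop(a: Automation, i: Intelligence) -> bool:
    """True when a person still participates in the decision cycle."""
    return a < Automation.MACHINE_DECIDES or i < Intelligence.ACTS_AUTONOMOUSLY

print(human_in_the_loop(Automation.MACHINE_PROPOSES, Intelligence.ADVISES))  # True
```

Positioning each initiative on both axes forces the value question: what do we actually gain (and risk) at each step up either spectrum?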

How can Quality 4.0 help your organization? How can you improve the performance of your people, projects, products, and entire organization by implementing technologies like artificial intelligence, machine learning, robotic process automation, and blockchain?

A value proposition is a statement that explains what benefits a product or activity will deliver. Quality 4.0 initiatives have these kinds of value propositions:

  1. Augment (or improve upon) human intelligence
  2. Increase the speed and quality of decision-making
  3. Improve transparency, traceability, and auditability
  4. Anticipate changes, reveal biases, and adapt to new circumstances and knowledge
  5. Evolve relationships and organizational boundaries to reveal opportunities for continuous improvement and new business models
  6. Learn how to learn; cultivate self-awareness and other-awareness as a skill

Quality 4.0 initiatives add intelligence to the monitoring and management of operations – for example, predictive maintenance can help you anticipate equipment failures and proactively reduce downtime. They can help you assess supply chain risk on an ongoing basis, or help you decide whether to take corrective action. They can also help you improve cybersecurity: documenting and benchmarking processes provides a basis for detecting anomalies, and understanding expected performance helps you detect potential attacks.
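As a concrete illustration of the predictive maintenance example, here is a minimal sketch that extrapolates a sensor’s trend to estimate when it will cross a failure threshold. The readings and threshold are synthetic, and real systems use much richer models than a straight line:

```python
# Illustrative predictive-maintenance check: fit a least-squares slope to
# hourly sensor readings and project when the failure threshold is reached.
# Synthetic data and threshold -- real systems use far richer models.

def hours_to_threshold(readings, threshold):
    """Return estimated hours until readings cross the threshold (None if no drift)."""
    n = len(readings)
    x_mean = (n - 1) / 2
    y_mean = sum(readings) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(readings)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    if slope <= 0:
        return None  # no upward drift detected
    return (threshold - readings[-1]) / slope

vibration = [2.1, 2.2, 2.4, 2.7, 3.1, 3.6]   # mm/s, one reading per hour
eta = hours_to_threshold(vibration, threshold=7.0)
if eta is not None:
    print(f"Projected to exceed threshold in ~{eta:.0f} hours; schedule maintenance.")
```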


(*) Hyperconnected = (nearly) always on, (nearly) always accessible.
