Category Archives: Socio-Technical Systems

Happy 10th Birthday!

10 years ago today, this blog published its first post: “How Do I Do a Lean Six Sigma (LSS) Project?” Looking back, it seems like a pretty simple place to have started. I didn’t know whether it would even be useful to anyone, but I was committed to making my personal PDSA cycles high-impact: I was going to share the things I learned and the things I found valuable. (As it turns out, many people did appreciate the early posts, even though it would take a few years for that to become evident!)

Since then, hundreds more have followed to help people understand more about quality and process improvement in theory and in practice. I started writing because I was in the middle of my PhD dissertation in the Quality Systems program at Indiana State, and I was discovering so many interesting nuggets of information that I wanted to share them with the world – particularly practitioners, who might not have much time (or even interest) to sift through the research. In addition, I was using data science (and some machine learning, although at the time it was much more difficult to implement) to explore quality-related problems, and could see the earliest signs that this new paradigm for problem solving might help fuel data-driven decision making in the workplace… if only we could make the advanced techniques easy for people in busy jobs to use and apply.

We’re not there yet, but as ASQ and other organizations recognize Quality 4.0 as a focus area, we’re much closer. As a result, I’ve made it my mission to help bring insights from research to practitioners, to make these new innovations real. If you are developing or demonstrating any new innovative techniques that relate to making people, processes, or products better, easier, faster, or less expensive — or reducing risks and building individual and organizational capabilities — let me know!

I’ve also learned a lot over the past decade, most of which I spent helping undergraduate students develop and refine their data-driven decision making skills, and more recently working at Intelex (a provider of integrated environment, health & safety, and quality management (EHSQ) software for enterprises and smaller organizations). Here are some of the big lessons:

  1. People are complex. They have multidimensional lives, and work should support and enrich those lives. Any organization that cares about performance — internally and in the market — should examine how it can create complete and meaningful experiences. This applies not only to customers, but also to employees, partners, and suppliers. It also applies to anyone an organization has the power and potential to impact, no matter how small that impact may seem.
  2. Everybody wants to do a good job (and be recognized for it). How can we create environments where each person is empowered to contribute in all the areas where they have talent and interest? How can these same environments be designed with empathy as a core capability?
  3. Your data are your most valuable assets. It sounds trite, but data are becoming as valuable as warehouses, inventory, and equipment. I was involved in a project a few years ago where we digitized data that had been collected over three years — and by analyzing it, we uncovered improvement opportunities that, when implemented, saved thousands of dollars a week. We would not have been able to do that if the data had remained scratched in pencil on thousands of sheets of well-worn legal paper.
  4. Nothing beats domain expertise (especially where data science is concerned). I’ve analyzed terabytes of data over the past decade, and in many cases, the secrets are subtle. Any time you’re using data to make decisions, be sure to engage the people with practical, on-the-ground experience in the area you’re studying.
  5. Self-awareness must be cultivated. The older you get, and the more experience you gain, the more you know what you don’t know. Many of my junior colleagues (and yours) haven’t reached this point yet, and will need some help from senior colleagues to gain this awareness. At the same time, those of you who are senior have valuable lessons to learn from your junior colleagues, too! Quality improvement is grounded in personal and organizational learning, and processes should help people help each other uncover blind spots and work through them — without fear.


Most of all, I discovered that what really matters is learning. We can spend time supporting human and organizational performance, developing and refining processes that have quality baked in, and making sure that products meet all their specifications. But what’s going on under the surface is more profound: people are learning about themselves, they are learning about how to transform inputs into outputs in a way that adds value, and they are learning about each other and their environment. Our processes just encapsulate that organizational knowledge that we develop as we learn.

Why FEMA is Monitoring Waffle House this Weekend

This article originally appeared on the Intelex Community on 9/14/2018 at https://community.intelex.com/explore/posts/why-fema-monitoring-waffle-house-weekend

Sometimes the most informative metrics show up in the strangest of places.

Case in point: with a hurricane making landfall today in North Carolina, and the prospect for catastrophic flooding over the weekend and into next week, emergency managers are mobilizing for action – and if you’re in the path of the storm, you may be doing the same. Have you started monitoring the Waffle House Index? The US Federal Emergency Management Agency (FEMA) has.

Originally devised by W. Craig Fugate, former FEMA Administrator, the Waffle House Index is based on the observation that the popular 24-hour breakfast chain has historically been unusually well prepared for disasters. Part of its business model is to be the place emergency personnel can rely on for coffee and nourishment – a valuable role when power crews, rescue teams, and debris removal workers are working long, hard hours.

To do this, the company makes sure all employees have disaster training, stocks all of its restaurants with generators, and maintains a reduced menu designed specifically for the aftermath of a disaster. Over time, this even led to a more formal partnership between Waffle House and FEMA: FEMA first responders are known to set up initial operations in Waffle House locations, and Waffle House now reports the status of each location to FEMA after a disaster to facilitate data collection.

The Waffle House Index is a red, yellow, or green marker placed on a map wherever a Waffle House location is found. Under normal conditions, the marker is green. If the restaurant has shifted into emergency operations and is offering its limited menu, the marker is yellow. If the marker is red, the Waffle House is closed – either the site itself is damaged or destroyed, emergency staff cannot reach the site, the emergency generators are down or out of fuel, or there is a food shortage. When FEMA sees one or more reds, it knows an area is in particularly bad shape – and that it will need to step in.
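For the curious, here is a minimal sketch of how the index logic might look in code. The input format and field names are hypothetical; FEMA does not publish an API for this.

```python
# A toy version of the Waffle House Index classification.
def waffle_house_index(location):
    """Map a restaurant's reported status to an index color."""
    if not location["open"]:
        # Closed: site damaged, staff can't reach it, generators down, or no food
        return "red"
    if location["limited_menu"]:
        # Open, but on the reduced post-disaster menu (emergency operations)
        return "yellow"
    return "green"  # business as usual

# Example: a site running on generators and serving the storm menu
print(waffle_house_index({"open": True, "limited_menu": True}))  # -> "yellow"
```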

What can you learn about risk-based thinking from the Waffle House Index? Three things. First, you can (and should) look outside your organization for risk indicators that might help you make better (and faster) decisions, particularly when those risks materialize. Second, you should explore crowdsourced risk data as a source of up-to-date information.

And finally – if Waffle House is closed, there’s a serious problem.


Additional Reading: McKnight, B., & Linnenluecke, M. K. (2016). How firm responses to natural disasters strengthen community resilience: A stakeholder-based perspective. Organization & Environment, 29(3), 290-307.

Walter, L. (2011, July 6). What do waffles have to do with risk management? EHS Today. Available from https://www.ehstoday.com/fire_emergencyresponse/disaster-planning/waffles-risk-management-0706

Risk-Based Thinking: In ISO 9001 and Beyond (Interview)

On August 31, Quality Digest interviewed me on Quality Digest Live in advance of the webinar on Risk-Based Thinking that we held (sponsored by Intelex) on September 6. You can watch it on YouTube (13:42)! I answer the questions:

  • Is risk-based thinking different from enterprise risk management (ERM) or operations risk management (ORM)?
  • Who is risk-based thinking for?
  • Are there good and bad risks? Is opportunity really the “flip side” of risk?
  • Can focusing on risk inhibit innovation?

I’ll also be capturing the information from the webinar in a series of reports later this month that will be available to everyone. Stay tuned!

Practical Poka-Yoke

[Note: I’ve been away from the blog for several months now in the middle of very significant changes in my life. That’s about to change! In the next post, I’ll tell you about what happened and what my plans are for the future. In the meantime, I wanted to share something that happened to me today.]

A couple of hours ago, I went to the ATM.

I don’t use cash often, so I hadn’t been to an ATM in several months. Still, I’m fully accustomed to the pattern: put card in, enter secret code, tell the machine what I want, get my money, take my card. This time, I was really surprised by how long it was taking for my money to pop out.

Maybe there’s a problem with the connectivity? Maybe I should check back later? I sat in my car thinking about what the best plan of action would be… and then I decided to read the screen. (Who needs to read the screen? We all know what’s supposed to happen… so much so that I was once able to use an ATM entirely in Icelandic.)

PLEASE TAKE YOUR CARD TO DISPENSE FUNDS, it said.

This is one of the simplest and greatest examples of poka-yoke (or “mistake-proofing”) I’ve ever seen. I had to take my card out and put it away before I could get my money! I was highly motivated to get the money (I mean, that’s the specific thing I came to the ATM for), so of course I was going to do whatever was required to accomplish my goal. The machine was forcing me to take my card — preventing the mistake of accidentally leaving it behind, which could be problematic for both me and the bank.

Why have I never seen this before? Why don’t other ATMs do this? I went on an intellectual fishing expedition and found out that no, the idea is not new… Lockton et al. (2010) described it like this:

A major opportunity for error with historic ATMs came from a user leaving his or her ATM card in the machine’s slot after the procedure of dispensing cash or other account activity was complete (Rogers et al., 1996, Rogers and Fisk, 1997). This was primarily because the cash was dispensed before the card was returned (i.e. a different sequence for Plan 3 in the HTA of Fig. 3), leading to a postcompletion error—“errors such as leaving the original document behind in a photocopier… [or] forgetting to replace the gas cap after filling the tank” (Byrne and Bovair, 1997). Postcompletion error is an error of omission (Matthews et al., 2000); the user’s main goal (Plan 0 in Fig. 3) of getting cash was completed so the further “hanging postcompletion action” (Chung and Byrne, 2008) of retrieving the card was easily forgotten.

The obvious design solution was, as Chung and Byrne (2008) put it, “to place the hanging postcompletion action ‘on the critical path’ to reduce or eliminate [its] omission” and this is what the majority of current ATMs feature (Freed and Remington, 2000): an interlock forcing function (Norman, 1988) or control poka-yoke (Shingo, 1986), requiring the user to remove the card before the cash is dispensed. Zimmerman and Bridger (2000) found that a ‘card-returned-then-cash-dispensed’ ATM dialogue design was at least 22% more efficient (in withdrawal time) and resulted in 100% fewer lost cards (i.e. none) compared with a ‘cash-dispensed-then-card-returned’ dialogue design.
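To make the design difference concrete, here is a minimal sketch (my own illustration in Python, not any real ATM’s software) of the two dialogue sequences. The interlock simply places the postcompletion action on the critical path to the user’s goal:

```python
# Two ATM dialogue designs. With the interlock forcing function, the card
# must be removed BEFORE the cash appears, so forgetting the card would
# also mean forgetting the cash -- which users are strongly motivated not to do.
def atm_withdrawal_steps(interlock=True):
    steps = ["insert card", "enter PIN", "request amount"]
    if interlock:
        steps += ["return card", "dispense cash"]  # postcompletion action on the critical path
    else:
        steps += ["dispense cash", "return card"]  # card retrieval dangles after the goal is met
    return steps

print(atm_withdrawal_steps(interlock=True))
```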

I don’t think the most compelling message here has anything to do with design or ATMs; it’s about the value of hidden gems tucked into research papers. There is a long lag between recording genius ideas and making them broadly available to help people. One of my goals over the next few years is to help as many of these nuggets get into the mainstream as possible. If you’ve got findings that you think would benefit the entire quality community (or quality management systems or software), get in touch… I want to hear from you!


Reference:

Lockton, D., Harrison, D., & Stanton, N. A. (2010). The Design with Intent Method: A design tool for influencing user behaviour. Applied Ergonomics, 41(3), 382-392.

Value Propositions for Quality 4.0

In previous articles, we introduced Quality 4.0, the pursuit of performance excellence as an integral part of an organization’s digital transformation. It’s one aspect of Industry 4.0 transformation towards intelligent automation: smart, hyperconnected(*) agents deployed in environments where humans and machines cooperate and leverage data to achieve shared goals.

Automation is a spectrum: an operator can specify a process that a computer or intelligent agent executes, the computer can make decisions for an operator to approve or adjust, or the computer can make and execute all decisions. Similarly, machine intelligence is a spectrum: an algorithm can provide advice, take action with approvals or adjustments, or take action on its own. We have to decide what value is generated when we introduce various degrees of intelligence and automation in our organizations.
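One rough way to picture that spectrum is as discrete levels. This is only a sketch; the level names below are my own shorthand, not a standard taxonomy:

```python
from enum import Enum

class AutomationLevel(Enum):
    """Illustrative points along the automation spectrum (names are shorthand)."""
    HUMAN_SPECIFIES = 1    # operator defines the process; machine executes it
    MACHINE_PROPOSES = 2   # machine recommends decisions; operator approves or adjusts
    MACHINE_DECIDES = 3    # machine makes and executes all decisions on its own

def needs_human_approval(level):
    return level is not AutomationLevel.MACHINE_DECIDES

print(needs_human_approval(AutomationLevel.MACHINE_PROPOSES))  # -> True
```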

How can Quality 4.0 help your organization? How can you improve the performance of your people, projects, products, and entire organizations by implementing technologies like artificial intelligence, machine learning, robotic process automation, and blockchain?

A value proposition is a statement that explains what benefits a product or activity will deliver. Quality 4.0 initiatives have these kinds of value propositions:

  1. Augment (or improve upon) human intelligence
  2. Increase the speed and quality of decision-making
  3. Improve transparency, traceability, and auditability
  4. Anticipate changes, reveal biases, and adapt to new circumstances and knowledge
  5. Evolve relationships and organizational boundaries to reveal opportunities for continuous improvement and new business models
  6. Learn how to learn; cultivate self-awareness and other-awareness as a skill

Quality 4.0 initiatives add intelligence to monitoring and managing operations – for example, predictive maintenance can help you anticipate equipment failures and proactively reduce downtime. They can help you assess supply chain risk on an ongoing basis, or help you decide whether to take corrective action. They can also help you improve cybersecurity: documenting and benchmarking processes can provide a basis for detecting anomalies, and understanding expected performance can help you detect potential attacks.
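As a toy illustration of that last point, a benchmarked process can drive a simple control-style anomaly check. The data and threshold below are made up:

```python
from statistics import mean, stdev

def flag_anomalies(history, recent, k=3.0):
    """Flag recent observations more than k standard deviations from the
    benchmarked (historical) mean: a crude control-limit style check."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in recent if abs(x - mu) > k * sigma]

# Hypothetical daily login counts; a sudden spike might signal an attack
baseline = [102, 98, 105, 97, 101, 99, 103, 100]
print(flag_anomalies(baseline, [104, 187]))  # -> [187]
```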


(*) Hyperconnected = (nearly) always on, (nearly) always accessible.

Perception of Value & Today’s Cryptocurrency “Crash”

Artist’s rendering of Bitcoin. THERE ARE NO ACTUAL COINS THAT LOOK LIKE THIS. Don’t ever let anyone sell you one.

Today, many cryptocurrencies lost ~35-50% of their value. Reddit even posted contact information for the National Suicide Prevention Hotline in /r/cryptocurrency, knowing how emotional investors were bound to be today. Bitcoin, which was nearly $20K in mid-December and had been hovering near $14K this past week, dropped nearly $4K and almost sank below the $10K milestone. I usually track the price of Bitcoin at http://bitcointicker.co, which can show the posted prices from several exchanges (web locations where people go to buy and sell, like eBay). There are hundreds of cryptocurrencies, and many of them dropped in value today.

Why did the prices drop so much on Tuesday? Here are some likely influences:

Market prices are usually driven by supply and demand — for example, if there aren’t many lobsters available in a particular area at a particular time and you go to a restaurant hoping to order one, you’ll pay a premium. But price is also influenced by the quality of the product and by its image, both of which shape your perception of its value. Quality reflects how well something satisfies stated and implied needs or expectations.

Value, however, is quality relative to price, and influenced by image. And people are not always rational: they’ll pay a premium for image, even if the value of a product isn’t particularly high. Just think of all the Macs on display at schools, coffee shops, and airports. Price is related to value… usually, price goes up as value goes up.

Where’s the value of cryptocurrency? A Bitcoin does not, on its own, have any inherent value — just like a dollar or a euro (a “fiat currency”). But an asset that people expect to increase in value — one you can buy low, hold (sometimes just for a few days), and sell high because there are lots of people willing to buy it from you — acquires perceived value. Hundreds of early adopters — the “Bitcoin millionaires” — are getting people excited about the prospect of making small investments and reaping huge rewards. The recency of those success stories lends a mystique, on top of the novelty, to owning cryptocurrencies and altcoins (“alternatives to Bitcoin,” like Ether).

Value is attributed to things by people, and cryptocurrencies are no exception. The quality of the currency itself, and the technical solidity of the platform it’s based on, isn’t really tied to its price right now — although this will probably change as knowledge and awareness increase.

Is this the end of Bitcoin? That’s doubtful — there are too many innovators who insist on exploring the technological landscape of cryptocurrencies and blockchain technology, and plenty of investors willing to fund them. In the meantime, there is one unexpected upside: because cryptocurrencies are not yet mainstream, a “crypto crash” is unlikely to ripple through the whole economy (no pun intended) the way the subprime mortgage crisis of 2008 did. But if you do decide to buy cryptocurrency, don’t invest any more than you can afford to lose.

Blockchain and Quality

Quality is all about satisfying stated and implied needs – now, or in the future. When we envision and design high-quality products and services for the future, that’s innovation. One of the most hyped innovations of 2017 was blockchain, which has the potential to transform business models and the way quality is managed. The purpose of this article is to explain this relationship in a simple way.

Blockchain is the innovative technology supporting the Bitcoin cryptocurrency. Bitcoin gained tremendous traction in 2017, starting at just over $1,000 in January and reaching nearly $20,000 by the end of the year. It increased in value so much over this time that it’s been compared to the Dutch tulip market bubble of the 1630s. After tulips were imported into Holland from Turkey, an alteration to the solid colors of the tulips caused the appearance of “flames” on the petals. This made people believe that the tulip bulbs held extreme value, and many people traded their land and their savings to invest in what they felt was a “sure thing” – only to lose everything not long after, when the market corrected itself.

Bitcoin (USD) prices, 1/1/17-12/13/17. Generated using https://www.coindesk.com/price/.


The blockchain technology that supports Bitcoin is, at its core, a database. It’s a special kind of database, but no more magical, really – and easier to contextualize if you think about innovations in database technology over the past two decades.

Databases can be roughly classified into the following categories (a short code sketch contrasting the first two styles follows the list):

  • Relational databases (Oracle, MySQL, PostgreSQL, Sybase): When you can organize your data in terms of tables, fields, and relationships between those entities, a relational database is often appropriate. For example, your customer data might be kept in the “people” table with fields like address, state, or gender. Each record in the people table might have a type – employee, partner, or customer. Although records can be changed, it’s easy to accidentally input bad data, and it’s also easy to accidentally generate duplicate records. Scaling a relational database can also be rather tricky.
  • Non-relational (NoSQL) databases (MongoDB, Cassandra, Redis): If most of your data comes in large blobs and you don’t want to split it up into fields and tables, these databases are useful. MongoDB is great for collections of documents, such as web pages, log data, or tweets. Cassandra works well for analytics applications. Sensor data and other data types that change frequently or need to be held in active memory (for example, in key-value stores) are handled well by databases like Redis. NoSQL databases are easier to scale than relational databases.
  • Other databases and data stores with special properties: Some databases are so unique they don’t feel or act like databases. Solr, for example, is traditionally used when you have to provide search functionality over a store of documents. Hadoop is a distributed file system, so it functions somewhat like a database even though it’s not one. Graph databases are designed for data stores where the relationships are the most important aspect, so they are gaining popularity for social networks. Large, institutional science projects often store their data in special binary files that have distinct formats, can be queried like databases, and in many ways act like databases – but they are not technically databases.
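Here is a minimal sketch contrasting the relational and document styles, using Python’s built-in sqlite3 for the relational case and a JSON blob for the document case. The data is a toy example, not a recommended schema:

```python
import sqlite3, json

# Relational style: data is split into typed fields in a table
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE people (name TEXT, state TEXT, type TEXT)")
db.execute("INSERT INTO people VALUES ('Ada', 'IN', 'customer')")
print(db.execute("SELECT name FROM people WHERE type = 'customer'").fetchall())

# Document (NoSQL) style: each record is one self-describing blob,
# and fields can vary freely from record to record
doc_store = [json.dumps({"name": "Ada", "state": "IN", "type": "customer",
                         "last_tweet": "loving this coffee shop"})]
print(json.loads(doc_store[0])["name"])
```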


What Distinguishes Blockchain-based Databases from Ordinary Databases?

First, the blockchain is designed to handle transactions – it’s a digital ledger. So it’s not surprising that its first “successful” use cases are in the realm of cryptocurrency, where people engage in transactions with one another to exchange something of value.

Next, this database is immutable, meaning you can’t go back and change earlier records. Every time a new transaction occurs, a cryptographically sealed “snapshot” is taken of the entire database. When I first heard this, I was worried: so that means if we accidentally enter something incorrect into the database, it can never be changed, right? And its presence is memorialized forever? The answer to this question is: sort of. Thanks to “smart contracts”, we shouldn’t ever be in the situation where bad data gets entered into our blockchain-based system, because incoming data will be checked (by multiple agents) against the smart contract — and only allowed to join the blockchain database if it meets all the quality requirements specified by the contract. It’s like a fancy way to implement validation rules – with the added benefit of being totally traceable. Imagine how nice it would be to trace all the steps in the process that brought the fresh fruit into your kitchen – or any other product you use — just because all transactions in the production process were logged into a “supply blockchain.”
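To make “cryptographically sealed” and contract-checked validation concrete, here is a toy hash chain in Python. This is a sketch under heavy simplifying assumptions — real blockchains add distribution, consensus, and far more sophisticated contracts, and the validation rule below is purely hypothetical:

```python
import hashlib, json

def contract_ok(tx):
    """Stand-in 'smart contract': a validation rule incoming data must pass."""
    return isinstance(tx.get("amount"), (int, float)) and tx["amount"] > 0

def append_block(chain, tx):
    if not contract_ok(tx):
        raise ValueError("transaction rejected by contract")
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"tx": tx, "prev": prev_hash}, sort_keys=True)
    chain.append({"tx": tx, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

chain = []
append_block(chain, {"from": "farm", "to": "grocer", "amount": 12})
append_block(chain, {"from": "grocer", "to": "kitchen", "amount": 3})
# Each block's hash covers the previous block's hash, so silently editing
# an early record would invalidate every hash that follows it.
```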

A blockchain database is also decentralized and distributed — you don’t just “buy a blockchain database” and install it at your company. Databases can be centralized, decentralized, or distributed. Most business databases in the past were centralized: there was one instance installed, and a database administrator (or team of them) ensured the performance and security of the database while everyone in the organization created and used applications that interacted with the data. Today, these databases are more commonly distributed: there’s not just one instance, but several – there is no central storage, but there may be storage on many computers, or over a network of connected computers (or “in the cloud”). 

Decentralized systems have many advantages – nodes, for example, can join or leave the network at will. If you own and control a web site, you can put it on the internet or take it off whenever you want. In decentralized systems, there is no single point of control. If a business wants to implement blockchain but also wants to control all the nodes, that should be a big red flag: by its nature, blockchain is decentralized, just like the internet itself.

Finally, blockchain is transparent. Any of the participants who own nodes can see all the transactions — so there should be fewer opportunities for fraud. This doesn’t mean that there isn’t opportunity for danger, though.


Why is Blockchain Potentially Useful for Quality Assurance?

In addition to enhancing provenance and traceability, one of the biggest envisioned applications of blockchain databases is to support machine-to-machine transactions. As intelligent agents grow in complexity and are trusted to handle more tasks, and as the Internet of Things (IoT) expands, there needs to be a high-quality record of how those objects and agents interact with other objects and agents – and with humans. Blockchain could also be used to support new business models like decentralized energy markets, where you consume energy from the local power plant but can also generate your own and contribute the excess to your local community for a fee. It could potentially transform middleware as well – the software that allows different software systems to communicate with one another. (A long time ago, someone told me that middleware is like “email for applications”: systems can send messages to one another so they know how to react, for example, when a company receives an order and several systems need to be alerted that it has arrived.)

In principle, transactions logged to a blockchain make it impossible to defraud participants in the process, and impossible to manipulate records after they are recorded. They are self-auditing and fully traceable. Blockchain won’t make quality assurance, tracking, or auditing EASY, but you should expect it to make the business landscape different – new business models will be possible, and it will be possible to entrust intelligent agents with more tasks.  

Blockchain can help us ensure that stated and implied needs are met, and do it in such a way that the integrity of our data is assured simply by its presence. But we’re not there yet. Developers still need to implement simple, demonstrable use cases to make it easier for managers and executives to map these technologies onto specific business needs. In addition, blockchain is slow compared to relational database systems, so this needs to be addressed as well before widespread adoption.


Read more in our December 2017 SQP article.
