[Note: I’ve been away from the blog for several months now in the middle of very significant changes in my life. That’s about to change! In the next post, I’ll tell you about what happened and what my plans are for the future. In the meantime, I wanted to share something that happened to me today.]
A couple of hours ago, I went to the ATM.
I don't use cash often, so I haven't been to an ATM in several months. Regardless, I'm fully accustomed to the pattern: put card in, enter secret code, tell the machine what I want, get my money, take my card. This time, I was really surprised by how long it was taking for my money to pop out.
Maybe there's a problem with the connectivity? Maybe I should check back later? I sat in my car thinking about what the best plan of action would be… and then I decided to read the screen. (Who needs to read the screen? We all know what's supposed to happen… so much so that I was once able to use an ATM entirely in Icelandic.)
PLEASE TAKE YOUR CARD TO DISPENSE FUNDS, it said.
This is one of the simplest and greatest examples of poka-yoke (or “mistake-proofing”) I’ve ever seen. I had to take my card out and put it away before I could get my money! I was highly motivated to get the money (I mean, that’s the specific thing I came to the ATM to get) so of course I’m going to do whatever is required to accomplish my goal. The machine was forcing me to take my card — preventing the mistake of me accidentally leaving my card in the machine — which could be problematic for both me and the bank.
Why have I never seen this before? Why don’t other ATMs do this? I went on an intellectual fishing expedition and found out that no, the idea is not new… Lockton et al. (2010) described it like this:
A major opportunity for error with historic ATMs came from a user leaving his or her ATM card in the machine’s slot after the procedure of dispensing cash or other account activity was complete (Rogers et al., 1996, Rogers and Fisk, 1997). This was primarily because the cash was dispensed before the card was returned (i.e. a different sequence for Plan 3 in the HTA of Fig. 3), leading to a postcompletion error—“errors such as leaving the original document behind in a photocopier… [or] forgetting to replace the gas cap after filling the tank” (Byrne and Bovair, 1997). Postcompletion error is an error of omission (Matthews et al., 2000); the user’s main goal (Plan 0 in Fig. 3) of getting cash was completed so the further “hanging postcompletion action” (Chung and Byrne, 2008) of retrieving the card was easily forgotten.
The obvious design solution was, as Chung and Byrne (2008) put it, “to place the hanging postcompletion action ‘on the critical path’ to reduce or eliminate [its] omission” and this is what the majority of current ATMs feature (Freed and Remington, 2000): an interlock forcing function (Norman, 1988) or control poka-yoke (Shingo, 1986), requiring the user to remove the card before the cash is dispensed. Zimmerman and Bridger (2000) found that a ‘card-returned-then-cash-dispensed’ ATM dialogue design was at least 22% more efficient (in withdrawal time) and resulted in 100% fewer lost cards (i.e. none) compared with a ‘cash-dispensed-then-card-returned’ dialogue design.
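To see how the interlock forcing function changes the dialogue, here is a toy sketch in Python. The event names and logic are invented for illustration (this is not real ATM firmware): the point is simply that taking the card sits on the critical path to the cash, so the postcompletion step cannot be skipped.

```python
# Toy sketch (hypothetical, not real ATM logic) of the interlock forcing function:
# the "take card" step is placed on the critical path, so cash cannot be
# dispensed until the card has been removed.

def atm_session(events):
    """events is the ordered list of user actions; returns what the ATM does."""
    card_removed = False
    dispensed = False
    for event in events:
        if event == "remove_card":
            card_removed = True
        elif event == "request_cash" and card_removed:
            dispensed = True
    return {"card_removed": card_removed, "cash_dispensed": dispensed}

# The user cannot reach the main goal (cash) without first completing the
# step that would otherwise be forgotten (taking the card).
print(atm_session(["request_cash"]))                 # no card removed -> no cash
print(atm_session(["remove_card", "request_cash"]))  # card out first -> cash dispensed
```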
I don’t think the most compelling message here has anything to do with design or ATMs, but with the value of hidden gems tucked into research papers. There is a long lag time between recording genius ideas and making them broadly available to help people. One of my goals over the next few years is to help as many of these nuggets get into the mainstream as possible. If you’ve got some findings that you think would benefit the entire quality community (or quality management systems or software), get in touch… I want to hear from you!
Lockton, D., Harrison, D., & Stanton, N. A. (2010). The Design with Intent Method: A design tool for influencing user behaviour. Applied Ergonomics, 41(3), 382-392.
In previous articles, we introduced Quality 4.0, the pursuit of performance excellence as an integral part of an organization’s digital transformation. It’s one aspect of Industry 4.0 transformation towards intelligent automation: smart, hyperconnected(*) agents deployed in environments where humans and machines cooperate and leverage data to achieve shared goals.
Automation is a spectrum: an operator can specify a process that a computer or intelligent agent executes, the computer can make decisions for an operator to approve or adjust, or the computer can make and execute all decisions. Similarly, machine intelligence is a spectrum: an algorithm can provide advice, take action with approvals or adjustments, or take action on its own. We have to decide what value is generated when we introduce various degrees of intelligence and automation in our organizations.
How can Quality 4.0 help your organization? How can you improve the performance of your people, projects, products, and entire organization by implementing technologies like artificial intelligence, machine learning, robotic process automation, and blockchain?
A value proposition is a statement that explains what benefits a product or activity will deliver. Quality 4.0 initiatives have these kinds of value propositions:
Augment (or improve upon) human intelligence
Increase the speed and quality of decision-making
Improve transparency, traceability, and auditability
Anticipate changes, reveal biases, and adapt to new circumstances and knowledge
Evolve relationships and organizational boundaries to reveal opportunities for continuous improvement and new business models
Learn how to learn; cultivate self-awareness and other-awareness as a skill
Quality 4.0 initiatives add intelligence to monitoring and managing operations – for example, predictive maintenance can help you anticipate equipment failures and proactively reduce downtime. They can help you assess supply chain risk on an ongoing basis, or help you decide whether to take corrective action. They can also help you improve cybersecurity: documenting and benchmarking processes can provide a basis for detecting anomalies, and understanding expected performance can help you detect potential attacks.
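As a concrete illustration of the monitoring idea, here is a minimal sketch in Python. The metric, threshold, and readings are all made up; the same basic pattern (a documented baseline plus a check against expected performance) underlies predictive maintenance and anomaly or attack detection.

```python
import statistics

# Hypothetical example: cycle times (in seconds) recorded while the process
# was known to be running normally form the documented baseline.
baseline_cycle_times = [41.2, 39.8, 40.5, 42.1, 40.9, 41.7, 40.3, 39.9]

mean = statistics.mean(baseline_cycle_times)
stdev = statistics.stdev(baseline_cycle_times)

def is_anomalous(observation, n_sigma=3):
    """Flag an observation that falls outside the expected range
    (mean +/- n_sigma standard deviations) established by the baseline."""
    return abs(observation - mean) > n_sigma * stdev

# New observations streaming in from a sensor or log:
for cycle_time in [40.7, 41.1, 55.4]:
    if is_anomalous(cycle_time):
        print(f"Investigate: cycle time {cycle_time}s is outside the expected range")
```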
(*) Hyperconnected = (nearly) always on, (nearly) always accessible.
Artist’s rendering of Bitcoin. THERE ARE NO ACTUAL COINS THAT LOOK LIKE THIS. Don’t ever let anyone sell you one.
Today, many cryptocurrencies lost ~35-50% of their value. Reddit even posted contact information for the National Suicide Prevention Hotline in /r/cryptocurrency, knowing how emotional investors were bound to be today. Bitcoin, which was nearly $20K in mid-December and has been hovering near $14K this past week, dropped nearly $4K and almost sank below the $10K milestone. I usually track the price of Bitcoin at http://bitcointicker.co, which can show the posted prices from several exchanges (web locations where people go to buy and sell, like eBay). There are hundreds of cryptocurrencies, and many of them dropped in value today.
Why did the prices drop so much on Tuesday? Here are some likely influences:
The government of South Korea announced its plans to prepare a bill banning cryptocurrency trading (specifically Bitcoin, Ethereum, Ripple); trading volume has been high in South Korea this past year, and the transactions have propped up global cryptocurrency prices.
Market prices are usually driven by supply and demand — for example, if there aren't that many lobsters available in a particular area at a particular time and you go to a restaurant hoping to order one, you'll pay a premium. But price is also influenced by the quality of the product and by its image, which shapes your perception of its value. Quality reflects how well something satisfies stated and implied needs or expectations.
Value, however, is quality relative to price, and influenced by image. And people are not always rational: they’ll pay a premium for image, even if the value of a product isn’t particularly high. Just think of all the Macs on display at schools, coffee shops, and airports. Price is related to value… usually, price goes up as value goes up.
Where's the value of cryptocurrency? A Bitcoin does not, on its own, have any inherent value — just like a dollar or a Euro (a "fiat currency"). But an asset that people expect to increase in value — one you can buy low, hold (sometimes just for a few days), and sell high because there are lots of people willing to buy it from you — acquires perceived value. Hundreds of early adopters — or "Bitcoin millionaires" — are getting people excited about the prospect of making small investments and reaping huge rewards. That this has happened so recently lends a mystique to owning cryptocurrencies and altcoins (or "alternatives to Bitcoin," like Ether), in addition to the novelty.
Value is attributed to things by people, and cryptocurrencies are no exception. The quality of a currency itself, and the technical solidity of the platform it's based on, aren't really tied to its price right now — although this will probably change as knowledge and awareness increase.
Is this the end of Bitcoin? That's doubtful — there are too many innovators who insist on exploring the technological landscape of cryptocurrencies and blockchain technology, and lots of investors willing to fund them. In the meantime, there's an unexpected upside: because cryptocurrencies are not yet mainstream, a "crypto crash" is not as likely to ripple through the whole economy (no pun intended) the way the subprime mortgage crisis of 2008 did. But if you do decide to buy cryptocurrency, don't invest any more than you can afford to lose.
Quality is all about satisfying stated and implied needs – now, or in the future. When we envision and design high-quality products and services for the future, that's innovation. One of the most hyped innovations of 2017 was blockchain, which has the potential to transform business models and the way quality is managed. The purpose of this article is to explain this relationship in a simple way.
Blockchain is the innovative technology supporting the Bitcoin cryptocurrency. Bitcoin gained tremendous traction in 2017, starting at just over $1,000 in January and reaching nearly $20,000 by the end of the year. It increased in value so much over this time that it's been compared to the Dutch tulip market bubble of the 1630s. After tulips were imported into Holland from Turkey, an alteration to the solid colors of the tulips caused the appearance of "flames" on the petals. This made people believe that the tulip bulbs held extreme value, and so many people traded their land and their savings to invest in what they felt was a "sure thing" – only to lose everything not long after, when the market corrected itself.
Bitcoin (USD) prices, 1/1/17-12/13/17. Generated using https://www.coindesk.com/price/.
The blockchain technology that supports Bitcoin is, at its core, a database. It’s a special kind of database, but no more magical, really – and easier to contextualize if you think about innovations in database technology over the past two decades.
Databases can be roughly classified into the categories below (a short sketch contrasting the first two follows the list):
Relational databases (Oracle, MySQL, PostgreSQL, Sybase): When you can organize your data in terms of tables, fields, and relationships between those entities, a relational database is often appropriate. For example, your customer data might be kept in the “people” table with fields like address, state, or gender. Each record in the people table might have a type – employee, partner, or customer. Although records can be changed, it’s easy to accidentally input bad data, and it’s also easy to accidentally generate duplicate records. Scaling a relational database can also be rather tricky.
Non-relational (NoSQL) databases (MongoDB, Cassandra, Redis): If most of your data comes in large blobs and you don’t want to split it up into fields and tables, these databases are useful. MongoDB is great for collections of documents, such as web pages, log data, or tweets. Cassandra works well for analytics applications. Sensor data and other data types that change frequently or need to be held in active memory (for example, in key-value stores) are handled well by databases like Redis. NoSQL databases are easier to scale than relational databases.
Other databases and data stores with special properties: Some databases are so unique they don’t feel or act like databases. Solr, for example, is traditionally used when you have to provide search functionality over a store of documents. Hadoop is a distributed file system, so it functions somewhat like a database even though it’s not one. Graph databases are designed for data stores where the relationships are the most important aspect, so they are gaining popularity for social networks. Large, institutional science projects often store their data in special binary files that have distinct formats, can be queried like databases, and in many ways act like databases – but they are not technically databases.
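To make the contrast between the first two categories concrete, here is a minimal sketch in Python. The table, field, and key names are invented for illustration; a plain dictionary stands in for a key-value store like Redis.

```python
import sqlite3

# Relational: data lives in tables with typed fields and relationships.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, state TEXT, type TEXT)")
conn.execute("INSERT INTO people (name, state, type) VALUES (?, ?, ?)",
             ("Ada", "VA", "customer"))
customers = conn.execute("SELECT name FROM people WHERE type = 'customer'").fetchall()
print(customers)  # [('Ada',)]

# Key-value (the model behind stores like Redis): each key maps to a blob
# of data, and the store itself enforces no schema.
kv_store = {}
kv_store["sensor:42:latest"] = {"temp_c": 21.7, "ts": "2018-01-16T10:00:00Z"}
print(kv_store["sensor:42:latest"]["temp_c"])  # 21.7
```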
What Distinguishes Blockchain-based Databases from Ordinary Databases?
First, the blockchain is designed to handle transactions – it’s a digital ledger. So it’s not surprising that its first “successful” use cases are in the realm of cryptocurrency, where people engage in transactions with one another to exchange something of value.
Next, this database is immutable, meaning you can’t go back and change earlier records. Every time a new transaction occurs, a cryptographically sealed “snapshot” is taken of the entire database. When I first heard this, I was worried: so that means if we accidentally enter something incorrect into the database, it can never be changed, right? And its presence is memorialized forever? The answer to this question is: sort of. Thanks to “smart contracts”, we shouldn’t ever be in the situation where bad data gets entered into our blockchain-based system, because incoming data will be checked (by multiple agents) against the smart contract — and only allowed to join the blockchain database if it meets all the quality requirements specified by the contract. It’s like a fancy way to implement validation rules – with the added benefit of being totally traceable. Imagine how nice it would be to trace all the steps in the process that brought the fresh fruit into your kitchen – or any other product you use — just because all transactions in the production process were logged into a “supply blockchain.”
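To make the "immutable ledger plus validation" idea concrete, here is a toy sketch in Python. It is emphatically not how Bitcoin or any production blockchain platform is implemented, and all names and rules are invented: each record carries a hash of the previous one, and a simple validation function stands in for the smart contract that decides whether a transaction may join the chain.

```python
import hashlib
import json

def record_hash(record):
    """Hash a record's contents; because each record also stores the previous
    record's hash, changing any earlier record breaks every hash after it."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def passes_contract(transaction):
    """Toy stand-in for a smart contract: a validation rule that incoming
    data must satisfy before it can be appended."""
    return transaction.get("amount", 0) > 0 and "buyer" in transaction and "seller" in transaction

chain = [{"index": 0, "transaction": None, "prev_hash": "0" * 64}]
chain[0]["hash"] = record_hash(chain[0])

def append_transaction(transaction):
    if not passes_contract(transaction):
        raise ValueError("Rejected: transaction does not satisfy the contract")
    block = {"index": len(chain), "transaction": transaction, "prev_hash": chain[-1]["hash"]}
    block["hash"] = record_hash(block)
    chain.append(block)

def verify(chain):
    """Recompute every hash; tampering with an earlier record shows up here."""
    for prev, block in zip(chain, chain[1:]):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["prev_hash"] != prev["hash"] or block["hash"] != record_hash(body):
            return False
    return True

append_transaction({"buyer": "farm_coop", "seller": "grocer", "amount": 120})
print(verify(chain))  # True
# append_transaction({"buyer": "farm_coop", "amount": -5})  # would be rejected
```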
A blockchain database is also decentralized and distributed — you don't just "buy a blockchain database" and install it at your company. Databases can be centralized, decentralized, or distributed. Most business databases in the past were centralized: there was one instance installed, and a database administrator (or team of them) ensured the performance and security of the database while everyone in the organization created and used applications that interacted with the data. Today, these databases are more commonly distributed: there's not just one instance, but several – there is no central storage, but there may be storage on many computers, or over a network of connected computers (or "in the cloud").
Decentralized systems have many advantages – nodes can join or leave the network at will. For instance, you can create a web site or take it off the internet whenever you want, if you own and control it. In decentralized systems, there is no single point of control. If a business wants to implement blockchain but also wants to control all the nodes, that should be a big red flag. By its nature, blockchain is decentralized, just like the internet itself.
Why is Blockchain Potentially Useful for Quality Assurance?
In addition to enhancing provenance and traceability, one of the biggest envisioned applications of blockchain databases is to support machine-to-machine transactions. As intelligent agents grow in complexity and are trusted to handle more tasks, and as the Internet of Things (IoT) expands, there needs to be a high-quality record of how those objects and agents interact with other objects and agents – and with humans. Blockchain could also be used to support new business models like decentralized energy markets, where you can consume energy from the local power plant but also potentially generate your own and contribute the excess to your local community for a fee. It could potentially transform middleware as well, which is software that allows different software systems to communicate with one another. (A long time ago, someone told me that it's like "email for applications" – they can send messages to one another so they know how to react, for example, when a company receives an order and several systems need to be alerted that the order has arrived.)
In principle, transactions logged to a blockchain make it impossible to defraud participants in the process, and impossible to manipulate records once they have been written. They are self-auditing and fully traceable. Blockchain won't make quality assurance, tracking, or auditing EASY, but you should expect it to make the business landscape different – new business models will be possible, and it will be possible to entrust intelligent agents with more tasks.
Blockchain can help us ensure that stated and implied needs are met, and do it in such a way that the integrity of our data is assured simply by its presence. But we’re not there yet. Developers still need to implement simple, demonstrable use cases to make it easier for managers and executives to map these technologies onto specific business needs. In addition, blockchain is slow compared to relational database systems, so this needs to be addressed as well before widespread adoption.
My first post of the year addresses an idea that’s just starting to gain traction – one you’ll hear a lot more about from me in 2018 and beyond: Quality 4.0. It’s not a fad or trend, but a reminder that the business environment is changing, and that performance excellence in the future will depend on how well you adapt, change, and transform in response.
Although we started building community around this concept at the ASQ Quality 4.0 Summits on Disruption, Innovation, and Change in 2017 and 2018, the truly revolutionary work is yet to come.
The term “Quality 4.0” comes from “Industry 4.0” – the “fourth industrial revolution” originally addressed at the Hannover (Germany) Fair in 2011. That meeting emphasized the increasing intelligence and interconnectedness in “smart” manufacturing systems, and reflected on the newest technological innovations in historical context.
The Industrial Revolutions
In the first industrial revolution (late 1700’s), steam and water power made it possible for production facilities to scale up and expanded the potential locations for production.
By the late 1800’s, the discovery of electricity and development of associated infrastructure enabled the development of machines for mass production. In the US, the expansion of railways made it easier to obtain supplies and deliver finished goods. The availability of power also sparked a renaissance in computing, and digital computing emerged from its analog ancestor.
The third industrial revolution came at the end of the 1960’s, with the invention of the Programmable Logic Controller (PLC). This made it possible to automate processes like filling and reloading tanks, turning engines on and off, and controlling sequences of events based on changing conditions.
The Fourth Industrial Revolution
Although the growth and expansion of the internet accelerated innovation in the late 1990’s and 2000’s, we are just now poised for another industrial revolution. What’s changing?
Production & Availability of Information: More information is available because people and devices are producing it at greater rates than ever before. Falling costs of enabling technologies like sensors and actuators are catalyzing innovation in these areas.
Connectivity: In many cases, and from many locations, that information is instantly accessible over the internet. Improved network infrastructure is expanding the extent of connectivity, making it more widely available and more robust. (And unlike in the 80's and 90's, far fewer communications protocols are commonly encountered, so it's a lot easier to get one device to talk to another on your network.)
Intelligent Processing: Affordable computing capabilities (and computing power!) are available to process that information so it can be incorporated into decision making. High-performance software libraries for advanced processing and visualization of data are easy to find, and easy to use. (In the past, we had to write our own… now we can use open-source solutions that are battle-tested.)
New Modes of Interaction: The ways in which we acquire and interact with information are also changing, in particular through new interfaces like Augmented Reality (AR) and Virtual Reality (VR), which expand possibilities for training and for navigating a hybrid physical-digital environment with greater ease.
New Modes of Production: 3D printing, nanotechnology, and gene editing (CRISPR) are poised to change the nature and means of production in several industries. Technologies for enhancing human performance (e.g. exoskeletons, brain-computer interfaces, and even autonomous vehicles) will also open up new mechanisms for innovation in production. (Roco & Bainbridge (2002) describe many of these, and their prescience is remarkable.) New technologies like blockchain have the potential to change the nature of production as well, by challenging ingrained perceptions of trust, control, consensus, and value.
The fourth industrial revolution is one of intelligence: smart, hyperconnected cyber-physical systems that help humans and machines cooperate to achieve shared goals, and use data to generate value.
Enabling Technologies are Physical, Digital, and Biological
These enabling technologies include:
Information (Generate & Share)
Affordable Sensors and Actuators
Big Data infrastructure (e.g. MapReduce, Hadoop, NoSQL databases)
IPv6 Addresses (which expand the number of devices that can be put online)
Internet of Things (IoT)
Machine Learning (incl. Deep Learning)
Augmented Reality (AR)
Mixed Reality (MR)
Virtual Reality (VR)
Diminished Reality (DR)
Automated (Software) Code Generation
Robotic Process Automation (RPA)
Today's quality profession was born during the middle of the second industrial revolution, when methods were needed to ensure that assembly lines ran smoothly – that they produced artifacts to specifications, that the workers knew how to engage in the process, and that costs were controlled. As industrial production matured, those methods grew to encompass the design of processes which were built to produce to specifications. In the 1980's and 1990's, organizations in the US started to recognize human capabilities and active engagement as essential to quality, and TQM, Lean, and Six Sigma gained in popularity.
How will these methods evolve in an adaptive, intelligent environment? The question is largely still open, and that’s the essence of Quality 4.0.
Quality is the “totality of characteristics of an entity that bear upon its ability to meet stated and implied needs.” (ISO 9001:2015, p.3.1.5) Quality assurance is the practice of assessing whether a particular product or service has the characteristics to meet needs, and through continuous improvement efforts, we use data to tell us whether or not we are adjusting those characteristics to more effectively meet the needs of our stakeholders.
But what if the entity is a chatbot?
In June 2017, we published a paper that explored that question. We mined the academic and industry literature to 1) determine what quality attributes others have used to assess chatbot quality, 2) organize those attributes according to efficiency, effectiveness, and satisfaction (using guidance from the ISO 9241 definition of usability), and 3) explore the utility of Saaty's Analytic Hierarchy Process (AHP) for helping organizations select between two or more versions of a chatbot based on quality considerations. (It's sort of like A/B testing for chatbots.)
“There are many ways for practitioners to apply the material in this article:
The quality attributes in Table 1 can be used as a checklist for a chatbot implementation team to make sure they have addressed key issues.
Two or more conversational systems can be compared by selecting the most significant quality attributes.
Systems can be compared at two points in time to see if quality has improved, which may be particularly useful for adaptive systems that learn as they are exposed to additional participants and topics.”
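To illustrate the kind of comparison AHP supports (as mentioned above), here is a minimal sketch in Python. The attributes, pairwise judgments, and scores are invented, and priorities are approximated with the geometric-mean method rather than the full eigenvector calculation described by Saaty.

```python
import math

# Hypothetical comparison of two chatbot versions ("A" and "B") on three
# made-up quality attributes, using AHP-style weights.

attributes = ["task completion", "response appropriateness", "user satisfaction"]

# pairwise[i][j] = how much more important attribute i is than attribute j,
# on Saaty's 1-9 scale (values below 1 mean "less important").
pairwise = [
    [1,   3,   2],
    [1/3, 1,   1/2],
    [1/2, 2,   1],
]

# Approximate the priority vector with the geometric mean of each row.
geo_means = [math.prod(row) ** (1 / len(row)) for row in pairwise]
weights = [g / sum(geo_means) for g in geo_means]

# Scores (0-1) for each chatbot on each attribute, e.g. from user testing.
scores = {"A": [0.80, 0.60, 0.70], "B": [0.65, 0.75, 0.72]}

for bot, s in scores.items():
    overall = sum(w * x for w, x in zip(weights, s))
    print(f"Chatbot {bot}: weighted quality score = {overall:.3f}")
```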