[Note: I’ve been away from the blog for several months now in the middle of very significant changes in my life. That’s about to change! In the next post, I’ll tell you about what happened and what my plans are for the future. In the meantime, I wanted to share something that happened to me today.]
A couple of hours ago, I went to the ATM.
I don’t use cash often, so I haven’t been to an ATM in several months. Regardless, I’m fully accustomed to the pattern: put card in, enter secret code, tell the machine what I want, get my money, take my card. This time, I was really surprised by how long it was taking for my money to pop out.
Maybe there’s a problem with the connectivity? Maybe I should check back later? I sat in my car thinking about what the best plan of action would be… and then I decided to read the screen. (Who needs to read the screen? We all know what’s supposed to happen… so much so, that I was once able to use an ATM entirely in Icelandic.)
PLEASE TAKE YOUR CARD TO DISPENSE FUNDS, it said.
This is one of the simplest and greatest examples of poka-yoke (or “mistake-proofing”) I’ve ever seen. I had to take my card out and put it away before I could get my money! I was highly motivated to get the money (I mean, that’s the specific thing I came to the ATM to get) so of course I’m going to do whatever is required to accomplish my goal. The machine was forcing me to take my card — preventing the mistake of me accidentally leaving my card in the machine — which could be problematic for both me and the bank.
Why have I never seen this before? Why don’t other ATMs do this? I went on an intellectual fishing expedition and found out that no, the idea is not new… Lockton et al. (2010) described it like this:
A major opportunity for error with historic ATMs came from a user leaving his or her ATM card in the machine’s slot after the procedure of dispensing cash or other account activity was complete (Rogers et al., 1996, Rogers and Fisk, 1997). This was primarily because the cash was dispensed before the card was returned (i.e. a different sequence for Plan 3 in the HTA of Fig. 3), leading to a postcompletion error—“errors such as leaving the original document behind in a photocopier… [or] forgetting to replace the gas cap after filling the tank” (Byrne and Bovair, 1997). Postcompletion error is an error of omission (Matthews et al., 2000); the user’s main goal (Plan 0 in Fig. 3) of getting cash was completed so the further “hanging postcompletion action” (Chung and Byrne, 2008) of retrieving the card was easily forgotten.
The obvious design solution was, as Chung and Byrne (2008) put it, “to place the hanging postcompletion action ‘on the critical path’ to reduce or eliminate [its] omission” and this is what the majority of current ATMs feature (Freed and Remington, 2000): an interlock forcing function (Norman, 1988) or control poka-yoke (Shingo, 1986), requiring the user to remove the card before the cash is dispensed. Zimmerman and Bridger (2000) found that a ‘card-returned-then-cash-dispensed’ ATM dialogue design was at least 22% more efficient (in withdrawal time) and resulted in 100% fewer lost cards (i.e. none) compared with a ‘cash-dispensed-then-card-returned’ dialogue design.
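The interlock forcing function described above can be sketched as a tiny state machine. This is purely an illustration of the "card-returned-then-cash-dispensed" dialogue design, with hypothetical names; it is not any bank's actual software:

```python
from enum import Enum, auto

class State(Enum):
    CARD_INSERTED = auto()
    AWAITING_CARD_REMOVAL = auto()
    DISPENSING = auto()

class ATM:
    """An interlock (control poka-yoke): card removal is placed on the
    critical path to the user's goal, so the hanging postcompletion
    action cannot be forgotten."""

    def __init__(self):
        self.state = State.CARD_INSERTED
        self.amount = 0

    def request_withdrawal(self, amount):
        # Instead of dispensing immediately, eject the card first.
        self.amount = amount
        self.state = State.AWAITING_CARD_REMOVAL
        return "PLEASE TAKE YOUR CARD TO DISPENSE FUNDS"

    def card_removed(self):
        # Cash only becomes reachable after the card has been taken.
        assert self.state is State.AWAITING_CARD_REMOVAL
        self.state = State.DISPENSING
        return f"Dispensing {self.amount}"
```

Because `card_removed` is the only transition that leads to `DISPENSING`, the error of omission is structurally impossible rather than merely discouraged.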
I don’t think the most compelling message here has anything to do with design or ATMs, but with the value of hidden gems tucked into research papers. There is a long lag time between recording genius ideas and making them broadly available to help people. One of my goals over the next few years is to help as many of these nuggets get into the mainstream as possible. If you’ve got some findings that you think would benefit the entire quality community (or quality management systems or software), get in touch… I want to hear from you!
Lockton, D., Harrison, D., & Stanton, N. A. (2010). The Design with Intent Method: A design tool for influencing user behaviour. Applied ergonomics, 41(3), 382-392.
In previous articles, we introduced Quality 4.0, the pursuit of performance excellence as an integral part of an organization’s digital transformation. It’s one aspect of Industry 4.0 transformation towards intelligent automation: smart, hyperconnected(*) agents deployed in environments where humans and machines cooperate and leverage data to achieve shared goals.
Automation is a spectrum: an operator can specify a process that a computer or intelligent agent executes, the computer can make decisions for an operator to approve or adjust, or the computer can make and execute all decisions. Similarly, machine intelligence is a spectrum: an algorithm can provide advice, take action with approvals or adjustments, or take action on its own. We have to decide what value is generated when we introduce various degrees of intelligence and automation in our organizations.
How can Quality 4.0 help your organization? How can you improve the performance of your people, projects, products, and entire organizations by implementing technologies like artificial intelligence, machine learning, robotic process automation, and blockchain?
A value proposition is a statement that explains what benefits a product or activity will deliver. Quality 4.0 initiatives have these kinds of value propositions:
Augment (or improve upon) human intelligence
Increase the speed and quality of decision-making
Improve transparency, traceability, and auditability
Anticipate changes, reveal biases, and adapt to new circumstances and knowledge
Evolve relationships and organizational boundaries to reveal opportunities for continuous improvement and new business models
Learn how to learn; cultivate self-awareness and other-awareness as a skill
Quality 4.0 initiatives add intelligence to monitoring and managing operations – for example, predictive maintenance can help you anticipate equipment failures and proactively reduce downtime. They can help you assess supply chain risk on an ongoing basis, or help you decide whether to take corrective action. They can also help you improve cybersecurity: documenting and benchmarking processes can provide a basis for detecting anomalies, and understanding expected performance can help you detect potential attacks.
(*) Hyperconnected = (nearly) always on, (nearly) always accessible.
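The anomaly-detection idea mentioned above (using documented, benchmarked process performance as a baseline) can be sketched as a simple threshold check. The function name and the three-sigma default are my own illustrative choices, not a real monitoring API; production systems are considerably more sophisticated:

```python
import statistics

def detect_anomalies(baseline, observations, threshold=3.0):
    """Flag observations that deviate more than `threshold` standard
    deviations from the benchmarked baseline performance."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)  # sample standard deviation
    return [x for x in observations if abs(x - mean) > threshold * stdev]
```

The point is the workflow, not the statistics: a documented, benchmarked process gives you the baseline that makes "anomalous" a measurable claim rather than a hunch.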
You may wonder why I’m reviewing a book written by the creator of the Occupy movement for an audience of academics and practitioners who care about quality and continuous improvement in organizations, many of which are trying not only to sustain themselves but also to make a profit. The answer is simple: by understanding how modern social movements are catalyzed by decentralized (and often autonomous) interactive media, we will be better able to achieve some goals we are very familiar with. These include 1) capturing the rapidly changing “Voice of the Customer” and, in particular, gaining access to its silent or hidden aspects, 2) promoting deep engagement, not just in work but in the human spirit, and 3) gaining insights into how innovation can be catalyzed and sustained in a truly democratic organization.
This book is packed with meticulously researched cases and deeply reflective analysis. As a result, it is not an easy read, but experiencing its modern insights in terms of the historical context it presents is highly rewarding. Organized into three sections, it starts by describing the events leading up to the Occupy movement, the experience of being a part of it, and why the author feels Occupy fell short of its objectives. The second section covers several examples of protests, from ancient history to modern times, and extracts the most important strategic insight from each event. Next, a unified theory of revolution is presented that reconciles the unexpected, the emotional, and the systematic aspects of large-scale change.
The third section speaks directly to innovation. Some of the book’s most powerful messages, the principles of revolution, are presented in Chapter 14. “Understanding the principles behind revolution,” this chapter begins, “allows for unending tactical innovation that shifts the paradigms of activism, creates new forms of protest, and gives the people a sudden power over their rulers.” If we consider that we are often “ruled” by the status quo, then these principles provide insight into how we can break free: short sprints, breaking patterns, emphasizing spirit, presenting constraints, breaking scripts, transposing known tactics to new environmental contexts, and proposing ideas from the edge. The end result is a masterful work that describes how to hear, and mobilize, the collective will.
Image Credit: Doug Buckley of http://hyperactive.to
The most important stage of problem-solving in organizations is often one of the earliest: getting everyone on the same page by defining the concepts, processes, and desired outcomes that are central to understanding the problem and formulating a solution. (“Everyone” can be the individuals on a project team, or the individuals that contribute actions to a process, or both.) Too often, we assume that the others around us see and experience the world the same way we do. In many cases, our assessments are not too far apart, which is how most people can get away with making this assumption on a regular basis.
I first realized this divergence in the work context a few years ago, when a colleague and I were advising a project at a local social services office. We asked our students to document the process being used to handle claims. There were nearly ten people who were part of this claims-processing activity, and our students interviewed all of them, discovering that each person had a remarkably different idea about the process that they were all engaged in! No wonder the claims processing time was nearly two months.
We helped them all — literally — get onto the same page, and once they all had the same mental map of the process, time-in-system for each claim dropped to 10 days. (This led us to the quantum-esque conclusion that there is no process until it is observed.)
Today, I read about how mathematician Keith Devlin revolutionized the process of intelligence gathering after 9/11 using this same approach… by going back to one of the first principles he learned in his academic training:
So what had I done? Nothing really — from my perspective. My task was to find a way of analyzing how context influences data analysis and reasoning in highly complex domains involving military, political, and social contexts. I took the oh-so-obvious (to me) first step. I need to write down as precise a mathematical definition as possible of what a context is. It took me a couple of days…I can’t say I was totally satisfied with it…but it was the best I could do, and it did at least give me a firm base on which to start to develop some rudimentary mathematical ideas.
The fairly large group of really smart academics, defense contractors, and senior DoD personnel spent the entire hour of my allotted time discussing that one definition. The discussion brought out that all the different experts had a different conception of what a context is — a recipe for disaster.
What I had given them was, first, I asked the question “What is a context?” Since each person in the room besides me had a good working concept of context — different ones, as I just noted — they never thought to write down a formal definition. It was not part of what they did. And second, by presenting them with a formal definition, I gave them a common reference point from which they could compare and contrast their own notions. There we had the beginnings of disaster avoidance.
Getting people to very precisely understand the definitions, concepts, processes, and desired outcomes that are central to a problem might take some time and effort, but it is always extremely valuable.
When you face a situation like this in mathematics, you spend a lot of time going back to the basics. You ask questions like, “What do these words mean in this context?” and, “What obvious attempts have already been ruled out, and why?” More deeply, you’d ask, “Why are these particular open questions important?” and, “Where do they see this line of inquiry leading?”
(You can read the full article about Devlin, and more important lessons from mathematical thinking, here.)
What will the world look (and feel) like when everything you interact with has a “voice”?
How will the “Voice of the Customer” be heard when all of that customer’s stuff ALSO has a voice?
Will your stuff have “agency” — that is, the right to represent your needs and interests to other products and services?
Companies are also starting to envision how their strategies will morph in response to the new capabilities offered by the IoT. Starbucks CTO Gerri Martin-Flickenger, for example, shared her thoughts in GeekWire, 3/24/2016:
“Imagine you’re on a road trip, driving across the country, and you pull into a Starbucks drive-through that you’ve never been to before,” she said at the Starbucks annual shareholder’s meeting Wednesday in Seattle. “We detect you’re a loyal customer and you buy about the same thing every day, at about the same time. So as you pull up to the order screen, we show you your order, and the barista welcomes you by name.”
“Does that sound crazy?” she asked. “No, actually, not really. In the coming months and years you will see us continue to deliver on a basic aspiration: to deliver technology that enhances the human connection.”
IoT to enhance the human connection? Sounds great, right? But hold on… that’s not what she’s talking about. She wants to enhance the feeling of connection between individuals and a company… which is nothing different from cultivating customer loyalty.
Her scenario is actually pretty appealing: I can imagine pulling up to a Starbucks drive-through and having everything disappear from the screen except for maybe 2 or 3 choices of things I’ve had before, and 1 or 2 choices for things I might be interested in. The company could actually work with me to help alleviate my sensory overload problems, reducing the stress I experience when presented with a hundred-item menu, and improving my user experience. IoT can help them hear my voice as a customer, and adapt to my preferences, but it won’t make them genuinely care about me any more than they do now.
[Examples] highlight a paradox inherent in connected devices and the Internet of Things: although technology aims to weave data streams without human intervention, its deeper value comes from connecting people. By offloading data capture and information transfer to the background, devices and applications can actually improve human relationships. Practitioners can use technology to get technology out of the way—to move data and information flows to the side and enable better human interaction…
Image Credit: Doug Buckley of http://hyperactive.to
[This post is in response to ASQ’s February topic for the Influential Voices group, which asks: Where do you plan to take your career in 2016? What’s your view of careers in quality today—what challenges is this field facing? How can someone starting out in quality succeed?]
We are about to experience a paradigm shift in production, operations, and service: a shift that will have direct consequences on the principles and practice of design, development, and quality management. This “fourth industrial revolution” of cyber-physical systems will require more people in the workforce to understand quality principles associated with co-creation of value, and to develop novel business models. New technical skills will become critical for a greater segment of workers, including embedded software, artificial intelligence, data science, analytics, Big Data (and data quality), and even systems integration.
Over the past 20 years, we moved many aspects of our work and our lives online. And in the next 20 years, the boundaries between the physical world and the online world will blur — to a point where the distinction may become unnecessary.
Here is a vignette to illustrate the kinds of changes we can anticipate. Imagine the next generation FitBit, the personalized exercise assistant that keeps track of the number of steps you walk each day. As early as 2020, this device will not only automatically track your exercise patterns, but will also automatically integrate that information with your personal health records. Because diet strategies have recently been shown to be predominantly unfounded, and now researchers like Kevin Hall, Eran Elinav, and Eran Siegal know that the only truly effective diets are the ones that are customized to your body’s nutritional preferences, your FitBit and your health records will be able to talk to your food manager application to design the perfect diet for you (given your targets and objectives). Furthermore, to make it easy for you, your applications will also autonomously communicate with your refrigerator and pantry (to monitor how much food you have available), your local grocery store, and your calendar app so that food deliveries will show up when and only when you need to be restocked. You’re amazed that you’re spending less on food, less of it is going to waste, and you never have to wonder what you’re going to make for dinner. Your local grocery store is also greatly rewarded, not only for your loyalty, but because it can anticipate the demand from you and everyone else in your community – and create specials, promotions, and service strategies that are targeted to your needs (rather than just what the store guesses you need).
Although parts of this example may seem futuristic, the technologies are already in place. What is missing is our ability to link the technologies together using development processes that are effective and efficient – and in particular, coordinating and engaging the people who will help make it happen. This is a job for quality managers and others who study production and operations management.
As the Internet of Things (IoT) and pervasive information become commonplace, the fundamental nature and character of how quality management principles are applied in practice will be forced to change. As Eric Schmidt, former Chairman of Google, explains: “the new age of artificial intelligence is beginning, and it’s a big deal.”  Here are some ways that this shift will impact researchers and practitioners interested in quality:
Strategic deployment of IoT technologies will help us simultaneously improve our use of enterprise assets, reduce waste, promote sustainability, and coordinate people and machines to more effectively meet strategic goals and operational targets.
Smart materials, embedded in our production and service ecosystems, will change our views of objects from inert and passive to embedded and engaged. For example, MIT has developed a “smart band-aid” that communicates with a wound, provides visual indicators of the healing process, and delivers medication as needed.  Software developers will need to know how to make this communication seamless and reliable in a variety of operations contexts.
Our technologies will be able to proactively anticipate the Voice of the Customer, enabling us to meet not only their stated and implied needs, but also their emergent needs and hard-to-express desires. Similarly, will the nature of customer satisfaction change as IoT becomes more pervasive?
Cloud and IoT-driven analytics will make more information available for powerful decision-making (e.g. real-time weather analytics), but come with their own set of challenges: how to find the data, how to assess data quality, and how to select and store data with likely future value to decision makers. This will be particularly challenging since analytics has not been a historical focus among quality managers.
Smart, demand-driven supply chains (and supply networks) will leverage Big Data, and engage in automated planning, automatic adjustment to changing conditions or supply chain disruptions like war or extreme weather events, and self-regulation.
Smart manufacturing systems will implement real time communication between people, machines, materials, factories and warehouses, supply chain partners, and logistics partners using cloud computing. Production systems will adapt to demand as well as environmental factors, like the availability of resources and components. Sustainability will be a required core capability of all organizations that produce goods.
Cognitive manufacturing will implement manufacturing and service systems capable of perception, judgment, and improving quality autonomously – without the delays associated with human decision-making or the detection of issues.
Cybersecurity will be recognized as a critical component of all of the above. For most (if not all) of these next generation products and production systems, quality will not be possible without addressing information security.
The nature of quality assurance will also change, since products will continue to learn (and not necessarily meet their own quality requirements) after purchase or acquisition, until the consumer has used them for a while. In a December 2015 article I wrote for Software Quality Professional, I ask “How long is the learning process for this technology, and have [product engineers] designed test cases to accommodate that process after the product has been released? The testing process cannot find closure until the end of the ‘burn-in’ period when systems have fully learned about their surroundings.” 
We will need new theories for software quality practice in an era where embedded artificial intelligence and technological panpsychism (autonomous objects with awareness, perception, and judgment) are the norm.
How do we design quality into a broad, adaptive, dynamically evolving ecosystem of people, materials, objects, and processes? This is the extraordinarily complex and multifaceted question that we, as a community of academics and practitioners, must together address.
Just starting out in quality? My advice is to get a technical degree (science, math, or engineering) which will provide you with a solid foundation for understanding the new modes of production that are on the horizon. Industrial engineering, operations research, industrial design, and mechanical engineering are great fits for someone who wants a career in quality, as are statistics, data science, manufacturing engineering, and telecommunications. Cybersecurity and intelligence will become increasingly central to quality management, so these are also good directions to take. Or, consider applying for an interdisciplinary program like JMU’s Integrated Science and Technology where I teach. We’re developing a new 21-credit sector right now where you can study EVERYTHING in the list above! Also, certifications are a plus, but in addition to completing training programs be sure to get formally certified by a professional organization to make sure that your credentials are widely recognized (e.g. through ASQ and ATMAE).
June 24, 1980 is kind of like July 4, 1776 for quality management… that’s the pivotal day that NBC News aired its one-hour, 16-minute documentary called “If Japan Can, Why Can’t We?” introducing W. Edwards Deming and his methods to the American public. The video has been unavailable for years, but as of just last week, it’s been posted on YouTube. So my sophomore undergrads in Production & Operations Management took a step back in time to get a taste of the environment in the manufacturing industry in the late 1970s, and watched it during class this week.
The last time I watched it was in 1997, in a graduate industrial engineering class. It didn’t feel quite as dated as it does now, nor did I have the extensive experience in industry as a lens to view the interviews through. But what did surprise me is that the core of the challenges they were facing aren’t that much different than the ones we face today — and the groundbreaking good advice from Deming is still good advice today.
Before 1980, it was common practice to produce a whole bunch of stuff and then check and see which ones were bad, and throw them out. The video provides a clear and consistent story around the need to design quality in to products and processes, which then reduces (or eliminates) the need to inspect bad quality out.
It was also common to tamper with a process that was just exhibiting random variation. As one of the line workers in the documentary said, “We didn’t know. If we felt like there might be a problem with the process, we would just go fix it.” Deming’s applications of Shewhart’s methods made it clear that there is no need to tamper with a process that’s exhibiting only random variation.
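Shewhart's insight can be sketched numerically: compute control limits from the process's own history, and only intervene when a point falls outside them. This illustrative sketch uses the sample standard deviation for simplicity; a proper individuals chart would estimate sigma from the moving range:

```python
import statistics

def shewhart_limits(samples):
    """Compute 3-sigma control limits from historical process data.
    A point inside the limits is consistent with random (common-cause)
    variation, so 'fixing' the process in response to it is tampering."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # simplification; see note above
    return mean - 3 * sigma, mean + 3 * sigma

def needs_investigation(value, samples):
    """True only when a value signals special-cause variation."""
    lcl, ucl = shewhart_limits(samples)
    return not (lcl <= value <= ucl)
```

The line worker's instinct ("if we felt like there might be a problem, we would just go fix it") is exactly what the limits guard against: most of the points that *feel* wrong are just noise.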
Both workers and managers seemed frustrated with the sheer volume of regulations they had to address, and noted that it served to increase costs, decrease the rate of innovation, and disproportionately hurt small businesses. They noted that there was a great need for government and industry to partner to resolve these issues, and that Japan was a model for making these interactions successful.
Narrator Lloyd Dobyns remarked that “the Japanese operate by consensus… we, by competition.” He made the point that one reason Japanese industrial reforms were so powerful and positive was that their culture naturally supported working together towards shared goals. He cautioned managers that they couldn’t just drop in statistical quality control and expect a rosy outcome: improving quality is a cultural commitment, and the methods are not as useful in the absence of buy-in and engagement.
The video also sheds light on ASQ’s November question to the Influential Voices, which is: “What’s the key to talking quality with the C-Suite?” Typical responses include: think at the strategic level; create compelling arguments using the language of money; learn the art of storytelling and connect your case with what is important to the executives.
But I think the answer is much more subtle. In the 1980 video, workers comment on how amazed their managers were when Deming proclaimed that management was responsible for improving productivity. How could that be??!? Many managers at that time were convinced that if a productivity problem existed, it was because the workers didn’t work fast enough, or with enough skill — or maybe they had attitude problems! Certainly not because the managers were not managing well. Implementing simple techniques like improving training programs and establishing quality circles (which demonstrated values like increased transparency, considering all ideas, putting executives on the factory floor so they could learn and appreciate the work being done, increasing worker participation and engagement, encouraging work/life balance, and treating workers with respect and integrity) were already demonstrating benefits in some U.S. companies. But surprisingly, these simple techniques were not widespread, and not common sense.
Just like Deming advocated, quality belongs to everyone. You can’t go to a CEO and suggest that there are quality issues that he or she does not care about. More likely, the CEO believes that he or she is paying a lot of attention to quality. They won’t like it if you accuse them of not caring, or not having the technical background to improve quality. The C-Suite is in a powerful position where they can, through policies and governance, influence not only the actions and operating procedures of the system, but also its values and core competencies — through business model selection and implementation.
What you can do, as a quality professional, is acknowledge and affirm their commitment to quality. Communicate quickly, clearly, and concisely when you do. Executives have to find the quickest ways to decompose and understand complex problems in rapidly changing external environments, and then make decisions that affect thousands (and sometimes, millions!) of people. Find examples and stories from other organizations who have created huge ripples of impact using quality tools and technologies, and relate them concretely to your company.
Let the C-Suite know that you can help them leverage their organization’s talent to achieve their goals, then continually build their trust.
The key to talking quality with the C-suite is empathy.