
The Connected, Intelligent, Automated Industry 4.0 Supply Chain

ASQ’s March Influential Voices Roundtable asks this question: “Investopedia defines end-to-end supply chain (or ‘digital supply chain’) as a process that refers to the practice of including and analyzing each and every point in a company’s supply chain – from sourcing and ordering raw materials to the point where the good reaches the end consumer. Implementing this practice can increase process speed, reduce waste, and decrease costs.

In your experience, what are some best practices for planning and implementing this style of supply chain to ensure success?”

Supply chains are the lifeblood of any business, impacting everything from the quality, delivery, and cost of a business’s products and services, to customer service and satisfaction, and ultimately to profitability and return on assets.

Stank, T., Scott, S., & Hazen, B. (2018, April). A savvy guide to the digital supply chain: How to evaluate and leverage technology to build a supply chain for the digital age. Whitepaper, Haslam College of Business, University of Tennessee.

Industry 4.0 enabling technologies like affordable sensors, more ubiquitous internet connectivity and 5G networks, and reliable software packages for developing intelligent systems have started fueling a profound digital transformation of supply chains. Although the transformation will be a gradual evolution, spanning years (and perhaps decades), the changes will reduce or eliminate key pain points:

  • Connected: Lack of visibility keeps 84% of Chief Supply Chain Officers up at night. More sources of data and enhanced connectedness to information will alleviate this issue.
  • Intelligent: 87% of Chief Supply Chain Officers say that managing supply chain disruptions proactively is a huge challenge. Intelligent algorithms and prescriptive analytics can make this more actionable.
  • Automated: 80% of all data that could enable supply chain visibility and traceability is “dark” or siloed. Automated discovery, aggregation, and processing will ensure that knowledge can be formed from data and information.

Since the transformation is just getting started, best practices are few and far between — but recommendations do exist. Stank et al. (2018) created a digital supply chain maturity rubric, with highest levels that reflect what they consider recommended practices. I like these suggestions because they span technical systems and management systems:

  • Gather structured and unstructured data from customers, suppliers, and the market using sensors and crowdsourcing (presumably including social media)
  • Use AI & ML to “enable descriptive, predictive, and prescriptive insights simultaneously” and support continuous learning
  • Digitize all systems that touch the supply chain: strategy, planning, sourcing, manufacturing, distribution, collaboration, and customer service
  • Add value by improving efficiency, visibility, security, trust, authenticity, accessibility, customization, customer satisfaction, and financial performance
  • Use just-in-time training to build new capabilities for developing the smart supply chain

One drawback of these suggestions is that they provide general (rather than targeted) guidance.

A second recommendation is to plan initiatives that align with your level of digital supply chain maturity. Soosay & Kannusamy (2018) studied 360 firms in the Australian food industry and found four different stages. They are:

  • Stage 1 – Computerization and connectivity. Sharing data across the supply chain ecosystem requires that it be stored in locations that are accessible by partners. Cloud-based systems are one option. Make sure authentication and verification are carefully implemented.
  • Stage 2 – Visibility and transparency. Adding new sensors and making that data accessible provides new visibility into the supply chain. Key enabling technologies include GPS, time-temperature integrators, and data loggers.
  • Stage 3 – Predictive capability. Access to real-time data from supply chain partners will increase the reliability and resilience of the entire network. Enterprise Resource Planning (ERP), Manufacturing Execution Systems (MES), and radio frequency identification (RFID) tagging are enablers at this stage.
  • Stage 4 – Adaptability and self-learning. At this stage, partners plan and execute the supply chain collaboratively. Through Vendor Managed Inventory (VMI), responsibility for replenishment can even be directly assumed by the supplier.

Traceability is also gaining prominence as a key issue, and permissioned blockchains provide one way to make this happen with sensor data and transaction data. Recently, the IBM Food Trust has demonstrated the practical value provided by the Hyperledger blockchain infrastructure for this purpose. Their prototypes have helped to identify supply chain bottlenecks that might not otherwise have been detected.

What should you do in your organization? Any way to enhance information sharing between members of the supply chain ecosystem — or more effectively synthesize and interpret it — should help your organization shift towards the end-to-end vision. Look for opportunities in both categories.


References for Connected, Intelligent, Automated stats:
  1. IBM. (2018, February). Global Chief Supply Chain Officer Study. Available from this URL
  2. Geriant, J. (2015, October). The Changing Face of Supply Chain Risk Management. SCM World.
  3. IBM & IDC. (2017, March). The Thinking Supply Chain. Available from this URL

Imperfect Action is Better Than Perfect Inaction: What Harry Truman Can Teach Us About Loss Functions (with an intro to ggplot)

One of the heuristics we use at Intelex to guide decision making is former US President Truman’s advice that “imperfect action is better than perfect inaction.” What it means is — don’t wait too long to take action, because you don’t want to miss opportunities. Good advice, right?

When I share this with colleagues, I often hear a response like: “that’s dangerous!” To which my answer is “well sure, sometimes, but it can be really valuable depending on how you apply it!” The trick is: knowing how and when.

Here’s how it can be dangerous. For example, statistical process control (SPC) exists to keep us from tampering with processes — from taking imperfect action based on random variation, which will not only get us nowhere, but can exacerbate the problem we were trying to solve. The secret is to apply Truman’s heuristic based on an understanding of exactly how imperfect is OK with your organization, based on your risk appetite. And this is where loss functions can help.
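
To see where that line sits in practice, here’s a minimal control chart sketch in R, assuming the qcc package is installed (the data are simulated for illustration). Points within the control limits reflect common-cause variation you should leave alone; only out-of-limit signals warrant action:

# a minimal sketch: an individuals control chart separates common-cause
# variation (leave it alone) from special-cause signals (take action)
# assumes the qcc package is installed; the data are simulated
library(qcc)

set.seed(42)
measurements <- rnorm(30, mean = 50, sd = 2)   # 30 in-control observations

# "xbar.one" builds a control chart from individual measurements
qcc(measurements, type = "xbar.one")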

Along the way, we’ll demonstrate how to do a few important things related to plotting with the ggplot package in R, gradually adding in new elements to the plot so you can see how it’s layered, including:

  • Plot a function based on its equation
  • Add text annotations to specific locations on a ggplot
  • Draw horizontal and vertical lines on a ggplot
  • Draw arrows on a ggplot
  • Add extra dots to a ggplot
  • Eliminate axis text and axis tick marks

What is a Loss Function?

A loss function quantifies how unhappy you’ll be based on the accuracy or effectiveness of a prediction or decision. In the simplest case, you control one variable (x) which leads to some cost or loss (y). For the case we’ll examine in this post, the variables are:

  • How much time and effort you put in to scoping and characterizing the problem (x); we assume that time+effort invested leads to real understanding
  • How much it will cost you (y); can be expressed in terms of direct costs (e.g. capex + opex) as well as opportunity costs or intangible costs (e.g. damage to reputation)

Here is an example of what this might look like, if you have a situation where overestimating (putting in too much x) OR underestimating (putting in too little x) are both equally bad. In this case, x=10 is the best (least costly) decision or prediction:

[Plot: a typical squared loss function]
# describe the equation we want to plot
parabola <- function(x) ((x-10)^2)+10  

# initialize ggplot with a dummy dataset
library(ggplot2)
p <- ggplot(data = data.frame(x=0), mapping = aes(x=x)) 

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) +
     xlab("x = the variable you can control") + 
     ylab("y = cost of loss ($$)")

In regression (and other techniques where you’re trying to build a model to predict a quantitative dependent variable), mean squared error is a squared loss function that helps you quantify error. It captures two facts: the farther away you are from the correct answer, the worse the error is, and both overestimating and underestimating are bad (which is why you square the values). Across this and related techniques, the loss function captures these characteristics:

[Image: common loss functions and their properties, from http://www.cs.cornell.edu/courses/cs4780/2015fa/web/lecturenotes/lecturenote10.html]
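
To make the squared-loss idea concrete, here’s a small illustration with made-up numbers (not from any dataset in this post):

# squared loss penalizes large errors disproportionately, and over- and
# underestimates of the same size are penalized equally
actual    <- c(10, 12, 14, 16)
predicted <- c(11, 11, 15, 18)

predicted - actual             # errors: +1, -1, +1, +2
mean((predicted - actual)^2)   # MSE = (1 + 1 + 1 + 4)/4 = 1.75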

Not all loss functions have that general shape. For classification, for example, the 0-1 loss function tells the story that if you get a classification wrong (x < 0) you incur all the penalty or loss (y=1), whereas if you get it right (x > 0) there is no penalty or loss (y=0):

# set up data frame of red points
d.step <- data.frame(x=c(-3,0,0,3), y=c(1,1,0,0))

# note that the loss function really extends to x=-Inf and x=+Inf
ggplot(d.step) + geom_step(mapping=aes(x=x, y=y), direction="hv") +
     geom_point(mapping=aes(x=x, y=y), color="red") + 
     xlab("y* f(x)") + ylab("Loss (Cost)") +  
     ggtitle("0-1 Loss Function for Classification")

Use the Loss Function to Make Strategic Decisions

So let’s get back to Truman’s advice. Ideally, we want to choose the x (the amount of time and effort to invest into project planning) that results in the lowest possible cost or loss. That’s the green point at the nadir of the parabola:

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen")

Costs get higher as we move up the y-axis:

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green")

And time+effort grows as we move along the x-axis (we might spend minutes on a problem at the left of the plot, or weeks to years by the time we get to the right hand side):

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3)

Planning too Little = Planning too Much = Costly

What this means is — if we don’t plan, or we plan just a little bit, we incur high costs. We might make the wrong decision! Or miss critical opportunities! But if we plan too much — we’re going to spend too much time, money, and/or effort compared to the benefit of the solution we provide.


p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3) +
     annotate(geom="text",x=3, y=85, label="Little (or no) Planning\nHIGH COST", color="red") +
     annotate(geom="text", x=18, y=85, label="Paralysis by Planning\nHIGH COST", color="red") +
     geom_vline(xintercept=0, linetype="dotted") + geom_hline(yintercept=0, linetype="dotted")

The trick is to FIND THAT CRITICAL LEVEL OF TIME and EFFORT invested to gain information and understanding about your problem… and then if you’re going to err, make sure you err towards the left — if you’re going to make a mistake, make the mistake that costs less and takes less time to make:

arrow.x <- c(10, 10, 10, 10)
arrow.y <- c(35, 50, 65, 80)
arrow.x.end <- c(6, 6, 6, 6)
arrow.y.end <- arrow.y
d <- data.frame(arrow.x, arrow.y, arrow.x.end, arrow.y.end)

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3) +
     annotate(geom="text",x=3, y=85, label="Little (or no) Planning\nHIGH COST", color="red") +
     annotate(geom="text", x=18, y=85, label="Paralysis by Planning\nHIGH COST", color="red") +
     geom_vline(xintercept=0, linetype="dotted") + 
     geom_hline(yintercept=0, linetype="dotted") +
     geom_vline(xintercept=10) +
     geom_segment(data=d, mapping=aes(x=arrow.x, y=arrow.y, xend=arrow.x.end, yend=arrow.y.end),
     arrow=arrow(), color="blue", size=2) +
     annotate(geom="text", x=8, y=95, size=2.3, color="blue",
     label="we prefer to be\non this side of the\nloss function")

Moral of the Story

The moral of the story is… imperfect action can be expensive, but perfect action is ALWAYS expensive. Spend less to make mistakes and learn from them, if you can! This is one of the value drivers for agile methodologies… agile practices can help improve communication and coordination so that the loss function is minimized.

## FULL CODE FOR THE COMPLETELY ANNOTATED CHART ##
# If you change the equation for the parabola, annotations may shift and be in the wrong place.
library(ggplot2)

# describe the equation we want to plot
parabola <- function(x) ((x-10)^2)+10

# initialize ggplot with a dummy dataset
p <- ggplot(data = data.frame(x=0), mapping = aes(x=x))

my.title <- expression(paste("Imperfect Action Can Be Expensive. But Perfect Action is ", italic("Always"), " Expensive."))

arrow.x <- c(10, 10, 10, 10)
arrow.y <- c(35, 50, 65, 80)
arrow.x.end <- c(6, 6, 6, 6)
arrow.y.end <- arrow.y
d <- data.frame(arrow.x, arrow.y, arrow.x.end, arrow.y.end)

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3) +
     annotate(geom="text",x=3, y=85, label="Little (or no) Planning\nHIGH COST", color="red") +
     annotate(geom="text", x=18, y=85, label="Paralysis by Planning\nHIGH COST", color="red") +
     geom_vline(xintercept=0, linetype="dotted") + 
     geom_hline(yintercept=0, linetype="dotted") +
     geom_vline(xintercept=10) +
     geom_segment(data=d, mapping=aes(x=arrow.x, y=arrow.y, xend=arrow.x.end, yend=arrow.y.end),
     arrow=arrow(), color="blue", size=2) +
     annotate(geom="text", x=8, y=95, size=2.3, color="blue",
     label="we prefer to be\non this side of the\nloss function") +
     ggtitle(my.title) +
     theme(axis.text.x=element_blank(), axis.ticks.x=element_blank(),
     axis.text.y=element_blank(), axis.ticks.y=element_blank()) 

Now sometimes you need to make this investment! (Think nuclear power plants, or constructing aircraft carriers or submarines.) Don’t get caught up in getting your planning investment perfectly optimized — but do be aware of the trade-offs, and go into the decision deliberately, based on the risk level (and regulatory nature) of your industry, and your company’s risk appetite.

Lack of Alignment is an Organizational Disease. Here are the Symptoms.

[Image: Streamlines on a field. Created using the pracma package in R.]
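
(The original figure isn’t reproduced here, but here’s a rough sketch of the idea, assuming the pracma package is installed. quiver() draws a field of arrows rather than true streamlines, but it makes the same visual point: everyone pulling in the same direction.)

# a minimal sketch (not the original figure's code): a vector field in which
# every arrow drifts in the same general direction -- a metaphor for alignment
# assumes the pracma package is installed
library(pracma)

g <- meshgrid(seq(0, 3, by = 0.3), seq(0, 3, by = 0.3))
u <- matrix(1, nrow(g$X), ncol(g$X))   # steady flow along x...
v <- 0.3 * cos(g$X)                    # ...with a gentle shared drift in y

plot(c(-0.5, 3.5), c(-0.5, 3.5), type = "n", xlab = "", ylab = "")
quiver(g$X, g$Y, u, v, scale = 0.2)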

Like a champion rowing team, your organization needs to make sure everyone is working together, engaged in synchronized work and active collaboration, and not working at cross-purposes.

But like risk management, working on alignment can seem like a luxury. No one really has time to slow down and make sure everyone’s moving in the same direction. And besides, alignment just happens naturally if each functional area knows what they’re supposed to be working on… right?

Neither of these statements is, of course, true. Synchronizing people and processes – and making sure they’re aware of the needs and desires of real customers instead of cardboard personas – takes dedicated effort and a commitment from senior leaders. There are other critical impacts too: lack of alignment damages not only project outcomes, but also professional relationships and the bottom line.

An Example of Diagnosing Misalignment

Although alignment is a many-to-many problem, and requires you to look at relationships between people in all your functional areas, a January 2018 survey from Altify examined one part of the organizational puzzle: alignment between sales and marketing. This is a big one, because sales teams use marketing materials to understand and sell the product or service your company offers. Their survey of 422 enterprise-level executives and sales leaders showed that:

  • 74% of marketers think they understand customer needs, but only 44% of the salespeople in their organizations agree
  • 71% of marketers think sales and marketing are aligned, but only 59% of the salespeople in their organizations agree

These differences may seem small, but they reveal a lack of alignment between sales and marketing. One group thinks they “get it” – while people in the other group are just shaking their heads.

Symptoms of Misalignment

…include things like:

  • Vague Feelings of Fear. Your organization has a strategic plan (knows WHAT it wants to do), but there is little to no coordination regarding HOW people across the organization will accomplish strategic objectives. You know what KPIs you’re supposed to deliver on, but you don’t know exactly how to use anything within your power or control to “move the needle.”
  • Ivory Tower Syndrome. You’re in a meeting and get the visceral sense that things aren’t clear, or that different people have different expectations for a project or initiative. But you’re too nervous or uncertain to ask for clarification – or maybe you do ask, but you get an equally unclear answer. Naturally, you assume that everyone in the room is smarter than you (particularly the managers), so you shut up and hope that it makes sense later. The reality is that you may be picking up on a legitimate problem that will hurt the organization later on.
  • Surprises. A department committed you to a task, but you weren’t part of that decision. Once you find out about it, the task just may not get done. Alternatively, you’ll have to adjust your workload and reset expectations with the stakeholders who will now be disappointed that you can’t meet their needs according to the original schedule. Or maybe work evenings and weekends to get the job done on time. Either way, it’s not pleasant for anyone.
  • Emergencies. How often are you called on to respond to something that’s absolutely needed by close of business today? How often are you expected to drop everything and take care of it? How often do you have to work nights and weekends to make sure you don’t fall behind?
  • Lead Balloons. In this scenario, key stakeholders are called into projects at the 11th hour, when they are unable to guide or influence the direction of an initiative. The initiative becomes a “dead man walking” that’s doomed to an untimely end, but since the organization has sunk time and effort into it, people will push ahead anyway.
  • Cut Off at the Pass. Have you ever been working on a project and found out – somewhere in the middle of doing it – that some other person or team has been working on the same thing? Or maybe they’ve been working on a different project, but it’s ultimately at cross purposes with yours. However this situation works out, your organization ends up with a pile of waste and potential rework.
  • Not Writing Things Down. You have to make sure everyone is literally on the same page, seeing the world in a similar enough way to know they are pursuing the same goals and objectives. If you don’t write things down, you may be at the mercy of cognitive biases later. How do you know that your goals and objectives are aligned with your overall company strategy? Can you review written minutes after key meetings? Are your organization’s strategic initiatives written and agreed to by decision makers? Do you implement project charters that all stakeholders have to sign off on before work can commence? What practices do you use to get everyone on the same page?

How do you fix it?

That’s the subject for more blog posts that will be coming this spring – as well as what causes misalignment in the first place (hint: it’s individual behaviors on an organizational scale). The good news is – misalignment can be fixed, and the degree of alignment can be measured and continuously improved. Sign up to follow this blog so you don’t miss the rest of the story.

What other symptoms of misalignment have you experienced?

Yes, You Do Need to Write Down Procedures. Except…

Masterpieces of Modern Crafts [Special Exhibition] (近代工芸の名品)

[Image: A natsume (tea caddy) from http://www.momat.go.jp/cg/exhibition/masterpiece2018/ — I saw this one in person!!]

Several weeks ago we went to an art exhibit about “tea caddies” at the National Museum of Modern Art, Tokyo. Although it might seem silly, these kitchen containers are a fixture of Japanese culture. In Japan, drinking green tea is a cornerstone of daily life.

It was about 2 in the afternoon, and we had checked out of our hotel at 11. Wandering through the center of the city, we stumbled upon the museum. Since we didn’t have to meet our friends for several more hours, we decided to check it out.

Confession: I’m not a huge fan of art museums. Caveat: I usually enjoy them to some degree or another when I end up in them. But I didn’t think tea caddies could possibly be useful to me. I was wrong!

When to Write SOPs

One of the features of the exhibit was a Book of Standard Operating Procedures. It described how to create a new lacquered tea caddy from paper. (Unfortunately, photography was prohibited for this piece in particular.) The book was open, lying flat, showing a grid of characters on the right hand side. The grid described a particular process step in great detail. On the left page, a picture of a craftsman performing that step was attached. The card describing the book of SOPs explained that each of the 18 process steps was described using exactly the same format (a sketch of what such a step template might look like follows the list below). This decision was made to ensure that the book would help accomplish certain things:

  • Improve Production Quality. Even masters sometimes need to follow instructions, or to be reminded about an old lesson learned, especially if the process is one you only do occasionally. SOPs promote consistency over time, and from person to person. 
  • Train New Artists. Even though learning the craft is done under the supervision of a skilled worker, it’s impossible to remember every detail (unless you have an eidetic memory, which most of us don’t have). The SOP serves as a guide during the learning process.
  • Enable Continuous Improvement. The SOP is the base from which adjustments and performance improvements are grown. It provides “version control” so you can monitor progress and examine the evolution of work over time.
  • Make Space for Creativity. It might be surprising, but having guidance for a particular task or process in the form of an SOP reduces cognitive load, making it easier for a person to recognize opportunities for improvement. In addition, deviations aren’t always prohibited (although in high-reliability organizations, or industries that are highly regulated, you might want to check before being too creative). The art is contributed by the person, not the process.
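
As promised above, here’s a hypothetical sketch of a step template in R, just to make “every step in the same format” concrete. The field names are my invention, not the exhibit’s:

# a hypothetical SOP step template: every step records the same fields,
# mirroring the book's one-format-per-step design (field names are invented)
sop_step <- function(number, name, description, tools, quality_check) {
  list(number = number, name = name, description = description,
       tools = tools, quality_check = quality_check)
}

step_01 <- sop_step(1, "Form the paper base",
                    "Layer and paste paper over the mold; let each layer dry.",
                    tools = c("mold", "paste brush"),
                    quality_check = "No wrinkles or air pockets")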

When Not to Write SOPs

Over the past couple decades, when I’ve asked people to write up SOPs for a given process, I’ve often run into pushback. The most common reasons are “But I know how to do this!” and “It’s too complicated to describe!” The first reason suggests that the person is threatened by the prospect of someone else doing (and possibly taking over) that process, and the second is just an excuse. Maybe.

Because sometimes, the pushback can be legitimate. Not all processes need SOPs. For example, I wouldn’t write up an SOP for the creative process of writing a blog post, or for a new research project (that no one has ever done before) culminating in the publication of a new research article. In general, processes that vary significantly each time they’re run, or processes that require doing something that no one has ever done before — don’t lend themselves well to SOPs.

Get on the Same Page

The biggest reason to document SOPs is to literally get everyone on the same page. You’d be surprised how often people think they’re following the same process, but they’re not! An easy test for this is to have each person who participates in a process independently draw a flow chart showing the process steps and decision points, and then compare all the sketches. If they’re different, work together until you’re all in agreement over what’s on one flow chart — and you’ll notice a sharp and immediate improvement in performance and communication.
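
If it helps to digitize the comparison, each person’s sketch can be rendered as a simple flow chart in R. This is a minimal sketch, assuming the DiagrammeR package is installed; the process steps shown are placeholders:

# render one person's version of the process as a flow chart; repeat for each
# participant and compare the drawings (the steps below are placeholders)
# assumes the DiagrammeR package is installed
library(DiagrammeR)

grViz('
digraph process {
  node [shape = box]
  "Receive request" -> "Triage" -> "Approved?"
  "Approved?" -> "Do the work" [label = "yes"]
  "Approved?" -> "Send back"   [label = "no"]
}
')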

There’s a Fly in the Milk (and a Bug in the Software)

[Image: Where “software bugs” got their name — the dead moth stuck in a relay in Harvard’s Mark II in 1947. From https://en.wikipedia.org/wiki/Software_bug]

As one does, I spent a good part of this weekend reading the Annual Report of the Michigan Dairymen’s Association. It provides an interesting glimpse into the processes that have to be managed to source raw materials from suppliers, to produce milk and cream and butter, and to cultivate an engaged and productive workforce.

You might be yelling at your screen right now. DairyMEN’s? Aren’t we beyond that now? What’s wrong with them? The answer is: nothing. This is an annual report from 1915. Your next question is probably: what could dairymen have been doing in 1915 that could possibly interest production and operations managers in 2019? The answer here, surprisingly, is a lot. Except for the overly formal and old-timey word choices, the challenges and concerns encountered in the dairy industry are remarkably consistent over time.

It turns out that flies were a particular concern in 1915 — and they remain a huge threat to quality and safety in food and beverage production today:

  • “…an endless war should be waged against the fly.”
  • “[avoid] the undue exposure of the milk cooler to dust and flies.”
  • “The same cows that freshen in July and August will give more milk in December it seems to me… because at that time of year the dairyman has flies to contend with…”
  • “Flies are known to be great carriers of bacteria, and coming from these feeding places to the creamery may carry thousands of undesirable bacteria direct to the milk-cans or vats.”

In a December 2018 column in Food Safety Tech, Chelle Hartzer describes not one but three (!!!) different types of flies that can wreak havoc in a food production facility. There are house flies that deposit pathogens and contaminants on every surface they land on, moth flies that breed in the film inside drains until they start flying too, and fruit flies that can directly contaminate food. All flies need food, making your food or beverage processing facility a potential utopia for them.

In the controls she presented to manage fly-related hazards, I noticed parallels to controls for preventing and catching bugs in software:

  • Make sanitation a priority. Clean up messes, take out the trash on a daily basis, and clean the insides of trash bins. In software development, don’t leave your messes to other people — or your future self!  Bake time into your development schedule to refactor on a regular basis. And remember to maintain your test tools! If you’re doing test-driven development with old tools, even your trash bins may be harboring additional risks.
  • Swap outdoor lighting. In food production facilities, it’s important to use lighting that doesn’t bring the flies to you (particularly at night). Similarly, in software, examine your environment to make sure there are no “bug attractors” like poor communication, ineffective version control, dependencies on buggy packages or third-party tools, or a lack of structured and systematic development processes.
  • Install automatic doors to limit the amount of time and space available for flies to get in to the facility. In software, this relates to limiting the complexity of your code and strategically implementing testing, e.g. test-driven development or an emphasis on hardening the most critical and/or frequently used parts of your system.
  • Inspect loading and unloading areas and seal cracks and crevices. Keep tight seals around critical areas. The “tight seals” in software development are the structured, systematic processes related to verifying and validating your code. This includes design reviews, pair programming, sign-offs, integration and regression testing, and user acceptance testing (a minimal test sketch follows this list).
  • Clean drains on a regular basis. The message here is that flies can start their lives in any location that’s suitable for their growth, and you should look for those places and keep them sanitized too. In software, this suggests an ongoing examination of technical debt. Where are the drains that could harbor new issues? Find them, monitor them, and manage them.
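
Here’s what one such “tight seal” might look like in R: a minimal sketch, assuming the testthat package is installed. The function parse_lot_code is hypothetical; the regression test around it catches a future change that silently breaks its behavior.

# a regression test acts as a tight seal around a critical function
# (parse_lot_code is hypothetical); assumes the testthat package is installed
library(testthat)

# hypothetical critical routine: split "PLANT-YYYYMMDD" into its parts
parse_lot_code <- function(code) {
  parts <- strsplit(code, "-", fixed = TRUE)[[1]]
  list(plant = parts[1], date = as.Date(parts[2], format = "%Y%m%d"))
}

test_that("lot codes are parsed correctly", {
  lot <- parse_lot_code("KZO-20190215")
  expect_equal(lot$plant, "KZO")
  expect_equal(lot$date, as.Date("2019-02-15"))
})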

Although clearly there’s a huge difference between pest management in food and beverage production and managing code quality, process-related pests have been an issue for at least a century — and likely throughout history. What are the flies in your industry, and what are you doing to make sure they don’t contaminate your systems and bring you down?

Happy World Quality Day 2018!

Each year, the second Thursday of November is set aside to reflect on the ways quality management can contribute to our work and our lives. Led by the Chartered Quality Institute (CQI) in the United Kingdom, World Quality Day provides a forum to reflect on how we implement more effective processes and systems that positively impact KPIs and business results — and to celebrate outcomes and new insights.

This year’s theme is “Quality: A Question of Trust”.

We usually think of quality as an operations function. The quality system (whether we have quality management software implemented or not) helps us keep track of the health and effectiveness of our manufacturing, production, or service processes. Often, we do this to obtain ISO 9001:2015 certification, or achieve outcomes that are essential to how the public perceives us, like reducing scrap, rework, and customer complaints.

But the quality system encompasses all the ways we organize our business — ensuring that people, processes, software, and machines are aligned to meet strategic and operational goals. For example, QMS validation (which is critical for quality management in the pharmaceutical industry) helps ensure that production equipment is continuously qualified to meet performance standards, so trust is not broken. Intelex partner Glemser Technologies explains in more detail in The Definitive Guide to Validating Your QMS in the Cloud. This extends to managing supplier relationships — building the trust that turns agreements to work together into rich partnerships across the business ecosystem.

This also extends to building and cultivating trust-based relationships with our colleagues, partners, and customers…

Read more about how Integrated Management Systems and Industry 4.0/ Quality 4.0 are part of this dynamic: https://community.intelex.com/explore/posts/world-quality-day-2018-question-trust

Quality 4.0 in Basic Terms (Interview)

On October 12th I dialed in to Quality Digest Live to chat with Dirk Dusharme, Editor in Chief of Quality Digest, about Quality 4.0 and my webinar on the topic, which was held yesterday (October 16).

Check out my 13-minute interview here, starting at 14:05! It answers two questions:

  • What is Quality 4.0 – in really basic terms that are easy to remember?
  • How can we use these emerging technologies to support engagement and collaboration?

You can also read more about the topic here on the Intelex Community, or come to ASQ’s Quality 4.0 Summit in Dallas next month where I’ll be sharing more information along with other Quality 4.0 leaders like Jim Duarte of LJDUARTE and Associates and Dan Jacob of LNS Research.
