Category Archives: Management

Agile vs. Lean: Explained by Cats

Over the past few years, Agile has gained popularity. This methodology emerged as a solution to manage projects with a number of unknown elements and to counter the typical waterfall method. Quality practitioners have observed the numerous similarities between this new framework and Lean. Some have speculated that Agile is simply the next generation’s version of Lean. These observations have posed the question: Is Agile the new Lean?  

ASQ Influential Voices Roundtable for December 2019

The short answer to this question is: NO.

The longer answer is one I’ll have to hold back some emotions to give. Why? I have two reasons.

Reason #1: There is No Magic Bullet

First, many managers are on a quest for the magic bullet: a methodology or a tool that they can implement on Monday and reap benefits from no later than Friday. Neither lean nor agile can make this happen. But it’s not uncommon to see organizations try this approach. A workgroup will set up a Kanban board or start doing daily stand-up meetings, and then talk about how they’re “doing agile.” Once “agile” is nominally in place, these teams see no reason to go any further.

Reason #2: There is Nothing New Under the Sun

Neither approach is “new” and neither is going away. Lean principles have been around since Toyota pioneered its production system in the 1960s and 1970s. The methods prioritized value and flow, with attention to reducing all types of waste everywhere in the organization. Agile emerged in the 1990s for software development, as a response to waterfall methods that couldn’t respond effectively to changes in customer requirements.

Agile modeling uses some lean principles: for example, why spend hours documenting flow charts in Visio when you can just sketch one on a whiteboard, take a photo, and paste it into your documentation? Agile doesn’t have to be perfectly lean, though. It’s acceptable to introduce elements that might seem like waste into processes, as long as you maintain your ability to respond quickly to new information and changes required by customers. (For example, maybe you need to touch base with your customers several times a week. This extra time and effort is OK in agile if it helps you achieve your customer-facing goals.)

Both lean and agile are practices. They require discipline, time, and monitoring. Teams must continually hone their practice, and learn about each other as they learn together. There are no magic bullets.

Information plays a key role. Effective flow of information from strategy to action is important for lean, because confusion and incomplete communication are themselves forms of waste. Agile also emphasizes high-value information flows, but for slightly different purposes, including promoting:

  • Rapid understanding
  • Rapid response
  • Rapid, targeted, and effective action

The difference is easier to understand if you watch a couple of cat videos.

This Cat is A G I L E

From Parkour Cats: https://www.youtube.com/watch?v=iCEL-DmxaAQ

This cat is continuously scanning for information about its environment. It’s young and in shape, and it navigates its surroundings like a pro, whizzing from floor to ceiling. If it’s about to fall off something? No problem! This cat is A G I L E and can quickly adjust. It can easily achieve its goal of scaling any of the cat towers in this video. Agile is also about trying new things to quickly assess whether they will work. You’ll see this cat attempt to climb the wall with an open mind and, upon learning that the approach is ineffective, abandon the experiment.

This Cat is L E A N

From “How Lazy Cats Drink Water”: https://www.youtube.com/watch?v=FlVo3yUNI6E

This cat is using as LITTLE energy as possible to achieve its goal of hydration. Although this cat might be considered lazy, it is actually very intelligent, dynamically figuring out how to remove non-value-adding activity from its process at every moment. This cat is working smarter, not harder. This cat is L E A N.

Hope this has been helpful. Business posts definitely need more cat videos.

How the Baldrige Process Can Enrich Any Management System

Another wave of reviewing applications for the Malcolm Baldrige National Quality Award (MBNQA) is complete, and I am exhausted — and completely fulfilled and enriched!

That’s the way this process works. As a National Examiner, you will be frustrated, you may cry, and you may think your team of examiners will never come to consensus on the right words to say to the applicant! But because there is a structured process and a discipline, it always happens, and everyone learns.

I’ve been working with the Baldrige Excellence Framework (BEF) for almost 20 years. In the beginning, I used it as a template. Need to develop a Workforce Management Plan that’s solid and integrates well with leadership, governance, and operations? There’s a framework for that (Category 5). Need to beef up your strategic planning process so you do the right thing and get it done right? There’s a framework for that (Category 2).

Need to develop Standard Work in any area of your organization, and don’t know where to start (or, want to make sure you covered all the bases)? There’s a framework for that.

Every year, 300 National Examiners are competitively selected from industry experts and senior leaders who care about performance and improvement, and want to share their expertise with others. The stakes are high… after all, this is the only award of its kind sponsored by the highest levels of government!

Once you become a National Examiner (my first year was 2009), you get to look at the Criteria Questions through a completely different lens. You start to see the rich layers of its structure. You begin to appreciate that this guidebook was carefully and iteratively crafted over three decades, drawing from the experiences of executives and senior leaders across a wide swath of industries, faced with both common and unique challenges.

The benefits to companies that are assessed for the award are clear and actionable, but helping others helps examiners, too. Yes, we put in a lot of volunteer hours on evenings and weekends (56 total, for me, this year), but I got to go deep with one more organization. I got to see how they think of themselves, how they designed their organization to meet their strategic goals, and how they act on that design. Our team of examiners discussed the strengths we had each noticed and the gaps that concerned us, and we worked together to reach consensus on the most useful, actionable recommendations to help the applicant advance to the next stage of quality maturity.

One of the things I learned this year was how well Baldrige complements other frameworks like ISO 9001 and lean. You may have a solid process in place for managing operations, leading continuous improvement events, and sustaining the improvements. You may have a robust strategic planning process, with clear connections between overall objectives and individual actions.

What Baldrige can add, even if you’re already a high-performance organization, is the ability to:

  • tighten the gaps
  • call out places where standard work should be defined
  • identify new breakthrough opportunities for improvement
  • help everyone in your workforce see and understand the connections between people, processes, and technologies

The whitespace (those connections and seams) is where the greatest opportunities for improvement and innovation are hiding. The Criteria Questions in the Baldrige Excellence Framework can help you illuminate them.

How to Become a Successful Change Leader

For this month’s Influential Voices Roundtable, the American Society for Quality (ASQ) asks: “In today’s current climate, transformation is a common term and transformative efforts are a regular occurrence. Although these efforts are common, according to Harvard Business Review two-thirds of large-scale transformation efforts fail. Research has proven that effective leadership is crucial for a change initiative to be successful. How can an individual become a successful Change Leader?”

Change is hard only because maintaining the status quo is easy. Doing things even a little differently requires cognitive energy! Because most people are pretty busy, there has to be a clear payoff to invest that extra energy in changing, even if the change is simple.

Becoming a successful change leader means helping people find the reasons to invest that energy on their own. First, find the source of resistance (if there is one) and do what you can to remove it. Second, try co-creation instead of feedback to build solutions. Here’s what I mean.

Find Sources of Resistance

In 1983, information systems researcher M. Lynne Markus wanted to figure out why certain software implementations, “designed at great cost of time and money, are abandoned or excessively overhauled because they were unenthusiastically received by their intended users.” Nearly 40 years later, enterprises still occasionally run into the same issue, even though Software as a Service (SaaS) models can (to some extent) reduce this risk.

Before her research started, she catalogued the themes commonly associated with resistance, and they will probably feel familiar to you even today.

By studying failed software implementations in finance, she traced the resistance to three main sources: factors internal to the people and groups involved, flaws in the system itself, and the interaction between the system and its organizational context (including a political variant of that interaction theory). So as a change leader, start out by figuring out which of these resonates, and then apply the corresponding remedy.

As you might imagine, this third category (the “political version of interaction theory”) is the most difficult to solve. If a new process or system threatens someone’s power or position, they are unlikely to admit it, it may be difficult to detect, and it will take some deep counseling to get to the root cause and solve it.

Co-Creation Over Feedback

Imagine this: a process in your organization is about to change, and someone comes to you with a step-by-step outline of the new proposed process. “I’d like to get your feedback on this,” he says.

That’s nice, right? Isn’t that exactly what’s needed to ensure smooth management of change? You’ll give your feedback, and then when it’s time to adopt the process, it will go great – right?

In short, NO.

For change to be smooth and effective, people have to feel like they’re part of the process of developing the solution. Although people might feel slightly more comfortable if they’re asked for their thoughts on a proposal, the resultant solution is not theirs — in fact, their feedback might not even be incorporated into it. There’s no “skin in the game.”

In contrast, think about a scenario where you get an email or an invitation to a meeting. “We need to create a new process to decide which of our leads we’ll follow up on, and evaluate whether we made the right decision. We’d like it to achieve [the following goals]. We have to deal with [X, Y and Z] boundary conditions, which we can’t change due to [some factors that are well articulated and understandable].”

You go to the meeting, and two hours later all the stakeholders in the room have co-created a solution. What’s going to happen when it’s time for that process to be implemented? That’s right — little or no resistance. Why would anyone resist a change that they thought up themselves?

Satisficing

Find the resistance, cast it out, and co-create solutions. But don’t forget the most important step: recognizing that perfection is not always perfect, and that a satisficing (good-enough) solution is often the right one. (For quality professionals, this one can be kind of tough to accept at times.)

What this means is: in situations where change is needed, sometimes it’s better to adopt processes or practices that are easier or more accessible for the people who do them. Processes that are less efficient can sometimes be better than processes that are more efficient, if the difference has to do with ease of learning or ease of execution. Following these tips will help you help others take some of the pain out of change.


Markus, M. L. (1983). Power, politics, and MIS implementation. Communications of the ACM, 26(6), 430-444. Available from http://130.18.86.27/faculty/warkentin/papers/Markus1983_CACM266_PowerPoliticsMIS.pdf

An Easy Way to Make Minimum Viable Product (MVP) Totally Not Viable

The classic viral MVP cartoon from Henrik Kniberg (https://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-mvp)


The Minimum Viable Product (MVP) concept has taken off over the past few years, and its heart is in the right place: MVP encourages product managers to scope features and functionality carefully so that customer needs are satisfied at every stage of development, not just in a sweeping finale at the end.

It’s a great way to shorten time-to-value and test new market concepts before committing. Zappos, for example, started by posting pictures of shoes on the internet without having an inventory. They wanted to quickly test to see whether people would even consider buying shoes without trying them on.

Unfortunately, adhering to MVP won’t guarantee success, because of one critical caveat: if your product already exists, you have to consider your product’s base state. What can your customers do right now with your product? Failure to take this into consideration can be disastrous.

An Example: Your Web Site

Here’s what I mean: let’s say the product is your company’s web site. If you’re starting from scratch, a perfectly suitable MVP would be a splash page with one or two sentences about what you do. Maybe you’d add some contact information. Customers will be able to find you and communicate with you, and you’ll be providing greater value than without a web presence.

But if you already have a 5000-page site online, that solution is not going to fly. Customers and prospects returning to your site will wonder why it vaporized. If they’re relying on the content or functionality you previously provided, chances are they will not be happy. Confused, they may choose to go elsewhere.

The moral of the story is: in defining the scope of your MVP, take into consideration what your customers can already do, and don’t dare give them less in your next release.

The Connected, Intelligent, Automated Industry 4.0 Supply Chain

ASQ’s March Influential Voices Roundtable asks this question: “Investopedia defines end-to-end supply chain (or ‘digital supply chain’) as a process that refers to the practice of including and analyzing each and every point in a company’s supply chain – from sourcing and ordering raw materials to the point where the good reaches the end consumer. Implementing this practice can increase process speed, reduce waste, and decrease costs.

In your experience, what are some best practices for planning and implementing this style of supply chain to ensure success?”

Supply chains are the lifeblood of any business, impacting everything from the quality, delivery, and costs of a business’s products and services, to customer service and satisfaction, and ultimately to profitability and return on assets.

Stank, T., Scott, S., & Hazen, B. (2018, April). A savvy guide to the digital supply chain: How to evaluate and leverage technology to build a supply chain for the digital age. Whitepaper, Haslam College of Business, University of Tennessee.

Industry 4.0 enabling technologies like affordable sensors, more ubiquitous internet connectivity and 5G networks, and reliable software packages for developing intelligent systems have started fueling a profound digital transformation of supply chains. Although the transformation will be a gradual evolution, spanning years (and perhaps decades), the changes will reduce or eliminate key pain points:

  • Connected: Lack of visibility keeps 84% of Chief Supply Chain Officers up at night. More sources of data and enhanced connectedness to information will alleviate this issue.
  • Intelligent: 87% of Chief Supply Chain Officers say that managing supply chain disruptions proactively is a huge challenge. Intelligent algorithms and prescriptive analytics can make this more actionable.
  • Automated: 80% of all data that could enable supply chain visibility and traceability is “dark” or siloed. Automated discovery, aggregation, and processing will ensure that knowledge can be formed from data and information.

Since the transformation is just getting started, best practices are few and far between — but recommendations do exist. Stank et al. (2018) created a digital supply chain maturity rubric, with highest levels that reflect what they consider recommended practices. I like these suggestions because they span technical systems and management systems:

  • Gather structured and unstructured data from customers, suppliers, and the market using sensors and crowdsourcing (presumably including social media)
  • Use AI & ML to “enable descriptive, predictive, and prescriptive insights simultaneously” and support continuous learning
  • Digitize all systems that touch the supply chain: strategy, planning, sourcing, manufacturing, distribution, collaboration, and customer service
  • Add value by improving efficiency, visibility, security, trust, authenticity, accessibility, customization, customer satisfaction, and financial performance
  • Use just-in-time training to build new capabilities for developing the smart supply chain

One drawback of these suggestions is that they provide general (rather than targeted) guidance.

A second recommendation is to plan initiatives that align with your level of digital supply chain maturity. Soosay & Kannusamy (2018) studied 360 firms in the Australian food industry and found four different stages. They are:

  • Stage 1 – Computerization and connectivity. Sharing data across the supply chain ecosystem requires that it be stored in locations that are accessible by partners. Cloud-based systems are one option. Make sure authentication and verification are carefully implemented.
  • Stage 2 – Visibility and transparency. Adding new sensors and making that data accessible provides new visibility into the supply chain. Key enabling technologies include GPS, time-temperature integrators and data loggers.
  • Stage 3 – Predictive capability. Access to real-time data from supply chain partners will increase the reliability and resilience of the entire network. Enterprise Resource Planning (ERP), Manufacturing Execution Systems (MES), and radio frequency identification (RFID) tagging are enablers at this stage.
  • Stage 4 – Adaptability and self-learning. At this stage, partners plan and execute the supply chain collaboratively. Through Vendor Managed Inventory (VMI), responsibility for replenishment can even be directly assumed by the supplier.

Traceability is also gaining prominence as a key issue, and permissioned blockchains provide one way to make this happen with sensor data and transaction data. Recently, the IBM Food Trust has demonstrated the practical value provided by the Hyperledger blockchain infrastructure for this purpose. Their prototypes have helped to identify supply chain bottlenecks that might not otherwise have been detected.

What should you do in your organization? Any way to enhance information sharing between members of the supply chain ecosystem — or more effectively synthesize and interpret it — should help your organization shift towards the end-to-end vision. Look for opportunities in both categories.


References for Connected, Intelligent, Automated stats:
  1. IBM. (2018, February). Global Chief Supply Chain Officer Study.
  2. Geriant, J. (2015, October). The Changing Face of Supply Chain Risk Management. SCM World.
  3. IBM & IDC. (2017, March). The Thinking Supply Chain.

Imperfect Action is Better Than Perfect Inaction: What Harry Truman Can Teach Us About Loss Functions (with an intro to ggplot)

One of the heuristics we use at Intelex to guide decision making is former US President Truman’s advice that “imperfect action is better than perfect inaction.” What it means is — don’t wait too long to take action, because you don’t want to miss opportunities. Good advice, right?

When I share this with colleagues, I often hear a response like: “that’s dangerous!” To which my answer is “well sure, sometimes, but it can be really valuable depending on how you apply it!” The trick is: knowing how and when.

Here’s how it can be dangerous. For example, statistical process control (SPC) exists to keep us from tampering with processes — from taking imperfect action based on random variation, which will not only get us nowhere, but can exacerbate the problem we were trying to solve. The secret is to apply Truman’s heuristic based on an understanding of exactly how imperfect is OK with your organization, based on your risk appetite. And this is where loss functions can help.
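
(A quick aside on the tampering point, with a toy sketch of my own that isn’t from any SPC package: the 3-sigma rule behind control charts separates routine variation from real signals. For a stable process, essentially every point falls inside the control limits, so “responding” to any of them is tampering.)

# toy example: a stable process with only common-cause variation
set.seed(42)
x <- rnorm(50, mean = 100, sd = 5)

# classic 3-sigma control limits
ucl <- mean(x) + 3 * sd(x)
lcl <- mean(x) - 3 * sd(x)

# which points signal a real, special-cause problem? almost certainly none --
# reacting to the random ups and downs inside the limits is tampering
which(x > ucl | x < lcl)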

Along the way, we’ll demonstrate how to do a few important things related to plotting with the ggplot2 package in R, gradually adding new elements to the plot so you can see how it’s layered, including how to:

  • Plot a function based on its equation
  • Add text annotations to specific locations on a ggplot
  • Draw horizontal and vertical lines on a ggplot
  • Draw arrows on a ggplot
  • Add extra dots to a ggplot
  • Eliminate axis text and axis tick marks

What is a Loss Function?

A loss function quantifies how unhappy you’ll be based on the accuracy or effectiveness of a prediction or decision. In the simplest case, you control one variable (x) which leads to some cost or loss (y). For the case we’ll examine in this post, the variables are:

  • How much time and effort you put in to scoping and characterizing the problem (x); we assume that time+effort invested leads to real understanding
  • How much it will cost you (y); can be expressed in terms of direct costs (e.g. capex + opex) as well as opportunity costs or intangible costs (e.g. damage to reputation)

Here is an example of what this might look like, if you have a situation where overestimating (putting in too much x) OR underestimating (putting in too little x) are both equally bad. In this case, x=10 is the best (least costly) decision or prediction:

Plot of a typical squared loss function
# describe the equation we want to plot
parabola <- function(x) ((x-10)^2)+10  

# load ggplot2 (the package that provides ggplot)
library(ggplot2)

# initialize ggplot with a dummy dataset
p <- ggplot(data = data.frame(x=0), mapping = aes(x=x)) 

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) +
     xlab("x = the variable you can control") + 
     ylab("y = cost of loss ($$)")

In regression (and other techniques where you’re trying to build a model to predict a quantitative dependent variable), mean squared error is a squared loss function that helps you quantify error. It captures two facts: the farther away you are from the correct answer, the worse the error is, and both overestimating and underestimating are bad (which is why you square the values). Across this and related techniques, the loss function captures these characteristics:

From http://www.cs.cornell.edu/courses/cs4780/2015fa/web/lecturenotes/lecturenote10.html
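
To make that concrete, here’s a minimal squared-error loss in R (the function names and toy values are my own, for illustration):

# squared-error loss: a symmetric penalty for over- and under-estimates
squared_loss <- function(y, y_hat) (y - y_hat)^2

# mean squared error over a vector of predictions
mse <- function(y, y_hat) mean(squared_loss(y, y_hat))

mse(c(10, 12, 15), c(11, 12, 13))   # (1 + 0 + 4) / 3 = 1.67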

Not all loss functions have that general shape. For classification, for example, the 0-1 loss function tells the story that if you get a classification wrong (the margin y·f(x) on the horizontal axis is negative) you incur the full penalty or loss (y=1), whereas if you get it right (the margin is positive) there is no penalty or loss (y=0):

# set up data frame of red points
d.step <- data.frame(x=c(-3,0,0,3), y=c(1,1,0,0))

# note that the loss function really extends to x=-Inf and x=+Inf
ggplot(d.step) + geom_step(mapping=aes(x=x, y=y), direction="hv") +
     geom_point(mapping=aes(x=x, y=y), color="red") + 
     xlab("y* f(x)") + ylab("Loss (Cost)") +  
     ggtitle("0-1 Loss Function for Classification")

Use the Loss Function to Make Strategic Decisions

So let’s get back to Truman’s advice. Ideally, we want to choose the x (the amount of time and effort to invest into project planning) that results in the lowest possible cost or loss. That’s the green point at the nadir of the parabola:

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen")

Costs get higher as we move up the x-axis:

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green")

And time+effort grows as we move along the x-axis (we might spend minutes on a problem at the left of the plot, or weeks to years by the time we get to the right hand side):

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3)

Planning too Little = Planning too Much = Costly

What this means is — if we don’t plan, or we plan just a little bit, we incur high costs. We might make the wrong decision! Or miss critical opportunities! But if we plan too much — we’re going to spend too much time, money, and/or effort compared to the benefit of the solution we provide.


p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3) +
     annotate(geom="text",x=3, y=85, label="Little (or no) Planning\nHIGH COST", color="red") +
     annotate(geom="text", x=18, y=85, label="Paralysis by Planning\nHIGH COST", color="red") +
     geom_vline(xintercept=0, linetype="dotted") + geom_hline(yintercept=0, linetype="dotted")

The trick is to FIND THAT CRITICAL LEVEL OF TIME and EFFORT invested to gain information and understanding about your problem… and then if you’re going to err, make sure you err towards the left — if you’re going to make a mistake, make the mistake that costs less and takes less time to make:

arrow.x <- c(10, 10, 10, 10)
arrow.y <- c(35, 50, 65, 80)
arrow.x.end <- c(6, 6, 6, 6)
arrow.y.end <- arrow.y
d <- data.frame(arrow.x, arrow.y, arrow.x.end, arrow.y.end)

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3) +
     annotate(geom="text",x=3, y=85, label="Little (or no) Planning\nHIGH COST", color="red") +
     annotate(geom="text", x=18, y=85, label="Paralysis by Planning\nHIGH COST", color="red") +
     geom_vline(xintercept=0, linetype="dotted") + 
     geom_hline(yintercept=0, linetype="dotted") +
     geom_vline(xintercept=10) +
     geom_segment(data=d, mapping=aes(x=arrow.x, y=arrow.y, xend=arrow.x.end, yend=arrow.y.end),
     arrow=arrow(), color="blue", size=2) +
     annotate(geom="text", x=8, y=95, size=2.3, color="blue",
     label="we prefer to be\non this side of the\nloss function")

Moral of the Story

The moral of the story is… imperfect action can be expensive, but perfect action is ALWAYS expensive. Spend less to make mistakes and learn from them, if you can! This is one of the value drivers for agile methodologies… agile practices can help improve communication and coordination so that the loss function is minimized.

## FULL CODE FOR THE COMPLETELY ANNOTATED CHART ##
# If you change the equation for the parabola, annotations may shift and be in the wrong place.
library(ggplot2)

# describe the equation we want to plot, and initialize ggplot with a dummy dataset
parabola <- function(x) ((x-10)^2)+10
p <- ggplot(data = data.frame(x=0), mapping = aes(x=x))

my.title <- expression(paste("Imperfect Action Can Be Expensive. But Perfect Action is ", italic("Always"), " Expensive."))

arrow.x <- c(10, 10, 10, 10)
arrow.y <- c(35, 50, 65, 80)
arrow.x.end <- c(6, 6, 6, 6)
arrow.y.end <- arrow.y
d <- data.frame(arrow.x, arrow.y, arrow.x.end, arrow.y.end)

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3) +
     annotate(geom="text",x=3, y=85, label="Little (or no) Planning\nHIGH COST", color="red") +
     annotate(geom="text", x=18, y=85, label="Paralysis by Planning\nHIGH COST", color="red") +
     geom_vline(xintercept=0, linetype="dotted") + 
     geom_hline(yintercept=0, linetype="dotted") +
     geom_vline(xintercept=10) +
     geom_segment(data=d, mapping=aes(x=arrow.x, y=arrow.y, xend=arrow.x.end, yend=arrow.y.end),
     arrow=arrow(), color="blue", size=2) +
     annotate(geom="text", x=8, y=95, size=2.3, color="blue",
     label="we prefer to be\non this side of the\nloss function") +
     ggtitle(my.title) +
     theme(axis.text.x=element_blank(), axis.ticks.x=element_blank(),
     axis.text.y=element_blank(), axis.ticks.y=element_blank()) 

Now sometimes you need to make this investment! (Think nuclear power plants, or constructing aircraft carriers or submarines.) Don’t get caught up in getting your planning investment perfectly optimized — but do be aware of the trade-offs, and go into the decision deliberately, based on the risk level (and regulatory nature) of your industry, and your company’s risk appetite.

Designing Experiences for Authentic Engagement: The Design for STEAM Canvas

As Industry 4.0 and Digital Transformation efforts bear their first fruits, capabilities, business models, and the organizations that embody them are transforming. A century ago, we thought of organizations as machines to be rigidly designed and controlled. In the latter part of the 20th century, organizations were thought of as knowledge to be cultivated, shared, and expanded. But “as intelligent systems gain traction, we are once again at a crossroads – where organizations must create complete and meaningful experiences” for their customers, stakeholders, and employees.

Read our new paper in the STEAM Journal

How do you design those complete, meaningful, and radically engaging experiences? To provide a starting point, check out “Design for STEAM: Creating Participatory Art with Purpose” by my former student Nick Kamienski and me. It was just published today by the STEAM Journal.

“Participatory Art” doesn’t just mean creating things that are pretty to look at in your office lobby or tradeshow booth. It means finding ways to connect with your audience in ways that help them find meaning, purpose, and self-awareness – the ultimate ingredient for authentic engagement.

Designing experiences to make this happen is challenging, but totally within reach. Learn more in today’s new article!
