Author Archives: Nicole Radziwill

The Endowment Effect: The Ultimate Organizational Rose-Colored (Risk-Enhancing) Glasses

Fifteen or so years ago, I was a member of a review team that assessed a major, multi-million dollar software project. We were asked to perform the review because the project had some issues — it cost nearly $2M a year, was not yet delivering value to users, and had been running for 17 years.

Were I the ultimate decision-maker, my plan of action would have been simple: shut down the project, reconstitute a team with some representation from the old team, and use the lessons learned to rearchitect a newer, more robust solution. It would have customer involvement from the start to ensure a short time-to-value (and continuous flow of value). But there was one complication: the subject matter for this software package was highly specialized and required active involvement from people who had deep knowledge of the problem domain… and the team already had about 60% of the world’s experts on it.

Still, I was focused on the sunk costs. I felt that the organization should not choose to keep the project going just because over $20M had been poured into it… the sunk costs should not factor into the decision.

But then something very curious happened two years later, as the project was still hemorrhaging money… I was put in charge of it. So what did I do? Launched a two-month due diligence to reassess the situation, of course.

I was not on the review team this time, but their assessment was not a surprise — can the project, reconstitute the team, use the lessons learned to plan a new approach to delivering value quickly.

So that’s what I did… right? NOOOOO!!! I decided to try a little harder… because of course we could get the current software project to be stable and valuable, if we just gave it a little more time.

Even I was shocked by my transformation. Why was I feeling like this? Why was I ignoring the facts? Why was I, all of a sudden, powerless to make the appropriate and most logical choice?

Turns out, I was just demonstrating human nature via the Endowment Effect — which says, simplistically, that once you own something, you value it more than you did before you owned it. This is not just a curiosity, though… it can get in the way of effective decision-making.

Think about it:

  • Before you buy a house, you psychologically devalue it because you want to get a better deal. But once you move in, your psyche inflates the value because you stand to win as the value increases.
  • Why is it that leaders often value the opinions of consultants more than the opinions of full-time staff? Because consultants are more expensive, and once their reports have been submitted, you now own the intellectual property… and value it more.
  • The same effect occurs if you buy a company. You may be sensitive to issues and opportunities for improvement prior to the sale, but once your signature is on the dotted line… the endowment effect kicks in, and the rose-colored glasses are donned.

This has a huge implication for quality and process improvement. Once you own something, you are less able to see the problems that need to be solved. This is why we use external auditors for our ISO 9001 programs, or review panels for our government projects, or a quality award assessment process for evaluating how well we are translating strategy to action.

Other people can see areas for improvement that you can’t, if you’re an owner of the process that has problems. The lesson? Get external eyes on your internal issues, and pay attention to their insights.

The U.S. Constitution is a Quality System

In 2008, I defined a quality system as:

your organization’s blueprint: it identifies your business model and processes, provides details about how your people will work together to get things done, and establishes specifications for performance — so you can tell if you’re on track… or not.

https://qualityandinnovation.com/2008/10/18/quality-system/

By this definition, the U.S. Constitution is a quality system — just like ISO 9001, or any system developed using the Baldrige Criteria, or a system for strategy execution based on Hoshin planning and other lean principles. The Constitution defines the blueprint for how power will be distributed (among the three branches of government, and between the country and the states), provides details about how the branches will work together and what principles they will abide by, and establishes clear standards for performance right up front:

We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.

https://constitutionus.com/

(The preamble is the Constitution’s quality policy.)

But even though I’ve been working with (and researching) quality systems since the late 90s, I didn’t see the connection until yesterday, when I read some excerpts from the Don McGahn case. McGahn, who was subpoenaed by the House of Representatives to testify in the Trump impeachment hearings, was instructed by the White House to disobey the order. He asked a court to decide whether or not he should be made to appear. Federal District Court Judge Ketanji Brown Jackson, in a 120-page opinion, drew on exactly those characteristics of the Constitution that make it a quality system:

…when a committee of Congress seeks testimony and records by issuing a valid subpoena in the context of a duly authorized investigation, it has the Constitution’s blessing, and ultimately, it is acting not in its own interest, but for the benefit of the People of the United States. If there is fraud or abuse or waste or corruption in the federal government, it is the constitutional duty of Congress to find the facts and, as necessary, take corrective action.

Committee on the Judiciary, U.S. House of Representatives v. Donald F. McGahn II – Civ. No. 19-cv-2379 (KBJ) – Filed 11/25/2019

This pattern should be really familiar to anyone who’s worked with ISO 9001 or similar quality systems! After your company’s processes and procedures are put in place, and your performance standards are defined (for products as well as processes), you implement a monitoring system to catch any nonconformance that might arise. Then, after root cause analysis, you implement a corrective action to improve the impacted process.

In the U.S., those nonconformances are fraud or abuse or waste or corruption or even injustice that one person (or entity) experiences at the hands of another. You can take up the issue with the courts, which will (in many cases) interpret the laws, implement countermeasures, and potentially lead to larger-scale corrective actions, like new laws.

How can you tell if the quality system defined by the Constitution is working? Evaluate it against the performance standards. Is justice taking place? Is there domestic tranquility, adequate defense, and general welfare? If not, then the structure of the quality system (e.g. the Amendments) should change to better enable the desired outcomes.

Although the system is imperfect, it does — by design — support continuous improvement that incorporates the Voice of the Customer (VoC). This is done through Congressional representation, carefully selected juries of peers, and NGOs that research and advance specific interests.

So the next time you’re wondering whether your ISO 9001 system adds value, ask yourself… does the U.S. Constitution add value? I think you’ll conclude that both can provide a necessary foundation.


The link between quality and structures in the U.S. government was also noted by Tim J. Clark in this 2008 article from the Indianapolis Star, entitled “People working together can make a more perfect union.” He notes that ‘The aim of the American system of government is to enable “We the People” to work together to make progress – not toward a “perfect” union, which would be impossible – but rather toward a “more perfect” union’ and explains how this aligns with Deming’s philosophy.

Things I’m Thankful For

Although there are many aspects of my life that I’m thankful for (e.g. my cat, the people around me, the amazing R community and the software they build), this post is going to be focused on THINGS I am thankful for. These things definitely improve my quality of life.

I don’t usually venture outside the bounds of management, quality, or data science, but I’m on “vacation” (i.e., not working for a week) and my chemical exposure model is throwing an “object of type 'closure' is not subsettable” error (which means I need to let it sit for a while), so I want to share with you the things in my life that spark joy. And… Happy (American) Thanksgiving!

#1 My Brita Water Filter

I drink a lot of water… and a lot of coffee. And I’m really sensitive to the way water tastes, especially after an incident years ago when the water in my house had black mold (no one else could taste it, but I swore there was something off about the water — they started believing me when I broke out in hives from drinking it). When the Brita entered my life, my quality of life changed drastically.

We make coffee with Brita water… we give the cat Brita water. We even got a 2nd Brita so we wouldn’t have to wait if the water in the first Brita ran out. I love these things.

#2 My Behmor Coffee Machine

Initially, I was skeptical about this coffee machine. It looked like it didn’t hold enough cups, and for goodness sake, I don’t need my coffee maker to be hooked up to the internet. Turns out I do. Every time I wake up, the coffee is already made for me, and there’s a cool app where you can customize the temperature of the water and other factors. It is the nerd’s ultimate coffee machine. I have no idea why it’s got weak reviews on Amazon.

I like this so much, we bought a second one that’s sitting in the closet — waiting for the moment that the first one breaks.

#3 New Mexico Piñon Coffee

I always thought I’d live in New Mexico… I did work there for a while (if you count traveling to your office’s other site) and I do have a graduate certificate in management from the U of NM. But who needs to live there when you get to taste piñon in your coffee every morning! That’s right. We usually mix 40% piñon to 60% French roast.

#4 My Smart Mattress Cover

I’ve had pretty serious thyroid problems since I was a kid, and every so often, it gets out of control. The past several months, it’s been really bad. In addition to making me super sluggish with dry skin, it also makes me cold, which means (for me, at least) that it’s hard to fall asleep at night. But not when you have the smart mattress cover! A few minutes before I go to bed, I pick up my phone, connect to my mattress, and turn on the heat timer… which roasts me for up to 8 hours and then shuts off. I love this so much that I miss it when I’m away from home on travel. I had never missed anything from home while traveling before, so this is a Big Deal.

I am not a huge fan of this company’s mattresses, but their mattress cover is A+. The app also comes with a sleep monitor, but I swear, they’ve got new software developers tweaking their algorithm on a daily basis. We don’t trust the sleep score any more, and the sleep app UI is so infuriating (and changes so often) that we’ve stopped using it. But it doesn’t matter because… the heater is blissful. Did I mention that the heater works separately for each side of the bed? So you can roast while your partner… doesn’t have to.

There’s another fringe benefit… you can always tell when someone else has been sleeping in your bed because your phone will alert you. And it will also tell you exactly what minutes your side of the bed was being occupied. I’m sure some people would like to have this kind of information.

#5 South Dakota Fry Bread

When I lived in South Dakota, I was introduced to fry bread via Indian tacos. Today, when I fry this up, I’m immediately taken to a part of my life that was simpler and more carefree. The fry bread might taste good just because it evokes these emotions in me, but who knows… it might work for you too. And you don’t even need to do the taco part! Just fry up the bread, eat it, and be transported to… South Dakota.

I may add more Things I’m Thankful For as I think of them.

KPIs vs Metrics: What’s the Difference? And Why Does it Matter?

Years ago I consulted for an organization that had an enticing mission, a dynamic and highly qualified workforce of around 200 people, and an innovative roadmap that was poised to make an impact — estimated to be ~$350-500M (yes really, that big). But there was one huge problem.

As engineers, the leadership could readily provide information about uptime and Service Level Agreements (SLAs). But they had no idea whether they were on track to meet strategic goals — or even whether they would be able to deliver key operations projects — at all! We recommended that they focus on developing metrics, and provided some guidelines for the types of metrics that might help them deliver their products and services — and satisfy their demanding customers.

Unfortunately, we made a critical mistake.

They were overachievers. When we came back six months later, they had nearly a thousand metrics. (A couple of the guys, beaming with pride, didn’t quite know how to interpret our non-smiling faces.)

“So tell us… what are your top three goals for the year, and are you on track to meet them?” we asked.

They looked at each other… then at us. They looked down at their papers. They glanced at each other again. It was in that moment they realized the difference between KPIs and metrics.

  • KPIs are KEY Performance Indicators. They have meaning. They are important. They are significant. And they relate to the overall goals of your business.
  • One KPI is associated with one or more metrics. Metrics are numbers, counts, percentages, or other values that provide insight about what’s happened in the past (descriptive metrics), what is happening right now (diagnostic metrics), what will happen (predictive metrics or forecasts), or what should happen (prescriptive metrics or recommendations).

For the human brain to be able to detect and respond to patterns in organizational performance, limit the number of KPIs!

A good rule of thumb is to select 3-5 KPIs (but never more than 8 or 9!) per logical division of your organization. A logical division can be a functional area (finance, IT, call center), a product line, a program or collection of projects, or a collection of strategic initiatives.

Or, use KPIs and metrics to describe product performance, process performance, customer satisfaction, customer engagement, workforce capability, workforce capacity, leadership performance, governance performance, financial performance, market performance, and how well you are executing on the action plans that drive your strategic initiatives (strategy performance). These logical divisions come from the Baldrige Excellence Framework.

Similarly, try to limit the number of projects and initiatives in each functional area — and across your organization. Work gets done more easily when people understand how all the parts of your organization relate to one another.

What happened to the organization from the story, you might ask? Within a year, they had boiled down their metrics into 8 functional areas, were working on 4 strategic initiatives, and had no more than 5 KPIs per functional area. They found it really easy to monitor the state of their business, and respond in an agile and capable way. (They were still collecting lots more metrics, but they only had to dig into them on occasion.)

Remember… metrics are helpful, but:

KPIs are KEY!!

You don’t have thousands of keys to your house… and you don’t want thousands of KPIs. Take a critical look at what’s most important to your business, and organize that information in a way that’s accessible. You’ll find it easier to manage everything — strategic initiatives, projects, and operations.

easyMTS: My First R Package (Story, and Results)

This weekend I decided to create my first R package… it’s here! easyMTS makes it possible to create and evaluate a Mahalanobis-Taguchi System (MTS) for pseudo-classification:

https://github.com/NicoleRadziwill/easyMTS
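If you’re new to MTS, the core idea is that you fit a “reference space” on a healthy group, then score new observations by their Mahalanobis distance from that group’s center. Here’s a minimal base-R sketch of that idea using simulated data (this is an illustration of the concept, not the easyMTS API):

```r
# Simulate a healthy reference group and a shifted abnormal group,
# mirroring the package's example data shape: 5 predictors each.
set.seed(42)
healthy  <- matrix(rnorm(50 * 5), ncol = 5)            # 50 healthy observations
abnormal <- matrix(rnorm(15 * 5, mean = 3), ncol = 5)  # 15 abnormal observations

# Fit the reference space: center and covariance of the healthy group
center <- colMeans(healthy)
covmat <- cov(healthy)

# Score both groups by squared Mahalanobis distance from the healthy center
d_healthy  <- mahalanobis(healthy,  center, covmat)
d_abnormal <- mahalanobis(abnormal, center, covmat)

# Abnormal observations should sit much farther from the healthy center,
# which is what makes a distance threshold usable for pseudo-classification
mean(d_healthy)
mean(d_abnormal)
```

A cutoff on that distance is what turns MTS into a pseudo-classifier.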

Although I’ve been using R for 15 years, developing a package has been the one thing slightly out of reach for me. Now that I’ve been through the process once, with a package that’s not completely done (but at least has a firm foundation, and is usable to some degree), I can give you some advice:

  • Make sure you know R Markdown before you begin.
  • Some experience with Git and GitHub will be useful. Lots of experience will be very, very useful.
  • Write the functions that will go into your package into a file that you can source into another R program and use. If your programs work when you run the code this way, you will have averted many problems early.
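That last point is worth a tiny illustration. The workflow is: keep your functions in one file, source() it from a plain script, and confirm everything runs before the file ever moves into the package’s R/ directory (the file and function names below are hypothetical):

```r
# Write a functions file the way you would for a package-to-be.
# (Here we generate it inline so the example is self-contained.)
writeLines(
  "scale_unit <- function(x) (x - mean(x)) / sd(x)",
  "my_functions.R"
)

# Source it into a plain R script and exercise it. If this works here,
# the same file can later become R/my_functions.R in the package.
source("my_functions.R")
z <- scale_unit(c(1, 2, 3, 4, 5))

# Standardized data should have mean ~0 and sd 1
mean(z)
sd(z)
```

Debugging a function in a plain sourced script is much faster than debugging it through repeated package builds.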

The process I used to make this happen is laid out in Parts 1 through 3 of the My First R Package posts.

I hope you enjoy following along with my process, and that it helps you write packages too. If I can do it, so can you!

My First R Package (Part 3)

After refactoring my program down to only about 10 lines of code, using 12 functions I wrote and loaded in via the source command, I went through all the steps in Part 1 of this blog post and Part 2 of this blog post to set up the R package infrastructure using usethis in RStudio. Then things started humming along with the rest of the setup:

> use_mit_license("Nicole Radziwill")
✔ Setting active project to 'D:/R/easyMTS'
✔ Setting License field in DESCRIPTION to 'MIT + file LICENSE'
✔ Writing 'LICENSE.md'
✔ Adding '^LICENSE\\.md$' to '.Rbuildignore'
✔ Writing 'LICENSE'

> use_testthat()
✔ Adding 'testthat' to Suggests field in DESCRIPTION
✔ Creating 'tests/testthat/'
✔ Writing 'tests/testthat.R'
● Call `use_test()` to initialize a basic test file and open it for editing.

> use_vignette("easyMTS")
✔ Adding 'knitr' to Suggests field in DESCRIPTION
✔ Setting VignetteBuilder field in DESCRIPTION to 'knitr'
✔ Adding 'inst/doc' to '.gitignore'
✔ Creating 'vignettes/'
✔ Adding '*.html', '*.R' to 'vignettes/.gitignore'
✔ Adding 'rmarkdown' to Suggests field in DESCRIPTION
✔ Writing 'vignettes/easyMTS.Rmd'
● Modify 'vignettes/easyMTS.Rmd'

> use_citation()
✔ Creating 'inst/'
✔ Writing 'inst/CITATION'
● Modify 'inst/CITATION'

Add Your Dependencies

> use_package("ggplot2")
✔ Adding 'ggplot2' to Imports field in DESCRIPTION
● Refer to functions with `ggplot2::fun()`
> use_package("dplyr")
✔ Adding 'dplyr' to Imports field in DESCRIPTION
● Refer to functions with `dplyr::fun()`

> use_package("magrittr")
✔ Adding 'magrittr' to Imports field in DESCRIPTION
● Refer to functions with `magrittr::fun()`
> use_package("tidyr")
✔ Adding 'tidyr' to Imports field in DESCRIPTION
● Refer to functions with `tidyr::fun()`

> use_package("MASS")
✔ Adding 'MASS' to Imports field in DESCRIPTION
● Refer to functions with `MASS::fun()`

> use_package("qualityTools")
✔ Adding 'qualityTools' to Imports field in DESCRIPTION
● Refer to functions with `qualityTools::fun()`

> use_package("highcharter")
Registered S3 method overwritten by 'xts':
  method     from
  as.zoo.xts zoo 
Registered S3 method overwritten by 'quantmod':
  method            from
  as.zoo.data.frame zoo 
✔ Adding 'highcharter' to Imports field in DESCRIPTION
● Refer to functions with `highcharter::fun()`

> use_package("cowplot")
✔ Adding 'cowplot' to Imports field in DESCRIPTION
● Refer to functions with `cowplot::fun()`

Adding Data to the Package

I wanted to include two data files: one data frame containing 50 observations of a healthy group with 5 predictors each, and another data frame containing 15 observations from an abnormal or unhealthy group (also with 5 predictors). I made sure the two CSV files I wanted to add to the package were in my working directory first by using dir().

> use_data_raw()
✔ Creating 'data-raw/'
✔ Adding '^data-raw$' to '.Rbuildignore'
✔ Writing 'data-raw/DATASET.R'
● Modify 'data-raw/DATASET.R'
● Finish the data preparation script in 'data-raw/DATASET.R'
● Use `usethis::use_data()` to add prepared data to package

> mtsdata1 <- read.csv("MTS-Abnormal.csv") %>% mutate(abnormal=1)
> usethis::use_data(mtsdata1)
✔ Creating 'data/'
✔ Saving 'mtsdata1' to 'data/mtsdata1.rda'

> mtsdata2 <- read.csv("MTS-Normal.csv") %>% mutate(normal=1)
> usethis::use_data(mtsdata2)
✔ Saving 'mtsdata2' to 'data/mtsdata2.rda'

Magically, this added my two files (in .rda format) into my /data directory. (Now, though, I don’t know why the /data-raw directory is there… maybe we’ll figure that out later.) I decided it was time to commit these to my repository again.
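(As it turns out, data-raw/ is meant to hold the preparation script itself: use_data_raw() added '^data-raw$' to .Rbuildignore, so the raw CSVs and the script stay out of the built package, and only the prepared .rda objects under data/ ship with it. A data-raw/DATASET.R along these lines — file names and paths hypothetical — captures the steps so they can be re-run later:)

```r
# data-raw/DATASET.R: turn raw CSVs into the package's data/*.rda files.
# This script is .Rbuildignore'd, so it never ships with the package.
library(dplyr)

mtsdata1 <- read.csv("data-raw/MTS-Abnormal.csv") %>% mutate(abnormal = 1)
mtsdata2 <- read.csv("data-raw/MTS-Normal.csv")   %>% mutate(normal = 1)

# overwrite = TRUE lets you re-run the script after fixing the raw data
usethis::use_data(mtsdata1, mtsdata2, overwrite = TRUE)
```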

Following the instruction above, I re-knit the README.Rmd, and then it was possible to commit everything to GitHub again. At which point I ended up in a fistfight with git, saved (again) only by my software engineer partner, who uses GitHub all the time.

I think it should be working. The next test will be whether anyone can install this from GitHub using devtools. Let me know if it works for you… it works for me locally, but you know how that goes. The next post will show you how to use it 🙂

install.packages("devtools")
devtools::install_github("NicoleRadziwill/easyMTS")

SEE WHAT WILL BECOME THE easyMTS VIGNETTE –>
