Category Archives: JIT/Manufacturing

A Simple Intro to Bayesian Change Point Analysis

The purpose of this post is to demonstrate change point analysis by stepping through an example of the technique in R presented in Rizzo’s excellent, comprehensive, and very mathy book, Statistical Computing with R, and then showing alternative ways to process this data using the changepoint and bcp packages. Much of the commentary is simplified, and that’s on purpose: I want to make this introduction accessible if you’re just learning the method. (Most of the code is straight from Rizzo who provides a much more in-depth treatment of the technique. I’ve added comments in the code to make it easier for me to follow, and that’s about it.)

The idea itself is simple: you have a sample of observations from a Poisson (counting) process (where events occur randomly over a period of time). You probably have a chart that shows time on the horizontal axis, and how many events occurred on the vertical axis. You suspect that the rate at which events occur has changed somewhere over that range of time… either the event is increasing in frequency, or it’s slowing down — but you want to know with a little more certainty. (Alternatively, you could check to see if the variance has changed, which would be useful for process improvement work in Six Sigma projects.)

You want to estimate the rate at which events occur BEFORE the shift (mu), the rate at which events occur AFTER the shift (lambda), and the time when the shift happens (k). To do it, you can apply a Markov Chain Monte Carlo (MCMC) sampling approach to estimate the population parameters at each possible k, from the beginning of your data set to the end of it. The values you get at each time step will be dependent only on the values you computed at the previous timestep (that’s where the Markov Chain part of this problem comes in). There are lots of different ways to hop around the parameter space, and each hopping strategy has a fancy name (e.g. Metropolis-Hastings, Gibbs, “reversible jump”).
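
To make that concrete, here is a tiny simulated example (my own, not from Rizzo's book): a series of Poisson counts whose rate drops from 3 events per year to 1 event per year after the 40th year, which is exactly the kind of series the method below is meant to dissect.

# Hypothetical toy data, for illustration only: 112 "years" of counts whose
# rate drops from 3 to 1 after year 40 (112 matches the length of the coal
# mining series used below).
set.seed(1)
y_sim <- c(rpois(40, lambda = 3), rpois(72, lambda = 1))
barplot(y_sim, xlab = "year index", ylab = "simulated event count")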

In one example, Rizzo (p. 271-277) uses a Markov Chain Monte Carlo (MCMC) method that applies a Gibbs sampler to do the hopping – with the goal of figuring out the change point in number of coal mine disasters from 1851 to 1962. (Looking at a plot of the frequency over time, it appears that the rate of coal mining disasters decreased… but did it really? And if so, when? That’s the point of her example.) She gets the coal mining data from the boot package. Here’s how to get it, and what it looks like:

library(boot)                          # the coal data ships with the boot package
data(coal)                             # dates of coal mine disasters, 1851 to 1962
y <- tabulate(floor(coal[[1]]))        # count the number of disasters in each calendar year
y <- y[1851:length(y)]                 # keep the counts from 1851 onward
barplot(y, xlab="years", ylab="frequency of disasters")

[Figure coalmine-freq: barplot of coal mine disaster frequency by year]

First, we initialize all of the data structures we’ll need to use:

# initialization
n <- length(y) # number of data elements to process
m <- 1000 # target length of the chain
# set up blank 1000-element arrays for mu, lambda, and k
# (these must exist before we fill in their first entries)
mu <- lambda <- k <- numeric(m)
L <- numeric(n) # likelihood fxn has one slot per year
k[1] <- sample(1:n,1) # pick 1 random year to start at
mu[1] <- 1
lambda[1] <- 1
b1 <- 1
b2 <- 1

Here are the models for prior (hypothesized) distributions that she uses, based on the Gibbs sampler approach:

  • mu comes from a Gamma distribution with shape parameter of (0.5 + the sum of all your frequencies UP TO the point in time, k, you’re currently at) and a rate of (k + b1)
  • lambda comes from a Gamma distribution with shape parameter of (0.5 + the sum of all your frequencies AFTER the point in time, k, you’re currently at) and a rate of (n – k + b2), where n is the total number of years in the data set
  • b1 comes from a Gamma distribution with a shape parameter of 0.5 and a rate of (mu + 1)
  • b2 comes from a Gamma distribution with a shape parameter of 0.5 and a rate of (lambda + 1)
  • a likelihood function L is also provided, and is a function of k, mu, lambda, and the sum of all the frequencies up until that point in time, k

At each iteration, you pick a value of k to represent a point in time where a change might have occurred. You slice your data into two chunks: the chunk that happened BEFORE this point in time, and the chunk that happened AFTER this point in time. Using your data, you apply a Poisson Process with a (Hypothesized) Gamma Distributed Rate as your model. This is a pretty common model for this particular type of problem. It’s like randomly cutting a deck of cards and taking the average of the values in each of the two cuts… then doing the same thing again… a thousand times. Here is Rizzo’s (commented) code:

# start at 2, so you can use initialization values as seeds
# and go through this process once for each of your m iterations
for (i in 2:m) {
  kt <- k[i-1] # start w/random year from initialization
  # set your shape parameter to pick mu from, based on the characteristics
  # of the early ("before") chunk of your data
  r <- .5 + sum(y[1:kt])
  # now use it to pick mu
  mu[i] <- rgamma(1, shape=r, rate=kt+b1)
  # if you're at the end of the time periods, set your shape parameter
  # to 0.5 + the sum of all the frequencies, otherwise, just set the shape
  # parameter that you will use to pick lambda based on the later ("after")
  # chunk of your data
  if (kt+1 > n) r <- 0.5 + sum(y) else r <- 0.5 + sum(y[(kt+1):n])
  lambda[i] <- rgamma(1, shape=r, rate=n-kt+b2)
  # now use the mu and lambda values that you got to set b1 and b2 for next iteration
  b1 <- rgamma(1, shape=.5, rate=mu[i]+1)
  b2 <- rgamma(1, shape=.5, rate=lambda[i]+1)
  # for each year, find value of LIKELIHOOD function which you will
  # then use to determine what year to hop to next
  for (j in 1:n) {
    L[j] <- exp((lambda[i]-mu[i])*j) * (mu[i]/lambda[i])^sum(y[1:j])
  }
  L <- L/sum(L)
  # determine which year to hop to next
  k[i] <- sample(1:n, prob=L, size=1)
}

Knowing the distributions of mu, lambda, and k from hopping around our data will help us estimate values for the true population parameters. At the end of the simulation, we have an array of 1000 values of k, an array of 1000 values of mu, and an array of 1000 values of lambda — we use these to estimate the real values of the population parameters. Typically, algorithms that do this automatically throw out a whole bunch of them in the beginning (the “burn-in” period) — Rizzo tosses out 200 observations — even though some statisticians (e.g. Geyer) say that the burn-in period is unnecessary:

> b <- 201 # treats time until the 200th iteration as "burn-in"
> mean(k[b:m])
[1] 39.765
> mean(lambda[b:m])
[1] 0.9326437
> mean(mu[b:m])
[1] 3.146413

The change point happened between the 39th and 40th observations, the arrival rate before the change point was 3.14 arrivals per unit time, and the rate after the change point was 0.93 arrivals per unit time. (Cool!)

After I went through this example, I discovered the changepoint package, which let me run through a similar process in just a few lines of code. Fortunately, the results were very similar! I chose the “AMOC” method, which stands for “at most one change”. Other methods are available that can help identify more than one change point (PELT, BinSeg, and SegNeigh – although I got an error message every time I attempted that last method); a quick PELT sketch follows the AMOC output below.

> results <- cpt.mean(y,method="AMOC")
> cpts(results)
cpt 
 36 
> param.est(results)
$mean
[1] 3.2500000 0.9736842
> plot(results,cpt.col="blue",xlab="Index",cpt.width=4)

[Figure coalmine-changepoint: plot of the series with the AMOC change point marked in blue]
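
Since the post mentions PELT but doesn’t demonstrate it, here is a minimal sketch, assuming the same y vector is still loaded (the results_pelt name is mine); because PELT allows multiple change points, cpts() may return a vector of indices:

# Hypothetical follow-on (not from the original post): search for multiple
# shifts in the mean using the PELT method from the changepoint package.
results_pelt <- cpt.mean(y, method="PELT")
cpts(results_pelt)       # indices of any detected change points
param.est(results_pelt)  # estimated segment means between change points
plot(results_pelt, cpt.col="blue", xlab="Index", cpt.width=4)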

I decided to explore a little further and found even MORE change point analysis packages! So I tried this example using bcp (which I presume stands for “Bayesian Change Point”) and voila… the output looks very similar to each of the previous two methods!
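
The original post doesn’t show the bcp code itself, so here is a minimal sketch, assuming the same y vector from the coal example is still in your workspace (the results_bcp name is mine):

library(bcp)
# A hypothetical reconstruction (not the author's original code): fit a
# Bayesian change point model to the yearly counts, then plot the posterior
# means alongside the posterior probability of a change at each year.
results_bcp <- bcp(y)
plot(results_bcp)
summary(results_bcp)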

[Figure coalmine-bcp: output from the bcp package for the coal mining series]

It’s at this point that the HARD part of the data science project would begin… WHY? Why does it look like the rate of coal mining accidents decreased suddenly? Was there a change in policy or regulatory requirements in Britain, where this data was collected? Was there some sort of mass exodus away from working in the mines, and so there’s a covariate in the number of opportunities for a mining disaster to occur? Don’t know… the original paper from 1979 doesn’t reveal the true story behind the data.

There are also additional resources on R-bloggers that discuss change point analysis.

(Note: If I’ve missed anything, or haven’t explained anything right, please provide corrections and further insights in the comments! Thank you.)

Is the Manufacturing Outlook Sufficiently Far-Reaching?

In his January post, ASQ CEO Paul Borawski discusses the results from ASQ’s 2013 manufacturing outlook survey. Although the majority of manufacturers reported revenue growth in 2013, many are still very concerned about the state of the economy. Paul was inquiring whether this “optimistic but guarded” perspective was a good assessment, and asked for examples from manufacturers that might shed some more light on the situation.

I’ve alluded over Twitter that my relationship with quality – as a concept – has been changing over the past few months. That’s the main reason that I haven’t been posting as frequently on this blog… I’m trying to sort out my feelings. (It’s almost like what happens when you’ve been in a relationship for years, but then gradually discover that you’ve changed, and the relationship is no longer meeting your deepest needs.)

Paul’s January post has helped me clarify some of these feelings.

If we focus on revenue growth as a measure of “the health of manufacturing”, we’re all missing the point. Current trends in production indicate that the locus of power is increasingly shifting from large manufacturing companies to individuals and small groups. Examples such as the Maker movement, the success of online platforms like Etsy to support craftspeople, and the increasing availability of new technologies like 3D printing at reasonable costs are shifting the environmental dynamism of what has traditionally been a slowly evolving arena:

“with the Maker movement we will be increasingly surprised at what seemed small and local and small‑scale, now will continue to grow…”  —http://techonomy.com/conf/13-detroit/the-new-techonomy/can-the-maker-movement-re-make-america/

I was reminded of Clayton Christensen’s landmark 1997 book, The Innovator’s Dilemma. By successfully satisfying current needs, we are potentially blinded to the ability to satisfy future needs. We are so accustomed to the model of manufacturing working so well, over so many decades, that we may fail to recognize when the centralized approach is losing ground.

How are manufacturers addressing these shifts? Are they re-examining the core assumptions upon which their industries are based? I think this should be the focus, rather than continued revenue growth and “concern” about the state of the economy.

Sustaining Excellence for the Long Term

David Butler, VP of Innovation at Coca-Cola. Isn’t he cute? Picture from http://www.coca-colacompany.com/coca-cola-unbottled/startup-refreshed-why-coke-is-joining-the-entrepreneurial-revolution

In September’s question to the ASQ Influential Voices, CEO Paul Borawski asks how an established organization can maintain a record of excellence over the long term:

Let’s say you’ve reached the “holy grail” of quality and excellence. You make a great product. Your service is top-notch. You innovate. You’ve developed a culture of quality where employees and leaders are empowered. Now, how do you sustain all this…for years, decades, centuries? Everyone can name once-excellent companies that had trouble sustaining the very things that took them to the top.

I’m not going to summarize the messages of Jim Collins’ excellent summaries of research in Built to Last or Great by Choice, even though I think there are many important insights in both books. I want to focus on a new perspective on this question that I heard from Coca-Cola’s VP of Innovation, David Butler, at last week’s Business Innovation Factory (BIF-9) Summit in Providence, Rhode Island.

Butler acknowledges that startups are inherently great at launching new ideas and bringing them to fruition, whereas organizations like Coca-Cola are unparalleled in their ability to leverage their substantial assets (resources, skills, and networks) to scale ideas and broaden their impact.

This essential interplay between starting and scaling was what Butler wanted to capture within his organization.

By supporting the energy and enthusiasm within the maker movement, Coca-Cola is now participating in Startup Weekends that bring together Coca-Cola employees with community members to collaborate and explore possibilities for rapid innovation and a quick transition to commercialization. By providing the platform for entrepreneurs to explore new ideas alongside Coca-Cola employees who know the business, Coca-Cola is essentially acting as a hands-on Venture Capitalist who hops on board as idea generation is flourishing into actionable opportunity.

By inserting themselves into a unique slot in the value chain, Coca-Cola has found a novel way to sustain excellence for the long term.

The Origins of Just-In-Time

A couple weeks ago, the students in my ISAT 654 (Advanced Technology Management) class at JMU asked about where and when Just-In-Time (JIT) manufacturing actually started in the United States. Although I still can’t identify the FIRST company to adopt this approach, I was also curious about how the adoption of JIT in the US grew from the Toyota Production System (TPS).

Just-in-Time (JIT) is only one element of lean manufacturing, which is a broader philosophy that seeks to eliminate all kinds of waste in a process.  Although JIT is often considered an enterprise-wide philosophy of continuous improvement, I’d like to focus on the mechanistic aspects of JIT – that is, the development and operations of a production system that employs continuous flow and preventive maintenance. In an effectively implemented JIT production system, there is little or no inventory – which includes Work-In-Process (WIP) – and production is tightly coupled to demand.

The origin of JIT can be traced back to Henry Ford’s production line, in which he was keenly aware of the burdens of inventory. However, Ford’s production system generated large volumes of identical products created in large batches – there was no room for variety, and the system was not coupled to demand levels.

In post-war Japan, Taiichi Ohno (“Father of JIT”) adapted the system at Toyota to handle smaller batch sizes and more variety in the parts that could be used to construct assemblies. In 1952, work on their JIT system was initiated, with full deployment of the kanban pull system by 1962. This was the genesis of the Toyota Production System, an elegant (and sometimes elusive) socio-technical system for production and operations. This approach bridged the gaps between production and continuous improvement and became the basis for lean manufacturing as it is known today.

After the oil crisis in 1973, other Japanese companies started to take note of the success of Toyota, and the approach became more widely adopted. The JIT technique spread to the United States in the late 1970s and 1980s, but due to inconsistencies in implementation and a less mature grasp of the human and cultural elements of the Toyota Production System, western companies experienced limited success. The Machine that Changed the World by James Womack made the JIT+TPS concept more accessible to US companies in 1990, which led to the widespread adoption of lean manufacturing techniques and philosophies thereafter.

JIT is very sensitive to the external environment in which it is implemented. For a review of Polito & Watson’s excellent 2006 article that describes the key barriers to smooth JIT, read Shocks to the System: Financial Meltdown and a Fragile Supply Chain.

(P.S. Why the picture of butter? Because JIT, when implemented appropriately, is perfectly smooth and slippery and thus passes The Butter Test.)

Quality and the Great Contraction

From the July 6, 2009 issue of Business Week:

“A new world order is dawning – one in which the West is no longer dominant, capitalism (at least the American version) is out of favor, and protectionism is on the rise… the era of laissez-faire economics is over, and statism, once discredited, is making a comeback – even in the U.S…. global trade is set to fall this year, for the first time in more than two decades.”

We have been conditioned to think that the notion of space – geographic space – does not matter in the new economy. We have the Internet, and ideas can zing from one place to another with ease (and nearly instantaneously, for that matter). Add to this videoconferencing with Skype, and keeping up with your contacts on Twitter and Facebook in near-real time, and it’s no wonder that people have also become accustomed to assuming that materials can move from one place to another with similar relative ease.

Space does matter. We know this when we are designing facilities and plant layouts, for example, because one of our common considerations is to minimize traffic between areas and departments. More often than not, we do this to minimize the time spent moving people or equipment around a plant, so that time is not wasted. But the same concept could apply to our supply chains. Why aren’t we minimizing the time that components or goods spend traveling through the supply chain, when it could lead to reductions in energy costs? Furthermore, why aren’t we shortening our supply chains to strengthen local and regional businesses, and train the next generation of skilled workers (who can actually do something useful for the regional economy)?

The logic has been something like this: energy is cheap, therefore transportation is cheap, and transportation is easily available and accessible through third-party providers like FedEx and UPS. But I can’t shake the feeling that “supply chain status quo” is not good for quality in the long-term – because it encourages us to source the products and components that are most affordable, rather than the ones that might help us cultivate a quality consciousness in our local areas.

Inspection, Abstraction and Shipping Containers

On my drive home tonight, a giant “Maersk Sealand” branded truck passed me on the highway. It got me thinking about the innovation of the shipping container, and how introducing a standard size and shape revolutionized the shipping industry and enabled a growing global economy. At least that’s the perspective presented by Marc Levinson in The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger. A synopsis of the story and a sample chapter are available; Wikipedia’s entry on containerization also presents a narrative describing the development and its impacts.

Here’s how impactlab.com describes it:

Indeed, it is hard to imagine how world trade could have grown so fast—quintupling in the last two decades—without the “intermodal shipping container,” to use the technical term. The invention of a standard-size steel box that can be easily moved from a truck to a ship to a railroad car, without ever passing through human hands, cut down on the work and vastly increased the speed of shipping. It represented an entirely new system, not just a new product. The dark side is that these steel containers are by definition black boxes, invisible to casual inspection, and the more of them authorities open for inspection, the more they undermine the smooth functioning of the system.

Although some people like to debate whether the introduction of the shipping container represented an incremental improvement or a breakthrough innovation, I’d like to point out an entirely different aspect of this story: a process improvement step yielded a plethora of benefits because the inspection step was eliminated. Inspection happened naturally the old way, without planning it explicitly; workers had to unpack all the boxes and crates from one truck and load them onto another truck, or a ship. It would be difficult to overlook a nuclear warhead or a few tons of pot.

To make the system work, the concept of what was being transported was abstracted away from the problem, making the shipping container a black box. If all parties are trustworthy and not using the system for a purpose other than what was intended, this is no problem. But once people start using the system for unintended purposes, everything changes.

This reflects what happens in software development as well: you code an application, abstracting away the complex aspects of the problem and attaching unit tests to those nuggets. You don’t have to inspect the code within the nuggets because either you’ve already fully tested them, or you don’t care – and either way, you don’t expect what’s in the nugget to change. Similarly, the shipping industry did not plan that the containers would be used to ship illegal cargo – that wasn’t one of the expectations of what could be within the black box. The lesson (to me)? Degree of abstraction within a system, and the level of inspection of a system, are related. When your expectations of what constitutes your components change, you need to revisit whether you need inspection (and how much).

Authenticity for Quality

In Good Business, Mihaly Csikszentmihalyi discusses some insights from Robert Shapiro, CEO of the chemical company Monsanto, about authenticity and job design:

The notion of job implies that there’s been some supreme architect who designed this system so that a lot of parts fit together and produce whatever the desired output is. No one in a job can see the whole. When we ask you to join us, we are saying, “Do you have the skills and the willingness to shape yourself in this way so you will fit into this big machine? Because somebody did this job for you, somebody who was different from you. Someone will do it after you. Those parts of you that aren’t relevant to that job, please just forget about. Those shortcomings that you have that really don’t enable you to fill this job, please at least try to fake, so that we can all have the impression that you’re doing this job.”… We ought to be saying, “What can you bring to this that’s going to help?” Not, “Here’s the job, just do it.”

Later in the book, this concept of authenticity – the ability to be real, and get connected to your intrinsic motivation – is broken down into two distinct parts:

Differentiation – How and why are you unique? What can you alone bring to the workplace? What skills and talents are you dedicated to developing so that you can contribute those aspects of yourself to the team? Does the team know what specialized contributions each individual is there to bring, and do they value the contributions that are expected?

Integration – How well are you connected with the needs of others? Can you relate to – and empathize with – your manager’s needs? How well, and how honestly, do you hear the voice of the customer?  Do you have the willingness and the attitude to respond to it?

Authenticity within an organization can influence quality in many ways: people will feel more comfortable recommending and implementing changes, products and services will be tailored to meet customer needs and demands more effectively, egos will be tempered, and teamwork will become natural.

Although Shapiro’s example considers differentiation and integration with respect to an individual, the concept also applies to teams in the workplace, and companies and how they relate to their customers and the external environment.
