Tag Archives: simulation

A Discrete Time Markov Chain (DTMC) SIR Model in R

Image Credit: Doug Buckley of http://hyperactive.to

There are many different techniques that can be used to model physical, social, economic, and conceptual systems. The purpose of this post is to show how the Kermack-McKendrick (1927) formulation of the SIR Model for studying disease epidemics (where S stands for Susceptible, I stands for Infected, and R for Recovered) can be easily implemented in R as a discrete time Markov Chain using the markovchain package.

A Discrete Time Markov Chain (DTMC) is a model for a random process where one or more entities can change state between distinct timesteps. For example, in SIR, people can be labeled as Susceptible (haven’t gotten a disease yet, but aren’t immune), Infected (they’ve got the disease right now), or Recovered (they’ve had the disease, but no longer have it, and can’t get it because they have become immune). If they get the disease, they change states from Susceptible to Infected. If they get well, they change states from Infected to Recovered. It’s impossible to change states between Susceptible and Recovered, without first going through the Infected state. It’s totally possible to stay in the Susceptible state between successive checks on the population, because there’s not a 100% chance you’ll actually be infected between any two timesteps. You might have a particularly good immune system, or maybe you’ve been hanging out by yourself for several days programming.

Discrete time means you’re not continuously monitoring the state of the people in the system. It would get really overwhelming if you had to ask them every minute “Are you sick yet? Did you get better yet?” It makes more sense to monitor individuals’ states on a discrete basis rather than continuously, for example, like maybe once a day. (Ozgun & Barlas (2009) provide a more extensive illustration of the difference between discrete and continuous modeling, using a simple queuing system.)

To create a Markov Chain in R, all you need to know are the 1) transition probabilities, or the chance that an entity will move from one state to another between successive timesteps, 2) the initial state (that is, how many entities are in each of the states at time t=0), and 3) the markovchain package in R. Be sure to install markovchain before moving forward.
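If you don't have the package yet, a minimal one-time install from CRAN should be all you need:

install.packages("markovchain")   # one-time install from CRAN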

Imagine that there's a 10% infection rate and a 20% recovery rate. That implies that, between successive timesteps, 90% of Susceptible people will remain Susceptible (and 10% will become Infected), and 80% of those who are Infected will remain Infected (while 20% will move to the Recovered category). 100% of those Recovered will stay Recovered. None of the people who are Recovered will become Susceptible.

Say that you start with a population of 100 people, and only 1 is infected. That means your “initial state” is that 99 are Susceptible, 1 is Infected, and 0 are Recovered. Here’s how you set up your Markov Chain:

library(markovchain)
mcSIR <- new("markovchain", states=c("S","I","R"),
    transitionMatrix=matrix(data=c(0.9,0.1,0,0,0.8,0.2,0,0,1),
    byrow=TRUE, nrow=3), name="SIR")
initialState <- c(99,1,0)

At this point, you can ask R to see your transition matrix, which shows the probability of moving FROM each of the three states (that form the rows) TO each of the three states (that form the columns).

> show(mcSIR)
SIR
 A  3 - dimensional discrete Markov Chain with following states
 S I R 
 The transition matrix   (by rows)  is defined as follows
    S   I   R
S 0.9 0.1 0.0
I 0.0 0.8 0.2
R 0.0 0.0 1.0

You can also plot your transition probabilities:

plot(mcSIR,package="diagram")

[Figure: transition network for the DTMC SIR model]

But all we've done so far is create our model. We haven't yet run a simulation, which would show us how many people are in each of the three states as we move from one discrete timestep to the next. We can set up a data frame to contain a label for each timestep and a count of how many people are in each state at that timestep. Then, we fill that data frame with the results after each timestep i, calculated by initialState*mcSIR^i:

timesteps <- 100
sir.df <- data.frame("timestep" = numeric(),
    "S" = numeric(), "I" = numeric(),
    "R" = numeric(), stringsAsFactors=FALSE)
for (i in 0:timesteps) {
    newrow <- as.list(c(i, round(as.numeric(initialState * mcSIR ^ i), 0)))
    sir.df[nrow(sir.df) + 1, ] <- newrow
}

Now that we have a data frame containing our SIR results (sir.df), we can display them to see what the values look like:

> head(sir.df)
  timestep  S  I  R
1        0 99  1  0
2        1 89 11  0
3        2 80 17  2
4        3 72 22  6
5        4 65 25 10
6        5 58 26 15

And then plot them to view our simulation results using this DTMC SIR Model:

plot(sir.df$timestep,sir.df$S)
points(sir.df$timestep,sir.df$I, col="red")
points(sir.df$timestep,sir.df$R, col="green")

[Figure: DTMC SIR simulation results, showing the S, I, and R counts over time]

It’s also possible to use the markovchain package to identify elements of your system as it evolves over time:

> absorbingStates(mcSIR)
[1] "R"
> transientStates(mcSIR)
[1] "S" "I"
> steadyStates(mcSIR)
     S I R
[1,] 0 0 1

And you can calculate the first timestep that your Markov Chain reaches its steady state (the “time to absorption”), which your plot should corroborate:

> ab.state <- absorbingStates(mcSIR)
> occurs.at <- min(which(sir.df[,ab.state]==max(sir.df[,ab.state])))
> (sir.df[occurs.at,]$timestep)+1
[1] 58

You can use this code to change the various transition probabilities to see what the effects are on the outputs yourself (sensitivity analysis). Also, there are methods you can use to perform uncertainty analysis, e.g. putting confidence intervals around your transition probabilities. We won’t do either of these here, nor will we create a Shiny app to run this simulation, despite the significant temptation.
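If you'd like a starting point for that kind of sensitivity analysis, here's a minimal sketch (my own illustrative choices, not part of the original example): it sweeps the infection probability from 0.05 to 0.30 while holding the recovery probability at 0.2, reuses the initialState vector from above, and records the peak number of Infected people in each run.

library(markovchain)   # already loaded above; repeated so the sketch stands alone

# Illustrative parameter sweep: infection probability varies, recovery stays at 0.2
infection.probs <- seq(0.05, 0.30, by=0.05)
peak.infected <- numeric(length(infection.probs))   # illustrative output metric

for (j in seq_along(infection.probs)) {
    p <- infection.probs[j]
    mc <- new("markovchain", states=c("S","I","R"),
        transitionMatrix=matrix(data=c(1-p, p, 0, 0, 0.8, 0.2, 0, 0, 1),
        byrow=TRUE, nrow=3), name="SIR")
    # state counts at timesteps 0..100, one row per timestep (same math as above)
    counts <- t(sapply(0:100, function(i) as.numeric(initialState * mc ^ i)))
    peak.infected[j] <- max(counts[,2])   # largest Infected count seen in this run
}

plot(infection.probs, peak.infected, type="b",
    xlab="Infection probability", ylab="Peak number infected")

As the infection probability grows while the recovery probability stays put, you should see the peak of the Infected curve climb.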

My Second (R) Shiny App: Sampling Distributions & CLT

Image Credit: Doug Buckley of http://hyperactive.to

I was so excited about my initial foray into Shiny development using jennybc‘s amazing googlesheets package that I stayed up half the night last night (again) working on my second Shiny app: a Shiny-fied version of the function I shared in March to do simulations illustrating sampling distributions and the Central Limit Theorem using many different source distributions. (Note that Cauchy doesn’t play by the rules!) Hope this info is useful to all new Shiny developers.

If the app doesn’t work for you, it’s possible that I’ve exhausted my purchased hours at http://shinyapps.io — no idea how much traffic this post might generate. So if that happens to you, please try getting Shiny to work locally, cutting and pasting the code below into server.R and ui.R files, and then launching the simulation from your R console.

Here are some important lessons I learned on my 2nd attempt at Shiny development:

  • Creating a container (rv) for the server-side values that would change as a result of inputs from the UI was important. That container was then available to the portions of my Shiny code that prepared data for the UI, e.g. output$plotSample.
  • Because I dispatch on short one-character codes in my switch statement, using radio buttons in the Shiny UI was really useful: I can map the label on each radio button to one character that gets passed into the data processing on the server side.
  • I was able to modify the CSS for the page by adding a couple lines to mainPanel() in my UI.
  • Although it was not mentally easy (for me) to convert from an R function to a Shiny app when initially presented with the problem, in retrospect, it was indeed straightforward. All I had to do was take the original function, split out the data processing from the presentation (par & hist commands), put the data processing code on the server side and the presentation code on the UI side, change the variable names on the server side so that they had the input$ prefix, and make sure the variable names were consistent between server and UI.
  • I originally tried writing one app.R file, but http://shinyapps.io did not seem to like that, so I put all the code that was not UI into the server side and tried deploying with server.R and ui.R, which worked. I don’t know what I did wrong.
  • If you want to publish to http://shinyapps.io, the directory name that hosts your files must be at least 4 characters long or you will get a “validation error” when you attempt to deployApp().
## Nicole's Second Shiny Demo App
## N. Radziwill, 12/6/2015, http://qualityandinnovation.com
## Used code from http://github.com/homerhanumat as a base
###########################################################
## ui
###########################################################

ui <- fluidPage(
titlePanel('Sampling Distributions and the Central Limit Theorem'),
sidebarPanel(
helpText('Choose your source distribution and number of items, n, in each
sample. 10000 replications will be run when you click "Sample Now".'),
h6(a("Read an article about this simulation at http://www.r-bloggers.com",
href="http://www.r-bloggers.com/sampling-distributions-and-central-limit-theorem-in-r/", target="_blank")),
sliderInput(inputId="n","Sample Size n",value=30,min=5,max=100,step=2),
radioButtons("src.dist", "Distribution type:",
c("Exponential: Param1 = mean, Param2 = not used" = "E",
"Normal: Param1 = mean, Param2 = sd" = "N",
"Uniform: Param1 = min, Param2 = max" = "U",
"Poisson: Param1 = lambda, Param2 = not used" = "P",
"Cauchy: Param1 = location, Param2 = scale" = "C",
"Binomial: Param1 = size, Param2 = success prob" = "B",
"Gamma: Param1 = shape, Param2 = scale" = "G",
"Chi Square: Param1 = df, Param2 = ncp" = "X",
"Student t: Param1 = df, Param2 = not used" = "T")),
numericInput("param1","Parameter 1:",10),
numericInput("param2","Parameter 2:",2),
actionButton("takeSample","Sample Now")
), # end sidebarPanel
mainPanel(
# Use CSS to control the background color of the entire page
tags$head(
tags$style("body {background-color: #9999aa; }")
),
plotOutput("plotSample")
) # end mainPanel
) # end UI

##############################################################
## server
##############################################################

library(shiny)
r <- 10000 # Number of replications... must be ->inf for sampling distribution!

palette(c("#E41A1C", "#377EB8", "#4DAF4A", "#984EA3",
"#FF7F00", "#FFFF33", "#A65628", "#F781BF", "#999999"))

server <- function(input, output) {
set.seed(as.numeric(Sys.time()))

# Create a reactive container for the data structures that the simulation
# will produce. The rv$variables will be available to the sections of your
# server code that prepare output for the UI, e.g. output$plotSample
rv <- reactiveValues(sample = NULL,
all.sums = NULL,
all.means = NULL,
all.vars = NULL)

# Note: We are giving observeEvent all the output connected to the UI actionButton.
# We can refer to input variables from our UI as input$variablename
observeEvent(input$takeSample,
{
my.samples <- switch(input$src.dist,
"E" = matrix(rexp(input$n*r,input$param1),r),
"N" = matrix(rnorm(input$n*r,input$param1,input$param2),r),
"U" = matrix(runif(input$n*r,input$param1,input$param2),r),
"P" = matrix(rpois(input$n*r,input$param1),r),
"C" = matrix(rcauchy(input$n*r,input$param1,input$param2),r),
"B" = matrix(rbinom(input$n*r,input$param1,input$param2),r),
"G" = matrix(rgamma(input$n*r,input$param1,input$param2),r),
"X" = matrix(rchisq(input$n*r,input$param1),r),
"T" = matrix(rt(input$n*r,input$param1),r))

# It was very important to make sure that rv contained numeric values for plotting:
rv$sample <- as.numeric(my.samples[1,])
rv$all.sums <- as.numeric(apply(my.samples,1,sum))
rv$all.means <- as.numeric(apply(my.samples,1,mean))
rv$all.vars <- as.numeric(apply(my.samples,1,var))
}
)

output$plotSample <- renderPlot({
# Plot only when user input is submitted by clicking "Sample Now"
if (input$takeSample) {
# Create a 2x2 plot area & leave a big space (5) at the top for title
par(mfrow=c(2,2), oma=c(0,0,5,0))
hist(rv$sample, main="Distribution of One Sample",
ylab="Frequency",col=1)
hist(rv$all.sums, main="Sampling Distribution of the Sum",
ylab="Frequency",col=2)
hist(rv$all.means, main="Sampling Distribution of the Mean",
ylab="Frequency",col=3)
hist(rv$all.vars, main="Sampling Distribution of the Variance",
ylab="Frequency",col=4)
mtext("Simulation Results", outer=TRUE, cex=3)
}
}, height=660, width=900) # end plotSample

} # end server

Simulation for Data Science With R

Image Credit: Doug Buckley of http://hyperactive.to

Hey everyone! I just wanted to give you the heads up on a book project that I’ve been working on (which should be available by Spring 2016). It’s all about using the R programming language to do simulation — which I think is one of the most powerful (and overlooked) tools in data science. Please feel free to email or write comments below if you have any suggestions for material you’d like to have included in it!

Originally, this project was supposed to be a secret… I’ve been working on it for about two years now, along with two other writing projects, and was approached in June by a traditional publishing company (who I won’t mention by name) who wanted to brainstorm with me about possibly publishing and distributing my next book. After we discussed the intersection of their wants and my needs, I prepared a full outline for them, and they came up with a work schedule and sent me a contract. While I was reading the contract, I got cold feet. It was the part about giving up “all moral rights” to my work, which sounds really frightening (and is not something I have to do under creative commons licensing, which I prefer). I shared the contract with a few colleagues and a lawyer, hoping that they’d say don’t worry… it sounds a lot worse than it really is. But the response I got was it sounds pretty much like it is.

While deliberating the past two weeks, I’ve been moving around a lot and haven’t been in touch with the publisher. I got an email this morning asking for my immediate decision on the matter (paraphrased, because there’s a legal disclaimer at the bottom of their emails that says “this information may be privileged” and I don’t want to violate any laws):

If we don’t hear from you, unfortunately we’ll be moving forward with this project. Do you still want to be on board?

The answer is YEAH – of COURSE I’m “on board” with my own project. But this really made me question the value of a traditional publisher over an indie publisher, or even self-publishing. And if they’re moving forward anyway, does that mean they take my outline (and supporting information about what I’m planning for each chapter) and just have someone else write to it? That doesn’t sound very nice. Since all the content on my blog is copyrighted by ME, I’m sharing the entire contents of what I sent to them on July 6th to establish the copyright on my outline in a public forum.

So if you see this chapter structure in someone ELSE’S book… you know what happened. The publisher came up with the idea for the main title (“Simulation for Data Science With R”) so I might publish under a different title that still has the words Simulation and R in it.

I may still publish with them, but I’ll make that decision after I have the full manuscript in place in a couple months. And after I have the chance to reflect more on what’s best for everyone. What do you think is the best route forward?


 

Simulation for Data Science With R

Effective Data-Driven Decision Making for Business Analysis by Nicole M. Radziwill

Audience

Simulation is an essential (yet often overlooked) tool in data science – an interdisciplinary approach to problem-solving that leverages computer science, statistics, and domain expertise. This easy-to-understand introductory text for new and intermediate-level programmers, data scientists, and business analysts surveys five different simulation techniques (Monte Carlo, Discrete Event Simulation, System Dynamics, Agent-Based Modeling, and Resampling). The book focuses on practical and illustrative examples using the R Statistical Software, presented within the context of structured methodologies for problem solving (such as DMAIC and DMADV) that will enable you to more easily use simulation to make effective data-driven decisions. Readers should have exposure to basic concepts in programming but can be new to the R Statistical Software.

Mission

This book helps its readers 1) formulate research questions that simulation can help solve, 2) choose an appropriate problem-solving methodology, 3) choose one or more simulation techniques to help solve that problem,  4) perform basic simulations using the R Statistical Software, and 5) present results and conclusions clearly and effectively.

Objectives and achievements

The reader will:

  • Learn about essential and foundational concepts in modeling and simulation
  • Determine whether a simulation project is also a data science project
  • Choose an appropriate problem-solving methodology for effective data-driven decision making
  • Select suitable simulation techniques to provide insights about a given problem
  • Build and interpret the results from basic simulations using the R Statistical Software

SECTION I: BASIC CONCEPTS

  1. Introduction to Simulation for Data Science
  2. Foundations for Decision-Making
  3. SECRET NEW CHAPTER THAT YOU WILL BE REALLY EXCITED ABOUT

SECTION II: STOCHASTIC PROCESSES

  1. Variation and Random Variable Generation
  2. Distribution Fitting
  3. Data Generating Processes

SECTION III: SIMULATION TECHNIQUES

  1. Monte Carlo Simulation
  2. Discrete Event Simulation
  3. System Dynamics
  4. Agent-Based Modeling
  5. Resampling Methods
  6. SECRET NEW CHAPTER THAT YOU WILL BE REALLY EXCITED ABOUT

SECTION IV: CASE STUDIES

  1. Case Study 1: Possibly modeling opinion dynamics… specific example still TBD
  2. Case Study 2: A Really Practical Application of Simulation (especially for women)

Chapter 1: Introduction to Simulation for Data Science – 35 pages

Description

This chapter explains the role of simulation in data science, and provides the context for understanding the differences between simulation techniques and their philosophical underpinnings.

Level

BASIC

Topics covered

Variation and Data-Driven Decision Making

What are Complex Systems?

What are Complex Dynamical Systems? What is systems thinking? Why is a systems perspective critical for data-driven decision making? Where do we encounter complex  systems in business or day-to-day life?

What is Data Science?

A Taxonomy of Data Science. The Data Science Venn Diagram. What are the roles of modeling and simulation in data science projects? “Is it a Data Science Project?” — a Litmus Test. How modeling and simulation align with data science.

What is a Model?

Conceptual Models. Equations. Deterministic Models, Stochastic Models. Endogenous and Exogenous Variables.

What is Simulation?

Types of Simulation: Static vs. Dynamic, Stochastic vs. Deterministic, Discrete vs. Continuous, Terminating and Non-Terminating (Steady State). Philosophical Principles: Holistic vs. Reductionist, Kadanoff’s Universality, Parsimony, Sensitivity to Initial Conditions

Why Use Simulation?

Simulation and Big Data

Choosing the Right Simulation Technique

Skills learned

The reader will be able to:

  • Distinguish a model from a simulation
  • Explain how simulation can provide a valuable perspective in data-driven decision making
  • Understand how simulation fits into the taxonomy of data science
  • Determine whether a simulation project is also a data science project
  • Determine which simulation technique to apply to various kinds of real-world problems

Chapter 2: Foundations for Decision Making – 25 pages

Description

In this chapter, the reader will learn how to plan and structure a simulation project to aid in the decision-making process as well as the presentation of results. The social context of data science will be explained, emphasizing the growing importance of collaborative data and information sharing.

Level

BASIC

Topics covered

The Social Context of Data Science

Ethics and Provenance. Data Curation. Replicability, Reproducibility, and Open Science. Open, interoperable frameworks for collaborative data and information sharing. Problem-Centric Habits of Mind.

Selecting Key Performance Indicators (KPIs)

Determining the Number of Replications

Methodologies for Simulation Projects

A General Problem-Solving Approach

DMAIC

DMADV

Root Cause Analysis (RCA)

PDSA

Verification and Validation Techniques

Output Analysis

Skills learned

The reader will be able to:

  • Plan a simulation study that is supported by effective and meaningful metadata
  • Select an appropriate methodology to guide the simulation project
  • Choose activities to ensure that verification and validation requirements are met
  • Construct confidence intervals for reporting simulation output

Chapter 3: Variability and Random Variate Generation – 25 pages

Description

Simulation is powerful because it provides a way to closely examine the random behavior in systems that arises due to interdependencies and variability. This requires being able to generate random numbers and random variates that come from populations with known statistical characteristics. This chapter describes how random numbers and random variates are generated, and shows how they are applied to perform simple simulations.

Level

MEDIUM

Topics covered

Variability in Stochastic Processes

Why Generate Random Variables?

Pseudorandom Number Generation

Linear Congruential Generators

Inverse Transformation Method

Using sample for Discrete Distributions

Is this Sequence Random? Tests for Randomness

Autocorrelation, Frequency, Runs Tests. Using the randtests package

Tests for homogeneity

Simple Simulations with Random Numbers

 

Skills learned

The reader will be able to:

  • Generate pseudorandom numbers that are uniformly distributed
  • Use random numbers to generate random variates from a target distribution
  • Perform simple simulations using streams of random numbers

Chapter 4: Data Generating Processes – 30 pages

Description

To execute a simulation, you must be able to generate random variates that represent the physical process you are trying to emulate. In this chapter, we cover several common statistical distributions that can be used to represent real physical processes, and explain which physical processes are often modeled using those distributions.

Level

MEDIUM

Topics covered

What is a Data Generating Process?

Continuous, Discrete, and Multivariate Distributions

Discrete Distributions

Binomial Distribution

Geometric Distribution

Hypergeometric Distribution

Poisson Distribution

Continuous Distributions

Exponential Distribution

F Distribution

Lognormal Distribution

Normal Distribution

Student’s t Distribution

Uniform Distribution

Weibull Distribution

Chi2 Distribution

Stochastic Processes

Markov. Poisson. Gaussian, Bernoulli. Brownian Motion. Random Walk.

Stationary and Autoregressive Processes.

 

Skills learned

The reader will be able to:

  • Understand the characteristics of several common discrete and continuous data generating processes
  • Use those distributions to generate streams of random variates
  • Describe several common types of stochastic processes

Chapter 5: Distribution Fitting – 30 pages

Description

An effective simulation is driven by data generating processes that accurately reflect real physical populations. This chapter shows how to use a sample of data to determine which statistical distribution best represents the real population. The resulting distribution is used to generate random variates for the simulation.

Level

MEDIUM

Topics covered

Why is Distribution Fitting Essential?

Techniques for Distribution Fitting

Shapiro-Wilk Test for Normality

Anderson-Darling Test

Lilliefors Test

Kolmogorov-Smirnov Test

Chi2 Goodness of Fit Test

Other Goodness Of Fit Tests

Transforming Your Data

When There’s No Data, Use Interviews

Skills learned

The reader will be able to:

  • Use a sample of real data to determine which data generating process is required in a simulation
  • Transform data to find a more effective data generating process
  • Estimate appropriate distributions when samples of real data are not available

Chapter 6: Monte Carlo Simulation – 30 pages

Description

This chapter explains how to set up and execute simple Monte Carlo simulations, using data generating processes to represent random inputs.

Level

ADVANCED

Topics covered

Anatomy of a Monte Carlo Project

The Many Flavors of Monte Carlo

The Hit-or-Miss Method

Example: Estimating Pi

Monte Carlo Integration

Example: Numerical Integration of y = x^2

Estimating Variables

Monte Carlo Confidence Intervals

Example: Projecting Profits

Sensitivity Analysis

Example: Projecting Variability of Profits

Example: Projecting Yield of a Process

Markov Chain Monte Carlo

Skills learned

The reader will be able to:

  • Plan and execute a Monte Carlo simulation in R
  • Construct confidence intervals using the Monte Carlo method
  • Determine the sensitivity of process outputs and interpret the results

Chapter 7: Discrete Event Simulation – 30 pages

Description

What is this chapter about?

Level

ADVANCED

Topics covered

Anatomy of a DES Project

Entities, Locations, Resources and Events

System Performance Metrics

Queuing Models and Kendall’s Notation

The Event Calendar

Manual Event Calendar Generation

Example: An M/M/1 system in R

Using the queueing package

Using the simmer package

Arrival-Counting Processes with the NHPoisson Package

Survival Analysis with the survival Package

Example: When Will the Bagels Run Out?

Skills learned

The reader will be able to:

  • Plan and execute discrete event simulation in R
  • Choose an appropriate model for a queueing problem
  • Manually generate an event calendar to verify simulation results
  • Use arrival counting processes for practical problem-solving
  • Execute a survival analysis in R and interpret the results

Chapter 8: System Dynamics – 30 pages

Description

This chapter presents system dynamics, a powerful technique for characterizing the effects of multiple nested feedback loops in a dynamical system. This technique helps uncover the large-scale patterns in a complex system where interdependencies and variation are critical.

Level

ADVANCED

Topics covered

Anatomy of a SD Project

The Law of Unintended Consequences and Policy Resistance

Introduction to Differential Equations

Causal Loop Diagrams (CLDs)

Stock and Flow Diagrams (SFDs)

Using the deSolve Package

Example: Lotka-Volterra Equations

Dynamic Archetypes

Linear Growth

Exponential Growth and Collapse

S-Shaped Growth

S-Shaped Growth with Overshoot

Overshoot and Collapse

Delays and Oscillations

Using the stellaR and simecol Packages

Skills learned

The reader will be able to:

  • Plan and execute a system dynamics project
  • Create causal loop diagrams and stock-and-flow diagrams
  • Set up simple systems of differential equations and solve them with deSolve in R
  • Predict the evolution of stocks using dynamic archetypes in CLDs
  • Convert STELLA models to R

Chapter 9: Agent-Based Modeling – 25 pages

Description

Agent-Based Modeling (ABM) provides a unique perspective on simulation, illuminating the emergent behavior of the whole system by simply characterizing the rules by which each participant in the system operates. This chapter provides an overview of ABM, compares and contrasts it with the other simulation techniques, and demonstrates how to set up a simulation using an ABM in R.

Level

ADVANCED

Topics covered

Anatomy of an ABM Project

Emergent Behavior

PAGE (Percepts, Actions, Goals, and Environment)

Turtles and Patches

Using the RNetLogo package

Skills learned

The reader will be able to:

  • Plan and execute an ABM project in R
  • Create specifications for the ABM using PAGE

Chapter 10: Resampling – 25 pages

Description

Resampling methods are related to Monte Carlo simulation, but serve a different purpose: to help us characterize a data generating process or make inferences about the population our data came from when all we have is a small sample. In this chapter, resampling methods (and some practical problems that use them) are explained.

Level

MEDIUM

Topics covered

Anatomy of a Resampling Project

Bootstrapping

Jackknifing

Permutation Tests

Skills learned

The reader will be able to:

  • Plan and execute a resampling project in R
  • Understand how to select and use a resampling technique for real data

Chapter 11: Comparing the Simulation Techniques – 15 pages

Description

In this chapter, the simulation techniques will be compared and contrasted in terms of their strengths, weaknesses, biases, and computational complexity.

Level

ADVANCED

Topics covered

TBD – at least two simulation approaches will be applied

Skills learned

The reader will learn how to:

  • Think about a simulation study holistically
  • Select an appropriate combination of techniques for a real simulation study

Logistic Growth, S Curves, Bifurcations, and Lyapunov Exponents in R

If you’ve ever wondered how logistic population growth (the Verhulst model), S curves, the logistic map, bifurcation diagrams, sensitive dependence on initial conditions, “orbits”, deterministic chaos, and Lyapunov exponents are related to one another… this post explains it in just 10 steps, each with some code in R so you can explore it all yourself. I’ve included some code written by other people who have explored this problem (cited below) as portions of my own code.

It all starts with a hypothesized population… and a process where the size of the population changes over time. We want to understand how (and under what conditions) those changes occur, so we choose a model that characterizes population changes: the logistic growth model. It’s been used in biology, ecology, econometrics, marketing, and other areas.


1. The logistic growth model describes how the size of a population (N) changes over time (t), based on some maximum population growth rate (r). There is a limiting factor called the carrying capacity (K) which represents the total population that the environment could support, based on the amount of available resources. dN/dt is the rate of change of the population over time.

dN/dt = r * N * (1 - N/K)


2. You can simplify the logistic growth model by defining a new variable x to represent the portion of the population that’s alive, compared to the total population that the environment could support (and keep alive). So with x = N/K, you get a new differential equation in terms of x. Now we are looking at the rate of change of the population fraction over time. Once x = N/K = 1, the environment can’t support any more members in the population:

dx/dt = r * x * (1 - x)


3. You can solve this equation by integration! Then, you’ll have an expression that you can use to calculate x (which is still the population fraction) for any time t. This is called the sigmoid or (more commonly) the S Curve. To compute x at any time t, all we need to know is how big the population fraction was when we started looking at it (x0) and the maximum growth rate r:

x(t) = x0 * exp(r*t) / (1 - x0 + x0 * exp(r*t))
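Just to convince yourself that this expression really does trace out an S shape, here's a minimal sketch (the values x0 = 0.01 and r = 1 are my own illustrative choices) that evaluates the closed-form solution and plots it:

# Closed-form S curve: x(t) = x0*exp(r*t) / (1 - x0 + x0*exp(r*t))
x0 <- 0.01                  # illustrative starting population fraction
r <- 1                      # illustrative maximum growth rate
t <- seq(0, 15, by=0.1)
x <- x0*exp(r*t) / (1 - x0 + x0*exp(r*t))
plot(t, x, type="l", xlab="time t", ylab="population fraction x(t)")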


4. The equation for the S Curve is deterministic and continuous. If we want to solve it numerically, we have to discretize it by chopping up that continuous axis that contains time into little tiny pieces of time. That’s what produces the difference equation that we recognize as the logistic map. It’s a map because it “maps” each value of the sequence onto the next value in the sequence. As long as you know one of those values for x (indicated by the subscript n), you’ll be able to figure out the next value of x (indicated by the subscript n+1). The value x[n] is the population fraction of the current generation, and the value x[n+1] is the population fraction for the next generation. This makes the logistic map a Markov chain. If you plot x[n] on the x axis and x[n+1] on the y axis, this expression will produce the familiar upside down parabola:

x[n+1] = r * x[n] * (1 - x[n])
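(To see that parabola for yourself, a one-line sketch will do; the choice of r = 3.9 here is just for illustration:)

curve(3.9*x*(1-x), from=0, to=1, xlab="x[n]", ylab="x[n+1]")   # illustrative r = 3.9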


5. The logistic map behaves differently depending upon the maximum growth rate (r) that describes your population. This parameter is also called fecundity and represents how rabbit-like your population is reproducing. The higher the r, the more productive, like rabbits (although I’m not sure precisely which r you’d choose if you were studying rabbits). Here is an R function that you can use to generate the last M iterations from a sequence of N total, developed and described at Mage’s Blog:

logistic.map <- function(r, x, N, M) {
 ## from http://www.magesblog.com/2012/03/logistic-map-feigenbaum-diagram.html
 ## r: bifurcation parameter
 ## x: initial value, something greater than 0 and less than 1
 ## N: number of iterations (total)
 ## M: number of iteration points to be returned
   z <- 1:N
   z[1] <- x
   for(i in c(1:(N-1))){
     z[i+1] <- r *z[i] * (1 - z[i])
   }
   ## Return the last M iterations 
   z[c((N-M):N)]
}

6. The logistic map has many interesting properties, but here are two in particular (the first in Step 6 and the second in Step 7). First, for several values you can choose for r, the chain converges to a single value (or fixed point) when n gets really big. For other values of r, the value of x will eventually bounce between two values instead of converging (a limit cycle of period 2). For yet other values of r, the value of x will eventually bounce between four values instead of converging. Sometimes, x will bounce around a near limitless collection of values (a condition called deterministic chaos). The eventual value (or the collection of values the chain bounces between) is called an orbit. For example, when the growth rate r is 2.6, the logistic map rapidly converges to an orbit of about 0.615:

plot(logistic.map(2.6,.01,20,20), type="l")

[Figure: the logistic map for r=2.6 converging to a fixed point near 0.615]


7. Sometimes, it can be nice to take a look at how the values bounce around, and where they eventually converge (or not). To do this, we use cobweb diagrams (which are also sometimes called web diagrams). I used a function that I found at http://bayesianbiologist.com to plot the behavior of the orbits for r=2.6, r=3.3, and r=3.9:

logistic.cobweb <- function(r) {
  # code adapted from http://bayesianbiologist.com/tag/cobweb-plot/
  N <- 100   # number of points used to draw the parabola
  x <- seq(0, 1, length=N)
  x_next <- sapply(2:N, function(i) r*x[i]*(1-x[i]))
  plot(x[2:N], x_next, type="l", xlim=c(0,1), ylim=c(0,1), main=paste("r=",r),
       xlab=expression(x[t]), ylab=expression(x[t+1]), col="red", lwd=2)
  abline(0,1)

  # start at a random spot on the x-axis and start with a vertical line:
  start <- runif(1,0,1)
  vert <- FALSE
  lines(x=c(start,start), y=c(0,r*start*(1-start)))
  for(i in 1:(2*N)) {
    if(vert) {
      lines(x=c(start,start), y=c(start,r*start*(1-start)))
      vert <- FALSE
    } else {
      lines(x=c(start, r*start*(1-start)), y=c(r*start*(1-start), r*start*(1-start)))
      vert <- TRUE
      start <- r*start*(1-start)
    }
  }
}

par(mfrow=c(1,3))
logistic.cobweb(2.6)
logistic.cobweb(3.3)
logistic.cobweb(3.9)

[Figure: cobweb diagrams for r=2.6, r=3.3, and r=3.9]


8. (Remember to dev.off() before you continue.) Second, for some values of r, the logistic map shows sensitive dependence on initial conditions. For example, let’s see what happens for two different growth rates (r=3 and r=3.9) when we start one chain with an x[n] of 0.5 COLORED BLACK, and another one with an x[n] of 0.5001 COLORED RED. It’s a small, small difference that can lead to big, BIG variations in the orbits. In the r=3 case, the chain produced by the logistic map with x[n] of 0.5 (in black) is IDENTICAL to the chain produced by the logistic map with x[n] of 0.5001 (in red). That’s why you can’t see the black… the values are the same! But for the r=3.9 case, the chain produced by the logistic map with x[n] of 0.5 (in black) RAPIDLY DIVERGES from the chain produced by the logistic map with x[n] of 0.5001 (in red). They are very different, despite a very tiny difference in initial conditions! The logistic map for r=3.9 shows a very sensitive dependence on initial conditions.

par(mfrow=c(2,1))
first <- logistic.map(3,.5,120,100)
second <- logistic.map(3,.5001,120,100)
plot(1:length(first),first,type="l",main="r=3 is not sensitive to initial conditions")
lines(1:length(second),second,type="l",col="red")
first <- logistic.map(3.9,.5,120,100)
second <- logistic.map(3.9,.5001,120,100)
plot(1:length(first),first,type="l",main="but r=3.9 is EXTREMELY sensitive")
lines(1:length(second),second,type="l",col="red")

[Figure: chains for r=3 (top, insensitive to initial conditions) and r=3.9 (bottom, extremely sensitive)]


9. For any chain, we can determine just how sensitive the logistic map is to initial conditions by looking at the Lyapunov exponent. Very simplistically, if the Lyapunov exponent is negative, the chain will converge to one or more fixed points for that value of r. If the Lyapunov exponent is positive, the chain will demonstrate deterministic chaos for that value of r. If the Lyapunov exponent is zero, there is a bifurcation: a 1-cycle is doubling to a 2-cycle, a 2-cycle is doubling to a 4-cycle, and so forth. The top chart shows an approximation of the Lyapunov exponent based on the first 500 iterations (ideally, you’d use an infinite number, but that would eat up too much computing time), and the bottom chart shows a bifurcation diagram. You’ll notice that the Lyapunov exponents are zero where a bifurcation occurs. To interpret the bifurcation diagram, just remember that each vertical slice through it represents the results of ONE COMPLETELY CONVERGED CHAIN from the logistic map. So it shows the results from many, many, many completely converged chains – and provides an excellent way for us to look at the behavior of MANY different types of populations in just one chart:

x <- seq(0, 4, by=0.01)        # the values of r to examine
n <- length(x)
lya <- numeric(n)
XI <- matrix(NA, nrow=n, ncol=500)
for (i in 1:n) {
  XI[i,] <- logistic.map(x[i], .01, 500, 500)   # last 500 iterates for each r
}
for (i in 1:n) {
  # average of log|f'(x)| = log|r*(1 - 2*x)| over the orbit
  lya[i] <- mean(log(abs(x[i] - (2*x[i]*XI[i,]))))
}
plot(x,lya,ylim=c(-4,1),xlim=c(0,4),type="l",main="Lyapunov Exponents for Logistic Map")
abline(h=0, lwd=2, col="red")
# next 3 lines from http://www.magesblog.com/2012/03/logistic-map-feigenbaum-diagram.html:
my.r <- seq(0, 4, by=0.003)
Orbit <- sapply(my.r, logistic.map, x=0.1, N=1000, M=300)
r <- sort(rep(my.r, 301))

par(mfrow=c(2,1))
plot(x,lya,ylim=c(-5,1),xlim=c(0,4),type="l",main="Lyapunov Exponents for Logistic Map")
abline(h=0, col="red", lwd=2)
abline(v=3, col="blue", lwd=2)
plot(r, Orbit, pch=".", cex=0.5, main="Bifurcation Diagram for r=0 to r=4 Logistic Maps")
abline(v=3, col="blue", lwd=2)

[Figure: Lyapunov exponents (top) and bifurcation diagram (bottom) for the logistic map]

10. Notice that in the bifurcation diagram, we can easily see that when r is between 0 and 1, the population converges to extinction. This makes sense, because the growth rate is smaller than what’s required to sustain the size of the population. You might like to zoom in, though, and see what the orbits look like for some smaller portions of the diagram. Here’s how you can do it (but be sure to refresh your graphics window with dev.off() before you try it). Try changing the plot character (pch) too, or maybe the size of the characters with cex=0.2 or cex=0.5 in the last line:

# adapted from http://www.magesblog.com/2012/03/logistic-map-feigenbaum-diagram.html:
my.r <- seq(3.5, 4, by=0.003)
Orbit <- sapply(my.r, logistic.map, x=0.1, N=1000, M=300)
multiplier <- length(Orbit)/length(my.r)
r <- sort(rep(my.r, multiplier))
plot(r, Orbit, pch=".")

[Figure: bifurcation diagram zoomed in to r=3.5 through r=4]

That’s it!



Sampling Distributions and Central Limit Theorem in R

The Central Limit Theorem (CLT), and the concept of the sampling distribution, are critical for understanding why statistical inference works. There are at least a handful of problems that require you to invoke the Central Limit Theorem on every ASQ Certified Six Sigma Black Belt (CSSBB) exam. The CLT says that if you take many repeated samples from a population, and calculate the averages or sum of each one, the collection of those averages will be normally distributed… and it doesn’t matter what the shape of the source distribution is! (Caveat: so long as the data comes from a distribution with finite variance… so that means the Cauchy distribution doesn’t count.)

I wrote some R code to help illustrate this principle for my students. This code allows you to choose a sample size (n), a source distribution, and parameters for that source distribution, and generate a plot of the sampling distributions of the mean, sum, and variance. (Note: the sampling distribution for the variance is a Chi-square distribution — if your source distribution is normal!)

sdm.sim <- function(n, src.dist=NULL, param1=NULL, param2=NULL) {
   r <- 10000  # Number of replications/samples - DO NOT ADJUST
   # This produces a matrix of observations with
   # n columns and r rows. Each row is one sample:
   my.samples <- switch(src.dist,
        "E" = matrix(rexp(n*r,param1),r),
        "N" = matrix(rnorm(n*r,param1,param2),r),
        "U" = matrix(runif(n*r,param1,param2),r),
        "P" = matrix(rpois(n*r,param1),r),
        "B" = matrix(rbinom(n*r,param1,param2),r),
        "G" = matrix(rgamma(n*r,param1,param2),r),
        "X" = matrix(rchisq(n*r,param1),r),
        "T" = matrix(rt(n*r,param1),r))
   all.sample.sums <- apply(my.samples,1,sum)
   all.sample.means <- apply(my.samples,1,mean)
   all.sample.vars <- apply(my.samples,1,var)
   par(mfrow=c(2,2))
   hist(my.samples[1,],col="gray",main="Distribution of One Sample")
   hist(all.sample.sums,col="gray",main="Sampling Distribution\nof the Sum")
   hist(all.sample.means,col="gray",main="Sampling Distribution\nof the Mean")
   hist(all.sample.vars,col="gray",main="Sampling Distribution\nof the Variance")
}

There are 8 population distributions to choose from: exponential (E), normal (N), uniform (U), Poisson (P), binomial (B), gamma (G), Chi-Square (X), and the Student’s t distribution (T). Note also that you have to provide either one or two parameters, depending upon what distribution you are selecting. For example, a normal distribution requires that you specify the mean and standard deviation to describe where it’s centered, and how fat or thin it is (that’s two parameters). A Chi-square distribution requires that you specify the degrees of freedom (that’s only one parameter). You can find out exactly what distributions require what parameters by going here: http://en.wikibooks.org/wiki/R_Programming/Probability_Distributions.

Here is an example that draws from an exponential distribution with a rate parameter of 1 (the mean of an exponential is 1/rate, so you specify the number you want in the denominator of the mean):

sdm.sim(50,src.dist="E",param1=1)

The code above produces this sequence of plots:

[Figure: the four sampling distribution plots for the exponential example]

You aren’t allowed to change the number of replications in this simulation because of the nature of the sampling distribution: it’s a theoretical model that describes the distribution of statistics from an infinite number of samples. As a result, if you increase the number of replications, you’ll see the mean of the sampling distribution bounce around until it converges on the mean of the population. This is just an artifact of the simulation process: it’s not a characteristic of the sampling distribution, because to be a sampling distribution, you’ve got to have an infinite number of samples. Watkins et al. have a great description of this effect that all statistics instructors should be aware of. I chose 10,000 for the number of replications because 1) it’s close enough to infinity to ensure that the mean of the sampling distribution is the same as the mean of the population, but 2) it’s far enough away from infinity to not crash your computer, even if you only have 4GB or 8GB of memory.
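If you'd like to see that bouncing for yourself, here's a minimal sketch (my own example, separate from sdm.sim): it draws from an exponential population with rate 1 (so the population mean is 1) and recomputes the average of the sample means for increasing numbers of replications. The exact values will differ from run to run, but the grand mean should settle down near 1 as the number of replications grows.

set.seed(42)                   # illustrative seed, only for reproducibility
n <- 50                        # sample size
reps <- c(100, 1000, 10000)    # increasing numbers of replications
grand.means <- sapply(reps, function(r) {
  samples <- matrix(rexp(n*r, rate=1), r)   # r rows, one sample per row; population mean = 1
  mean(apply(samples, 1, mean))             # average of the r sample means
})
grand.means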

Here are some more examples to try. You can see that as you increase your sample size (n), the shapes of the sampling distributions become more and more normal, and the variance decreases, constraining your estimates of the population parameters more and more.

sdm.sim(10,src.dist="E",1)
sdm.sim(50,src.dist="E",1)
sdm.sim(100,src.dist="E",1)
sdm.sim(10,src.dist="X",14)
sdm.sim(50,src.dist="X",14)
sdm.sim(100,src.dist="X",14)
sdm.sim(10,src.dist="N",param1=20,param2=3)
sdm.sim(50,src.dist="N",param1=20,param2=3)
sdm.sim(100,src.dist="N",param1=20,param2=3)
sdm.sim(10,src.dist="G",param1=5,param2=5)
sdm.sim(50,src.dist="G",param1=5,param2=5)
sdm.sim(100,src.dist="G",param1=5,param2=5)

A Linear Congruential Generator (LCG) in R

In my simulation classes, we talk about how to generate random numbers. One of the techniques we talk about is the Linear Congruential Generator (LCG). Starting with a seed, the LCG produces the first number in the sequence, and then uses that value to generate the second one. The second value is used to generate the third, the third to generate the fourth, and so on. The equation looks like this:

x[i+1] = (a * x[i] + c) mod m

where a is a multiplier, c is a shift, and m is a modulus. (Modulus just means “find me the remainder when you divide the stuff to the left of the mod operator by the stuff to the right”. Fortunately there is an easy way to do this in R: a mod b is expressed as a %% b.) When you use the LCG to generate a stream of random numbers, they will always be between 0 and (m-1), and the sequence will repeat with a period of at most m values (often fewer, if the multiplier and shift are chosen poorly). When you take the sequence of values x and divide it by the modulus m, you get uniformly distributed random numbers between 0 and 1 (but never quite hitting 1). The LCG is a pseudorandom number generator because after a while, the sequence in the stream of numbers will begin to repeat.
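For instance, with the multiplier, shift, and modulus used in the example further down (a = 6, c = 7, m = 23) and a current value of 5, the next value in the stream is just one line of arithmetic:

(6 * 5 + 7) %% 23   # returns 14, which matches the second value in the stream shown below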

I wrote an exam last week with some LCG computations on it, but then I got lazy and didn’t want to do the manual calculations… so I wrote a function in R. Here it is. It returns two different vectors: the collection of numbers between 0 and (m – 1) that the LCG produces, and the collection of uniformly distributed random numbers between 0 and 1 that we get by dividing the first collection by m:

lcg <- function(a,c,m,run.length,seed) {
    x <- rep(0,run.length)
    x[1] <- seed
    for (i in 1:(run.length-1)) {
       x[i+1] <- (a * x[i] + c) %% m
    }
    U <- x/m # scale all of the x's to
             # produce uniformly distributed
             # random numbers between [0,1)
    return(list(x=x,U=U))
}

So for example, if your LCG looks like this:

x[i+1] = (6 * x[i] + 7) mod 23

Then, after loading in the function above, you could choose a seed (say, 5) and generate a stream of 20 random numbers:

> z <- lcg(6,7,23,20,5)
> z
$x
 [1] 5 14 22 1 13 16 11 4 8 9 15 5 14 22 1 13 16 11
[19] 4 8

$U
 [1] 0.21739130 0.60869565 0.95652174 0.04347826 0.56521739
 [6] 0.69565217 0.47826087 0.17391304 0.34782609 0.39130435
[11] 0.65217391 0.21739130 0.60869565 0.95652174 0.04347826
[16] 0.56521739 0.69565217 0.47826087 0.17391304 0.34782609

The values contained within z$x are the numbers that the LCG produces, and the values contained within z$U are the values of z$x scaled to fall between 0 and 1. Both z$x and z$U are uniformly distributed, although the pattern will become more apparent as you generate more and more random numbers in your stream. Here’s what they look like with 10000 numbers in the stream:

> z <- lcg(6,7,23,10000,5)
> par(mfrow=c(1,2))
> hist(z$x,col="red")
> hist(z$U,col="purple")

[Figure: histograms of z$x (left) and z$U (right) for a stream of 10000 values]

But how good is your random number generator? First, you know you want a large (and preferably odd-valued) modulus; that way, the numbers will not repeat so quickly. But there are also considerations dealing with what multiplier and shift you should use. I like checking the quality of a random number generator by plotting how successive values move around a two-dimensional grid: the better the random number generator, the more completely the points fill the space.

You can play around with these plots to find a “good” random number generator:

> z1 <- lcg(6,7,23,2000,5) # "Bad" LCG
> z2 <- lcg(1093,18257,86436,2000,12) # "Good" LCG
> par(mfrow=c(1,2))
> plot( z1$U[1:(length(z1$U)-1)], z1$U[2:length(z1$U)] , main="Bad")
> plot( z2$U[1:(length(z2$U)-1)], z2$U[2:length(z2$U)] , main="Good")

[Figure: plots of successive pairs (U[i], U[i+1]) for the “bad” and “good” LCGs]

Although there are many heuristics you can use to avoid creating a “bad” random number generator, it’s really difficult to design a “good” one. People make careers out of this. I don’t claim to be an expert, I just know how to test to see if a random number generator is suitable for my simulations. (I also really like the handout at http://www.sci.csueastbay.edu/~btrumbo/Stat3401/Hand3401/CongGenIntroB.pdf that explains this graphing approach more, and also talks about how you have to look at patterns in three dimensions as well. When I was searching around for a “good LCG that would give me a nice graph” I found this handout.)
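If you want to peek at those three-dimensional patterns too, here's a minimal sketch of one way to do it (my own illustration, using the lattice package that ships with R): plot successive triples of the stream the same way the two-dimensional plots above used successive pairs.

library(lattice)                        # lattice is a recommended package included with R
z2 <- lcg(1093,18257,86436,2000,12)     # the "good" LCG from above
u <- z2$U
u1 <- u[1:(length(u)-2)]   # U[i]
u2 <- u[2:(length(u)-1)]   # U[i+1]
u3 <- u[3:length(u)]       # U[i+2]
cloud(u3 ~ u1 * u2, pch=".", xlab="U[i]", ylab="U[i+1]", zlab="U[i+2]")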

Warren Buffett on Simulation & Modeling

John Hunter shared some excerpts from Warren Buffett’s 2009 Letter to Shareholders. I particularly liked this one part where he reflects on the outcomes of economic modeling and forecasting:

Investors should be skeptical of history-based models. Constructed by a nerdy-sounding priesthood using esoteric terms such as beta, gamma, sigma and the like, these models tend to look impressive. Too often, though, investors forget to examine the assumptions behind the symbols. Our advice: Beware of geeks bearing formulas.

I’d like to amend this: Beware of geeks bearing formulas who a) can’t tell you what every part of the derivation means, b) don’t know the model’s underlying assumptions, and c) don’t know what “threats to validity” are. (And if you’re the geek in question, be able to explain how your models and forecasts work!!)

Models can be a great way to capture the dynamics of social and technical systems, and simulations can help us explore how these systems will evolve over time – but how those models are initialized, and the simplifying assumptions they use to generate results, are just as important as the answers they propose.