Tag Archives: statistics

Analytic Hierarchy Process (AHP) using preferenceFunction in ahp

Yesterday, I wrote about how to use gluc's new ahp package on a simple Tom-Dick-Harry one-level decision-making problem using the Analytic Hierarchy Process (AHP). One of the cool things about that package is that in addition to specifying the pairwise comparisons directly using Saaty's scale (below, from https://kristalaace2014.wordpress.com/2014/05/14/w12_al_vendor-evaluation/)…

[Image: Saaty's pairwise comparison scale]

…you can also describe each of the Alternatives in terms of descriptive variables which you can use inside a function to make the pairwise comparisons automatically. This is VERY helpful if you have lots of criteria, subcriteria, or alternatives to evaluate!! For example, I used preferenceFunction to compare 55 alternatives using 6 criteria and 4 subcriteria, and was very easily able to create functions to represent my judgments. This was much easier than manually entering all the comparisons.

This post shows HOW I replaced some of my manual comparisons with automated comparisons using preferenceFunction. (The full YAML file is included at the bottom of this post for you to use if you want to run this example yourself.) First, recall that the YAML file starts with specifying the alternatives that you are trying to choose from (at the bottom level of the decision hierarchy) and some variables that characterize those alternatives. I used the descriptions in the problem statement to come up with some assessments between 1=not great and 10=great:

#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1= not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################

Here is a snippet from my original YAML file specifying my AHP problem manually:

  children: 
    Experience:
      preferences:
        - [Tom, Dick, 1/4]
        - [Tom, Harry, 4]
        - [Dick, Harry, 9]
      children: *alternatives
    Education:
      preferences:
        - [Tom, Dick, 3]
        - [Tom, Harry, 1/5]
        - [Dick, Harry, 1/7]
      children: *alternatives

And here is what I changed that snippet to, so that it would do my pairwise comparisons automatically. The functions are written in standard R (fortunately), and each function has access to a1 and a2 (the two alternatives being compared). Recursion is supported, which makes this capability particularly useful. I tried to write a function that used two of the characteristics at once (a1$age and a1$experience), but this didn’t seem to work; I’m not sure whether the package supports it or not. Here are my comparisons rewritten as functions:

  children: 
    Experience:
          preferenceFunction: >
            ExperiencePreference <- function(a1, a2) {
              if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
              ratio <- a1$experience / a2$experience
              if (ratio < 1.05) return (1)
              if (ratio < 1.2) return (2)
              if (ratio < 1.5) return (3)
              if (ratio < 1.8) return (4)
              if (ratio < 2.1) return (5)
              return (6)
            }
          children: *alternatives
    Education:
          preferenceFunction: >
            EducPreference <- function(a1, a2) {
              if (a1$education < a2$education) return (1/EducPreference(a2, a1))
              ratio <- a1$education / a2$education
              if (ratio < 1.05) return (1)
              if (ratio < 1.15) return (2)
              if (ratio < 1.25) return (3)
              if (ratio < 1.35) return (4)
              if (ratio < 1.55) return (5)
              return (5)
            }
          children: *alternatives
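
Because these functions are plain R, you can sanity-check them outside of ahp before wiring them into the YAML. The snippet below is just my own testing trick, not part of the package workflow: I am assuming a1 and a2 behave like lists of each alternative's variables (which is how they appear to be used), so I mimic them with lists built from the values in the Alternatives section:

# Sanity check of a preference function outside of ahp.
# a1 and a2 are mimicked here with plain lists of each alternative's variables
# (an assumption for testing purposes only); values come from the YAML above.
ExperiencePreference <- function(a1, a2) {
  if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
  ratio <- a1$experience / a2$experience
  if (ratio < 1.05) return (1)
  if (ratio < 1.2) return (2)
  if (ratio < 1.5) return (3)
  if (ratio < 1.8) return (4)
  if (ratio < 2.1) return (5)
  return (6)
}

tom   <- list(age = 50, experience = 7,  education = 4, leadership = 10)
dick  <- list(age = 60, experience = 10, education = 6, leadership = 6)
harry <- list(age = 30, experience = 5,  education = 8, leadership = 6)

ExperiencePreference(tom, dick)    # ratio 10/7 = 1.43, so Dick is preferred 3:1 -> 1/3
ExperiencePreference(tom, harry)   # ratio 7/5  = 1.40 -> 3
ExperiencePreference(dick, harry)  # ratio 10/5 = 2.00 -> 5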

To run the AHP with functions in R, I used this code (I am including the part that gets the ahp package, in case you have not done that yet). BE CAREFUL and make sure, like in FORTRAN, that you line things up so that the words START in the appropriate columns. For example, the “p” in preferenceFunction MUST be immediately below the 7th character of your criterion’s variable name.

devtools::install_github("gluc/ahp", build_vignettes = TRUE)
install.packages("data.tree")

library(ahp)
library(data.tree)

setwd("C:/AHP/artifacts")
nofxnAhp <- LoadFile("tomdickharry.txt")
Calculate(nofxnAhp)
fxnAhp <- LoadFile("tomdickharry-fxns.txt")
Calculate(fxnAhp)

print(nofxnAhp, "weight")
print(fxnAhp, "weight")

You can see that the weights are approximately the same, which suggests the functions do a good job of representing how I actually used the variables attached to the Alternatives when making my pairwise comparisons. The results still show Dick as the best choice, although there is some inconsistency in the judgments for Experience that we should examine further. (I have not examined this case to see whether rank reversal could be happening.)

> print(nofxnAhp, "weight")
                         levelName     weight
1  Choose the Most Suitable Leader 1.00000000
2   ¦--Experience                  0.54756924
3   ¦   ¦--Tom                     0.21716561
4   ¦   ¦--Dick                    0.71706504
5   ¦   °--Harry                   0.06576935
6   ¦--Education                   0.12655528
7   ¦   ¦--Tom                     0.18839410
8   ¦   ¦--Dick                    0.08096123
9   ¦   °--Harry                   0.73064467
10  ¦--Charisma                    0.26994992
11  ¦   ¦--Tom                     0.74286662
12  ¦   ¦--Dick                    0.19388163
13  ¦   °--Harry                   0.06325174
14  °--Age                         0.05592555
15      ¦--Tom                     0.26543334
16      ¦--Dick                    0.67162545
17      °--Harry                   0.06294121

> print(fxnAhp, "weight")
                         levelName     weight
1  Choose the Most Suitable Leader 1.00000000
2   ¦--Experience                  0.54756924
3   ¦   ¦--Tom                     0.25828499
4   ¦   ¦--Dick                    0.63698557
5   ¦   °--Harry                   0.10472943
6   ¦--Education                   0.12655528
7   ¦   ¦--Tom                     0.08273483
8   ¦   ¦--Dick                    0.26059839
9   ¦   °--Harry                   0.65666678
10  ¦--Charisma                    0.26994992
11  ¦   ¦--Tom                     0.74286662
12  ¦   ¦--Dick                    0.19388163
13  ¦   °--Harry                   0.06325174
14  °--Age                         0.05592555
15      ¦--Tom                     0.26543334
16      ¦--Dick                    0.67162545
17      °--Harry                   0.06294121

> ShowTable(fxnAhp)

[Image: ShowTable output for the preferenceFunction version of the Tom-Dick-Harry AHP]
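
Out of curiosity, you can also reproduce the Experience sub-weights in the fxnAhp printout by hand. ExperiencePreference() turns the experience scores (7, 10, 5) into the judgments Tom vs. Dick = 1/3, Tom vs. Harry = 3, and Dick vs. Harry = 5, and deriving priorities from that matrix via the principal eigenvector (one standard AHP approach, which may differ from the package's internals) gives roughly 0.258, 0.637, and 0.105, matching the printout above:

# Pairwise matrix implied by ExperiencePreference() for Tom, Dick, and Harry
A <- matrix(c(  1, 1/3, 3,
                3,   1, 5,
              1/3, 1/5, 1),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("Tom", "Dick", "Harry"), c("Tom", "Dick", "Harry")))

ev <- Re(eigen(A)$vectors[, 1])   # principal eigenvector
round(ev / sum(ev), 3)            # approximately 0.258, 0.637, 0.105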

Here is the full YAML file for the “with preferenceFunction” case.

#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1= not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################
# Goal Section
#
Goal:
# A Goal HAS preferences (within-level comparison) and HAS Children (items in level)
  name: Choose the Most Suitable Leader
  preferences:
    # preferences are defined pairwise
    # 1 means: A is equal to B
    # 9 means: A is highly preferable to B
    # 1/9 means: B is highly preferable to A
    - [Experience, Education, 4]
    - [Experience, Charisma, 3]
    - [Experience, Age, 7]
    - [Education, Charisma, 1/3]
    - [Education, Age, 3]
    - [Age, Charisma, 1/5]
  children: 
    Experience:
          preferenceFunction: >
            ExperiencePreference <- function(a1, a2) {
              if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
              ratio <- a1$experience / a2$experience
              if (ratio < 1.05) return (1)
              if (ratio < 1.2) return (2)
              if (ratio < 1.5) return (3)
              if (ratio < 1.8) return (4)
              if (ratio < 2.1) return (5)
              return (6)
            }
          children: *alternatives
    Education:
          preferenceFunction: >
            EducPreference <- function(a1, a2) {
              if (a1$education < a2$education) return (1/EducPreference(a2, a1))
              ratio <- a1$education / a2$education
              if (ratio < 1.05) return (1)
              if (ratio < 1.15) return (2)
              if (ratio < 1.25) return (3)
              if (ratio < 1.35) return (4)
              if (ratio < 1.55) return (5)
              return (5)
            }
          children: *alternatives
    Charisma:
      preferences:
        - [Tom, Dick, 5]
        - [Tom, Harry, 9]
        - [Dick, Harry, 4]
      children: *alternatives
    Age:
      preferences:
        - [Tom, Dick, 1/3]
        - [Tom, Harry, 5]
        - [Dick, Harry, 9]
      children: *alternatives
#
# End of Goal Section
#####################################

Analytic Hierarchy Process (AHP) with the ahp Package

On my December to-do list, I had “write an R package to make analytic hierarchy process (AHP) easier” — but fortunately gluc beat me to it, and saved me tons of time on an actual research problem I was using AHP for. First of all, thank you for writing the new ahp package! Next, I’d like to show everyone just how easy this package makes performing AHP and displaying the results. We will use the Tom, Dick, and Harry example described on Wikipedia: the goal is to choose the most suitable leader, and you can pick Tom, Dick, or Harry. Read the problem statement on Wikipedia before proceeding.

AHP is a method for multi-criteria decision making that breaks the problem down based on decision criteria, subcriteria, and alternatives that could satisfy a particular goal. The criteria are compared to one another, the alternatives are compared to one another based on how well they comparatively satisfy the subcriteria, and then the subcriteria are examined in terms of how well they satisfy the higher-level criteria. The Tom-Dick-Harry problem is a simple hierarchy: only one level of criteria separates the goal (“Choose the Most Suitable Leader”) from the alternatives (Tom, Dick, or Harry):

[Image: decision hierarchy for the Tom-Dick-Harry leader example]

To use the ahp package, the most challenging part is setting up the YAML file with your hierarchy and your rankings. THE MOST IMPORTANT THING TO REMEMBER IS THAT THE COLUMN IN WHICH A WORD STARTS MATTERS. This feels like FORTRAN. YAML experts may be appalled that I just didn’t know this, but I didn’t, so most of my first 20 hours stumbling through the ahp package were spent coming to this very critical conclusion. The YAML AHP input file requires you to specify 1) the alternatives (along with some variables that describe them; I didn’t use the variables in this example, but I’ll post a second example that does) and 2) the goal hierarchy, which includes 2A) comparisons of all the criteria against one another FIRST, and then 2B) comparisons of the alternatives against one another with respect to each criterion. I saved my YAML file as tomdickharry.txt and put it in my C:/AHP/artifacts directory:

#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1= not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################
# Goal Section
#
Goal:
# A Goal HAS preferences (within-level comparison) and HAS Children (items in level)
  name: Choose the Most Suitable Leader
  preferences:
    # preferences are defined pairwise
    # 1 means: A is equal to B
    # 9 means: A is highly preferable to B
    # 1/9 means: B is highly preferable to A
    - [Experience, Education, 4]
    - [Experience, Charisma, 3]
    - [Experience, Age, 7]
    - [Education, Charisma, 1/3]
    - [Education, Age, 3]
    - [Age, Charisma, 1/5]
  children: 
    Experience:
      preferences:
        - [Tom, Dick, 1/4]
        - [Tom, Harry, 4]
        - [Dick, Harry, 9]
      children: *alternatives
    Education:
      preferences:
        - [Tom, Dick, 3]
        - [Tom, Harry, 1/5]
        - [Dick, Harry, 1/7]
      children: *alternatives
    Charisma:
      preferences:
        - [Tom, Dick, 5]
        - [Tom, Harry, 9]
        - [Dick, Harry, 4]
      children: *alternatives
    Age:
      preferences:
        - [Tom, Dick, 1/3]
        - [Tom, Harry, 5]
        - [Dick, Harry, 9]
      children: *alternatives
#
# End of Goal Section
#####################################
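
One habit I would suggest (my own workaround, not something the ahp package requires): because the indentation is so unforgiving, parse the file with the separate yaml package first. If it fails to parse, or the structure comes back nested differently than you expect, fix the indentation before blaming ahp:

# Optional sanity check: make sure the YAML parses and is nested as intended
# before handing the file to ahp
install.packages("yaml")
library(yaml)

spec <- yaml.load_file("C:/AHP/artifacts/tomdickharry.txt")
str(spec, max.level = 2)     # Alternatives and Goal should be the top-level items
names(spec$Goal$children)    # should list Experience, Education, Charisma, Age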

Next, I installed gluc’s ahp package and a helper package, data.tree, then loaded them into R:

devtools::install_github("gluc/ahp", build_vignettes = TRUE)
install.packages("data.tree")

library(ahp)
library(data.tree)

Running the calculations was ridiculously easy:

setwd("C:/AHP/artifacts")
myAhp <- LoadFile("tomdickharry.txt")
Calculate(myAhp)

And then generating the output was also ridiculously easy:

> GetDataFrame(myAhp)
                                  Weight  Dick   Tom Harry Consistency
1 Choose the Most Suitable Leader 100.0% 49.3% 35.8% 14.9%        4.4%
2  ¦--Experience                   54.8% 39.3% 11.9%  3.6%        3.2%
3  ¦--Education                    12.7%  1.0%  2.4%  9.2%        5.6%
4  ¦--Charisma                     27.0%  5.2% 20.1%  1.7%        6.1%
5  °--Age                           5.6%  3.8%  1.5%  0.4%        2.5%
> 
> print(myAhp, "weight", filterFun = isNotLeaf)
                        levelName     weight
1 Choose the Most Suitable Leader 1.00000000
2  ¦--Experience                  0.54756924
3  ¦--Education                   0.12655528
4  ¦--Charisma                    0.26994992
5  °--Age                         0.05592555
> print(myAhp, "weight")
                         levelName     weight
1  Choose the Most Suitable Leader 1.00000000
2   ¦--Experience                  0.54756924
3   ¦   ¦--Tom                     0.21716561
4   ¦   ¦--Dick                    0.71706504
5   ¦   °--Harry                   0.06576935
6   ¦--Education                   0.12655528
7   ¦   ¦--Tom                     0.18839410
8   ¦   ¦--Dick                    0.08096123
9   ¦   °--Harry                   0.73064467
10  ¦--Charisma                    0.26994992
11  ¦   ¦--Tom                     0.74286662
12  ¦   ¦--Dick                    0.19388163
13  ¦   °--Harry                   0.06325174
14  °--Age                         0.05592555
15      ¦--Tom                     0.26543334
16      ¦--Dick                    0.67162545
17      °--Harry                   0.06294121
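
As a sanity check on those top-level weights (and a peek at what the package is doing for you), you can derive the criteria priorities yourself from the six pairwise judgments in the Goal section. The sketch below uses the principal eigenvector, which is one standard way to turn a pairwise comparison matrix into priorities; the ahp package's exact computation may differ slightly, but the numbers should land close to 0.548, 0.127, 0.270, and 0.056:

# Pairwise comparison matrix for the four criteria, built from the YAML preferences
crit <- c("Experience", "Education", "Charisma", "Age")
C <- matrix(c(  1,   4,   3, 7,
              1/4,   1, 1/3, 3,
              1/3,   3,   1, 5,
              1/7, 1/3, 1/5, 1),
            nrow = 4, byrow = TRUE, dimnames = list(crit, crit))

ev <- Re(eigen(C)$vectors[, 1])   # principal eigenvector (one standard AHP approach)
round(ev / sum(ev), 3)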

You can also generate very beautiful output with the command below (but you’ll have to run the example yourself if you want to see how fantastically it turns out — maybe that will provide some motivation!)

ShowTable(myAhp)

I’ll post soon with an example of how to use AHP preference functions in the Tom, Dick, & Harry problem.

Course Materials for Statistics (The Easier Way) With R

[Image: book cover for Statistics (The Easier Way) With R]

Are you an instructor with a Spring 2016 intro to statistics course coming up… and yet you haven’t started preparing? If so, I have a potential solution for you to consider. The materials (lecture slides, in-class practice problems in R, exams, syllabus) go with my book and are about 85% compiled, but good enough to get a course started this week (I will be finishing the collection by mid-January).
There is also a 36MB .zip file if you would like to download the materials.
Whether you will be using them or just considering them, please fill in the Google Form at https://docs.google.com/forms/d/1Z7djuKHg1L4k7bTtktHXI7Juduad3fUW9G69bxN6jRA/viewform so I can keep track of everyone and provide you with updates. Also, I want to make sure that I’m providing the materials to INSTRUCTORS (and not students), so please use the email account from your institution when you sign up. Once I get your contact information, I will email you the link to the materials.
If you would like permission to edit the materials, I can do that as well — I know a couple of you have expressed that you would like to add to the collection (e.g. translate them to another language). If you see any issues or errors, please either fix them or email me so I can fix them!
Thanks for your interest and participation! Also, Happy New Year!

My First (R) Shiny App: An Annotated Tutorial

Image Credit: Doug Buckley of http://hyperactive.to

I’ve been meaning to learn Shiny for 2 years now… and thanks to a fortuitous email from @ImADataGuy this morning and a burst of wild coding energy about 5 hours ago, I am happy to report that I have completely fallen in love again. The purpose of this post is to share how I got my first Shiny app up and running tonight on localhost, how I deployed it to the http://shinyapps.io service, and how you can create a “Hello World” style program of your own that actually works on data that’s meaningful to you.

If you want to create a “Hello World!” app with Shiny (and your own data!) just follow these steps:

0. Install R 3.2.0+ first! This will save you time.
1. I signed up for an account at http://shinyapps.io.
2. Then I clicked the link in the email they sent me.
3. That allowed me to set up my https://radziwill.shinyapps.io location.
4. Then I followed the instructions at https://www.shinyapps.io/admin/#/dashboard
(This page has SPECIAL SECRET INFO CUSTOMIZED JUST FOR YOU ON IT!!) I had lots 
of problems with devtools::install_github('rstudio/shinyapps') - Had to go 
into my R directory, manually delete RCurl and digest, then 
reinstall both RCurl and digest... then installing shinyapps worked.
Note: this last command they tell you to do WILL NOT WORK because you do not have an app yet! 
If you try it, this is what you'll see:
> shinyapps::deployApp('path/to/your/app')
Error in shinyapps::deployApp("path/to/your/app") : 
C:\Users\Nicole\Documents\path\to\your\app does not exist
5. Then I went to http://shiny.rstudio.com/articles/shinyapps.html and installed rsconnect.
6. I clicked on my name and gravatar in the upper right hand corner of the 
https://www.shinyapps.io/admin/#/dashboard window I had opened, and then clicked 
"tokens". I realized I'd already done this part, so I skipped down to read 
"A Demo App" on http://shiny.rstudio.com/articles/shinyapps.html
7. Then, I re-installed ggplot2 and shiny using this command:
install.packages(c('ggplot2', 'shiny'))
8. I created a new directory (C:/Users/Nicole/Documents/shinyapps) and used
setwd to get to it.
9. I pasted the code at http://shiny.rstudio.com/articles/shinyapps.html to create two files, 
server.R and ui.R, which I put into my new shinyapps directory 
under a subdirectory called demo. The subdirectory name IS your app name.
10. I typed runApp("demo") into my R console, and voila! The GUI appeared in 
my browser window on my localhost.
-- Don't just try to close the browser window to get the Shiny app 
to stop. R will hang. To get out of this, I had to use Task Manager and kill R.
--- Use the main menu, and do Misc -> Stop Current Computation
11. I did the same with the "Hello Shiny" code at http://shiny.rstudio.com/articles/shinyapps.html. 
But what I REALLY want is to deploy a hello world app with MY OWN data. You know, something that's 
meaningful to me. You probably want to do a test app with data that is meaningful to you... here's 
how you can do that.
12. A quick search shows that I need jennybc's (Github) googlesheets package to get 
data from Google Drive viewable in my new Shiny app.
13. So I tried to get the googlesheets package with this command:
devtools::install_github('jennybc/googlesheets')
but then found out it requires R version 3.2.0. If you already have 3.2.0 you can skip 
to step 16 now.
14. So I reinstalled R using the installr package (highly advised if you want to 
overcome the agony of upgrading on windows). 
See http://www.r-statistics.com/2013/03/updating-r-from-r-on-windows-using-the-installr-package/
for info -- all it requires is that you type installR() -- really!
15. After installing R I restarted my machine. This is probably the first time in a month that 
I've shut all my browser windows, documents, spreadsheets, PDFs, and R sessions. I got the feeling 
that this made my computer happy.
16. Then, I created a Google Sheet with my data. While viewing that document, I went to 
File -> "Publish to the Web". I also discovered that my DOCUMENT KEY is that 
looooong string in the middle of the address, so I copied it for later:
1Bs0OH6F-Pdw5BG8yVo2t_VS9Wq1F7vb_VovOmnDSNf4
17. Then I created a new directory in C:/Users/Nicole/Documents/shinyapps to test out 
jennybc's googlesheets package, and called it jennybc
18. I copied and pasted the code in her server.R file and ui.R file
from https://github.com/jennybc/googlesheets/tree/master/inst/shiny-examples/01_read-public-sheet 
into files with the same names in my jennybc directory
19. I went into my R console, used getwd() to make sure I was in the
C:/Users/Nicole/Documents/shinyapps directory, and then typed
 runApp("jennybc")
20. A browser window popped up on localhost with her test Shiny app! I played with it, and then 
closed that browser tab.
21. When I went back into the R console, it was still hanging, so I went to the menu bar 
to Misc -> Stop Current Computation. This brought my R prompt back.
22. Now it was time to write my own app. I went to http://shiny.rstudio.com/gallery/ and
found a layout I liked (http://shiny.rstudio.com/gallery/tabsets.html), then copied the 
server.R and ui.R code into C:/Users/Nicole/Documents/shinyapps/my-hello -- 
and finally, tweaked the code and engaged in about 100 iterations of: 1) edit the two files, 
2) type runApp("my-hello") in the R console, 3) test my Shiny app in the 
browser window, 4) kill browser window, 5) do Misc -> Stop Current Computation 
in R. ALL of the computation happens in server.R, and all the display happens in ui.R:

server.R:

library(shiny)
library(googlesheets)
library(DT)

my_key <- "1Bs0OH6F-Pdw5BG8yVo2t_VS9Wq1F7vb_VovOmnDSNf4"
my_ss <- gs_key(my_key)
my_data <- gs_read(my_ss)

shinyServer(function(input, output, session) {
 output$plot <- renderPlot({
 my_data$type <- ordered(my_data$type,levels=c("PRE","POST"))
 boxplot(my_data$score~my_data$type,ylim=c(0,100),boxwex=0.6)
 })
 output$summary <- renderPrint({
 aggregate(score~type,data=my_data, summary)
 })
 output$the_data <- renderDataTable({
 datatable(my_data)
 })

})

ui.R:

library(shiny)
library(shinythemes)
library(googlesheets)

shinyUI(fluidPage(
 
 # Application title
 titlePanel("Nicole's First Shiny App"),
 
 # Sidebar with controls to select the random distribution type
 # and number of observations to generate. Note the use of the
 # br() element to introduce extra vertical spacing
 sidebarLayout(
 sidebarPanel(
     helpText("This is my first Shiny app!! It grabs some of my data 
from a Google Spreadsheet, and displays it here. I      
also used lots of examples from"),
     h6(a("http://shiny.rstudio.com/gallery/", 
href="http://shiny.rstudio.com/gallery/", target="_blank")),
     br(),
     h6(a("Click Here for a Tutorial on How It Was Made", 
href="https://qualityandinnovation.com/2015/12/08/my-first-shin     
y-app-an-annotated-tutorial/",
      target="_blank")),
      br()
 ),
 
 # Show a tabset that includes a plot, summary, and table view
 # of the generated distribution
 mainPanel(
    tabsetPanel(type = "tabs", 
    tabPanel("Plot", plotOutput("plot")), 
    tabPanel("Summary", verbatimTextOutput("summary")), 
    tabPanel("Table", DT::dataTableOutput("the_data"))
 )
 )
 )
))
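
If you do not have a published Google Sheet handy, one workaround (my own suggestion, not part of the original steps) is to comment out the gs_key()/gs_read() lines at the top of server.R and substitute a made-up data frame with the same two columns (type and score); everything else in the app runs unchanged:

# Hypothetical stand-in for the Google Sheet, for local testing only
set.seed(42)
my_data <- data.frame(
  type  = rep(c("PRE", "POST"), each = 20),
  score = round(c(rnorm(20, mean = 62, sd = 12), rnorm(20, mean = 78, sd = 8)))
)
head(my_data)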


23. Once I decided my app was good enough for my practice round, it was time to 
deploy it to the cloud.
24. This part of the process requires the shinyapps and dplyr 
packages, so be sure to install them:

devtools::install_github('hadley/dplyr')
library(dplyr)
devtools::install_github('rstudio/shinyapps')
library(shinyapps)
25. To deploy, all I did was this:
setwd("C:/Users/Nicole/Documents/shinyapps/my-hello/")
deployApp()

CHECK OUT MY SHINY APP!!

Logistic Growth, S Curves, Bifurcations, and Lyapunov Exponents in R

If you’ve ever wondered how logistic population growth (the Verhulst model), S curves, the logistic map, bifurcation diagrams, sensitive dependence on initial conditions, “orbits”, deterministic chaos, and Lyapunov exponents are related to one another… this post attempts to provide a simplified explanation(!) in just 10 steps, each with some code in R so you can explore it all yourself. I’ve included some code written by other people who have explored this problem (cited below) as portions of my own code.

It all starts with a hypothesized population… and a process where the size of the population changes over time. We want to understand how (and under what conditions) those changes occur, so we choose a model that characterizes population changes: the logistic growth model. It’s been used in biology, ecology, econometrics, marketing, and other areas.

1. The logistic growth model describes how the size of a population (N) changes over time (t), based on some maximum population growth rate (r). There is a limiting factor called the carrying capacity (K) which represents the total population that the environment could support, based on the amount of available resources. dN/dt is the rate of change of the population over time.

[Figure: the logistic growth model, dN/dt = rN(1 − N/K)]

2. You can simplify the logistic growth model by defining a new variable x to represent the portion of the population that’s alive, compared to the total population that the environment could support (and keep alive). So with x = N/K, you get a new differential equation in terms of x. Now we are looking at the rate of change of the population fraction over time. Once x = N/K = 1, the environment can’t support any more members in the population:

[Figure: the simplified model in terms of the population fraction, dx/dt = rx(1 − x)]

3. You can solve this equation by integration! Then, you’ll have an expression that you can use to calculate x (which is still the population fraction) for any time t. This is called the sigmoid or (more commonly), the S Curve. To compute x at any time t, all we need to know is how big the population was when we started looking at it (x0) and the maximum growth rate r:

[Figure: the S curve solution, x(t) = x0·e^(rt) / (1 + x0·(e^(rt) − 1))]
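
If you want to see the S shape without doing the integration by hand, you can plot the closed-form solution directly in R (a quick sketch; x0 = 0.01 and r = 1 are arbitrary choices just to show the shape):

# S curve: x(t) = x0*exp(r*t) / (1 + x0*(exp(r*t) - 1))
s_curve <- function(t, x0 = 0.01, r = 1) {
  x0 * exp(r * t) / (1 + x0 * (exp(r * t) - 1))
}
curve(s_curve(x), from = 0, to = 12, xlab = "t", ylab = "x (population fraction)",
      main = "Logistic growth: the S curve")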

4. The equation for the S Curve is deterministic and continuous. If we want to solve it numerically, we have to discretize it by chopping up that continuous axis that contains time into little tiny pieces of time. That’s what produces the difference equation that we recognize as the logistic map. It’s a map because it “maps” each value of the sequence onto the next value in the sequence. As long as you know one of those values for x (indicated by the subscript n), you’ll be able to figure out the next value of x (indicated by the subscript n+1). The value x[n] is the population fraction of the current generation, and the value x[n+1] is the population fraction for the next generation. This makes the logistic map a Markov chain. If you plot x[n] on the x axis and x[n+1] on the y axis, this expression will produce the familiar upside down parabola:

[Figure: the logistic map, x[n+1] = r·x[n]·(1 − x[n])]
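
That upside-down parabola is easy to see for yourself (a quick sketch; r = 3.9 is an arbitrary choice of growth rate):

# The logistic map x[n+1] = r * x[n] * (1 - x[n]) plotted as a function of x[n]
r <- 3.9
curve(r * x * (1 - x), from = 0, to = 1,
      xlab = expression(x[n]), ylab = expression(x[n+1]),
      main = "The logistic map is an upside-down parabola")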

5. The logistic map behaves differently depending on the maximum growth rate (r) that describes your population. This parameter is also called fecundity, and it represents how rabbit-like your population is when it reproduces: the higher the r, the more prolific (although I’m not sure precisely which r you’d choose if you were actually studying rabbits). Here is an R function that you can use to generate the last M iterations from a sequence of N total, developed and described at Mage’s Blog:

logistic.map <- function(r, x, N, M) {
 ## from http://www.magesblog.com/2012/03/logistic-map-feigenbaum-diagram.html
 ## r: bifurcation parameter
 ## x: initial value, something greater than 0 and less than 1
 ## N: number of iterations (total)
 ## M: number of iteration points to be returned
   z <- 1:N
   z[1] <- x
   for(i in c(1:(N-1))){
     z[i+1] <- r *z[i] * (1 - z[i])
   }
   ## Return the last M iterations 
   z[c((N-M):N)]
}

6. The logistic map has many interesting properties, but here are two in particular (the first in Step 6 and the second in Step 8). First, for some values of r, the chain converges to a single value (a fixed point) when n gets really big. For other values of r, x will eventually bounce between two values instead of converging (a limit cycle of period 2), and for others it will bounce between four values. Sometimes, x will bounce around a nearly limitless collection of values (a condition called deterministic chaos). The eventual value (or collection of eventual values, if it bounces between them) is called an orbit. For example, when the growth rate r is 2.6, the logistic map rapidly converges to an orbit of about 0.615:

plot(logistic.map(2.6,.01,20,20), type="l")

[Figure: the logistic map for r=2.6 converging to about 0.615]
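
A quick numeric aside (mine, not from the original derivation): for growth rates between 1 and 3, the logistic map settles onto the fixed point x* = 1 − 1/r, and for r = 2.6 that works out to about 0.615, which is exactly the plateau in the plot above:

tail(logistic.map(2.6, .01, 100, 5))   # the last few iterates of the chain
1 - 1/2.6                              # theoretical fixed point: 0.6153846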

7. Sometimes, it can be nice to take a look at how the values bounce around, and where they eventually converge (or not). To do this, we use cobweb diagrams (which are also sometimes called web diagrams). I used a function that I found at http://bayesianbiologist.com to plot the behavior of the orbits for r=2.6, r=3.3, and r=3.9:

logistic.cobweb <- function(r) {
# code adapted from http://bayesianbiologist.com/tag/cobweb-plot/
 N <- 100                      # number of points along the x axis
 x <- seq(0, 1, length = N)
 x_next <- sapply(2:N, function(i) r*x[i]*(1-x[i]))
 plot(x[2:N],x_next,type="l",xlim=c(0,1), ylim=c(0,1), main=paste("r=",r),
 xlab=expression(x[t]),ylab=expression(x[t+1]), col="red", lwd=2)
 abline(0,1)

 # start at your random spot on the x-axis and start with a vertical line:
 start=runif(1,0,1)
 vert=FALSE
 lines(x=c(start,start),y=c(0,r*start*(1-start)) )
 for(i in 1:(2*N)) {
 if(vert) {
   lines(x=c(start,start),y=c(start,r*start*(1-start)) )
   vert=FALSE
 } else {
   lines(x=c(start, r*start*(1-start)), y=c(r*start*(1-start), r*start*(1-start)) )
   vert=TRUE
   start=r*start*(1-start)
 }
 }
}

par(mfrow=c(1,3))
logistic.cobweb(2.6)
logistic.cobweb(3.3)
logistic.cobweb(3.9)

[Figure: cobweb diagrams for r=2.6, r=3.3, and r=3.9]

8. (Remember to dev.off() before you continue.) Second, for some values of r, the logistic map shows sensitive dependence on initial conditions. For example, let’s see what happens for two different growth rates (r=3 and r=3.9) when we start one iteration with an x[n] of 0.5 COLORED BLACK, and another one with an x[n] of 0.5001 COLORED RED. It’s a small, small difference that can lead to big, BIG variations in the orbits. In the r=3 case, the chain produced by the logistic map with x[n] of 0.5 (in black) is IDENTICAL to the chain produced by the logistic map with x[n] of 0.5001 (in red). That’s why you can’t see the black… the values are the same! But in the r=3.9 case, the chain produced by the logistic map with x[n] of 0.5 (in black) RAPIDLY DIVERGES from the chain produced by the logistic map with x[n] of 0.5001 (in red). They are very different, despite a tiny difference in initial conditions! The logistic map for r=3.9 shows very sensitive dependence on initial conditions.

par(mfrow=c(2,1))
first <- logistic.map(3,.5,120,100)
second <- logistic.map(3,.5001,120,100)
plot(1:length(first),first,type="l",main="r=3 is not sensitive to initial conditions")
lines(1:length(second),second,type="l",col="red")
first <- logistic.map(3.9,.5,120,100)
second <- logistic.map(3.9,.5001,120,100)
plot(1:length(first),first,type="l",main="but r=3.9 is EXTREMELY sensitive")
lines(1:length(second),second,type="l",col="red")

[Figure: chains for r=3 (insensitive) vs. r=3.9 (extremely sensitive to initial conditions)]

9. For any chain, we can determine just how sensitive the logistic map is to initial conditions by looking at the Lyapunov exponent. Very simplistically, if the Lyapunov exponent is negative, the chain will converge to one or more fixed points for that value of r. If the Lyapunov exponent is positive, the chain will demonstrate deterministic chaos for that value of r. If the Lyapunov exponent is zero, there is a bifurcation: a 1-cycle is doubling to a 2-cycle, a 2-cycle is doubling to a 4-cycle, and so forth. The top chart shows an approximation of the Lyapunov exponent based on the first 500 iterations (ideally, you’d use an infinite number, but that would eat up too much computing time), and the bottom chart shows a bifurcation diagram. You’ll notice that the Lyapunov exponents are zero where a bifurcation occurs. To interpret the bifurcation diagram, just remember that each vertical slice through it represents the results of ONE COMPLETELY CONVERGED CHAIN from the logistic map. So it shows the results from many, many completely converged chains, and provides an excellent way for us to look at the behavior of MANY different types of populations in just one chart:

x <- seq(0, 4, 0.01)   # candidate growth rates r
n <- length(x)
XI <- NULL             # one converged orbit (row) per growth rate
for (i in 1:n) {
 xi <- logistic.map(x[i],.01,500,500)
 XI <- rbind(XI,xi)
}
lya <- numeric(n)
for (i in 1:n) { 
 # Lyapunov exponent ~ average of log|f'(x)| = log|r - 2*r*x| over the orbit
 lya[i] <- sum(log(abs(x[i]-(2*x[i]*XI[i,]))))/ncol(XI) 
}
plot(x,lya,ylim=c(-4,1),xlim=c(0,4),type="l",main="Lyapunov Exponents for Logistic Map")
abline(h=0, lwd=2, col="red")
# next 3 lines from http://www.magesblog.com/2012/03/logistic-map-feigenbaum-diagram.html:
my.r <- seq(0, 4, by=0.003)
Orbit <- sapply(my.r, logistic.map, x=0.1, N=1000, M=300)
r <- sort(rep(my.r, 301))

par(mfrow=c(2,1))
plot(x,lya,ylim=c(-5,1),xlim=c(0,4),type="l",main="Lyapunov Exponents for Logistic Map")
abline(h=0, col="red", lwd=2)
abline(v=3, col="blue", lwd=2)
plot(r, Orbit, pch=".", cex=0.5, main="Bifurcation Diagram for r=0 to r=4 Logistic Maps")
abline(v=3, col="blue", lwd=2)

[Figure: Lyapunov exponents (top) and bifurcation diagram (bottom) for the logistic map]

10. Notice that in the bifurcation diagram, we can easily see that when r is between 0 and 1, the population converges to extinction. This makes sense, because the growth rate is smaller than what’s required to sustain the size of the population. You might like to zoom in, though, and see what the orbits look like for some smaller portions of the diagram. Here’s how you can do it (but be sure to refresh your graphics window with dev.off() before you try it). Try changing the plot character (pch) too, or maybe the size of the characters with cex=0.2 or cex=0.5 in the last line:

# adapted from http://www.magesblog.com/2012/03/logistic-map-feigenbaum-diagram.html:
my.r <- seq(3.5, 4, by=0.003)
Orbit <- sapply(my.r, logistic.map, x=0.1, N=1000, M=300)
multiplier <- length(Orbit)/length(my.r)
r <- sort(rep(my.r, multiplier))
plot(r, Orbit, pch=".")

[Figure: zoomed bifurcation diagram for r between 3.5 and 4]

That’s it!

Find out more information on these other web pages, which are listed in order of difficulty level:

A 15-Week Intro Statistics Course Featuring R

Morgan at Burning Man 2014. (Image Credit: Nicole Radziwill)

Do you teach introductory statistics or data science? Need some help planning your fall class?

I apply the 10 Principles of Burning Man in the design and conduct of all my undergraduate and graduate-level courses, including my introductory statistics class (which has a heavy focus on R and data science) at JMU. This means that I consider learning to be emergent, and as a result, it often doesn’t follow a prescribed path of achieving specified learning objectives. However, in certain courses, I still feel like it’s important to provide a general structure to help guide the way! This also helps the students get a sense of our general trajectory over the course of the semester, and do readings in advance if they’re ready.

Since several people have asked for a copy, here is the SYLLABUS that I use for my 15-week class (that also uses the “informal” TEXTBOOK I wrote this past spring). We meet twice a week for an hour and 15 minutes each session. The class is designed for undergraduate sophomores, but there are always students from all levels enrolled. The course is intended to provide an introduction to (frequentist) statistical thinking, but with an applied focus that has practical data analysis at its core.

My goal is simple. At the end of the semester, I want students to be able to:

Please let me know if this syllabus is helpful to you! I’ll be posting my intensive (5-session) version of this tomorrow or the next day.

Feel free to join our class Facebook group at https://www.facebook.com/groups/262216220608559/ if you want to play along at home.

A Simple Intro to Bayesian Change Point Analysis

The purpose of this post is to demonstrate change point analysis by stepping through an example of the technique in R presented in Rizzo’s excellent, comprehensive, and very mathy book, Statistical Computing with R, and then showing alternative ways to process this data using the changepoint and bcp packages. Much of the commentary is simplified, and that’s on purpose: I want to make this introduction accessible if you’re just learning the method. (Most of the code is straight from Rizzo who provides a much more in-depth treatment of the technique. I’ve added comments in the code to make it easier for me to follow, and that’s about it.)

The idea itself is simple: you have a sample of observations from a Poisson (counting) process (where events occur randomly over a period of time). You probably have a chart that shows time on the horizontal axis, and how many events occurred on the vertical axis. You suspect that the rate at which events occur has changed somewhere over that range of time… either the event is increasing in frequency, or it’s slowing down — but you want to know with a little more certainty. (Alternatively, you could check to see if the variance has changed, which would be useful for process improvement work in Six Sigma projects.)

You want to estimate the rate at which events occur BEFORE the shift (mu), the rate at which events occur AFTER the shift (lambda), and the time when the shift happens (k). To do it, you can apply a Markov Chain Monte Carlo (MCMC) sampling approach to estimate the population parameters at each possible k, from the beginning of your data set to the end of it. The values you get at each time step will be dependent only on the values you computed at the previous timestep (that’s where the Markov Chain part of this problem comes in). There are lots of different ways to hop around the parameter space, and each hopping strategy has a fancy name (e.g. Metropolis-Hastings, Gibbs, “reversible jump”).

In one example, Rizzo (p. 271-277) uses a Markov Chain Monte Carlo (MCMC) method that applies a Gibbs sampler to do the hopping – with the goal of figuring out the change point in number of coal mine disasters from 1851 to 1962. (Looking at a plot of the frequency over time, it appears that the rate of coal mining disasters decreased… but did it really? And if so, when? That’s the point of her example.) She gets the coal mining data from the boot package. Here’s how to get it, and what it looks like:

library(boot)
data(coal)
y <- tabulate(floor(coal[[1]]))
y <- y[1851:length(y)]
barplot(y,xlab="years", ylab="frequency of disasters")

[Figure: frequency of coal mining disasters per year, 1851–1962]

First, we initialize all of the data structures we’ll need to use:

# initialization
n <- length(y) # number of data elements to process
m <- 1000 # target length of the chain
L <- numeric(n) # likelihood fxn has one slot per year
# set up blank 1000 element arrays for mu, lambda, and k
mu <- lambda <- k <- numeric(m)
k[1] <- sample(1:n,1) # pick 1 random year to start at
mu[1] <- 1
lambda[1] <- 1
b1 <- 1
b2 <- 1

Here are the models for prior (hypothesized) distributions that she uses, based on the Gibbs sampler approach:

  • mu comes from a Gamma distribution with a shape parameter of (0.5 + the sum of all your frequencies UP TO the point in time, k, you’re currently at) and a rate of (k + b1)
  • lambda comes from a Gamma distribution with a shape parameter of (0.5 + the sum of all your frequencies AFTER the point in time, k, you’re currently at) and a rate of (n – k + b2), where n is the total number of years of data
  • b1 comes from a Gamma distribution with a shape parameter of 0.5 and a rate of (mu + 1)
  • b2 comes from a Gamma distribution with a shape parameter of 0.5 and a rate of (lambda + 1)
  • a likelihood function L is also provided, and is a function of mu, lambda, and the sum of all the frequencies up until the candidate change point k

At each iteration, you pick a value of k to represent a point in time where a change might have occurred. You slice your data into two chunks: the chunk that happened BEFORE this point in time, and the chunk that happened AFTER this point in time. Using your data, you apply a Poisson Process with a (Hypothesized) Gamma Distributed Rate as your model. This is a pretty common model for this particular type of problem. It’s like randomly cutting a deck of cards and taking the average of the values in each of the two cuts… then doing the same thing again… a thousand times. Here is Rizzo’s (commented) code:

# start at 2, so you can use initialization values as seeds
# and go through this process once for each of your m iterations
for (i in 2:m) {
 kt <- k[i-1] # start w/random year from initialization
 # set your shape parameter to pick mu from, based on the characteristics
 # of the early ("before") chunk of your data
 r <- .5 + sum(y[1:kt]) 
 # now use it to pick mu
 mu[i] <- rgamma(1,shape=r,rate=kt+b1) 
 # if you're at the end of the time periods, set your shape parameter
 # to 0.5 + the sum of all the frequencies, otherwise, just set the shape
 # parameter that you will use to pick lambda based on the later ("after")
 # chunk of your data
 if (kt+1 > n) r <- 0.5 + sum(y) else r <- 0.5 + sum(y[(kt+1):n])
 lambda[i] <- rgamma(1,shape=r,rate=n-kt+b2)
 # now use the mu and lambda values that you got to set b1 and b2 for next iteration
 b1 <- rgamma(1,shape=.5,rate=mu[i]+1)
 b2 <- rgamma(1,shape=.5,rate=lambda[i]+1)
 # for each year, find value of LIKELIHOOD function which you will 
 # then use to determine what year to hop to next
 for (j in 1:n) {
 L[j] <- exp((lambda[i]-mu[i])*j) * (mu[i]/lambda[i])^sum(y[1:j])
 }
 L <- L/sum(L)
 # determine which year to hop to next
 k[i] <- sample(1:n,prob=L,size=1)
}

Knowing the distributions of mu, lambda, and k from hopping around our data will help us estimate values for the true population parameters. At the end of the simulation, we have an array of 1000 values of k, an array of 1000 values of mu, and an array of 1000 values of lambda — we use these to estimate the real values of the population parameters. Typically, algorithms that do this automatically throw out a whole bunch of them in the beginning (the “burn-in” period) — Rizzo tosses out 200 observations — even though some statisticians (e.g. Geyer) say that the burn-in period is unnecessary:

> b <- 201 # treats time until the 200th iteration as "burn-in"
> mean(k[b:m])
[1] 39.765
> mean(lambda[b:m])
[1] 0.9326437
> mean(mu[b:m])
[1] 3.146413

The change point happened between the 39th and 40th observations, the arrival rate before the change point was 3.14 arrivals per unit time, and the rate after the change point was 0.93 arrivals per unit time. (Cool!)
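
Since we kept the entire chains, we can also look at the posterior distributions themselves rather than just their means. Here is a quick sketch (using the k, mu, and lambda vectors and the burn-in index b from the code above):

# Posterior summaries from the retained (post burn-in) part of the chains
hist(k[b:m], breaks = 30, xlab = "change point year index (k)",
     main = "Posterior distribution of the change point")
quantile(k[b:m], c(0.025, 0.975))       # rough 95% credible interval for k
quantile(mu[b:m], c(0.025, 0.975))      # ... and for the before-change rate
quantile(lambda[b:m], c(0.025, 0.975))  # ... and for the after-change rate
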
After I went through this example, I discovered the changepoint package, which let me run through a similar process in just a few lines of code. Fortunately, the results were very similar! I chose the “AMOC” method which stands for “at most one change”. Other methods are available which can help identify more than one change point (PELT, BinSeg, and SegNeigh – although I got an error message every time I attempted that last method).

> results <- cpt.mean(y,method="AMOC")
> cpts(results)
cpt 
 36 
> param.est(results)
$mean
[1] 3.2500000 0.9736842
> plot(results,cpt.col="blue",xlab="Index",cpt.width=4)

[Figure: changepoint AMOC plot of the disaster counts, change point at index 36]

I decided to explore a little further and found even MORE change point analysis packages! So I tried this example using bcp (which I presume stands for “Bayesian Change Point”) and voila… the output looks very similar to each of the previous two methods!
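
The bcp run itself only takes a couple of lines. Here is a minimal sketch of how one might call it, assuming the same y vector from the boot code above and the package defaults:

# Minimal bcp run on the same yearly disaster counts
install.packages("bcp")
library(bcp)
fit <- bcp(y)
plot(fit)   # posterior means and posterior probability of a change at each year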

[Figure: bcp output for the disaster counts]

It’s at this point that the HARD part of the data science project would begin… WHY? Why does it look like the rate of coal mining accidents decreased suddenly? Was there a change in policy or regulatory requirements in Britain, where this data was collected? Was there some sort of mass exodus away from working in the mines, so that there’s a covariate in the number of opportunities for a mining disaster to occur? I don’t know… the original paper from 1979 doesn’t reveal the true story behind the data.

There are also additional resources on R Bloggers that discuss change point analysis:

(Note: If I’ve missed anything, or haven’t explained anything right, please provide corrections and further insights in the comments! Thank you.)
