Tag Archives: statistics

A Decade of PhD: What I’ve Learned from Academia and Industry

Today is Cinco de Mayo! It's also the 10th Anniversary of my PhD defense… something I carefully timed for late afternoon on this day in 2009. (I wanted to make sure I could celebrate the joyful occasion — or drown my sorrows — with 2-for-1 margaritas. Fortunately, the situation was liquid joy; unfortunately, I still got a hangover.)

I’m writing this post to share what I’ve learned about the value of getting a PhD (is there value?) and the applicability of PhD-level work to industry. If you’re considering more education, maybe this will help you decide whether it’s the right choice. If you’re in industry and trying to figure out whether to hire PhDs, some of what I write here might help. But first, some background!

I never even thought I'd get a PhD — it certainly didn't happen out of intent or design. My family was poor and I studied ridiculously hard so I could "escape it." I didn't think I was smart enough for a PhD, even though I started college at 16 taking half undergrad classes and half grad classes in meteorology. I aced my grad classes and very maturely ignored my required classes, so I got kicked out. (At the same time, I wasn't really fitting in with people… my roommate called me "Nerdcole".) When I was let back in, the department head wouldn't let me take any grad classes, so I got bored and burned out… not surprising, since I was supporting myself and working three jobs to make that happen. I quit school to work at an e-commerce startup when I was 18. A few months later, thanks to (good) peer pressure, I took three credit-by-exam tests to see if they would get me over the finish line, and thanks to some side skills I had picked up in vector calculus and statistics, it worked and I got the BS. But I was still left with a pretty bad GPA, and even worse self-esteem, and I was convinced no one would ever let me into grad school.

I figured I’d focus on industry and help companies grow. There was no other choice.

The Back Story

After spending a couple years building web sites and storefronts (a huge feat in 1995 and 1996!) I took a job at a national lab as a systems analyst, supporting older scientists and engineers and helping them get work done. The main lesson I learned during this time was: Alignment between strategy and objectives doesn’t come for free (teams of people have to spend dedicated time on it), and most people are really disorganized. There had to be a better way to get work done.

A few years later, I was a traveling Solutions Architect, parachuted once or twice a month into CRM software implementation fiascos around the globe. My job was to figure out what to do to turn these jobs around — was it a people problem? An architecture problem? A training problem? A systems thinking problem? A little of everything? I had a couple weeks to make a recommendation, and then I was on to the next project (results were usually pretty good). But since this required evaluating technology decisions in the context of business and financial constraints, my boss suggested that I use the tuition benefits offered by my job to get an MBA. I had taken 9 credits of science and industrial engineering classes since I’d graduated, so I contacted two of the local MBA schools to see if they’d accept me and my credits. Sure enough, one of them did! I took evening classes for a year and a half, and eventually ended up with an MBA. But I never thought I could (or would) go farther — I’m not that smart, I’d tell myself. Also, it’s expensive. Also, a PhD would probably make me less marketable. (All lies, spoken by a lack of confidence.)

Shortly thereafter, the travel started to get to me (I was flying at least three days a week), so I looked for an opportunity to grow and cultivate a software development organization. (That’s how I ended up in Data Management at NRAO.) A little management led to a lot of management. A few years later one of the organization’s leaders said it was “too bad I didn’t have a PhD” — because in a highly scientific and technical organization, it would give me more credibility and make me a better leader.

“Will you pay for it?” I asked. “Sure,” they said. I just had to find a suitable program that wouldn’t require me to go full time. I’ve always loved learning, and I couldn’t resist the temptation of free education — even if it meant I’d have to balance the demands of a challenging full-time job and a first-time baby at the same time. That’s how much I love learning, just for learning’s sake! I still didn’t think a PhD had that much value, unless you were studying to be a lab scientist or you were dead set on becoming a historian and teaching for the rest of your life. None of these personas was me, but the free education thing sold me, and I didn’t really think about how relevant this step was to my career direction until much later.

Fortunately, I found the perfect program for me — a hybrid academic/practitioner PhD that would help me develop the research and analytical skills to solve practical problems in business and industrial technology.

The next few years were pretty rough, and by the time I got my PhD, I was in my 14th year of post-college professional employment. First lesson learned: it's probably not the best move to start PhD coursework when you have a three-month-old. I have no idea how I made it through.

Shortly after graduating, the impacts of the financial crisis hit our federally funded organization and I was able to segue into a second career as a college professor, teaching data science and manufacturing/EHSQ classes. For the past year, I’ve been back in industry (maybe permanently; we’ll see) and have a better sense of the value of PhDs in industry.

Value of Getting a PhD

There are lots of reasons I’m happy with the time I spent getting a PhD, other than the fact that it helped me get an entirely new job when the economy was down:

  • First and foremost, I’m a better critical thinker. It’s now my nature to look at all parts of a problem, examine the interactions between them, and make sure I have all the required background information needed to start working on a problem.
  • I’m a better writer too. I look at reports and presentations I wrote years ago, and can see all the holes and places where I made assumptions that weren’t valid.
  • I developed a new appreciation for clarity. Researchers want to make sure their messages, methodologies, and models are clear and unambiguous; through that contrast, I was able to recognize that in industry, there's often pressure to skip due diligence and move fast to perform. This pressure leads to ambiguity, which tends towards what I call "intellectual waste" – people assuming that they see a problem or a project in the same way because they haven't taken the time to guarantee clarity.
  • It’s easier for me to quickly determine whether information might be true or false, or whether there are gaps that need to be closed before moving forward. (It’s possible that this skill is more from grading and evaluating student work… something that’s orders of magnitude harder than it seems.)
  • I realized that words matter. Really thinking about how one person will respond to a word or phrase, and whether it conveys the meaning that you intend, is a craft — that’s enhanced by working with collaborators.
  • And although I knew this one prior to the PhD, I found that data matters. Where did your data come from? Can you access the original? What kind of people (or instruments) gathered it? Can you trust them? The quality of your data — and the suitability of the methods you choose — will impact the quality and integrity of the conclusions you generate from it. Awareness of these factors is essential.

Value of Caution

One of the biggest lessons was the most surprising. Early on in the PhD program I was told that my opinion didn’t count — regardless of how many years of experience I had. Every statement I made had to be backed up and cited, preferably using material that had been peer-reviewed by other qualified people. At first I was kind of offended by this… didn’t these academics have any sense of the value of actual real-world employment? Apparently not.

But something funny happened as I developed the habit of looking for solid references, distilling their messages, and citing them accurately: I became more careful. And in the evolution of my caution and attention to detail, the quality of my work — ANY work — improved tremendously. I was able to learn from what other people had discovered, and anticipate (and resolve) problems in advance. I learned that “standing on the shoulders of giants” actually means figuring out when solved problems already exist so you don’t waste time reinventing wheels.

Something else funny happened as soon as I graduated: all of a sudden, people were asking me for my opinion. But the habit of due diligence was so ingrained that I couldn’t express my opinion… I was compelled to back it up with facts!

(I think this was the point all along. Go figure.)

The beauty of going through the entire messy process of PhD coursework and comps and research and defense and editing — the entire end-to-end process, not cutting out in the middle anywhere — is that it gave me the discipline and process to root out accurate and complete answers to problems. Or at the very least, to be able to call out the gaps to get there.

There’s a lot of pressure in industry to move fast, but due diligence is still critical for accurate self-assessment and effective cross-functional communication. Slowing down and figuring out how you know what you know — and making sure everyone is literally on the same page — can help your organization achieve its goals more quickly.

Value of PhDs to Industry

So employers (especially in tech) — should you hire PhDs? Yes. Here’s why:

  • PhDs are trained to find gaps in knowledge and understanding. Is your strategic plan grounded in reality, or is it just wishful thinking? Are your Project Charters well scoped, budgeted, and planned out? Is your workforce prepared to carry out your strategic initiatives?
  • Many PhDs with experience teaching undergrads are great at making complex topics accessible to other audiences. This is fantastic for training, cross-training, and marketing.
  • PhDs love research and writing, and can help you with gathering and interpreting data and content marketing.
  • PhDs love learning. Want to be on the cutting edge? They’re great in R&D… they can help you distill new insights from research papers and interpret and apply them accurately.
  • If you want to do AI or machine learning, or anything that uses Big Data, make sure you have at least one PhD statistician with practical analytical experience. They can prevent you from spending millions on dead ends and help you apply Occam’s Razor to avoid unnecessary complexity (the kind that can lead to technical debt later).

Will there be drawbacks? Sure. The habit of caution may need to be tempered somewhat — you don’t have to probe to the bottom of an issue to generate useful information that a business can use to make progress.

Bottom line… don't be afraid of PhDs! We are mere mortals who just happen to have spent several years trying to figure out how to get to the core — the fundamental truth — of a complex problem. As a result we know how to approach problems like this — problems that many businesses have lots of. (We are not overqualified at all… we just have an extra skill set in something you desperately need, but may not realize you need.)

Getting a PhD was challenging, frustrating, and maddening at times (especially the final part of getting your camera-ready text ready for ProQuest). I never planned to do it, but I'd totally do it again. I think my only regret is that I got a PhD in a hybrid business/industrial engineering discipline… it allowed me the freedom to pursue my interests, but if I were at the same crossroads now, I'd get a PhD in statistics to complement my MBA. Overall, this is a pretty tiny regret.

Object of Type Closure is Not Subsettable

I started using R in 2004. I started using R religiously on the day of the annular solar eclipse in Madrid (October 3, 2005) after being inspired by David Hunter’s talk at ADASS.

It took me exactly 4,889 days to figure out what this vexing error means, even though trial and error helped me move through it most every time it happened! I’m so moved by what I learned (and embarrassed that it took so long to make sense), that I have to share it with you.

This error happens when you’re trying to treat a function like a list, vector, or data frame. To fix it, start treating the function like a function.

Let’s take something that’s very obviously a function. I picked the sampling distribution simulator from a 2015 blog post I wrote. Cut and paste it into your R console:

sdm.sim <- function(n, src.dist=NULL, param1=NULL, param2=NULL) {
   r <- 10000  # Number of replications/samples - DO NOT ADJUST
   # This produces a matrix of observations with
   # n columns and r rows. Each row is one sample:
   my.samples <- switch(src.dist,
        "E" = matrix(rexp(n*r,param1),r),
        "N" = matrix(rnorm(n*r,param1,param2),r),
        "U" = matrix(runif(n*r,param1,param2),r),
        "P" = matrix(rpois(n*r,param1),r),
        "B" = matrix(rbinom(n*r,param1,param2),r),
        "G" = matrix(rgamma(n*r,param1,param2),r),
        "X" = matrix(rchisq(n*r,param1),r),
        "T" = matrix(rt(n*r,param1),r))
   all.sample.sums <- apply(my.samples,1,sum)
   all.sample.means <- apply(my.samples,1,mean)
   all.sample.vars <- apply(my.samples,1,var)
   par(mfrow=c(2,2))
   hist(my.samples[1,],col="gray",main="Distribution of One Sample")
   hist(all.sample.sums,col="gray",main="Sampling Distribution\nof the Sum")
   hist(all.sample.means,col="gray",main="Sampling Distribution\nof the Mean")
   hist(all.sample.vars,col="gray",main="Sampling Distribution\nof the Variance")
}

The right thing to do with this function is use it to simulate a bunch of distributions and plot them using base R. You write the function name, followed by parentheses containing the four arguments the function needs to work. This will generate a normal distribution with a mean of 20 and standard deviation of 3, along with three sampling distributions, using a sample size of 100 and 10000 replications:

sdm.sim(100, src.dist="N", param1=20, param2=3)

(You should get four plots, arranged in a 2×2 grid.)
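If you want to try one of the other source distributions, the pattern is the same — for example, this call (not in the original post, but supported by the switch statement in the function above) draws samples from an exponential distribution with rate 1:

sdm.sim(100, src.dist="E", param1=1)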

But what if we tried to treat sdm.sim like a list, and call the 3rd element of it? Or what if we tried to treat it like a data frame, and call one of its columns as if it were a variable?

> sdm.sim[3]
Error in sdm.sim[3] : object of type 'closure' is not subsettable

> sdm.sim$values
Error in sdm.sim$values : object of type 'closure' is not subsettable

SURPRISE! Object of type closure is not subsettable. This happens because sdm.sim is a function, and its data type is (shockingly) something called “closure”:

> class(sdm.sim)
[1] "function"

> typeof(sdm.sim)
[1] "closure"

I had read this before on Stack Overflow a whole bunch of times, but it never really clicked until I saw it like this! And now that I had a sense for why the error was occurring, it turns out it's super easy to reproduce with functions in base R or functions in any familiar packages you use:

> ggplot$a
Error in ggplot$a : object of type 'closure' is not subsettable

> table$a
Error in table$a : object of type 'closure' is not subsettable

> ggplot[1]
Error in ggplot[1] : object of type 'closure' is not subsettable

> table[1]
Error in table[1] : object of type 'closure' is not subsettable

As a result, if you’re pulling your hair out over this problem, check and see where in your rogue line of code you’re treating something like a non-function. Or maybe you picked a variable name that competes with a function name. Or maybe you got your operations out of order. In any case, change your notation so that your function is doing function things, and your code should start working.

But as Luke Smith pointed out, this is not true for functional sequences (which you can also write as functions). Functional sequences are those chains of commands you write when you’re in tidyverse mode, all strung together with %>% pipes:

Luke’s code that you can cut and paste (and try), with the top line that got cut off by Twitter, is:

fun1 <- . %>% 
    group_by(col1) %>% 
    mutate(n=n+h) %>% 
    filter(n==max(n)) %>% 
    ungroup() %>% 
    summarize(new_col=mean(n))

fun2 <- fun1[-c(2,5)] 

Even though Luke’s fun1 and fun2 are of type closure, they are subsettable because they contain a sequence of functions:

> typeof(fun1)
[1] "closure"

> typeof(fun2)
[1] "closure"

> fun1[1]
Functional sequence with the following components:

 1. group_by(., col1)

Use 'functions' to extract the individual functions. 

> fun1[[1]]
function (.) 
group_by(., col1)

Don't feel bad! This problem has plagued all of us for many, many hours (me: years), and yet it still happens to us more often than we would like to admit. Awareness of this issue will not prevent you from attempting things that give you this error in the future. It's such a popular error that there have been memes about it and sad valentines written about it.

(Also, if you’ve made it this far, FOLLOW THESE GOOD PEOPLE ON TWITTER: @stephdesilva @djnavarro @lksmth – They all share great information on data science in general, and R in particular. Many thanks too to all the #rstats crowd who shared in my glee last night and didn’t make me feel like an idiot for not figuring this out for ALMOST 14 YEARS. It seems so simple now.

Steph is also a Microsoft whiz who you should definitely hire if you need anything R + Microsoft. Thanks to all of you!)

Analytic Hierarchy Process (AHP) using preferenceFunction in ahp

Yesterday, I wrote about how to use gluc's new ahp package on a simple Tom-Dick-Harry one-level decision-making problem using the Analytic Hierarchy Process (AHP). One of the cool things about that package is that in addition to specifying the pairwise comparisons directly using Saaty's scale (below, from https://kristalaace2014.wordpress.com/2014/05/14/w12_al_vendor-evaluation/)…

[Image: Saaty's pairwise comparison scale]

…you can also describe each of the Alternatives in terms of descriptive variables which you can use inside a function to make the pairwise comparisons automatically. This is VERY helpful if you have lots of criteria, subcriteria, or alternatives to evaluate!! For example, I used preferenceFunction to compare 55 alternatives using 6 criteria and 4 subcriteria, and was very easily able to create functions to represent my judgments. This was much easier than manually entering all the comparisons.

This post shows HOW I replaced some of my manual comparisons with automated comparisons using preferenceFunction. (The full YAML file is included at the bottom of this post for you to use if you want to run this example yourself.) First, recall that the YAML file starts with specifying the alternatives that you are trying to choose from (at the bottom level of the decision hierarchy) and some variables that characterize those alternatives. I used the descriptions in the problem statement to come up with some assessments between 1=not great and 10=great:

[code language="bash" gutter="false"]
#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1 = not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################
[/code]

Here is a snippet from my original YAML file, specifying my AHP problem manually:

[code language="bash" gutter="false"]
children:
  Experience:
    preferences:
      - [Tom, Dick, 1/4]
      - [Tom, Harry, 4]
      - [Dick, Harry, 9]
    children: *alternatives
  Education:
    preferences:
      - [Tom, Dick, 3]
      - [Tom, Harry, 1/5]
      - [Dick, Harry, 1/7]
    children: *alternatives
[/code]

And here is what I changed that snippet to, so that it would do my pairwise comparisons automatically. The functions are written in standard R (fortunately), and each function has access to a1 and a2 (the two alternatives being compared). Recursion is supported, which makes this capability particularly useful. I tried to write a function using two of the characteristics in the decision (a1$age and a1$experience) but this didn't seem to work; I'm not sure whether the package supports it or not. Here are my comparisons rewritten as functions:

[code language="bash" gutter="false"]
children:
  Experience:
    preferenceFunction: >
      ExperiencePreference <- function(a1, a2) {
        if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
        ratio <- a1$experience / a2$experience
        if (ratio < 1.05) return (1)
        if (ratio < 1.2) return (2)
        if (ratio < 1.5) return (3)
        if (ratio < 1.8) return (4)
        if (ratio < 2.1) return (5)
        return (6)
      }
    children: *alternatives
  Education:
    preferenceFunction: >
      EducPreference <- function(a1, a2) {
        if (a1$education < a2$education) return (1/EducPreference(a2, a1))
        ratio <- a1$education / a2$education
        if (ratio < 1.05) return (1)
        if (ratio < 1.15) return (2)
        if (ratio < 1.25) return (3)
        if (ratio < 1.35) return (4)
        if (ratio < 1.55) return (5)
        return (5)
      }
    children: *alternatives
[/code]
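If you want to sanity-check one of these functions before handing it to the ahp package, you can paste it into a plain R session and call it with list() stand-ins for the alternatives. (This is just a quick check I'm sketching here — the package does this for you when it builds the pairwise matrices.)

[code language="r" gutter="false"]
# Standalone check of the Experience preference function, outside the ahp package.
# The list() objects stand in for the alternatives defined in the YAML above.
ExperiencePreference <- function(a1, a2) {
  if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
  ratio <- a1$experience / a2$experience
  if (ratio < 1.05) return (1)
  if (ratio < 1.2) return (2)
  if (ratio < 1.5) return (3)
  if (ratio < 1.8) return (4)
  if (ratio < 2.1) return (5)
  return (6)
}

tom  <- list(experience = 7)
dick <- list(experience = 10)

ExperiencePreference(dick, tom)  # ratio 10/7 is about 1.43, so this returns 3
ExperiencePreference(tom, dick)  # the reciprocal case: returns 1/3
[/code]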

To run the AHP with functions in R, I used this code (I am including the part that gets the ahp package, in case you have not done that yet). BE CAREFUL and make sure, like in FORTRAN, that you line things up so that words start in the appropriate columns: YAML is indentation-sensitive, so each preferenceFunction has to be nested (indented) consistently under its criterion's name, and items at the same level of the hierarchy all have to start in the same column.

[code language="bash" gutter="false"]
devtools::install_github("gluc/ahp", build_vignettes = TRUE)
install.packages("data.tree")

library(ahp)
library(data.tree)

setwd("C:/AHP/artifacts")
nofxnAhp <- LoadFile("tomdickharry.txt")
Calculate(nofxnAhp)
fxnAhp <- LoadFile("tomdickharry-fxns.txt")
Calculate(fxnAhp)

print(nofxnAhp, "weight")
print(fxnAhp, "weight")
[/code]

You can see that the weights are approximately the same, indicating that I did a good job of developing functions that represent the reality of how I used the variables attached to the Alternatives to make my pairwise comparisons. The results show that Dick is still the best choice, although there is some inconsistency in our judgments for Experience that we should examine further. (I have not examined this case to see whether rank reversal could be happening.)

[code language="bash" gutter="false"]
> print(nofxnAhp, "weight")
levelName weight
1 Choose the Most Suitable Leader 1.00000000
2 ¦–Experience 0.54756924
3 ¦ ¦–Tom 0.21716561
4 ¦ ¦–Dick 0.71706504
5 ¦ °–Harry 0.06576935
6 ¦–Education 0.12655528
7 ¦ ¦–Tom 0.18839410
8 ¦ ¦–Dick 0.08096123
9 ¦ °–Harry 0.73064467
10 ¦–Charisma 0.26994992
11 ¦ ¦–Tom 0.74286662
12 ¦ ¦–Dick 0.19388163
13 ¦ °–Harry 0.06325174
14 °–Age 0.05592555
15 ¦–Tom 0.26543334
16 ¦–Dick 0.67162545
17 °–Harry 0.06294121

> print(fxnAhp, "weight")
levelName weight
1 Choose the Most Suitable Leader 1.00000000
2 ¦–Experience 0.54756924
3 ¦ ¦–Tom 0.25828499
4 ¦ ¦–Dick 0.63698557
5 ¦ °–Harry 0.10472943
6 ¦–Education 0.12655528
7 ¦ ¦–Tom 0.08273483
8 ¦ ¦–Dick 0.26059839
9 ¦ °–Harry 0.65666678
10 ¦–Charisma 0.26994992
11 ¦ ¦–Tom 0.74286662
12 ¦ ¦–Dick 0.19388163
13 ¦ °–Harry 0.06325174
14 °–Age 0.05592555
15 ¦–Tom 0.26543334
16 ¦–Dick 0.67162545
17 °–Harry 0.06294121

> ShowTable(fxnAhp)
[/code]

[Image: ShowTable(fxnAhp) output for the Tom-Dick-Harry problem]

Here is the full YAML file for the “with preferenceFunction” case.

[code language="bash" gutter="false"]
#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1 = not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################
# Goal Section
#
Goal:
# A Goal HAS preferences (within-level comparison) and HAS Children (items in level)
  name: Choose the Most Suitable Leader
  preferences:
    # preferences are defined pairwise
    # 1 means: A is equal to B
    # 9 means: A is highly preferable to B
    # 1/9 means: B is highly preferable to A
    - [Experience, Education, 4]
    - [Experience, Charisma, 3]
    - [Experience, Age, 7]
    - [Education, Charisma, 1/3]
    - [Education, Age, 3]
    - [Age, Charisma, 1/5]
  children:
    Experience:
      preferenceFunction: >
        ExperiencePreference <- function(a1, a2) {
          if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
          ratio <- a1$experience / a2$experience
          if (ratio < 1.05) return (1)
          if (ratio < 1.2) return (2)
          if (ratio < 1.5) return (3)
          if (ratio < 1.8) return (4)
          if (ratio < 2.1) return (5)
          return (6)
        }
      children: *alternatives
    Education:
      preferenceFunction: >
        EducPreference <- function(a1, a2) {
          if (a1$education < a2$education) return (1/EducPreference(a2, a1))
          ratio <- a1$education / a2$education
          if (ratio < 1.05) return (1)
          if (ratio < 1.15) return (2)
          if (ratio < 1.25) return (3)
          if (ratio < 1.35) return (4)
          if (ratio < 1.55) return (5)
          return (5)
        }
      children: *alternatives
    Charisma:
      preferences:
        - [Tom, Dick, 5]
        - [Tom, Harry, 9]
        - [Dick, Harry, 4]
      children: *alternatives
    Age:
      preferences:
        - [Tom, Dick, 1/3]
        - [Tom, Harry, 5]
        - [Dick, Harry, 9]
      children: *alternatives
#
# End of Goal Section
#####################################
[/code]

Analytic Hierarchy Process (AHP) with the ahp Package

On my December to-do list, I had "write an R package to make analytic hierarchy process (AHP) easier" — but fortunately gluc beat me to it, and saved me tons of time when I used AHP to do an actual research problem. First of all, thank you for writing the new ahp package! Next, I'd like to show everyone just how easy this package makes performing AHP and displaying the results. We will use the Tom, Dick, and Harry example that is described on Wikipedia: the goal is to choose a new employee, and you can pick either Tom, Dick, or Harry. Read the problem statement on Wikipedia before proceeding.

AHP is a method for multi-criteria decision making that breaks the problem down based on decision criteria, subcriteria, and alternatives that could satisfy a particular goal. The criteria are compared to one another, the alternatives are compared to one another based on how well they comparatively satisfy the subcriteria, and then the subcriteria are examined in terms of how well they satisfy the higher-level criteria. The Tom-Dick-Harry problem is a simple hierarchy: only one level of criteria separates the goal (“Choose the Most Suitable Leader”) from the alternatives (Tom, Dick, or Harry):

[Image: decision hierarchy for the Tom-Dick-Harry leader example]

To use the ahp package, the most challenging part involves setting up the YAML file with your hierarchy and your rankings. THE MOST IMPORTANT THING TO REMEMBER IS THAT THE FIRST COLUMN IN WHICH A WORD APPEARS IS IMPORTANT — YAML is indentation-sensitive. This feels like FORTRAN. YAML experts may be appalled that I just didn't know this, but I didn't. So most of the first 20 hours I spent stumbling through the ahp package involved coming to this very critical conclusion. The YAML AHP input file requires you to specify 1) the alternatives (along with some variables that describe the alternatives; I didn't use them in this example, but I'll post a second example that does use them) and 2) the goal hierarchy, which includes 2A) comparisons of all the criteria against one another FIRST, and then 2B) comparisons of the alternatives against one another with respect to each criterion. I saved my YAML file as tomdickharry.txt and put it in my C:/AHP/artifacts directory:

[code language="bash" gutter="false"]
#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1 = not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################
# Goal Section
#
Goal:
# A Goal HAS preferences (within-level comparison) and HAS Children (items in level)
  name: Choose the Most Suitable Leader
  preferences:
    # preferences are defined pairwise
    # 1 means: A is equal to B
    # 9 means: A is highly preferable to B
    # 1/9 means: B is highly preferable to A
    - [Experience, Education, 4]
    - [Experience, Charisma, 3]
    - [Experience, Age, 7]
    - [Education, Charisma, 1/3]
    - [Education, Age, 3]
    - [Age, Charisma, 1/5]
  children:
    Experience:
      preferences:
        - [Tom, Dick, 1/4]
        - [Tom, Harry, 4]
        - [Dick, Harry, 9]
      children: *alternatives
    Education:
      preferences:
        - [Tom, Dick, 3]
        - [Tom, Harry, 1/5]
        - [Dick, Harry, 1/7]
      children: *alternatives
    Charisma:
      preferences:
        - [Tom, Dick, 5]
        - [Tom, Harry, 9]
        - [Dick, Harry, 4]
      children: *alternatives
    Age:
      preferences:
        - [Tom, Dick, 1/3]
        - [Tom, Harry, 5]
        - [Dick, Harry, 9]
      children: *alternatives
#
# End of Goal Section
#####################################
[/code]

Next, I installed gluc’s ahp package and a helper package, data.tree, then loaded them into R:

[code language="bash" gutter="false"]
devtools::install_github("gluc/ahp", build_vignettes = TRUE)
install.packages("data.tree")

library(ahp)
library(data.tree)
[/code]

Running the calculations was ridiculously easy:

[code language="bash" gutter="false"]
setwd("C:/AHP/artifacts")
myAhp <- LoadFile("tomdickharry.txt")
Calculate(myAhp)
[/code]

And then generating the output was also ridiculously easy:

[code language="bash" gutter="false"]
> GetDataFrame(myAhp)
Weight Dick Tom Harry Consistency
1 Choose the Most Suitable Leader 100.0% 49.3% 35.8% 14.9% 4.4%
2 ¦–Experience 54.8% 39.3% 11.9% 3.6% 3.2%
3 ¦–Education 12.7% 1.0% 2.4% 9.2% 5.6%
4 ¦–Charisma 27.0% 5.2% 20.1% 1.7% 6.1%
5 °–Age 5.6% 3.8% 1.5% 0.4% 2.5%
>
> print(myAhp, "weight", filterFun = isNotLeaf)
levelName weight
1 Choose the Most Suitable Leader 1.00000000
2 ¦–Experience 0.54756924
3 ¦–Education 0.12655528
4 ¦–Charisma 0.26994992
5 °–Age 0.05592555
> print(myAhp, "weight")
levelName weight
1 Choose the Most Suitable Leader 1.00000000
2 ¦–Experience 0.54756924
3 ¦ ¦–Tom 0.21716561
4 ¦ ¦–Dick 0.71706504
5 ¦ °–Harry 0.06576935
6 ¦–Education 0.12655528
7 ¦ ¦–Tom 0.18839410
8 ¦ ¦–Dick 0.08096123
9 ¦ °–Harry 0.73064467
10 ¦–Charisma 0.26994992
11 ¦ ¦–Tom 0.74286662
12 ¦ ¦–Dick 0.19388163
13 ¦ °–Harry 0.06325174
14 °–Age 0.05592555
15 ¦–Tom 0.26543334
16 ¦–Dick 0.67162545
17 °–Harry 0.06294121
[/code]
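In case you're wondering where those criterion weights (0.5476, 0.1266, 0.2699, 0.0559) come from: the ahp package computes them for you, but you can reproduce them approximately in plain R by building the pairwise comparison matrix from the Goal preferences above and taking its normalized principal eigenvector. This is just a sketch of the standard AHP calculation, not code from the ahp package:

[code language="r" gutter="false"]
# Pairwise comparison matrix for the four criteria, taken straight from the
# Goal preferences in the YAML file (rows and columns in the same order)
crit <- c("Experience","Education","Charisma","Age")
A <- matrix(c(  1,   4,   3, 7,
              1/4,   1, 1/3, 3,
              1/3,   3,   1, 5,
              1/7, 1/3, 1/5, 1),
            nrow = 4, byrow = TRUE, dimnames = list(crit, crit))

# The normalized principal eigenvector gives the criterion priorities
w <- Re(eigen(A)$vectors[, 1])
round(w / sum(w), 3)   # roughly 0.548, 0.127, 0.270, 0.056 -- matching the weights above
[/code]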

You can also generate very beautiful output with the command below (but you’ll have to run the example yourself if you want to see how fantastically it turns out — maybe that will provide some motivation!)

[code language="bash" gutter="false"]
ShowTable(myAhp)
[/code]

I’ll post soon with an example of how to use AHP preference functions in the Tom, Dick, & Harry problem.

Course Materials for Statistics (The Easier Way) With R

Are you an instructor with a Spring 2016 intro to statistics course coming up… and yet you haven't started preparing? If so, I have a potential solution for you to consider. The materials (lecture slides, in-class practice problems in R, exams, syllabus) go with my book and are about 85% compiled, but good enough to get a course started this week (I will be finishing the collection by mid-January).
There is also a 36MB .zip file if you would like to download the materials.
Whether you will be using them or just considering them, please fill in the Google Form at https://docs.google.com/forms/d/1Z7djuKHg1L4k7bTtktHXI7Juduad3fUW9G69bxN6jRA/viewform so I can keep track of everyone and provide you with updates. Also, I want to make sure that I’m providing the materials to INSTRUCTORS (and not students), so please use the email account from your institution when you sign up. Once I get your contact information, I will email you the link to the materials.
If you would like permission to edit the materials, I can do that as well — I know a couple of you have expressed that you would like to add to the collection (e.g., translate them to another language). If you see any issues or errors, please either fix them or email me so I can fix them!
Thanks for your interest and participation! Also, Happy New Year!

My First (R) Shiny App: An Annotated Tutorial

Image Credit: Doug Buckley of http://hyperactive.to

I’ve been meaning to learn Shiny for 2 years now… and thanks to a fortuitous email from @ImADataGuy this morning and a burst of wild coding energy about 5 hours ago, I am happy to report that I have completely fallen in love again. The purpose of this post is to share how I got my first Shiny app up and running tonight on localhost, how I deployed it to the http://shinyapps.io service, and how you can create a “Hello World” style program of your own that actually works on data that’s meaningful to you.

If you want to create a “Hello World!” app with Shiny (and your own data!) just follow these steps:

0. Install R 3.2.0+ first! This will save you time.
1. I signed up for an account at http://shinyapps.io.
2. Then I clicked the link in the email they sent me.
3. That allowed me to set up my https://radziwill.shinyapps.io location.
4. Then I followed the instructions at https://www.shinyapps.io/admin/#/dashboard
(This page has SPECIAL SECRET INFO CUSTOMIZED JUST FOR YOU ON IT!!) I had lots 
of problems with devtools::install_github('rstudio/shinyapps') - Had to go 
into my R directory, manually delete RCurl and digest, then 
reinstall both RCurl and digest... then installing shinyapps worked.
Note: this last command they tell you to do WILL NOT WORK because you do not have an app yet! 
If you try it, this is what you'll see:
> shinyapps::deployApp('path/to/your/app')
Error in shinyapps::deployApp("path/to/your/app") : 
C:\Users\Nicole\Documents\path\to\your\app does not exist
5. Then I went to http://shiny.rstudio.com/articles/shinyapps.html and installed rsconnect.
6. I clicked on my name and gravatar in the upper right hand corner of the 
https://www.shinyapps.io/admin/#/dashboard window I had opened, and then clicked 
"tokens". I realized I'd already done this part, so I skipped down to read 
"A Demo App" on http://shiny.rstudio.com/articles/shinyapps.html
7. Then, I re-installed ggplot2 and shiny using this command:
install.packages(c('ggplot2', 'shiny'))
8. I created a new directory (C:/Users/Nicole/Documents/shinyapps) and used
setwd to get to it.
9. I pasted the code at http://shiny.rstudio.com/articles/shinyapps.html to create two files, 
server.R and ui.R, which I put into my new shinyapps directory 
under a subdirectory called demo. The subdirectory name IS your app name.
10. I typed runApp("demo") into my R console, and voila! The GUI appeared in 
my browser window on my localhost.
-- Don't just try to close the browser window to get the Shiny app 
to stop. R will hang. To get out of this, I had to use Task Manager and kill R.
--- Use the main menu, and do Misc -> Stop Current Computation
11. I did the same with the "Hello Shiny" code at http://shiny.rstudio.com/articles/shinyapps.html. 
But what I REALLY want is to deploy a hello world app with MY OWN data. You know, something that's 
meaningful to me. You probably want to do a test app with data that is meaningful to you... here's 
how you can do that.
12. A quick search shows that I need jennybc's (Github) googlesheets package to get 
data from Google Drive viewable in my new Shiny app.
13. So I tried to get the googlesheets package with this command:
devtools::install_github('jennybc/googlesheets')
but then found out it requires R version 3.2.0. If you already have 3.2.0 you can skip
to step 16 now.
14. So I reinstalled R using the installr package (highly advised if you want to 
overcome the agony of upgrading on windows). 
See http://www.r-statistics.com/2013/03/updating-r-from-r-on-windows-using-the-installr-package/
for info -- all it requires is that you type installR() -- really!
15. After installing R I restarted my machine. This is probably the first time in a month that 
I've shut all my browser windows, documents, spreadsheets, PDFs, and R sessions. I got the feeling 
that this made my computer happy.
16. Then, I created a Google Sheet with my data. While viewing that document, I went to 
File -> "Publish to the Web". I also discovered that my DOCUMENT KEY is that 
looooong string in the middle of the address, so I copied it for later:
1Bs0OH6F-Pdw5BG8yVo2t_VS9Wq1F7vb_VovOmnDSNf4
17. Then I created a new directory in C:/Users/Nicole/Documents/shinyapps to test out 
jennybc's googlesheets package, and called it jennybc
18. I copied and pasted the code in her server.R file and ui.R file
from https://github.com/jennybc/googlesheets/tree/master/inst/shiny-examples/01_read-public-sheet 
into files with the same names in my jennybc directory
19. I went into my R console, used getwd() to make sure I was in the
C:/Users/Nicole/Documents/shinyapps directory, and then typed
 runApp("jennybc")
20. A browser window popped up on localhost with her test Shiny app! I played with it, and then 
closed that browser tab.
21. When I went back into the R console, it was still hanging, so I went to the menu bar 
to Misc -> Stop Current Computation. This brought my R prompt back.
22. Now it was time to write my own app. I went to http://shiny.rstudio.com/gallery/ and
found a layout I liked (http://shiny.rstudio.com/gallery/tabsets.html), then copied the 
server.R and ui.R code into C:/Users/Nicole/Documents/shinyapps/my-hello -- 
and finally, tweaked the code and engaged in about 100 iterations of: 1) edit the two files, 
2) type runApp("my-hello") in the R console, 3) test my Shiny app in the 
browser window, 4) kill browser window, 5) do Misc -> Stop Current Computation 
in R. ALL of the computation happens in server.R, and all the display happens in ui.R:

server.R:

library(shiny)
library(googlesheets)
library(DT)

my_key <- "1Bs0OH6F-Pdw5BG8yVo2t_VS9Wq1F7vb_VovOmnDSNf4"
my_ss <- gs_key(my_key)
my_data <- gs_read(my_ss)

shinyServer(function(input, output, session) {
 output$plot <- renderPlot({
 my_data$type <- ordered(my_data$type,levels=c("PRE","POST"))
 boxplot(my_data$score~my_data$type,ylim=c(0,100),boxwex=0.6)
 })
 output$summary <- renderPrint({
 aggregate(score~type,data=my_data, summary)
 })
 output$the_data <- renderDataTable({
 datatable(my_data)
 })

})
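
One assumption worth making explicit (it's implied by the plotting code above, but never stated): the Google Sheet behind that document key has at least two columns named score and type, where type takes the values PRE and POST. If you point gs_key() at your own sheet, just make sure the column names used in server.R match your data. A sheet shaped roughly like this (the values are only placeholders) would work:

score  type
72     PRE
88     POST
...    ...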

ui.R:

library(shiny)
library(shinythemes)
library(googlesheets)

shinyUI(fluidPage(
 
 # Application title
 titlePanel("Nicole's First Shiny App"),
 
 # Sidebar with controls to select the random distribution type
 # and number of observations to generate. Note the use of the
 # br() element to introduce extra vertical spacing
 sidebarLayout(
 sidebarPanel(
     helpText("This is my first Shiny app!! It grabs some of my data
from a Google Spreadsheet, and displays it here. I also used lots of
examples from"),
     h6(a("http://shiny.rstudio.com/gallery/",
href="http://shiny.rstudio.com/gallery/", target="_blank")),
     br(),
     h6(a("Click Here for a Tutorial on How It Was Made",
href="http://qualityandinnovation.com/2015/12/08/my-first-shiny-app-an-annotated-tutorial/",
      target="_blank")),
      br()
 ),
 
 # Show a tabset that includes a plot, summary, and table view
 # of the generated distribution
 mainPanel(
    tabsetPanel(type = "tabs", 
    tabPanel("Plot", plotOutput("plot")), 
    tabPanel("Summary", verbatimTextOutput("summary")), 
    tabPanel("Table", DT::dataTableOutput("the_data"))
 )
 )
 )
))


23. Once I decided my app was good enough for my practice round, it was time to 
deploy it to the cloud.
24. This part of the process requires the shinyapps and dplyr 
packages, so be sure to install them:

devtools::install_github('hadley/dplyr')
library(dplyr)
devtools::install_github('rstudio/shinyapps')
library(shinyapps)
25. To deploy, all I did was this: setwd("C:/Users/Nicole/Documents/shinyapps/my-hello/")
deployApp()

CHECK OUT MY SHINY APP!!

Logistic Growth, S Curves, Bifurcations, and Lyapunov Exponents in R

If you’ve ever wondered how logistic population growth (the Verhulst model), S curves, the logistic map, bifurcation diagrams, sensitive dependence on initial conditions, “orbits”, deterministic chaos, and Lyapunov exponents are related to one another… this post explains it in just 10 steps, each with some code in R so you can explore it all yourself. I’ve included some code written by other people who have explored this problem (cited below) as portions of my own code.

It all starts with a hypothesized population… and a process where the size of the population changes over time. We want to understand how (and under what conditions) those changes occur, so we choose a model that characterizes population changes: the logistic growth model. It’s been used in biology, ecology, econometrics, marketing, and other areas.


1. The logistic growth model describes how the size of a population (N) changes over time (t), based on some maximum population growth rate (r). There is a limiting factor called the carrying capacity (K) which represents the total population that the environment could support, based on the amount of available resources. dN/dt is the rate of change of the population over time.

[Equation: dN/dt = r * N * (1 - N/K)]


2. You can simplify the logistic growth model by defining a new variable x to represent the portion of the population that’s alive, compared to the total population that the environment could support (and keep alive). So with x = N/K, you get a new differential equation in terms of x. Now we are looking at the rate of change of the population fraction over time. Once x = N/K = 1, the environment can’t support any more members in the population:

[Equation: dx/dt = r * x * (1 - x), where x = N/K]


3. You can solve this equation by integration! Then, you’ll have an expression that you can use to calculate x (which is still the population fraction) for any time t. This is called the sigmoid or (more commonly), the S Curve. To compute x at any time t, all we need to know is how big the population was when we started looking at it (x0) and the maximum growth rate r:

[Equation: x(t) = x0 * exp(r*t) / (1 + x0 * (exp(r*t) - 1))]
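If you'd like to see what that closed-form solution looks like, here's a quick sketch in R (the starting fraction x0 and growth rate r below are arbitrary choices, just for illustration):

# plot the closed-form S curve for an arbitrary starting fraction and growth rate
x0 <- 0.01
r <- 1
t <- seq(0, 12, 0.1)
x.t <- x0*exp(r*t) / (1 + x0*(exp(r*t) - 1))
plot(t, x.t, type="l", main="Logistic (S) Curve", xlab="t", ylab="x(t)")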


4. The equation for the S Curve is deterministic and continuous. If we want to solve it numerically, we have to discretize it by chopping up that continuous axis that contains time into little tiny pieces of time. That’s what produces the difference equation that we recognize as the logistic map. It’s a map because it “maps” each value of the sequence onto the next value in the sequence. As long as you know one of those values for x (indicated by the subscript n), you’ll be able to figure out the next value of x (indicated by the subscript n+1). The value x[n] is the population fraction of the current generation, and the value x[n+1] is the population fraction for the next generation. This makes the logistic map a Markov chain. If you plot x[n] on the x axis and x[n+1] on the y axis, this expression will produce the familiar upside down parabola:

[Equation: x[n+1] = r * x[n] * (1 - x[n])]
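Here's a quick way to draw that parabola for yourself (the value of r is arbitrary; any r between 0 and 4 gives the same shape, just scaled):

# plot x[n+1] = r*x[n]*(1-x[n]) against x[n] for one arbitrary choice of r
r <- 3.6
x.n <- seq(0, 1, length=100)
plot(x.n, r*x.n*(1-x.n), type="l", xlab=expression(x[n]), ylab=expression(x[n+1]))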


5. The logistic map behaves differently depending upon the maximum growth rate (r) that describes your population. This parameter is also called fecundity and represents how rabbit-like your population is reproducing. The higher the r, the more productive, like rabbits (although I’m not sure precisely which r you’d choose if you were studying rabbits). Here is an R function that you can use to generate the last M iterations from a sequence of N total, developed and described at Mage’s Blog:

logistic.map <- function(r, x, N, M) {
 ## from http://www.magesblog.com/2012/03/logistic-map-feigenbaum-diagram.html
 ## r: bifurcation parameter
 ## x: initial value, something greater than 0 and less than 1
 ## N: number of iterations (total)
 ## M: number of iteration points to be returned
   z <- 1:N
   z[1] <- x
   for(i in c(1:(N-1))){
     z[i+1] <- r *z[i] * (1 - z[i])
   }
   ## Return the last M iterations 
   z[c((N-M):N)]
}

6. The logistic map has many interesting properties, but here are two in particular (the first in this step, and the second in Step 8). First, for several values you can choose for r, the chain converges to a single value (or fixed point) when n gets really big. For other values of r, the value of x will eventually bounce between two values instead of converging (a limit cycle of 2). For still other values of r, the value of x will eventually bounce between four values instead of converging. Sometimes, x will bounce around a near limitless collection of values (a condition called deterministic chaos). The eventual value (or collection of eventual values, if they bounce between values) is called an orbit. For example, when the growth rate r is 2.6, the logistic map rapidly converges to an orbit of about 0.615:

plot(logistic.map(2.6,.01,20,20), type="l")

[Plot: the logistic map converging to a fixed point of about 0.615 for r = 2.6]
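The converging case is just one of the behaviors described above. If you also want to see a 2-cycle and a chaotic orbit (these particular plots aren't part of the original figures, but they use the same logistic.map function from Step 5), try:

par(mfrow=c(1,2))
plot(logistic.map(3.2,.01,200,40), type="l", main="r=3.2: orbit bounces between two values")
plot(logistic.map(3.9,.01,200,40), type="l", main="r=3.9: deterministic chaos")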


7. Sometimes, it can be nice to take a look at how the values bounce around, and where they eventually converge (or not). To do this, we use cobweb diagrams (which are also sometimes called web diagrams). I used a function that I found at http://bayesianbiologist.com to plot the behavior of the orbits for r=2.6, r=3.3, and r=3.9:

logistic.cobweb <- function(r, N=100) {
# code adapted from http://bayesianbiologist.com/tag/cobweb-plot/
# N (added here so the function is self-contained) sets how many points are
# used to draw the parabola and how many cobweb steps are traced
 x <- seq(0,1,length=N)
 x_next <- r*x*(1-x)
 plot(x,x_next,type="l",xlim=c(0,1), ylim=c(0,1), main=paste("r=",r),
 xlab=expression(x[t]),ylab=expression(x[t+1]), col="red", lwd=2)
 abline(0,1)

 # start at a random spot on the x-axis and start with a vertical line:
 start=runif(1,0,1)
 vert=FALSE
 lines(x=c(start,start),y=c(0,r*start*(1-start)) )
 for(i in 1:(2*N)) {
 if(vert) {
   lines(x=c(start,start),y=c(start,r*start*(1-start)) )
   vert=FALSE
 } else {
   lines(x=c(start, r*start*(1-start)), y=c(r*start*(1-start), r*start*(1-start)) )
   vert=TRUE
   start=r*start*(1-start)
 }
 }
}

par(mfrow=c(1,3))
logistic.cobweb(2.6)
logistic.cobweb(3.3)
logistic.cobweb(3.9)

[Plot: cobweb diagrams for r = 2.6, r = 3.3, and r = 3.9]


8. (Remember to dev.off() before you continue.) Second, for some values of r, the logistic map shows sensitive dependence on initial conditions. For example, let's see what happens for two different growth rates (r=3 and r=3.9) when we start one iteration with an x[n] of 0.5 COLORED BLACK, and another one with an x[n] of 0.5001 COLORED RED. It's a small, small difference that can lead to big, BIG variations in the orbits. In the r=3 case, the chain produced by the logistic map with x[n] of 0.5 (in black) is IDENTICAL to the chain produced by the logistic map with x[n] of 0.5001 (in red). That's why you can't see the black… the values are the same! But for the r=3.9 case, the chain produced by the logistic map with x[n] of 0.5 (in black) RAPIDLY DIVERGES from the chain produced by the logistic map with x[n] of 0.5001 (in red). They are very different, despite a very tiny difference in initial conditions! The logistic map for r=3.9 shows a very sensitive dependence on initial conditions.

par(mfrow=c(2,1))
first <- logistic.map(3,.5,120,100)
second <- logistic.map(3,.5001,120,100)
plot(1:length(first),first,type="l",main="r=3 is not sensitive to initial conditions")
lines(1:length(second),second,type="l",col="red")
first <- logistic.map(3.9,.5,120,100)
second <- logistic.map(3.9,.5001,120,100)
plot(1:length(first),first,type="l",main="but r=3.9 is EXTREMELY sensitive")
lines(1:length(second),second,type="l",col="red")

[Plot: orbits for r = 3 (insensitive) and r = 3.9 (sensitive), starting from 0.5 (black) and 0.5001 (red)]


9. For any chain, we can determine just how sensitive the logistic map is to initial conditions by looking at the Lyapunov exponent. Very simplistically, if the Lyapunov exponent is negative, the chain will converge to one or more fixed points for that value of r. If the Lyapunov exponent is positive, the chain will demonstrate deterministic chaos for that value of r. If the Lyapunov exponent is zero, there is a bifurcation: a 1-cycle is doubling to a 2-cycle, a 2-cycle is doubling to a 4-cycle, and so forth. The top chart shows an approximation of the Lyapunov exponent based on the first 500 iterations (ideally, you'd use an infinite number, but that would eat up too much computing time), and the bottom chart shows a bifurcation diagram. You'll notice that the Lyapunov exponents are zero where a bifurcation occurs. To interpret the bifurcation diagram, just remember that each vertical slice through it represents the results of ONE COMPLETELY CONVERGED CHAIN from the logistic map. So it shows the results from many, many, many completely converged chains – and provides an excellent way for us to look at the behavior of MANY different types of populations in just one chart:

n <- 400
XI <- lya <- 0
x <- seq(0,4,0.01)   # the grid of growth rates (r) to evaluate
for (i in 1:n) {
 # for each growth rate, iterate the logistic map and keep the last 501 values
 xi <- logistic.map(x[i],.01,500,500)
 XI <- rbind(XI,xi)
}
for (i in 1:length(x)) { 
 # rough approximation of the Lyapunov exponent: sum of log|f'(x_n)| over the
 # saved orbit, where f'(x) = r - 2*r*x for the logistic map (scaled by a
 # constant; the important feature is where the curve crosses zero)
 lya[i] <- sum(log(abs(x[i]-(2*x[i]*XI[i,]))))/length(x) 
}
plot(x,lya,ylim=c(-4,1),xlim=c(0,4),type="l",main="Lyapunov Exponents for Logistic Map")
abline(h=0, lwd=2, col="red")
# next 3 lines from http://www.magesblog.com/2012/03/logistic-map-feigenbaum-diagram.html:
my.r <- seq(0, 4, by=0.003)
Orbit <- sapply(my.r, logistic.map, x=0.1, N=1000, M=300)
r <- sort(rep(my.r, 301))

par(mfrow=c(2,1))
plot(x,lya,ylim=c(-5,1),xlim=c(0,4),type="l",main="Lyapunov Exponents for Logistic Map")
abline(h=0, col="red", lwd=2)
abline(v=3, col="blue", lwd=2)
plot(r, Orbit, pch=".", cex=0.5, main="Bifurcation Diagram for r=0 to r=4 Logistic Maps")
abline(v=3, col="blue", lwd=2)

[Plot: approximate Lyapunov exponents (top) and bifurcation diagram (bottom) for the logistic map, r from 0 to 4]

10. Notice that in the bifurcation diagram, we can easily see that when r is between 0 and 1, the population converges to extinction. This makes sense, because the growth rate is smaller than what’s required to sustain the size of the population. You might like to zoom in, though, and see what the orbits look like for some smaller portions of the diagram. Here’s how you can do it (but be sure to refresh your graphics window with dev.off() before you try it). Try changing the plot character (pch) too, or maybe the size of the characters with cex=0.2 or cex=0.5 in the last line:

# adapted from http://www.magesblog.com/2012/03/logistic-map-feigenbaum-diagram.html:
my.r <- seq(3.5, 4, by=0.003)
Orbit <- sapply(my.r, logistic.map, x=0.1, N=1000, M=300)
multiplier <- length(Orbit)/length(my.r)
r <- sort(rep(my.r, multiplier))
plot(r, Orbit, pch=".")

[Plot: zoomed bifurcation diagram for r between 3.5 and 4]

That’s it!


Find out more information on these other web pages, which are listed in order of difficulty level:
