Category Archives: Data Science

A Decade of PhD: What I’ve Learned from Academia and Industry

Today is Cinco de Mayo! It’s also the 10th Anniversary of my PhD defense…. something I carefully timed for late afternoon on this day in 2009. (I wanted to make sure I could celebrate the joyful occasion — or drown my sorrows — with 2-for-1 margaritas. Fortunately, the situation was liquid joy; unfortunately, I still got a hangover.)

I’m writing this post to share what I’ve learned about the value of getting a PhD (is there value?) and the applicability of PhD-level work to industry. If you’re considering more education, maybe this will help you decide whether it’s the right choice. If you’re in industry and trying to figure out whether to hire PhDs, some of what I write here might help. But first, some background!

I never even thought I’d get a PhD — it certainly didn’t happen out of intent or design. My family was poor and I studied ridiculously hard so I could “escape it.” I didn’t think I was smart enough for a PhD, even though I started college at 16 taking half undergrad classes and half grad classes in meteorology. I aced my grad classes and very maturely ignored my required classes, so I got kicked out. (At the same time, I wasn’t really fitting in with people… my roommate called me “Nerdcole”.) When I was let back in, the department head wouldn’t let me take any grad classes, so I got bored and burned out… not surprising, since I was supporting myself and working three jobs to make that happen. I quit school to work at an e-commerce startup when I was 18. A few months later, thanks to (good) peer pressure, I took three credit-by-exam tests to see if they would get me over the finish line, and thanks to some side skills I had picked up in vector calculus and statistics, it worked and I got the BS. But I was still left with a pretty bad GPA, and even worse self-esteem, and I was convinced no one would ever let me into grad school.

I figured I’d focus on industry and help companies grow. There was no other choice.

The Back Story

After spending a couple years building web sites and storefronts (a huge feat in 1995 and 1996!) I took a job at a national lab as a systems analyst, supporting older scientists and engineers and helping them get work done. The main lesson I learned during this time was: Alignment between strategy and objectives doesn’t come for free (teams of people have to spend dedicated time on it), and most people are really disorganized. There had to be a better way to get work done.

A few years later, I was a traveling Solutions Architect, parachuted once or twice a month into CRM software implementation fiascos around the globe. My job was to figure out what to do to turn these jobs around — was it a people problem? An architecture problem? A training problem? A systems thinking problem? A little of everything? I had a couple weeks to make a recommendation, and then I was on to the next project (results were usually pretty good). But since this required evaluating technology decisions in the context of business and financial constraints, my boss suggested that I use the tuition benefits offered by my job to get an MBA. I had taken 9 credits of science and industrial engineering classes since I’d graduated, so I contacted two of the local MBA schools to see if they’d accept me and my credits. Sure enough, one of them did! I took evening classes for a year and a half, and eventually ended up with an MBA. But I never thought I could (or would) go farther — I’m not that smart, I’d tell myself. Also, it’s expensive. Also, a PhD would probably make me less marketable. (All lies, spoken by a lack of confidence.)

Shortly thereafter, the travel started to get to me (I was flying at least three days a week), so I looked for an opportunity to grow and cultivate a software development organization. (That’s how I ended up in Data Management at NRAO.) A little management led to a lot of management. A few years later one of the organization’s leaders said it was “too bad I didn’t have a PhD” — because in a highly scientific and technical organization, it would give me more credibility and make me a better leader.

“Will you pay for it?” I asked. “Sure,” they said. I just had to find a suitable program that wouldn’t require me to go full time. I’ve always loved learning, and I couldn’t resist the temptation of free education — even if it meant I’d have to balance the demands of a challenging full-time job and a first-time baby at the same time. That’s how much I love learning, just for learning’s sake! I still didn’t think a PhD had that much value, unless you were studying to be a lab scientist or you were dead set on becoming a historian and teaching for the rest of your life. None of these personas was me, but the free education thing sold me, and I didn’t really think about how relevant this step was to my career direction until much later.

Fortunately, I found the perfect program for me — a hybrid academic/practitioner PhD that would help me develop the research and analytical skills to solve practical problems in business and industrial technology.

The next few years were pretty rough, and by the time I got my PhD, I was in my 14th year of post-college professional employment. First lesson learned: it’s probably not the best move to start PhD coursework when you have a three-month old. I have no idea how I made it through.

Shortly after graduating, the impacts of the financial crisis hit our federally funded organization and I was able to segue into a second career as a college professor, teaching data science and manufacturing/EHSQ classes. For the past year, I’ve been back in industry (maybe permanently; we’ll see) and have a better sense of the value of PhDs in industry.

Value of Getting a PhD

There are lots of reasons I’m happy with the time I spent getting a PhD, other than the fact that it helped me get an entirely new job when the economy was down:

  • First and foremost, I’m a better critical thinker. It’s now my nature to look at all parts of a problem, examine the interactions between them, and make sure I have all the required background information needed to start working on a problem.
  • I’m a better writer too. I look at reports and presentations I wrote years ago, and can see all the holes and places where I made assumptions that weren’t valid.
  • I developed a new appreciation for clarity. Researchers want to make sure their messages, methodologies, and models are clear and unambiguous… and through that contrast, I was able to recognize that in industry, there’s often pressure to skip due diligence and move fast to perform. This pressure leads to ambiguity, which tends towards what I call “intellectual waste” – people assuming that they see a problem or a project the same way because they haven’t taken the time to guarantee clarity.
  • It’s easier for me to quickly determine whether information might be true or false, or whether there are gaps that need to be closed before moving forward. (It’s possible that this skill is more from grading and evaluating student work… something that’s orders of magnitude harder than it seems.)
  • I realized that words matter. Really thinking about how one person will respond to a word or phrase, and whether it conveys the meaning that you intend, is a craft — that’s enhanced by working with collaborators.
  • And although I knew this one prior to the PhD, I found that data matters. Where did your data come from? Can you access the original? What kind of people (or instruments) gathered it? Can you trust them? The quality of your data — and the suitability of the methods you choose — will impact the quality and integrity of the conclusions you generate from it. Awareness of these factors is essential.

Value of Caution

One of the biggest lessons was the most surprising. Early on in the PhD program I was told that my opinion didn’t count — regardless of how many years of experience I had. Every statement I made had to be backed up and cited, preferably using material that had been peer-reviewed by other qualified people. At first I was kind of offended by this… didn’t these academics have any sense of the value of actual real-world employment? Apparently not.

But something funny happened as I developed the habit of looking for solid references, distilling their messages, and citing them accurately: I became more careful. And in the evolution of my caution and attention to detail, the quality of my work — ANY work — improved tremendously. I was able to learn from what other people had discovered, and anticipate (and resolve) problems in advance. I learned that “standing on the shoulders of giants” actually means figuring out when solved problems already exist so you don’t waste time reinventing wheels.

Something else funny happened as soon as I graduated: all of a sudden, people were asking me for my opinion. But the habit of due diligence was so ingrained that I couldn’t express my opinion… I was compelled to back it up with facts!

(I think this was the point all along. Go figure.)

The beauty of going through the entire messy process of PhD coursework and comps and research and defense and editing — the entire end-to-end process, not cutting out in the middle anywhere — is that it gave me the discipline and process to root out accurate and complete answers to problems. Or, at the very least, to call out the gaps that stand in the way of getting there.

There’s a lot of pressure in industry to move fast, but due diligence is still critical for accurate self-assessment and effective cross-functional communication. Slowing down and figuring out how you know what you know — and making sure everyone is literally on the same page — can help your organization achieve its goals more quickly.

Value of PhDs to Industry

So employers (especially in tech) — should you hire PhDs? Yes. Here’s why:

  • PhDs are trained to find gaps in knowledge and understanding. Is your strategic plan grounded in reality, or is it just wishful thinking? Are your Project Charters well scoped, budgeted, and planned out? Is your workforce prepared to carry out your strategic initiatives?
  • Many PhDs with experience teaching undergrads are great at making complex topics accessible to other audiences. This is fantastic for training, cross-training, and marketing.
  • PhDs love research and writing, and can help you with gathering and interpreting data and content marketing.
  • PhDs love learning. Want to be on the cutting edge? They’re great in R&D… they can help you distill new insights from research papers and interpret and apply them accurately.
  • If you want to do AI or machine learning, or anything that uses Big Data, make sure you have at least one PhD statistician with practical analytical experience. They can prevent you from spending millions on dead ends and help you apply Occam’s Razor to avoid unnecessary complexity (the kind that can lead to technical debt later).

Will there be drawbacks? Sure. The habit of caution may need to be tempered somewhat — you don’t have to probe to the bottom of an issue to generate useful information that a business can use to make progress.

Bottom line… don’t be afraid of PhDs! We are mere mortals who just happen to have spent several years trying to figure out how to get to the core — the fundamental truth — of a complex problem. As a result we know how to approach problems like this — problems that many businesses have lots of. (We are not overqualified at all… we just have an extra skill set in something you desperately need, but may not realize you need it.)

Getting a PhD was challenging, frustrating, and maddening at times (especially the final part of getting my camera-ready text ready for ProQuest). I never planned to do it, but I’d totally do it again. I think my only regret is that I got a PhD in a hybrid business/industrial engineering discipline… it allowed me the freedom to pursue my interests, but if I were at the same crossroads now, I’d get a PhD in statistics to complement my MBA. Overall, this is a pretty tiny regret.

Imperfect Action is Better Than Perfect Inaction: What Harry Truman Can Teach Us About Loss Functions (with an intro to ggplot)

One of the heuristics we use at Intelex to guide decision making is former US President Truman’s advice that “imperfect action is better than perfect inaction.” What it means is — don’t wait too long to take action, because you don’t want to miss opportunities. Good advice, right?

When I share this with colleagues, I often hear a response like: “that’s dangerous!” To which my answer is “well sure, sometimes, but it can be really valuable depending on how you apply it!” The trick is: knowing how and when.

Here’s how it can be dangerous. For example, statistical process control (SPC) exists to keep us from tampering with processes — from taking imperfect action based on random variation, which will not only get us nowhere, but can exacerbate the problem we were trying to solve. The secret is to apply Truman’s heuristic with an understanding of exactly how much imperfection is OK for your organization, given its risk appetite. And this is where loss functions can help.

Along the way, we’ll demonstrate how to do a few important things related to plotting with the ggplot2 package in R, gradually adding new elements to the plot so you can see how it’s layered, including:

  • Plot a function based on its equation
  • Add text annotations to specific locations on a ggplot
  • Draw horizontal and vertical lines on a ggplot
  • Draw arrows on a ggplot
  • Add extra dots to a ggplot
  • Eliminate axis text and axis tick marks

What is a Loss Function?

A loss function quantifies how unhappy you’ll be based on the accuracy or effectiveness of a prediction or decision. In the simplest case, you control one variable (x) which leads to some cost or loss (y). For the case we’ll examine in this post, the variables are:

  • How much time and effort you put in to scoping and characterizing the problem (x); we assume that time+effort invested leads to real understanding
  • How much it will cost you (y); can be expressed in terms of direct costs (e.g. capex + opex) as well as opportunity costs or intangible costs (e.g. damage to reputation)

Here is an example of what this might look like, if you have a situation where overestimating (putting in too much x) OR underestimating (putting in too little x) are both equally bad. In this case, x=10 is the best (least costly) decision or prediction:

plot of a typical squared loss function
# describe the equation we want to plot
parabola <- function(x) ((x-10)^2)+10  

# initialize ggplot with a dummy dataset
library(ggplot2)
p <- ggplot(data = data.frame(x=0), mapping = aes(x=x)) 

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) +
     xlab("x = the variable you can control") + 
     ylab("y = cost of loss ($$)")

In regression (and other techniques where you’re trying to build a model to predict a quantitative dependent variable), mean square error is a squared loss function that helps you quantify error. It captures two facts: the farther away you are from the correct answer, the worse the error is; and overestimating is just as bad as underestimating (which is why you square the values). Across this and related techniques, the loss function captures these characteristics:

From http://www.cs.cornell.edu/courses/cs4780/2015fa/web/lecturenotes/lecturenote10.html
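
For instance, here’s a minimal sketch of how mean square error works in R; the actual and predicted values below are made up purely for illustration:

# hypothetical actuals and predictions, just to illustrate squared loss
actual    <- c(10, 12, 15, 18, 20)
predicted <- c(11, 11, 16, 20, 19)

squared.errors <- (actual - predicted)^2   # over- and under-estimates are penalized alike
mse <- mean(squared.errors)                # mean square error
mse
# [1] 1.6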

Not all loss functions have that general shape. For classification, for example, the 0-1 loss function tells the story that if you get a classification wrong (x < 0) you incur all the penalty or loss (y=1), whereas if you get it right (x > 0) there is no penalty or loss (y=0):

# set up data frame of red points
d.step <- data.frame(x=c(-3,0,0,3), y=c(1,1,0,0))

# note that the loss function really extends to x=-Inf and x=+Inf
ggplot(d.step) + geom_step(mapping=aes(x=x, y=y), direction="hv") +
     geom_point(mapping=aes(x=x, y=y), color="red") + 
     xlab("y* f(x)") + ylab("Loss (Cost)") +  
     ggtitle("0-1 Loss Function for Classification")

Use the Loss Function to Make Strategic Decisions

So let’s get back to Truman’s advice. Ideally, we want to choose the x (the amount of time and effort to invest into project planning) that results in the lowest possible cost or loss. That’s the green point at the nadir of the parabola:

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen")

Costs get higher as we move up the x-axis:

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green")

And time+effort grows as we move along the x-axis (we might spend minutes on a problem at the left of the plot, or weeks to years by the time we get to the right hand side):

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3)

Planning too Little = Planning too Much = Costly

What this means is — if we don’t plan, or we plan just a little bit, we incur high costs. We might make the wrong decision! Or miss critical opportunities! But if we plan too much — we’re going to spend too much time, money, and/or effort compared to the benefit of the solution we provide.


p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3) +
     annotate(geom="text",x=3, y=85, label="Little (or no) Planning\nHIGH COST", color="red") +
     annotate(geom="text", x=18, y=85, label="Paralysis by Planning\nHIGH COST", color="red") +
     geom_vline(xintercept=0, linetype="dotted") + geom_hline(yintercept=0, linetype="dotted")

The trick is to FIND THAT CRITICAL LEVEL OF TIME and EFFORT invested to gain information and understanding about your problem… and then if you’re going to err, make sure you err towards the left — if you’re going to make a mistake, make the mistake that costs less and takes less time to make:

arrow.x <- c(10, 10, 10, 10)
arrow.y <- c(35, 50, 65, 80)
arrow.x.end <- c(6, 6, 6, 6)
arrow.y.end <- arrow.y
d <- data.frame(arrow.x, arrow.y, arrow.x.end, arrow.y.end)

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3) +
     annotate(geom="text",x=3, y=85, label="Little (or no) Planning\nHIGH COST", color="red") +
     annotate(geom="text", x=18, y=85, label="Paralysis by Planning\nHIGH COST", color="red") +
     geom_vline(xintercept=0, linetype="dotted") + 
     geom_hline(yintercept=0, linetype="dotted") +
     geom_vline(xintercept=10) +
     geom_segment(data=d, mapping=aes(x=arrow.x, y=arrow.y, xend=arrow.x.end, yend=arrow.y.end),
     arrow=arrow(), color="blue", size=2) +
     annotate(geom="text", x=8, y=95, size=2.3, color="blue",
     label="we prefer to be\non this side of the\nloss function")

Moral of the Story

The moral of the story is… imperfect action can be expensive, but perfect action is ALWAYS expensive. Spend less to make mistakes and learn from them, if you can! This is one of the value drivers for agile methodologies… agile practices can help improve communication and coordination so that the loss function is minimized.

## FULL CODE FOR THE COMPLETELY ANNOTATED CHART ##
# If you change the equation for the parabola, annotations may shift and be in the wrong place.
parabola <- function(x) ((x-10)^2)+10

my.title <- expression(paste("Imperfect Action Can Be Expensive. But Perfect Action is ", italic("Always"), " Expensive."))

arrow.x <- c(10, 10, 10, 10)
arrow.y <- c(35, 50, 65, 80)
arrow.x.end <- c(6, 6, 6, 6)
arrow.y.end <- arrow.y
d <- data.frame(arrow.x, arrow.y, arrow.x.end, arrow.y.end)

p + stat_function(fun=parabola) + xlim(-2,23) + ylim(-2,100) + 
     xlab("Time Spent and Information Gained (e.g. person-weeks)") + ylab("$$ COST $$") +
     annotate(geom="text", x=10, y=5, label="Some Effort, Lowest Cost!!", color="darkgreen") +
     geom_point(aes(x=10, y=10), colour="darkgreen") +
     annotate(geom="text", x=0, y=100, label="$$$$$", color="green") +
     annotate(geom="text", x=0, y=75, label="$$$$", color="green") +
     annotate(geom="text", x=0, y=50, label="$$$", color="green") +
     annotate(geom="text", x=0, y=25, label="$$", color="green") +
     annotate(geom="text", x=0, y=0, label="$ 0", color="green") +
     annotate(geom="text", x=2, y=0, label="minutes\nof effort", size=3) +
     annotate(geom="text", x=20, y=0, label="months\nof effort", size=3) +
     annotate(geom="text",x=3, y=85, label="Little (or no) Planning\nHIGH COST", color="red") +
     annotate(geom="text", x=18, y=85, label="Paralysis by Planning\nHIGH COST", color="red") +
     geom_vline(xintercept=0, linetype="dotted") + 
     geom_hline(yintercept=0, linetype="dotted") +
     geom_vline(xintercept=10) +
     geom_segment(data=d, mapping=aes(x=arrow.x, y=arrow.y, xend=arrow.x.end, yend=arrow.y.end),
     arrow=arrow(), color="blue", size=2) +
     annotate(geom="text", x=8, y=95, size=2.3, color="blue",
     label="we prefer to be\non this side of the\nloss function") +
     ggtitle(my.title) +
     theme(axis.text.x=element_blank(), axis.ticks.x=element_blank(),
     axis.text.y=element_blank(), axis.ticks.y=element_blank()) 

Now sometimes you need to make this investment! (Think nuclear power plants, or constructing aircraft carriers or submarines.) Don’t get caught up in getting your planning investment perfectly optimized — but do be aware of the trade-offs, and go into the decision deliberately, based on the risk level (and regulatory nature) of your industry, and your company’s risk appetite.

Object of Type Closure is Not Subsettable

I started using R in 2004. I started using R religiously on the day of the annular solar eclipse in Madrid (October 3, 2005) after being inspired by David Hunter’s talk at ADASS.

It took me exactly 4,889 days to figure out what this vexing error means, even though trial and error helped me move through it most every time it happened! I’m so moved by what I learned (and embarrassed that it took me so long to make sense of it) that I have to share it with you.

This error happens when you’re trying to treat a function like a list, vector, or data frame. To fix it, start treating the function like a function.

Let’s take something that’s very obviously a function. I picked the sampling distribution simulator from a 2015 blog post I wrote. Cut and paste it into your R console:

sdm.sim <- function(n, src.dist=NULL, param1=NULL, param2=NULL) {
   r <- 10000  # Number of replications/samples - DO NOT ADJUST
   # This produces a matrix of observations with
   # n columns and r rows. Each row is one sample:
   my.samples <- switch(src.dist,
      "E" = matrix(rexp(n*r,param1),r),
      "N" = matrix(rnorm(n*r,param1,param2),r),
      "U" = matrix(runif(n*r,param1,param2),r),
      "P" = matrix(rpois(n*r,param1),r),
      "B" = matrix(rbinom(n*r,param1,param2),r),
      "G" = matrix(rgamma(n*r,param1,param2),r),
      "X" = matrix(rchisq(n*r,param1),r),
      "T" = matrix(rt(n*r,param1),r))
   all.sample.sums <- apply(my.samples,1,sum)
   all.sample.means <- apply(my.samples,1,mean)
   all.sample.vars <- apply(my.samples,1,var)
   par(mfrow=c(2,2))
   hist(my.samples[1,],col="gray",main="Distribution of One Sample")
   hist(all.sample.sums,col="gray",main="Sampling Distribution\nof the Sum")
   hist(all.sample.means,col="gray",main="Sampling Distribution\nof the Mean")
   hist(all.sample.vars,col="gray",main="Sampling Distribution\nof the Variance")
}

The right thing to do with this function is to use it to simulate a bunch of distributions and plot them using base R. You write the function name, followed by parentheses containing the four arguments the function needs to work. The call below draws samples from a normal distribution with a mean of 20 and a standard deviation of 3, and plots one sample along with three sampling distributions, using a sample size of 100 and 10,000 replications:

sdm.sim(100, src.dist="N", param1=20, param2=3)

(You should get four plots, arranged in a 2×2 grid.)

But what if we tried to treat sdm.sim like a list, and call the 3rd element of it? Or what if we tried to treat it like a data frame, and we wanted to call one of the variables in the column of the data frame?

> sdm.sim[3]
Error in sdm.sim[3] : object of type 'closure' is not subsettable

> sdm.sim$values
Error in sdm.sim$values : object of type 'closure' is not subsettable

SURPRISE! Object of type closure is not subsettable. This happens because sdm.sim is a function, and its data type is (shockingly) something called “closure”:

> class(sdm.sim)
[1] "function"

> typeof(sdm.sim)
[1] "closure"

I had read this before on Stack Overflow a whole bunch of times, but it never really clicked until I saw it like this! And now that I had a sense for why the error was occurring, it turns out it’s super easy to reproduce with functions from base R or from any familiar packages you use:

> ggplot$a
Error in ggplot$a : object of type 'closure' is not subsettable

> table$a
Error in table$a : object of type 'closure' is not subsettable

> ggplot[1]
Error in ggplot[1] : object of type 'closure' is not subsettable

> table[1]
Error in table[1] : object of type 'closure' is not subsettable

As a result, if you’re pulling your hair out over this problem, check and see where in your rogue line of code you’re treating something like a non-function. Or maybe you picked a variable name that competes with a function name. Or maybe you got your operations out of order. In any case, change your notation so that your function is doing function things, and your code should start working.
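
For example, here’s a sketch of the variable-name collision case. The variable name and values are hypothetical; the point is that df is also the name of a base R function (the F-distribution density), so until you assign your own df, subsetting it triggers the exact same error:

> df$price
Error in df$price : object of type 'closure' is not subsettable

> df <- data.frame(price = c(1, 2, 3))
> df$price
[1] 1 2 3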

But as Luke Smith pointed out, this is not true for functional sequences (which you can also write as functions). Functional sequences are those chains of commands you write when you’re in tidyverse mode, all strung together with %>% pipes:

Luke’s code that you can cut and paste (and try), with the top line that got cut off by Twitter, is below. (You’ll need the dplyr package loaded so that the %>% pipe and the verbs are available.)

fun1 <- . %>% 
    group_by(col1) %>% 
    mutate(n=n+h) %>% 
    filter(n==max(n)) %>% 
    ungroup() %>% 
    summarize(new_col=mean(n))

fun2 <- fun1[-c(2,5)] 

Even though Luke’s fun1 and fun2 are of type closure, they are subsettable because they contain a sequence of functions:

> typeof(fun1)
[1] "closure"

> typeof(fun2)
[1] "closure"

> fun1[1]
Functional sequence with the following components:

 1. group_by(., col1)

Use 'functions' to extract the individual functions. 

> fun1[[1]]
function (.) 
group_by(., col1)

Don’t feel bad! This problem has plagued all of us for many, many hours (me: years), and yet it still happens to us more often than we would like to admit. Awareness of this issue will not prevent you from attempting things that give you this error in the future. It’s such a popular error that there have been memes about it and sad valentines written about it:

SCROLL DOWN PAST STEPH’S TWEET TO SEE THE JOKE!!

(Also, if you’ve made it this far, FOLLOW THESE GOOD PEOPLE ON TWITTER: @stephdesilva @djnavarro @lksmth – They all share great information on data science in general, and R in particular. Many thanks too to all the #rstats crowd who shared in my glee last night and didn’t make me feel like an idiot for not figuring this out for ALMOST 14 YEARS. It seems so simple now.

Steph is also a Microsoft whiz who you should definitely hire if you need anything R+ Microsoft. Thanks to all of you!)

Quality 4.0: Reveal Hidden Insights with Data Sci & Machine Learning (Webinar)

Quality Digest

What’s Quality 4.0, why is it important, and how can you use it to gain competitive advantage? Did you know you can benefit from Quality 4.0 even if you’re not a manufacturing organization? That’s right. I’ll tell you more next week.

Sign up for my 50-minute webinar at 2pm ET on Tuesday, October 16, 2018 — hosted by Dirk Dusharme and Mike Richman at Quality Digest. This won’t be your traditional “futures” talk to let you know about all of the exciting technology on the horizon… I’ve actually been doing and teaching data science, and applying machine learning to practical problems in quality improvement, for over a decade.

Come to this webinar if:

  1. You have a LOT of data and you don’t know where to begin
  2. You’re kind of behind… you still use paper and Excel and you’re hoping you don’t miss the opportunities here
  3. You’re a data scientist and you want to find out about quality and process improvement
  4. You’re a quality professional and you want to find out more about data science
  5. You’re a quality engineer and you want some professional preparation for what’s on the horizon
  6. You want to be sure you get on our Quality 4.0 mailing list to receive valuable information assets for the next couple years to help you identify and capture opportunities

Register Here! See you on Tuesday. If you can’t make it, we’ll also be at the ASQ Quality 4.0 Summit in Dallas next month sharing more information about the convergence of quality and Big Data.

Quality 4.0: Let’s Get Digital

Want to find out what Quality 4.0 really is — and start realizing the benefits for your organization? If so, check out the October 2018 issue of ASQ’s Quality Progress, where my new article (“Let’s Get Digital”) does just that.

Quality 4.0 asks how we can leverage connected, intelligent, automated (C-I-A) technologies to increase efficiency, effectiveness, and satisfaction: “As connected, intelligent and automated systems are more widely adopted, we can once again expect a renaissance in quality tools and methods.” In addition, we’re working to bring this to the forefront of quality management and quality engineering practice at Intelex.

Quality 4.0 Evolution

The progression can be summarized through four themes. We’re in the “quality as discovery” stage today:

  • Quality as inspection: In the early days, quality assurance relied on inspecting bad quality out of items produced. Walter A. Shewhart’s methods for statistical process control (SPC) helped operators determine whether variation was due to random or special causes.
  • Quality as design: Next, more holistic methods emerged for designing quality in to processes. The goal is to prevent quality problems before they occur. These movements were inspired by W. Edwards Deming’s push to cease dependence on inspection, and Juran’s Quality by Design.
  • Quality as empowerment: By the 1990s, organizations adopting TQM and Six Sigma advocated a holistic approach to quality. Quality is everyone’s responsibility and empowered individuals contribute to continuous improvement.
  • Quality as discovery: Because of emerging technologies, we’re at a new frontier. In an adaptive, intelligent environment, quality depends on how:
    • quickly we can discover and aggregate new data sources,
    • effectively we can discover root causes, and
    • well we can discover new insights about ourselves, our products, and our organizations.

Read more at http://asq.org/quality-progress/2018/10/basic-quality/lets-get-digital.html or download the PDF (http://asq.org/quality-progress/2018/10/basic-quality/lets-get-digital.pdf).

A Simple Intro to Q-Learning in R: Floor Plan Navigation

This example is drawn from “A Painless Q-Learning Tutorial” at http://mnemstudio.org/path-finding-q-learning-tutorial.htm which explains how to manually calculate iterations using the updating equation for Q-Learning, based on the Bellman Equation (image from https://www.is.uni-freiburg.de/ressourcen/business-analytics/13_reinforcementlearning.pdf):
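
Since the referenced image isn’t reproduced here, this is the standard form of that update rule, where s is the current state, a is the action taken, s' is the resulting state, α is the learning rate, and γ is the discount rate. (The qlearn.R function sourced below implements the same update, indexing Q by current state and next state, since in this problem each action is just “move to a particular room.”)

$$Q(s,a) \leftarrow Q(s,a) + \alpha\left[R(s,a) + \gamma \max_{a'} Q(s',a') - Q(s,a)\right]$$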

The example explores path-finding through a house:

The question to be answered here is: What’s the best way to get from Room 2 to Room 5 (outside)? Notice that by answering this question using reinforcement learning, we will also know how to find optimal routes from any room to outside. And if we run the iterative algorithm again for a new target state, we can find out the optimal route from any room to that new target state.

Since Q-Learning is model-free, we don’t need to know how likely it is that our agent will move between any room and any other room (the transition probabilities). If you had observed the behavior in this system over time, you might be able to find that information, but in many cases it just isn’t available. So the key for this problem is to construct a Rewards Matrix that explains the benefit (or penalty!) of attempting to go from one state (room) to another.

Assigning the rewards is somewhat arbitrary, but you should give a large positive value to your target state and negative values to states that are impossible or highly undesirable. Here’s the guideline we’ll use for this problem:

  • -1 if “you can’t get there from here”
  • 0 if the destination is not the target state
  • 100 if the destination is the target state

We’ll start constructing our rewards matrix by listing the states we’ll come FROM down the rows, and the states we’ll go TO in the columns. First, let’s fill the diagonal with -1 rewards, because we don’t want our agent to stay in the same place (that is, move from Room 1 to Room 1, or from Room 2 to Room 2, and so forth). The final one gets a 100 because if we’re already in Room 5, we want to stay there.

Next, let’s move across the first row. Starting in Room 0, we only have one choice: go to Room 4. All other possibilities are blocked (-1):

Now let’s fill in the row labeled 1. From Room 1, you have two choices: go to Room 3 (which is not great but permissible, so give it a 0) or go to Room 5 (the target, worth 100)!

Continue moving row by row, determining if you can’t get there from here (-1), you can but it’s not the target (0), or it’s the target(100). You’ll end up with a final rewards matrix that looks like this:

Now create this rewards matrix in R:

[code language=”r”]
R <- matrix(c(-1, -1, -1, -1,  0,  -1,
              -1, -1, -1,  0, -1, 100,
              -1, -1, -1,  0, -1,  -1,
              -1,  0,  0, -1,  0,  -1,
               0, -1, -1,  0, -1, 100,
              -1,  0, -1, -1,  0, 100), nrow=6, ncol=6, byrow=TRUE)
[/code]

And run the code. Notice that we’re calling the target state 6 instead of 5: even though the rooms are labeled 0 through 5, R’s matrix rows and columns are numbered starting at 1, so everything shifts up by one.

[code language=”r”]
source("https://raw.githubusercontent.com/NicoleRadziwill/R-Functions/master/qlearn.R")

results <- q.learn(R,10000,alpha=0.1,gamma=0.8,tgt.state=6)
> round(results)
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    0    0    0    0   80    0
[2,]    0    0    0   64    0  100
[3,]    0    0    0   64    0    0
[4,]    0   80   51    0   80    0
[5,]   64    0    0   64    0  100
[6,]    0   80    0    0   80  100
[/code]

You can read this table of average values to obtain policies (there’s also a quick programmatic check in the sketch after the walkthrough below). A policy is a “path” through the states of the system:

  • Start at Room 0 (first row, labeled 1): Choose Room 4 (80), then from Room 4 choose Room 5 (100)
  • Start at Room 1: Choose Room 5 (100)
  • Start at Room 2: Choose Room 3 (64), from Room 3 choose Room 1 or Room 4 (80); from 1 or 4 choose 5 (100)
  • Start at Room 3: Choose Room 1 or Room 4 (80), then Room 5 (100)
  • Start at Room 4: Choose Room 5 (100)
  • Start at Room 5: Stay at Room 5 (100)

To answer the original problem, we would take route 2-3-1-5 or 2-3-4-5 to get out the quickest if we started in Room 2. This is easy to see with a simple map, but is much more complicated when the maps get bigger.
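
Here’s the quick programmatic check mentioned above: a rough sketch that pulls the greedy policy straight out of the rounded Q matrix (it assumes results holds the matrix printed above). Note that which.max breaks ties, like the one in row 4, by taking the first maximum.

[code language=”r”]
# For each FROM state (row), pick the TO state (column) with the highest value
greedy <- apply(round(results), 1, which.max)

# Subtract 1 to translate matrix indices (1-6) back to room labels (0-5)
data.frame(from.room = 0:5, go.to.room = greedy - 1)
[/code]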

Reinforcement Learning: Q-Learning with the Hopping Robot

Overview: Reinforcement learning uses “reward” signals to determine how to navigate through a system in the most valuable way. (I’m particularly interested in the variant of reinforcement learning called “Q-Learning” because the goal is to create a “Quality Matrix” that can help you make the best sequence of decisions!) I found a toy robot navigation problem on the web that was solved using custom R code for reinforcement learning, and I wanted to reproduce the solution in different ways than the original author did. This post describes different ways that I solved the problem described at http://bayesianthink.blogspot.com/2014/05/hopping-robots-and-reinforcement.html

The Problem: Our agent, the robot, is placed at random on a board of wood. There’s a hole at s1, a sticky patch at s4, and the robot is trying to make appropriate decisions to navigate to s7 (the target). The image comes from the blog post linked above.

To solve a problem like this, you can use MODEL-BASED approaches if you know how likely it is that the robot will move from one state to another (that is, the transition probabilities for each action) or MODEL-FREE approaches (you don’t know how likely it is that the robot will move from state to state, but you can figure out a reward structure).

  • Markov Decision Process (MDP) – If you know the states, actions, rewards, and transition probabilities (which are probably different for each action), you can determine the optimal policy or “path” through the system, given different starting states. (If transition probabilities have nothing to do with decisions that an agent makes, your MDP reduces to a Markov Chain.)
  • Reinforcement Learning (RL) – If you know the states, actions, and rewards (but not the transition probabilities), you can still take an unsupervised approach. Just randomly create lots of hops through your system, and use them to update a matrix that describes the average value of each hop within the context of the system.

Solving an RL problem involves finding the optimal value functions (e.g. the Q matrix in Attempt 1) or the optimal policy (the State-Action matrix in Attempt 2). Although there are many techniques for reinforcement learning, we will use Q-learning because we don’t know the transition probabilities for each action. (If we did, we’d model it as a Markov Decision Process and use the MDPtoolbox package instead.) Q-Learning relies on traversing the system in many ways to update a matrix of average expected rewards from each state transition. The equation it uses is from https://www.is.uni-freiburg.de/ressourcen/business-analytics/13_reinforcementlearning.pdf:
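
For example, here’s one worked update (using the HOP rewards matrix from Attempt 1 below, a learning rate of 0.1, and a discount rate of 0.8): if Q is still all zeros and the robot hops from S5 to the target S7, the entry for that transition becomes

$$Q(S_5, S_7) \leftarrow 0 + 0.1\left[\,100 + 0.8 \times 0 - 0\,\right] = 10$$

where 100 is the reward for landing on S7 and the 0 inside the brackets is the best Q value currently available from S7.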

For this to work, all states have to be visited a sufficient number of times, and all state-action pairs have to be included in your experience sample. So keep this in mind when you’re trying to figure out how many iterations you need.

Attempt 1: Quick Q-Learning with qlearn.R

  • Input: A rewards matrix R. (That’s all you need! Your states are encoded in the matrix.)
  • Output: A Q matrix from which you can extract optimal policies (or paths) to help you navigate the environment.
  • Pros: Quick and very easy. Cons: Does not let you set epsilon (% of random actions), so all episodes are determined randomly and it may take longer to find a solution. Can take a long time to converge.

Set up the rewards matrix as a square matrix, with all the states down the rows and all the states along the columns (both starting with the first state):

[code language=”r”]
hopper.rewards <- c(-10, 0.01, 0.01, -1,   -1,   -1,  -1,
                    -10,   -1,  0.1, -3,   -1,   -1,  -1,
                     -1, 0.01,   -1, -3, 0.01,   -1,  -1,
                     -1,   -1, 0.01, -1, 0.01, 0.01,  -1,
                     -1,   -1,   -1, -3,   -1, 0.01, 100,
                     -1,   -1,   -1, -1, 0.01,   -1, 100,
                     -1,   -1,   -1, -1,   -1, 0.01, 100)

HOP <- matrix(hopper.rewards, nrow=7, ncol=7, byrow=TRUE)
> HOP
     [,1]  [,2]  [,3] [,4]  [,5]  [,6] [,7]
[1,]  -10  0.01  0.01   -1 -1.00 -1.00   -1
[2,]  -10 -1.00  0.10   -3 -1.00 -1.00   -1
[3,]   -1  0.01 -1.00   -3  0.01 -1.00   -1
[4,]   -1 -1.00  0.01   -1  0.01  0.01   -1
[5,]   -1 -1.00 -1.00   -3 -1.00  0.01  100
[6,]   -1 -1.00 -1.00   -1  0.01 -1.00  100
[7,]   -1 -1.00 -1.00   -1 -1.00  0.01  100
[/code]

Here’s how you read this: the rows represent where you’ve come FROM, and the columns represent where you’re going TO. Each element 1 through 7 corresponds directly to S1 through S7 in the cartoon above. Each cell contains a reward (or penalty, if the value is negative) if we arrive in that state.

The S1 state is bad for the robot… there’s a hole in that piece of wood, so we’d really like to keep it away from that state. Location [1,1] on the matrix tells us what reward (or penalty) we’ll receive if we start at S1 and stay at S1: -10 (that’s bad). Similarly, location [2,1] on the matrix tells us that if we start at S2 and move left to S1, that’s also bad and we should receive a penalty of -10. The S4 state is also undesirable – there’s a sticky patch there, so we’d like to keep the robot away from it. Location [3,4] on the matrix represents the action of going from S3 to S4 by moving right, which will put us on the sticky patch and earn us a penalty of -3.
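
To make the indexing concrete, here are those three look-ups pulled straight from the matrix (a quick sketch; run it after creating HOP above):

[code language=”r”]
# Rows are where you come FROM, columns are where you go TO
HOP[1,1]   # start at S1 and stay in the hole at S1: -10
HOP[2,1]   # move left from S2 into the hole at S1: -10
HOP[3,4]   # move right from S3 onto the sticky patch at S4: -3
[/code]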

Now load the qlearn command into your R session:

[code language=”r”]
qlearn <- function(R, N, alpha, gamma, tgt.state) {
  # Adapted from https://stackoverflow.com/questions/39353580/how-to-implement-q-learning-in-r
  Q <- matrix(rep(0,length(R)), nrow=nrow(R))
  for (i in 1:N) {
    cs <- sample(1:nrow(R), 1)
    while (1) {
      next.states <- which(R[cs,] > -1)  # Get feasible actions for cur state
      if (length(next.states)==1)        # There may only be one possibility
        ns <- next.states
      else
        ns <- sample(next.states,1)      # Or you may have to pick from a few
      if (ns > nrow(R)) { ns <- cs }
      # NOW UPDATE THE Q-MATRIX
      Q[cs,ns] <- Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]) - Q[cs,ns])
      if (ns == tgt.state) break
      cs <- ns
    }
  }
  return(round(100*Q/max(Q)))
}
[/code]

Run qlearn with the HOP rewards matrix, a learning rate of 0.1, a discount rate of 0.8, and a target state of S7 (the location to the far right of the wooden board). I did 10,000 episodes (where in each one, the robot is dropped randomly onto the wooden board and has to get to S7):

[code language=”r”]
r.hop <- qlearn(HOP,10000,alpha=0.1,gamma=0.8,tgt.state=7)
> r.hop
     [,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,]    0   51   64    0    0    0    0
[2,]    0    0   64    0    0    0    0
[3,]    0   51    0    0   80    0    0
[4,]    0    0   64    0   80   80    0
[5,]    0    0    0    0    0   80  100
[6,]    0    0    0    0   80    0  100
[7,]    0    0    0    0    0   80  100
[/code]

The Q-Matrix that is presented encodes the best-value solutions from each state (the “policy”). Here’s how you read it:

  • If you’re at s1 (first row), hop to s3 (biggest value in first row), then hop to s5 (go to row 3 and find biggest value), then hop to s7 (go to row 5 and find biggest value)
  • If you’re at s2, go right to s3, then hop to s5, then hop to s7
  • If you’re at s3, hop to s5, then hop to s7
  • If you’re at s4, go right to s5 OR hop to s6, then go right to s7
  • If you’re at s5, hop to s7
  • If you’re at s6, go right to s7
  • If you’re at s7, stay there (when you’re in the target state, the value function will not be able to pick out a “best action” because the best action is to do nothing)

Alternatively, the policy can be expressed as the best action from each of the 7 states: HOP, RIGHT, HOP, RIGHT, HOP, RIGHT, (STAY PUT)
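
If you’d like to recover that action sequence programmatically, here’s a rough sketch. It assumes r.hop is the rounded Q matrix printed above and uses the board geometry (the greedy next state is one column to the right for RIGHT, two columns to the right for HOP, or the same column at the target); ties, like the one in row 4, resolve to the first maximum.

[code language=”r”]
best.next <- apply(r.hop, 1, which.max)     # greedy next state from each state
step.size <- best.next - 1:7                # how far right the greedy move takes us
ifelse(step.size == 2, "HOP", ifelse(step.size == 1, "RIGHT", "STAY PUT"))
[/code]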

Attempt 2: Use ReinforcementLearning Package

I also used the ReinforcementLearning package by Nicolas Proellochs (6/19/2017) described in https://cran.r-project.org/web/packages/ReinforcementLearning/ReinforcementLearning.pdf.

  • Input: 1) a definition of the environment, 2) a list of states, 3) a list of actions, and 4) control parameters alpha (the learning rate; usually 0.1), gamma (the discount rate which describes how important future rewards are; often 0.9 indicating that 90% of the next reward will be taken into account), and epsilon (the probability that you’ll try a random action; often 0.1)
  • Output: A State-Action Value matrix, which attaches a number to how good it is to be in a particular state and take an action. You can use it to determine the highest value action from each state. (It contains the same information as the Q-matrix from Attempt 1, but you don’t have to infer the action from the destination it brings you to.)
  • Pros: Relatively straightforward. Allows you to specify epsilon, which controls the proportion of random actions you’ll explore as you create episodes and explore your environment. Cons: Requires manual setup of all state transitions and associated rewards.

First, I created an “environment” that describes 1) how the states will change when actions are taken, and 2) what rewards will be accrued when that happens. I assigned a reward of -1 to all transitions that are not special (that is, everything other than landing on S1, landing on S4, or landing on S7). To be perfectly consistent with Attempt 1, I could have used 0.01 instead of -1, but the results will be similar. The values you choose for rewards are sort of arbitrary, but you do need to make sure there’s a comparatively large positive reward at your target state and “negative rewards” for states you want to avoid or that are physically impossible.

[code language=”r”]
my.env <- function(state, action) {
  next_state <- state
  reward <- -1   # default reward for ordinary transitions

  if (state == state("s1") && action == "right") { next_state <- state("s2") }
  if (state == state("s1") && action == "hop")   { next_state <- state("s3") }

  if (state == state("s2") && action == "left")  {
    next_state <- state("s1"); reward <- -10 }     # back into the hole at S1
  if (state == state("s2") && action == "right") { next_state <- state("s3") }
  if (state == state("s2") && action == "hop")   {
    next_state <- state("s4"); reward <- -3 }      # onto the sticky patch at S4

  if (state == state("s3") && action == "left")  { next_state <- state("s2") }
  if (state == state("s3") && action == "right") {
    next_state <- state("s4"); reward <- -3 }      # onto the sticky patch at S4
  if (state == state("s3") && action == "hop")   { next_state <- state("s5") }

  if (state == state("s4") && action == "left")  { next_state <- state("s3") }
  if (state == state("s4") && action == "right") { next_state <- state("s5") }
  if (state == state("s4") && action == "hop")   { next_state <- state("s6") }

  if (state == state("s5") && action == "left")  {
    next_state <- state("s4"); reward <- -3 }      # onto the sticky patch at S4
  if (state == state("s5") && action == "right") { next_state <- state("s6") }
  if (state == state("s5") && action == "hop")   { next_state <- state("s7") }

  if (state == state("s6") && action == "left")  { next_state <- state("s5") }
  if (state == state("s6") && action == "right") { next_state <- state("s7") }

  # Landing on the target state S7 earns the big positive reward
  if (next_state == state("s7") && state != state("s7")) { reward <- 10 }

  out <- list(NextState = next_state, Reward = reward)
  return(out)
}
[/code]

Next, I installed and loaded up the ReinforcementLearning package and ran the RL simulation:

[code language=”r”]
install.packages("ReinforcementLearning")
library(ReinforcementLearning)
states <- c("s1", "s2", "s3", "s4", "s5", "s6", "s7")
actions <- c("left","right","hop")
data <- sampleExperience(N=3000,env=my.env,states=states,actions=actions)
control <- list(alpha = 0.1, gamma = 0.8, epsilon = 0.1)
model <- ReinforcementLearning(data, s = "State", a = "Action", r = "Reward",
s_new = "NextState", control = control)
[/code]

Now we can see the results:

[code language=”r”]
> print(model)
State-Action function Q
         hop     right      left
s1  2.456741  1.022440  1.035193
s2  2.441032  2.452331  1.054154
s3  4.233166  2.469494  1.048073
s4  4.179853  4.221801  2.422842
s5  6.397159  4.175642  2.456108
s6  4.217752  6.410110  4.223972
s7 -4.602003 -4.593739 -4.591626

Policy
s1 s2 s3 s4 s5 s6 s7
"hop" "right" "hop" "right" "hop" "right" "left"

Reward (last iteration)
[1] 223
[/code]

The recommended policy is: HOP, RIGHT, HOP, RIGHT, HOP, RIGHT, (STAY PUT)

If you tried this example and it didn’t produce the same response, don’t worry! Model-free reinforcement learning is done by simulation, and when you used the sampleExperience function, you generated a different set of state transitions to learn from. You may need more samples, or to tweak your rewards structure, or both.
