Tag Archives: data science

Object of Type Closure is Not Subsettable

I started using R in 2004. I started using R religiously on the day of the annular solar eclipse in Madrid (October 3, 2005) after being inspired by David Hunter’s talk at ADASS.

It took me exactly 4,889 days to figure out what this vexing error means, even though trial and error helped me move past it almost every time it happened! I’m so moved by what I learned (and embarrassed that it took so long to make sense of it) that I have to share it with you.

This error happens when you’re trying to treat a function like a list, vector, or data frame. To fix it, start treating the function like a function.

Let’s take something that’s very obviously a function. I picked the sampling distribution simulator from a 2015 blog post I wrote. Cut and paste it into your R console:

sdm.sim <- function(n, src.dist=NULL, param1=NULL, param2=NULL) {
   r <- 10000  # Number of replications/samples - DO NOT ADJUST
   # This produces a matrix of observations with
   # n columns and r rows. Each row is one sample:
   my.samples <- switch(src.dist,
      "E" = matrix(rexp(n*r,param1),r),
      "N" = matrix(rnorm(n*r,param1,param2),r),
      "U" = matrix(runif(n*r,param1,param2),r),
      "P" = matrix(rpois(n*r,param1),r),
      "B" = matrix(rbinom(n*r,param1,param2),r),
      "G" = matrix(rgamma(n*r,param1,param2),r),
      "X" = matrix(rchisq(n*r,param1),r),
      "T" = matrix(rt(n*r,param1),r))
   all.sample.sums <- apply(my.samples,1,sum)
   all.sample.means <- apply(my.samples,1,mean)
   all.sample.vars <- apply(my.samples,1,var)
   par(mfrow=c(2,2))
   hist(my.samples[1,],col="gray",main="Distribution of One Sample")
   hist(all.sample.sums,col="gray",main="Sampling Distribution\nof the Sum")
   hist(all.sample.means,col="gray",main="Sampling Distribution\nof the Mean")
   hist(all.sample.vars,col="gray",main="Sampling Distribution\nof the Variance")
}

The right thing to do with this function is to use it to simulate a bunch of distributions and plot them using base R. You write the function name, followed by parentheses containing each of the four arguments the function needs to work. This will generate a normal distribution with a mean of 20 and a standard deviation of 3, along with three sampling distributions, using a sample size of 100 and 10,000 replications:

sdm.sim(100, src.dist="N", param1=20, param2=3)

(You should get four plots, arranged in a 2×2 grid.)

But what if we tried to treat sdm.sim like a list, and access the 3rd element of it? Or what if we tried to treat it like a data frame, and access one of its columns by name?

> sdm.sim[3]
Error in sdm.sim[3] : object of type 'closure' is not subsettable

> sdm.sim$values
Error in sdm.sim$values : object of type 'closure' is not subsettable

SURPRISE! Object of type closure is not subsettable. This happens because sdm.sim is a function, and its data type is (shockingly) something called “closure”:

> class(sdm.sim)
[1] "function"

> typeof(sdm.sim)
[1] "closure"

I had read this before on Stack Overflow a whole bunch of times, but it never really clicked until I saw it like this! And now that I had a sense of why the error was occurring, it turns out it’s super easy to reproduce with functions in base R or functions in any familiar packages you use:

> ggplot$a
Error in ggplot$a : object of type 'closure' is not subsettable

> table$a
Error in table$a : object of type 'closure' is not subsettable

> ggplot[1]
Error in ggplot[1] : object of type 'closure' is not subsettable

> table[1]
Error in table[1] : object of type 'closure' is not subsettable

As a result, if you’re pulling your hair out over this problem, check and see where in your rogue line of code you’re treating a function like something that isn’t a function. Maybe you picked a variable name that competes with a function name, or maybe you got your operations out of order. In any case, change your notation so that your function is doing function things, and your code should start working.
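For instance, here’s a hypothetical session illustrating the “competing name” trap (the object and column names are made up for illustration). df is also the name of one of R’s built-in functions (the F-distribution density), so if you subset it before you’ve actually created your data frame, R hands you the function instead:

> df$weight
Error in df$weight : object of type 'closure' is not subsettable

> df <- data.frame(weight=c(12,15,9))
> df$weight
[1] 12 15  9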

But as Luke Smith pointed out, this isn’t the whole story: functional sequences (which you can also write as functions) are an exception. Functional sequences are those chains of commands you write when you’re in tidyverse mode, all strung together with %>% pipes:

Luke’s code that you can cut and paste (and try), with the top line that got cut off by Twitter, is:

fun1 <- . %>% 
    group_by(col1) %>% 
    mutate(n=n+h) %>% 
    filter(n==max(n)) %>% 
    ungroup() %>% 
    summarize(new_col=mean(n))

fun2 <- fun1[-c(2,5)] 

Even though Luke’s fun1 and fun2 are of type closure, they are subsettable because they contain a sequence of functions:

> typeof(fun1)
[1] "closure"

> typeof(fun2)
[1] "closure"

> fun1[1]
Functional sequence with the following components:

 1. group_by(., col1)

Use 'functions' to extract the individual functions. 

> fun1[[1]]
function (.) 
group_by(., col1)
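As that printout suggests, magrittr ships a functions() helper that extracts the individual steps. A quick check (assuming magrittr is loaded and fun1 is defined as above):

> library(magrittr)
> functions(fun1)   # returns the individual functions as a list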

Don’t feel bad! This problem has plagued all of us for many, many hours (me: years), and yet it still happens to us more often than we would like to admit. Awareness of this issue will not prevent you from attempting things that give you this error in the future. It’s such a popular error that there have been memes about it and sad valentines written about it.


(Also, if you’ve made it this far, FOLLOW THESE GOOD PEOPLE ON TWITTER: @stephdesilva @djnavarro @lksmth – They all share great information on data science in general, and R in particular. Many thanks too to all the #rstats crowd who shared in my glee last night and didn’t make me feel like an idiot for not figuring this out for ALMOST 14 YEARS. It seems so simple now.

Steph is also a Microsoft whiz who you should definitely hire if you need anything R+ Microsoft. Thanks to all of you!)

Supplier Quality Management: Seeking Test Data

Do you have, or have you had, a supplier selection problem to solve? I have some algorithms I’ve been working on to help you make better decisions about what suppliers to choose — and how to monitor performance over time. I’d like to test and refine them on real data. If anyone has data that you’ve used to select suppliers in the past 10 years, or have data that you’re working with right now to select suppliers, or have a colleague who may be able to share this data — that’s what I’m interested in sourcing.

Because this data can sometimes be proprietary and confidential, feel free to blind the names or identifying information for the suppliers — or I can do this myself (no suppliers, products, or parts will be named when I publish the results). I just need to be able to tell them apart. Tags like Supplier A or Part1SupplierA are fine. I’d prefer if you blinded the data, but I can also write scripts to do this and have you check them before I move forward.

Desired data format is CSV or Excel. Text files are also OK, as long as they clearly identify the criteria that you used for supplier selection. Email me at myfirstname dot mylastname at gmail if you can help out — and maybe I can help you out too! Thanks.

 

Happy 10th Birthday!

10 years ago today, this blog published its first post: “How Do I Do a Lean Six Sigma (LSS) Project?” Looking back, it seems like a pretty simple place to have started. I didn’t know whether it would even be useful to anyone, but I was committed to making my personal PDSA cycles high-impact: I was going to export things I learned, or things I found valuable. (As it turns out, many people did appreciate the early posts even though it would take a few years for that to become evident!)

Since then, hundreds more have followed to help people understand more about quality and process improvement in theory and in practice. I started writing because I was in the middle of my PhD dissertation in the Quality Systems program at Indiana State, and I was discovering so many interesting nuggets of information that I wanted to share those with the world – particularly practitioners, who might not have lots of time (or even interest) in sifting through the research. In addition, I was using data science (and some machine learning, although at the time, it was much more difficult to implement) to explore quality-related problems, and could see the earliest signs that this new paradigm for problem solving might help fuel data-driven decision making in the workplace… if only we could make the advanced techniques easy for people in busy jobs to use and apply.

We’re not there yet, but as ASQ and other organizations recognize Quality 4.0 as a focus area, we’re much closer. As a result, I’ve made it my mission to help bring insights from research to practitioners, to make these new innovations real. If you are developing or demonstrating any new innovative techniques that relate to making people, processes, or products better, easier, faster, or less expensive — or reducing risks and building individual and organizational capabilities — let me know!

I’ve also learned a lot in the past decade, most of which I’ve spent helping undergraduate students develop and refine their data-driven decision making skills, and more recently at Intelex (provider of integrated environment, health & safety, and quality management EHSQ software to enterprises and smaller organizations). Here are some of the big lessons:

  1. People are complex. They have multidimensional lives, and work should support and enrich those lives. Any organization that cares about performance — internally and in the market — should examine how it can create complete and meaningful experiences. This applies not only to customers, but to employees and partners and suppliers. It also applies to anyone an organization has the power and potential to impact, no matter how small.
  2. Everybody wants to do a good job (and be recognized for it). How can we create environments where each person is empowered to contribute in all the areas where they have talent and interest? How can these same environments be designed with empathy as a core capability?
  3. Your data are your most valuable assets. It sounds trite, but data is becoming as valuable as warehouses, inventory, and equipment. I was involved in a project a few years ago where we digitized data that had been collected for three years — and by analyzing it, we uncovered improvement opportunities that when implemented, saved thousands of dollars a week. We would not have been able to do that if the data had remained scratched in pencil on thousands of sheets of well-worn legal paper.
  4. Nothing beats domain expertise (especially where data science is concerned). I’ve analyzed terabytes of data over the past decade, and in many cases, the secrets are subtle. Any time you’re using data to make decisions, be sure to engage the people with practical, on-the-ground experience in the area you’re studying.
  5. Self-awareness must be cultivated. The older you get, and the more experience you gain, the more you know what you don’t know. Many of my junior colleagues (and yours) haven’t reached this point yet, and will need some help from senior colleagues to gain this awareness. At the same time, those of you who are senior have valuable lessons to learn from your junior colleagues, too! Quality improvement is grounded in personal and organizational learning, and processes should help people help each other uncover blind spots and work through them — without fear.

Most of all, I discovered that what really matters is learning. We can spend time supporting human and organizational performance, developing and refining processes that have quality baked in, and making sure that products meet all their specifications. But what’s going on under the surface is more profound: people are learning about themselves, they are learning about how to transform inputs into outputs in a way that adds value, and they are learning about each other and their environment. Our processes just encapsulate that organizational knowledge that we develop as we learn.

Quality 4.0: Let’s Get Digital

Want to find out what Quality 4.0 really is — and start realizing the benefits for your organization? If so, check out the October 2018 issue of ASQ’s Quality Progress, where my new article (“Let’s Get Digital”) does just that.

Quality 4.0 asks how we can leverage connected, intelligent, automated (C-I-A) technologies to increase efficiency, effectiveness, and satisfaction: “As connected, intelligent and automated systems are more widely adopted, we can once again expect a renaissance in quality tools and methods.” In addition, we’re working to bring this to the forefront of quality management and quality engineering practice at Intelex.

Quality 4.0 Evolution

The progression can be summarized through four themes. We’re in the “quality as discovery” stage today:

  • Quality as inspection: In the early days, quality assurance relied on inspecting bad quality out of items produced. Walter A. Shewhart’s methods for statistical process control (SPC) helped operators determine whether variation was due to random or special causes.
  • Quality as design: Next, more holistic methods emerged for designing quality into processes. The goal is to prevent quality problems before they occur. These movements were inspired by W. Edwards Deming’s push to cease dependence on inspection, and Juran’s Quality by Design.
  • Quality as empowerment: By the 1990’s, organizations adopting TQM and Six Sigma advocated a holistic approach to quality. Quality is everyone’s responsibility and empowered individuals contribute to continuous improvement.
  • Quality as discovery: Because of emerging technologies, we’re at a new frontier. In an adaptive, intelligent environment, quality depends on how:
    • quickly we can discover and aggregate new data sources,
    • effectively we can discover root causes, and
    • well we can discover new insights about ourselves, our products, and our organizations.

Read more at http://asq.org/quality-progress/2018/10/basic-quality/lets-get-digital.html or download the PDF (http://asq.org/quality-progress/2018/10/basic-quality/lets-get-digital.pdf).

What is Quality 4.0?


My first post of 2018 addresses an idea that’s just starting to gain traction – one you’ll hear a lot more about from me soon: Quality 4.0.  It’s not a fad or trend, but a reminder that the business environment is changing, and that performance excellence in the future will depend on how well you adapt, change, and transform in response.

Although we started building community around this concept at the ASQ Quality 4.0 Summits on Disruption, Innovation, and Change in 2017 and 2018, the truly revolutionary work is yet to come.

What is Quality 4.0?

Quality 4.0 = Connectedness + Intelligence + Automation (C-I-A) for Performance Innovation

The term “Quality 4.0” comes from “Industry 4.0” – the “fourth industrial revolution” originally addressed at the Hannover (Germany) Fair in 2011. That meeting emphasized the increasing intelligence and interconnectedness in “smart” manufacturing systems, and reflected on the newest technological innovations in historical context.

The Industrial Revolutions

  • In the first industrial revolution (late 1700’s), steam and water power made it possible for production facilities to scale up and expanded the potential locations for production.
  • By the late 1800’s, the discovery of electricity and development of associated infrastructure enabled the development of machines for mass production. In the US, the expansion of railways made it easier to obtain supplies and deliver finished goods. The availability of power also sparked a renaissance in computing, and digital computing emerged from its analog ancestor.
  • The third industrial revolution came at the end of the 1960’s, with the invention of the Programmable Logic Controller (PLC). This made it possible to automate processes like filling and reloading tanks, turning engines on and off, and controlling sequences of events based on changing conditions.

The Fourth Industrial Revolution

Although the growth and expansion of the internet accelerated innovation in the late 1990’s and 2000’s, we are just now poised for another industrial revolution. What’s changing?

  • Production & Availability of Information: More information is available because people and devices are producing it at greater rates than ever before. Falling costs of enabling technologies like sensors and actuators are catalyzing innovation in these areas.
  • Connectivity: In many cases, and from many locations, that information is instantly accessible over the internet. Improved network infrastructure is expanding the extent of connectivity, making it more widely available and more robust. (And unlike the 80’s and 90’s, far fewer communications protocols are commonly encountered, so it’s a lot easier to get one device talking to another on your network.)
  • Intelligent Processing: Affordable computing capabilities (and computing power!) are available to process that information so it can be incorporated into decision making. High-performance software libraries for advanced processing and visualization of data are easy to find, and easy to use. (In the past, we had to write our own… now we can use open-source solutions that are battle tested.)
  • New Modes of Interaction: The ways in which we can acquire and interact with information are also changing, in particular through new interfaces like Augmented Reality (AR) and Virtual Reality (VR), which expand the possibilities for training and for navigating a hybrid physical-digital environment with greater ease.
  • New Modes of Production: 3D printing, nanotechnology, and gene editing (CRISPR) are poised to change the nature and means of production in several industries. Technologies for enhancing human performance (e.g. exoskeletons, brain-computer interfaces, and even autonomous vehicles) will also open up new mechanisms for innovation in production. (Roco & Bainbridge (2002) describe many of these, and their prescience is remarkable.) New technologies like blockchain have the potential to change the nature of production as well, by challenging ingrained perceptions of trust, control, consensus, and value.

The fourth industrial revolution is one of intelligence: smart, hyperconnected cyber-physical systems that help humans and machines cooperate to achieve shared goals, and use data to generate value.

Enabling Technologies are Physical, Digital, and Biological

These enabling technologies include:

  • Information (Generate & Share)
    • Affordable Sensors and Actuators
    • Big Data infrastructure (e.g. MapReduce, Hadoop, NoSQL databases)
  • Connectivity
    • 5G Networks
    • IPv6 Addresses (which expand the number of devices that can be put online)
    • Internet of Things (IoT)
    • Cloud Computing
  • Processing
    • Predictive Analytics
    • Artificial Intelligence
    • Machine Learning (incl. Deep Learning)
    • Data Science
  • Interaction
    • Augmented Reality (AR)
    • Mixed Reality (MR)
    • Virtual Reality (VR)
    • Diminished Reality (DR)
  • Construction
    • 3D Printing
    • Additive Manufacturing
    • Smart Materials
    • Nanotechnology
    • Gene Editing
    • Automated (Software) Code Generation
    • Robotic Process Automation (RPA)
    • Blockchain

Today’s quality profession was born during the middle of the second industrial revolution, when methods were needed to ensure that assembly lines ran smoothly: that they produced artifacts to specifications, that the workers knew how to engage in the process, and that costs were controlled. As industrial production matured, those methods grew to encompass the design of processes which were built to produce to specifications. In the 1980’s and 1990’s, organizations in the US started to recognize human capabilities and active engagement as essential to quality, and TQM, Lean, and Six Sigma gained in popularity.

How will these methods evolve in an adaptive, intelligent environment? The question is largely still open, and that’s the essence of Quality 4.0.

Roco, M. C., & Bainbridge, W. S. (2002). Converging technologies for improving human performance: Integrating from the nanoscale. Journal of Nanoparticle Research, 4(4), 281-295. (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.465.7221&rep=rep1&type=pdf)

A Simple Intro to Q-Learning in R: Floor Plan Navigation

This example is drawn from “A Painless Q-Learning Tutorial” at http://mnemstudio.org/path-finding-q-learning-tutorial.htm, which explains how to manually calculate iterations using the updating equation for Q-Learning, based on the Bellman Equation (the equation image in the original post is from https://www.is.uni-freiburg.de/ressourcen/business-analytics/13_reinforcementlearning.pdf).
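That update rule, in its standard form (and the same calculation performed by the qlearn.R code sourced below), is:

$$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ R(s,a) + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$$

where $\alpha$ is the learning rate, $\gamma$ is the discount rate, $s'$ is the state you land in, and the max runs over the actions $a'$ available from $s'$.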

The example explores path-finding through a house with five rooms (numbered 0 through 4) plus the outside, which is labeled Room 5.

The question to be answered here is: What’s the best way to get from Room 2 to Room 5 (outside)? Notice that by answering this question using reinforcement learning, we will also know how to find optimal routes from any room to outside. And if we run the iterative algorithm again for a new target state, we can find out the optimal route from any room to that new target state.

Since Q-Learning is model-free, we don’t need to know how likely it is that our agent will move between any room and any other room (the transition probabilities). If you had observed the behavior in this system over time, you might be able to find that information, but in many cases it just isn’t available. So the key for this problem is to construct a Rewards Matrix that explains the benefit (or penalty!) of attempting to go from one state (room) to another.

Assigning the rewards is somewhat arbitrary, but you should give a large positive value to your target state and negative values to states that are impossible or highly undesirable. Here’s the guideline we’ll use for this problem:

  • -1 if “you can’t get there from here”
  • 0 if the destination is not the target state
  • 100 if the destination is the target state

We’ll start constructing our rewards matrix by listing the states we’ll come FROM down the rows, and the states we’ll go TO in the columns. First, let’s fill the diagonal with -1 rewards, because we don’t want our agent to stay in the same place (that is, move from Room 1 to Room 1, or from Room 2 to Room 2, and so forth). The final diagonal element (Room 5 to Room 5) gets a 100 because if we’re already in Room 5, we want to stay there.

Next, let’s move across the first row. Starting in Room 0, we only have one choice: go to Room 4. All other possibilities are blocked (-1).

Now let’s fill in the row labeled 1. From Room 1, you have two choices: go to Room 3 (which is not great but permissible, so give it a 0) or go to Room 5 (the target, worth 100)!

Continue moving row by row, determining whether you can’t get there from here (-1), you can but it’s not the target (0), or it’s the target (100). You’ll end up with the final rewards matrix shown in the code below.

Now create this rewards matrix in R:

[code language=”r”]
# Rows = the room we're coming FROM (0-5), columns = the room we're going TO
R <- matrix(c(-1, -1, -1, -1,  0,  -1,
              -1, -1, -1,  0, -1, 100,
              -1, -1, -1,  0, -1,  -1,
              -1,  0,  0, -1,  0,  -1,
               0, -1, -1,  0, -1, 100,
              -1,  0, -1, -1,  0, 100), nrow=6, ncol=6, byrow=TRUE)
[/code]

Now run the code. Notice that we’re calling the target state 6 instead of 5: the rooms are labeled 0 through 5, but R matrices are indexed starting at 1, so Room k corresponds to row/column k+1:

[code language=”r”]
source("https://raw.githubusercontent.com/NicoleRadziwill/R-Functions/master/qlearn.R")

results <- q.learn(R,10000,alpha=0.1,gamma=0.8,tgt.state=6)
> round(results)
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    0    0    0    0   80    0
[2,]    0    0    0   64    0  100
[3,]    0    0    0   64    0    0
[4,]    0   80   51    0   80    0
[5,]   64    0    0   64    0  100
[6,]    0   80    0    0   80  100
[/code]

You can read this table of average values to obtain policies. A policy is a “path” through the states of the system:

  • Start at Room 0 (first row, labeled 1): Choose Room 4 (80), then from Room 4 choose Room 5 (100)
  • Start at Room 1: Choose Room 5 (100)
  • Start at Room 2: Choose Room 3 (64), from Room 3 choose Room 1 or Room 4 (80); from 1 or 4 choose 5 (100)
  • Start at Room 3: Choose Room 1 or Room 4 (80), then Room 5 (100)
  • Start at Room 4: Choose Room 5 (100)
  • Start at Room 5: Stay at Room 5 (100)

To answer the original problem, we would take route 2-3-1-5 or 2-3-4-5 to get out the quickest if we started in Room 2. This is easy to see with a simple map, but is much more complicated when the maps get bigger.
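For bigger maps (or just to avoid reading the matrix by eye), here is a minimal sketch, not from the original post, that extracts the greedy policy from the results matrix above:

[code language="r"]
# For each FROM state (row), pick the TO state (column) with the largest value,
# then subtract 1 to convert matrix indices (1-6) back to room labels (0-5).
# Ties (like Room 3, where Rooms 1 and 4 score equally) go to the lower index.
best.next.room <- apply(results, 1, which.max) - 1
names(best.next.room) <- paste("from Room", 0:5)
best.next.room
[/code]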

Reinforcement Learning: Q-Learning with the Hopping Robot

Overview: Reinforcement learning uses “reward” signals to determine how to navigate through a system in the most valuable way. (I’m particularly interested in the variant of reinforcement learning called “Q-Learning” because the goal is to create a “Quality Matrix” that can help you make the best sequence of decisions!) I found a toy robot navigation problem on the web that was solved using custom R code for reinforcement learning, and I wanted to reproduce the solution in different ways than the original author did. This post describes different ways that I solved the problem described at http://bayesianthink.blogspot.com/2014/05/hopping-robots-and-reinforcement.html

The Problem: Our agent, the robot, is placed at random on a board of wood with seven positions, s1 through s7, laid out in a row; at each step it can move left, move right, or hop (two positions). There’s a hole at s1, a sticky patch at s4, and the robot is trying to make appropriate decisions to navigate to s7 (the target). The image illustrating the board comes from the blog post linked above.

To solve a problem like this, you can use MODEL-BASED approaches if you know how likely it is that the robot will move from one state to another (that is, the transition probabilities for each action) or MODEL-FREE approaches (you don’t know how likely it is that the robot will move from state to state, but you can figure out a reward structure).

  • Markov Decision Process (MDP) – If you know the states, actions, rewards, and transition probabilities (which are probably different for each action), you can determine the optimal policy or “path” through the system, given different starting states. (If transition probabilities have nothing to do with decisions that an agent makes, your MDP reduces to a Markov Chain.)
  • Reinforcement Learning (RL) – If you know the states, actions, and rewards (but not the transition probabilities), you can still take an unsupervised approach. Just randomly create lots of hops through your system, and use them to update a matrix that describes the average value of each hop within the context of the system.

Solving an RL problem involves finding the optimal value functions (e.g. the Q matrix in Attempt 1) or the optimal policy (the State-Action matrix in Attempt 2). Although there are many techniques for reinforcement learning, we will use Q-Learning because we don’t know the transition probabilities for each action. (If we did, we’d model it as a Markov Decision Process and use the MDPtoolbox package instead.) Q-Learning relies on traversing the system in many ways to update a matrix of average expected rewards from each state transition; the update rule it uses is the same one shown earlier (from https://www.is.uni-freiburg.de/ressourcen/business-analytics/13_reinforcementlearning.pdf).

For this to work, all states have to be visited a sufficient number of times, and all state-action pairs have to be included in your experience sample. So keep this in mind when you’re trying to figure out how many iterations you need.

Attempt 1: Quick Q-Learning with qlearn.R

  • Input: A rewards matrix R. (That’s all you need! Your states are encoded in the matrix.)
  • Output: A Q matrix from which you can extract optimal policies (or paths) to help you navigate the environment.
  • Pros: Quick and very easy. Cons: Does not let you set epsilon (the percentage of random actions), so exploration is entirely random and the algorithm can take a long time to converge.

Set up the rewards matrix so it is a square matrix, with all the states down the rows and all the states along the columns, in the same order (starting with the first state):

[code language=”r”]
hopper.rewards <- c(-10,  0.01,  0.01, -1,   -1,    -1,    -1,
                    -10, -1,     0.1,  -3,   -1,    -1,    -1,
                     -1,  0.01, -1,    -3,    0.01, -1,    -1,
                     -1, -1,     0.01, -1,    0.01,  0.01, -1,
                     -1, -1,    -1,    -3,   -1,     0.01, 100,
                     -1, -1,    -1,    -1,    0.01, -1,    100,
                     -1, -1,    -1,    -1,   -1,     0.01, 100)

HOP <- matrix(hopper.rewards, nrow=7, ncol=7, byrow=TRUE)
> HOP
     [,1]  [,2]  [,3] [,4]  [,5]  [,6] [,7]
[1,]  -10  0.01  0.01   -1 -1.00 -1.00   -1
[2,]  -10 -1.00  0.10   -3 -1.00 -1.00   -1
[3,]   -1  0.01 -1.00   -3  0.01 -1.00   -1
[4,]   -1 -1.00  0.01   -1  0.01  0.01   -1
[5,]   -1 -1.00 -1.00   -3 -1.00  0.01  100
[6,]   -1 -1.00 -1.00   -1  0.01 -1.00  100
[7,]   -1 -1.00 -1.00   -1 -1.00  0.01  100
[/code]

Here’s how you read this: the rows represent where you’ve come FROM, and the columns represent where you’re going TO. Each element 1 through 7 corresponds directly to S1 through S7 in the cartoon from the original blog post. Each cell contains the reward (or penalty, if the value is negative) we receive for moving from the row’s state to the column’s state.

The S1 state is bad for the robot… there’s a hole in that piece of wood, so we’d really like to keep it away from that state. Location [1,1] on the matrix tells us what reward (or penalty) we’ll receive if we start at S1 and stay at S1: -10 (that’s bad). Similarly, location [2,1] on the matrix tells us that if we start at S2 and move left to S1, that’s also bad and we should receive a penalty of -10. The S4 state is also undesirable – there’s a sticky patch there, so we’d like to keep the robot away from it. Location [3,4] on the matrix represents the action of going from S3 to S4 by moving right, which will put us on the sticky patch (a penalty of -3).

Now load the qlearn function into your R session:

[code language=”r”]
qlearn <- function(R, N, alpha, gamma, tgt.state) {
  # Adapted from https://stackoverflow.com/questions/39353580/how-to-implement-q-learning-in-r
  Q <- matrix(rep(0, length(R)), nrow=nrow(R))
  for (i in 1:N) {
    cs <- sample(1:nrow(R), 1)            # Drop the agent into a random starting state
    while (1) {
      next.states <- which(R[cs,] > -1)   # Get feasible actions for current state
      if (length(next.states)==1)         # There may only be one possibility
        ns <- next.states
      else
        ns <- sample(next.states, 1)      # Or you may have to pick from a few
      if (ns > nrow(R)) { ns <- cs }
      # NOW UPDATE THE Q-MATRIX
      Q[cs,ns] <- Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]) - Q[cs,ns])
      if (ns == tgt.state) break          # Episode ends when the target is reached
      cs <- ns
    }
  }
  return(round(100*Q/max(Q)))             # Rescale so the best value is 100
}
[/code]

Run qlearn with the HOP rewards matrix, a learning rate of 0.1, a discount rate of 0.8, and a target state of S7 (the location at the far right of the wooden board). I did 10,000 episodes (in each one, the robot is dropped randomly onto the wooden board and has to get to S7):

[code language=”r”]
r.hop <- qlearn(HOP,10000,alpha=0.1,gamma=0.8,tgt.state=7)
> r.hop
     [,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,]    0   51   64    0    0    0    0
[2,]    0    0   64    0    0    0    0
[3,]    0   51    0    0   80    0    0
[4,]    0    0   64    0   80   80    0
[5,]    0    0    0    0    0   80  100
[6,]    0    0    0    0   80    0  100
[7,]    0    0    0    0    0   80  100
[/code]

The Q-Matrix that is presented encodes the best-value solutions from each state (the “policy”). Here’s how you read it:

  • If you’re at s1 (first row), hop to s3 (biggest value in first row), then hop to s5 (go to row 3 and find biggest value), then hop to s7 (go to row 5 and find biggest value)
  • If you’re at s2, go right to s3, then hop to s5, then hop to s7
  • If you’re at s3, hop to s5, then hop to s7
  • If you’re at s4, go right to s5 OR hop to s6, then go right to s7
  • If you’re at s5, hop to s7
  • If you’re at s6, go right to s7
  • If you’re at s7, stay there (when you’re in the target state, the value function will not be able to pick out a “best action” because the best action is to do nothing)

Alternatively, the policy can be expressed as the best action from each of the 7 states: HOP, RIGHT, HOP, RIGHT, HOP, RIGHT, (STAY PUT)

Attempt 2: Use ReinforcementLearning Package

I also used the ReinforcementLearning package by Nicholas Proellochs (6/19/2017) described in https://cran.r-project.org/web/packages/ReinforcementLearning/ReinforcementLearning.pdf.

  • Input: 1) a definition of the environment, 2) a list of states, 3) a list of actions, and 4) control parameters alpha (the learning rate; usually 0.1), gamma (the discount rate which describes how important future rewards are; often 0.9 indicating that 90% of the next reward will be taken into account), and epsilon (the probability that you’ll try a random action; often 0.1)
  • Output: A State-Action Value matrix, which attaches a number to how good it is to be in a particular state and take an action. You can use it to determine the highest value action from each state. (It contains the same information as the Q-matrix from Attempt 1, but you don’t have to infer the action from the destination it brings you to.)
  • Pros: Relatively straightforward. Allows you to specify epsilon, which controls the proportion of random actions you’ll explore as you create episodes and explore your environment. Cons: Requires manual setup of all state transitions and associated rewards.

First, I created an “environment” that describes 1) how the states will change when actions are taken, and 2) what rewards will be accrued when that happens. I assigned a reward of -1 to all actions that are not special, e.g. landing on S1, landing on S4, or landing on S7. To be perfectly consistent with Attempt 1, I could have used 0.01 instead of -1, but the results will be similar. The values you choose for rewards are somewhat arbitrary, but you do need to make sure there’s a comparatively large positive reward at your target state and “negative rewards” for states you want to avoid or that are physically impossible.

[code language=”r”]
my.env <- function(state, action) {
  next_state <- state
  reward <- -1   # Default reward for ordinary moves; special transitions override it below

  if (state == state("s1") && action == "right") { next_state <- state("s2") }
  if (state == state("s1") && action == "hop")   { next_state <- state("s3") }

  if (state == state("s2") && action == "left")  {
    next_state <- state("s1"); reward <- -10 }   # Falls into the hole at s1
  if (state == state("s2") && action == "right") { next_state <- state("s3") }
  if (state == state("s2") && action == "hop")   {
    next_state <- state("s4"); reward <- -3 }    # Lands on the sticky patch at s4

  if (state == state("s3") && action == "left")  { next_state <- state("s2") }
  if (state == state("s3") && action == "right") {
    next_state <- state("s4"); reward <- -3 }
  if (state == state("s3") && action == "hop")   { next_state <- state("s5") }

  if (state == state("s4") && action == "left")  { next_state <- state("s3") }
  if (state == state("s4") && action == "right") { next_state <- state("s5") }
  if (state == state("s4") && action == "hop")   { next_state <- state("s6") }

  if (state == state("s5") && action == "left")  {
    next_state <- state("s4"); reward <- -3 }
  if (state == state("s5") && action == "right") { next_state <- state("s6") }
  if (state == state("s5") && action == "hop")   { next_state <- state("s7") }

  if (state == state("s6") && action == "left")  { next_state <- state("s5") }
  if (state == state("s6") && action == "right") { next_state <- state("s7") }

  # Large positive reward for reaching the target state s7
  if (next_state == state("s7") && state != state("s7")) { reward <- 10 }

  out <- list(NextState = next_state, Reward = reward)
  return(out)
}
[/code]

Next, I installed and loaded up the ReinforcementLearning package and ran the RL simulation:

[code language=”r”]
install.packages("ReinforcementLearning")
library(ReinforcementLearning)
states <- c("s1", "s2", "s3", "s4", "s5", "s6", "s7")
actions <- c("left","right","hop")
data <- sampleExperience(N=3000,env=my.env,states=states,actions=actions)
control <- list(alpha = 0.1, gamma = 0.8, epsilon = 0.1)
model <- ReinforcementLearning(data, s = "State", a = "Action", r = "Reward",
s_new = "NextState", control = control)
[/code]
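Before fitting the model, it’s worth revisiting the earlier caveat: every state-action pair should appear in your experience sample. A quick check on the sampled data frame (this line isn’t in the original post, but it uses the State and Action columns that sampleExperience produces):

[code language="r"]
# Count how often each state-action pair was sampled; with N=3000 random
# samples over 7 states and 3 actions, none of the counts should be zero.
table(data$State, data$Action)
[/code]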

Now we can see the results:

[code language=”r”]
> print(model)
State-Action function Q
         hop     right      left
s1  2.456741  1.022440  1.035193
s2  2.441032  2.452331  1.054154
s3  4.233166  2.469494  1.048073
s4  4.179853  4.221801  2.422842
s5  6.397159  4.175642  2.456108
s6  4.217752  6.410110  4.223972
s7 -4.602003 -4.593739 -4.591626

Policy
s1 s2 s3 s4 s5 s6 s7
"hop" "right" "hop" "right" "hop" "right" "left"

Reward (last iteration)
[1] 223
[/code]

The recommended policy is: HOP, RIGHT, HOP, RIGHT, HOP, RIGHT, (STAY PUT)

If you tried this example and it didn’t produce the same response, don’t worry! Model-free reinforcement learning is done by simulation, and when you used the sampleExperience function, you generated a different set of state transitions to learn from. You may need more samples, or to tweak your rewards structure, or both.
