
My First R Package (Part 3)

After refactoring my program down to about 10 lines of code, calling 12 functions I wrote and loaded in via the source command, I went through all the steps in Part 1 of this blog post and Part 2 of this blog post to set up the R package infrastructure using usethis in RStudio. Then things started humming along with the rest of the setup:

> use_mit_license("Nicole Radziwill")
✔ Setting active project to 'D:/R/easyMTS'
✔ Setting License field in DESCRIPTION to 'MIT + file LICENSE'
✔ Writing 'LICENSE.md'
✔ Adding '^LICENSE\\.md$' to '.Rbuildignore'
✔ Writing 'LICENSE'

> use_testthat()
✔ Adding 'testthat' to Suggests field in DESCRIPTION
✔ Creating 'tests/testthat/'
✔ Writing 'tests/testthat.R'
● Call `use_test()` to initialize a basic test file and open it for editing.
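
I haven't written any tests yet, but when the time comes it's another one-liner (the test name below is just a placeholder):

use_test("mts")

This creates 'tests/testthat/test-mts.R' and opens it for editing.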

> use_vignette("easyMTS")
✔ Adding 'knitr' to Suggests field in DESCRIPTION
✔ Setting VignetteBuilder field in DESCRIPTION to 'knitr'
✔ Adding 'inst/doc' to '.gitignore'
✔ Creating 'vignettes/'
✔ Adding '*.html', '*.R' to 'vignettes/.gitignore'
✔ Adding 'rmarkdown' to Suggests field in DESCRIPTION
✔ Writing 'vignettes/easyMTS.Rmd'
● Modify 'vignettes/easyMTS.Rmd'

> use_citation()
✔ Creating 'inst/'
✔ Writing 'inst/CITATION'
● Modify 'inst/CITATION'

Add Your Dependencies

> use_package("ggplot2")
✔ Adding 'ggplot2' to Imports field in DESCRIPTION
● Refer to functions with `ggplot2::fun()`
> use_package("dplyr")
✔ Adding 'dplyr' to Imports field in DESCRIPTION
● Refer to functions with `dplyr::fun()`

> use_package("magrittr")
✔ Adding 'magrittr' to Imports field in DESCRIPTION
● Refer to functions with `magrittr::fun()`
> use_package("tidyr")
✔ Adding 'tidyr' to Imports field in DESCRIPTION
● Refer to functions with `tidyr::fun()`

> use_package("MASS")
✔ Adding 'MASS' to Imports field in DESCRIPTION
● Refer to functions with `MASS::fun()`

> use_package("qualityTools")
✔ Adding 'qualityTools' to Imports field in DESCRIPTION
● Refer to functions with `qualityTools::fun()`

> use_package("highcharter")
Registered S3 method overwritten by 'xts':
  method     from
  as.zoo.xts zoo 
Registered S3 method overwritten by 'quantmod':
  method            from
  as.zoo.data.frame zoo 
✔ Adding 'highcharter' to Imports field in DESCRIPTION
● Refer to functions with `highcharter::fun()`

> use_package("cowplot")
✔ Adding 'cowplot' to Imports field in DESCRIPTION
● Refer to functions with `cowplot::fun()`
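
A side note on magrittr: putting it in Imports doesn't automatically let my package code use %>% internally; the pipe has to be re-exported. usethis has a helper for that too (I haven't run it as part of this setup, and it needs a roxygen2 re-document afterward):

use_pipe()

This writes 'R/utils-pipe.R' containing the roxygen tags that re-export %>%.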

Adding Data to the Package

I want to include two datasets: one data frame containing 50 observations from a healthy group (with 5 predictors each), and another data frame containing 15 observations from an abnormal or unhealthy group (also with 5 predictors). I made sure the two CSV files I wanted to add to the package were in my working directory first by using dir().

> use_data_raw()
✔ Creating 'data-raw/'
✔ Adding '^data-raw$' to '.Rbuildignore'
✔ Writing 'data-raw/DATASET.R'
● Modify 'data-raw/DATASET.R'
● Finish the data preparation script in 'data-raw/DATASET.R'
● Use `usethis::use_data()` to add prepared data to package

> mtsdata1 <- read.csv("MTS-Abnormal.csv") %>% mutate(abnormal=1)
> usethis::use_data(mtsdata1)
✔ Creating 'data/'
✔ Saving 'mtsdata1' to 'data/mtsdata1.rda'

> mtsdata2 <- read.csv("MTS-Normal.csv") %>% mutate(normal=1)
> usethis::use_data(mtsdata2)
✔ Saving 'mtsdata2' to 'data/mtsdata2.rda'

Magically, this added my two datasets (in .rda format) into my /data directory. (The /data-raw directory, it turns out, is where the data preparation scripts and the raw CSVs themselves are meant to live; it's listed in .Rbuildignore, so it stays out of the built package.) I decided it was time to commit these to my repository again.

Following the instructions above, I re-knit README.Rmd, and then it was possible to commit everything to Github again. At that point I ended up in a fistfight with git, saved once again only by my software engineer partner, who uses Github all the time.

I think it should be working. The next test will be if anyone can install this from github using devtools. Let me know if it works for you… it works for me locally, but you know how that goes. The next post will show you how to use it 🙂

install.packages("devtools")
library(devtools)
install_github("NicoleRadziwill/easyMTS")

SEE WHAT WILL BECOME THE easyMTS VIGNETTE –>

My First R Package (Part 2)

In Part 1, I set up RStudio with usethis, and created my first Minimum Viable R Package (MVRP?) which was then pushed to Github to create a new repository.

I added a README:

> use_readme_rmd()
✔ Writing 'README.Rmd'
✔ Adding '^README\\.Rmd$' to '.Rbuildignore'
● Modify 'README.Rmd'
✔ Writing '.git/hooks/pre-commit'

Things were moving along just fine, until I got this unkind message (what do you mean NOT an R package???!! What have I been doing the past hour?)

> use_testthat()
Error: `use_testthat()` is designed to work with packages.
Project 'easyMTS' is not an R package.

> use_mit_license("Nicole Radziwill")
✔ Setting active project to 'D:/R/easyMTS'
Error: `use_mit_license()` is designed to work with packages.
Project 'easyMTS' is not an R package.

Making easyMTS a Real Package

I sent out a tweet hoping to find some guidance, because Stack Overflow and Google and the RStudio community were coming up blank. As soon as I did, I discovered the button I needed in RStudio's Build pane.

The first time I ran it, it complained that I needed Rtools, but that Rtools didn't exist for version 3.6.1. I decided to try finding and installing Rtools anyway, because what could I possibly lose? I went to my favorite CRAN repository and found a link for Rtools just under the link for the base install.

I’m on Windows 10, so this downloaded an .exe which I quickly right-clicked on to run… the installer did its thing, and I clicked “Finish”, assuming that all was well. Then I went back into RStudio and tried to do Build -> Clean and Rebuild… and here’s what happened:

IT WORKED!! (I think!!!)

It created a package (top right) and then loaded it into my RStudio session (bottom left)! It loaded the package name into the package console (bottom right)!

I feel like this is a huge accomplishment for now, so I'm going to move on to Part 3 of this blog post. We'll figure out how to close the gaps that I've inevitably introduced by veering off-tutorial.

GO TO PART 3 –>

My First R Package (Part 1)

(What does this new package do? Find out here.)

I have had package-o-phobia for years, and have skillfully resisted learning how to build a new R package. However, I do have a huge collection of scripts on my hard drive with functions in them, and I keep a bunch of useful functions up on Github so anyone who wants can source and use them. I source them myself! So, really, there’s no reason to package them up and (god forbid) submit them to CRAN. I’m doing fine without packages!

Reality check: NO. As I’ve been told by so many people, if you have functions you use a lot, you should write a package. You don’t even have to think about a package as something you write so that other people can use. It is perfectly fine to write a package for an audience of one — YOU.

But I kept making excuses for myself until very recently, when I couldn't find a package to do something I needed to do: the packages that came close either weren't getting the same answers as the book examples, or were too difficult to use. It was time.

So armed with moral support and some exciting code, I began the journey of a thousand miles with the first step, guided by Tomas Westlake and Emil Hvitfeldt and of course Hadley. I already had some of the packages I needed, but did not have the most magical one of all, usethis:

install.packages("usethis")

library(usethis)
library(roxygen2)
library(devtools)

Finding a Package Name

First, I checked to see if the package name I wanted was available, using the available package (it isn't in the list above, so install.packages("available") and library(available) first; it supplies the available() function you see below). My first choice was not available on CRAN, which was sad:

> available("MTS")
Urban Dictionary can contain potentially offensive results,
  should they be included? [Y]es / [N]o:
1: Y
-- MTS -------------------------------------------------------------------------
Name valid: ✔
Available on CRAN: ✖ 
Available on Bioconductor: ✔
Available on GitHub:  ✖ 
Abbreviations: http://www.abbreviations.com/MTS
Wikipedia: https://en.wikipedia.org/wiki/MTS
Wiktionary: https://en.wiktionary.org/wiki/MTS

My second package name was available though, and I think it’s even better. I’ve written code to easily create and evaluate diagnostic algorithms using the Mahalanobis-Taguchi System (MTS), so my target package name is easyMTS:

> available("easyMTS")
-- easyMTS ------------------------------------------------------------
Name valid: ✔
Available on CRAN: ✔ 
Available on Bioconductor: ✔
Available on GitHub:  ✔ 
Abbreviations: http://www.abbreviations.com/easy
Wikipedia: https://en.wikipedia.org/wiki/easy
Wiktionary: https://en.wiktionary.org/wiki/easy
Sentiment:+++

Create Minimum Viable Package

Next, I set up the directory structure locally. Another RStudio session started up on its own; I’m hoping this is OK.

> create_package("D:/R/easyMTS")
✔ Creating 'D:/R/easyMTS/'
✔ Setting active project to 'D:/R/easyMTS'
✔ Creating 'R/'
✔ Writing 'DESCRIPTION'
Package: easyMTS
Title: What the Package Does (One Line, Title Case)
Version: 0.0.0.9000
Authors@R (parsed):
    * First Last <first.last@example.com> [aut, cre] (<https://orcid.org/YOUR-ORCID-ID>)
Description: What the package does (one paragraph).
License: What license it uses
Encoding: UTF-8
LazyData: true
✔ Writing 'NAMESPACE'
✔ Writing 'easyMTS.Rproj'
✔ Adding '.Rproj.user' to '.gitignore'
✔ Adding '^easyMTS\\.Rproj$', '^\\.Rproj\\.user$' to '.Rbuildignore'
✔ Opening 'D:/R/easyMTS/' in new RStudio session
✔ Setting active project to '<no active project>'

Syncing with Github

use_git_config(user.name = "nicoleradziwill", user.email = "nicole.radziwill@gmail.com")

browse_github_token()

This took me to a page on Github where I entered my password, and then had to go down to the bottom of the page to click on the green button that said “Generate Token.” They said I would never be able to see it again, so I gmailed it to myself for easy searchability. Next, I put this token where it is supposed to be locally:

edit_r_environ()

A blank file popped up in RStudio, and I added this line, then saved the file to its default location (not my real token):

GITHUB_PAT=e54545x88f569fff6c89abvs333443433d

Then I had to restart R and confirm it worked:

github_token()
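
If you'd rather check the environment variable directly, base R can do that too:

Sys.getenv("GITHUB_PAT")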

This revealed my token! I must have done the Github setup right. Finally I could proceed with the rest of the git setup:

> use_github()
✔ Setting active project to 'D:/R/easyMTS'
Error: Cannot detect that project is already a Git repository.
Do you need to run `use_git()`?
> use_git()
✔ Initialising Git repo
✔ Adding '.Rhistory', '.RData' to '.gitignore'
There are 5 uncommitted files:
* '.gitignore'
* '.Rbuildignore'
* 'DESCRIPTION'
* 'easyMTS.Rproj'
* 'NAMESPACE'
Is it ok to commit them?

1: No
2: Yeah
3: Not now

Selection: use_github()
Enter an item from the menu, or 0 to exit
Selection: 2
✔ Adding files
✔ Commit with message 'Initial commit'
● A restart of RStudio is required to activate the Git pane
Restart now?

1: No way
2: For sure
3: Nope

Selection: 2

When I tried to commit to Github, it was asking me if the description was OK, but it was NOT. Every time I said no, it kicked me out. Turns out it wanted me to go directly into the DESCRIPTION file and edit it, so I did. I used Notepad because this was crashing RStudio. But this caused a new problem:

Error: Uncommited changes. Please commit to git before continuing.

This is the part of the exercise where it’s great to be living with a software engineer who uses git and Github all the time. He pointed me to a tiny little tab that said “Terminal” in the bottom left corner of RStudio, just to the right of “Console”. He told me to type this, which unstuck me:
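
The fix boils down to staging the edited files and committing them from that Terminal; something like this (the commit message is arbitrary):

git add -A
git commit -m "fix DESCRIPTION"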

THEN, when I went back to the Console, it all worked:

> use_git()
> use_github()
✔ Checking that current branch is 'master'
Which git protocol to use? (enter 0 to exit) 

1: ssh   <-- presumes that you have set up ssh keys
2: https <-- choose this if you don't have ssh keys (or don't know if you do)

Selection: 2
● Tip: To suppress this menu in future, put
  `options(usethis.protocol = "https")`
  in your script or in a user- or project-level startup file, '.Rprofile'.
  Call `usethis::edit_r_profile()` to open it for editing.
● Check title and description
  Name:        easyMTS
  Description: 
Are title and description ok?

1: Yes
2: Negative
3: No

Selection: 1
✔ Creating GitHub repository
✔ Setting remote 'origin' to 'https://github.com/NicoleRadziwill/easyMTS.git'
✔ Pushing 'master' branch to GitHub and setting remote tracking branch
✔ Opening URL 'https://github.com/NicoleRadziwill/easyMTS'

This post is getting long, so I’ll split it into parts. See you in Part 2.

GO TO PART 2 –>

Agile vs. Lean: Explained by Cats

Over the past few years, Agile has gained popularity. This methodology emerged as a solution to manage projects with a number of unknown elements and to counter the typical waterfall method. Quality practitioners have observed the numerous similarities between this new framework and Lean. Some have speculated that Agile is simply the next generation’s version of Lean. These observations have posed the question: Is Agile the new Lean?  

ASQ Influential Voices Roundtable for December 2019

The short answer to this question is: NO.

The longer answer is one I’m going to have to hold back some emotions to answer. Why? I have two reasons.

Reason #1: There is No Magic Bullet

First, many managers are on a quest for the silver bullet — a methodology or a tool that they can implement on Monday, and reap benefits no later than Friday. Neither lean nor agile can make this happen. But it’s not uncommon to see organizations try this approach. A workgroup will set up a Kanban board or start doing daily stand-up meetings, and then talk about how they’re “doing agile.” Now that agile is in place, these teams have no reason to go any further.

Reason #2: There is Nothing New Under the Sun

Neither approach is “new” and neither is going away. Lean principles have been around since Toyota pioneered its production system in the 1960s and 1970s. The methods prioritized value and flow, with attention to reducing all types of waste everywhere in the organization. Agile emerged in the 1990s for software development, as a response to waterfall methods that couldn’t respond effectively to changes in customer requirements.

Agile modeling uses some lean principles: for example, why spend hours documenting flow charts in Visio, when you can just write one on a whiteboard, take a photo, and paste it into your documentation? Agile doesn’t have to be perfectly lean, though. It’s acceptable to introduce elements that might seem like waste into processes, as long as you maintain your ability to quickly respond to new information and changes required by customers. (For example, maybe you need to touch base with your customers several times a week. This extra time and effort is OK in agile if it helps you achieve your customer-facing goals.)

Both lean and agile are practices. They require discipline, time, and monitoring. Teams must continually hone their practice, and learn about each other as they learn together. There are no magic bullets.

Information plays a key role. Effective flow of information from strategy to action is important for lean, because confusion and incomplete communication are forms of waste. Agile also emphasizes high-value information flows, but for slightly different purposes, which include promoting:

  • Rapid understanding
  • Rapid response
  • Rapid, targeted, and effective action

The difference is easier to understand if you watch a couple cat videos.

This Cat is A G I L E

From Parkour Cats: https://www.youtube.com/watch?v=iCEL-DmxaAQ

This cat is continuously scanning for information about its environment. It's young and in shape, and it navigates its environment like a pro, whizzing from floor to ceiling. If it's about to fall off something? No problem! This cat is A G I L E and can quickly adjust. It can easily achieve its goal of scaling any of the cat towers in this video. Agile is also about trying new things to quickly assess whether they will work. You'll see this cat attempt to climb the wall with an open mind and, upon learning the ineffectiveness of the approach, abandon that experiment.

This Cat is L E A N

From “How Lazy Cats Drink Water”: https://www.youtube.com/watch?v=FlVo3yUNI6E

This cat is using as LITTLE energy as possible to achieve its goal of hydration. Although this cat might be considered lazy, it is actually very intelligent, dynamically figuring out how to remove non-value-adding activity from its process at every moment. This cat is working smarter, not harder. This cat is L E A N.

Hope this has been helpful. Business posts definitely need more cat videos.

An Easy Way to Make Minimum Viable Product (MVP) Totally Not Viable

The classic viral MVP cartoon from Henrik Kniberg (https://blog.crisp.se/2016/01/25/henrikkniberg/making-sense-of-mvp)


The Minimum Viable Product (MVP) concept has taken off over the past few years. Indeed, its heart is in the right place. MVP encourages product managers to scope features and functionality carefully so that customer needs are satisfied at every stage of development — not just in a sweeping finale at the end of development.

It’s a great way to shorten time-to-value and test new market concepts before committing. Zappos, for example, started by posting pictures of shoes on the internet without having an inventory. They wanted to quickly test to see whether people would even consider buying shoes without trying them on.

Unfortunately, adhering to MVP won’t guarantee success thanks to one critical caveat. And that is: if your product already exists, you have to consider your product’s base state. What can your customers do right now with your product? Failure to take this into consideration can be disastrous.

An Example: Your Web Site

Here’s what I mean: let’s say the product is your company’s web site. If you’re starting from scratch, a perfectly suitable MVP would be a splash page with one or two sentences about what you do. Maybe you’d add some contact information. Customers will be able to find you and communicate with you, and you’ll be providing greater value than without a web presence.

But if you already have a 5000-page site online, that solution is not going to fly. Customers and prospects returning to your site will wonder why it vaporized. If they’re relying on the content or functionality you previously provided, chances are they will not be happy. Confused, they may choose to go elsewhere.

The moral of the story is: in defining the scope of your MVP, take into consideration what your customers can already do, and don’t dare give them less in your next release.

Designing Experiences for Authentic Engagement: The Design for STEAM Canvas

As Industry 4.0 and Digital Transformation efforts bear their first fruits, capabilities, business models, and the organizations that embody them are transforming. A century ago, we thought of organizations as machines to be rigidly designed and controlled. In the latter part of the 20th century, organizations were thought of as knowledge to be cultivated, shared, and expanded. But “as intelligent systems gain traction, we are once again at a crossroads – where organizations must create complete and meaningful experiences” for their customers, stakeholders, and employees.

Read our new paper in the STEAM Journal

How do you design those complete, meaningful, and radically engaging experiences? To provide a starting point, check out “Design for Steam: Creating Participatory Art with Purpose” by my former student Nick Kamienski and me. It was just published today by the STEAM Journal.

“Participatory Art” doesn’t just mean creating things that are pretty to look at in your office lobby or tradeshow booth. It means finding ways to connect with your audience in ways that help them find meaning, purpose, and self-awareness – the ultimate ingredient for authentic engagement.

Designing experiences to make this happen is challenging, but totally within reach. Learn more in today’s new article!

Reinforcement Learning: Q-Learning with the Hopping Robot

Overview: Reinforcement learning uses “reward” signals to determine how to navigate through a system in the most valuable way. (I’m particularly interested in the variant of reinforcement learning called “Q-Learning” because the goal is to create a “Quality Matrix” that can help you make the best sequence of decisions!) I found a toy robot navigation problem on the web that was solved using custom R code for reinforcement learning, and I wanted to reproduce the solution in different ways than the original author did. This post describes different ways that I solved the problem described at http://bayesianthink.blogspot.com/2014/05/hopping-robots-and-reinforcement.html

The Problem: Our agent, the robot, is placed at random on a board of wood. There’s a hole at s1, a sticky patch at s4, and the robot is trying to make appropriate decisions to navigate to s7 (the target). The image comes from the blog post linked above.

To solve a problem like this, you can use MODEL-BASED approaches if you know how likely it is that the robot will move from one state to another (that is, the transition probabilities for each action) or MODEL-FREE approaches (you don’t know how likely it is that the robot will move from state to state, but you can figure out a reward structure).

  • Markov Decision Process (MDP) – If you know the states, actions, rewards, and transition probabilities (which are probably different for each action), you can determine the optimal policy or “path” through the system, given different starting states. (If transition probabilities have nothing to do with decisions that an agent makes, your MDP reduces to a Markov Chain.)
  • Reinforcement Learning (RL) – If you know the states, actions, and rewards (but not the transition probabilities), you can still take an unsupervised approach. Just randomly create lots of hops through your system, and use them to update a matrix that describes the average value of each hop within the context of the system.

Solving an RL problem involves finding the optimal value functions (e.g. the Q matrix in Attempt 1) or the optimal policy (the State-Action matrix in Attempt 2). Although there are many techniques for reinforcement learning, we will use Q-learning because we don't know the transition probabilities for each action. (If we did, we'd model it as a Markov Decision Process and use the MDPtoolbox package instead.) Q-Learning relies on traversing the system in many ways to update a matrix of average expected rewards for each state transition. The update equation it uses (from https://www.is.uni-freiburg.de/ressourcen/business-analytics/13_reinforcementlearning.pdf) is:

Q(s,a) ← Q(s,a) + α · [ R(s,a) + γ · max Q(s′,a′) − Q(s,a) ]

where α is the learning rate, γ is the discount rate, and the max is taken over the actions a′ available from the next state s′. (You can see this exact update on the Q[cs,ns] line in the qlearn code below.)

For this to work, all states have to be visited a sufficient number of times, and all state-action pairs have to be included in your experience sample. So keep this in mind when you’re trying to figure out how many iterations you need.

Attempt 1: Quick Q-Learning with qlearn.R

  • Input: A rewards matrix R. (That’s all you need! Your states are encoded in the matrix.)
  • Output: A Q matrix from which you can extract optimal policies (or paths) to help you navigate the environment.
  • Pros: Quick and very easy. Cons: Does not let you set epsilon (the % of random actions), so every episode is generated completely at random, and convergence can take a long time.

Set up the rewards matrix as a square matrix, with all the states down the rows and all the states along the columns, in the same order (starting with the first state):

[code language="r"]
hopper.rewards <- c(-10, 0.01, 0.01, -1, -1, -1, -1,
                    -10, -1, 0.1, -3, -1, -1, -1,
                    -1, 0.01, -1, -3, 0.01, -1, -1,
                    -1, -1, 0.01, -1, 0.01, 0.01, -1,
                    -1, -1, -1, -3, -1, 0.01, 100,
                    -1, -1, -1, -1, 0.01, -1, 100,
                    -1, -1, -1, -1, -1, 0.01, 100)

HOP <- matrix(hopper.rewards, nrow=7, ncol=7, byrow=TRUE)
> HOP
      [,1]  [,2]  [,3] [,4]  [,5]  [,6] [,7]
[1,]  -10  0.01  0.01   -1 -1.00 -1.00   -1
[2,]  -10 -1.00  0.10   -3 -1.00 -1.00   -1
[3,]   -1  0.01 -1.00   -3  0.01 -1.00   -1
[4,]   -1 -1.00  0.01   -1  0.01  0.01   -1
[5,]   -1 -1.00 -1.00   -3 -1.00  0.01  100
[6,]   -1 -1.00 -1.00   -1  0.01 -1.00  100
[7,]   -1 -1.00 -1.00   -1 -1.00  0.01  100
[/code]

Here’s how you read this: the rows represent where you’ve come FROM, and the columns represent where you’re going TO. Each element 1 through 7 corresponds directly to S1 through S7 in the cartoon above. Each cell contains a reward (or penalty, if the value is negative) if we arrive in that state.

The S1 state is bad for the robot… there's a hole in that piece of wood, so we'd really like to keep it away from that state. Location [1,1] on the matrix tells us what reward (or penalty) we'll receive if we start at S1 and stay at S1: -10 (that's bad). Similarly, location [2,1] on the matrix tells us that if we start at S2 and move left to S1, that's also bad and we should receive a penalty of -10. The S4 state is also undesirable – there's a sticky patch there, so we'd like to keep the robot away from it. Location [3,4] on the matrix represents the action of going from S3 to S4 by moving right, which will put us on the sticky patch and earn us a penalty of -3.
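
If the bare indices are hard to read, you can label the dimensions of HOP. This is an optional extra, not part of the original script:

[code language="r"]
# Label rows (FROM state) and columns (TO state) -- cosmetic only
rownames(HOP) <- paste0("S", 1:7)
colnames(HOP) <- paste0("S", 1:7)
HOP["S3", "S4"]   # the sticky-patch move described above: -3
[/code]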

Now load the qlearn function into your R session:

[code language="r"]
qlearn <- function(R, N, alpha, gamma, tgt.state) {
  # Adapted from https://stackoverflow.com/questions/39353580/how-to-implement-q-learning-in-r
  Q <- matrix(rep(0, length(R)), nrow=nrow(R))
  for (i in 1:N) {
    cs <- sample(1:nrow(R), 1)             # Drop the robot onto a random state
    while (1) {
      next.states <- which(R[cs,] > -1)    # Get feasible actions for cur state
      if (length(next.states) == 1)        # There may only be one possibility
        ns <- next.states
      else
        ns <- sample(next.states, 1)       # Or you may have to pick from a few
      if (ns > nrow(R)) { ns <- cs }
      # NOW UPDATE THE Q-MATRIX
      Q[cs,ns] <- Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]) - Q[cs,ns])
      if (ns == tgt.state) break
      cs <- ns
    }
  }
  return(round(100*Q/max(Q)))
}
[/code]

Run qlearn with the HOP rewards matrix, a learning rate of 0.1, a discount rate of 0.8, and a target state of S7 (the location at the far right of the wooden board). I ran 10,000 episodes (in each one, the robot drops randomly onto the wooden board and has to get to S7):

[code language="r"]
r.hop <- qlearn(HOP, 10000, alpha=0.1, gamma=0.8, tgt.state=7)
> r.hop
     [,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,]    0   51   64    0    0    0    0
[2,]    0    0   64    0    0    0    0
[3,]    0   51    0    0   80    0    0
[4,]    0    0   64    0   80   80    0
[5,]    0    0    0    0    0   80  100
[6,]    0    0    0    0   80    0  100
[7,]    0    0    0    0    0   80  100
[/code]

The Q-Matrix that is presented encodes the best-value solutions from each state (the “policy”). Here’s how you read it:

  • If you’re at s1 (first row), hop to s3 (biggest value in first row), then hop to s5 (go to row 3 and find biggest value), then hop to s7 (go to row 5 and find biggest value)
  • If you’re at s2, go right to s3, then hop to s5, then hop to s7
  • If you’re at s3, hop to s5, then hop to s7
  • If you’re at s4, go right to s5 OR hop to s6, then go right to s7
  • If you’re at s5, hop to s7
  • If you’re at s6, go right to s7
  • If you’re at s7, stay there (when you’re in the target state, the value function will not be able to pick out a “best action” because the best action is to do nothing)

Alternatively, the policy can be expressed as the best action from each of the 7 states: HOP, RIGHT, HOP, RIGHT, HOP, RIGHT, (STAY PUT)
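
You can also pull that policy out of r.hop programmatically instead of tracing the rows by eye. A quick sketch (note that which.max returns the first column holding the maximum, so the tie in row 4 resolves to s5):

[code language="r"]
# For each FROM state (row), pick the TO state (column) with the highest value
best.next <- apply(r.hop, 1, which.max)
best.next   # 3 3 5 5 7 7 7: e.g. from s1 go to s3; from s7, stay at s7
[/code]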

Attempt 2: Use ReinforcementLearning Package

I also used the ReinforcementLearning package by Nicholas Proellochs (6/19/2017) described in https://cran.r-project.org/web/packages/ReinforcementLearning/ReinforcementLearning.pdf.

  • Input: 1) a definition of the environment, 2) a list of states, 3) a list of actions, and 4) control parameters alpha (the learning rate; usually 0.1), gamma (the discount rate which describes how important future rewards are; often 0.9 indicating that 90% of the next reward will be taken into account), and epsilon (the probability that you’ll try a random action; often 0.1)
  • Output: A State-Action Value matrix, which attaches a number to how good it is to be in a particular state and take an action. You can use it to determine the highest value action from each state. (It contains the same information as the Q-matrix from Attempt 1, but you don’t have to infer the action from the destination it brings you to.)
  • Pros: Relatively straightforward. Allows you to specify epsilon, which controls the proportion of random actions you’ll explore as you create episodes and explore your environment. Cons: Requires manual setup of all state transitions and associated rewards.

First, I created an “environment” that describes 1) how the states will change when actions are taken, and 2) what rewards will be accrued when that happens. I assigned a reward of -1 to all transitions that are not special; the special cases are landing on S1 (the hole), landing on S4 (the sticky patch), and landing on S7 (the target). To be perfectly consistent with Attempt 1, I could have used 0.01 instead of -1, but the results will be similar. The values you choose for rewards are somewhat arbitrary, but you do need to make sure there's a comparatively large positive reward at your target state and “negative rewards” for states you want the robot to avoid or that are physically impossible.

[code language="r"]
my.env <- function(state, action) {
  next_state <- state
  reward <- -1   # Default: every ordinary move costs -1

  if (state == state("s1") && action == "right") { next_state <- state("s2") }
  if (state == state("s1") && action == "hop")   { next_state <- state("s3") }

  if (state == state("s2") && action == "left")  {
    next_state <- state("s1"); reward <- -10 }   # Back into the hole
  if (state == state("s2") && action == "right") { next_state <- state("s3") }
  if (state == state("s2") && action == "hop")   {
    next_state <- state("s4"); reward <- -3 }    # Onto the sticky patch

  if (state == state("s3") && action == "left")  { next_state <- state("s2") }
  if (state == state("s3") && action == "right") {
    next_state <- state("s4"); reward <- -3 }    # Onto the sticky patch
  if (state == state("s3") && action == "hop")   { next_state <- state("s5") }

  if (state == state("s4") && action == "left")  { next_state <- state("s3") }
  if (state == state("s4") && action == "right") { next_state <- state("s5") }
  if (state == state("s4") && action == "hop")   { next_state <- state("s6") }

  if (state == state("s5") && action == "left")  {
    next_state <- state("s4"); reward <- -3 }    # Onto the sticky patch
  if (state == state("s5") && action == "right") { next_state <- state("s6") }
  if (state == state("s5") && action == "hop")   {
    next_state <- state("s7"); reward <- 10 }    # Made it to the target

  if (state == state("s6") && action == "left")  { next_state <- state("s5") }
  if (state == state("s6") && action == "right") {
    next_state <- state("s7"); reward <- 10 }    # Made it to the target

  # Any arrival at the target state earns the big reward; checked last so the
  # special penalties assigned above are not overwritten
  if (next_state == state("s7") && state != state("s7")) {
    reward <- 10
  }

  out <- list(NextState = next_state, Reward = reward)
  return(out)
}
[/code]

Next, I installed and loaded up the ReinforcementLearning package and ran the RL simulation:

[code language="r"]
install.packages("ReinforcementLearning")
library(ReinforcementLearning)

states  <- c("s1", "s2", "s3", "s4", "s5", "s6", "s7")
actions <- c("left", "right", "hop")

data    <- sampleExperience(N=3000, env=my.env, states=states, actions=actions)
control <- list(alpha = 0.1, gamma = 0.8, epsilon = 0.1)
model   <- ReinforcementLearning(data, s = "State", a = "Action", r = "Reward",
                                 s_new = "NextState", control = control)
[/code]

Now we can see the results:

[code language="r"]
> print(model)
State-Action function Q
         hop     right      left
s1  2.456741  1.022440  1.035193
s2  2.441032  2.452331  1.054154
s3  4.233166  2.469494  1.048073
s4  4.179853  4.221801  2.422842
s5  6.397159  4.175642  2.456108
s6  4.217752  6.410110  4.223972
s7 -4.602003 -4.593739 -4.591626

Policy
     s1      s2      s3      s4      s5      s6      s7
  "hop" "right"   "hop" "right"   "hop" "right"  "left"

Reward (last iteration)
[1] 223
[/code]

The recommended policy is: HOP, RIGHT, HOP, RIGHT, HOP, RIGHT, (STAY PUT)

If you tried this example and it didn't produce the same response, don't worry! Model-free reinforcement learning is done by simulation, and when you used the sampleExperience function, you generated a different set of state transitions to learn from. You may need more samples, or to tweak your rewards structure, or both.
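
For example, drawing a larger experience sample just means bumping N and refitting (the 10000 below is an arbitrary choice):

[code language="r"]
# Draw more episodes and re-estimate the State-Action matrix
data2 <- sampleExperience(N=10000, env=my.env, states=states, actions=actions)
model2 <- ReinforcementLearning(data2, s="State", a="Action", r="Reward",
                                s_new="NextState", control=control)
print(model2)
[/code]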
