
Reinforcement Learning: Q-Learning with the Hopping Robot

Overview: Reinforcement learning uses “reward” signals to determine how to navigate through a system in the most valuable way. (I’m particularly interested in the variant of reinforcement learning called “Q-Learning” because the goal is to create a “Quality Matrix” that can help you make the best sequence of decisions!) I found a toy robot navigation problem on the web that was solved with custom R code for reinforcement learning, and I wanted to reproduce the solution in ways the original author didn’t. This post describes two approaches I used to solve the problem described at http://bayesianthink.blogspot.com/2014/05/hopping-robots-and-reinforcement.html

The Problem: Our agent, the robot, is placed at random on a board of wood. There’s a hole at s1, a sticky patch at s4, and the robot is trying to make appropriate decisions to navigate to s7 (the target). The image comes from the blog post linked above.

To solve a problem like this, you can use MODEL-BASED approaches if you know how likely it is that the robot will move from one state to another (that is, the transition probabilities for each action) or MODEL-FREE approaches (you don’t know how likely it is that the robot will move from state to state, but you can figure out a reward structure).

  • Markov Decision Process (MDP) – If you know the states, actions, rewards, and transition probabilities (which are probably different for each action), you can determine the optimal policy or “path” through the system, given different starting states. (If transition probabilities have nothing to do with decisions that an agent makes, your MDP reduces to a Markov Chain.)
  • Reinforcement Learning (RL) – If you know the states, actions, and rewards (but not the transition probabilities), you can still take an unsupervised approach. Just randomly create lots of hops through your system, and use them to update a matrix that describes the average value of each hop within the context of the system.

Solving an RL problem involves finding the optimal value function (e.g. the Q matrix in Attempt 1) or the optimal policy (the State-Action matrix in Attempt 2). Although there are many techniques for reinforcement learning, we will use Q-learning because we don’t know the transition probabilities for each action. (If we did, we’d model the problem as a Markov Decision Process and use the MDPtoolbox package instead.) Q-Learning relies on traversing the system in many ways to update a matrix of average expected rewards for each state transition. The update equation it uses comes from https://www.is.uni-freiburg.de/ressourcen/business-analytics/13_reinforcementlearning.pdf:
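Written out, the standard Q-Learning update (the same rule the qlearn code below implements) is:

Q(s,a) ← Q(s,a) + α [ R(s,a) + γ · max over a′ of Q(s′,a′) − Q(s,a) ]

where s is the current state, a is the action taken, s′ is the state you land in, α is the learning rate, and γ is the discount rate.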

For this to work, all states have to be visited a sufficient number of times, and all state-action pairs have to be included in your experience sample. So keep this in mind when you’re trying to figure out how many iterations you need.

Attempt 1: Quick Q-Learning with qlearn.R

  • Input: A rewards matrix R. (That’s all you need! Your states are encoded in the matrix.)
  • Output: A Q matrix from which you can extract optimal policies (or paths) to help you navigate the environment.
  • Pros: Quick and very easy. Cons: Does not let you set epsilon (the fraction of random, exploratory actions), so every move in every episode is chosen at random, and it can take a long time to converge on a solution.

Set up the rewards matrix as a square matrix, with all of the states down the rows (starting with the first) and all of the states across the columns (starting with the first):

hopper.rewards <- c(-10, 0.01, 0.01, -1, -1, -1, -1,
         -10, -1, 0.1, -3, -1, -1, -1,
         -1, 0.01, -1, -3, 0.01, -1, -1,
         -1, -1, 0.01, -1, 0.01, 0.01, -1,
         -1, -1, -1, -3, -1, 0.01, 100,
         -1, -1, -1, -1, 0.01, -1, 100,
         -1, -1, -1, -1, -1, 0.01, 100)

HOP <- matrix(hopper.rewards, nrow=7, ncol=7, byrow=TRUE) 
> HOP
     [,1]  [,2]  [,3] [,4]  [,5]  [,6] [,7]
[1,]  -10  0.01  0.01   -1 -1.00 -1.00   -1
[2,]  -10 -1.00  0.10   -3 -1.00 -1.00   -1
[3,]   -1  0.01 -1.00   -3  0.01 -1.00   -1
[4,]   -1 -1.00  0.01   -1  0.01  0.01   -1
[5,]   -1 -1.00 -1.00   -3 -1.00  0.01  100
[6,]   -1 -1.00 -1.00   -1  0.01 -1.00  100
[7,]   -1 -1.00 -1.00   -1 -1.00  0.01  100

Here’s how you read this: the rows represent where you’ve come FROM, and the columns represent where you’re going TO. Each element 1 through 7 corresponds directly to S1 through S7 in the cartoon above. Each cell contains the reward (or penalty, if the value is negative) we receive for arriving in that state.

The S1 state is bad for the robot… there’s a hole in that piece of wood, so we’d really like to keep it away from that state. Location [1,1] on the matrix tells us what reward (or penalty) we’ll receive if we start at S1 and stay at S1: -10 (that’s bad). Similarly, location [2,1] on the matrix tells us that if we start at S2 and move left to S1, that’s also bad and we should receive a penalty of -10. The S4 state is also undesirable – there’s a sticky patch there, so we’d like to keep the robot away from it. Location [3,4] on the matrix represents the action of going from S3 to S4 by moving right, which will put us on the sticky patch and earn us a penalty of -3.
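For example, you can check a few of these cells directly in R (using the HOP matrix defined above):

HOP[1,1]   # staying at S1 (the hole): -10
HOP[2,1]   # moving left from S2 into the hole at S1: -10
HOP[3,4]   # moving right from S3 onto the sticky patch at S4: -3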

Now load the qlearn function into your R session:

qlearn <- function(R, N, alpha, gamma, tgt.state) {
# Adapted from https://stackoverflow.com/questions/39353580/how-to-implement-q-learning-in-r
  Q <- matrix(rep(0,length(R)), nrow=nrow(R))  # Start with a Q matrix of zeros, same shape as R
  for (i in 1:N) {
    cs <- sample(1:nrow(R), 1)        # Each episode starts from a random state
    while (1) {
      next.states <- which(R[cs,] > -1)  # Get feasible actions for cur state
      if (length(next.states)==1)        # There may only be one possibility
        ns <- next.states
      else
        ns <- sample(next.states,1) # Or you may have to pick from a few 
      if (ns > nrow(R)) { ns <- cs }
      # NOW UPDATE THE Q-MATRIX
      Q[cs,ns] <- Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]) - Q[cs,ns])
      if (ns == tgt.state) break
      cs <- ns
    }
  }
  return(round(100*Q/max(Q)))         # Scale Q to a 0-100 range for readability
}

Run qlearn with the HOP rewards matrix, a learning rate of 0.1, a discount rate of 0.8, and a target state of S7 (the location at the far right of the wooden board). I ran 10,000 episodes (in each one, the robot is dropped at random onto the wooden board and has to find its way to S7):

r.hop <- qlearn(HOP,10000,alpha=0.1,gamma=0.8,tgt.state=7) 
> r.hop
     [,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,]    0   51   64    0    0    0    0
[2,]    0    0   64    0    0    0    0
[3,]    0   51    0    0   80    0    0
[4,]    0    0   64    0   80   80    0
[5,]    0    0    0    0    0   80  100
[6,]    0    0    0    0   80    0  100
[7,]    0    0    0    0    0   80  100

The Q matrix that comes back encodes the best-value moves from each state (the “policy”). Here’s how you read it:

  • If you’re at s1 (first row), hop to s3 (biggest value in first row), then hop to s5 (go to row 3 and find biggest value), then hop to s7 (go to row 5 and find biggest value)
  • If you’re at s2, go right to s3, then hop to s5, then hop to s7
  • If you’re at s3, hop to s5, then hop to s7
  • If you’re at s4, go right to s5 OR hop to s6, then go right to s7
  • If you’re at s5, hop to s7
  • If you’re at s6, go right to s7
  • If you’re at s7, stay there (when you’re in the target state, the value function will not be able to pick out a “best action” because the best action is to do nothing)

Alternatively, the policy can be expressed as the best action from each of the 7 states: HOP, RIGHT, HOP, RIGHT, HOP, RIGHT, (STAY PUT)
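If you’d rather pull that policy out of the matrix programmatically, here’s a minimal sketch using the r.hop matrix above (note that which.max simply takes the first maximum when there’s a tie, as in row 4):

best.next <- apply(r.hop, 1, which.max)   # best next state from each state
best.next
# 3 3 5 5 7 7 7 for the matrix shown above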

Attempt 2: Use ReinforcementLearning Package

I also used the ReinforcementLearning package by Nicolas Proellochs (6/19/2017), described at https://cran.r-project.org/web/packages/ReinforcementLearning/ReinforcementLearning.pdf.

  • Input: 1) a definition of the environment, 2) a list of states, 3) a list of actions, and 4) control parameters: alpha (the learning rate; usually 0.1), gamma (the discount rate, which describes how important future rewards are; often 0.9, meaning a reward one step in the future is worth 90% of an immediate reward), and epsilon (the probability that you’ll try a random action; often 0.1)
  • Output: A State-Action Value matrix, which attaches a number to how good it is to be in a particular state and take an action. You can use it to determine the highest value action from each state. (It contains the same information as the Q-matrix from Attempt 1, but you don’t have to infer the action from the destination it brings you to.)
  • Pros: Relatively straightforward. Allows you to specify epsilon, which controls the proportion of random actions you’ll explore as you create episodes and explore your environment. Cons: Requires manual setup of all state transitions and associated rewards.

First, I created an “environment” that describes 1) how the states will change when actions are taken, and 2) what rewards will be accrued when that happens. I assigned a reward of -1 to every ordinary action; the special cases are landing on S1 (-10), landing on S4 (-3), and landing on S7 (+10). To be perfectly consistent with Attempt 1, I could have used 0.01 instead of -1, but the results will be similar. The values you choose for rewards are somewhat arbitrary, but you do need to make sure there’s a comparatively large positive reward at your target state and negative rewards for states you want to avoid or that are physically impossible.

my.env <- function(state, action) {
   next_state <- state
   reward <- -1    # default reward for ordinary moves (set first so it doesn't overwrite the special cases below)

   if (state == state("s1") && action == "right")  { next_state <- state("s2") }
   if (state == state("s1") && action == "hop")    { next_state <- state("s3") }

   if (state == state("s2") && action == "left")  {
	next_state <- state("s1"); reward <- -10 }   # back into the hole
   if (state == state("s2") && action == "right") { next_state <- state("s3") }
   if (state == state("s2") && action == "hop")   {
	next_state <- state("s4"); reward <- -3 }    # onto the sticky patch

   if (state == state("s3") && action == "left")  { next_state <- state("s2") }
   if (state == state("s3") && action == "right") {
	next_state <- state("s4"); reward <- -3 }    # onto the sticky patch
   if (state == state("s3") && action == "hop")   { next_state <- state("s5") }

   if (state == state("s4") && action == "left")  { next_state <- state("s3") }
   if (state == state("s4") && action == "right") { next_state <- state("s5") }
   if (state == state("s4") && action == "hop")   { next_state <- state("s6") }

   if (state == state("s5") && action == "left")  {
	next_state <- state("s4"); reward <- -3 }    # onto the sticky patch
   if (state == state("s5") && action == "right") { next_state <- state("s6") }
   if (state == state("s5") && action == "hop")   { next_state <- state("s7") }

   if (state == state("s6") && action == "left")  { next_state <- state("s5") }
   if (state == state("s6") && action == "right") { next_state <- state("s7") }

   # reaching the target from any other state earns the big reward
   if (next_state == state("s7") && state != state("s7")) { reward <- 10 }

   out <- list(NextState = next_state, Reward = reward)
   return(out)
}

Next, I installed and loaded up the ReinforcementLearning package and ran the RL simulation:

install.packages("ReinforcementLearning")
library(ReinforcementLearning)
states <- c("s1", "s2", "s3", "s4", "s5", "s6", "s7")
actions <- c("left","right","hop")
data <- sampleExperience(N=3000,env=my.env,states=states,actions=actions)
control <- list(alpha = 0.1, gamma = 0.8, epsilon = 0.1)
model <- ReinforcementLearning(data, s = "State", a = "Action", r = "Reward", 
      s_new = "NextState", control = control)

Now we can see the results:

> print(model)
State-Action function Q
         hop     right      left
s1  2.456741  1.022440  1.035193
s2  2.441032  2.452331  1.054154
s3  4.233166  2.469494  1.048073
s4  4.179853  4.221801  2.422842
s5  6.397159  4.175642  2.456108
s6  4.217752  6.410110  4.223972
s7 -4.602003 -4.593739 -4.591626

Policy
     s1      s2      s3      s4      s5      s6      s7
  "hop" "right"   "hop" "right"   "hop" "right"  "left" 

Reward (last iteration)
[1] 223

The recommended policy is: HOP, RIGHT, HOP, RIGHT, HOP, RIGHT, (STAY PUT). The printed policy lists “left” for s7, but notice that all three actions at s7 have nearly identical (negative) values: once you’re at the target, no move is genuinely better than any other, so in practice you stay put.

If you tried this example and it didn’t produce the same response, don’t worry! Model-free reinforcement learning is done by simulation, and when you used the sampleExperience function, you generated a different set of state transitions to learn from. You may need more samples, or to tweak your rewards structure, or both.
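If you want a run that you can repeat exactly, one option (not something the original analysis does) is to fix the random seed before sampling the experience:

set.seed(42)   # any fixed seed makes the sampled experience reproducible
data <- sampleExperience(N=3000, env=my.env, states=states, actions=actions)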

Analytic Hierarchy Process (AHP) with the ahp Package

On my December to-do list, I had “write an R package to make analytic hierarchy process (AHP) easier” — but fortunately gluc beat me to it, saving me tons of time that I then spent using AHP on an actual research problem. First of all, thank you for writing the new ahp package! Next, I’d like to show everyone just how easy this package makes performing AHP and displaying the results. We will use the Tom, Dick, and Harry example described on Wikipedia: the goal is to choose a new employee, and you can pick Tom, Dick, or Harry. Read the problem statement on Wikipedia before proceeding.

AHP is a method for multi-criteria decision making that breaks the problem down based on decision criteria, subcriteria, and alternatives that could satisfy a particular goal. The criteria are compared to one another, the alternatives are compared to one another based on how well they comparatively satisfy the subcriteria, and then the subcriteria are examined in terms of how well they satisfy the higher-level criteria. The Tom-Dick-Harry problem is a simple hierarchy: only one level of criteria separates the goal (“Choose the Most Suitable Leader”) from the alternatives (Tom, Dick, or Harry):

(Figure: the Tom-Dick-Harry decision hierarchy, with the goal at the top, the criteria in the middle, and the three candidates at the bottom.)

To use the ahp package, the most challenging part is setting up the YAML file with your hierarchy and your rankings. THE MOST IMPORTANT THING TO REMEMBER IS THAT THE COLUMN IN WHICH A WORD STARTS MATTERS (indentation is significant). This feels like FORTRAN. YAML experts may be appalled that I just didn’t know this, but I didn’t, and most of the first 20 hours I spent stumbling through the ahp package went toward reaching this very critical conclusion. The YAML AHP input file requires you to specify 1) the alternatives (along with some variables that describe the alternatives; I didn’t use them in this example, but I’ll post a second example that does) and 2) the goal hierarchy, which includes 2A) pairwise comparisons of all the criteria against one another FIRST, and then 2B) pairwise comparisons of the alternatives with respect to each criterion. I saved my YAML file as tomdickharry.txt and put it in my C:/AHP/artifacts directory:

#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1= not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################
# Goal Section
#
Goal:
# A Goal HAS preferences (within-level comparison) and HAS Children (items in level)
  name: Choose the Most Suitable Leader
  preferences:
    # preferences are defined pairwise
    # 1 means: A is equal to B
    # 9 means: A is highly preferable to B
    # 1/9 means: B is highly preferable to A
    - [Experience, Education, 4]
    - [Experience, Charisma, 3]
    - [Experience, Age, 7]
    - [Education, Charisma, 1/3]
    - [Education, Age, 3]
    - [Age, Charisma, 1/5]
  children: 
    Experience:
      preferences:
        - [Tom, Dick, 1/4]
        - [Tom, Harry, 4]
        - [Dick, Harry, 9]
      children: *alternatives
    Education:
      preferences:
        - [Tom, Dick, 3]
        - [Tom, Harry, 1/5]
        - [Dick, Harry, 1/7]
      children: *alternatives
    Charisma:
      preferences:
        - [Tom, Dick, 5]
        - [Tom, Harry, 9]
        - [Dick, Harry, 4]
      children: *alternatives
    Age:
      preferences:
        - [Tom, Dick, 1/3]
        - [Tom, Harry, 5]
        - [Dick, Harry, 9]
      children: *alternatives
#
# End of Goal Section
#####################################

Next, I installed gluc’s ahp package and a helper package, data.tree, then loaded them into R:

devtools::install_github("gluc/ahp", build_vignettes = TRUE)
install.packages("data.tree")

library(ahp)
library(data.tree)

Running the calculations was ridiculously easy:

setwd("C:/AHP/artifacts")
myAhp <- LoadFile("tomdickharry.txt")
Calculate(myAhp)

And then generating the output was also ridiculously easy:

> GetDataFrame(myAhp)
                                  Weight  Dick   Tom Harry Consistency
1 Choose the Most Suitable Leader 100.0% 49.3% 35.8% 14.9%        4.4%
2  ¦--Experience                   54.8% 39.3% 11.9%  3.6%        3.2%
3  ¦--Education                    12.7%  1.0%  2.4%  9.2%        5.6%
4  ¦--Charisma                     27.0%  5.2% 20.1%  1.7%        6.1%
5  °--Age                           5.6%  3.8%  1.5%  0.4%        2.5%
> 
> print(myAhp, "weight", filterFun = isNotLeaf)
                        levelName     weight
1 Choose the Most Suitable Leader 1.00000000
2  ¦--Experience                  0.54756924
3  ¦--Education                   0.12655528
4  ¦--Charisma                    0.26994992
5  °--Age                         0.05592555
> print(myAhp, "weight")
                         levelName     weight
1  Choose the Most Suitable Leader 1.00000000
2   ¦--Experience                  0.54756924
3   ¦   ¦--Tom                     0.21716561
4   ¦   ¦--Dick                    0.71706504
5   ¦   °--Harry                   0.06576935
6   ¦--Education                   0.12655528
7   ¦   ¦--Tom                     0.18839410
8   ¦   ¦--Dick                    0.08096123
9   ¦   °--Harry                   0.73064467
10  ¦--Charisma                    0.26994992
11  ¦   ¦--Tom                     0.74286662
12  ¦   ¦--Dick                    0.19388163
13  ¦   °--Harry                   0.06325174
14  °--Age                         0.05592555
15      ¦--Tom                     0.26543334
16      ¦--Dick                    0.67162545
17      °--Harry                   0.06294121
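As a quick check on how these two tables relate: an alternative’s overall score in GetDataFrame() is the sum, over the criteria, of (criterion weight) × (that alternative’s local weight). For example, Dick’s 49.3%, computed from the weights printed above:

0.54756924*0.71706504 + 0.12655528*0.08096123 +
  0.26994992*0.19388163 + 0.05592555*0.67162545
# roughly 0.493, i.e. the 49.3% shown for Dick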

You can also generate very beautiful output with the command below (but you’ll have to run the example yourself if you want to see how fantastically it turns out — maybe that will provide some motivation!)

ShowTable(myAhp)

I’ll post soon with an example of how to use AHP preference functions in the Tom, Dick, & Harry problem.

My First (R) Shiny App: An Annotated Tutorial

Image Credit: Doug Buckley of http://hyperactive.to

I’ve been meaning to learn Shiny for 2 years now… and thanks to a fortuitous email from @ImADataGuy this morning and a burst of wild coding energy about 5 hours ago, I am happy to report that I have completely fallen in love again. The purpose of this post is to share how I got my first Shiny app up and running tonight on localhost, how I deployed it to the http://shinyapps.io service, and how you can create a “Hello World” style program of your own that actually works on data that’s meaningful to you.

If you want to create a “Hello World!” app with Shiny (and your own data!) just follow these steps:

0. Install R 3.2.0+ first! This will save you time.
1. I signed up for an account at http://shinyapps.io.
2. Then I clicked the link in the email they sent me.
3. That allowed me to set up my https://radziwill.shinyapps.io location.
4. Then I followed the instructions at https://www.shinyapps.io/admin/#/dashboard
(This page has SPECIAL SECRET INFO CUSTOMIZED JUST FOR YOU ON IT!!) I had lots 
of problems with devtools::install_github('rstudio/shinyapps') - Had to go 
into my R directory, manually delete RCurl and digest, then 
reinstall both RCurl and digest... then installing shinyapps worked.
Note: this last command they tell you to do WILL NOT WORK because you do not have an app yet! 
If you try it, this is what you'll see:
> shinyapps::deployApp('path/to/your/app')
Error in shinyapps::deployApp("path/to/your/app") : 
C:\Users\Nicole\Documents\path\to\your\app does not exist
5. Then I went to http://shiny.rstudio.com/articles/shinyapps.html and installed rsconnect.
6. I clicked on my name and gravatar in the upper right hand corner of the 
https://www.shinyapps.io/admin/#/dashboard window I had opened, and then clicked 
"tokens". I realized I'd already done this part, so I skipped down to read 
"A Demo App" on http://shiny.rstudio.com/articles/shinyapps.html
7. Then, I re-installed ggplot2 and shiny using this command:
install.packages(c('ggplot2', 'shiny'))
8. I created a new directory (C:/Users/Nicole/Documents/shinyapps) and used
setwd to get to it.
9. I pasted the code at http://shiny.rstudio.com/articles/shinyapps.html to create two files, 
server.R and ui.R, which I put into my new shinyapps directory 
under a subdirectory called demo. The subdirectory name IS your app name.
10. I typed runApp("demo") into my R console, and voila! The GUI appeared in 
my browser window on my localhost.
-- Don't just close the browser window to get the Shiny app to stop; R will hang, 
and the first time this happened I had to use Task Manager and kill R.
--- Instead, use the main menu and do Misc -> Stop Current Computation.
11. I did the same with the "Hello Shiny" code at http://shiny.rstudio.com/articles/shinyapps.html. 
But what I REALLY want is to deploy a hello world app with MY OWN data. You know, something that's 
meaningful to me. You probably want to do a test app with data that is meaningful to you... here's 
how you can do that.
12. A quick search shows that I need jennybc's (Github) googlesheets package to get 
data from Google Drive viewable in my new Shiny app.
13. So I tried to get the googlesheets package with this command:
devtools::install_github('jennybc/googlesheets')
but then found out it requires R version 3.2.0. If you already have 3.2.0 you can skip 
to step 16 now.
14. So I reinstalled R using the installr package (highly advised if you want to 
overcome the agony of upgrading on windows). 
See http://www.r-statistics.com/2013/03/updating-r-from-r-on-windows-using-the-installr-package/
for info -- all it requires is that you type installR() -- really!
15. After installing R I restarted my machine. This is probably the first time in a month that 
I've shut all my browser windows, documents, spreadsheets, PDFs, and R sessions. I got the feeling 
that this made my computer happy.
16. Then, I created a Google Sheet with my data. While viewing that document, I went to 
File -> "Publish to the Web". I also discovered that my DOCUMENT KEY is that 
looooong string in the middle of the address, so I copied it for later:
1Bs0OH6F-Pdw5BG8yVo2t_VS9Wq1F7vb_VovOmnDSNf4
17. Then I created a new directory in C:/Users/Nicole/Documents/shinyapps to test out 
jennybc's googlesheets package, and called it jennybc
18. I copied and pasted the code in her server.R file and ui.R file
from https://github.com/jennybc/googlesheets/tree/master/inst/shiny-examples/01_read-public-sheet 
into files with the same names in my jennybc directory
19. I went into my R console, used getwd() to make sure I was in the
C:/Users/Nicole/Documents/shinyapps directory, and then typed
 runApp("jennybc")
20. A browser window popped up on localhost with her test Shiny app! I played with it, and then 
closed that browser tab.
21. When I went back into the R console, it was still hanging, so I went to the menu bar 
to Misc -> Stop Current Computation. This brought my R prompt back.
22. Now it was time to write my own app. I went to http://shiny.rstudio.com/gallery/ and
found a layout I liked (http://shiny.rstudio.com/gallery/tabsets.html), then copied the 
server.R and ui.R code into C:/Users/Nicole/Documents/shinyapps/my-hello -- 
and finally, tweaked the code and engaged in about 100 iterations of: 1) edit the two files, 
2) type runApp("my-hello") in the R console, 3) test my Shiny app in the 
browser window, 4) kill browser window, 5) do Misc -> Stop Current Computation 
in R. ALL of the computation happens in server.R, and all the display happens in ui.R:

server.R:

library(shiny)
library(googlesheets)
library(DT)

my_key <- "1Bs0OH6F-Pdw5BG8yVo2t_VS9Wq1F7vb_VovOmnDSNf4"
my_ss <- gs_key(my_key)
my_data <- gs_read(my_ss)

shinyServer(function(input, output, session) {
 output$plot <- renderPlot({
 my_data$type <- ordered(my_data$type,levels=c("PRE","POST"))
 boxplot(my_data$score~my_data$type,ylim=c(0,100),boxwex=0.6)
 })
 output$summary <- renderPrint({
 aggregate(score~type,data=my_data, summary)
 })
 output$the_data <- renderDataTable({
 datatable(my_data)
 })

})

ui.R:

library(shiny)
library(shinythemes)
library(googlesheets)

shinyUI(fluidPage(
 
 # Application title
 titlePanel("Nicole's First Shiny App"),
 
 # Sidebar with controls to select the random distribution type
 # and number of observations to generate. Note the use of the
 # br() element to introduce extra vertical spacing
 sidebarLayout(
 sidebarPanel(
     helpText("This is my first Shiny app!! It grabs some of my data 
from a Google Spreadsheet, and displays it here. I      
also used lots of examples from"),
     h6(a("http://shiny.rstudio.com/gallery/", 
href="http://shiny.rstudio.com/gallery/", target="_blank")),
     br(),
     h6(a("Click Here for a Tutorial on How It Was Made", 
href="https://qualityandinnovation.com/2015/12/08/my-first-shin     
y-app-an-annotated-tutorial/",
      target="_blank")),
      br()
 ),
 
 # Show a tabset that includes a plot, summary, and table view
 # of the generated distribution
 mainPanel(
    tabsetPanel(type = "tabs", 
    tabPanel("Plot", plotOutput("plot")), 
    tabPanel("Summary", verbatimTextOutput("summary")), 
    tabPanel("Table", DT::dataTableOutput("the_data"))
 )
 )
 )
))


23. Once I decided my app was good enough for my practice round, it was time to 
deploy it to the cloud.
24. This part of the process requires the shinyapps and dplyr 
packages, so be sure to install them:

devtools::install_github('hadley/dplyr')
library(dplyr)
devtools::install_github('rstudio/shinyapps')
library(shinyapps)
25. To deploy, all I did was this: setwd("C:/Users/Nicole/Documents/shinyapps/my-hello/")
deployApp()

CHECK OUT MY SHINY APP!!

A Chat with Jaime Casap, Google’s Chief Education Evangelist


“The classroom of the future does not exist!”

That’s the word from Jaime Casap (@jcasap), Google’s Chief Education Evangelist — and a highly anticipated new Business Innovation Factory (BIF) storyteller for 2015. In advance of the summit, which takes place on September 16 and 17, Morgan and I had the opportunity to chat with Jaime about a form of business model innovation that’s close to our hearts – improving education. He’s a native New Yorker, so he’s naturally outspoken and direct. But his caring and considerate tone makes it clear he’s got everyone’s best interests at heart.

At Google, he’s the connector and boundary spanner… the guy the organization trusts to “predict the future” where education is concerned. He makes sure that the channels of communication are open between everyone working on education-related projects. Outside of Google, he advocates smart and innovative applications of technology in education that will open up educational opportunities for everyone.  Most recently, he visited the White House on this mission.


The current educational system is not broken, he says. It’s doing exactly what it was designed to do: prepare workers for a hierarchical, industrialized production economy. The problem is that the system cannot be high-performing, because it’s not doing what we need it to do for the coming decades, which require leveraging the skills and capabilities of everyone.

He points out that low-income minorities now have a 9% chance of graduating from college… whereas a couple decades ago, they had a 6% chance. This startling statistic reflects an underlying deficiency in how education is designed and delivered in this country today.

So how do we fix it?

“Technology gives us the ability to question everything,” he says.  As we shift to performance-based assessments, we can create educational experiences that are practical, iterative, and focused on continuous improvement — where we measure iteration, innovation, and sustained incremental progress.

Measuring these, he says, will be a lot more interesting than what we tend to measure now: whether a learner gets something right the first time — or how long it took for a competency to emerge. From this new perspective, we’ll finally be able to answer questions like: What is an excellent school? What does a high-performing educational system look (and feel) like?

Jaime’s opportunity-driven vision for inclusiveness  is an integral part of Google’s future. And you can hear more about his personal story and how it shaped this vision next month at BIF.

If you haven’t made plans already to hear Jaime and the other storytellers at BIF, there may be a few tickets left — but this event always sells out! Check the BIF registration page and share a memorable experience with the BIF community this year: http://www.businessinnovationfactory.com/summit/register

A 15-Week Course to Introduce Machine Learning and Intelligent Systems in R

Every fall, I teach a survey course for advanced undergraduates that covers one of the most critical themes in data science: intelligent systems. According to the IEEE, these are “systems that perceive, reason, learn, and act intelligently.” While data science is focused on analyzing data (often quite a lot of it) to make effective data-driven decisions, intelligent systems use those decisions to accomplish goals. As more and more devices join the Internet of Things (IoT), collecting data and sharing it with other “things” to make even more complex decisions, the role of intelligent systems will become even more pronounced.

So by the end of my course, I want students to have some practical skills that will be useful in analyzing, specifying, building, testing, and using intelligent systems:

  • Know whether a system they’re building (or interacting with) is intelligent… and how it could be made more intelligent
  • Be sensitized to ethical, social, political, and legal aspects of building and using intelligent systems 
  • Use regression techniques to uncover relationships in data using R (including linear, nonlinear, and neural network approaches)
  • Use classification and clustering methods to categorize observations (neural networks, k-means/KNN, Naive Bayes, support vector machines)
  • Be able to handle structured and unstructured data, using both supervised and unsupervised approaches
  • Understand what “big data” is, know when (and when not) to use it, and be familiar with some tools that help them deal with it

My course uses Brett Lantz’s VERY excellent book, Machine Learning with R (which is now also available in Kindle format), for which I provide effusive praise at https://qualityandinnovation.com/2014/04/14/the-best-book-ever-on-machine-learning-and-intelligent-systems-in-r/

One of the things I like the MOST about my class is that we actually cover the link between how your brain works and how neural networks are set up. (Other classes and textbooks typically just show you a picture of a neuron superimposed with inputs, a summation, an activation, and outputs, as if to say, “See? They’re pretty much the same!”) But it goes much deeper than this… we actually model error-correction learning and observational learning through the different algorithms we employ. To make this point real, we have an amazing guest lecture every year by Dr. Anne Henriksen, a faculty member in the Department of Integrated Science and Technology at JMU who also does research in neuroscience at the University of Virginia. After we do an exercise where we use a spreadsheet to iteratively determine the equation for a single-layer perceptron’s decision boundary, we watch a video by Dr. Mark Gluck that shows how what we’re doing is essentially error-correction learning… and then he explains the chemistry that supports the process. We’re going to videotape Anne’s lecture this fall so you can see it!
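To make the error-correction idea concrete, here’s a minimal R sketch of single-layer perceptron learning. This is a toy AND-gate example I made up for illustration; it is not the classroom spreadsheet exercise itself:

x <- matrix(c(0,0, 0,1, 1,0, 1,1), ncol=2, byrow=TRUE)  # inputs
y <- c(0, 0, 0, 1)                                      # targets (logical AND)
w <- c(0, 0); b <- 0; eta <- 0.1                        # weights, bias, learning rate

for (epoch in 1:25) {
  for (i in 1:nrow(x)) {
    y.hat <- as.numeric(sum(w * x[i,]) + b > 0)  # step activation
    err   <- y[i] - y.hat                        # error-correction signal
    w     <- w + eta * err * x[i,]               # nudge weights toward the target
    b     <- b + eta * err                       # nudge the bias the same way
  }
}
w; b   # the learned decision boundary is w[1]*x1 + w[2]*x2 + b = 0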

Here is the syllabus I am using for Fall 2015. Please feel free to use it (in full or in part) if you are planning a similar class… but do let me know!

Getting Deep With Value Creation

Image Credit: Doug Buckley of http://hyperactive.to

In his November post, ASQ CEO Paul Borawski asks “What new fields or disciplines could most reap the benefits of quality tools and techniques?”

He notes that although the tradition of quality assurance, control, and improvement emerged from manufacturing, the techniques are now widely acknowledged and applied in many fields such as healthcare, education, and service. So what’s next?

One of the things I like to do when I’m trying to be a futurist is to go back to first principles. Explore the basis for why we do what we do… what makes us tick… why we like improving quality and making processes more effective, more efficient. And in doing so, while reflecting on Paul’s question, I think what’s next is…

Getting Deep with Value Creation.

As quality professionals we spend most of our time and energy figuring out how to create value. Either we’re improving the systems we work with to tweak out additional value, or we’re working with customers and stakeholders to figure out how to provide them with more value, or we’re focusing on innovation – figuring out how to create value for the future — reliably, consistently, and according to new and unexpected business models.

To me, this starts with me. How can I improve myself so that I’m a kickass vessel for the delivery of value? How can I use quality principles and quality tools to find – and align myself – with what I’m supposed to be doing at any given time? How can I become most productive in terms of the deep, meaningful value I add to those around me?

I know that others feel the same way. Marc Kelemen, a member of the ASQ Board of Directors, is leading a charge to develop a Body of Knowledge for Social Responsibility. He recognizes that the personal element is crucial if we’re trying to become socially responsible as teams, and organizations, and communities. So we’ll be working on this over the next few months… figuring out how to get deep with the notion of value creation, and how we can do it within ourselves so that we are better positioned to help others do it too.

Peripheral Visioning

Image Credit: Doug Buckley of http://hyperactive.to

Somehow, some way, over the course of too many years growing up staring into a computer screen — my eyesight became much-less-than-perfect.

Only I didn’t know it. I thought everyone lived in a slightly hazy, cloudy world, where all the colors naturally blended into postmodern mosaics of distant trees and mountains. It was never a problem for me until the day about ten years ago that I was headed east on I-64 into Charlottesville, and coming over the hill into town, struggled to identify what that giant number on the speed limit sign was. I squinted, closed one eye at a time, and figured that the number was probably 55… so I slowed down. Then I realized:

They probably make those speed limit signs big enough for anyone to see.

I got scared, and drove straight to the walk-in eyeglass clinic, where I explained my predicament. They quickly made room in their schedule for an emergency appointment. Usually afterwards, they make you wait 24 hours to pick up your new glasses, but with my 20/400 vision, they wouldn’t let me leave without them. Fortunately, my eyesight could be corrected to almost 20/20, which was nice. I walked out of the store with my new glasses on — and into an amazing, sparkly new world! The trees all had individual leaves on them!! Cars were so shiny! I could read license plates — from MY driver’s seat!

But immediately, I recognized how I’d managed to drive for all those years with bad vision!

Because I couldn’t really see what was ahead of me, I just focused my vision off and to the right side of the road, on the ground. I kept the road and the cars in my peripheral vision, so I could easily sense where they were, and make accommodations. If I tried to look straight ahead, I got frustrated quickly, emotionally wrapped around my own axle, because I couldn’t see any of the detail… and ultimately, that state of being wasn’t safe for driving. I couldn’t focus on what I was worried about, or I’d be a danger on the road.

Not long after that, I realized how effective a strategy this was in my work — because there’s so much change and uncertainty, it’s impossible to look directly ahead of you and see clearly. And that can be scary and unsettling! My solution was: if there was some big goal I was trying to achieve, the best way to reduce my angst and quell my (sometimes very subtle) emotional stranglehold on myself — was to focus on something else. Something just as important, maybe even something that contributed to the main goal, but something I was not quite so emotionally wrangled by!

I started calling this my “peripheral visioning” technique. It actually helped me achieve my primary goals – because by consciously setting my primary goal to the side, and focusing on something related to it (or maybe in support of it), I was still making progress but I wasn’t experiencing as much stress. And as a result, I was more open to the serendipity and the chance encounters – with people and with information – that helped me make progress on the primary goal!

Set an intention, get your ducks in a row, and then get out of your own way by focusing on something else!
