The Value of Defining Context

Image Credit: Doug Buckley of http://hyperactive.to

The most important stage of problem-solving in organizations is often one of the earliest: getting everyone on the same page by defining the concepts, processes, and desired outcomes that are central to understanding the problem and formulating a solution. (“Everyone” can be the individuals on a project team, or the individuals that contribute actions to a process, or both.) Too often, we assume that the others around us see and experience the world the same way we do. In many cases, our assessments are not too far apart, which is how most people can get away with making this assumption on a regular basis.

In fact, some people experience things so differently that they don’t even “picture” anything in their minds. Can you believe it?

I first realized this divergence in the work context a few years ago, when a colleague and I were advising a project at a local social services office. We asked our students to document the process being used to handle claims. Nearly ten people were part of this claims-processing activity, and our students interviewed all of them, discovering that each person had a remarkably different idea about the process they were all engaged in! No wonder the claims processing time was nearly two months long.

We helped them all — literally — get onto the same page, and once they all had the same mental map of the process, time-in-system for each claim dropped to 10 days. (This led us to the quantum-esque conclusion that there is no process until it is observed.)

Today, I read about how mathematician Keith Devlin revolutionized the process of intelligence gathering after 9/11 using this same approach… by going back to one of the first principles he learned in his academic training:

So what had I done? Nothing really — from my perspective. My task was to find a way of analyzing how context influences data analysis and reasoning in highly complex domains involving military, political, and social contexts. I took the oh-so-obvious (to me) first step. I needed to write down as precise a mathematical definition as possible of what a context is. It took me a couple of days…I can’t say I was totally satisfied with it…but it was the best I could do, and it did at least give me a firm base on which to start to develop some rudimentary mathematical ideas.

The fairly large group of really smart academics, defense contractors, and senior DoD personnel spent the entire hour of my allotted time discussing that one definition. The discussion brought out that all the different experts had a different conception of what a context is — a recipe for disaster.

So what had I given them? First, I asked the question “What is a context?” Since each person in the room besides me had a good working concept of context — different ones, as I just noted — they never thought to write down a formal definition. It was not part of what they did. And second, by presenting them with a formal definition, I gave them a common reference point from which they could compare and contrast their own notions. There we had the beginnings of disaster avoidance.

Getting people to very precisely understand the definitions, concepts, processes, and desired outcomes that are central to a problem might take some time and effort, but it is always extremely valuable.

When you face a situation like this in mathematics, you spend a lot of time going back to the basics. You ask questions like, “What do these words mean in this context?” and, “What obvious attempts have already been ruled out, and why?” More deeply, you’d ask, “Why are these particular open questions important?” and, “Where do they see this line of inquiry leading?”

(You can read the full article about Devlin, and more important lessons from mathematical thinking, here.)

Using xda with googlesheets in R

Image Credit: Doug Buckley of http://hyperactive.to

Want to do a quick exploratory data analysis in R of your data that’s stored in a spreadsheet on Google Drive? You’re in luck, because now you can use the new xda package in conjunction with Jenny Bryan’s googlesheets. There are some quirks, though, and that’s what this post is all about.

Before proceeding, you should review this recent article from R-Bloggers called “Introducing xda”.

First, be sure to install the googlesheets and xda packages. Although googlesheets is on CRAN, xda is not, and you’ll have to bring it in directly from github. You can actually do the same for googlesheets if you like:

install.packages("devtools")
library(devtools)
install_github("jennybc/googlesheets")
install_github("ujjwalkarn/xda")
library(googlesheets)
library(xda)

Next, you’ll have to show R how to access your Google spreadsheet. While you are looking at your spreadsheet, go to File -> Publish to the Web. The URL that’s in the text box is the one you want to capture. Just to make sure it works, copy and paste it into a new browser address window and see if you can display your spreadsheet in your browser.

If you want to import the data at https://docs.google.com/spreadsheets/d/1DO0ksD8d-rn_j2Yn7DQKZDPBrhrvZTpgszewxokjWKU/pubhtml into R, for example, you’ll need to know the spreadsheet’s key. That’s the long string of unintelligible numbers and letters between the “d” and the “pubhtml”. So, my key would be “1DO0ksD8d-rn_j2Yn7DQKZDPBrhrvZTpgszewxokjWKU” — which you’ll see in this next block of code:

> my.gs <- gs_key("1DO0ksD8d-rn_j2Yn7DQKZDPBrhrvZTpgszewxokjWKU") 
> my.data <- gs_read(my.gs) # Retrieves data from googlesheets and places it into an R object. 
> my.df <- as.data.frame(my.data)  # Important! xda needs you to extract only the data in a data frame.
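
If you don’t want to copy the key out of the URL by hand, a little base-R regex can do it for you. This is just an illustrative helper of my own (extract_key is not part of googlesheets; gs_key() only needs the key string):

extract_key <- function(url) {
  # capture everything between "/d/" and the next "/" in the published URL
  sub("^.*/d/([^/]+).*$", "\\1", url)
}

extract_key("https://docs.google.com/spreadsheets/d/1DO0ksD8d-rn_j2Yn7DQKZDPBrhrvZTpgszewxokjWKU/pubhtml")
# [1] "1DO0ksD8d-rn_j2Yn7DQKZDPBrhrvZTpgszewxokjWKU"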

Now, you can access your data. Try head(my.df) to make sure you’ve imported it properly.

Next, it’s time for exploratory data analysis. There are three commands available:

  • numSummary – takes a data frame as an argument, provides descriptive statistics, quantiles, and missing data info for quantitative variables
  • charSummary – takes a data frame as an argument, provides counts, missing data info, and the number of unique values for character (categorical) variables
  • bivariate – takes a data frame and the names of two quantitative variables as arguments, and performs a quick bivariate analysis (giving it two categorical variables, or one categorical and one quantitative variable, will throw an error)

Here’s what happens when you run those commands on the data you just loaded in from your Google spreadsheet:

> numSummary(my.df)
               n   mean    sd median   max  min  mode miss miss%    1%   5%   25%   50%   75%   90%   95%   99%
obs          200 100.50 57.88  100.5 200.0  1.0   1.0    0     0  2.99 11.0  50.8 100.5 150.2 180.1 190.0 198.0
heartrates   200  73.01  7.43   73.0  96.3 53.4  71.2    0     0 56.49 61.2  68.4  73.0  77.4  82.6  85.7  90.3
systolics    200 139.27 29.27  138.0 221.0 59.0 139.0    0     0 79.98 96.0 117.0 138.0 160.0 177.2 188.1 205.0
diastolics   200  87.76  9.74   87.7 116.4 62.4  85.2    0     0 66.01 72.2  81.9  87.7  93.7 100.3 104.5 108.3
bmis         200  25.53  3.06   25.0  33.1 18.4  24.7    0     0 19.00 21.0  23.5  25.0  27.7  29.6  31.2  32.8
ages         200  44.41 14.59   45.0  70.0 18.0  30.0    0     0 18.00 22.0  32.0  45.0  57.0  64.1  67.0  70.0
heartpm      200  72.26  3.55   72.2  83.8 64.2  74.2    0     0 64.72 66.2  69.8  72.2  74.2  76.4  78.7  81.4
fitnesslevel 200   2.62  1.17    3.0   4.0  1.0   4.0    0     0  1.00  1.0   2.0   3.0   4.0   4.0   4.0   4.0

> charSummary(my.df)
          n miss miss% unique
genders 200    0     0      2
smokers 200    0     0      2
group   200    0     0      4

> bivariate(my.df,'heartrates','bmis')
     bin_bmis min_heartrates max_heartrates mean_heartrates
1   (18.3,22]          53.40          85.60           72.80
2   (22,25.7]          55.70          90.70           72.87
3 (25.7,29.4]          60.30          96.30           73.45
4 (29.4,33.1]          56.50          90.30           72.46

Observations:

  • There is a fourth “Plot” command, but I couldn’t get it to work on any googlesheets data. The xda package is looking for class(range) to be anything other than “function”, but it was “function” for every sheet I attempted to load.
  • There really should be an extra column in xda that displays the enumeration of all the unique values for the factors. It’s great to know how many unique values there are, but I’d love to be reminded of what they are too, unless there are too many of them. (A workaround is sketched below.)
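
In the meantime, here’s a minimal workaround sketch. listUniques is my own hypothetical helper, not part of xda; it enumerates the unique values behind charSummary’s counts, skipping any column with too many levels:

listUniques <- function(df, max.levels = 10) {
  # find the character/factor columns (the ones charSummary reports on)
  char.cols <- names(df)[sapply(df, function(x) is.character(x) || is.factor(x))]
  # enumerate each column's unique values, unless there are too many to be useful
  lapply(setNames(char.cols, char.cols), function(col) {
    u <- unique(as.character(df[[col]]))
    if (length(u) > max.levels) paste(length(u), "unique values (too many to list)") else u
  })
}

listUniques(my.df)  # should enumerate the levels of genders, smokers, and group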

Please share your experiences using xda & googlesheets together in the comments! Thanks!

Innovation Tips for Strategic Planning

Image Credit: Doug Buckley of http://hyperactive.to

Over the past 15 years, I’ve helped several organizations with continuous improvement initiatives at the strategic, executive level. There are a lot of themes that keep appearing and reappearing, so the purpose of this post is to call out just a few and provide some insights into how to deal with them!

These come up when you are engaged in strategic planning and when you are planning operations (to ensure that processes and procedures ultimately satisfy strategic goals), and are especially prominent when you’re trying to develop or use Key Performance Indicators (KPIs) and other metrics or analytics.

 

1) How do you measure innovation? Before you pick metrics, recognize that the answer to this question depends on how you articulate the strategic goals for your innovation outcomes. Do you want to:

  • Keep up with changing technology?
  • Develop a new product/technology?
  • Lead your industry in developing best practices?
  • Pioneer new business models?
  • Improve quality of life for a particular group of people?

All of these will be measured in different ways! And it’s OK to not strategically innovate in one area or another… for example, you might not want to innovate your business model if technology development is your forte. Innovation is one of those things where you really don’t want to be everything to everyone… by design.

 

2) Do you distinguish between improving productivity and generating impact?

Improving quality (the ability to satisfy stated and implied needs) is good. Improving productivity (that is, what you can produce given the resources that you use) is also good. Reducing defects, reducing waste, and reducing variation (sometimes) are all very good things to do, and to report on. 

But who really cares about any improvements at all unless they have impact? It’s always necessary to tie your KPIs, which are often measures of outcomes, to metrics or analytics that can tell the story about why a particular improvement was useful — in the short term, and (hopefully also) in the long term.

You also have to balance productivity and impact. For example, maybe you run an ultra-efficient 24/7 Help Desk. Your effectiveness is exemplary… when someone submits a request, it’s always satisfied within 8 hours. But you discover that no tickets come in between Friday at 5pm and Monday at 8am. So all that time you spend staffing that Help Desk on the weekend? It’s non-value-added time, and could be eliminated to improve your productivity… but won’t influence your impact at all.

We just worked on a project where we consciously had to think about how all the following interact… and you should too:

  • Organizational Productivity: did your improvement help increase the capacity or capability for part of your organization? If so, then it could contribute to technical productivity or business productivity.
  • Technical Productivity: did the improvement remove a technical barrier to getting work done, or make it faster or less error-prone?
  • Business Productivity: did the improvement help you get the needs of the business satisfied faster or better?
  • Business Impact: Did the improvements that yielded organizational productivity benefits, technical productivity benefits, or business productivity benefits make a difference at the strategic level? (This answers the “so what” question. So you improved your throughput by 83%… so what? Who really cares, and why does this matter to them? Long-term, why does this awesome thing you did really matter?)
  • Educational/Workforce Development Impact: Were the lessons learned captured, fed back into the organization’s processes to close the loop on learning, or maybe even used to educate people who may become part of your workforce pipeline?

All of the categories above are interrelated. I don’t think you can have a comprehensive, innovation-focused analytics approach unless you address all of these.

 

3) Do you distinguish between participation and engagement?

Participation means you showed up. Engagement means you got involved, you stayed involved, your mission was advanced, or maybe you used this experience to help society. Too often, I see organizations that want to improve engagement, and all the metrics they select are really good at characterizing participation.

I’m writing a paper on this topic right now, but in the meantime (if you want to get a REALLY good sense of the difference between participation and engagement), read The Participatory Museum by Nina Simon. Yes, it is “about museums” — and yes, I know you’re in business or industry — and YES, this book really will provide you with amazing management insights. So read it!

Voice of the Customer (VOC) in the Internet of Things (IoT)

Image Credit: Doug Buckley of http://hyperactive.to

In February, I speculated about how our notion of “Voice of the Customer” (VoC) might change, since between 2016 and 2020 we are poised to witness the Internet of Things (IoT) grow from 6.4 billion to over 20 billion entities. The IoT will require us to re-think fundamental questions about how our interests as customers and stakeholders are represented. In particular,

  • What will the world look (and feel) like when everything you interact with has a “voice”?
  • How will the “Voice of the Customer” be heard when all of that customer’s stuff ALSO has a voice?
  • Will your stuff have “agency” — that is, the right to represent your needs and interests to other products and services?

Companies are also starting to envision how their strategies will morph in response to the new capabilities offered by the IoT. Starbucks CTO Gerri Martin-Flickinger, for example, shares her feelings in GeekWire, 3/24/2016:

“Imagine you’re on a road trip, driving across the country, and you pull into a Starbucks drive-through that you’ve never been to before,” she said at the Starbucks annual shareholders’ meeting Wednesday in Seattle. “We detect you’re a loyal customer and you buy about the same thing every day, at about the same time. So as you pull up to the order screen, we show you your order, and the barista welcomes you by name.”

“Does that sound crazy?” she asked. “No, actually, not really. In the coming months and years you will see us continue to deliver on a basic aspiration: to deliver technology that enhances the human connection.”

IoT to enhance the human connection? Sounds great, right? But hold on… that’s not what she’s talking about. She wants to enhance the feeling of connection between individuals and a company: nothing different than cultivating customer loyalty.

Her scenario is actually pretty appealing: I can imagine pulling up to a Starbucks drive-through and having everything disappear from the screen except for maybe 2 or 3 choices of things I’ve had before, and 1 or 2 choices for things I might be interested in. The company could actually work with me to help alleviate my sensory overload problems, reducing the stress I experience when presented with a hundred-item menu, and improving my user experience. IoT can help them hear my voice as a customer, and adapt to my preferences, but it won’t make them genuinely care about me any more than they already do (or don’t).

When I first read this article, I thought it would give me insight into a question I’ve had for a while now… but the question is still substantially unanswered: How can IoT facilitate capturing and responding to VoC in a way that really does cultivate human connection? John Hagel and John Seely Brown, in my opinion, are a little closer to the target:

[Examples] highlight a paradox inherent in connected devices and the Internet of Things: although technology aims to weave data streams without human intervention, its deeper value comes from connecting people. By offloading data capture and information transfer to the background, devices and applications can actually improve human relationships. Practitioners can use technology to get technology out of the way—to move data and information flows to the side and enable better human interaction…

Where is Quality Management Headed?

Image Credit: Doug Buckley of http://hyperactive.to

[This post is in response to ASQ’s February topic for the Influential Voices group, which asks: Where do you plan to take your career in 2016? What’s your view of careers in quality today—what challenges is this field facing? How can someone starting out in quality succeed?]

We are about to experience a paradigm shift in production, operations, and service: a shift that will have direct consequences on the principles and practice of design, development, and quality management. This “fourth industrial revolution” of cyber-physical systems will require more people in the workforce to understand quality principles associated with co-creation of value, and to develop novel business models. New technical skills will become critical for a greater segment of workers, including embedded software, artificial intelligence, data science, analytics, Big Data (and data quality), and even systems integration. 

Over the past 20 years, we moved many aspects of our work and our lives online. And in the next 20 years, the boundaries between the physical world and the online world will blur — to a point where the distinction may become unnecessary.

Here is a vignette to illustrate the kinds of changes we can anticipate. Imagine the next generation FitBit, the personalized exercise assistant that keeps track of the number of steps you walk each day. As early as 2020, this device will not only automatically track your exercise patterns, but will also automatically integrate that information with your personal health records. Because one-size-fits-all diet strategies have recently been shown to be predominantly unfounded, and researchers like Kevin Hall, Eran Elinav, and Eran Segal now know that the only truly effective diets are the ones that are customized to your body’s nutritional preferences [1], your FitBit and your health records will be able to talk to your food manager application to design the perfect diet for you (given your targets and objectives). Furthermore, to make it easy for you, your applications will also autonomously communicate with your refrigerator and pantry (to monitor how much food you have available), your local grocery store, and your calendar app so that food deliveries will show up when and only when you need to be restocked. You’re amazed that you’re spending less on food, less of it is going to waste, and you never have to wonder what you’re going to make for dinner. Your local grocery store is also greatly rewarded, not only for your loyalty, but because it can anticipate the demand from you and everyone else in your community – and create specials, promotions, and service strategies that are targeted to your needs (rather than just what the store guesses you need).

Although parts of this example may seem futuristic, the technologies are already in place. What is missing is our ability to link the technologies together using development processes that are effective and efficient – and in particular, coordinating and engaging the people who will help make it happen. This is a job for quality managers and others who study production and operations management.

As the Internet of Things (IoT) and pervasive information become commonplace, the fundamental nature and character of how quality management principles are applied in practice will be forced to change. As Eric Schmidt, former Chairman of Google, explains: “the new age of artificial intelligence is beginning, and it’s a big deal.” [2] Here are some ways that this shift will impact researchers and practitioners interested in quality:

  • Strategic deployment of IoT technologies will help us simultaneously improve our use of enterprise assets, reduce waste, promote sustainability, and coordinate people and machines to more effectively meet strategic goals and operational targets.
  • Smart materials, embedded in our production and service ecosystems, will change our views of objects from inert and passive to embedded and engaged. For example, MIT has developed a “smart band-aid” that communicates with a wound, provides visual indicators of the healing process, and delivers medication as needed. [3] Software developers will need to know how to make this communication seamless and reliable in a variety of operations contexts.
  • Our technologies will be able to proactively anticipate the Voice of the Customer, enabling us to meet not only their stated and implied needs, but also their emergent needs and hard-to-express desires. Similarly, will the nature of customer satisfaction change as IoT becomes more pervasive?
  • Cloud and IoT-driven Analytics will make more information available for powerful decision-making (e.g. real-time weather analytics), but this comes with its own set of challenges: how to find the data, how to assess data quality, and how to select and store data with likely future value to decision makers. This will be particularly challenging since analytics has not been a historical focus among quality managers. [4]
  • Smart, demand-driven supply chains (and supply networks) will leverage Big Data, and engage in automated planning, automatic adjustment to changing conditions or supply chain disruptions like war or extreme weather events, and self-regulation.
  • Smart manufacturing systems will implement real time communication between people, machines, materials, factories and warehouses, supply chain partners, and logistics partners using cloud computing. Production systems will adapt to demand as well as environmental factors, like the availability of resources and components. Sustainability will be a required core capability of all organizations that produce goods.
  • Cognitive manufacturing will implement manufacturing and service systems capable of perception, judgment, and improving quality autonomously – without the delays associated with human decision-making or the detection of issues.
  • Cybersecurity will be recognized as a critical component of all of the above. For most (if not all) of these next generation products and production systems, quality will not be possible without addressing information security.
  • The nature of quality assurance will also change, since products will continue to learn (and not necessarily meet their own quality requirements) after purchase or acquisition, until the consumer has used them for a while. In a December 2015 article I wrote for Software Quality Professional, I ask “How long is the learning process for this technology, and have [product engineers] designed test cases to accommodate that process after the product has been released? The testing process cannot find closure until the end of the ‘burn-in’ period when systems have fully learned about their surroundings.” [5]
  • We will need new theories for software quality practice in an era where embedded artificial intelligence and technological panpsychism (autonomous objects with awareness, perception, and judgment) are the norm.

How do we design quality into a broad, adaptive, dynamically evolving ecosystem of people, materials, objects, and processes? This is the extraordinarily complex and multifaceted question that we, as a community of academics and practitioners, must together address.

Just starting out in quality? My advice is to get a technical degree (science, math, or engineering) which will provide you with a solid foundation for understanding the new modes of production that are on the horizon. Industrial engineering, operations research, industrial design, and mechanical engineering are great fits for someone who wants a career in quality, as are statistics, data science, manufacturing engineering, and telecommunications. Cybersecurity and intelligence will become increasingly more central to quality management, so these are also good directions to take. Or, consider applying for an interdisciplinary program like JMU’s Integrated Science and Technology where I teach. We’re developing a new 21-credit sector right now where you can study EVERYTHING in the list above! Also, certifications are a plus, but in addition to completing training programs be sure to get formally certified by a professional organization to make sure that your credentials are widely recognized (e.g. through ASQ and ATMAE).

 

References

[1] http://www.huffingtonpost.com/entry/no-one-size-fits-all-diet-plan_564d605de4b00b7997f94272
[2] https://www.washingtonpost.com/news/innovations/wp/2015/09/15/what-eric-schmidt-gets-right-and-wrong-about-the-future-of-artificial-intelligence/
[3] http://news.mit.edu/2015/stretchable-hydrogel-electronics-1207
[4] Evans, J. R. (2015). Modern Analytics and the Future of Quality and Performance Excellence. The Quality Management Journal, 22(4), 6.
[5] Radziwill, N. M., Benton, M. C., Boadu, K., & Perdomo, W. (2015). A Case-Based Look at Integrating Social Context into Software Quality. Software Quality Professional, December.

Free Speech in the Internet of Things (IoT)

IF YOUR TOASTER COULD TALK, IT WOULD HAVE THE RIGHT TO FREE SPEECH. Image Credit: from “Reclaim Democracy” at http://reclaimdemocracy.org/who-are-citizens-united/

By the end of 2016, Gartner estimates that over 6.4 BILLION “things” will be connected to one another in the nascent Internet of Things (IoT). As innovation yields new products, services, and capabilities that leverage this ecosystem, we will need new conceptual models to ensure quality and support continuous improvement in this environment.

I wasn’t thinking about quality or IoT this morning… but instead, was trying to understand why so many people on Twitter and Facebook are linking Justice Scalia’s recent death to Citizens United. (I’d heard of Citizens United, but quite frankly, thought it was a soccer team. Embarrassing, I know.) I was surprised to find out that instead, Citizens United is a conservative U.S. political organization best known for its role in the 2010 Supreme Court Case Citizens United v. FEC.

That case removed many restrictions on political spending. According to one analysis, the “super-rich donating more than ever before to individual campaigns plus the ‘enormous’ chasm in wealth has given the super-rich the power to steer the economic and political direction of the United States and undermine its democracy.” Interesting, sure… but what’s more interesting to me is that the Citizens United case, according to this source…

  • Strengthened First Amendment protection for corporations, 
  • Affirmed that Money = Speech, and
  • Affirmed that Non-Persons have the right to free speech.

The article goes on to state that “if your underpants could talk, they would be protected by free speech.”

Not too long ago, a statement like this would just be silly. But today, with immersive IoT looming, this isn’t too far-fetched. 

  • What will the world look (and feel) like when everything you interact with has a “voice”?
  • How will the “Voice of the Customer” be heard when all of that customer’s stuff ALSO has a voice?
  • What IS the “Voice of the Customer” in a world like this?

Analytic Hierarchy Process (AHP) using preferenceFunction in ahp

Yesterday, I wrote about how to use gluc’s new ahp package on a simple Tom-Dick-Harry one-level decision-making problem using the Analytic Hierarchy Process (AHP). One of the cool things about that package is that in addition to specifying the pairwise comparisons directly using Saaty’s scale (below, from https://kristalaace2014.wordpress.com/2014/05/14/w12_al_vendor-evaluation/)…

[Image: Saaty’s pairwise comparison scale]

…you can also describe each of the Alternatives in terms of descriptive variables which you can use inside a function to make the pairwise comparisons automatically. This is VERY helpful if you have lots of criteria, subcriteria, or alternatives to evaluate!! For example, I used preferenceFunction to compare 55 alternatives using 6 criteria and 4 subcriteria, and was very easily able to create functions to represent my judgments. This was much easier than manually entering all the comparisons.

This post shows HOW I replaced some of my manual comparisons with automated comparisons using preferenceFunction. (The full YAML file is included at the bottom of this post for you to use if you want to run this example yourself.) First, recall that the YAML file starts with specifying the alternatives that you are trying to choose from (at the bottom level of the decision hierarchy) and some variables that characterize those alternatives. I used the descriptions in the problem statement to come up with some assessments between 1=not great and 10=great:

#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1= not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################

Here is a snippet from my original YAML file, specifying my AHP problem manually:

  children: 
    Experience:
      preferences:
        - [Tom, Dick, 1/4]
        - [Tom, Harry, 4]
        - [Dick, Harry, 9]
      children: *alternatives
    Education:
      preferences:
        - [Tom, Dick, 3]
        - [Tom, Harry, 1/5]
        - [Dick, Harry, 1/7]
      children: *alternatives

And here is what I changed that snippet to, so that it would do my pairwise comparisons automatically. The functions are written in standard R (fortunately), and each function has access to a1 and a2 (the two alternatives). Recursion is supported, which makes this capability particularly useful. I tried to write a function using two of the characteristics in the decision (a1$age and a1$experience), but this didn’t seem to work. I’m not sure whether the package supports it or not. Here are my comparisons rewritten as functions:

  children: 
    Experience:
          preferenceFunction: >
            ExperiencePreference <- function(a1, a2) {
              if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
              ratio <- a1$experience / a2$experience
              if (ratio < 1.05) return (1)
              if (ratio < 1.2) return (2)
              if (ratio < 1.5) return (3)
              if (ratio < 1.8) return (4)
            if (ratio < 2.1) return (5)
            return (6)
            }
          children: *alternatives
    Education:
          preferenceFunction: >
            EducPreference <- function(a1, a2) {
              if (a1$education < a2$education) return (1/EducPreference(a2, a1))
              ratio <- a1$education / a2$education
              if (ratio < 1.05) return (1)
              if (ratio < 1.15) return (2)
              if (ratio < 1.25) return (3)
              if (ratio < 1.35) return (4)
              if (ratio < 1.55) return (5)
              return (5)
            }
          children: *alternatives

To run the AHP with functions in R, I used this code (I am including the part that gets the ahp package, in case you have not done that yet). BE CAREFUL and make sure, like in FORTRAN, that you line things up so that the words START in the appropriate columns. For example, the “p” in preferenceFunction MUST be immediately below the 7th character of your criterion’s variable name.

devtools::install_github("gluc/ahp", build_vignettes = TRUE)
install.packages("data.tree")

library(ahp)
library(data.tree)

setwd("C:/AHP/artifacts")
nofxnAhp <- LoadFile("tomdickharry.txt")
Calculate(nofxnAhp)
fxnAhp <- LoadFile("tomdickharry-fxns.txt")
Calculate(fxnAhp)

print(nofxnAhp, "weight")
print(fxnAhp, "weight")

You can see that the weights are approximately the same, indicating that I did a good job at developing functions that represent the reality of how I used the variables attached to the Alternatives to make my pairwise comparisons. The results show that Dick is now the best choice, although there is some inconsistency in our judgments for Experience that we should examine further (one way to check this is sketched after the outputs below). (I have not examined this case to see whether rank reversal could be happening.)

> print(nofxnAhp, "weight")
                         levelName     weight
1  Choose the Most Suitable Leader 1.00000000
2   ¦--Experience                  0.54756924
3   ¦   ¦--Tom                     0.21716561
4   ¦   ¦--Dick                    0.71706504
5   ¦   °--Harry                   0.06576935
6   ¦--Education                   0.12655528
7   ¦   ¦--Tom                     0.18839410
8   ¦   ¦--Dick                    0.08096123
9   ¦   °--Harry                   0.73064467
10  ¦--Charisma                    0.26994992
11  ¦   ¦--Tom                     0.74286662
12  ¦   ¦--Dick                    0.19388163
13  ¦   °--Harry                   0.06325174
14  °--Age                         0.05592555
15      ¦--Tom                     0.26543334
16      ¦--Dick                    0.67162545
17      °--Harry                   0.06294121

> print(fxnAhp, "weight")
                         levelName     weight
1  Choose the Most Suitable Leader 1.00000000
2   ¦--Experience                  0.54756924
3   ¦   ¦--Tom                     0.25828499
4   ¦   ¦--Dick                    0.63698557
5   ¦   °--Harry                   0.10472943
6   ¦--Education                   0.12655528
7   ¦   ¦--Tom                     0.08273483
8   ¦   ¦--Dick                    0.26059839
9   ¦   °--Harry                   0.65666678
10  ¦--Charisma                    0.26994992
11  ¦   ¦--Tom                     0.74286662
12  ¦   ¦--Dick                    0.19388163
13  ¦   °--Harry                   0.06325174
14  °--Age                         0.05592555
15      ¦--Tom                     0.26543334
16      ¦--Dick                    0.67162545
17      °--Harry                   0.06294121

> ShowTable(fxnAhp)

[Image: ShowTable(fxnAhp) output for the Tom-Dick-Harry decision]
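
About that inconsistency in the Experience judgments: if you want to check a judgment matrix yourself, Saaty’s consistency ratio (CR) is easy to compute in base R. This is a minimal sketch of my own, not part of the ahp package; the matrix re-encodes the manual Experience preferences from above, and the random-index constants are Saaty’s published values:

consistency_ratio <- function(A) {
  n <- nrow(A)
  lambda.max <- max(Re(eigen(A)$values))          # principal eigenvalue
  RI <- c(0, 0, 0.58, 0.90, 1.12, 1.24, 1.32)[n]  # Saaty's random indices, n = 1..7
  ((lambda.max - n) / (n - 1)) / RI               # CR = CI / RI
}

# Manual Experience judgments: Tom vs Dick = 1/4, Tom vs Harry = 4, Dick vs Harry = 9
A <- matrix(c(  1, 1/4, 4,
                4,   1, 9,
              1/4, 1/9, 1), nrow = 3, byrow = TRUE)
consistency_ratio(A)  # values above ~0.1 suggest the judgments deserve a second look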

Here is the full YAML file for the “with preferenceFunction” case.

#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1= not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################
# Goal Section
#
Goal:
# A Goal HAS preferences (within-level comparison) and HAS Children (items in level)
  name: Choose the Most Suitable Leader
  preferences:
    # preferences are defined pairwise
    # 1 means: A is equal to B
    # 9 means: A is highly preferable to B
    # 1/9 means: B is highly preferable to A
    - [Experience, Education, 4]
    - [Experience, Charisma, 3]
    - [Experience, Age, 7]
    - [Education, Charisma, 1/3]
    - [Education, Age, 3]
    - [Age, Charisma, 1/5]
  children: 
    Experience:
          preferenceFunction: >
            ExperiencePreference <- function(a1, a2) {
              if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
              ratio <- a1$experience / a2$experience
              if (ratio < 1.05) return (1)
              if (ratio < 1.2) return (2)
              if (ratio < 1.5) return (3)
              if (ratio < 1.8) return (4)
            if (ratio < 2.1) return (5)
            return (6)
            }
          children: *alternatives
    Education:
          preferenceFunction: >
            EducPreference <- function(a1, a2) {
              if (a1$education < a2$education) return (1/EducPreference(a2, a1))
              ratio <- a1$education / a2$education
              if (ratio < 1.05) return (1)
              if (ratio < 1.15) return (2)
              if (ratio < 1.25) return (3)
              if (ratio < 1.35) return (4)
              if (ratio < 1.55) return (5)
              return (5)
            }
          children: *alternatives
    Charisma:
      preferences:
        - [Tom, Dick, 5]
        - [Tom, Harry, 9]
        - [Dick, Harry, 4]
      children: *alternatives
    Age:
      preferences:
        - [Tom, Dick, 1/3]
        - [Tom, Harry, 5]
        - [Dick, Harry, 9]
      children: *alternatives
#
# End of Goal Section
#####################################