Tag Archives: education

A Decade of PhD: What I’ve Learned from Academia and Industry

Today is Cinco de Mayo! It’s also the 10th anniversary of my PhD defense… something I carefully timed for late afternoon on this day in 2009. (I wanted to make sure I could celebrate the joyful occasion — or drown my sorrows — with 2-for-1 margaritas. Fortunately, the situation was liquid joy; unfortunately, I still got a hangover.)

I’m writing this post to share what I’ve learned about the value of getting a PhD (is there value?) and the applicability of PhD-level work to industry. If you’re considering more education, maybe this will help you decide whether it’s the right choice. If you’re in industry and trying to figure out whether to hire PhDs, some of what I write here might help. But first, some background!

I never even thought I’d get a PhD — it certainly didn’t happen out of intent or design. My family was poor and I studied ridiculously hard so I could “escape it.” I didn’t think I was smart enough for a PhD, even though I started college at 16 taking half undergrad classes and half grad classes in meteorology. I aced my grad classes and very maturely ignored my required classes, so I got kicked out. (At the same time, I wasn’t really fitting in with people… my roommate called me “Nerdcole.”) When I was let back in, the department head wouldn’t let me take any grad classes, so I got bored and burned out… not surprising, since I was supporting myself and working three jobs to make that happen. I quit school to work at an e-commerce startup when I was 18. A few months later, thanks to (good) peer pressure, I took three credit-by-exam tests to see if they would get me over the finish line, and thanks to some side skills I had picked up in vector calculus and statistics, it worked and I got the BS. But I was still left with a pretty bad GPA and even worse self-esteem, and I was convinced no one would ever let me into grad school.

I figured I’d focus on industry and help companies grow. There was no other choice.

The Back Story

After spending a couple years building web sites and storefronts (a huge feat in 1995 and 1996!) I took a job at a national lab as a systems analyst, supporting older scientists and engineers and helping them get work done. The main lesson I learned during this time was: Alignment between strategy and objectives doesn’t come for free (teams of people have to spend dedicated time on it), and most people are really disorganized. There had to be a better way to get work done.

A few years later, I was a traveling Solutions Architect, parachuted once or twice a month into CRM software implementation fiascos around the globe. My job was to figure out what to do to turn these jobs around — was it a people problem? An architecture problem? A training problem? A systems thinking problem? A little of everything? I had a couple weeks to make a recommendation, and then I was on to the next project (results were usually pretty good). But since this required evaluating technology decisions in the context of business and financial constraints, my boss suggested that I use the tuition benefits offered by my job to get an MBA. I had taken 9 credits of science and industrial engineering classes since I’d graduated, so I contacted two of the local MBA schools to see if they’d accept me and my credits. Sure enough, one of them did! I took evening classes for a year and a half, and eventually ended up with an MBA. But I never thought I could (or would) go farther — I’m not that smart, I’d tell myself. Also, it’s expensive. Also, a PhD would probably make me less marketable. (All lies, spoken by a lack of confidence.)

Shortly thereafter, the travel started to get to me (I was flying at least three days a week), so I looked for an opportunity to grow and cultivate a software development organization. (That’s how I ended up in Data Management at NRAO.) A little management led to a lot of management. A few years later one of the organization’s leaders said it was “too bad I didn’t have a PhD” — because in a highly scientific and technical organization, it would give me more credibility and make me a better leader.

“Will you pay for it?” I asked. “Sure,” they said. I just had to find a suitable program that wouldn’t require me to go full time. I’ve always loved learning, and I couldn’t resist the temptation of free education — even if it meant I’d have to balance the demands of a challenging full-time job and a first-time baby at the same time. That’s how much I love learning, just for learning’s sake! I still didn’t think a PhD had that much value, unless you were studying to be a lab scientist or you were dead set on becoming a historian and teaching for the rest of your life. None of these personas was me, but the free education thing sold me, and I didn’t really think about how relevant this step was to my career direction until much later.

Fortunately, I found the perfect program for me — a hybrid academic/practitioner PhD that would help me develop the research and analytical skills to solve practical problems in business and industrial technology.

The next few years were pretty rough, and by the time I got my PhD, I was in my 14th year of post-college professional employment. First lesson learned: it’s probably not the best move to start PhD coursework when you have a three-month old. I have no idea how I made it through.

Shortly after graduating, the impacts of the financial crisis hit our federally funded organization and I was able to segue into a second career as a college professor, teaching data science and manufacturing/EHSQ classes. For the past year, I’ve been back in industry (maybe permanently; we’ll see) and have a better sense of the value of PhDs in industry.

Value of Getting a PhD

There are lots of reasons I’m happy with the time I spent getting a PhD, other than the fact that it helped me get an entirely new job when the economy was down:

  • First and foremost, I’m a better critical thinker. It’s now my nature to look at all parts of a problem, examine the interactions between them, and make sure I have all the background information needed before I start working on it.
  • I’m a better writer too. I look at reports and presentations I wrote years ago, and can see all the holes and places where I made assumptions that weren’t valid.
  • I developed a new appreciation for clarity. Researchers want to make sure their messages, methodologies, and models are clear and unambiguous… through that contrast, I was able to recognize that in industry, there’s often pressure to skip due diligence and move fast to perform. This pressure leads to ambiguity, which tends towards what I call “intellectual waste” – people assuming that they see a problem or a project in the same way because they haven’t taken the time to guarantee clarity.
  • It’s easier for me to quickly determine whether information might be true or false, or whether there are gaps that need to be closed before moving forward. (It’s possible that this skill is more from grading and evaluating student work… something that’s orders of magnitude harder than it seems.)
  • I realized that words matter. Really thinking about how one person will respond to a word or phrase, and whether it conveys the meaning that you intend, is a craft — that’s enhanced by working with collaborators.
  • And although I knew this one prior to the PhD, I found that data matters. Where did your data come from? Can you access the original? What kind of people (or instruments) gathered it? Can you trust them? The quality of your data — and the suitability of the methods you choose — will impact the quality and integrity of the conclusions you generate from it. Awareness of these factors is essential.

Value of Caution

One of the biggest lessons was the most surprising. Early on in the PhD program I was told that my opinion didn’t count — regardless of how many years of experience I had. Every statement I made had to be backed up and cited, preferably using material that had been peer-reviewed by other qualified people. At first I was kind of offended by this… didn’t these academics have any sense of the value of actual real-world employment? Apparently not.

But something funny happened as I developed the habit of looking for solid references, distilling their messages, and citing them accurately: I became more careful. And in the evolution of my caution and attention to detail, the quality of my work — ANY work — improved tremendously. I was able to learn from what other people had discovered, and anticipate (and resolve) problems in advance. I learned that “standing on the shoulders of giants” actually means figuring out when solved problems already exist so you don’t waste time reinventing wheels.

Something else funny happened as soon as I graduated: all of a sudden, people were asking me for my opinion. But the habit of due diligence was so ingrained that I couldn’t express my opinion… I was compelled to back it up with facts!

(I think this was the point all along. Go figure.)

The beauty of going through the entire messy process of PhD coursework and comps and research and defense and editing — the entire end-to-end process, not cutting out in the middle anywhere — is that it gave me the discipline and process to root out accurate and complete answers to problems. Or at the very least, the ability to call out the gaps that stand in the way of getting there.

There’s a lot of pressure in industry to move fast, but due diligence is still critical for accurate self-assessment and effective cross-functional communication. Slowing down and figuring out how you know what you know — and making sure everyone is literally on the same page — can help your organization achieve its goals more quickly.

Value of PhDs to Industry

So employers (especially in tech) — should you hire PhDs? Yes. Here’s why:

  • PhDs are trained to find gaps in knowledge and understanding. Is your strategic plan grounded in reality, or is it just wishful thinking? Are your Project Charters well scoped, budgeted, and planned out? Is your workforce prepared to carry out your strategic initiatives?
  • Many PhDs with experience teaching undergrads are great at making complex topics accessible to other audiences. This is fantastic for training, cross-training, and marketing.
  • PhDs love research and writing, and can help you with gathering and interpreting data and content marketing.
  • PhDs love learning. Want to be on the cutting edge? They’re great in R&D… they can help you distill new insights from research papers and interpret and apply them accurately.
  • If you want to do AI or machine learning, or anything that uses Big Data, make sure you have at least one PhD statistician with practical analytical experience. They can prevent you from spending millions on dead ends and help you apply Occam’s Razor to avoid unnecessary complexity (the kind that can lead to technical debt later).

Will there be drawbacks? Sure. The habit of caution may need to be tempered somewhat — you don’t have to probe to the bottom of an issue to generate useful information that a business can use to make progress.

Bottom line… don’t be afraid of PhDs! We are mere mortals who just happen to have spent several years trying to figure out how to get to the core — the fundamental truth — of a complex problem. As a result we know how to approach problems like this — problems that many businesses have lots of. (We are not overqualified at all… we just have an extra skill set in something you desperately need, but may not realize you need it.)

Getting a PhD was challenging, frustrating, and maddening at times (especially the final part: getting the camera-ready text ready for ProQuest). I never planned to do it, but I’d totally do it again. I think my only regret is that I got a PhD in a hybrid business/industrial engineering discipline… it allowed me the freedom to pursue my interests, but if I were at the same crossroads now, I’d get a PhD in statistics to complement my MBA. Overall, this is a pretty tiny regret.

Designing Experiences for Authentic Engagement: The Design for STEAM Canvas

As Industry 4.0 and Digital Transformation efforts bear their first fruits, capabilities, business models, and the organizations that embody them are transforming. A century ago, we thought of organizations as machines to be rigidly designed and controlled. In the latter part of the 20th century, organizations were thought of as knowledge to be cultivated, shared, and expanded. But “as intelligent systems gain traction, we are once again at a crossroads – where organizations must create complete and meaningful experiences” for their customers, stakeholders, and employees.

Read our new paper in the STEAM Journal

How do you design those complete, meaningful, and radically engaging experiences? To provide a starting point, check out “Design for STEAM: Creating Participatory Art with Purpose” by my former student Nick Kamienski and me. It was just published today by the STEAM Journal.

“Participatory Art” doesn’t just mean creating things that are pretty to look at in your office lobby or tradeshow booth. It means finding ways to connect with your audience in ways that help them find meaning, purpose, and self-awareness – the ultimate ingredient for authentic engagement.

Designing experiences to make this happen is challenging, but totally within reach. Learn more in today’s new article!

Improve Writing Quality with Speaking & Storyboarding

For a decade, I supervised undergrads and grad students as they were completing writing projects: term papers, semester projects, and of course — capstone projects and thesis work. Today, I’m responsible for editing the work of (and mentoring) junior colleagues. The main lesson I’ve learned over this time is: writing is really hard for most people. So I’m here to help you.

Me, Reviewing Someone Else’s Work

If I had a dollar for every time this scenario happened, I’d… well, you get my point:

ME (reading their “final draft”): [Voice in Head] Huh? Wow, that sentence is long. OK, start it again. I don’t understand what they’re saying. What are they trying to say? This doesn’t make any sense. It could mean… no, that’s not it. Maybe they mean… nope, that can’t be it.

ME: So this sentence here, the one that says “Start by commutating and telling the story of what the purpose of the company’s quality management software is, the implementation plans and the impact to the current state of quality roles and responsibilities for everyone involved.”

THEM (laughing): Oh! Commutating isn’t a word. I meant communicating.

ME: Have you tried reading this sentence out loud?

THEM (still laughing, trying to read it): Yeah, that doesn’t really make sense.

ME: What were you trying to say?

THEM: I was trying to say “Start by explaining how quality management software will impact everyone’s roles and responsibilities.”

ME: Well, why don’t you say that?

THEM: You mean I can just say that? Don’t I need to make it sound good?

ME: You did just make it sound good when you said what you were trying to say.

What Just Happened?

By trying to “make it sound good,” you’re more likely to mess it up. People think speaking and writing are two different practices, but when you write, it’s really important that what you’ve written sounds like one human talking to another when you speak it out loud. If you wouldn’t say what you wrote to someone in your target audience in exactly the way that you wrote it, then revise it into something you would say.

Why? Because people read text using the voice in their heads. It’s a speaking voice! So give it good, easy, flowing sentences to speak to itself with.

What Can You Do?

Here are two ways you can start improving your writing today:

  1. Read your writing out loud (preferably to someone else who’s not familiar with your topic, or a collaborator). If it doesn’t sound right, it’s not right.
  2. Use a storyboard. (What does that mean?)

There are many storyboard templates available online, but the storyboard attached to this post is geared towards developing the skills needed for technical writing. (That is, writing where it’s important to support your statements with citations that can be validated.) Not only does citing sources add credibility, but it also gives your reader more material to read if they want to go deeper.

Storyboarding

The process is simple: start by outlining your main message. That means:

  1. Write section headers that are meaningful on their own.
  2. Within each section, write a complete phrase or sentence to describe the main point of each paragraph or small group of paragraphs.
  3. For each phrase or sentence that forms your story, cut and paste material from your references that supports your point, and list the citation (I prefer APA style) so you don’t forget it.
  4. Read the list of section headers and main points out loud. If this story, spoken, hangs together and is logical and complete — there’s a good chance your fully written story will as well.

Not all elements of your story need citations, but many of them will.

Next Steps

When the storyboard is complete, what should you do next? Sometimes, I hand it to a collaborator to flesh it out. Other times, I’ll put it aside for a few days or weeks, and then pick it up later when my mind is fresh. Whatever approach you use, this will help you organize your thoughts and citations, and help you form a story line that’s complete and understandable. Hope this helps get you started!

STORYBOARD (BLANK)

STORYBOARD (PARTIALLY FILLED IN)

Where Do Z-Score Tables Come From? (+ how to make them in R)

Every student learns how to look up areas under the normal curve using Z-Score tables in their first statistics class. But what is less commonly covered, especially in courses where calculus is not a prerequisite, is where those Z-Score tables come from: you figure out the area under the normal curve for every possible place you could chop it in two, then arrange the results in a table.

You get the area by evaluating the integral of the equation for the bell-shaped normal curve, usually from -Inf to the z-score of interest. This is exactly what the R command pnorm does when you provide it with a z-score. Here is the slide presentation I put together to explain the use and origin of the Z-Score table, and how it relates to pnorm and qnorm (the command that lets you input an area to find the z-score at which that area to the left is swept out). It’s free to use under Creative Commons, and is part of the course materials that are available for use with this 2017 book.
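If you want to see that connection concretely, here’s a quick check (my addition, not from the slides) using base R’s integrate, dnorm, pnorm, and qnorm:

[code language="r"]
# pnorm(z) evaluates the integral of the normal density from -Inf to z;
# numerical integration of dnorm gives the same answer:
integrate(dnorm, -Inf, -1)   # 0.1586553 (plus a tiny error estimate)
pnorm(-1)                    # 0.1586553

# qnorm() goes the other direction: area in, z-score out
qnorm(0.1586553)             # -1 (approximately)
[/code]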

One of the fun things I did was to make my own z-score table in R. I don’t know why anyone would WANT to do this — they are easy to find in books, and online, and if you know how to use pnorm and qnorm, you don’t need one at all. But, you can, and here’s how.

First, let’s create a z-score table just with left-tail areas. Using symmetry, we can also use it to get areas in the right tail, because the area to the left of any -z is the same as the area to the right of the corresponding +z. Even though the z-score table contains areas in its cells, our first step is to create a table just of the z-scores that correspond to each cell:

[code language="r"]
c0 <- seq(-3.4,0,.1)
c1 <- seq(-3.41,0,.1)
c2 <- seq(-3.42,0,.1)
c3 <- seq(-3.43,0,.1)
c4 <- seq(-3.44,0,.1)
c5 <- seq(-3.45,0,.1)
c6 <- seq(-3.46,0,.1)
c7 <- seq(-3.47,0,.1)
c8 <- seq(-3.48,0,.1)
c9 <- seq(-3.49,0,.1)
z <- cbind(c0,c1,c2,c3,c4,c5,c6,c7,c8,c9)
z

c0 c1 c2 c3 c4 c5 c6 c7 c8 c9
[1,] -3.4 -3.41 -3.42 -3.43 -3.44 -3.45 -3.46 -3.47 -3.48 -3.49
[2,] -3.3 -3.31 -3.32 -3.33 -3.34 -3.35 -3.36 -3.37 -3.38 -3.39
[3,] -3.2 -3.21 -3.22 -3.23 -3.24 -3.25 -3.26 -3.27 -3.28 -3.29
[4,] -3.1 -3.11 -3.12 -3.13 -3.14 -3.15 -3.16 -3.17 -3.18 -3.19
[5,] -3.0 -3.01 -3.02 -3.03 -3.04 -3.05 -3.06 -3.07 -3.08 -3.09
[6,] -2.9 -2.91 -2.92 -2.93 -2.94 -2.95 -2.96 -2.97 -2.98 -2.99
[7,] -2.8 -2.81 -2.82 -2.83 -2.84 -2.85 -2.86 -2.87 -2.88 -2.89
[8,] -2.7 -2.71 -2.72 -2.73 -2.74 -2.75 -2.76 -2.77 -2.78 -2.79
[9,] -2.6 -2.61 -2.62 -2.63 -2.64 -2.65 -2.66 -2.67 -2.68 -2.69
[10,] -2.5 -2.51 -2.52 -2.53 -2.54 -2.55 -2.56 -2.57 -2.58 -2.59
[11,] -2.4 -2.41 -2.42 -2.43 -2.44 -2.45 -2.46 -2.47 -2.48 -2.49
[12,] -2.3 -2.31 -2.32 -2.33 -2.34 -2.35 -2.36 -2.37 -2.38 -2.39
[13,] -2.2 -2.21 -2.22 -2.23 -2.24 -2.25 -2.26 -2.27 -2.28 -2.29
[14,] -2.1 -2.11 -2.12 -2.13 -2.14 -2.15 -2.16 -2.17 -2.18 -2.19
[15,] -2.0 -2.01 -2.02 -2.03 -2.04 -2.05 -2.06 -2.07 -2.08 -2.09
[16,] -1.9 -1.91 -1.92 -1.93 -1.94 -1.95 -1.96 -1.97 -1.98 -1.99
[17,] -1.8 -1.81 -1.82 -1.83 -1.84 -1.85 -1.86 -1.87 -1.88 -1.89
[18,] -1.7 -1.71 -1.72 -1.73 -1.74 -1.75 -1.76 -1.77 -1.78 -1.79
[19,] -1.6 -1.61 -1.62 -1.63 -1.64 -1.65 -1.66 -1.67 -1.68 -1.69
[20,] -1.5 -1.51 -1.52 -1.53 -1.54 -1.55 -1.56 -1.57 -1.58 -1.59
[21,] -1.4 -1.41 -1.42 -1.43 -1.44 -1.45 -1.46 -1.47 -1.48 -1.49
[22,] -1.3 -1.31 -1.32 -1.33 -1.34 -1.35 -1.36 -1.37 -1.38 -1.39
[23,] -1.2 -1.21 -1.22 -1.23 -1.24 -1.25 -1.26 -1.27 -1.28 -1.29
[24,] -1.1 -1.11 -1.12 -1.13 -1.14 -1.15 -1.16 -1.17 -1.18 -1.19
[25,] -1.0 -1.01 -1.02 -1.03 -1.04 -1.05 -1.06 -1.07 -1.08 -1.09
[26,] -0.9 -0.91 -0.92 -0.93 -0.94 -0.95 -0.96 -0.97 -0.98 -0.99
[27,] -0.8 -0.81 -0.82 -0.83 -0.84 -0.85 -0.86 -0.87 -0.88 -0.89
[28,] -0.7 -0.71 -0.72 -0.73 -0.74 -0.75 -0.76 -0.77 -0.78 -0.79
[29,] -0.6 -0.61 -0.62 -0.63 -0.64 -0.65 -0.66 -0.67 -0.68 -0.69
[30,] -0.5 -0.51 -0.52 -0.53 -0.54 -0.55 -0.56 -0.57 -0.58 -0.59
[31,] -0.4 -0.41 -0.42 -0.43 -0.44 -0.45 -0.46 -0.47 -0.48 -0.49
[32,] -0.3 -0.31 -0.32 -0.33 -0.34 -0.35 -0.36 -0.37 -0.38 -0.39
[33,] -0.2 -0.21 -0.22 -0.23 -0.24 -0.25 -0.26 -0.27 -0.28 -0.29
[34,] -0.1 -0.11 -0.12 -0.13 -0.14 -0.15 -0.16 -0.17 -0.18 -0.19
[35,] 0.0 -0.01 -0.02 -0.03 -0.04 -0.05 -0.06 -0.07 -0.08 -0.09
[/code]

Now that we have slots for all the z-scores, we can use pnorm to transform all those values into the areas that are swept out to the left of each z-score. This part is easy, and only takes one line. The remaining three lines format and display the z-score table:

[code]
zscore.df <- round(pnorm(z),4)
row.names(zscore.df) <- sprintf("%.2f", c0)
colnames(zscore.df) <- seq(0,0.09,0.01)
zscore.df

0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
-3.40 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0002
-3.30 0.0005 0.0005 0.0005 0.0004 0.0004 0.0004 0.0004 0.0004 0.0004 0.0003
-3.20 0.0007 0.0007 0.0006 0.0006 0.0006 0.0006 0.0006 0.0005 0.0005 0.0005
-3.10 0.0010 0.0009 0.0009 0.0009 0.0008 0.0008 0.0008 0.0008 0.0007 0.0007
-3.00 0.0013 0.0013 0.0013 0.0012 0.0012 0.0011 0.0011 0.0011 0.0010 0.0010
-2.90 0.0019 0.0018 0.0018 0.0017 0.0016 0.0016 0.0015 0.0015 0.0014 0.0014
-2.80 0.0026 0.0025 0.0024 0.0023 0.0023 0.0022 0.0021 0.0021 0.0020 0.0019
-2.70 0.0035 0.0034 0.0033 0.0032 0.0031 0.0030 0.0029 0.0028 0.0027 0.0026
-2.60 0.0047 0.0045 0.0044 0.0043 0.0041 0.0040 0.0039 0.0038 0.0037 0.0036
-2.50 0.0062 0.0060 0.0059 0.0057 0.0055 0.0054 0.0052 0.0051 0.0049 0.0048
-2.40 0.0082 0.0080 0.0078 0.0075 0.0073 0.0071 0.0069 0.0068 0.0066 0.0064
-2.30 0.0107 0.0104 0.0102 0.0099 0.0096 0.0094 0.0091 0.0089 0.0087 0.0084
-2.20 0.0139 0.0136 0.0132 0.0129 0.0125 0.0122 0.0119 0.0116 0.0113 0.0110
-2.10 0.0179 0.0174 0.0170 0.0166 0.0162 0.0158 0.0154 0.0150 0.0146 0.0143
-2.00 0.0228 0.0222 0.0217 0.0212 0.0207 0.0202 0.0197 0.0192 0.0188 0.0183
-1.90 0.0287 0.0281 0.0274 0.0268 0.0262 0.0256 0.0250 0.0244 0.0239 0.0233
-1.80 0.0359 0.0351 0.0344 0.0336 0.0329 0.0322 0.0314 0.0307 0.0301 0.0294
-1.70 0.0446 0.0436 0.0427 0.0418 0.0409 0.0401 0.0392 0.0384 0.0375 0.0367
-1.60 0.0548 0.0537 0.0526 0.0516 0.0505 0.0495 0.0485 0.0475 0.0465 0.0455
-1.50 0.0668 0.0655 0.0643 0.0630 0.0618 0.0606 0.0594 0.0582 0.0571 0.0559
-1.40 0.0808 0.0793 0.0778 0.0764 0.0749 0.0735 0.0721 0.0708 0.0694 0.0681
-1.30 0.0968 0.0951 0.0934 0.0918 0.0901 0.0885 0.0869 0.0853 0.0838 0.0823
-1.20 0.1151 0.1131 0.1112 0.1093 0.1075 0.1056 0.1038 0.1020 0.1003 0.0985
-1.10 0.1357 0.1335 0.1314 0.1292 0.1271 0.1251 0.1230 0.1210 0.1190 0.1170
-1.00 0.1587 0.1562 0.1539 0.1515 0.1492 0.1469 0.1446 0.1423 0.1401 0.1379
-0.90 0.1841 0.1814 0.1788 0.1762 0.1736 0.1711 0.1685 0.1660 0.1635 0.1611
-0.80 0.2119 0.2090 0.2061 0.2033 0.2005 0.1977 0.1949 0.1922 0.1894 0.1867
-0.70 0.2420 0.2389 0.2358 0.2327 0.2296 0.2266 0.2236 0.2206 0.2177 0.2148
-0.60 0.2743 0.2709 0.2676 0.2643 0.2611 0.2578 0.2546 0.2514 0.2483 0.2451
-0.50 0.3085 0.3050 0.3015 0.2981 0.2946 0.2912 0.2877 0.2843 0.2810 0.2776
-0.40 0.3446 0.3409 0.3372 0.3336 0.3300 0.3264 0.3228 0.3192 0.3156 0.3121
-0.30 0.3821 0.3783 0.3745 0.3707 0.3669 0.3632 0.3594 0.3557 0.3520 0.3483
-0.20 0.4207 0.4168 0.4129 0.4090 0.4052 0.4013 0.3974 0.3936 0.3897 0.3859
-0.10 0.4602 0.4562 0.4522 0.4483 0.4443 0.4404 0.4364 0.4325 0.4286 0.4247
0.00 0.5000 0.4960 0.4920 0.4880 0.4840 0.4801 0.4761 0.4721 0.4681 0.4641
[/code]
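As a quick sanity check (my addition, not part of the original workflow), you can compare any table entry against pnorm directly, and get the matching right-tail area by symmetry:

[code language="r"]
# The entry in row -1.90, column 0.06 is the area to the left of z = -1.96:
pnorm(-1.96)      # 0.0249979, which rounds to the 0.0250 shown above

# By symmetry, the same number is the area to the RIGHT of z = +1.96:
1 - pnorm(1.96)   # 0.0249979
[/code]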

You can also draw a picture to go along with your z-score table, so that people remember which area they are looking up:

[code]
x <- seq(-4,4,0.1)
y <- dnorm(x)
plot(x, y, type="l", col="black", lwd=3)
abline(v=-1, lwd=3, col="blue")
abline(h=0, lwd=3, col="black")
# shade the area to the left of z = -1 (x[31] is -1.0)
polygon(c(x[1:31], rev(x[1:31])), c(rep(0,31), rev(y[1:31])), col="lightblue")
[/code]

It looks like this:

[Plot: standard normal curve with the area to the left of z = -1 shaded in light blue]

In the slides, code to produce a large-tail z-score table is also provided (where the tabulated areas are > 50%).
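I haven’t reproduced the slide code here, but a minimal sketch of that large-tail table, reusing the z matrix and c0 vector built above, might look like this:

[code language="r"]
# Large-tail version: areas to the left of POSITIVE z-scores (all > 50%).
# A sketch that reuses z and c0 from earlier; the slides have the full code.
zbig <- round(pnorm(-z), 4)                  # flip the sign of every z-score
row.names(zbig) <- sprintf("%.2f", abs(c0))  # rows now run from 3.40 down to 0.00
colnames(zbig) <- seq(0, 0.09, 0.01)
zbig
[/code]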

The Value of Defining Context

Image Credit: Doug Buckley of http://hyperactive.to

The most important stage of problem-solving in organizations is often one of the earliest: getting everyone on the same page by defining the concepts, processes, and desired outcomes that are central to understanding the problem and formulating a solution. (“Everyone” can be the individuals on a project team, or the individuals that contribute actions to a process, or both.) Too often, we assume that the others around us see and experience the world the same way we do. In many cases, our assessments are not too far apart, which is how most people can get away with making this assumption on a regular basis.

In fact, some people experience things so differently that they don’t even “picture” anything in their minds. Can you believe it?

I first realized this divergence in a work context a few years ago, when a colleague and I were advising a project at a local social services office. We asked our students to document the process being used to handle claims. Nearly ten people were part of this claims-processing activity, and our students interviewed all of them, discovering that each person had a remarkably different idea about the process they were all engaged in! No wonder the claims processing time was nearly two months.

We helped them all — literally — get onto the same page, and once they all had the same mental map of the process, time-in-system for each claim dropped to 10 days. (This led us to the quantum-esque conclusion that there is no process until it is observed.)

Today, I read about how mathematician Keith Devlin revolutionized the process of intelligence gathering after 9/11 using this same approach… by going back to one of the first principles he learned in his academic training:

So what had I done? Nothing really — from my perspective. My task was to find a way of analyzing how context influences data analysis and reasoning in highly complex domains involving military, political, and social contexts. I took the oh-so-obvious (to me) first step. I need to write down as precise a mathematical definition as possible of what a context is. It took me a couple of days…I can’t say I was totally satisfied with it…but it was the best I could do, and it did at least give me a firm base on which to start to develop some rudimentary mathematical ideas.

The fairly large group of really smart academics, defense contractors, and senior DoD personnel spent the entire hour of my allotted time discussing that one definition. The discussion brought out that all the different experts had a different conception of what a context is — a recipe for disaster.

What I had given them was, first, I asked the question “What is a context?” Since each person in the room besides me had a good working concept of context — different ones, as I just noted — they never thought to write down a formal definition. It was not part of what they did. And second, by presenting them with a formal definition, I gave them a common reference point from which they could compare and contrast their own notions. There we had the beginnings of disaster avoidance.

Getting people to very precisely understand the definitions, concepts, processes, and desired outcomes that are central to a problem might take some time and effort, but it is always extremely valuable.

When you face a situation like this in mathematics, you spend a lot of time going back to the basics. You ask questions like, “What do these words mean in this context?” and, “What obvious attempts have already been ruled out, and why?” More deeply, you’d ask, “Why are these particular open questions important?” and, “Where do they see this line of inquiry leading?”

(You can read the full article about Devlin, and more important lessons from mathematical thinking, here.)


Innovation Tips for Strategic Planning

Image Credit: Doug Buckley of http://hyperactive.to

Over the past 15 years, I’ve helped several organizations with continuous improvement initiatives at the strategic, executive level. A handful of themes keep appearing and reappearing, so the purpose of this post is to call out a few of them and provide some insights into how to deal with them!

These come up when you are engaged in strategic planning and when you are planning operations (to ensure that processes and procedures ultimately satisfy strategic goals), and are especially prominent when you’re trying to develop or use Key Performance Indicators (KPIs) and other metrics or analytics.


1) How do you measure innovation? Before you pick metrics, recognize that the answer to this question depends on how you articulate the strategic goals for your innovation outcomes. Do you want to:

  • Keep up with changing technology?
  • Develop a new product/technology?
  • Lead your industry in developing best practices?
  • Pioneer new business models?
  • Improve quality of life for a particular group of people?

All of these will be measured in different ways! And it’s OK to not strategically innovate in one area or another… for example, you might not want to innovate your business model if technology development is your forte. Innovation is one of those things where you really don’t want to be everything to everyone… by design.


2) Do you distinguish between improving productivity and generating impact?

Improving quality (the ability to satisfy stated and implied needs) is good. Improving productivity (that is, what you can produce given the resources that you use) is also good. Reducing defects, reducing waste, and reducing variation (sometimes) are all very good things to do, and to report on. 

But who really cares about any improvements at all unless they have impact? It’s always necessary to tie your KPIs, which are often measures of outcomes, to metrics or analytics that can tell the story about why a particular improvement was useful — in the short term, and (hopefully also) in the long term.

You also have to balance productivity and impact. For example, maybe you run an ultra-efficient 24/7 Help Desk. Your effectiveness is exemplary… when someone submits a request, it’s always satisfied within 8 hours. But you discover that no tickets come in between Friday at 5pm and Monday at 8am. So all that time you spend staffing that Help Desk on the weekend? It’s non-value-added time, and could be eliminated to improve your productivity… but won’t influence your impact at all.

We just worked on a project where we consciously had to think about how all of the following interact… and you should too:

  • Organizational Productivity: did your improvement help increase the capacity or capability for part of your organization? If so, then it could contribute to technical productivity or business productivity.
  • Technical Productivity: did the improvement remove a technical barrier to getting work done, or make it faster or less error-prone?
  • Business Productivity: did the improvement help you get the needs of the business satisfied faster or better?
  • Business Impact: Did the improvements that yielded organizational productivity benefits, technical productivity benefits, or business productivity benefits make a difference at the strategic level? (This answers the “so what” question. So you improved your throughput by 83%… so what? Who really cares, and why does this matter to them? Long-term, why does this awesome thing you did really matter?)
  • Educational/Workforce Development Impact: Were the lessons learned captured, fed back into the organization’s processes to close the loop on learning, or maybe even used to educate people who may become part of your workforce pipeline?

All of the categories above are interrelated. I don’t think you can have a comprehensive, innovation-focused analytics approach unless you address all of these.


3) Do you distinguish between participation and engagement?

Participation means you showed up. Engagement means you got involved, you stayed involved, your mission was advanced, or maybe you used this experience to help society. Too often, I see organizations that want to improve engagement, and all the metrics they select are really good at characterizing participation.

I’m writing a paper on this topic right now, but in the meantime (if you want to get a REALLY good sense of the difference between participation and engagement), read The Participatory Museum by Nina Simon. Yes, it is “about museums” — and yes, I know you’re in business or industry — and YES, this book really will provide you with amazing management insights. So read it!

Analytic Hierarchy Process (AHP) using preferenceFunction in ahp

Yesterday, I wrote about how to use gluc‘s new ahp package on a simple Tom-Dick-Harry one-level decision-making problem using the Analytic Hierarchy Process (AHP). One of the cool things about that package is that in addition to specifying the pairwise comparisons directly using Saaty’s scale (below, from https://kristalaace2014.wordpress.com/2014/05/14/w12_al_vendor-evaluation/)…

[Image: Saaty’s pairwise comparison scale]

…you can also describe each of the Alternatives in terms of descriptive variables which you can use inside a function to make the pairwise comparisons automatically. This is VERY helpful if you have lots of criteria, subcriteria, or alternatives to evaluate!! For example, I used preferenceFunction to compare 55 alternatives using 6 criteria and 4 subcriteria, and was very easily able to create functions to represent my judgments. This was much easier than manually entering all the comparisons.

This post shows HOW I replaced some of my manual comparisons with automated comparisons using preferenceFunction. (The full YAML file is included at the bottom of this post for you to use if you want to run this example yourself.) First, recall that the YAML file starts with specifying the alternatives that you are trying to choose from (at the bottom level of the decision hierarchy) and some variables that characterize those alternatives. I used the descriptions in the problem statement to come up with some assessments between 1=not great and 10=great:

[code language="bash" gutter="false"]
#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1 = not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################
[/code]

Here is a snippet from my original YAML file, specifying my AHP problem manually:

[code language="bash" gutter="false"]
children:
  Experience:
    preferences:
      - [Tom, Dick, 1/4]
      - [Tom, Harry, 4]
      - [Dick, Harry, 9]
    children: *alternatives
  Education:
    preferences:
      - [Tom, Dick, 3]
      - [Tom, Harry, 1/5]
      - [Dick, Harry, 1/7]
    children: *alternatives
[/code]

And here is what I changed that snippet to, so that it would do my pairwise comparisons automatically. The functions are written in standard R (fortunately), and each function has access to a1 and a2 (the two alternatives being compared). Recursion is supported, which makes this capability particularly useful. I tried to write a function using two of the characteristics in the decision (a1$age and a1$experience together), but this didn’t seem to work; I’m not sure whether the package supports it or not. Here are my comparisons rewritten as functions:

[code language="bash" gutter="false"]
children:
  Experience:
    preferenceFunction: >
      ExperiencePreference <- function(a1, a2) {
        if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
        ratio <- a1$experience / a2$experience
        if (ratio < 1.05) return (1)
        if (ratio < 1.2) return (2)
        if (ratio < 1.5) return (3)
        if (ratio < 1.8) return (4)
        if (ratio < 2.1) return (5)
        return (6)
      }
    children: *alternatives
  Education:
    preferenceFunction: >
      EducPreference <- function(a1, a2) {
        if (a1$education < a2$education) return (1/EducPreference(a2, a1))
        ratio <- a1$education / a2$education
        if (ratio < 1.05) return (1)
        if (ratio < 1.15) return (2)
        if (ratio < 1.25) return (3)
        if (ratio < 1.35) return (4)
        if (ratio < 1.55) return (5)
        return (5)
      }
    children: *alternatives
[/code]

To run the AHP with functions in R, I used this code (I am including the part that installs the ahp package, in case you have not done that yet). BE CAREFUL and make sure, like in FORTRAN, that you line things up so the words start in the appropriate columns: preferenceFunction must be indented one level deeper than the criterion name above it, and consistently so, or the YAML will not parse.

[code language="r" gutter="false"]
devtools::install_github("gluc/ahp", build_vignettes = TRUE)
install.packages("data.tree")

library(ahp)
library(data.tree)

setwd("C:/AHP/artifacts")
nofxnAhp <- LoadFile("tomdickharry.txt")
Calculate(nofxnAhp)
fxnAhp <- LoadFile("tomdickharry-fxns.txt")
Calculate(fxnAhp)

print(nofxnAhp, "weight")
print(fxnAhp, "weight")
[/code]

You can see that the weights are approximately the same, indicating that I did a good job at developing functions that represent the reality of how I used the variables attached to the Alternatives to make my pairwise comparisons. The results show that Dick is now the best choice, although there is some inconsistency in our judgments for Experience that we should examine further. (I have not examined this case to see whether rank reversal could be happening).

[code language="bash" gutter="false"]
> print(nofxnAhp, "weight")
levelName weight
1 Choose the Most Suitable Leader 1.00000000
2 ¦–Experience 0.54756924
3 ¦ ¦–Tom 0.21716561
4 ¦ ¦–Dick 0.71706504
5 ¦ °–Harry 0.06576935
6 ¦–Education 0.12655528
7 ¦ ¦–Tom 0.18839410
8 ¦ ¦–Dick 0.08096123
9 ¦ °–Harry 0.73064467
10 ¦–Charisma 0.26994992
11 ¦ ¦–Tom 0.74286662
12 ¦ ¦–Dick 0.19388163
13 ¦ °–Harry 0.06325174
14 °–Age 0.05592555
15 ¦–Tom 0.26543334
16 ¦–Dick 0.67162545
17 °–Harry 0.06294121

> print(fxnAhp, "weight")
levelName weight
1 Choose the Most Suitable Leader 1.00000000
2 ¦–Experience 0.54756924
3 ¦ ¦–Tom 0.25828499
4 ¦ ¦–Dick 0.63698557
5 ¦ °–Harry 0.10472943
6 ¦–Education 0.12655528
7 ¦ ¦–Tom 0.08273483
8 ¦ ¦–Dick 0.26059839
9 ¦ °–Harry 0.65666678
10 ¦–Charisma 0.26994992
11 ¦ ¦–Tom 0.74286662
12 ¦ ¦–Dick 0.19388163
13 ¦ °–Harry 0.06325174
14 °–Age 0.05592555
15 ¦–Tom 0.26543334
16 ¦–Dick 0.67162545
17 °–Harry 0.06294121

> ShowTable(fxnAhp)
[/code]

[Image: AHP weights table produced by ShowTable(fxnAhp)]
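If you want to probe that inconsistency yourself, you can compute Saaty’s consistency ratio for any single pairwise matrix in a few lines of base R. This is a standalone sketch of the standard eigenvalue method (not a function from the ahp package), shown here on my manual Experience judgments from the first YAML snippet:

[code language="r"]
# Saaty's consistency check for the Experience pairwise matrix
# (rows/columns: Tom, Dick, Harry; judgments from the manual YAML snippet)
A <- matrix(c(  1, 1/4, 4,
                4,   1, 9,
              1/4, 1/9, 1), nrow=3, byrow=TRUE)

e <- eigen(A)
w <- Re(e$vectors[,1])
w <- w / sum(w)
round(w, 4)    # ~0.2172 0.7171 0.0658 -- matches the Experience weights above

lambda.max <- max(Re(e$values))               # principal eigenvalue, ~3.037
CI <- (lambda.max - nrow(A)) / (nrow(A) - 1)  # consistency index
RI <- 0.58                                    # Saaty's random index for n = 3
CI / RI    # consistency ratio, ~0.03; above 0.10 usually means revisit judgments
[/code]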

Here is the full YAML file for the “with preferenceFunction” case.

[code language="bash" gutter="false"]
#########################
# Alternatives Section
# THIS IS FOR The Tom, Dick, & Harry problem at
# https://en.wikipedia.org/wiki/Analytic_hierarchy_process_%E2%80%93_leader_example
#
Alternatives: &alternatives
# 1 = not well; 10 = best possible
# Your assessment based on the paragraph descriptions may be different.
  Tom:
    age: 50
    experience: 7
    education: 4
    leadership: 10
  Dick:
    age: 60
    experience: 10
    education: 6
    leadership: 6
  Harry:
    age: 30
    experience: 5
    education: 8
    leadership: 6
#
# End of Alternatives Section
#####################################
# Goal Section
#
Goal:
# A Goal HAS preferences (within-level comparison) and HAS children (items in level)
  name: Choose the Most Suitable Leader
  preferences:
    # preferences are defined pairwise
    # 1 means: A is equal to B
    # 9 means: A is highly preferable to B
    # 1/9 means: B is highly preferable to A
    - [Experience, Education, 4]
    - [Experience, Charisma, 3]
    - [Experience, Age, 7]
    - [Education, Charisma, 1/3]
    - [Education, Age, 3]
    - [Age, Charisma, 1/5]
  children:
    Experience:
      preferenceFunction: >
        ExperiencePreference <- function(a1, a2) {
          if (a1$experience < a2$experience) return (1/ExperiencePreference(a2, a1))
          ratio <- a1$experience / a2$experience
          if (ratio < 1.05) return (1)
          if (ratio < 1.2) return (2)
          if (ratio < 1.5) return (3)
          if (ratio < 1.8) return (4)
          if (ratio < 2.1) return (5)
          return (6)
        }
      children: *alternatives
    Education:
      preferenceFunction: >
        EducPreference <- function(a1, a2) {
          if (a1$education < a2$education) return (1/EducPreference(a2, a1))
          ratio <- a1$education / a2$education
          if (ratio < 1.05) return (1)
          if (ratio < 1.15) return (2)
          if (ratio < 1.25) return (3)
          if (ratio < 1.35) return (4)
          if (ratio < 1.55) return (5)
          return (5)
        }
      children: *alternatives
    Charisma:
      preferences:
        - [Tom, Dick, 5]
        - [Tom, Harry, 9]
        - [Dick, Harry, 4]
      children: *alternatives
    Age:
      preferences:
        - [Tom, Dick, 1/3]
        - [Tom, Harry, 5]
        - [Dick, Harry, 9]
      children: *alternatives
#
# End of Goal Section
#####################################
[/code]
