Category Archives: Education

Improve Writing Quality with Speaking & Storyboarding

For a decade, I supervised undergrads and grad students as they were completing writing projects: term papers, semester projects, and of course — capstone projects and thesis work. Today, I’m responsible for editing the work of (and mentoring) junior colleagues. The main lesson I’ve learned over this time is: writing is really hard for most people. So I’m here to help you.

Me, Reviewing Someone Else’s Work

If I had a dollar for every time this scenario happened, I’d… well, you get my point:

ME (reading their “final draft”): [Voice in Head] Huh? Wow, that sentence is long. OK, start it again. I don’t understand what they’re saying. What are they trying to say? This doesn’t make any sense. It could mean… no, that’s not it. Maybe they mean… nope, that can’t be it.

ME: So this sentence here, the one that says “Start by commutating and telling the story of what the purpose of the company’s quality management software is, the implementation plans and the impact to the current state of quality roles and responsibilities for everyone involved.”

THEM (laughing): Oh! Commutating isn’t a word. I meant communicating.

ME: Have you tried reading this sentence out loud?

THEM (still laughing, trying to read it): Yeah, that doesn’t really make sense.

ME: What were you trying to say?

THEM: I was trying to say “Start by explaining how quality management software will impact everyone’s roles and responsibilities.”

ME: Well, why don’t you say that?

THEM: You mean I can just say that? Don’t I need to make it sound good?

ME: You did just make it sound good when you said what you were trying to say.

What Just Happened?

By trying to “make it sound good,” you’re more likely to mess it up. People think speaking and writing are two different practices, but when you write, it’s really important that when you read it out loud, it sounds like a human talking to another human. If you wouldn’t say what you wrote to someone in your target audience in exactly the way that you wrote it, then you need to revise it to something you would say.

Why? Because people read text using the voice in their heads. It’s a speaking voice! So give it good, easy, flowing sentences to speak to itself with.

What Can You Do?

Here are two ways you can start improving your writing today:

  1. Read your writing out loud (preferably to someone else who’s not familiar with your topic, or a collaborator). If it doesn’t sound right, it’s not right.
  2. Use a storyboard. (What does that mean?)

There are many storyboard templates available online, but the storyboard attached to this post is geared towards developing the skills needed for technical writing. (That is, writing where it’s important to support your statements with citations that can be validated.) Not only does citing sources add credibility, but it also gives your reader more material to read if they want to go deeper.

Storyboarding

The process is simple: start by outlining your main message. That means:

  1. Write section headers that are meaningful on their own.
  2. Within each section, write a complete phrase or sentence to describe the main point of each paragraph or small group of paragraphs.
  3. For each phrase or sentence that forms your story, cut and paste material from your references that supports your point, and list the citation (I prefer APA style) so you don’t forget it.
  4. Read the list of section headers and main points out loud. If this story, spoken, hangs together and is logical and complete — there’s a good chance your fully written story will as well.

Not all elements of your story need citations, but many of them will.

Next Steps

When the storyboard is complete, what should you do next? Sometimes, I hand it to a collaborator to flesh it out. Other times, I’ll put it aside for a few days or weeks, and then pick it up later when my mind is fresh. Whatever approach you use, this will help you organize your thoughts and citations, and help you form a story line that’s complete and understandable. Hope this helps get you started!

STORYBOARD (BLANK)

STORYBOARD (PARTIALLY FILLED IN)

Yes, You Do Need to Write Down Procedures. Except…

Masterpieces of Modern Crafts [Special Exhibition]

A natsume (lacquered tea caddy) from http://www.momat.go.jp/cg/exhibition/masterpiece2018/ — I saw this one in person!!

Several weeks ago we went to an art exhibit about “tea caddies” at the National Museum of Modern Art, Tokyo. Although it might seem silly, these kitchen containers are a fixture of Japanese culture. In Japan, drinking green tea is a cornerstone of daily life.

It was about 2 in the afternoon, and we had checked out of our hotel at 11. Wandering through the center of the city, we stumbled upon the museum. Since we didn’t have to meet our friends for several more hours, we decided to check it out.

Confession: I’m not a huge fan of art museums. Caveat: I usually enjoy them to some degree or another when I end up in them. But I didn’t think tea caddies could possibly be useful to me. I was wrong!

When to Write SOPs

One of the features of the exhibit was a Book of Standard Operating Procedures. It described how to create a new lacquered tea caddy from paper. (Unfortunately, photography was prohibited for this piece in particular.) The book was open, lying flat, showing a grid of characters on the right-hand side. The grid described a particular process step in great detail. On the left page, a picture of a craftsman performing that step was attached. The card describing the book of SOPs explained that each of the 18 process steps was described using exactly the same format. This decision was made to ensure that the book would help accomplish certain things:

  • Improve Production Quality. Even masters sometimes need to follow instructions, or to be reminded about an old lesson learned, especially if the process is one you only do occasionally. SOPs promote consistency over time, and from person to person. 
  • Train New Artists. Even though learning the craft is done under the supervision of a skilled worker, it’s impossible to remember every detail (unless you have an eidetic memory, which most of us don’t). The SOP serves as a guide during the learning process.
  • Enable Continuous Improvement. The SOP is the base from which adjustments and performance improvements are grown. It provides “version control” so you can monitor progress and examine the evolution of work over time.
  • Make Space for Creativity. It might be surprising, but having guidance for a particular task or process in the form of an SOP reduces cognitive load, making it easier for a person to recognize opportunities for improvement. In addition, deviations aren’t always prohibited (although in high-reliability organizations, or industries that are highly regulated, you might want to check before being too creative). The art is contributed by the person, not the process.

When Not to Write SOPs

Over the past couple decades, when I’ve asked people to write up SOPs for a given process, I’ve often run into pushback. The most common reasons are “But I know how to do this!” and “It’s too complicated to describe!” The first reason suggests that the person is threatened by the prospect of someone else doing (and possibly taking over) that process, and the second is just an excuse. Maybe.

Because sometimes, the pushback can be legitimate. Not all processes need SOPs. For example, I wouldn’t write up an SOP for the creative process of writing a blog post, or for a new research project (that no one has ever done before) culminating in the publication of a new research article. In general, processes that vary significantly each time they’re run, or processes that require doing something that no one has ever done before — don’t lend themselves well to SOPs.

Get on the Same Page

The biggest reason to document SOPs is to literally get everyone on the same page. You’d be surprised how often people think they’re following the same process, but they’re not! An easy test for this is to have each person who participates in a process independently draw a flow chart showing the process steps and the decisions that are made, and then compare all the sketches. If they’re different, work together until you’re all in agreement over what’s on one flow chart — and you’ll notice a sharp and immediate improvement in performance and communication.

Happy World Quality Day 2018!

Each year, the second Thursday of November is set aside to reflect on the way quality management can contribute to our work and our lives. Led by the Chartered Quality Institute (CQI) in the United Kingdom, World Quality Day provides a forum to reflect on how we implement more effective processes and systems that positively impact KPIs and business results — and celebrate outcomes and new insights.

This year’s theme is “Quality: A Question of Trust”.

We usually think of quality as an operations function. The quality system (whether we have quality management software implemented or not) helps us keep track of the health and effectiveness of our manufacturing, production, or service processes. Often, we do this to obtain ISO 9001:2015 certification, or achieve outcomes that are essential to how the public perceives us, like reducing scrap, rework, and customer complaints.

But the quality system encompasses all the ways we organize our business — ensuring that people, processes, software, and machines are aligned to meet strategic and operational goals. For example, QMS validation (which is critical for quality management in the pharmaceutical industry) helps ensure that production equipment is continuously qualified to meet performance standards, and trust is not broken. Intelex partner Glemser Technologies explains in more detail in The Definitive Guide to Validating Your QMS in the Cloud. This extends to managing supplier relationships — turning agreements to work together into trusted, rich partnerships across the business ecosystem.

This also extends to building and cultivating trust-based relationships with our colleagues, partners, and customers…

Read more about how Integrated Management Systems and Industry 4.0/ Quality 4.0 are part of this dynamic: https://community.intelex.com/explore/posts/world-quality-day-2018-question-trust

Quality 4.0: Reveal Hidden Insights with Data Sci & Machine Learning (Webinar)

Quality Digest

What’s Quality 4.0, why is it important, and how can you use it to gain competitive advantage? Did you know you can benefit from Quality 4.0 even if you’re not a manufacturing organization? That’s right. I’ll tell you more next week.

Sign up for my 50-minute webinar at 2pm ET on Tuesday, October 16, 2018 — hosted by Dirk Dusharme and Mike Richman at Quality Digest. This won’t be your traditional “futures” talk to let you know about all of the exciting technology on the horizon… I’ve actually been doing and teaching data science, and applying machine learning to practical problems in quality improvement, for over a decade.

Come to this webinar if:

  1. You have a LOT of data and you don’t know where to begin
  2. You’re kind of behind… you still use paper and Excel and you’re hoping you don’t miss the opportunities here
  3. You’re a data scientist and you want to find out about quality and process improvement
  4. You’re a quality professional and you want to find out more about data science
  5. You’re a quality engineer and you want some professional preparation for what’s on the horizon
  6. You want to be sure you get on our Quality 4.0 mailing list to receive valuable information assets for the next couple years to help you identify and capture opportunities

Register Here! See you on Tuesday. If you can’t make it, we’ll also be at the ASQ Quality 4.0 Summit in Dallas next month sharing more information about the convergence of quality and Big Data.

Happy 10th Birthday!

10 years ago today, this blog published its first post: “How Do I Do a Lean Six Sigma (LSS) Project?” Looking back, it seems like a pretty simple place to have started. I didn’t know whether it would even be useful to anyone, but I was committed to making my personal PDSA cycles high-impact: I was going to export things I learned, or things I found valuable. (As it turns out, many people did appreciate the early posts even though it would take a few years for that to become evident!)

Since then, hundreds more have followed to help people understand more about quality and process improvement in theory and in practice. I started writing because I was in the middle of my PhD dissertation in the Quality Systems program at Indiana State, and I was discovering so many interesting nuggets of information that I wanted to share with the world – particularly practitioners, who might not have lots of time (or even the interest) to sift through the research. In addition, I was using data science (and some machine learning, although at the time, it was much more difficult to implement) to explore quality-related problems, and could see the earliest signs that this new paradigm for problem solving might help fuel data-driven decision making in the workplace… if only we could make the advanced techniques easy for people in busy jobs to use and apply.

We’re not there yet, but as ASQ and other organizations recognize Quality 4.0 as a focus area, we’re much closer. As a result, I’ve made it my mission to help bring insights from research to practitioners, to make these new innovations real. If you are developing or demonstrating any new innovative techniques that relate to making people, processes, or products better, easier, faster, or less expensive — or reducing risks and building individual and organizational capabilities — let me know!

I’ve also learned a lot in the past decade, most of which I’ve spent helping undergraduate students develop and refine their data-driven decision making skills, and more recently at Intelex (provider of integrated environment, health & safety, and quality management EHSQ software to enterprises and smaller organizations). Here are some of the big lessons:

  1. People are complex. They have multidimensional lives, and work should support and enrich those lives. Any organization that cares about performance — internally and in the market — should examine how it can create complete and meaningful experiences. This applies not only to customers, but to employees and partners and suppliers. It also applies to anyone an organization has the power and potential to impact, no matter how small.
  2. Everybody wants to do a good job (and be recognized for it). How can we create environments where each person is empowered to contribute in all the areas where they have talent and interest? How can these same environments be designed with empathy as a core capability?
  3. Your data are your most valuable assets. It sounds trite, but data are becoming as valuable as warehouses, inventory, and equipment. I was involved in a project a few years ago where we digitized data that had been collected for three years — and by analyzing it, we uncovered improvement opportunities that, when implemented, saved thousands of dollars a week. We would not have been able to do that if the data had remained scratched in pencil on thousands of sheets of well-worn legal paper.
  4. Nothing beats domain expertise (especially where data science is concerned). I’ve analyzed terabytes of data over the past decade, and in many cases, the secrets are subtle. Any time you’re using data to make decisions, be sure to engage the people with practical, on-the-ground experience in the area you’re studying.
  5. Self-awareness must be cultivated. The older you get, and the more experience you gain, the more you know what you don’t know. Many of my junior colleagues (and yours) haven’t reached this point yet, and will need some help from senior colleagues to gain this awareness. At the same time, those of you who are senior have valuable lessons to learn from your junior colleagues, too! Quality improvement is grounded in personal and organizational learning, and processes should help people help each other uncover blind spots and work through them — without fear.

Most of all, I discovered that what really matters is learning. We can spend time supporting human and organizational performance, developing and refining processes that have quality baked in, and making sure that products meet all their specifications. But what’s going on under the surface is more profound: people are learning about themselves, they are learning about how to transform inputs into outputs in a way that adds value, and they are learning about each other and their environment. Our processes just encapsulate that organizational knowledge that we develop as we learn.

Writing a Great Article Review

We’re teaching a class on blockchain and cryptocurrencies this semester, and since the field is so new and changing rapidly, we’ve asked our students to make finding and reviewing articles part of their learning practice this semester. This is a particularly challenging topic for this task because there’s so much hype, marketing, and fluff around these topics. We want to slice through that, and improve the signal-to-noise for people new to learning about blockchain and cryptocurrencies. Here are some tips I just prepared for our students — they may be helpful to anyone writing article reviews, especially for technology-related areas.


0 – Type of Source. Reviews of articles from arXiv and Google Scholar were strong; reviews of articles from Coindesk and CNN were weak; reviews of articles from WSJ and Hacker Noon went both ways. Here are two submissions that were publishable with only minor edits:

1 – Spelling & Grammar. Most of you are college seniors, and the few who aren’t… are juniors. Please use complete sentences that make sense, with words that are spelled correctly! If this is hard for you, remember that every one of you has spell check. One way to remember this is to draft your posts in Word, and then perform spell check before you copy and paste what you wrote into WordPress.

2 – Your job is to create the TL;DR. What’s the essential substance of the source you’re reviewing? What are the main lessons or findings? If you were taking notes for an exam, what elements would you capture? (Using this perspective, commentary about how good or bad you think the article was, or what it didn’t cover well, would not help you on an exam.)

3 – Choose solid source material — primary sources, e.g. research papers, if possible. If the article is less than ~400-500 words, it’s probably not detailed enough to write a 250-300 word summary/analysis. Your job in this class is to break down complex topics & help people understand them. If your article is short and already very easy to understand, there’s nothing for you to do.

4 – Avoid “weasel words” (phrases or sentences that sound like marketing or clickbait but actually say nothing) and words/sentences that sound like you’re writing a Yelp or Amazon review rather than a critical academic review. Here are a couple of weaselly examples drawn from this week’s draft posts (see if you can spot what’s wrong):

  • It is clear how beneficial blockchain can be to smaller businesses.
  • Blockchain has the potential to change the world.
  • Each other the topics covered in the article deserve their own piece and could be augmented upon greatly.
  • There is a degree of uncertainty that comes with an emerging technology.
  • Blockchain can bring them into the 21st century to compete with larger corporations.
  • Many people are scared of the changes, and governments will seek to regulate it.

5 – Answer the “so what” question. Why is this topic interesting or compelling?

6 – Choose information-rich tags. For example, in our class, don’t include blockchain as a tag… pretty much everything we do will be related to blockchain, and everyone will tend to use it, so there won’t be much information contained in the tag.

Where Do Z-Score Tables Come From? (+ how to make them in R)

Every student learns how to look up areas under the normal curve using Z-Score tables in their first statistics class. But what is less commonly covered, especially in courses where calculus is not a prerequisite, is where those Z-Score tables come from: figuring out the area under the normal curve for all possible places you could chop it into two, then making a table from it.

You get the area by evaluating the integral of the equation for the bell-shaped normal curve, usually from -Inf to the z-score of interest. This is the same thing that the R command pnorm does when you provide it with a z-score. Here is the slide presentation I put together to explain the use and origin of the Z-Score table, and how it relates to pnorm and qnorm (the command that lets you input an area to find the z-score at which that area to the left is swept out). It’s free to use under Creative Commons, and is part of the course materials that are available for use with this 2017 book.
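
For example (a quick illustration of my own, not from the slides), you can check in R that pnorm gives the same left-tail area as numerically integrating the normal density, and that qnorm reverses the lookup:

z0 <- -1.96                                       # a z-score of interest
pnorm(z0)                                         # area to the left: about 0.0250
integrate(dnorm, lower = -Inf, upper = z0)$value  # same area, by integrating the bell curve
qnorm(pnorm(z0))                                  # feed the area back in to recover -1.96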

One of the fun things I did was to make my own z-score table in R. I don’t know why anyone would WANT to do this — they are easy to find in books, and online, and if you know how to use pnorm and qnorm, you don’t need one at all. But, you can, and here’s how.

First, let’s create a z-score table just with left-tail areas. Using symmetry, we can also use this to get any areas in the right tail, because the area to the left of -z is the same as the area to the right of +z. Even though the z-score table contains areas in its cells, our first step is to create a table just of the z-scores that correspond to each cell:

# each column ck holds the z-scores whose hundredths digit is k
# (e.g. c5 runs -3.45, -3.35, ..., -0.05)
c0 <- seq(-3.4,0,.1)
c1 <- seq(-3.41,0,.1)
c2 <- seq(-3.42,0,.1)
c3 <- seq(-3.43,0,.1)
c4 <- seq(-3.44,0,.1)
c5 <- seq(-3.45,0,.1)
c6 <- seq(-3.46,0,.1)
c7 <- seq(-3.47,0,.1)
c8 <- seq(-3.48,0,.1)
c9 <- seq(-3.49,0,.1)

# bind the ten columns into a 35 x 10 matrix of z-scores
z <- cbind(c0,c1,c2,c3,c4,c5,c6,c7,c8,c9)
z
c0 c1 c2 c3 c4 c5 c6 c7 c8 c9
[1,] -3.4 -3.41 -3.42 -3.43 -3.44 -3.45 -3.46 -3.47 -3.48 -3.49
[2,] -3.3 -3.31 -3.32 -3.33 -3.34 -3.35 -3.36 -3.37 -3.38 -3.39
[3,] -3.2 -3.21 -3.22 -3.23 -3.24 -3.25 -3.26 -3.27 -3.28 -3.29
[4,] -3.1 -3.11 -3.12 -3.13 -3.14 -3.15 -3.16 -3.17 -3.18 -3.19
[5,] -3.0 -3.01 -3.02 -3.03 -3.04 -3.05 -3.06 -3.07 -3.08 -3.09
[6,] -2.9 -2.91 -2.92 -2.93 -2.94 -2.95 -2.96 -2.97 -2.98 -2.99
[7,] -2.8 -2.81 -2.82 -2.83 -2.84 -2.85 -2.86 -2.87 -2.88 -2.89
[8,] -2.7 -2.71 -2.72 -2.73 -2.74 -2.75 -2.76 -2.77 -2.78 -2.79
[9,] -2.6 -2.61 -2.62 -2.63 -2.64 -2.65 -2.66 -2.67 -2.68 -2.69
[10,] -2.5 -2.51 -2.52 -2.53 -2.54 -2.55 -2.56 -2.57 -2.58 -2.59
[11,] -2.4 -2.41 -2.42 -2.43 -2.44 -2.45 -2.46 -2.47 -2.48 -2.49
[12,] -2.3 -2.31 -2.32 -2.33 -2.34 -2.35 -2.36 -2.37 -2.38 -2.39
[13,] -2.2 -2.21 -2.22 -2.23 -2.24 -2.25 -2.26 -2.27 -2.28 -2.29
[14,] -2.1 -2.11 -2.12 -2.13 -2.14 -2.15 -2.16 -2.17 -2.18 -2.19
[15,] -2.0 -2.01 -2.02 -2.03 -2.04 -2.05 -2.06 -2.07 -2.08 -2.09
[16,] -1.9 -1.91 -1.92 -1.93 -1.94 -1.95 -1.96 -1.97 -1.98 -1.99
[17,] -1.8 -1.81 -1.82 -1.83 -1.84 -1.85 -1.86 -1.87 -1.88 -1.89
[18,] -1.7 -1.71 -1.72 -1.73 -1.74 -1.75 -1.76 -1.77 -1.78 -1.79
[19,] -1.6 -1.61 -1.62 -1.63 -1.64 -1.65 -1.66 -1.67 -1.68 -1.69
[20,] -1.5 -1.51 -1.52 -1.53 -1.54 -1.55 -1.56 -1.57 -1.58 -1.59
[21,] -1.4 -1.41 -1.42 -1.43 -1.44 -1.45 -1.46 -1.47 -1.48 -1.49
[22,] -1.3 -1.31 -1.32 -1.33 -1.34 -1.35 -1.36 -1.37 -1.38 -1.39
[23,] -1.2 -1.21 -1.22 -1.23 -1.24 -1.25 -1.26 -1.27 -1.28 -1.29
[24,] -1.1 -1.11 -1.12 -1.13 -1.14 -1.15 -1.16 -1.17 -1.18 -1.19
[25,] -1.0 -1.01 -1.02 -1.03 -1.04 -1.05 -1.06 -1.07 -1.08 -1.09
[26,] -0.9 -0.91 -0.92 -0.93 -0.94 -0.95 -0.96 -0.97 -0.98 -0.99
[27,] -0.8 -0.81 -0.82 -0.83 -0.84 -0.85 -0.86 -0.87 -0.88 -0.89
[28,] -0.7 -0.71 -0.72 -0.73 -0.74 -0.75 -0.76 -0.77 -0.78 -0.79
[29,] -0.6 -0.61 -0.62 -0.63 -0.64 -0.65 -0.66 -0.67 -0.68 -0.69
[30,] -0.5 -0.51 -0.52 -0.53 -0.54 -0.55 -0.56 -0.57 -0.58 -0.59
[31,] -0.4 -0.41 -0.42 -0.43 -0.44 -0.45 -0.46 -0.47 -0.48 -0.49
[32,] -0.3 -0.31 -0.32 -0.33 -0.34 -0.35 -0.36 -0.37 -0.38 -0.39
[33,] -0.2 -0.21 -0.22 -0.23 -0.24 -0.25 -0.26 -0.27 -0.28 -0.29
[34,] -0.1 -0.11 -0.12 -0.13 -0.14 -0.15 -0.16 -0.17 -0.18 -0.19
[35,] 0.0 -0.01 -0.02 -0.03 -0.04 -0.05 -0.06 -0.07 -0.08 -0.09

Now that we have slots for all the z-scores, we can use pnorm to transform all those values into the areas that are swept out to the left of each z-score. This part is easy, and only takes one line. The remaining three lines format and display the z-score table:

zscore.df <- round(pnorm(z),4)                 # convert each z-score to its left-tail area, rounded to 4 places
row.names(zscore.df) <- sprintf("%.2f", c0)    # row labels: the z-score down to tenths
colnames(zscore.df) <- seq(0,0.09,0.01)        # column labels: the hundredths digit to add


zscore.df

0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
-3.40 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0003 0.0002
-3.30 0.0005 0.0005 0.0005 0.0004 0.0004 0.0004 0.0004 0.0004 0.0004 0.0003
-3.20 0.0007 0.0007 0.0006 0.0006 0.0006 0.0006 0.0006 0.0005 0.0005 0.0005
-3.10 0.0010 0.0009 0.0009 0.0009 0.0008 0.0008 0.0008 0.0008 0.0007 0.0007
-3.00 0.0013 0.0013 0.0013 0.0012 0.0012 0.0011 0.0011 0.0011 0.0010 0.0010
-2.90 0.0019 0.0018 0.0018 0.0017 0.0016 0.0016 0.0015 0.0015 0.0014 0.0014
-2.80 0.0026 0.0025 0.0024 0.0023 0.0023 0.0022 0.0021 0.0021 0.0020 0.0019
-2.70 0.0035 0.0034 0.0033 0.0032 0.0031 0.0030 0.0029 0.0028 0.0027 0.0026
-2.60 0.0047 0.0045 0.0044 0.0043 0.0041 0.0040 0.0039 0.0038 0.0037 0.0036
-2.50 0.0062 0.0060 0.0059 0.0057 0.0055 0.0054 0.0052 0.0051 0.0049 0.0048
-2.40 0.0082 0.0080 0.0078 0.0075 0.0073 0.0071 0.0069 0.0068 0.0066 0.0064
-2.30 0.0107 0.0104 0.0102 0.0099 0.0096 0.0094 0.0091 0.0089 0.0087 0.0084
-2.20 0.0139 0.0136 0.0132 0.0129 0.0125 0.0122 0.0119 0.0116 0.0113 0.0110
-2.10 0.0179 0.0174 0.0170 0.0166 0.0162 0.0158 0.0154 0.0150 0.0146 0.0143
-2.00 0.0228 0.0222 0.0217 0.0212 0.0207 0.0202 0.0197 0.0192 0.0188 0.0183
-1.90 0.0287 0.0281 0.0274 0.0268 0.0262 0.0256 0.0250 0.0244 0.0239 0.0233
-1.80 0.0359 0.0351 0.0344 0.0336 0.0329 0.0322 0.0314 0.0307 0.0301 0.0294
-1.70 0.0446 0.0436 0.0427 0.0418 0.0409 0.0401 0.0392 0.0384 0.0375 0.0367
-1.60 0.0548 0.0537 0.0526 0.0516 0.0505 0.0495 0.0485 0.0475 0.0465 0.0455
-1.50 0.0668 0.0655 0.0643 0.0630 0.0618 0.0606 0.0594 0.0582 0.0571 0.0559
-1.40 0.0808 0.0793 0.0778 0.0764 0.0749 0.0735 0.0721 0.0708 0.0694 0.0681
-1.30 0.0968 0.0951 0.0934 0.0918 0.0901 0.0885 0.0869 0.0853 0.0838 0.0823
-1.20 0.1151 0.1131 0.1112 0.1093 0.1075 0.1056 0.1038 0.1020 0.1003 0.0985
-1.10 0.1357 0.1335 0.1314 0.1292 0.1271 0.1251 0.1230 0.1210 0.1190 0.1170
-1.00 0.1587 0.1562 0.1539 0.1515 0.1492 0.1469 0.1446 0.1423 0.1401 0.1379
-0.90 0.1841 0.1814 0.1788 0.1762 0.1736 0.1711 0.1685 0.1660 0.1635 0.1611
-0.80 0.2119 0.2090 0.2061 0.2033 0.2005 0.1977 0.1949 0.1922 0.1894 0.1867
-0.70 0.2420 0.2389 0.2358 0.2327 0.2296 0.2266 0.2236 0.2206 0.2177 0.2148
-0.60 0.2743 0.2709 0.2676 0.2643 0.2611 0.2578 0.2546 0.2514 0.2483 0.2451
-0.50 0.3085 0.3050 0.3015 0.2981 0.2946 0.2912 0.2877 0.2843 0.2810 0.2776
-0.40 0.3446 0.3409 0.3372 0.3336 0.3300 0.3264 0.3228 0.3192 0.3156 0.3121
-0.30 0.3821 0.3783 0.3745 0.3707 0.3669 0.3632 0.3594 0.3557 0.3520 0.3483
-0.20 0.4207 0.4168 0.4129 0.4090 0.4052 0.4013 0.3974 0.3936 0.3897 0.3859
-0.10 0.4602 0.4562 0.4522 0.4483 0.4443 0.4404 0.4364 0.4325 0.4286 0.4247
0.00 0.5000 0.4960 0.4920 0.4880 0.4840 0.4801 0.4761 0.4721 0.4681 0.4641
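
As a quick sanity check (this snippet is mine, not part of the original slides), you can confirm any cell against pnorm directly, and use the symmetry argument from above to read off a right-tail area:

zscore.df["-2.00", "0.05"]   # 0.0202, the area to the left of z = -2.05
pnorm(-2.05)                 # 0.0201786..., which rounds to the same cell value
qnorm(0.0202)                # about -2.05: qnorm recovers the z-score from the area
zscore.df["-1.30", "0"]      # 0.0968, the area to the left of z = -1.30
1 - pnorm(1.3)               # the same 0.0968, i.e. the area to the right of z = +1.30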

You can also draw a picture to go along with your z-score table, so that people remember which area they are looking up:

x <- seq(-4,4,0.1)                # z values for plotting
y <- dnorm(x)                     # normal density at each z
plot(x,dnorm(x),type="l", col="black", lwd=3)   # draw the bell curve
abline(v=-1,lwd=3,col="blue")     # mark the z-score of interest, z = -1
abline(h=0,lwd=3,col="black")     # draw the horizontal axis
polygon(c(x[1:31],rev(x[1:31])), c(rep(0,31),rev(y[1:31])), col="lightblue")   # shade the area to the left of z = -1 (x[31] is -1)
It looks like this:

[Figure: standard normal curve with the area to the left of z = -1 shaded in light blue]

In the slides, code to produce a z-score table for positive z-scores is also provided (where the areas to the left are greater than 50%).
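
The idea is the same as the table above, just flipped to positive z-scores. Here’s a minimal sketch of my own (not necessarily the exact code from the slides) that builds it the same way:

r0 <- seq(0, 3.4, 0.1)                          # z-scores to tenths, 0.0 through 3.4
z.pos <- sapply(0:9, function(d) r0 + d/100)    # add the hundredths digit for each column
upper <- round(pnorm(z.pos), 4)                 # left-tail areas, all 0.5 or greater
row.names(upper) <- sprintf("%.2f", r0)
colnames(upper) <- sprintf("%.2f", (0:9)/100)
upper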
