Taking a Subset of a Data Frame in R

I just wrote a new chapter for my students describing how to subset a data frame in R. The full text is available at https://docs.google.com/document/d/1K5U11-IKRkxNmitu_lS71Z6uLTQW_fp6QNbOMMwA5J8/edit?usp=sharing but here’s a preview:

Let’s load ChickWeight, one of R’s built-in datasets. It contains the weights of young chicks measured at up to 12 different times over their first three weeks of life. The chicks are on four different diets, numbered 1 through 4. Using the str() command, we find that there are 578 observations in this data frame and two categorical variables: Chick and Diet.

> data(ChickWeight)
> head(ChickWeight)
  weight Time Chick Diet
1     42    0     1    1
2     51    2     1    1
3     59    4     1    1
4     64    6     1    1
5     76    8     1    1
6     93   10     1    1
> str(ChickWeight)
Classes 'nfnGroupedData', 'nfGroupedData', 'groupedData' and 'data.frame':      578 obs. of  4 variables:
 $ weight: num  42 51 59 64 76 93 106 125 149 171 ...
 $ Time  : num  0 2 4 6 8 10 12 14 16 18 ...
 $ Chick : Ord.factor w/ 50 levels "18"<"16"<"15"<..: 15 15 15 15 15 15 15 15 15 15 ...
 $ Diet  : Factor w/ 4 levels "1","2","3","4": 1 1 1 1 1 1 1 1 1 1 ...
 - attr(*, "formula")=Class 'formula' length 3 weight ~ Time | Chick
  .. ..- attr(*, ".Environment")=<environment: R_EmptyEnv> 
 - attr(*, "outer")=Class 'formula' length 2 ~Diet
  .. ..- attr(*, ".Environment")=<environment: R_EmptyEnv> 
 - attr(*, "labels")=List of 2
  ..$ x: chr "Time"
  ..$ y: chr "Body weight"
 - attr(*, "units")=List of 2
  ..$ x: chr "(days)"
  ..$ y: chr "(gm)"

Get One Column: Now that we have a data frame named ChickWeight loaded into R, we can take subsets of these 578 observations. First, let’s assume we just want to pull out the column of weights. There are two ways we can do this: specifying the column by name, or specifying the column by its order of appearance. The general form for pulling information from data frames is data.frame[rows,columns] so you can get the first column in either of these two ways:

ChickWeight[,1]   		# get all rows, but only the first column
ChickWeight[,c("weight")]	# get all rows, and only the column named “weight”

Get Multiple Columns: If you want more than one column, you can specify the column numbers or the names of the variables that you want to extract. If you want to get the weight and diet columns, you would do this:

ChickWeight[,c(1,4)]   		# get all rows, but only 1st and 4th columns
ChickWeight[,c("weight","Diet")]	# get all rows, only “weight” & “Diet” columns

If you want more than one column and those columns are next to each other, you can do this:

ChickWeight[,c(1:3)]   		# get all rows, and columns 1 through 3

Get One Row: You can get the first row much like you got the first column, and any other row works the same way:

ChickWeight[1,]   		# get first row, and all columns
ChickWeight[82,]   		# get 82nd row, and all columns

Get Multiple Rows: If you want more than one row, you can specify the row numbers you want like this:

> ChickWeight[c(1:6,15,18,27),] 
   weight Time Chick Diet      
1      42    0     1    1   
2      51    2     1    1 
3      59    4     1    1    
4      64    6     1    1    
5      76    8     1    1 
6      93   10     1    1    
15     58    4     2    1    
18    103   10     2    1 
27     55    4     3    1    
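Combine Rows and Columns: Since the general form is data.frame[rows,columns], you can subset rows and columns in the same call. Here’s a minimal sketch using the same ChickWeight data; the logical test in the second line is the standard bracket-notation way to pick rows by a condition:

ChickWeight[c(1:6),c("weight","Time")]	# get the first six rows, only "weight" & "Time" columns
ChickWeight[ChickWeight$Diet == 2,]	# get all rows where Diet is 2, and all columns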

What Protests and Revolutions Reveal About Innovation

The following book review will appear in an issue of the Quality Management Journal later this year:

The End of Protest: A New Playbook for Revolution.   2016.  Micah White.  Toronto, Ontario, Canada. Alfred A. Knopf Publishing.  317 pages.

You may wonder why I’m reviewing a book written by the creator of the Occupy movement for an audience of academics and practitioners who care about quality and continuous improvement in organizations, many of which are trying not only to sustain themselves but also to make a profit. The answer is simple: by understanding how modern social movements are catalyzed by decentralized (and often autonomous) interactive media, we will be better able to achieve some goals we are very familiar with. These include 1) capturing the rapidly changing “Voice of the Customer” and, in particular, gaining access to its silent or hidden aspects, 2) promoting deep engagement, not just in work but in the human spirit, and 3) gaining insights into how innovation can be catalyzed and sustained in a truly democratic organization.

This book is packed with meticulously researched cases and deeply reflective analysis. As a result, it is not an easy read, but experiencing its modern insights in the historical context it presents is highly rewarding. Organized into three sections, it starts by describing the events leading up to the Occupy movement, the experience of being a part of it, and why the author feels Occupy fell short of its objectives. The second section covers several examples of protests, from ancient history to modern times, and extracts the most important strategic insight from each event. Next, a unified theory of revolution is presented that reconciles the unexpected, the emotional, and the systematic aspects of large-scale change.

The third section speaks directly to innovation. Some of the book’s most powerful messages, the principles of revolution, are presented in Chapter 14. “Understanding the principles behind revolution,” this chapter begins, “allows for unending tactical innovation that shifts the paradigms of activism, creates new forms of protest, and gives the people a sudden power over their rulers.” If we consider that we are often “ruled” by the status quo, then these principles provide insight into how we can break free: short sprints, breaking patterns, emphasizing spirit, presenting constraints, breaking scripts, transposing known tactics to new environmental contexts, and proposing ideas from the edge. The end result is a masterful work that describes how to hear, and mobilize, the collective will.

 

Reviewed by

Dr. Nicole M. Radziwill

 

The Value of Defining Context

Image Credit: Doug Buckley of http://hyperactive.to

The most important stage of problem-solving in organizations is often one of the earliest: getting everyone on the same page by defining the concepts, processes, and desired outcomes that are central to understanding the problem and formulating a solution. (“Everyone” can be the individuals on a project team, or the individuals that contribute actions to a process, or both.) Too often, we assume that the others around us see and experience the world the same way we do. In many cases, our assessments are not too far apart, which is how most people can get away with making this assumption on a regular basis.

In fact, some people experience things so differently that they don’t even “picture” anything in their minds. Can you believe it?

I first realized this divergence in the work context a few years ago, when a colleague and I were advising a project at a local social services office. We asked our students to document the process being used to handle claims. Nearly ten people were part of this claims-processing activity, and our students interviewed all of them, discovering that each person had a remarkably different idea of the process they were all engaged in! No wonder it took nearly two months to process each claim.

We helped them all — literally — get onto the same page, and once they all had the same mental map of the process, time-in-system for each claim dropped to 10 days. (This led us to the quantum-esque conclusion that there is no process until it is observed.)

Today, I read about how mathematician Keith Devlin revolutionized the process of intelligence gathering after 9/11 using this same approach… by going back to one of the first principles he learned in his academic training:

So what had I done? Nothing really — from my perspective. My task was to find a way of analyzing how context influences data analysis and reasoning in highly complex domains involving military, political, and social contexts. I took the oh-so-obvious (to me) first step. I need to write down as precise a mathematical definition as possible of what a context is. It took me a couple of days…I can’t say I was totally satisfied with it…but it was the best I could do, and it did at least give me a firm base on which to start to develop some rudimentary mathematical ideas.

The fairly large group of really smart academics, defense contractors, and senior DoD personnel spent the entire hour of my allotted time discussing that one definition. The discussion brought out that all the different experts had a different conception of what a context is — a recipe for disaster.

What I had given them was, first, I asked the question “What is a context?” Since each person in the room besides me had a good working concept of context — different ones, as I just noted — they never thought to write down a formal definition. It was not part of what they did. And second, by presenting them with a formal definition, I gave them a common reference point from which they could compare and contrast their own notions. There we had the beginnings of disaster avoidance.

Getting people to very precisely understand the definitions, concepts, processes, and desired outcomes that are central to a problem might take some time and effort, but it is always extremely valuable.

When you face a situation like this in mathematics, you spend a lot of time going back to the basics. You ask questions like, “What do these words mean in this context?” and, “What obvious attempts have already been ruled out, and why?” More deeply, you’d ask, “Why are these particular open questions important?” and, “Where do they see this line of inquiry leading?”

(You can read the full article about Devlin, and more important lessons from mathematical thinking, here.)


Using xda with googlesheets in R

Image Credit: Doug Buckley of http://hyperactive.to

Want to do a quick exploratory data analysis in R on data that’s stored in a spreadsheet on Google Drive? You’re in luck, because now you can use the new xda package in conjunction with Jenny Bryan’s googlesheets. There are some quirks, though, and that’s what this post is all about.

Before proceeding, you should review this recent article from R-Bloggers called “Introducing xda”.

First, be sure to install the googlesheets and xda packages. Although googlesheets is on CRAN, xda is not, and you’ll have to bring it in directly from github. You can actually do the same for googlesheets if you like:

install.packages("devtools")
library(devtools)
install_github("jennybc/googlesheets")
install_github("ujjwalkarn/xda")
library(googlesheets)
library(xda)

Next, you’ll have to show R how to access your Google spreadsheet. While you are looking at your spreadsheet, go to File -> Publish to the Web. The URL that’s in the text box is the one you want to capture. Just to make sure it works, copy and paste it into a new browser address window and see if you can display your spreadsheet in your browser.

If you want to import the data at https://docs.google.com/spreadsheets/d/1DO0ksD8d-rn_j2Yn7DQKZDPBrhrvZTpgszewxokjWKU/pubhtml into R, for example, you’ll need to know the spreadsheet’s key. That’s the long string of unintelligible numbers and letters between the “d” and the “pubhtml” in the URL. So, my key would be “1DO0ksD8d-rn_j2Yn7DQKZDPBrhrvZTpgszewxokjWKU” — which you’ll see in this next block of code:

> my.gs <- gs_key("1DO0ksD8d-rn_j2Yn7DQKZDPBrhrvZTpgszewxokjWKU") 
> my.data <- gs_read(my.gs) # Retrieves data from googlesheets and places it into an R object. 
> my.df <- as.data.frame(my.data)  # Important! xda needs a plain data frame, not the tbl that gs_read() returns.

Now, you can access your data. Try head(my.df) to make sure you’ve imported it properly.
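For example, a quick sanity check (assuming the import above worked) might look like this:

head(my.df)	# shows the first six rows, just like any other data frame
str(my.df)	# shows each column's name, type, and a preview of its values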

Next, it’s time for exploratory data analysis. There are three commands available:

  • numSummary – takes a data frame as an argument; provides descriptive statistics, quantiles, and missing-data info for the quantitative variables
  • charSummary – takes a data frame as an argument; provides counts, missing-data info, and the number of unique values for the character (categorical) variables
  • bivariate – takes a data frame and the names of two quantitative variables as arguments, and performs a quick bivariate analysis (passing it categorical variables, or one categorical and one quantitative variable, will throw an error)

Here’s what happens when you run those commands on the data you just loaded in from your Google spreadsheet:

> numSummary(my.df)
               n   mean    sd median   max  min  mode miss miss%    1%   5%   25%   50%   75%   90%   95%   99%
obs          200 100.50 57.88  100.5 200.0  1.0   1.0    0     0  2.99 11.0  50.8 100.5 150.2 180.1 190.0 198.0
heartrates   200  73.01  7.43   73.0  96.3 53.4  71.2    0     0 56.49 61.2  68.4  73.0  77.4  82.6  85.7  90.3
systolics    200 139.27 29.27  138.0 221.0 59.0 139.0    0     0 79.98 96.0 117.0 138.0 160.0 177.2 188.1 205.0
diastolics   200  87.76  9.74   87.7 116.4 62.4  85.2    0     0 66.01 72.2  81.9  87.7  93.7 100.3 104.5 108.3
bmis         200  25.53  3.06   25.0  33.1 18.4  24.7    0     0 19.00 21.0  23.5  25.0  27.7  29.6  31.2  32.8
ages         200  44.41 14.59   45.0  70.0 18.0  30.0    0     0 18.00 22.0  32.0  45.0  57.0  64.1  67.0  70.0
heartpm      200  72.26  3.55   72.2  83.8 64.2  74.2    0     0 64.72 66.2  69.8  72.2  74.2  76.4  78.7  81.4
fitnesslevel 200   2.62  1.17    3.0   4.0  1.0   4.0    0     0  1.00  1.0   2.0   3.0   4.0   4.0   4.0   4.0

> charSummary(my.df)
          n miss miss% unique
genders 200    0     0      2
smokers 200    0     0      2
group   200    0     0      4

> bivariate(my.df,'heartrates','bmis')
     bin_bmis min_heartrates max_heartrates mean_heartrates
1   (18.3,22]          53.40          85.60           72.80
2   (22,25.7]          55.70          90.70           72.87
3 (25.7,29.4]          60.30          96.30           73.45
4 (29.4,33.1]          56.50          90.30           72.46

Observations:

  • There is a fourth command, Plot, but I couldn’t get it to work on any googlesheets data. The xda package wants class(range) to be anything other than “function”, and it came back as “function” for every sheet I attempted to load.
  • There really should be an extra column in xda that displays the enumeration of all the unique values for the factors. It’s great to know how many unique values there are, but I would love to be reminded of what they are too, unless there are too many of them. (A quick workaround is sketched after this list.)
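Until xda adds something like that, here’s a minimal workaround sketch (assuming the my.df object from above): it lists the unique values of every character or factor column, which is exactly the enumeration I wished charSummary would display.

# list the unique values of each character/factor column in my.df
is.cat <- sapply(my.df, function(x) is.character(x) || is.factor(x))
lapply(my.df[, is.cat, drop = FALSE], unique)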

Please share your experiences using xda & googlesheets together in the comments! Thanks!

Innovation Tips for Strategic Planning

Image Credit: Doug Buckley of http://hyperactive.to

Over the past 15 years, I’ve helped several organizations with continuous improvement initiatives at the strategic, executive level. A handful of themes keep appearing and reappearing, so the purpose of this post is to call out a few and provide some insights into how to deal with them!

These come up when you are engaged in strategic planning and when you are planning operations (to ensure that processes and procedures ultimately satisfy strategic goals), and are especially prominent when you’re trying to develop or use Key Performance Indicators (KPIs) and other metrics or analytics.

 

1) How do you measure innovation? Before you pick metrics, recognize that the answer to this question depends on how you articulate the strategic goals for your innovation outcomes. Do you want to:

  • Keep up with changing technology?
  • Develop a new product/technology?
  • Lead your industry in developing best practices?
  • Pioneer new business models?
  • Improve quality of life for a particular group of people?

All of these will be measured in different ways! And it’s OK to not strategically innovate in one area or another… for example, you might not want to innovate your business model if technology development is your forte. Innovation is one of those things where you really don’t want to be everything to everyone… by design.

 

2) Do you distinguish between improving productivity and generating impact?

Improving quality (the ability to satisfy stated and implied needs) is good. Improving productivity (that is, what you can produce given the resources that you use) is also good. Reducing defects, reducing waste, and reducing variation (sometimes) are all very good things to do, and to report on. 

But who really cares about any improvements at all unless they have impact? It’s always necessary to tie your KPIs, which are often measures of outcomes, to metrics or analytics that can tell the story about why a particular improvement was useful — in the short term, and (hopefully also) in the long term.

You also have to balance productivity and impact. For example, maybe you run an ultra-efficient 24/7 Help Desk. Your effectiveness is exemplary… when someone submits a request, it’s always satisfied within 8 hours. But you discover that no tickets come in between Friday at 5pm and Monday at 8am. So all that time you spend staffing that Help Desk on the weekend? It’s non-value-added time, and could be eliminated to improve your productivity… but won’t influence your impact at all.

We just worked on a project where we consciously had to think about how all of the following interact… and you should too:

  • Organizational Productivity: did your improvement help increase the capacity or capability for part of your organization? If so, then it could contribute to technical productivity or business productivity.
  • Technical Productivity: did the improvement remove a technical barrier to getting work done, or make it faster or less error-prone?
  • Business Productivity: did the improvement help you get the needs of the business satisfied faster or better?
  • Business Impact: Did the improvements that yielded organizational productivity benefits, technical productivity benefits, or business productivity benefits make a difference at the strategic level? (This answers the “so what” question. So you improved your throughput by 83%… so what? Who really cares, and why does this matter to them? Long-term, why does this awesome thing you did really matter?)
  • Educational/Workforce Development Impact: Were the lessons learned captured, fed back into the organization’s processes to close the loop on learning, or maybe even used to educate people who may become part of your workforce pipeline?

All of the categories above are interrelated. I don’t think you can have a comprehensive, innovation-focused analytics approach unless you address all of these.

 

3) Do you distinguish between participation and engagement?

Participation means you showed up. Engagement means you got involved, you stayed involved, your mission was advanced, or maybe you used this experience to help society. Too often, I see organizations that want to improve engagement, and all the metrics they select are really good at characterizing participation.

I’m writing a paper on this topic right now, but in the meantime (if you want to get a REALLY good sense of the difference between participation and engagement), read The Participatory Museum by Nina Simon. Yes, it is “about museums” — and yes, I know you’re in business or industry — and YES, this book really will provide you with amazing management insights. So read it!

Voice of the Customer (VOC) in the Internet of Things (IoT)

Image Credit: Doug Buckley of http://hyperactive.to

In February, I speculated about how our notion of “Voice of the Customer” (VoC) might change as the Internet of Things (IoT) grows from 6.4 billion entities in 2016 to over 20 billion by 2020. The IoT will require us to re-think fundamental questions about how our interests as customers and stakeholders are represented. In particular:

  • What will the world look (and feel) like when everything you interact with has a “voice”?
  • How will the “Voice of the Customer” be heard when all of that customer’s stuff ALSO has a voice?
  • Will your stuff have “agency” — that is, the right to represent your needs and interests to other products and services?

Companies are also starting to envision how their strategies will morph in response to the new capabilities offered by the IoT. Starbucks CTO Gerri Martin-Flickinger, for example, shared her vision in GeekWire (3/24/2016):

“Imagine you’re on a road trip, driving across the country, and you pull into a Starbucks drive-through that you’ve never been to before,” she said at the Starbucks annual shareholders’ meeting Wednesday in Seattle. “We detect you’re a loyal customer and you buy about the same thing every day, at about the same time. So as you pull up to the order screen, we show you your order, and the barista welcomes you by name.”

“Does that sound crazy?” she asked. “No, actually, not really. In the coming months and years you will see us continue to deliver on a basic aspiration: to deliver technology that enhances the human connection.”

IoT to enhance the human connection? Sounds great, right? But hold on… that’s not what she’s talking about. She wants to enhance the feeling of connection between individuals and a company, which is nothing different than cultivating customer loyalty.

Her scenario is actually pretty appealing: I can imagine pulling up to a Starbucks drive-through and having everything disappear from the screen except for maybe 2 or 3 choices of things I’ve had before, and 1 or 2 choices of things I might be interested in. The company could actually work with me to help alleviate my sensory overload, reducing the stress I experience when presented with a hundred-item menu, and improving my user experience. IoT can help them hear my voice as a customer, and adapt to my preferences, but it won’t make them genuinely care about me any more than they already do (or don’t).

When I first read this article, I thought it would give me insight into a question I’ve had for a while now… but the question is still substantially unanswered: How can IoT facilitate capturing and responding to VoC in a way that really does cultivate human connection? John Hagel and John Seely Brown, in my opinion, are a little closer to the target:

[Examples] highlight a paradox inherent in connected devices and the Internet of Things: although technology aims to weave data streams without human intervention, its deeper value comes from connecting people. By offloading data capture and information transfer to the background, devices and applications can actually improve human relationships. Practitioners can use technology to get technology out of the way—to move data and information flows to the side and enable better human interaction…

Where is Quality Management Headed?

Image Credit: Doug Buckley of http://hyperactive.to

[This post is in response to ASQ’s February topic for the Influential Voices group, which asks: Where do you plan to take your career in 2016? What’s your view of careers in quality today—what challenges is this field facing? How can someone starting out in quality succeed?]

We are about to experience a paradigm shift in production, operations, and service: a shift that will have direct consequences on the principles and practice of design, development, and quality management. This “fourth industrial revolution” of cyber-physical systems will require more people in the workforce to understand quality principles associated with co-creation of value, and to develop novel business models. New technical skills will become critical for a greater segment of workers, including embedded software, artificial intelligence, data science, analytics, Big Data (and data quality), and even systems integration. 

Over the past 20 years, we moved many aspects of our work and our lives online. And in the next 20 years, the boundaries between the physical world and the online world will blur — to a point where the distinction may become unnecessary.

Here is a vignette to illustrate the kinds of changes we can anticipate. Imagine the next-generation FitBit, the personalized exercise assistant that keeps track of the number of steps you walk each day. As early as 2020, this device will not only automatically track your exercise patterns, but will also automatically integrate that information with your personal health records. Because one-size-fits-all diet strategies have recently been shown to be predominantly unfounded, and researchers like Kevin Hall, Eran Elinav, and Eran Segal now know that the only truly effective diets are the ones that are customized to your body’s nutritional responses [1], your FitBit and your health records will be able to talk to your food manager application to design the perfect diet for you (given your targets and objectives). Furthermore, to make it easy for you, your applications will also autonomously communicate with your refrigerator and pantry (to monitor how much food you have available), your local grocery store, and your calendar app, so that food deliveries show up when, and only when, you need to be restocked. You’re amazed that you’re spending less on food, less of it is going to waste, and you never have to wonder what you’re going to make for dinner. Your local grocery store is also greatly rewarded, not only with your loyalty, but because it can anticipate demand from you and everyone else in your community – and create specials, promotions, and service strategies targeted to your needs (rather than just what the store guesses you need).

Although parts of this example may seem futuristic, the technologies are already in place. What is missing is our ability to link these technologies together using development processes that are effective and efficient – and, in particular, to coordinate and engage the people who will help make it happen. This is a job for quality managers and others who study production and operations management.

As the Internet of Things (IoT) and pervasive information become commonplace, the fundamental nature and character of how quality management principles are applied in practice will be forced to change. As Eric Schmidt, former Chairman of Google, explains:  “the new age of artificial intelligence is beginning, and it’s a big deal.” [2] Here are some ways that this shift will impact researchers and practitioners interested in quality:

  • Strategic deployment of IoT technologies will help us simultaneously improve our use of enterprise assets, reduce waste, promote sustainability, and coordinate people and machines to more effectively meet strategic goals and operational targets.
  • Smart materials, embedded in our production and service ecosystems, will change our views of objects from inert and passive to embedded and engaged. For example, MIT has developed a “smart band-aid” that communicates with a wound, provides visual indicators of the healing process, and delivers medication as needed. [3] Software developers will need to know how to make this communication seamless and reliable in a variety of operations contexts.
  • Our technologies will be able to proactively anticipate the Voice of the Customer, enabling us to meet not only their stated and implied needs, but also their emergent needs and hard-to-express desires. Similarly, will the nature of customer satisfaction change as IoT becomes more pervasive?
  • Cloud- and IoT-driven analytics will make more information available for powerful decision-making (e.g. real-time weather analytics), but they come with their own set of challenges: how to find the data, how to assess data quality, and how to select and store data with likely future value to decision makers. This will be particularly challenging since analytics has not been a historical focus among quality managers. [4]
  • Smart, demand-driven supply chains (and supply networks) will leverage Big Data, and engage in automated planning, automatic adjustment to changing conditions or supply chain disruptions like war or extreme weather events, and self-regulation.
  • Smart manufacturing systems will implement real time communication between people, machines, materials, factories and warehouses, supply chain partners, and logistics partners using cloud computing. Production systems will adapt to demand as well as environmental factors, like the availability of resources and components. Sustainability will be a required core capability of all organizations that produce goods.
  • Cognitive manufacturing will implement manufacturing and service systems capable of perception, judgment, and improving quality autonomously – without the delays associated with human decision-making or the detection of issues.
  • Cybersecurity will be recognized as a critical component of all of the above. For most (if not all) of these next generation products and production systems, quality will not be possible without addressing information security.
  • The nature of quality assurance will also change, since products will continue to learn (and not necessarily meet their own quality requirements) after purchase or acquisition, until the consumer has used them for a while. In a December 2015 article I wrote for Software Quality Professional, I ask “How long is the learning process for this technology, and have [product engineers] designed test cases to accommodate that process after the product has been released? The testing process cannot find closure until the end of the ‘burn-in’ period when systems have fully learned about their surroundings.” [5]
  • We will need new theories for software quality practice in an era where embedded artificial intelligence and technological panpsychism (autonomous objects with awareness, perception, and judgment) are the norm.

How do we design quality into a broad, adaptive, dynamically evolving ecosystem of people, materials, objects, and processes? This is the extraordinarily complex and multifaceted question that we, as a community of academics and practitioners, must together address.

Just starting out in quality? My advice is to get a technical degree (science, math, or engineering) which will provide you with a solid foundation for understanding the new modes of production that are on the horizon. Industrial engineering, operations research, industrial design, and mechanical engineering are great fits for someone who wants a career in quality, as are statistics, data science, manufacturing engineering, and telecommunications. Cybersecurity and intelligence will become increasingly more central to quality management, so these are also good directions to take. Or, consider applying for an interdisciplinary program like JMU’s Integrated Science and Technology where I teach. We’re developing a new 21-credit sector right now where you can study EVERYTHING in the list above! Also, certifications are a plus, but in addition to completing training programs be sure to get formally certified by a professional organization to make sure that your credentials are widely recognized (e.g. through ASQ and ATMAE).

 

References

[1] http://www.huffingtonpost.com/entry/no-one-size-fits-all-diet-plan_564d605de4b00b7997f94272
[2] https://www.washingtonpost.com/news/innovations/wp/2015/09/15/what-eric-schmidt-gets-right-and-wrong-about-the-future-of-artificial-intelligence/
[3] http://news.mit.edu/2015/stretchable-hydrogel-electronics-1207
[4] Evans, J. R. (2015). Modern Analytics and the Future of Quality and Performance Excellence. The Quality Management Journal, 22(4), 6.
[5] Radziwill, N. M., Benton, M. C., Boadu, K., & Perdomo, W. (2015). A Case-Based Look at Integrating Social Context into Software Quality. Software Quality Professional, December.