Tag Archives: technology

If Japan Can, Why Can’t We? A Retrospective

June 24, 1980 is kind of like July 4, 1776 for quality management… that’s the pivotal day that NBC News aired its one-hour-and-16-minute documentary “If Japan Can, Why Can’t We?”, introducing W. Edwards Deming and his methods to the American public.

The video was unavailable for years, but as of 2018 it’s posted on YouTube. So my sophomore undergrads in Production & Operations Management took a step back in time to get a taste of the manufacturing environment of the late 1970s, and watched it during class.

The last time I watched it was in 1997, in a graduate industrial engineering class. It didn’t feel quite as dated then as it does now, nor did I yet have extensive industry experience as a lens through which to view the interviews.

What did surprise me is that the challenges they were facing then aren’t that much different from the ones we face today — and the groundbreaking good advice from Deming is still good advice today.

  • Before 1980, it was common practice to produce a whole bunch of stuff and then check and see which ones were bad, and throw them out. The video provides a clear and consistent story around the need to design quality in to products and processes, which then reduces (or eliminates) the need to inspect bad quality out.
  • It was also common to tamper with a process that was just exhibiting random variation. As one of the line workers in the documentary said, “We didn’t know. If we felt like there might be a problem with the process, we would just go fix it.” Deming’s applications of Shewhart’s methods made it clear that there is no need to tamper with a process that’s exhibiting only random variation.
  • Both workers and managers seemed frustrated with the sheer volume of regulations they had to address, and noted that it served to increase costs, decrease the rate of innovation, and disproportionately hurt small businesses. They noted that there was a great need for government and industry to partner to resolve these issues, and that Japan was a model for making these interactions successful.
  • Narrator Lloyd Dobyns remarked that “the Japanese operate by consensus… we, by competition.” He made the point that one reason industrial reforms were so powerful and positive was that Japanese culture naturally supported working together towards shared goals. He cautioned managers that they couldn’t just drop in statistical quality control and expect a rosy outcome: improving quality is a cultural commitment, and the methods are not as useful in the absence of buy-in and engagement.

The video also sheds light on ASQ’s November question to the Influential Voices, which is: “What’s the key to talking quality with the C-Suite?” Typical responses include: think at the strategic level; create compelling arguments using the language of money; learn the art of storytelling and connect your case with what is important to the executives.

But I think the answer is much more subtle. In the 1980 video, workers comment on how amazed their managers were when Deming proclaimed that management was responsible for improving productivity. How could that be?! Many managers at that time were convinced that if a productivity problem existed, it was because the workers didn’t work fast enough, or with enough skill — or maybe they had attitude problems! Certainly not because the managers were not managing well.

Implementing simple techniques like improving training programs and establishing quality circles (which demonstrated values like increased transparency, considering all ideas, putting executives on the factory floor so they could learn and appreciate the work being done, increasing worker participation and engagement, encouraging work/life balance, and treating workers with respect and integrity) were already demonstrating benefits in some U.S. companies. But surprisingly, these simple techniques were not widespread, and not common sense.

Just like Deming advocated, quality belongs to everyone. You can’t go to a CEO and suggest that there are quality issues that he or she does not care about. More likely, the CEO believes that he or she is paying a lot of attention to quality. They won’t like it if you accuse them of not caring, or not having the technical background to improve quality. The C-Suite is in a powerful position where they can, through policies and governance, influence not only the actions and operating procedures of the system, but also its values and core competencies — through business model selection and implementation. 

What you can do, as a quality professional, is acknowledge and affirm their commitment to quality. Communicate quickly, clearly, and concisely when you do. Executives have to find the quickest ways to decompose and understand complex problems in rapidly changing external environments, and then make decisions that affect thousands (and sometimes, millions!) of people. Find examples and stories from other organizations who have created huge ripples of impact using quality tools and technologies, and relate them concretely to your company.

Let the C-Suite know that you can help them leverage their organization’s talent to achieve their goals, then continually build their trust.

The key to talking quality with the C-suite is empathy.

You may also be interested in “Are Deming’s 14 Points Still Valid?” from Nov 19, 2012.

A Robust Approach to Determining Voice of the Customer (VOC)

Image Credit: Doug Buckley of http://hyperactive.to


I got really excited when I discovered Morris Holbrook’s 1996 piece on customer value, and wanted to share it with all of you. From the perspective of philosophy, he puts together a vision of what we should mean by customer value… and a framework for specifying it. The general approach is straightforward:

“Customer Value provides the foundation for all marketing activity…
One can understand a given type of value only by considering its relationship to other types of value.
Thus, we can understand Quality only by comparison with Beauty, Convenience, and Reputation; we can understand Beauty only by comparison with Quality, Fun, and Ecstasy.”

There are MANY dimensions that should be addressed when attempting to characterize the Voice of the Customer (VOC). When interacting with your customers or potential customers, be sure to use surveys or interview techniques that aim to acquire information in all of these areas for a complete assessment of VOC.

The author defines customer value as an “interactive relativistic preference experience”:

  • Interactive – you construct your notion of value through interaction with the object
  • Relativistic – you instinctively do pairwise comparisons (e.g. “I like Company A’s customer service better than Company B’s”)
  • Preference – you make judgments about the value of an object
  • Experience – value is realized at the consumption stage, rather than the purchase stage

His typology of customer value is particularly interesting to me:

His framework crosses three distinctions (extrinsic vs. intrinsic value, self-oriented vs. other-oriented value, and active vs. reactive value), yielding eight types:

  • Efficiency (self-oriented, extrinsic, active) and Excellence/Quality (self-oriented, extrinsic, reactive)
  • Play/Fun (self-oriented, intrinsic, active) and Aesthetics/Beauty (self-oriented, intrinsic, reactive)
  • Status (other-oriented, extrinsic, active) and Esteem/Reputation (other-oriented, extrinsic, reactive)
  • Ethics/Virtue (other-oriented, intrinsic, active) and Spirituality/Faith (other-oriented, intrinsic, reactive)

Most of the time, we do a good job at coming up with quality attributes that reflect efficiency and excellence. Some of the time, we consider aesthetics and play. But how often – while designing a product, process, or service – have you really thought about status, esteem, ethics, and spirituality as dimensions of quality?

This requires taking an “other-oriented” approach, as recommended by Holbrook. We’re not used to doing that – but as organizations transform to adjust to the age of empathy, it will be necessary.

Holbrook, M. B. (1996). “Special Session Summary: Customer Value – A Framework for Analysis and Research,” in NA – Advances in Consumer Research, Volume 23, eds. Kim P. Corfman and John G. Lynch Jr., Provo, UT: Association for Consumer Research, pp. 138-142. Retrieved from http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=7929

Quality and Diversity, Especially Women in Tech

Image Credit: Doug Buckley of http://hyperactive.to


The newly launched R Consortium has announced its inaugural Board members, and not one of them is a woman. (Even more unfortunately, I don’t think any of them are active R users; although I’m sure he’s used it, the new President’s bio establishes him as a SAS and S-PLUS user.)

Although I’m sure the lack of diversity is an oversight (as it so often is), I’ve gotten my knickers in a knot a lot more about this issue lately. It’s probably just because I’m getting older (I’ll be 40 next year), but it’s also due to the fact that I’ve been reflecting an awful lot more lately: about what I’ve done, and what I’ve chosen not to do. About how I’ve struggled, and the battles I’ve chosen (versus those I’ve chosen to ignore). About how the subtle and unspoken climate of women in technology is keeping them out, and chasing them away, even though the industry needs more.

I really love programming. I’ve been doing it since 1982, when I realized that I could make my Atari 800 beep on command.

But in the workplace, I never really felt comfortable as a programmer. Whether they intended to or not, male colleagues always gave off a vibe of mistrust when they integrated my code… they always had a better way to design a new module, or a better approach to resolve a troubleshooting issue. When I got an instrumentation job that required field work on the hardware, I’d hear comments like “maybe you can stay here… girls don’t like to get dirty.” I felt uncomfortable geeking out with other women because I even felt like I’d be judged by them… like if they were some technical rock star, they would find my skills an embarrassment to other women like themselves who were trying to become experts.

So I went into software development management, where my role was much more accepted. My job was to let the coders do their job, and just keep everyone else out of their hair. I remember hearing comments like “you know a lot more about code than I thought you would.” I wanted to get a lot deeper into the technical aspects of the work, but I never felt like one of the guys. So I stopped trying.

Even while working as a manager, the organizations I was a part of were always male-dominated, in both the hierarchy and the style and tone of the work environment. (It was much like the masculine, emotionally void environment of so many of the classrooms I’d spent time in during my youth.) I felt lots of pressure to be firm and decisive, to never show emotions, and to work a 60 hour week even when I had a newborn at home. When I was firm and unyielding, I was called “difficult” and “strident.” I changed my approach and became “not assertive enough.” The women who I saw as being successful were all decidedly masculine, and I couldn’t transform my personality to become an ultra-productive, emotion-suppressing machine. (I’ve got the personality of an artist, and I’ve got to flow with my ideas and inspiration.)

Eventually I lost my mojo, switched careers entirely and went into higher education. (What do I teach? Mostly R… so I’m having fun, and I get to code pretty much every day.) But I still fantasize about getting back into the technical workforce and being one of those rare women leaders in technology (which I try to rationalize is not that rare at all, because I know plenty of women scientists, engineers, and technicians). But yeah, comparatively, we are a minority.

My situation is not unique. So why does this tend to happen? Gordon Hunt of Silicon Republic reports that gender stereotypes, a small talent pool, and in-group favoritism are to blame. I’ll agree with the gender stereotyping – even women do it to each other. My college roommate called me “Nerdcole” and it was sort of endearing, and sort of not. As a hiring manager, I remember being surprised every time a resume from a woman crossed my email box, and giving it a second look no matter what. I remember feeling guilty every time I thought “oh, well, she can’t be as serious about doing this as the guys are.” As for in-group favoritism, I think it’s hard not to favor naturally masculine people for jobs in a naturally masculine environment. 

The role of diversity in achieving quality and stimulating innovation has not been deeply explored in the research. Doing a quick literature search, I could only find a few examples. Liang et al. (2013) found that diversity does influence innovation, but due to inconsistent outcomes they couldn’t recommend a management intervention. Feldman & Audretsch (1999) found that more innovation occurs in cities because of greater diversity. Ostergaard et al. (2011) explored the breadth of a firm’s knowledge base and its influence on innovation. And in one of my favorite papers ever, Bassett-Jones (2005) explains that diversity creates a “combustible cocktail of creative tension” that, although difficult to manage, ultimately enhances a firm’s innovation performance.

I found no papers that looked at a link between diversity and quality performance.

But I would love to have a combustible cocktail of creative tension right now.

What (Really) is a Data Scientist?

Drew Conway’s very popular Data Science Venn Diagram. From http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram

What is a data scientist? What makes for a good (or great!) data scientist? It’s been challenging enough to determine what a data scientist really is (several people have proposed ways to look at this). The Guardian (a UK publication) said, however, that a true data scientist is as “rare as a unicorn”.

I believe that the data scientist “unicorn” is hidden right in front of our faces; the purpose of this post is to help you find it. First, we’ll take a look at some models, and then I’ll present my version of what a data scientist is (and how this person can become “great”).

#1 Drew Conway’s popular “Data Science Venn Diagram” — created in 2010 — characterizes the data scientist as a person with some combination of skills and expertise in three categories (and preferably, depth in all of them): 1) Hacking, 2) Math and Statistics, and 3) Substantive Expertise (also called “domain knowledge”).

Later, he added that there was a critical missing element in the diagram: that effective storytelling with data is fundamental. The real value-add, he says, is being able to construct actionable knowledge that facilitates effective decision making. How to get the “actionable” part? Be able to communicate well with the people who have the responsibility and authority to act.

“To me, data plus math and statistics only gets you machine learning, which is great if that is what you are interested in, but not if you are doing data science. Science is about discovery and building knowledge, which requires some motivating questions about the world and hypotheses that can be brought to data and tested with statistical methods. On the flip-side, substantive expertise plus math and statistics knowledge is where most traditional researcher falls. Doctoral level researchers spend most of their time acquiring expertise in these areas, but very little time learning about technology. Part of this is the culture of academia, which does not reward researchers for understanding technology. That said, I have met many young academics and graduate students that are eager to bucking that tradition.” (Drew Conway, March 26, 2013)

#2 In 2013, Harlan Harris (along with his two colleagues, Sean Patrick Murphy and Marck Vaisman) published a fantastic study where they surveyed approximately 250 professionals who self-identified with the “data science” label. Each person was asked to rank their proficiency in each of 22 skills (for example, Back-End Programming, Machine Learning, and Unstructured Data). Using clustering, they identified four distinct “personality types” among data scientists: Data Businesspeople, Data Creatives, Data Developers, and Data Researchers.
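To make the clustering idea concrete, here is a minimal k-means sketch in Python. Everything in it is a hypothetical stand-in: the skill names, the synthetic respondents, and the choice of plain k-means are my own illustration of the general technique, not the study’s actual data or method.

```python
import random

random.seed(42)

# Invented skill labels, loosely echoing the kinds of skills surveyed.
SKILLS = ["back_end_programming", "machine_learning", "statistics", "business"]

def make_respondent(peaks):
    """Generate a fake self-ranking vector (roughly 0-10), peaked on some skills."""
    return [random.gauss(8 if s in peaks else 3, 1) for s in SKILLS]

# Two synthetic "types": programming-heavy vs. business/stats-heavy respondents.
data = ([make_respondent({"back_end_programming", "machine_learning"}) for _ in range(20)]
        + [make_respondent({"business", "statistics"}) for _ in range(20)])

def kmeans(points, k, iters=20):
    """Tiny pure-Python k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster, and repeat."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        centroids = [[sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(data, k=2)
for c in centroids:
    print({s: round(v, 1) for s, v in zip(SKILLS, c)})
```

Each resulting centroid is an “average skill profile” for one group — the same basic move that lets survey responses fall into a handful of personality types.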

As a manager, you might try to cut corners by hiring all Data Creatives (*). But then, you won’t benefit from the ultra-awareness that theorists provide. They can help you avoid choosing techniques that are inappropriate, if (say) your data violates the assumptions of the methods. This is a big deal! You can generate completely bogus conclusions by using the wrong tool for the job. You would not benefit from the stress relief that the Data Developers will provide to the rest of the data science team. You would not benefit from the deep domain knowledge that the Data Businessperson can provide… that critical tacit and explicit knowledge that can save you from making a potentially disastrous decision.

Although most analysts and researchers who screw up do so innocently, by stumbling into misuses of statistical techniques, some unscrupulous folks might mislead others on purpose; for an extreme case, see I Fooled Millions Into Thinking Chocolate Helps Weight Loss.

Their complete results are available as a 30-page report (available in print or on Kindle).

#3 The Guardian is, in my opinion, a little more rooted in realistic expectations:

“The data scientist’s skills – advanced analytics, data integration, software development, creativity, good communications skills and business acumen – often already exist in an organisation. Just not in a single person… likely to be spread over different roles, such as statisticians, bio-chemists, programmers, computer scientists and business analysts. And they’re easier to find and hire than data scientists.”

They cite British Airways as an exemplar:

“[British Airways] believes that data scientists are more effective and bring more value to the business when they work within teams. Innovation has usually been found to occur within team environments where there are multiple skills, rather than because someone working in isolation has a brilliant idea, as often portrayed in TV dramas.”

Their position is that you can’t get all those skills in one person, so don’t look for them in one. Just yesterday I realized that if I learn one new amazing thing in R every single day of my life, by the time I die, I will probably be an expert in about 2% of the packages (assuming R is still around).

#4 Others have chimed in on this question and provided outlines of skill sets, such as:

  • Six Qualities of a Great Data Scientist: statistical thinking, technical acumen, multi-modal communication skills, curiosity, creativity, grit
  • The Udacity blog: basic tools (R, Python), software engineering, statistics, machine learning, multivariate calculus, linear algebra, data munging, data visualization and communication, and the ultimately nebulous “thinking like a data scientist”
  • IBM: “part analyst, part artist” skilled in “computer science and applications, modeling, statistics, analytics and math… [and] strong business acumen, coupled with the ability to communicate findings to both business and IT leaders in a way that can influence how an organization approaches a business challenge.”
  • SAS: “a new breed of analytical data expert who have the technical skills to solve complex problems – and the curiosity to explore what problems need to be solved. They’re part mathematician, part computer scientist and part trend-spotter.” (Doesn’t that sound exciting?)
  • DataJobs.Com: well, these guys just took Drew Conway’s Venn diagram and relabeled it.

#5 My Answer to “What is a Data Scientist?”:  A data scientist is a sociotechnical boundary spanner who helps convert data and information into actionable knowledge.

Based on all of the perspectives above, I’d like to add that the data scientist must have an awareness of the context of the problems being solved: social, cultural, economic, political, and technological. Who are the stakeholders? What’s important to them? How are they likely to respond to the actions we take in response to the new knowledge data science brings our way? What’s best for everyone involved so that we can achieve sustainability and the effective use of our resources? And what’s with the word “helps” in the definition above? This is intended to reflect that in my opinion, a single person can’t address the needs of a complex data science challenge. We need each other to be “great” at it.

A data scientist is someone who can effectively span the boundaries between:

1) understanding social+ context,

2) correctly selecting and applying techniques from math and statistics,

3) leveraging hacking skills wherever necessary,

4) applying domain knowledge, and

5) creating compelling and actionable stories and connections that help decision-makers achieve their goals.

This person has a depth of knowledge and technical expertise in at least one of these five areas, and a high level of familiarity with each of the other areas (commensurate with Harris’ T-model). They are able to work productively within a small team whose deep skills span all five areas.

It’s data-driven decision making embedded in a rich social, cultural, economic, political, and technological context… where the challenges may be complex, and the stakes (and ultimately, the benefits) may be high. 


(*) Disclosure: I am a Data Creative!

(**) Quality professionals (like Six Sigma Black Belts) have been doing this for decades. How can we enhance, expand, and leverage our skills to address the growing need for data scientists?

What Kentucky Derby Handicapping Can Teach Us About Organizational Metrics

My Favorite (#10, Firing Line), from http://www.telegraph.co.uk/sport/horseracing/11574821/Kentucky-Derby-Simon-Callaghan-has-Firing-Line-primed.html. Apr 29, 2015; Louisville, KY, USA; Exercise rider Humberto Gomez works out Kentucky Derby hopeful Firing Line trained by Simon Callaghan at Churchill Downs. Mandatory Credit: Jamie Rhodes-USA TODAY Sports

I love horse racing. More specifically, I love betting on the horses. Why? Because it’s a complex exercise in data science, requiring you to integrate (what feels like) hundreds of different kinds of performance measures — and environmental factors (like weather) — to predict which horse will come in first, second, third, and maybe even fourth (if you’re betting a superfecta). And, you can win actual money!

I spent most of the day yesterday handicapping for Kentucky Derby 2015, before stopping at the track to place my bets for today. As I was going through the handicapping process, I realized that I’m essentially following the analysis process that we use as Examiners when we review applications for the Malcolm Baldrige National Quality Award (MBNQA). We apply “LeTCI” — pronounced like “let’s see” — to determine whether an organization has constructed a robust, reliable, and relevant assessment program to evaluate their business and their results. (And if they haven’t, LeTCI can provide some guidance on how to continuously improve to get there).

LeTCI stands for “Levels, Trends, Comparisons, and Integration”. In Baldrige parlance, here’s what we mean by each of those:

  • Levels: This refers to categorical or quantitative values that “place or position an organization’s results and performance on a meaningful measurement scale. Performance levels permit evaluation relative to past performance, projections, goals, and appropriate comparisons.” [1] Your measured levels refer to where you’re at now — your current performance. 
  • Trends: These describe the direction and/or rate of your performance improvements, including the slope of the trend data (if appropriate) and the breadth of your performance results. [2] “A minimum of three data points is generally needed to begin to ascertain a trend.” [1]
  • Comparisons: This “refers to establishing the value of results by their relationship to similar or equivalent measures. Comparisons can be made to results of competitors, industry averages, or best-in-class organizations. The maturity of the organization should help determine what comparisons are most relevant.” [1] This also includes performance relative to benchmarks.
  • Integration: This refers to “the extent to which your results measures address important customer, product, market, process, and action plan performance requirements” and “whether your results are harmonized across processes and work units to support organization-wide goals.” [2]

(Quoted sections above come from http://www.dtic.mil/ndia/2008cmmi/Track7/TuesdayPM/7059olson.pdf, Slide 31 [1] and http://www.baldrige21.com/Baldrige%20Scoring%20System.html. [2])
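Three of the four LeTCI elements lend themselves to simple computation for a single metric (Integration is more of a judgment call across many metrics), and the sketch below shows the idea in Python. The metric, its values, and the benchmark are invented for illustration, not Baldrige data:

```python
# Hypothetical monthly on-time-delivery percentages for one work unit.
on_time_pct = [88.0, 89.5, 91.0, 92.5]   # at least three points to begin to see a trend
benchmark = 90.0                          # e.g. a made-up industry average

# Level: where performance stands now (the most recent value).
level = on_time_pct[-1]

# Trend: least-squares slope of the metric over the observed periods.
n = len(on_time_pct)
xs = range(n)
x_mean, y_mean = sum(xs) / n, sum(on_time_pct) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, on_time_pct))
         / sum((x - x_mean) ** 2 for x in xs))

# Comparison: the gap between current performance and the benchmark.
gap = level - benchmark

print(f"level={level}, trend={slope:+.2f}/period, vs benchmark={gap:+.1f}")
```

With these toy numbers the unit sits at 92.5%, is improving by about 1.5 points per period, and runs 2.5 points above the benchmark; the Integration question is then whether stories like this one hang together across all of the organization’s key metrics.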

Here’s a snapshot of my Kentucky Derby handicapping process, using LeTCI. (I also do it for other horse races, but the Derby has got to be one of the most challenging prediction tasks of the year.) Derby prediction is fascinating because all of the horses are excellent, for the most part — and what you’re trying to do is determine on this particular day, against these particular competitors, how likely is a horse to win? Although my handicapping process is much more complex than what I lay out below, this should give you a sense of the process that I use, and how it relates to the Baldrige LeTCI approach:

  • Levels: First, I have to check out the current performance levels of each contender in the Derby. What’s the horse’s current Beyer speed score or Bris score (that is, are they fast enough to win this race)? What are the recent exercise times? If a horse isn’t running 5 furlongs in under a minute, then I wonder (for example) if they can handle the Derby pace. Has this horse raced on this particular track, or with this particular jockey? I can also check out the racing pedigree of the horse through metrics like “dosage”. 
  • Trends: Next, I look at a few key trends. Have the horse’s past races been preparing him for the longer distance of the Derby? Ideally, I want to see that the two prior races were a mile and a sixteenth, and a mile and an eighth. Is their Beyer speed score increasing, at least over the past three races? Depending on the weather for Louisville, has this horse shown a liking for either fast or muddy tracks? Has the horse won a race recently? 
  • Comparisons: Is the horse paired with a jockey he has been successful with in the past? I spend a lot of time comparing the horses to each other as well. A horse doesn’t have to beat track records to win… he just has to beat the other horses. Even a slow horse will win if the other horses are slower. Additionally, you have to compare the horse’s performance to baselines provided by the other horses throughout the duration of the race. Does your horse tend to get out in front, and then burn out? Or does he stalk the other horses and then launch an attack in the end, pulling out in front as a closer? You have to compare the performance of the horse to the performance of the other horses longitudinally  — because the relative performance will change as the race progresses.
  • Integration: What kind of story do all of these metrics tell together? That’s the real trick of handicapping horse races… the part where you have to bring everything together into a cohesive, coherent way. This is also the part where you have to apply intuition. Do I really think this horse is ready to pull off a victory today, at this particular track, against these contenders and embedded in the wild and festive Derby environment (which a horse may not have experienced yet)?
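One crude way to sketch the “bring everything together” step is a weighted score: normalize each metric across the field (since a horse only has to beat the other horses, relative performance is what matters) and combine the pieces with weights. The horses, numbers, and weights below are all hypothetical illustrations; real handicapping is far messier and far more intuitive than this:

```python
# Invented contenders and metrics (higher is better for each).
horses = {
    "Horse A": {"speed": 102, "workout": 0.9, "distance_prep": 1.0, "jockey_win_pct": 0.22},
    "Horse B": {"speed": 98,  "workout": 1.0, "distance_prep": 0.5, "jockey_win_pct": 0.18},
    "Horse C": {"speed": 105, "workout": 0.7, "distance_prep": 1.0, "jockey_win_pct": 0.15},
}

# Made-up weights reflecting how much I trust each metric; they sum to 1.
weights = {"speed": 0.5, "workout": 0.2, "distance_prep": 0.2, "jockey_win_pct": 0.1}

def normalize(metric):
    """Scale one metric to [0, 1] across the field, so differing units can't dominate."""
    vals = [h[metric] for h in horses.values()]
    lo, hi = min(vals), max(vals)
    return {name: (h[metric] - lo) / (hi - lo) for name, h in horses.items()}

norm = {m: normalize(m) for m in weights}
scores = {name: sum(weights[m] * norm[m][name] for m in weights) for name in horses}

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.2f}")
```

Notice that the fastest horse doesn’t automatically win the composite score: a balanced profile can edge out raw speed, which is exactly the kind of judgment the Integration step forces you to make explicit.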

And what does this mean for organizational metrics? To me, it means that when I’m formulating and evaluating business metrics I should take a perspective that’s much more like handicapping a major horse race — because assessing performance is intricately tied to capabilities, context, the environment, and what’s bound to happen now, in the near future.

Google Docs to Markdown Converter: A Gateway Drug to Getting Your Books on LeanPub

Image Credit: Doug Buckley of http://hyperactive.to

Following in the footsteps of fellow ASQ Influential Voice John Hunter (who published Management Matters on LeanPub) — I’ve had the intention for the past couple years to write my next book using LeanPub too.

There’s only one problem: LeanPub requires that you prepare and format your book in Markdown. I know Markdown is not that hard, but to move forward with it, I would have to find at least a couple of days without distractions to get my head into it and start flowing with that approach. With work, my kids’ school schedules, and my travel schedule, this has been darn near impossible.

Until today! I found an article that shows how you can convert Google Docs to Markdown using a simple script.

I haven’t tried it yet, but I am convinced this huge productivity booster will be the gateway drug to getting my books onto LeanPub.

 

Quality of Art & Design in the Digital Age

(Image credit: Doug Buckley of http://hyperactive.to)

In a December article in Wired, John Maeda talks about how the art community’s sensibilities were recently challenged by a decision made by the Museum of Modern Art (MoMA) to include videogames in a new category of art there. Although the examples were acquired on the basis that they demonstrate good interaction design, some art critics claim that videogames are not art – that they do not, as per Jonathan Jones, represent an “act of personal imagination.”

Whereas design is focused on solutions, art (according to Maeda) is focused more on creating questions – “the deep probing of purpose and meaning that sometimes takes us backward and sideways to reveal which way ‘forward’ actually is.” So should artifacts like videogames be accepted into an art collection? The answer, according to Maeda, comes down to how the institution defines quality:

When I was invited to a MoMA Board meeting a couple of years ago to field questions about the future of art with Google Chairman Eric Schmidt, we were asked about how MoMA should make acquisitions in the digital age. Schmidt answered, Graduate-style, with just one word: “quality.”

And that answer has stuck with me even today, because he was absolutely right – quality trumps all, whatever the medium and tools are: paints or pixels, canvas or console.

The problem is that what “quality” represents in the digital age hasn’t been identified much further than heuristic-metrics like company IPOs and the market share of well-designed technology products. It’s even more difficult to describe quality when it comes to something as non-quantitative – and almost entirely qualitative – as art and design.

Last month, I shared what I’ve discovered over the past 7 years, as I’ve aimed to answer the question “What is Quality?” By applying the ISO 9000/Mitra perspective that I described, the MoMA dilemma (and others like it) may be easier to resolve. My approach centers on the ISO 9000 definition that quality is the “totality of characteristics of an entity that bears upon its ability to satisfy stated and implied needs.”

These stated and implied needs translate into quality attributes.

For art, the object of art is the entity. If that art is functional or interactive, then there are stated needs that relate to its ability to function within a given context or towards a given purpose. These may relate to quality attributes like conformance, reliability, or durability. (If the piece is not functional or interactive, then there are quite possibly no stated needs to meet). However, there will always be implied needs which relate to the meaning and purpose of the art; does the object help achieve the goals of art in general, or of the individual interacting with or observing the art?

Similarly, since art is in many ways a personal experience, does the object help the individual by inspiring, connecting, engaging, encouraging, illuminating, clarifying, catalyzing, transforming, or revealing aspects of the self and/or the environment? Does the object stimulate an emotional experience? (Any of these aspects might indicate that the object of art is meeting quality attributes that are related to implied needs.)

A subset of Mitra’s model is relevant to examining the quality of art and design. Note that to assess the quality of an example of art, such as a videogame, we might focus more on the objective quality and the consequences of quality, because the antecedents will be more useful if we are attempting to improve quality over time:

Antecedents of Quality (conditions that must be in place for quality to be achieved): contextual factors (e.g. whether the environment/culture – or enough people within it – are ready to recognize the piece as art), quality improvement process (what mechanisms are in place to continually improve the ability of the artist/team to deliver high-quality work, e.g. practice or evaluating other artwork), and capabilities (whether the artist has the skill to create and share the art).

Objective/Product Quality: This asks “how well does the entity meet the stated and implied needs?” Does it meet all of them, or just some of them, and to what degree or extent?

Consequences of Quality: This is the combined effect of the quality perception process (whether the piece meets each individual’s standards for value) and the broader impacts that the piece has on individuals and/or society in general. Quality perception is, necessarily, an individual process – whereas broader impacts involves factors such as how many people did this piece impact, and to what extent.

So, are videogames art? First, we have to check to make sure they meet their stated needs – and since they were produced and successfully distributed by companies to people who played and enjoyed those games, we can assume that the stated needs were met. So, what are the implied needs of videogames as art? This depends, like many things, on how you select and define those needs. Ultimately, you want to take into account the emotional and transformative impact of the piece on one person, and then across individual and demographic designations to see the impact of the piece within and between social groups.

I was personally inspired to learn more about computer programming before I turned 10 by playing lots and lots of Pac-Man and Space Invaders. I was an empowered fighter in a world of power pellets, ghosts, strawberries, and bananas, and so were lots of my friends. We connected with one another, and with the era in history that is the 1980s, as we do today whenever someone reflects on those games or the arcades in which they were played. Because the games inspired in me an emotional experience, one that today is tinged with nostalgia, I’d say that videogames are just as much art as the beautiful cars of the 1950s that catalyzed the same feelings in people of that generation.

Kudos to MoMA for casting their net wider.

What do you all think? How can we effectively assess the quality of art and design?
