Tag Archives: technology assessment

The Rubric as a General Purpose Quality Tool

According to dictionary.com, one of the definitions for rubric is “any established mode of conduct; protocol.” But the context you’ve probably heard this word in is education – where a grading rubric or a scoring rubric is used to evaluate a complex artifact like a student essay.

In my opinion, it’s time to move the concept of the rubric from the classroom into the mainstream, because it can be applied as a very practical general purpose quality tool! (Hear that, Nancy Tague? I think you should write about rubrics in your next edition of the very excellent book The Quality Toolbox. Let me know if you’d like me to help make this happen.)

A rubric is basically a grid with 1) levels of performance indicated along the top row, and 2) criteria or dimensions of performance listed down the leftmost column. Each cell of the grid contains a descriptive statement that explains how the level of performance in that column might be achieved for a specific dimension.

For example, here’s a rubric that one group constructed to evaluate the quality of the mind maps that they were producing. The performance levels are organized from high performance in the top left (smiley face giving a thumbs up) to low performance in the top right (smiley face that looks like he’s about to pass out).

The dimensions of performance are neatness and presentation, use of images/symbols, and use of color. The descriptive statements in each cell provide specific examples of how the performance level might be achieved, e.g. “has failed to include color in the mind map” is an indicator of a low performance level for the dimension of “use of color” – which is very understandable!
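To make the grid structure concrete, here is a minimal sketch of a rubric as a data structure in Python, using the three dimensions from the mind-map example. The numeric levels and most of the descriptors are hypothetical (only the "has failed to include color" descriptor comes from the example above):

```python
# A rubric as a nested mapping: dimension -> performance level -> descriptor.
# Levels and most descriptors here are hypothetical illustrations.
RUBRIC = {
    "use of color": {
        3: "uses color deliberately to group and emphasize ideas",
        2: "includes some color, but not consistently",
        1: "has failed to include color in the mind map",
    },
    "use of images/symbols": {
        3: "images and symbols reinforce nearly every branch",
        2: "a few images or symbols are used",
        1: "no images or symbols are used",
    },
    "neatness and presentation": {
        3: "layout is clean and easy to follow",
        2: "layout is readable but cluttered in places",
        1: "layout is difficult to follow",
    },
}

def score(ratings):
    """Validate per-dimension ratings against the rubric, then total them."""
    for dim, level in ratings.items():
        if dim not in RUBRIC or level not in RUBRIC[dim]:
            raise ValueError(f"invalid rating: {dim}={level}")
    return sum(ratings.values())

ratings = {"use of color": 1, "use of images/symbols": 2,
           "neatness and presentation": 3}
print(score(ratings))  # 6
```

The nested-mapping shape mirrors the grid directly: looking up `RUBRIC[dimension][level]` retrieves the descriptive statement in that cell, which is exactly what an evaluator does when justifying a rating.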

The concept of the rubric as a performance assessment tool is relatively new! Griffin (2009), in a brief history of the rubric, notes that since its introduction in 1981, “the scoring rubric has evolved into a more precise, technical, scientific-looking document. It carries a tone of certainty, authority, and exactitude.” However, she notes, the utility of a rubric will depend upon the thought and consideration that goes into its construction. “A rubric is a product of many minds working collaboratively to create new knowledge. It will, almost by definition, be more thoughtful, valid, unbiased and useful than any one of us could have conceived of being as we worked in isolation.”

Advantages of applying a well-developed rubric include:

  • Provides a common language for sharing expectations and feedback
  • Helps to clarify and distinguish the differences between various performance levels
  • Helps to focus an individual or group’s attention on relevant aspects of each desired quality characteristic or skill area
  • Provides a mechanism to more easily identify strengths and opportunities for improvement
  • Helps lend objectivity to an evaluation process that might otherwise be subjective


Disadvantages and limitations include:

  • Different rubrics may need to be devised for each of the different activities or artifacts to be evaluated
  • Not all evaluators will apply the rubric in exactly the same way – there is a subjective element at work here – so people may need to be trained in the use of a rubric, or it may be more effective to apply the rubric in a group consensus context where inter-rater variability can be discussed and resolved interactively
  • Creating a rubric can be time consuming
  • The rubric may limit exploration of solutions or modes of presentation that do not conform to the rubric
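Inter-rater variability can also be quantified before deciding whether rater training or a consensus session is needed. Here is a minimal sketch that computes percent agreement and Cohen's kappa (a standard chance-corrected agreement statistic) for two raters; the ratings themselves are hypothetical:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which two raters gave the same level."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(a)
    p_o = percent_agreement(a, b)          # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if both raters assigned levels at random
    # according to their own marginal frequencies.
    p_e = sum(ca[c] * cb[c] for c in ca.keys() | cb.keys()) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical levels (1 = low, 3 = high) assigned by two raters
# to eight artifacts scored against the same rubric.
rater_a = [3, 2, 3, 1, 2, 2, 3, 1]
rater_b = [3, 2, 2, 1, 2, 3, 3, 1]
print(percent_agreement(rater_a, rater_b))  # 0.75
```

A low kappa relative to raw agreement is a signal that raters are agreeing mostly by chance, which is exactly the situation where training or a group consensus review pays off.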

Using Rubrics for Quality Improvement

Rubrics are already applied in the world of quality, although I’ve never heard them go by that name. The process scoring guidelines for the Baldrige Criteria are essentially rubrics (although the extra dimensions of ADLI and LeTCI have to be considered in the mind of the examiner). The International Team Excellence Award (ITEA) criteria in the Team Excellence Framework (TEF) also form a rubric, in conjunction with the performance levels of missing, unclear, meets expectations, and exceeds expectations.

I see a lot of ways in which rubrics can be developed and applied in the quality community to help us establish best practices for some of our most common project artifacts, such as Project Charters. Nancy Tague includes a Project Charter Checklist in The Quality Toolbox to help us create better and more complete charters… but what if we added a second dimension, which includes performance levels, and turned this checklist into a rubric? Any checklist could be transformed into a rubric. Furthermore, to develop a good rubric, we can brainstorm and rank all of the potential criteria in the left-hand column, using a Pareto chart to separate the vital few criteria from the trivial many.
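That Pareto ranking can be done with a simple cumulative-percentage cut-off. A minimal sketch, where the candidate Project Charter criteria and their brainstorming vote counts are entirely hypothetical:

```python
# Hypothetical vote counts from a brainstorming session on which
# criteria matter most in a Project Charter rubric.
votes = {
    "clear problem statement": 14,
    "measurable goals": 11,
    "defined scope": 9,
    "named sponsor": 3,
    "milestone schedule": 2,
    "risk list": 1,
}

total = sum(votes.values())
cumulative = 0
vital_few = []
# Walk the criteria from most to least voted, keeping those that
# together account for the first ~80% of votes (the classic Pareto cut).
for criterion, count in sorted(votes.items(), key=lambda kv: -kv[1]):
    cumulative += count
    vital_few.append(criterion)
    if cumulative / total >= 0.8:
        break

print(vital_few)
```

The criteria that survive the cut become the rows of the rubric; the trivial many can be parked for a later revision rather than cluttering the grid.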

Are any of you already using rubrics for purposes outside training or education? I would love to start a list of resources to share with the quality community.

Reference: Griffin, M. (2009). What is a rubric? Assessment Update, 21(6), Nov/Dec 2009.

Note: There is a comprehensive site containing many examples of rubrics at http://www.web.virginia.edu/iaas/assess/tools/rubrics.shtm – however, they won’t open in Google Chrome.

Technology Assessment from the Jetsons Perspective

My almost-4-year-old and I watched “The Jetsons” together today. In this episode, Elroy started out by solving an incomprehensible math problem on the blackboard. Grade school, in the Jetsons future, was apparently much more advanced than today’s! After Elroy solved the problem, he returned to his seat, where the boy next to him (the class clown) was shown distracting the other kids, making rude comments, and watching a rerun of “The Flintstones” on his hand-held mobile device.

It was almost 10 minutes later when I realized – “Hold on! They didn’t have iPods or iPhones back in the ’60s when this show was made!”

It didn’t dawn on me until much later that the blackboard Elroy used was hopelessly antiquated. If the animators could envision our ubiquitous mobile devices, even without the wealth of information they can access through the Internet, why not networked communications? Why not the very simple whiteboard?

Liebenau (2007), in a study by the London School of Economics intended to identify better ways to prioritize emerging technologies in the UK between 2007 and 2017, captured the problem of technology assessment in The Jetsons as part of his work. For example, he notes that both the Flintstones and the Jetsons portrayed suburban life from the social, cultural and moral perspective of the typical 1960’s American family – only the technologies were different. The most critical variables were unchanged:

They left alone most of the really interesting things: social and interpersonal life; spatial and temporal relations (even though the Jetsons used rockets they still travelled around the strict equivalent of their own neighbourhood); and work. They only tinkered with shelter, sustenance and security (everybody was always safe, although they occasionally crashed their stone-wheel cars and personal rockets). Where their imagination ranged more widely was with communication (where the Flintstones used a squawking bird for a factory whistle and the Jetsons used wrist-watch telephones – an image already common in futuristic comics of the 1920s like Dick Tracey). They probably meant to portray significant differences with regard to food, although that was less well thought out since the Flintstones merely ate huge joints of meat and the Jetsons ate processed gloop excreted from kitchen machines. They also had some imaginative notions of play, but both families had leisure time (and tastes) typical of aspirations of the American lower middle-class suburbanites they were… Whereas they captured the idea of the transformation of food, they did not imagine the associated social and commercial context brought about by fast food take-aways and eating at shopping malls.

Nor did they capture changes in the form and function of work – George labors at Spacely Sprockets, a manufacturing firm, during “normal business hours” each day and kicks off his shoes when he gets home, greeted by a dinner made by Rosie, the robot helper. Fred heads to Slate Enterprises to operate the “heavy equipment” (dinosaurs) until the bird-whistle blows at the end of the day and he can go home. Their relationships with Mr. Spacely and Mr. Slate are identical, and mirror the hierarchical structures of the predictable, scientifically managed organization. Although George Jetson is often seen using a videoconferencing facility like Skype, he is never seen using it to attend a meeting at work or to otherwise get something done asynchronously.

The reason I find this scenario interesting is that it highlights the challenges we face today when we assess the implementation of a particular technology in a specific context, or when we attempt to gauge the impact of emerging technologies on people, on the market, on relationships, or on society as a whole. We change our technologies, and then those technologies change and shape us, all of it continuously influenced by the social, economic and cultural context.

The ITEA Criteria for Software Process & Performance Improvement

(I originally wrote this article for the ASQ Software Division Newsletter compiled in the first quarter of 2009. I’m reproducing it here because I’ve found the ITEA criteria to be remarkably useful for all kinds of planning since I was introduced to it last year.)

For software professionals, particularly those of us who manage product development or development teams, it is important to track progress towards our goals and to justify the results of our efforts. We have to write effective project charters for software development just to get things moving, evaluate improvement alternatives before making an investment of time and effort in a process change, and ultimately validate the effectiveness of what we have implemented.

This past fall, I had the opportunity to serve as a preliminary round judge for the ASQ International Team Excellence Award (ITEA). My subgroup of judges met at the Bank of America training facility in Charlotte, North Carolina, where we split up into teams to evaluate almost 20 project portfolios. A handful of other events just like ours were held at the same time across the country, giving many people the opportunity to train and serve as judges. Before we evaluated the portfolios, we were all trained on how to use and understand the ITEA criteria, a 37-point system for assessing how well a project had established and managed to its own internal quality system. The ITEA criteria can be applied to any development project or process improvement initiative in the same way that the Baldrige criteria might be applied to an organization’s strategic efforts. For software, this might include improving the internal processes of a software development team, using software improvements and automation to streamline a production or service process, and improving the performance or quality of a software product. (For example, I can envision the ITEA criteria being used to evaluate the benefits of parallelizing all or part of a software system to achieve a tenfold or hundredfold performance improvement.)

You can review these criteria yourself on the web at http://wcqi.asq.org/2008/pdf/criteria-detail.pdf. There are five main categories in the ITEA criteria: project selection and purpose, the current situation (prior to improvement), solution development (and evaluation of alternatives), project implementation and results, and team management and project presentation. An important distinction is in the use of the words Identify/Indicate, Describe and Explain within the criteria. To identify or indicate means that you have enumerated the results of brainstorming or analysis, which can often be achieved using a simple list of bullet points. To describe means that you have explained what you mean by each of these points. To explain means that you have fully discussed not only the subject addressed by one of the 37 points, but also your rationale for whatever decisions were made.

Sustainability of the improvements that a project makes is also a major component of the ITEA criteria. Once your project is complete, how will you ensure that the benefits you provided are continued? How can you make sure that a new process you developed will actually be followed? Do you have the resources and capabilities to maintain the new state of the system and/or process?

The ITEA criteria can serve as a useful checklist to make sure you’ve covered all of the bases for your software development or process improvement project. I encourage you to review the criteria and see how they can be useful to your work.

What is Socio-Technical Design?

Tom Erickson, in his introduction to one of the sections in the forthcoming Handbook of Research on Socio-Technical Design and Social Networking Systems, explains socio-technical design well:

“Socio-technical design is not just about designing things, it is about designing things that participate in complex systems that have both social and technical aspects. Furthermore, these systems and the activities they support are distributed across time and space. One consequence of this is that the systems that are the sites for which we are designing are in constant flux. And even if we were to ignore the flux, the distributed nature of the systems means that they surface in different contexts, and are used by different people for different (and sometimes conflicting) purposes.”

Socio-technical design is, understandably, related to sociotechnology. There is much work to be done to develop the processes and techniques that will be required to manage quality and continuous improvement in the context of socio-technical design!

Questions for a Technology Assessment

If you’re already familiar with what a technology assessment is all about, here are some examples of questions you can ask to help form ideas to shape your analysis:

  • Cultural/Social Context. How does technology change the way we view ourselves in the historical context? How does technology change the way we interact with one another? Science fiction provides a great source of material here, since so many stories focus on the thoughts, emotions and transformation of characters impacted by fictional technologies in ordinary social contexts. (Landon 1997) Thinking about these issues is not limited to science fiction, but is also the domain of mainstream science. For example, when the first visionary ideas of nanotechnology were conceived, its possible cultural and social impacts were discussed and debated. (Drexler 1986)
  • Legal/Policy. Should scientists be prohibited from doing research that might benefit terrorists? Should life forms be patented and owned? Should cloning be banned? What is appropriate in the sense that values are honored and protected? What are the environmental and health impacts of our technology use choices, and how should laws be set in place to help us preserve our surroundings and way of life – or better yet, enhance our environment and improve the quality of life for many?
  • Moral/Ethical. Are scientists or CEOs “playing god” with a technology? How much advancement are we comfortable with, and how much should we be comfortable with? A moral and ethical analysis concerns the purpose for which the technology will be used, and how appropriate that purpose is, given the value systems active within a society. Realists will weigh the pros and cons of a situation; idealists may consider one con to be so destructive that a technology will be deemed unethical. Technology has potential to transform the way we live, the way we think, our perceptions, values, capabilities and social relations.
  • Economic. Politicians are concerned with economics, business and the law. According to Rodemeyer, “scientific and technical knowledge is rarely sought for its own sake, but rather to support policy ends.” Introduction of new technologies can cause job loss by wiping out the need for certain functions. Wealth and health can increase or decrease as the result of technology introductions.
  • Environmental/Health. How does a technology impact the environment, the health of a population, or the ability to deliver health care? Rodemeyer mentions that people are often not willing to make trade-offs. They want the convenience of air travel, but are unhappy with the environmental impacts, sound pollution, and so forth. They are unhappy with the proliferation of landfills and the destruction of the land by trash, but are sometimes unwilling to purchase less pre-packaged foods, or take the time to recycle.
  • Workforce Education & Training. As technologies are created and diffuse into general use, the need arises for people to be trained in the use of these advancements. Much like an invention without a context of use cannot be considered an innovation, an innovation without a plan to be leveraged by society will not achieve its potential.

Drexler, K.E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Press, Doubleday.
Landon, B. (1997). Science Fiction After 1900: From the Steam Man to the Stars. New York: Twayne.
Rodemeyer, M., Sarewitz, D. & Wilsdon, J. (2005). The future of technology assessment. Washington, DC: Woodrow Wilson International Center for Scholars. Retrieved on Nov 17, 2007 from http://www.wilsoncenter.org/topics/docs/techassessment.pdf

How do you conduct a Technology Assessment?

Technology assessment is the process of exploring the impacts of a new technology on people, social and governmental structures, and societies. Together, a technology assessment and environmental analysis can provide useful inputs into how a company or organization can develop a strong strategy. The acronym I use to remind me how to do a technology assessment is VIMP-SPC.

  • First, explore and understand VIMP: the values, interests, motives, and perspectives of the people who will be making the ultimate decisions regarding how this technology will be used, regulated, traded, and continually improved.
  • Determine how this assessment of VIMP relates to SPC: the socioeconomic, political, and cultural environment. Consulting a previously completed environmental analysis may be useful here.
  • Determine how the products of scientific and technological advancement – both basic research and applied R&D – interact with these forces to create social outcomes.

As new technologies are developed, and as existing technologies converge and coalesce into new capabilities for humanity, the “complexity and range of social, ethical and legal issues are likely to expand, not contract.” (Rodemeyer 2005) These effects can be either positive or negative, or a mix of both, or the effects may shift between the two extremes in response to other changes in the environment.

Drexler, K.E. (1986). Engines of Creation: The Coming Era of Nanotechnology, New York: Anchor Press, Doubleday.
Landon, B. (1997). Science Fiction After 1900: From the Steam Man to the Stars, New York: Twayne.
Rodemeyer, M., Sarewitz, D. & Wilsdon, J. (2005). The future of technology assessment. Washington, DC: Woodrow Wilson International Center for Scholars. Retrieved on Nov 17, 2007 from http://www.wilsoncenter.org/topics/docs/techassessment.pdf