Category Archives: Measurement

Inspection, Abstraction and Shipping Containers

On my drive home tonight, a giant “Maersk Sealand” branded truck passed me on the highway. It got me thinking about how introducing a standard size and container shape revolutionized the shipping industry and enabled a growing global economy. At least that’s the perspective presented by Marc Levinson in The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger.

A synopsis of the story and a sample chapter are available; Wikipedia’s entry on containerization also presents a narrative describing the development and its impacts.

Here’s how impactlab.com describes it:

Indeed, it is hard to imagine how world trade could have grown so fast—quintupling in the last two decades—without the “intermodal shipping container,” to use the technical term. The invention of a standard-size steel box that can be easily moved from a truck to a ship to a railroad car, without ever passing through human hands, cut down on the work and vastly increased the speed of shipping. It represented an entirely new system, not just a new product. The dark side is that these steel containers are by definition black boxes, invisible to casual inspection, and the more of them authorities open for inspection, the more they undermine the smooth functioning of the system.

Although some people like to debate whether shipping containers were an incremental improvement or a breakthrough innovation, I’d like to note that a single process improvement step generated a multitude of benefits because the inspection step was eliminated. Inspection happened naturally the old way, without planning it explicitly; workers had to unpack all the boxes and crates from one truck and load them onto another truck, or a ship. It would be difficult to overlook a nuclear warhead or a few tons of pot.

To make the system work, the concept of what was being transported was abstracted away from the problem, making the shipping container a black box. If all parties are trustworthy and not using the system for a purpose other than what was intended, this is no problem. But once people start using the system for unintended purposes, everything changes.

This reflects what happens in software development as well: you code an application, abstracting away the complex aspects of the problem and attaching unit tests to those nuggets. You don’t have to inspect the code within the nuggets because either you’ve already fully tested them, or you don’t care – and either way, you don’t expect what’s in the nugget to change. Similarly, the shipping industry did not plan for containers to be used to ship illegal cargo – that wasn’t one of the expectations of what could be within the black box. The lesson (to me)? The degree of abstraction within a system and the level of inspection of that system are related. When your expectations of what constitutes your components change, you need to revisit whether you need inspection (and how much).
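
Here’s a quick sketch of the analogy in Python – the packing function, its capacity limit, and the tests are all invented for illustration. Once the “nugget” passes its tests, you trust its contract and stop looking inside:

```python
# A hypothetical "nugget": pack item weights into containers of fixed capacity.
# Assumes no single item exceeds the capacity.
import unittest

def pack_container(items, capacity=100):
    """Greedily pack item weights into containers that hold at most `capacity`."""
    containers, current, load = [], [], 0
    for w in items:
        if load + w > capacity:          # current container is full; seal it
            containers.append(current)
            current, load = [], 0
        current.append(w)
        load += w
    if current:
        containers.append(current)
    return containers

class PackContainerTest(unittest.TestCase):
    def test_no_container_exceeds_capacity(self):
        for c in pack_container([40, 70, 30, 90, 10], capacity=100):
            self.assertLessEqual(sum(c), 100)

    def test_every_item_ships(self):
        items = [40, 70, 30, 90, 10]
        shipped = sorted(w for c in pack_container(items) for w in c)
        self.assertEqual(shipped, sorted(items))

if __name__ == "__main__":
    unittest.main()
```

If the expectations change – say, items heavier than the capacity start arriving – the tests, like the inspections, have to be revisited.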

What is an Environmental Analysis?

An environmental analysis (or environmental assessment) is a decision-making tool, often applied in technology management to characterize the forces impacting an emerging technology or a new or existing product. An environmental analysis can help you determine the effects of a proposed project or policy, and proactively assess the impacts of a developing or emerging product or discipline. It also provides a really useful structure for learning about an area or theme that is new to you or your company, and for identifying the “state of the art” (e.g. petascale computing, nanotechnology, innovative composite materials).

To conduct an environmental analysis, you should investigate and outline the following (see the sketch after this list):

  • CONTEXT. The technology of interest and the context in which it is (or will be) used
  • CHALLENGES. The challenges that are presently identifiable; what you know, and how it compares and contrasts with the unknowns
  • COMPETITIVE ENVIRONMENT. How the competitive environment impacts the scenario. This can be assessed via SWOT analysis (strengths, weaknesses, opportunities, threats) and/or by examining Porter’s (1980) Five Forces (supplier power, barriers to entry, threat of substitutes, buyer power, degree of rivalry)
  • MACRO ENVIRONMENT. How broader themes influence and affect the scenario (e.g. via PEST analysis – political, economic, socio-cultural, technological impacts)
  • ALTERNATIVES. Examine alternatives to the scenario being evaluated, and investigate what criteria (e.g. values, beliefs, project constraints, technical constraints) might be used to choose between competing alternatives in the future
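
If you want to keep these notes in a structured form, here’s a minimal sketch of one way to capture the five headings above – the dataclasses and the sample entries are hypothetical, not a standard tool:

```python
# A hypothetical skeleton for recording an environmental analysis.
# Field names mirror the five headings above; the example content is invented.
from dataclasses import dataclass, field

@dataclass
class SWOT:
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
    opportunities: list = field(default_factory=list)
    threats: list = field(default_factory=list)

@dataclass
class EnvironmentalAnalysis:
    context: str        # the technology of interest and where it will be used
    challenges: list    # knowns, and how they compare with the unknowns
    competitive: SWOT   # and/or notes from Porter's Five Forces
    macro: dict         # PEST: political, economic, socio-cultural, technological
    alternatives: list  # competing options, plus the criteria for choosing

analysis = EnvironmentalAnalysis(
    context="composite materials for wind turbine blades",
    challenges=["fatigue behavior at scale is an unknown"],
    competitive=SWOT(strengths=["patent portfolio"],
                     threats=["substitute materials"]),
    macro={"political": "renewable energy subsidies",
           "technological": "resin infusion methods"},
    alternatives=["conventional fiberglass"],
)
```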

Where can you get data for an environmental analysis? In addition to searching through resources from newspapers, magazines and trade journals, check the following:

Organisation for Economic Co-operation and Development (OECD)

  • The OECD statistics portal contains international databases on agriculture, education, development, finance, labor, science and technology, energy, globalization, productivity, welfare, and transport
  • Their online library also contains environmental outlooks, news on economic policy reforms, and issues like work/life balance

World Economic Forum Global Competitiveness Report

  • In the Global Competitiveness Index (GCI) issued by the World Economic Forum, which is measured for more than a hundred countries every year, four dimensions of global competitiveness are routinely assessed: institutions, infrastructure, the macroeconomic environment, and health and education.
  • Because technology has the potential to impact productivity at many levels, and because it is embedded in each of these areas, the effects of technological change are implicit in macroeconomic measures of competitiveness.
  • You can learn more about the Global Competitiveness Report on Wikipedia
  • Or use the Analyzer to explore the data

National Science Foundation Solicitations for Research Proposals – The NSF solicitations are an excellent place to learn about the state of the art in various fields. The solicitations explain which topics are most interesting to experts today, and what they are willing to pay to know more about. Often, the solicitations describe the most recent trends, which may be difficult to ascertain from the industry and academic literature.

Google Tracks Spread of Flu

Is the flu spreading across your state? You can find out using Google Flu Trends, which projects the spread of influenza based on how people are using Google to search for health information. Check out the movie illustrating how search data appears to correlate with flu data from the Centers for Disease Control and Prevention (CDC).

The reason this interests me is that Google is using a tracer – examining where searches originate geographically to infer how disease might be spreading. They are not tracking diagnoses or other “hard” data that would confirm the presence of disease; they are simply recognizing that people tend to be more interested in the flu when they’re trying to figure out whether they have it! (The most useful aspect of the search data is that it appears to serve as a leading indicator for the CDC data, which has a two-week lag.)
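
To make the leading-indicator idea concrete, here’s a minimal sketch with made-up data – not Google’s actual method – that estimates the lag at which a “search volume” series best tracks an official case-count series:

```python
# Made-up illustration: find the lag (in weeks) at which search volume
# correlates best with reported cases. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
# A smoothed, synthetic "official case count" series.
cases = np.convolve(rng.poisson(20, weeks), np.ones(5) / 5, mode="same")
# Synthetic searches that lead the case counts by about two weeks.
searches = np.roll(cases, -2) + rng.normal(0, 1, weeks)

def correlation_at_lag(lead, lag_weeks):
    """Correlate the leading series against cases shifted lag_weeks later."""
    if lag_weeks == 0:
        return np.corrcoef(lead, cases)[0, 1]
    return np.corrcoef(lead[:-lag_weeks], cases[lag_weeks:])[0, 1]

best = max(range(6), key=lambda k: correlation_at_lag(searches, k))
print(f"search volume correlates best with cases {best} week(s) later")
```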

Are any companies out there using search patterns on their own websites to infer what consumers or constituents are most interested in at any given time? It would be interesting to see what other “real” things Google search data can serve as a leading indicator for. I could see this as a useful technique for capturing the “voice of the customer” in a novel way.

Election 2008: Struggle Between Tradition and Innovation

Today is Monday, November 3rd. Election Day, when the U.S. picks its 44th President, is less than 24 hours away. And as of Saturday night, just 72 hours before the polls close, 27 MILLION early votes and absentee ballots had already been cast. This represents almost 13% of the total population that’s eligible to vote this year, and 22% of all the people who voted in 2004. (The numbers are from Michael McDonald’s dataset; he is an associate professor specializing in voting behavior. The VEP column in his table represents the voting-eligible population – everyone over 18 who is not in prison, on probation, or on parole.)

Remember, long ago (or maybe more recently) in statistics class, when you learned that you could learn a lot about the properties of a population by taking a random sample? Having approximately 20% of the vote already in, out of a total turnout expected to be between 120 and 150 million, is extremely significant – remember, these are actual votes, not someone’s report of how they may or may not vote “for real”. Assuming that systematic errors have not played a large part in early voting behavior, the winner is already determined – we just don’t know it yet.
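
Here’s the back-of-the-envelope arithmetic behind that claim – a sketch assuming, crucially, that the early votes behave like a random sample (the 52% share below is hypothetical, not a poll result):

```python
# If 27 million votes were a random sample, the sampling error on a
# candidate's share would be vanishingly small. Systematic error - early
# voters self-selecting - is a different story, and is discussed below.
import math

n = 27_000_000    # early votes already cast
p = 0.52          # hypothetical observed share for one candidate

se = math.sqrt(p * (1 - p) / n)   # standard error of a sample proportion
moe = 1.96 * se                   # 95% margin of error

print(f"standard error: {se:.6f}")            # ~0.000096
print(f"95% margin of error: +/- {moe:.4%}")  # ~ +/- 0.019 percentage points
```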

“We dance round in a ring and suppose, but the Secret sits in the middle and knows.” –Robert Frost

However, ignoring systematic error is indeed a significant assumption, one that’s discussed by Peter Norvig, Director of Research at Google, in his excellent explanation of the accuracy of polls. This is why the campaigns are rightly pushing EVERYONE to get out there and vote – to mitigate the impact of systematic errors. (After all, you don’t want to stop voting if the other side keeps voting.) So if you are reading this and you haven’t voted yet, DO IT! Go vote!

I see three potential scenarios:

  • Breakthrough: the decision has already been made, is accurately reflected in the actual sample of early votes, and the votes placed on Tuesday won’t change the pattern at all. The additional votes amount to nothing (other than beating down or insuring against systematic error).
  • Breakdown: a flood of voters overwhelms the capacity of the voting stations, the voting machines just can’t handle it, and the polls close before everyone can get through the door and submit an error-free ballot. I think there might be social unrest if this is the case.
  • Breakout: a single demographic (or two) comes out in droves to vote on Tuesday, breaking out of wherever they’ve been hiding, and shifting the balance of the race in a huge upset. Certainly a possibility.

Whatever happens, the 2008 Election reflects a mythical struggle between structure, order, hierarchy, stability, and tradition on one side, and revolution, dynamism, community, collaboration, and exploration on the other. Each candidate clearly has more experience on one side of that coin, and each is promoting the side where his experience lies. The difference will be which standard each voter measures that experience against!

Why am I interested in all this? First, because polling is measurement, and quality assurance requires effective measurement. But more importantly, because the themes of this election parallel the struggle that many organizations face with quality and innovation – getting the job done reliably is paramount, and experience is important, but we cannot lose sight of the need to reinvent ourselves and our companies to remain competitive. The wilder side, where structures are not sacrosanct and community is more productive than hierarchy, is hard to swallow.

The old methods that tell us how to manage projects, do budgeting, evaluate employees, and manage change are incomplete in such a global, dynamic competitive environment. New organizational models that help us deal with complexity more effectively will be required, but will the 2008 Election usher one into the institution of government?

36 hours from now (hopefully), we’ll know.


Other Resources:

  • Peter Norvig, Director of Research at Google, keeps a 2008 Election site with the most comprehensive collection of data-based reports I’ve encountered
  • CNN’s early voting map shows how many early ballots were cast in each state, and the proportion of Democrats/Republicans voting
  • If the whole world could vote, according to the Economist, the “Global Electoral College” would be stacked.

Eight Ways to Deal with Complexity

There is not one person I know who doesn’t have to deal with complexity in their work or personal lives. Either the subject matter they work with is complex or specialized, the politics are stressful, or there’s just too much information to process and the overload becomes oppressive.

Fortunately, dealing with complexity has been the subject of recent research, and there are some lessons to report. These lessons revolve around the importance of “sensemaking” – a term coined by Karl Weick to reflect the concerted effort made to understand the relationships between people, technologies, places and events, how they behave, and how to respond to their current and future behaviors.

Weick and Sutcliffe (2001) studied the environments of aircraft carriers, nuclear power plants, and fire fighting teams – situations where the stakes are high, and on-the-job errors can be costly and dangerous. These are the workplaces “that require acute mindfulness if they are to avoid situations in which small errors build upon one another to precipitate a catastrophic breakdown.” Snowden (2003), who worked at IBM, said that most environments just weren’t like that – so it would be difficult to generalize how those workers dealt with complexity to the typical office.

Browning & Boudes (2005) compared and contrasted these two studies to try to define some guiding principles for how people make sense of complex scenarios. Here’s a synopsis of what they found:

1: “Acknowledging and accepting complexity is better than placating it with planning models. There are simply too many situations where the standard tools and techniques of policy-making and decision-making do not apply.” The lesson: move away from a “training culture of obedience” and towards a “learning culture of understanding and action”. (This will be a challenge, because it requires humility and trust.)

2: “It is important to acknowledge failure and learn from instances of it.” Self-explanatory, but as above, this requires humility and ego-transcendence. For people to learn from failure, they must first confront it. I can think of some “death march” software projects where failure “never” occurred!

3: “Self-organization is an order that has no hierarchical designer or director.” Browning & Boudes cite Peter Drucker’s idea that “in the Knowledge Economy everyone is a volunteer.” Volunteerism means that everyone is fundamentally qualified to do some part of a project, that roles shift to accommodate existing talents and the development of new talents, and that if a person isn’t working out in one role, they can move to another. In volunteer contexts, leadership is often dynamic: everyone serves for a time as the leader and then moves on.

4: “Narratives are valuable for showing role differentiation and polyvocality.” There are many voices, and many perspectives, and these can be effectively communicated when people relate to one another through stories and examples. Diversity of opinion and distance from a problem can help surface solutions – but the people who know the situation best, and who are closest to it, must be open to the possibilities. (Easier said than done.)

5: “Conformity carries risks, and thus we need diverse inputs when responding to complexity.” The authors suggest that learning should be valued over order and structure. This does not mean that order is unnecessary, but that any order that is established should be viewed as temporary – a framework to serve as the basis for new learning.

6: “Action is valuable under conditions of complexity.” Acting, learning, and adjusting is more effective and more productive than trying to identify the right solution, plan it, and then do it. Action builds identity and creates meaning; “the most useful information is contextual and need-driven.”

7: “The focus is properly on small forces and how they affect complex systems.” The authors suggest that focusing on small wins and keeping things simple is a strategy for success. I love the following story that they describe, so I have to include it here:

    Snowden relates a story of two software development groups – one expert, the other a lesser group – whose experience in programming was limited to the fairly routine requirements of payroll systems. In a competitive exercise between these two groups for learning purposes, the experts created a plan for an elegant piece of code that would take two months to develop. The payroll programmers, meanwhile, downloaded a “good enough” list from the Internet that cost five dollars (Snowden, 1999). Thus one feature for smallness for Snowden is the decisions that can be made that allow the group to move on – to accept “good enough,” implement it, and then see what that action means.

8: “It is important to understand the irony of bureaucratic control.” Producing data and information can be overwhelming; innovative achievements can be suffocated by measurement, evaluation and control. We assume that organizations are deterministic and behavior can be programmed, using carrots and sticks. But people are neither rational nor linear, and this can be both the strength of the organization and its Achilles heel.

New organizational models are needed to follow this advice. So many of the projects I see today are like solving mysteries: you know what needs to happen (because you have requirements documents), there are motivated people all around you who really want the mystery solved, someone is footing the bill (at least for a while), and everyone wants to make progress. But because the process is full of learning new information, finding clues, and relating to people’s needs, it’s impossible to put a timeline on it. This frustrates project managers (and their bosses) immensely, because their jobs revolve around bringing some order to complexity and setting expectations in terms of time and budget.

Can you imagine a crime show like Law & Order starting out with a murder – and then you cut to the scene where the police chief says: “We need to solve this murder immediately… all the clues need to be found by next Friday, all the suspects interviewed by the following Wednesday, and then I want a report on my desk with the solution three weeks from today!”

It sounds funny, but this is exactly what plenty of managers are trying to do right now in their own organizations. The “corporate consciousness” will support this kind of behavior until a rational alternative is found.


Browning, L. & Boudes, T. (2005). The use of narrative to understand and respond to complexity: A comparative analysis of the Cynefin and Weickian models. E:CO, 7(3-4), 32-39.
Snowden, D. J. (1999). Story telling: An old skill in a new context. Business Information Review, 16(1), 30-37.
Weick, K. E. & Sutcliffe, K. M. (2001). Managing the unexpected: Assuring high performance in an age of complexity. San Francisco, CA: Jossey-Bass.

Quality in your Genes

Is your body programmed to get cancer? How about Parkinson’s, or multiple sclerosis, or diabetes? With 23andme.com, you could find out whether you are predisposed to these or 19 other diseases simply by spitting in a cup. In addition, you can easily donate your data to further medical research. These are the same kinds of tests used to diagnose cystic fibrosis and Down’s syndrome prenatally, and they have earned a measure of trust within the medical community. Now, genetic testing is a service offered to just about anyone – even outside the hospital or doctor’s office. (It’s not free, but what’s $399 for a little peace of mind – assuming that’s what you end up with?)

That’s not where the fun ends, though. If you’d like to meet other people who share similar genetic characteristics, no problem – once someone writes a mashup between a social networking site like Facebook and your 23andme genome data, it will be as easy as adding a new tab to “My Networks”.

Diagnosing genetic profiles is a high-stakes game for two reasons: quality and ethics. The quality of the genetic testing procedures, the calibration of the lab instruments, and the handling of the samples are all aspects of the process that must be carefully controlled. Failures can have minor consequences (e.g. a sample is lost and the subject has to spit in a second cup, then wait longer for the results) or major consequences (someone is incorrectly told that they are predisposed to a life-threatening condition and, unable to bear the prospect, commits suicide). Fortunately, quality control is taken very seriously in this domain, as the QC process from Kalman et al. (2005) indicates.
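
As an illustration of the kind of gate such a QC process implies – the sample IDs, genotypes, and logic here are invented, not the GTQC program’s actual procedure – a lab might refuse to release any batch whose control materials don’t genotype exactly as verified:

```python
# Hypothetical batch-level QC gate: verified control materials are run
# alongside patient samples, and results are released only if every
# control genotypes as expected. All IDs and genotypes are invented.
EXPECTED_CONTROLS = {
    "CTRL-01": "AA",
    "CTRL-02": "AG",
    "CTRL-03": "GG",
}

def release_batch(results):
    """Return patient results only if all control samples genotyped correctly."""
    failures = [sid for sid, expected in EXPECTED_CONTROLS.items()
                if results.get(sid) != expected]
    if failures:
        raise ValueError(f"QC failed; re-run batch. Bad controls: {failures}")
    return {sid: g for sid, g in results.items() if sid not in EXPECTED_CONTROLS}

batch = {"CTRL-01": "AA", "CTRL-02": "AG", "CTRL-03": "GG", "PATIENT-7": "AG"}
print(release_batch(batch))   # {'PATIENT-7': 'AG'}
```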

The ethical issues can be a direct result of the capability of the process to produce accurate and reliable results. But is it appropriate to offer this service on such a large scale? What are the consequences of knowing versus not knowing, on both a personal scale and on the scale of society? Without probing these issues in more depth, and understanding how laws and regulations could counterbalance potentially severe social consequences, it’s a risky service to offer. Some people might be able to handle the information; some might not. (Imagine being in high school and getting dumped by someone who checked your genome, found out you’re predisposed to psoriasis, and thinks that’s gross.)

I’m not sure how I would respond personally to having such information, assuming that it was accurate and my sample wasn’t swapped in the lab with someone else’s. But the concept is interesting enough that I just might try their demo and explore the genomes of the Mendels, a hypothetical family.


Kalman, L., Chen, B., & Boone, J. (2005). The genetic testing quality control materials program (GTQC): A sustainable community process to improve availability of appropriate, verified quality control (QC) materials for genetic testing.
Kotler, S. (2008). Spitomics: Spit-in-a-cup genomics has arrived: Now what? Science Progress, 10/28/2008.

What are Standards?

Standards represent “accepted ways of doing things” and can be defined as sets of characteristics or features that describe products, processes, services or concepts (National Research Council, 1995). Alternatively, standards can be described as “rules, regulations, and frameworks intended to enable individuals, groups, organizations, countries and nations to achieve goals” (Spivak & Brenner, 2001). According to these authors, there are five types of standards: 1) physical or measurement standards; 2) standards for symbols, meanings and language; 3) standards for products and processes, including test validation, quality assurance, and operating procedures; 4) standards for systems (including management systems and regulatory systems); and 5) standards for health, safety, the environment and other consumer interests. Standards may be compulsory (as in the case of legal and regulatory standards, such as the Sarbanes-Oxley requirements for financial reporting) or voluntarily adopted (as in the case of product interface standards such as USB for computer peripherals).

Because standards are a mechanism used by social groups to promote norms, and to facilitate the creation of artifacts that advance human capabilities, standards are technologies. (Recall that Random House defines technology as “The sum of the ways in which social groups provide themselves with the material objects of their civilizations.”)

What can standards do for you and your projects? The importance of standards can be explained in terms of the eight goals that standards can be used to achieve (Blind, 2004):

  • Compatibility – facilitates cooperation between people, processes, systems, or technologies
  • Communication – facilitates information flow between people, processes, systems, or technologies
  • Conformance – provides a common basis to establish competency or define excellence
  • Continuous Improvement – helps organizations leverage the “lessons learned” imbued within the standards
  • Establish Order – promotes consistency, reliability, and effective communications
  • Prescribe Behavior – enables organizations to reduce variability between people and systems
  • Promote Social Order – establishes a basis for legal and ethical behavior
  • Establish Shared Meaning – provides a context for communication, compatibility, and conformance

A standard is “working” if it accomplishes one or more of these goals (depending, of course, on how relevant the goals are to the project being pursued). For example, it’s probably not very important for two computer devices to “promote social order” if they need to communicate. But it’s definitely important for people.
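
Here’s a toy sketch of that evaluation – my framing, not Blind’s: weight each goal by its relevance to the project at hand, and call the standard “working” if it achieves at least one goal that actually matters:

```python
# Toy model: a standard "works" if it achieves a goal that is relevant
# to this particular project. Goal names come from the list above;
# the relevance weights are invented.
GOALS = ["compatibility", "communication", "conformance",
         "continuous improvement", "establish order", "prescribe behavior",
         "promote social order", "establish shared meaning"]

def is_working(achieved, relevance):
    """True if the standard achieves at least one goal with nonzero relevance."""
    return any(g in achieved and relevance.get(g, 0) > 0 for g in GOALS)

# Two devices that must interoperate: compatibility matters, social order doesn't.
relevance = {"compatibility": 1.0, "communication": 0.8}
print(is_working({"compatibility"}, relevance))          # True
print(is_working({"promote social order"}, relevance))   # False
```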


Blind, K. (2004). The economics of standards: Theory, evidence, policy. Cheltenham: Edward Elgar.
National Research Council. (1995). Standards, conformity assessment, and trade: Into the 21st century. Washington, DC: National Academy Press.
Spivak, S. M. & Brenner, F. C. (2001). Standardization essentials: Principles and practice. New York: Marcel Dekker.
