Tag Archives: Software Quality

Software Hell is a Crowded Place

I’ve been thinking a lot about management fads lately, and ran into this 2005 article by Nick Carr, titled “Does Not Compute”. Here’s the part that caught my eye:

“A look at the private sector reveals that software debacles are routine. And the more ambitious the project, the higher the odds of disappointment. It may not be much consolation to taxpayers, but the F.B.I. has a lot of company. Software hell is a very crowded place.”

Carr continues by describing two examples of failed projects: a massive systems integration effort at Ford Motor Company, and an overzealous business intelligence initiative embarked upon by McDonald’s. Both projects were cancelled when the price tags got too big: $200M for Ford, $170M for McDonald’s. The catch is that failure is good, because when we fail we at least know one solution path that’s not workable – we just need to 1) understand that it doesn’t have to be expensive, and 2) have more courage to allow ourselves and our colleagues to fail without getting depressed or thinking our coworkers are idiots. This is often expressed as “fail early, fail often”. (But note that the assumption is that you persist, and as a result of the learning experience, ultimately meet your goals.)

Without an effective team culture, rational managers, healthy relationships with stakeholders, and capable programmers dedicated to continually improving their skills, all roads can lead to software hell. The process of getting there – which is hellish in and of itself – is the famed death march. This is where a software-related project, doomed to fail, sucks up more time, people, resources, and emotional energy at an ever increasing rate until the eventual cataclysm.

Carr also cites the Standish Report, which asserted in 1994 that only 16% of projects were completed on time, on budget, and to specification. By 2003, that figure had grown to 34% in a new survey. The other projects that were still completed ran, on average, 50 percent over budget. (And this is for the survey respondents who were actually telling the truth. I know a few people who wouldn’t admit that their project was quite so grossly over budget.)

One way to solve this problem is by focusing on sufficiency and continuous learning, starting the blueprint for a system based on these questions:

  • What features represent the bare minimum we need to run this system?
  • What are the really critical success factors?
  • What do we know about our specifications now? What do we not know?
  • What do we know about ourselves now? What do we want to learn more about?

Software development is a learning process. It’s a process of learning about the problem we need to solve, the problem domain, and ourselves – our interests and capabilities. It’s a process of recognizing what parts of building the solution we’re really good at, and what parts we’re not so good at. Let’s start small, and grow bigger as we form stronger relationships with the systems that we are developing. Having a $170M appetite sure didn’t get McDonald’s anywhere, at least in this case.

How Usability and (Software) Quality are Related

ISO 9241-11 defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” The four elements that define usability within this context are as follows:

  • the specified users must be explicitly identified,
  • those users’ goals must be explicitly identified,
  • the intended context of use must be identified and understood, and
  • the users must be able to use the system in question to meet those stated goals.

These same four elements are implied by the ISO 8402 definition of quality: stated and implied needs are relative to specific users with specific goals, are dependent upon a context of use, and the entity in question is the system being defined and developed in response.

Usability is the extent, or the degree, to which the above criteria are satisfied. Here’s an example from software development to make this a little more concrete. The software development lifecycle, regardless of which incarnation you’re using (even waterfall), inherently addresses these four elements:

  • the requirements process outlines the specified users, their goals, and the context of use
  • the design process defines a specific technical solution to meet those needs, and
  • the finished product provides evidence that the system can be used to achieve the goals of its users.

As a result, usability can be considered an implicit factor in software quality, ultimately reflecting how well the design process interpreted requirements within a specified context of use.
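To make that mapping a little more concrete, here is a minimal sketch in Python. The names and data are invented for illustration; it shows one way the four elements could be recorded per requirement and checked for gaps before design work begins, not a prescribed implementation.

    from dataclasses import dataclass

    @dataclass
    class UsabilityContext:
        """The four ISO 9241-11 elements for a single requirement (illustrative only)."""
        users: list            # specified users
        goals: list            # specified goals
        context_of_use: str    # intended context of use
        system: str            # the system (or feature) being defined

    def trace_gaps(requirements):
        """Flag requirements where any of the four elements was never specified."""
        return [req_id for req_id, ctx in requirements.items()
                if not (ctx.users and ctx.goals and ctx.context_of_use and ctx.system)]

    # Hypothetical usage: one well-specified requirement, and one that gets flagged.
    reqs = {
        "REQ-001": UsabilityContext(
            users=["lab technician"],
            goals=["upload instrument data in under a minute"],
            context_of_use="shared workstation, gloved hands, time pressure",
            system="data-intake web form",
        ),
        "REQ-002": UsabilityContext(users=[], goals=[], context_of_use="", system=""),
    }
    print(trace_gaps(reqs))  # ['REQ-002']

If a requirement can’t fill in all four elements, the design process has nothing concrete to interpret, which is exactly where usability problems start.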

The Butter Test

I do not want to struggle with my butter. Or my software.

This morning for breakfast, I chose the “nutritious” option of a slice of buttered rye. After the obligatory ninety seconds’ wait, my freshly toasted bread popped out of the toaster. It was hot, with a little steam coming off the sides – pretty attractive for a frigid winter’s morning. But I’d forgotten to take the butter out of the refrigerator earlier – arrhgrhh, I thought, now I’m going to have to wrestle with the cold butter and try to get it to spread on my rapidly cooling toast.

I hate cutting cold butter. The butter itself is unwieldy – it tends to act like an anti-magnet (at least for me), consistently tipping the wrong way, repelling the knife at every attempt. It’s an altogether unpleasant experience, certainly not a complement to a rushed morning where you’re trying to get yourself ready for work and kids ready for school.

But — microwave that butter for about 15 seconds and everything changes. All of a sudden, the butter is at your command. The knife slices through effortlessly, like an airy ballet. The pat of butter, liberated from its stick (!), conforms to the shape of your toast with only a few strokes.

But I spend more time managing software development than I do buttering toast.

So it’s become my opinion that the software you use should feel like cutting the warm butter. But all too often, software feels far more like the cold butter. You try to take a decisive step with your newly coded tool, but the application jumps out from under your control. It falls on the ground, gets dirty, maybe lands face down or picks up some hair or dust, and then it becomes a struggle to work with thereafter. That is, until you’ve had a while to adjust to the software, and it’s had time to adjust to the “room temperature” of the deployed environment and become more malleable. Just as the butter eventually adjusts to a temperate environment, a user will, with extended use, adapt to software and be able to work with it. This is not always due to changes in the software, but to changes in the human-software sociotechnical system – that is, you develop an all-around better relationship with the app and it becomes easier to work with.

Enter the Butter Test.

The Butter Test is the equivalent of the “5-Second” test for user interface navigability – but for software applications, web pages, web applications, APIs, or any other software-related design. (There’s even a web app that helps you conduct a formal 5-Second test.) Whereas the 5-Second test gauges your user’s first impressions when they visit your web page, the Butter Test assesses how malleable the software is upon a user’s first encounter. To do the Butter Test, spend about five minutes with a new application. You don’t have to be alone; you can get a walkthrough from someone who’s more familiar with it. How does it feel? Do you feel like you’re struggling with a cold stick of butter? Does the software respond in a jagged, unpredictable way – forcing you to catch it before it falls on the floor? Or, alternatively, is your first cut at using the software smooth? Do the results feel trustworthy, interpretable, and extendable (meaning you’re left feeling empowered to do more)? In the Butter Test, your first impressions count.

The Butter Test is not just useful for subjectively evaluating full software applications. Today, I used it to determine whether a taxonomy for a directory structure made sense. We needed a file structure for holding different types of data (with different levels of “ease to reproduce”) from different instruments. After learning more about the proposed new structure, I could immediately figure out where new data would go, how we would adapt to novel data types and processed data products, and how to access the data without a priori knowledge of the full directory structure. By learning a few rules, I could work easily with the entire collection of data – and I learned all this in less than five minutes. The new taxonomy passed the Butter Test with flying colors.
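The actual taxonomy isn’t spelled out here, so purely as an illustration, here is a hypothetical set of rules in Python for a structure like the one described, where knowing a few conventions is enough to work out where any file belongs:

    from pathlib import Path

    # Hypothetical taxonomy (assumed for illustration, not the real structure):
    # <root>/<instrument>/<data level>/<YYYY>/<MM>
    LEVELS = {"raw", "calibrated", "derived"}  # roughly ordered by ease of reproduction

    def data_path(root, instrument, level, year, month):
        """Resolve where a file belongs from a few rules, no directory listing needed."""
        if level not in LEVELS:
            raise ValueError("unknown data level: " + level)
        return Path(root) / instrument / level / "{:04d}".format(year) / "{:02d}".format(month)

    print(data_path("/data", "spectrometer-a", "raw", 2010, 2))
    # /data/spectrometer-a/raw/2010/02

A structure that passes the Butter Test tends to look like this: a handful of rules you can hold in your head, rather than a directory listing you have to memorize.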

I’ve been using the Butter Test for about 15 years and it does not disappoint. Trust your instincts. If your software was toast and butter, would you be content or frustrated?

Why Software Reuse is Hard

If you were a software developer in the 1990s or 2000s, software reuse was kind of like the Holy Grail.

You probably thought it was a good idea in theory from the start. Plus, managers were always enthusiastic about wanting to get more of it.

But achieving software reuse in practice was, and still can be, difficult. It’s one thing if you can use really stable, trusted code libraries and treat them as black boxes in your code. Unfortunately this only works if those black boxes don’t run into problems, even when you deploy them on hardware that’s different than what the original writers used. (Good luck.)
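One common mitigation (sketched below in Python with made-up names, not something taken from this post) is to keep the reused library behind a thin adapter, so the rest of the code depends only on an interface you control and the black box can be swapped for a home-grown fallback if it misbehaves in your environment:

    # A sketch of wrapping a reused "black box" behind a thin adapter. The
    # third-party client used here is a hypothetical placeholder, not a real package.

    class DistanceService:
        """The only surface the rest of our code depends on."""
        def kilometers_between(self, a, b):
            raise NotImplementedError

    class ThirdPartyDistanceService(DistanceService):
        """Adapter over the reused library; its quirks stay isolated in this class."""
        def __init__(self, client):
            self._client = client  # instance of the hypothetical third-party client

        def kilometers_between(self, a, b):
            # Translate our simple call into whatever the reused code expects.
            return self._client.great_circle(a, b).km

    class HaversineFallback(DistanceService):
        """Home-grown fallback if the reused code can't be trusted in this deployment."""
        def kilometers_between(self, a, b):
            from math import radians, sin, cos, asin, sqrt
            lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
            h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 2 * 6371.0 * asin(sqrt(h))

    # Calling code never imports the reused library directly:
    print(round(HaversineFallback().kilometers_between((40.7, -74.0), (51.5, -0.1))))

If the black box does run into problems on new hardware, only the adapter has to change; the tacit knowledge its original authors baked into it stays contained.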

Here’s why I think software reuse is difficult.

First, we need to note the difference between explicit knowledge and tacit knowledge. Explicit knowledge is what you get from books, or memos, or hearing a story or experience that someone else shares with you. Tacit knowledge is what you get from experimentation, mentorship, or exploration. (For example, I have no explicit knowledge of the vi editor – but I have extensive tacit knowledge of it, because when I open the editor my fingers know what to do, darting back and forth between ESC and : and other punctuation marks that my brain is not consciously processing. Perl is kind of the same way for me.)

If software is the executable representation of knowledge, then developing software is the process of codifying (literally!) tacit and explicit knowledge to make it executable. That’s a huge challenge that requires the developer to learn, reflect, experiment, learn, reflect, experiment… and so on. It’s a process of learning that ultimately results in software – a tool that hopefully will earn a life of its own, perhaps extending beyond the time that the original developer works on it.

What do you get by reading and exploring someone else’s code? Explicit knowledge. What do you get by writing code yourself, by uncovering each new function one at a time, by realizing you didn’t know what you were doing the first time, by refactoring, by trying out a new design (or maybe two or three), by bouncing ideas off of your colleagues, or by going back to the first design you had a long time ago? That’s tacit knowledge. You need both to write good software.

Unless there’s some way to get that tacit knowledge of someone else’s code, or you don’t need the tacit knowledge anyway (i.e., in the case of the “black box”), successful reuse will be challenging… if not impossible.

(Related Post: Software Reuse Antipatterns)

Software Reuse Antipatterns

In 2000, Scott Ambler wrote an excellent article on the organizational aspects of software reuse. He talked specifically about patterns and antipatterns:

“A pattern is a common approach to solving a problem that is proven to work in practice. Conversely, an antipattern is a common approach to solving a problem that leaves us worse off than when we started.”

Long (2001) built on Ambler’s work and made it more fun. (This is, in my opinion, one of the most interesting and entertaining articles about software reuse in existence. You can get the full text from the ACM if you have an account there.) Long calls antipatterns obvious, but wrong, solutions to recurring problems and characterizes four organizational approaches that don’t support successful software reuse. See if your organization is one of these:

  • Abracadabra Model: A high-level manager is frustrated with a perceived lack of reuse and declares that “reuse will happen”. What Happens: Lots of talk, no action, silo development continues, managers start to panic, then the organization “de-evolves” into the next model.
  • High Noon Model: A high-level manager is REALLY frustrated with a perceived lack of reuse and declares that “reuse will happen”. What Happens: Finger pointing, as everybody has a lot of reasons (many of them very good, and very accurate) about why reuse can’t possibly work. The de-evolution continues.
  • Cost Cutter Model: A high-level manager is REALLY, REALLY frustrated with a perceived lack of reuse and declares that “reuse must happen to cut costs”. What Happens: Software people start to “force” reuse, immediate costs go up, upper management gets nervous, and more finger pointing happens (as everybody finds even more reasons now – including higher cost – for why reuse can’t possibly work.)
  • Used Car Fiasco Model: Software group says “OK, we’ll try reuse.” One group has software it thinks another group can use, so it is made available as-is and with no support. What Happens: There are lots of bugs to fix. Reusers have to fix them because the originators don’t have the time or resources to solve the new group’s problems. The reusers get frustrated and then write the code themselves.

Note that in all models, the expectations and behavior of the managers don’t change. In the fourth model, the behavior of the software developers changes. At no point do the expectations of the software developers change – their mission is to do what they need to do to get the software written.

Tomorrow I’ll write about why I think software reuse is difficult. The antipatterns above provide a good foundation for that discussion.


Ambler, S. (2000). Reuse Patterns and Antipatterns. Dr. Dobb’s Journal, February 1, 2000.
Long, J. (2001). Software Reuse Antipatterns. ACM SIGSOFT Software Engineering Notes, 26(3), 68-76.

Continuous Improvement Begins With Standards

Software is the executable representation of knowledge. [1] As a result, I find that software development provides a fruitful basis for exploring how problem solving is done by diverse team members in a cooperative (or even combative!) context.

Here is one example. In June 1997, Tom Gilb wrote an article for Crosstalk on “Requirements-Driven Management”. He noted that his purpose was to discuss, among other things, “some of the current major problems in systems engineering.” I stumbled upon this article again over the weekend, and it’s still as relevant now as it was a decade ago.

According to Gilb, standards are a prerequisite for systematic continuous improvement: they are the means to achieve an improvement process, not the end. Furthermore, he remarks that Deming’s perspective on continuous improvement establishes, in part, that the reason we should adopt standards is to normalize our project to other projects. Once we do that, we’ve opened the door to industry best practices derived from other projects that also applied the standards – so that we can “clearly see the effect of any changes experimentally introduced into a process and not have to worry too much about other potential factors that impact the results.”

Learning, learning, learning. It’s all about continuous improvement through continuous learning, and in presenting this, Gilb is essentially promoting the same philosophy as Alistair Cockburn’s Cooperative Game Manifesto for Software Development. Both see the learning process as the key to successful software development. (So why do we not focus on this aspect of development more?) Gilb addresses the issue of learning directly:

“The time has come to recognize that projects are so large, so complex, so unpredictable, and so state-of-the-art new that we have no practical alternative except to maximize our learning process during the project and as early as possible in the project life.”

In this article, Gilb also presents “Evo” – short for “Evolutionary Project Management” – which has been used at IBM since 1970. It’s an implementation of Deming’s PDCA (Plan-Do-Check-Act) approach, and the author likens it to Humphrey’s Personal Software Process (PSP).
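For readers unfamiliar with PDCA, here is a toy sketch of the Plan-Do-Check-Act loop in Python. The metric, step functions, and numbers are entirely invented; this illustrates the cycle itself, not how Evo or PSP is actually implemented.

    def pdca(plan, do, check, act, cycles):
        """Run a few Plan-Do-Check-Act iterations against a measurable baseline."""
        baseline = check()                            # measure before changing anything
        for _ in range(cycles):
            change = plan(baseline)                   # Plan: pick one small improvement
            do(change)                                # Do: apply it in a limited scope
            result = check()                          # Check: re-measure
            baseline = act(change, baseline, result)  # Act: keep or roll back, update the baseline

    # Toy example: chasing a lower defect count in each short iteration (invented numbers).
    defects = [12]
    pdca(
        plan=lambda base: "add code review for module X",
        do=lambda change: defects.append(max(defects[-1] - 2, 0)),
        check=lambda: defects[-1],
        act=lambda change, base, result: min(base, result),
        cycles=3,
    )
    print(defects)  # [12, 10, 8, 6]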


[1] This definition is credited to Eric Sessoms, who I consider a true artisan of software development. See some of his work at libraryh3lp.com. He likes the elegance of simplicity in his software.

Does PowerPoint Make You Stupid?

I remember a few years ago hearing about a study that claimed using Microsoft PowerPoint makes you dumb. On the basis that communication can either enhance or hinder quality improvement efforts, I decided to look back today and see a) where that information came from, and b) whether it’s accurate. Given that over 400 million people used PowerPoint in 2003, the number of people who use it (or comparable presentation software, like OpenOffice) today is probably even larger.

A December 14, 2003 article in the New York Times referred to a NASA report which examined the root causes of the Columbia disaster. Among other issues, PowerPoint was implicated:

“It is easy to understand how a senior manager might read this PowerPoint slide and not realize that it addresses a life-threatening situation,” the [NASA] board [reviewing the project] sternly noted.

My advice would be for senior managers preparing these presentations to communicate more deliberately, in words like “THIS IS A LIFE-THREATENING CONDITION!! IMMEDIATE ACTION REQUIRED!!” Unfortunately, that might be perceived as “too easy” or alternatively, the senior managers might not have wanted to admit a problem for fear that they would lose their funding. In any case, respect for human life should come above all, and should certainly be a reason for bare, clear communication – regardless of whether the message is delivered by PowerPoint, in person, or as part of a 40-lb., 500-page treatise.

The same New York Times article references a brochure by Edward Tufte, a well-regarded information theorist who has written a book on effective data visualization and gives seminars across the country on that subject. You can get his brochure, The Cognitive Style of PowerPoint, at Amazon. According to the New York Times:

Ultimately, Tufte concluded, PowerPoint is infused with “an attitude of commercialism that turns everything into a sales pitch.”

I would think that the burden of communication is on the communicator. There are many times where we only have a few minutes or an hour to convey a complex message, and for this, PowerPoint can be effective. However, if there’s a message that cannot be conveyed in simple terms, it’s up to the communicator to say so, and in really simple language, e.g. “this is a grave concern, and you need to review the complete report, now!” Easier said than done, I know.

But a far more complete review of the Tufte brochure at http://contactsheet.org notes that Tufte specifically argues against this position, holding instead that communicators are victims of the product’s lack of user-centered design. Is the criticism of PowerPoint accurate? Possibly – I didn’t read the in-depth study, so I don’t have a reason to believe or disbelieve the causal link between PowerPoint and stupidity. However, the recommendation ignores one critical element: even if the material is comprehensively described in a much larger memo, people may or may not read and comprehend it.

However, let’s say you’re a patient in the hospital facing a life or death diagnosis, and a team of physicians is charged with solving your mystery. Do you want them making a decision based on the PowerPoint version of your case, or on all 800 pages of your medical history? Personally, I’d vote for the latter. But I would also insist that the medical team be given appropriate time to review, internalize, and reflect on the information before making a decision. This is a step that unfortunately has become a luxury in many organizations! Bottom line – the burden still remains with the communicator for now.

Buss (2006) doesn’t argue with the premise, and instead writes about ways to use PowerPoint effectively. His article provides five tips from a professor in the Graduate School of Business at SUNY Albany. Starting with the premise that PowerPoint is ubiquitous in training sessions and presentations, the author first recommends subverting the linear “title and text” format that everyone is accustomed to, because it does not capture people’s attention. Though this point is a sweeping generalization that is not substantiated, one remedy he suggests is to “switch the display order of the presentation. Present supporting data with points on the first slide and show the data and draw the conclusions on the next.” He also suggests that PowerPoint first be used to outline a message, and that a report then be written to expound upon the details, rather than the other way around. Buss also recommends keeping the information per slide short (though he does not suggest a “good” length for training slides), and offers the clichéd guidance that one should not merely read out his or her slides. The best advice comes in the author’s fifth point, where he recognizes that the presentation begins well before you start talking and doesn’t end until your meeting is over. He suggests that the presenter mix with the audience to get a sense of their needs, and target those needs in the spoken presentation.

A related article, discussing Talking Heads vocalist David Byrne’s view of PowerPoint as art, is also entertaining.


Buss, W.C. (2006). Stop death by Powerpoint. Training & Development, March 2006, p. 20-21.
