Category Archives: Software Quality

Quality of an Interactive System

Today, I spent some time in a remote visualization tutorial presented by John Clyne of NCAR. He referenced a 2005 answer to the question “What is meant by interactive analysis?” by Mark Rast of the University of Colorado:

Definition: A system is interactive if the time between a user event and [the system’s] response to that event is short enough to maintain my full attention.

If the response time is…

1-5 seconds: I’m engaged
5-60 seconds: I’m tapping my foot
1-3 minutes: I’m reading email
>3 minutes: I’ve forgotten why I asked the question!

I liked this because it defines a quality attribute: a high-quality interactive system maintains the user’s attention.
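Rast's thresholds lend themselves to a quick sketch. Here's a minimal Python classifier of user attention by response time; the function name and labels are my own paraphrase, not from the talk:

```python
def engagement_level(response_seconds: float) -> str:
    """Map a system's response time to Rast's description of user attention."""
    if response_seconds <= 5:
        return "engaged"
    elif response_seconds <= 60:
        return "tapping my foot"
    elif response_seconds <= 180:  # 1-3 minutes
        return "reading email"
    else:
        return "forgotten the question"

print(engagement_level(3))    # engaged
print(engagement_level(120))  # reading email
```

By this definition, anything over a few minutes isn't an interactive system at all, because the user's attention is already gone.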

Not Invented Here

If you are part of the software development world, no doubt you are familiar with “not invented here” (NIH) syndrome. It is the scourge of software development culture: the unfortunate tendency within a group of software-minded people to attribute value to code that members of the group (or the group itself) have written, while devaluing code, modules, or COTS packages that were not written by members of the group.

“Not invented here” is so prominent that it has a Wikipedia entry, with text assuring us that this tendency is indeed a facet of many a social, corporate, or institutional culture. Bloggers and even the Harvard Business Review have touted its benefits, suggesting that this cultural characteristic may catalyze innovation.

Today, I attended a meeting where I had an even bigger revelation about NIH. About 10 people attended, and we talked about how to search a large archive of metadata across multiple data sources. Attendees spoke of the problem as something that really needs to be done, as something that our organization really needs to spend time on – and we need resources to do it. There’s only one problem with this picture – a couple of people within the organization have been working on this problem for the past 18 months, have produced a prototype that’s consistently getting about 100 hits a day (which is substantial given the problem domain), and have received positive reviews and helpful suggestions for moving forward from the user community. The releases have been published in inter-organizational emails, the company newsletter, and other venues where it would be very easy for everyone in this meeting to have learned of the new functionality and used it. But apparently no one has bothered to pay attention!

The moral of the story: when an NIH culture is observed, perhaps the resources and opportunities available to a group or an organization are truly invisible to the people who need them. People cannot see the opportunities because they are not looking; they are not paying attention.

Is paying attention to opportunities a value within your software development organization? It requires conscious effort.

The ITEA Criteria for Software Process & Performance Improvement

(I originally wrote this article for the ASQ Software Division Newsletter compiled in the first quarter of 2009. I’m reproducing it here because I’ve found the ITEA criteria to be remarkably useful for all kinds of planning since I was introduced to them last year.)

For software professionals, particularly those of us who manage product development or development teams, it is important to track progress towards our goals and to justify the results of our efforts. We have to write effective project charters for software development just to get things moving, evaluate improvement alternatives before making an investment of time and effort in a process change, and ultimately validate the effectiveness of what we have implemented.

This past fall, I had the opportunity to serve as a preliminary round judge for the ASQ International Team Excellence Award (ITEA). My subgroup of judges met at the Bank of America training facility in Charlotte, North Carolina, where we split up into teams to evaluate almost 20 project portfolios. A handful of other events just like ours were held at the same time across the country, giving many people the opportunity to train and serve as judges. Before we evaluated the portfolios, we were all trained on how to use and understand the ITEA criteria, a 37-point system for assessing how well a project had established and managed to its own internal quality system. The ITEA criteria can be applied to any development project or process improvement initiative in the same way that the Baldrige criteria might be applied to an organization’s strategic efforts. For software, this might include improving the internal processes of a software development team, using software improvements and automation to streamline a production or service process, and improving the performance or quality of a software product. (For example, I can envision the ITEA criteria being used to evaluate the benefits of parallelizing all or part of a software system to achieve a tenfold or hundredfold performance improvement.)

You can review the criteria yourself on the web at http://wcqi.asq.org/2008/pdf/criteria-detail.pdf. There are five main categories in the ITEA criteria: project selection and purpose, the current situation (prior to improvement), solution development (and evaluation of alternatives), project implementation and results, and team management and project presentation. An important distinction lies in the use of the words Identify/Indicate, Describe, and Explain within the criteria. To identify or indicate means that you have enumerated the results of brainstorming or analysis, which can often be achieved using a simple list of bullet points. To describe means that you have explained what you mean by each of these points. To explain means that you have fully discussed not only the subject addressed by one of the 37 points, but also your rationale for whatever decisions were made.

Sustainability of the improvements that a project makes is also a major component of the ITEA criteria. Once your project is complete, how will you ensure that the benefits you provided are continued? How can you make sure that a new process you developed will actually be followed? Do you have the resources and capabilities to maintain the new state of the system and/or process?

The ITEA criteria can serve as a useful checklist to make sure you’ve covered all of the bases for your software development or process improvement project. I encourage you to review the criteria and see how they can be useful to your work.

Quality is Better When You Feel Good

How you perceive quality is influenced by your expectations. And sometimes, your expectations are subconscious or emotionally driven.

For example, a product may have all the features you, as a consumer, could possibly want and need – and it might perform well too! But it still might not satisfy everyone, or generate the magnitude of sales that were originally projected. How could this be?

Understanding the psychology of quality and value, based on affect, provides insight into how this can happen. Merriam Webster’s Medical Dictionary defines affect as “the conscious subjective aspect of an emotion considered apart from bodily changes.” In short, affect describes how something makes you feel. For example, working on a task that you really enjoy promotes positive affect. Spending time with “de-energizers” who are negative, critical, and generally unhappy can create negative affect.

Research in psychology indicates that positive affect corresponds with the ability to solve problems more readily and effectively, while negative affect can impede problem solving, even for simple tasks. As a result, usability can be considered a function of the positive or negative affect that is generated when a user interacts with a product. This applies to all products, including software and web-based applications.

These studies also suggest that effective design promotes positive affect – meaning that the quality and value perceived before use more closely match the quality and value that will actually be experienced after use. Aesthetics thus play a role in promoting positive affect. As interpreted by Don Norman (2004) in Emotional Design, where many of the aforementioned studies are referenced,

“the emotional system changes how the cognitive system operates… [it is] easier for people to find solutions to the problems they encounter… [the] tendency to repeat the same operation over again is especially likely for those who are anxious or tense.”

An entertaining example is the ATM case, which I’ll write about tomorrow.

Software Hell is a Crowded Place

I’ve been thinking a lot about management fads lately, and ran into this 2005 article by Nick Carr, titled “Does Not Compute”. Here’s the part that caught my eye:

“A look at the private sector reveals that software debacles are routine. And the more ambitious the project, the higher the odds of disappointment. It may not be much consolation to taxpayers, but the F.B.I. has a lot of company. Software hell is a very crowded place.”

Carr continues by describing two examples of failed projects: a massive systems integration effort at Ford Motor Company, and an overzealous business intelligence initiative embarked upon by McDonald’s. Both projects were cancelled when the price tags got too big: $200M for Ford, $170M for McDonald’s. The catch is that failure can be good, because when we fail we at least learn one solution path that’s not workable – we just need to 1) understand that failure doesn’t have to be expensive, and 2) have more courage to allow ourselves and our colleagues to fail without getting depressed or thinking our coworkers are idiots. This is often expressed as “fail early, fail often.” (But note that the assumption is that you persist, and as a result of the learning experience, ultimately meet your goals.)

Without an effective team culture, rational managers, healthy relationships with stakeholders, and capable programmers dedicated to continually improving their skills, all roads can lead to software hell. The process of getting there – which is hellish in and of itself – is the famed death march. This is where a software-related project, doomed to fail, sucks up more time, people, resources, and emotional energy at an ever increasing rate until the eventual cataclysm.

Carr also cites the Standish Report, which in 1994 asserted that only 16% of projects were completed on time, on budget, and meeting specifications. By 2003, that percentage had grown to 34% in a new survey. Other projects that were still completed ran, on average, 50 percent over budget. (And this is just for the survey respondents who were actually telling the truth. I know a few people who wouldn’t admit that their project was quite so grossly over budget.)

One way to solve this problem is by focusing on sufficiency and continuous learning, starting the blueprint for a system based on these questions:

  • What features represent the bare minimum we need to run this system?
  • What are the really critical success factors?
  • What do we know about our specifications now? What do we not know?
  • What do we know about ourselves now? What do we want to learn more about?

Software development is a learning process. It’s a process of learning about the problem we need to solve, the problem domain, and ourselves – our interests and capabilities. It’s a process of recognizing what parts of building the solution we’re really good at, and what parts we’re not so good at. Let’s start small, and grow bigger as we form stronger relationships with the systems that we are developing. Having a $170M appetite sure didn’t get McDonald’s anywhere, at least in this case.

How Usability and (Software) Quality are Related

ISO 9241-11 defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” The four elements that define usability within this context are as follows:

  • the users must be explicitly identified,
  • the users’ goals must be explicitly identified,
  • the intended context of use must be identified and understood, and
  • the user must be able to use the system in question to meet those stated goals.

These same four elements are implied by the ISO 8402 definition of quality: stated and implied needs are relative to specific users with specific goals, are dependent upon a context of use, and the entity in question is the system being defined and developed in response.
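For illustration, the four elements could be captured in a small data structure that must be fully populated before usability can even be assessed. This is a sketch of my own – the class and field names are not from either standard:

```python
from dataclasses import dataclass

@dataclass
class UsabilityContext:
    """The four elements implied by the ISO 9241-11 definition of usability."""
    users: list           # the specified users
    goals: list           # the users' specified goals
    context_of_use: str   # the intended context of use
    system: str           # the system (entity) being evaluated

    def is_complete(self) -> bool:
        # All four elements must be identified before usability is meaningful.
        return bool(self.users and self.goals and self.context_of_use and self.system)

ctx = UsabilityContext(
    users=["data analysts"],
    goals=["search archive metadata"],
    context_of_use="internal web portal",
    system="metadata search prototype",
)
print(ctx.is_complete())  # True
```

The point of the sketch is simply that usability is undefined when any one of the four elements is missing – "usable" for whom, to do what, and where?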

Usability is the extent, or the degree, to which the above criteria are satisfied. Here’s an example from software development to make this a little more concrete. The software development lifecycle, regardless of which incarnation you’re using (even waterfall), inherently addresses these elements:

  • the requirements process outlines the specified users, their goals, and the context of use
  • the design process defines a specific technical solution to meet those needs, and
  • the finished product provides evidence that the system can be used to achieve the goals of its users.

As a result, usability can be considered an implicit factor in software quality, ultimately reflecting how well the design process interpreted requirements within a specified context of use.

Why Software Reuse is Hard – More Perspectives

A couple of weeks ago I presented my perspective on Why Software Reuse is Hard. I also posted a link on LinkedIn, and wanted to capture the comments that were received there. I think it’s interesting to note how many people support the concept of continuous learning for competitive advantage in software development – without actually saying it!

Mike Duskis – Software Quality Lead at CaridianBCT says:

I like the distinction between “black box” code that we can simply plug in on one hand and code that we need to understand “tacitly” on the other. I think that, deep down, most of us software geeks would love to experiment with the innards of every interesting gadget that we can get our hands on. That’s what makes us geeks.

In the real world, we can’t let our geek nature control us. We also need to be pragmatic engineers, producing optimal solutions with minimal waste. This is what drives us to adopt code written by someone else.

But the issue cannot be boiled down to a primal conflict between the inner geek and the outer engineer. As Nicole’s headline implies, software reuse is hard. Even after we make the decision to reuse existing code, we face all sorts of obstacles. There is a whole lot of lousy code floating around out there. In fact, I would argue that most code available for reuse is poorly designed, badly documented, and ripe for mishap or exploit. Doubters might tour a few random projects at sourceforge.net. The good ones are very good, but they are vastly outnumbered. The only way to know for sure that a component is not lousy is to tear it open and examine its guts, but doing so would expend the very resources that we are trying to save. Even if we assume that the code is not lousy outright, its developer was probably motivated by different problems and circumstances than we face. Ironically, the more efficient the original developer was, the less likely it is that he produced something that can simply be plugged in to a completely different environment.

Over the years, languages, patterns, and paradigms have evolved to help us to resolve these difficulties, but there is always a cost. Java almost forces us to produce reusable components but at the cost of a cumbersome paradigm that encourages over-engineering. At the other end of the spectrum, Perl and Python are easy to use. They also encourage the development of quick-and-dirty code that can be very hard to reuse. I suspect I’ll get some flak for this, but I’ve struggled with both extremes enough to develop “tacit” knowledge of them. If there is a happy medium, I think it lies in a way of thinking rather than a language, but every step that we take from the extreme represented by orthodox JEE (with beans) is a step away from the “black box” that allows us to simply plug a component in and forget about it.

On balance, I think that reuse is not only desirable but necessary. Who has time to design new systems from the operating system on up to the code that actually solves our business problem? The trick is figuring out how to do it well.

Greg Zimmerman, Owner, Applied Quality Solutions LLC & Regional Councilor, ASQ Software Division says:

There’s no magic way to make reuse easier, but you can attempt to create a normalizing factor with service frameworks that define the structures, relationships, and communications. On top of that, build code generation tools that automatically create the common components within and around the objects.

I say this as though it’s simple. Nothing can be farther from the truth. It’s taken us the better part of three years to reach the point where this has become standard practice for new services. Integrating / updating existing services is still a challenge.

I guess I’ve just reiterated the point that reuse is hard, but as Nicole said, it’s not impossible. It requires commitment from both developers and managers, and the patience (and funding) to lay the groundwork.

Steven Rakitin – Software Consulting, Training, Facilitating, and Auditing says:

Nicole, there are several other reasons why software reuse is hard that have nothing to do with the forms of knowledge that you talked about. Imagine trying to come up with a reusable design for a new product. In order for the design to be truly reusable, the designer would need to know the possible ways the design will be used in the future. If that information or knowledge does not exist, it becomes very difficult to design something that is truly reusable…

Reuse was a very popular topic back in the 1980s, and people envisioned “software factories” cranking out reusable, modular components that could be designed and tested once and reused many times. About the closest we’ve ever come to achieving that goal is defining standard interfaces (APIs) that designers can design to…

It’s hard because every time we design a new app, we want to start from scratch… just the nature of software engineering and software engineers…

-Steve Rakitin-

Bryce Bolton – Electrical Engineer, Prof. Research Assistant at LASP / University of Colorado

Personalities and architecture are part of the difficulty.

Personalities: Everyone wants to do things their own way, and if they can’t quickly understand the architecture, or don’t like it, they will re-create it into something they understand.

Architecture: Even the best architectures create ongoing debates. Consider the battles between languages like Perl and Python. Verilog and VHDL, C and ADA. In each case, the first language is more freestyle & flexible while the second is more structured. This creates endless preference debates on a language level. Within teams there could be even more differences.

Good system architects within the projects lead the team toward a standard which is adopted by the other team members. So lack of reuse, or spaghetti code could indicate a lack of leadership in the team.

To converge, some team members must force their structure on others, or management must resolve to come up with a standard to overcome the “style battles”. Getting team members working together on this, and putting team members with different styles together on a project may help smooth differences.

Or, the organization could suggest a 3rd party platform or OS from which the project-specific standard is derived.

The goal isn’t to make everything reused or rigid, except where project requirements mandate that. The goal is to meet the system requirements, including reliability. So, if reuse means designing to a coding style that increases quality, then the effort matches programmatic requirements. If the code will not be reused, it is wasteful of project resources to design to something outside the scope of the project, even if it is an ideal.

We typically work within a project, not fully on ancillary goals or ideals. Yet, if functional teams do not gel and create standards across projects, all projects will be less effective.

Reuse is of particular interest in product platforms and successive projects where teams are re-using each others code anyhow. At some point it will pay off to converge on a coding style. Seeing this and pushing for such commonality is up to management or enlightened team members. In the beginning, though, it may be less likely to invest in reuse. The situation may be one of more rhetoric and less consensus.

Bryce Bolton

Alex Elentukh – Quality Assurance Director, Program Manager, Process Architect, Consultant, Writer

At my last engagement we made some significant progress with reusable test components. These are fully supported by the Business Process Testing (BPT) feature of HP Quality Center. Component-based test design allows us to curb the number of tests; otherwise, the tests were growing into an unmanageable mass of many thousands. A predictable lesson from this experience is that reuse must be supported by tools and deployed with great care and attention. For example, there should be provision to regularly review, communicate, and measure reusable components.

Emmett Dignan – Experienced systems and software professional

How many of the issues of reusability are simply issues of poor documentation, or of overly complex or overly large code blocks?

I have seen an organization use an internally developed library of functional modules for regularly reused needs. But it required a dragon of a software manager devoted to the concept.

But development time dropped substantially if you could just drop a known and tested module into the framework for the next program.

Would UML help with this?
