Category Archives: Software Quality

Top 10 Business Books You Should Read in 2020


I read well over a hundred books a year, and review many for Quality Management Journal and Software Quality Professional. Today, I’d like to bring you my TOP 10 PICKS out of all the books I read in 2019. First, let me affirm that I loved all of these books — it was really difficult to rank them. The criteria I used were:

  1. Is the topic related to quality or improvement? The book had to focus on making people, process, or technology better in some way. (So even though Greg Satell’s Cascades provided an amazing treatment of how to start movements, which is helpful for innovation, it wasn’t as closely related to the themes of quality and improvement I was targeting.)
  2. Did the book have an impact on me? In particular, did it transform my thinking in some way?
  3. Finally, how big is the audience that would be interested in this book? (Although some of my picks are amazing for niche audiences, they will be less amazing for people who are not part of that group; they were ranked lower.)
  4. Did I read it in 2019? (Unfortunately, several amazing books I read at the end of 2018, like Siva Vaidhyanathan’s Antisocial Media, didn’t make the cut.)

#10 – Understanding Agile Values & Principles (Duncan)

Duncan, Scott. (2019). Understanding Agile Values & Principles. An Examination of the Agile Manifesto. InfoQ, 106 pp. Available from https://www.infoq.com/minibooks/agile-values-principles

The biggest obstacle in agile transformation is getting teams to internalize the core values, and apply them as a matter of habit. This is why you see so many organizations do “fake agile” — do things like introduce daily stand-ups, declare themselves agile, and wonder why the success isn’t pouring in. Scott goes back to the first principles of the Agile Manifesto from 2001 to help leaders and teams become genuinely agile.

#9 – Risk-Based Thinking (Muschara)

Muschara, T. (2018). Risk-Based Thinking: Managing the Uncertainty of Human Error in Operations. Routledge/Taylor & Francis: Oxon and New York. 287 pages.

Risk-based thinking is one of the key tenets of ISO 9001:2015, which became the sole authoritative version of the standard when the transition period ended in September 2018. Although clause 8.5.3 of ISO 9001:2008 indirectly mentioned risk, it was not a driver for identifying and executing preventive actions. The new emphasis on risk depends upon the organizational context (clause 4.1) and the needs and expectations of “interested parties” or stakeholders (clause 4.2).

Unfortunately, the ISO 9001 revision does not provide guidance for how to incorporate risk-based thinking into operations, which is where Muschara’s new book fills the gap. It’s detailed and complex, but practical (and includes immediately actionable elements) throughout. For anyone struggling with the new focus of ISO 9001:2015, this book will help you bring theory into practice.

#8 – The Successful Software Manager (Fung)

Fung, H. (2019). The Successful Software Manager. Packt Publishing, Birmingham UK, 433 pp.

There are lots of books on the market that provide technical guidance to software engineers and quality assurance specialists, but little information to help them figure out how (and whether) to make the transition from developer to manager. Herman Fung’s new release fills this gap in a complete, methodical, and inspiring way. This book will benefit any developer or technical specialist who wants to know what software management entails and how they can adapt to this role effectively. It’s the book I wish I had 20 years ago.

#7 – New Power (Heimans & Timms)

Heimans, J. & Timms, H. (2018). New Power: How Power Works in Our Hyperconnected World – and How to Make it Work For You. Doubleday, New York, 325 pp.

As we change technology, the technology changes us. This book is an engaging treatise on how to navigate the power dynamics of our social media-infused world. It provides insight on how to use, and think in terms of, “platform culture”.

#6 – A Practical Guide to the Safety Profession (Maldonado)

Maldonado, J. (2019). A Practical Guide to the Safety Profession: The Relentless Pursuit (CRC Focus). CRC Press: Taylor & Francis, Boca Raton FL, 154 pp.

One of the best ways to learn about a role or responsibility is to hear stories from people who have previously served in those roles. With that in mind, if you’re looking for a way to help make safety management “real” — or to help new safety managers in your organization quickly and easily focus on the most important elements of the job — this book should be your go-to reference. In contrast with other books that focus on the interrelated concepts in quality, safety, and environmental management, this book gets the reader engaged by presenting one key story per chapter. Each story takes an honest, revealing look at safety. This book is short, sweet, and high-impact for those who need a quick introduction to the life of an occupational health and safety manager.

#5 – Data Quality (Mahanti)

Mahanti, R. (2018). Data Quality: Dimensions, Measurement, Strategy, Management and Governance. ASQ Quality Press, Milwaukee WI, 526 pp.

I can now confidently say — if you need a book on data quality, you only need ONE book on data quality. Mahanti, who is one of the Associate Editors of Software Quality Professional, has done a masterful job compiling, organizing, and explaining all aspects of data quality. She takes a cross-industry perspective, producing a handbook that is applicable for solving quality challenges associated with any kind of data.

Throughout the book, examples and stories are emphasized. Explanations supplement most concepts and topics in a way that makes it easy to relate your own challenges to the lessons within the book. In short, this is the best data quality book on the market, and it will provide immediately actionable guidance for software engineers, development managers, senior leaders, and executives who want to improve their capabilities through data quality.

#4 – The Innovator’s Book (McKeown)

McKeown, M. (2020). The Innovator’s Book: Rules for Rebels, Mavericks and Innovators (Concise Advice). LID Publishing, 128 pp.

Want to inspire your teams to keep innovation at the front of their brains? If so, you need a coffee table book, and preferably one where the insights come from actual research. That’s what you’ve got with Max’s new book. (And yes, it’s “not published yet” — I got an early copy. Still meets my criteria for 2019 recommendations.)

#3 – The Seventh Level (Slavin)

Slavin, A. (2019). The Seventh Level: Transform Your Business Through Meaningful Engagement with Customers and Employees. Lioncrest Publishing, New York, 250 pp.

For starters, Amanda is a powerhouse who’s had some amazing marketing and branding successes early in her career. It makes sense, then, that she’s been able to encapsulate the lessons learned into this book that will help you achieve better customer engagement. How? By thinking about engagement in terms of different levels, from Disengagement to Literate Thinking. By helping your customers take smaller steps along this seven step path, you can make engagement a reality.

#2 – Principle Based Organizational Structure (Meyer)

Meyer, D. (2019). Principle-Based Organizational Structure: A Handbook to Help You Engineer Entrepreneurial Thinking and Teamwork into Organizations of Any Size. NDMA, 420 pp.

This is my odds-on impact favorite of the year. It takes all the best practices I’ve learned over the past two decades about designing an organization for laser focus on strategy execution — and packages them up into a step-by-step method for assessing and improving organizational design. This book can help you fix broken organizations… and most organizations are broken in some way.

#1 – Story 10x (Margolis)

Margolis, M. (2019). Story 10x: Turn the Impossible Into the Inevitable. Storied, 208 pp.

You have great ideas, but nobody else can see what you see. Right?? Michael’s book will help you cut through the fog — build a story that connects with the right people at the right time. It’s not like those other “build a narrative” books — it’s like a concentrated power pellet, immediately actionable and compelling. This is my utility favorite of the year… and it changed the way I think about how I present my own ideas.


Hope you found this list enjoyable! And although it’s not on my Top 10 for obvious reasons, check out my Introductory Statistics and Data Science with R as well — I released the 3rd edition in 2019.

There’s a Fly in the Milk (and a Bug in the Software)

Where “software bugs” got their name — the dead moth stuck in a relay in Harvard’s Mark II in 1947. From https://en.wikipedia.org/wiki/Software_bug

As one does, I spent a good part of this weekend reading the Annual Report of the Michigan Dairymen’s Association. It provides an interesting glimpse into the processes that have to be managed to source raw materials from suppliers, to produce milk and cream and butter, and to cultivate an engaged and productive workforce.

You might be yelling at your screen right now. DairyMEN’s? Aren’t we beyond that now? What’s wrong with them? The answer is: nothing. This is an annual report from 1915. Your next question is probably: what could the dairymen have been doing in 1915 that would possibly interest production and operations managers in 2019? The answer here, surprisingly, is a lot. Except for the overly formal and old-timey word choices, the challenges and concerns encountered in the dairy industry are remarkably consistent over time.

It turns out that flies were a particular concern in 1915 — and they remain a huge threat to quality and safety in food and beverage production today:

  • “…an endless war should be waged against the fly.”
  • “[avoid] the undue exposure of the milk cooler to dust and flies.”
  • “The same cows that freshen in July and August will give more milk in December it seems to me… because at that time of year the dairyman has flies to contend with…”
  • “Flies are known to be great carriers of bacteria, and coming from these feeding places to the creamery may carry thousands of undesirable bacteria direct to the milk-cans or vats.”

In a December 2018 column in Food Safety Tech, Chelle Hartzer describes not one but three (!!!) different types of flies that can wreak havoc in a food production facility. There are house flies that deposit pathogens and contaminants on every surface they land on, moth flies that grow in the film inside drains until they start flying too, and fruit flies that can directly contaminate food. All flies need food, making your food or beverage processing facility a potential utopia for them.

In the controls she presented to manage fly-related hazards, I noticed parallels to controls for preventing and catching bugs in software:

  • Make sanitation a priority. Clean up messes, take out the trash on a daily basis, and clean the insides of trash bins. In software development, don’t leave your messes to other people — or your future self!  Bake time into your development schedule to refactor on a regular basis. And remember to maintain your test tools! If you’re doing test-driven development with old tools, even your trash bins may be harboring additional risks.
  • Swap outdoor lighting. In food production facilities, it’s important to use lighting that doesn’t bring the flies to you (particularly at night). Similarly, in software, examine your environment to make sure there are no “bug attractors” like lack of communication or effective version control, dependencies on buggy packages or third party tools, or lack of structured and systematic development processes.
  • Install automatic doors to limit the amount of time and space available for flies to get into the facility. In software, this relates to limiting the complexity of your code and strategically implementing testing, e.g. test-driven development or an emphasis on hardening the most critical and/or frequently used parts of your system.
  • Inspect loading and unloading areas and seal cracks and crevices. Keep tight seals around critical areas. The “tight seals” in software development are the structured, systematic processes related to verifying and validating your code. This includes design reviews, pair programming, sign-offs, integration and regression testing, and user acceptance testing.
  • Clean drains on a regular basis. The message here is that flies can start their lives in any location that’s suitable for their growth, and you should look for those places and keep them sanitized too. In software, this suggests an ongoing examination of technical debt. Where are the drains that could harbor new issues? Find them, monitor them, and manage them.
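The “tight seals” idea above can be made concrete with a tiny, hypothetical example: a regression test pins down behavior that is known to be correct, so any crack that opens later is caught immediately rather than leaking bugs downstream. The function and its test below are purely illustrative (the names and the “fat content” parsing task are my own invention, not from any real codebase):

```python
# A regression test acts like a "tight seal": it freezes known-good behavior
# so future changes can't break it silently. (Hypothetical example.)

def parse_fat_content(label: str) -> float:
    """Parse a percentage like '3.25%' from a product label string."""
    return float(label.strip().rstrip("%"))

def test_parse_fat_content():
    # Pin down current, correct behavior.
    assert parse_fat_content("3.25%") == 3.25
    assert parse_fat_content(" 2% ") == 2.0

test_parse_fat_content()
print("regression tests passed")
```

Run under a test framework (or, as here, directly), the seal holds as long as the assertions pass; the moment a refactor changes the behavior, the test fails loudly.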

Although clearly there’s a huge difference between pest management in food and beverage production and managing code quality, process-related pests have been an issue for at least a century — and likely throughout history. What are the flies in your industry, and what are you doing to make sure they don’t contaminate your systems and bring you down?

How to Assess the Quality of a Chatbot

Image Credit: Doug Buckley of http://hyperactive.to

Quality is the “totality of characteristics of an entity that bear upon its ability to meet stated and implied needs.” (ISO 9001:2015, clause 3.1.5) Quality assurance is the practice of assessing whether a particular product or service has the characteristics to meet needs, and through continuous improvement efforts, we use data to tell us whether or not we are adjusting those characteristics to more effectively meet the needs of our stakeholders.

But what if the entity is a chatbot?

In June 2017, we published a paper that explored that question. We mined the academic and industry literature to 1) determine what quality attributes have been used by others to assess chatbot quality, 2) organize them in terms of efficiency, effectiveness, and satisfaction (using guidance from the ISO 9241 definition of usability), and 3) explore the utility of Saaty’s Analytic Hierarchy Process (AHP) to help organizations select between one or more versions of chatbots based on quality considerations. (It’s sort of like A/B testing for chatbots.)

“There are many ways for practitioners to apply the material in this article:

  • The quality attributes in Table 1 can be used as a checklist for a chatbot implementation team to make sure they have addressed key issues.
  • Two or more conversational systems can be compared by selecting the most significant quality attributes.
  • Systems can be compared at two points in time to see if quality has improved, which may be particularly useful for adaptive systems that learn as they are exposed to additional participants and topics.”
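To give a flavor of the AHP comparison (this is a sketch of the general technique, not the paper’s actual code; the attribute names, pairwise judgments, and chatbot scores below are illustrative assumptions):

```python
# Sketch of Saaty's AHP for comparing two chatbot versions on three quality
# attributes. All numbers here are hypothetical.

def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalize each column, average the rows."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    return [sum(pairwise[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]

# pairwise[i][j] = how much more important attribute i is than attribute j
# (attributes: efficiency, effectiveness, satisfaction).
criteria = [
    [1,   1/3, 1/2],
    [3,   1,   2  ],
    [2,   1/2, 1  ],
]
w = ahp_weights(criteria)  # priority weights, summing to 1

# Hypothetical 0-1 scores for chatbots A and B on each attribute.
scores = {"A": [0.8, 0.6, 0.7], "B": [0.6, 0.8, 0.75]}
overall = {bot: sum(wi * si for wi, si in zip(w, s)) for bot, s in scores.items()}
print(overall)  # the higher weighted score wins the "A/B test"
```

A fuller treatment would use the principal eigenvector of the pairwise matrix and check Saaty’s consistency ratio; the column-normalization average above is the common quick approximation.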

Data Quality is Key for Asset Management in Data Science

This post was motivated by two recent tweets by Dr. Diego Kuonen, Principal of Statoo Consulting in Switzerland (who you should definitely follow if you don’t already – he’s one of the only other people in the world who thinks about data science and quality). First, he shared a slide show from CIO Insight with this clickbaity title, bound to capture the attention of any manager who cares about their bottom line (yeah, they’re unicorns):

“The Best Way to Use Data to Cut Costs? Delete It.”

I’m so happy this message is starting to enter corporate consciousness, because I lived it throughout the decade of the 2000’s — working on data management for the National Radio Astronomy Observatory (NRAO). I published several papers during that time that present the following position on this theme (links to the full text articles are at the bottom of this post):

  • First, storing data means you’ve saved it to physical media; archiving data implies that you are storing data over a longer (and possibly very long) time horizon.
  • Even though storage is cheap, don’t store (or archive) everything. Inventories have holding costs, and data warehouses are no different (even though those electrons are so, so tiny).
  • Archiving data that is of dubious quality is never advised. (It’s like piling your garage full of all those early drafts of every paper you’ve ever written… and having done this, I strongly recommend against it.)
  • Sometimes it can be hard to tell whether the raw data we’re collecting is fundamentally good or bad — but we have to try.
  • Data science provides fantastic techniques for learning what is meant by data quality, and then automating the classification process.
  • The intent of whoever collects the data is bound to be different from that of whoever uses the data in the future.
  • If we do not capture intent, we are significantly suppressing the potential that the data asset will have in the future.
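The point about automating quality classification can be sketched as a rule-based screen that flags records before they are archived. This toy example is in the spirit of the rule-based approach in the papers listed below, but the field names, thresholds, and records are my own illustrative assumptions, not actual observatory checks:

```python
# Toy rule-based data quality classification: flag records before archiving.
# Field names and thresholds are hypothetical.

RULES = [
    ("missing_value",  lambda rec: rec.get("flux") is None),
    ("out_of_range",   lambda rec: rec.get("flux") is not None
                                   and not (0 <= rec["flux"] < 1e6)),
    ("missing_intent", lambda rec: not rec.get("observing_intent")),  # capture WHY it was collected
]

def classify(record):
    """Return quality flags for a record; an empty list means it looks archivable."""
    return [name for name, check in RULES if check(record)]

records = [
    {"flux": 42.0, "observing_intent": "calibration"},
    {"flux": None, "observing_intent": "survey"},
    {"flux": -5.0, "observing_intent": ""},
]
for rec in records:
    print(classify(rec) or ["ok"])
```

Once rules like these exist, the classification step can run automatically in the ingest pipeline, so data of dubious quality (or unknown intent) never silently accumulates in the archive.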

Although I hadn’t seen this when I was deeply enmeshed in the problem long ago, it totally warmed my heart when Diego followed up with this quote from Deming in 1942:

[Image: Deming quote from 1942 on not archiving bad data]

In my opinion, the need for a dedicated focus on understanding what we mean by data quality (for our particular contexts) and then working to make sure we don’t load up our Big Data opportunities with Bad Data liabilities will be the difference between competitive and combustible in the future. Mind your data quality before your data science. It will also positively impact the sustainability of your data archive.

Papers where I talked about why NOT to archive all your data are here:

  1. Radziwill, N. M., 2006: Foundations for Quality Management of Scientific Data Products. Quality Management Journal, v13 Issue 2 (April), p. 7-21.
  2. Radziwill, N. M., 2006: Valuation, Policy and Software Strategy. SPIE, Orlando FL, May 25-31.
  3. Radziwill, N.M. and R. DuPlain, 2005: A Framework for Telescope Data Quality Management. Proc. SPIE, Madrid, Spain, October 2-5, 2005.
  4. DuPlain, R. F. and N.M. Radziwill, 2006: Autonomous Quality Assurance and Troubleshooting. SPIE, Orlando FL, May 25-31.
  5. DuPlain, R., Radziwill, N.M., & Shelton, A., 2007: A Rule-Based Data Quality Startup Using PyCLIPS. ADASS XVII, London UK, September 2007.


Innovation Without Maintenance is (Potentially) Cyberterrorism

Image Credit: Lucy Glover Photography (http://lucyglover.com/)

On July 8th, Wall Street’s software failed (and the WSJ web site went down). United’s planes were grounded for two hours across the entire US. And this all happened only shortly after China’s stocks mysteriously plummeted. Odd coincidence, or carefully planned coordinated cyberattack? Bloggers say don’t worry… don’t panic. Probably not a big deal overall:

Heck, the whole United fleet was grounded last month too… NYSE is one stock exchange among many. The website of a newspaper isn’t important, and the Chinese stocks are volatile… we should not worry that this is a coordinated attack, especially of the dreaded “cyber-terrorist” kind…

The big problem we face isn’t coordinated cyber-terrorism, it’s that software sucks. Software sucks for many reasons, all of which go deep, are entangled, and are expensive to fix. (Or, everything is broken, eventually). This is a major headache, and a real worry as software eats more and more of the world.

In a large and complex system, something will ALWAYS be broken. Our job is to make sure we don’t let the wrong pieces get broken and stay broken… and we need to make sure our funding, our policies, and our quality systems reflect this priority.

Once upon a time in the early 2000’s, I worked as a technology manager at a great big telescope called the GBT (not an acronym for great big, but rather Green Bank… where it’s located in West Virginia).

It cost a lot to maintain and operate that telescope… nearly $10M every year. About 10-15% of this budget was spent on software development. Behind all great hardware and instrumentation, there’s great (or at least functional) software that helps you accomplish whatever goals and objectives you have that require the hardware. Even though we had to push forward and work on new capabilities to keep our telescope relevant to the scientists who used it to uncover new knowledge about the universe, we had to continue maintaining the old software… or the whole telescope might malfunction.

It’s not popular to keep putting money into maintenance at the expense of funding innovation. But it’s necessary:

  • Without spending time and money to continuously firm up our legacy systems, we’re increasing the likelihood that they will crash (all on their own), producing devastating impacts (either individually or collectively).
  • Without spending time and money to continuously firm up our legacy systems, we’re also increasing the possibility that some rogue hacker (or completely legitimate domestic or foreign enemy) will be able to trigger some form of devastation that impacts the safety, security, or well-being of many people.

When we choose to support innovation at the expense of regular maintenance and continuous improvement, we’re terrorizing our future selves. Especially if our work involves maintaining software that connects or influences people and their devices. Isn’t that right, Amtrak?

Why I <3 R: My Not-So-Secret Valentine

My valentine is unique. It will not provide me with flowers, or chocolates, or a romantic dinner tonight, and will certainly not whisper sweet nothings into my good ear. And yet – I will feel no less loved. In contrast, my valentine will probably give me some routines for identifying control limits on control charts, and maybe a way to classify time series. I’m really looking forward to spending some quality time today with this great positive force in my life that saves me so much time and makes me so productive.

Today, on Valentine’s Day, I am serenading one of the loves of my life – R. Technically, R is a statistical software package, but for me, it’s the nirvana of data analysis. I am not a hardcore geek programmer, you see. I don’t like to spend hours coding, admiring the elegance of the syntax and data structures, or finding more compact ways to get the job done. I just want to crack open my data and learn cool things about it, and the faster and more butter-like the better.

Here are a few of the reasons why I love R:

  • R did not play hard to get. The first time I downloaded R from http://www.r-project.org, it only took about 3 minutes, I was able to start playing with it immediately, and it actually worked without a giant installation struggle.
  • R is free. I didn’t have to pay to download it. I don’t have to pay its living expenses in the form of license fees, upgrade fees, or rental charges (like I did when I used SPSS). If I need more from R, I can probably download a new package, and get that too for free.
  • R blended into my living situation rather nicely, and if I decide to move, I’m confident that R will be happy in my new place. As a Windows user, I’m accustomed to having hellacious issues installing software, keeping it up to date, loading new packages, and so on. But R works well on Windows. And when I want to move to Linux, R works well there too. And on the days when I just want to get touchy feely with a Mac, R works well there too.
  • R gets a lot of exercise, so it’s always in pretty good shape. There is an enthusiastic global community of R users who number in the tens of thousands (and maybe more), and report issues to the people who develop and maintain the individual packages. It’s rare to run into an error with R, especially when you’re using a package that is very popular.
  • R is very social; in fact, it’s on Facebook. And if you friend R Bloggers, you’ll get updates about great things you can do with the software (some basic techniques, but some really advanced ones too). Most updates from R Bloggers come with working code.
  • Instead of just having ONE nice package, R has HUNDREDS of nice packages. And each performs a different and unique function, from graphics, to network analysis, to machine learning, to bioinformatics, to super hot-off-the-press algorithms that someone just developed and published. (I even learned how to use the “dtw” package over the weekend, which provides algorithms for time series clustering and classification using a technique called Dynamic Time Warping. Sounds cool, huh!) If you aren’t happy with one package, you can probably find a comparable package that someone else wrote that implements your desired functions in a different way.
  • (And if you aren’t satisfied by those packages, there’s always someone out there coding a new one.)
  • R helps me meditate. OK, so we can’t go to tai chi class together, but I do find it very easy to get into the flow (a la Csikszentmihalyi) when I’m using R.
  • R doesn’t argue with me for no reason. Most of the error messages actually make sense and mean something.
  • R always has time to spend with me. All I have to do is turn it on by double-clicking that nice R icon on my desktop. I don’t ever have to compete with other users or feel jealous of them. R never turns me down or says it’s got other stuff to do. R always makes me feel important and special, because it helps me accomplish great things that I would not be able to do on my own. R supports my personal and professional goals.
  • R has its own journal. Wow. Not only is it utilitarian and fun to be around, but it’s also got a great reputation and is recognized and honored as a solid citizen of the software community.
  • R always remembers me. I can save the image of my entire session with it and pick it up at a later time.
  • R will never leave me. (Well, I hope.)

The most important reason I like R is that I just like spending time with it, learning more about it, and feeling our relationship deepen as it gently helps me analyze all my new data. (Seriously geeky – yeah, I know. At least I won’t be disappointed by the object of MY affection today : )

<3

Maker’s Meeting, Manager’s Meeting

In July, Paul Graham posted an article called “Maker’s Schedule, Manager’s Schedule”. He points out that people who make things, like software engineers and writers, are on a completely different schedule than managers – and that by imposing the manager’s schedule on the developers, there is an associated cost. Makers simply can’t be as productive on the manager’s schedule:

When you’re operating on the maker’s schedule, meetings are a disaster. A single meeting can blow a whole afternoon, by breaking it into two pieces each too small to do anything hard in. Plus you have to remember to go to the meeting. That’s no problem for someone on the manager’s schedule. There’s always something coming on the next hour; the only question is what. But when someone on the maker’s schedule has a meeting, they have to think about it.

For someone on the maker’s schedule, having a meeting is like throwing an exception. It doesn’t merely cause you to switch from one task to another; it changes the mode in which you work.

I find one meeting can sometimes affect a whole day. A meeting commonly blows at least half a day, by breaking up a morning or afternoon. But in addition there’s sometimes a cascading effect. If I know the afternoon is going to be broken up, I’m slightly less likely to start something ambitious in the morning.

This concept really resonated with us – we know about the costs of context switching, but this presented a nice concept for how a developer’s day can be segmented such that ample time is provided for getting things done. As a result, we attempted to apply the concept to achieve more effective communication between technical staff and managers. And in at least one case, it worked extremely well.

Case: Ron DuPlain (@rduplain) and I frequently work together on technical projects. I am the manager; he is the developer. More than we like, we run into problems communicating, but fortunately we are both always on the lookout for strategies to help us communicate better. We decided to apply the “makers vs. managers” concept to meetings, to see whether declaring whether we were having a maker’s meeting or a manager’s meeting prior to the session would improve our ability to communicate with one another.

And it did. We had a very effective maker’s meeting today, for example… explored some technical challenges, worked through a solution space, and talked about possible design options and background information. It was great. As a manager, I got to spend time thinking about a technical problem, but temporarily suspended my attachment to dates, milestones and artifacts. As a developer, Ron got the time and attention from me that he needed to explain his challenges, without the pressure of knowing that I was in a hurry and just needed the bottom line. As a result, Ron felt like I was able to understand the perspectives he was presenting more effectively, and get a better sense of the trade-offs he was exploring.

We had the opportunity to meet on the same terms, all because we declared the intent of our meeting up front in terms of “makers” and “managers”. Thanks Paul – this common language is proving to be a powerful concept for achieving a shared and immediate understanding of technical problems.
