Author Archives: Nicole Radziwill

Cogramming (Or: Pair Programming for People Who “Don’t Like It”)


I saw a bunch of threads this morning on Twitter about pair programming, one of the core practices of XP and a cornerstone of agile. The arguments were diametrically opposed: either it’s great, and the overhead is necessary and leads to long-term value; or it’s terrible, because I need to get in the zone and that requires alone time.

(I don’t really like pair programming either… to be honest, I don’t have the attention span for it. I can get into my own groove, but it’s painful to follow along with someone else for more than a few minutes.)

Let me offer up an alternative: cogramming.

Cogramming is like pair programming, but you’re just programming next to someone on a module or a task that’s related to what they’re doing. (In today’s WFH climate, that might mean opening up a Zoom or a Google Meet, turning your cameras off, muting, and working independently until you encounter an issue that you’d ordinarily deal with inside your own head.) When you reach a checkpoint, or when it’s time to sign off, you can do “mini code reviews” to make sure someone else’s eyeballs have been on your work… and caught any big things that you might not have been able to see.

Does it work? From experience, yes. And there’s a basis for it in the research too, through the literature on collaborative play and group dynamics in kids. (Here’s an article that describes some of that.)

Cogramming can help you realize the benefits of pair programming without the pain of actually pair programming. Plus, managers will love hearing that they’re not losing any apparent productivity to two people working on the same code at the same time. If you’re anti-pair, try this approach before you give up entirely.

The Discovery Channel Test

Presentations can be boring. Talking about your work can be boring (to other people). When you’re sitting in a talk or a session that you find boring (and you can’t figure out WIIFM – what’s in it for me)… you learn less.

Although we shouldn’t have to use clickbait techniques to cater to society’s shrinking attention spans, it is easier to learn and retain information when it’s interesting. So I encourage all of you to apply the “Discovery Channel Test” to every presentation, talk, or session you contribute to or lead.

The reason I call it the “Discovery Channel Test” is that there used to be a program called City Confidential that was on all the time. Even though City Confidential was a show about murder mysteries, the first half of the one-hour show was about the city or region where the crime occurred. They talked about when the town was first settled, and the key families who made it what it is today. You heard stories of intrigue, and the challenges the town was currently facing. They made the story about the town itself so captivating that EVERY SINGLE TIME, I was caught off guard when they ended the first half-hour segment with “You’d never believe a murder could happen here.” Through the hundreds of times I watched this show, I was always shocked when that point came. “Oh, yeah… this is where that murder mystery happened, and we’re about to find out about it.”

Any production crew that can spend a half hour off-topic, keep me interested, and give me a dopamine burst right before the main point of the show… has achieved something great. And you, too, can achieve this same greatness when you’re talking about your tech stack or a new architectural initiative or that project you did last year that improved customer sat. Make it interesting by applying….

The Discovery Channel Test (* = could also be called The Good Podcast Test)

Rule 1: Use an emotional hook up front. Why should any of the people in your audience not leave after the first five minutes? Don’t make them guess! Tell them specifically why this topic might be interesting, or surprise them with an initial feeling of novelty or unexpectedness. Jonathan Lipnicki’s character in Jerry Maguire (1996) was great at this, with his “did you know?” questions. The dopamine surge you create with an emotional hook will keep them engaged for long enough to get hooked on your story.

Rule 2: Find the wow nugget(s). This is one of the things the Discovery Channel has always been really good at: getting me interested in things that I didn’t think were interesting to me. Remember that your projects and initiatives, no matter how cool they are to you, will be boring to other people unless you TRY to make them interesting. I try to tell my high schooler that even in the most boring classes you should be able to find some nugget, some angle, some insight that helps you see the subject matter in a way that grabs you. Find that angle for your audience, and then spoon feed it to them.

Rule 3: Use examples, screen shots, visuals, diagrams. If your presentations are full of slides with words, people will start yawning immediately and may or may not actually hear those words. You can also reduce ambiguity with examples and screen shots. For example, saying “we used Python for this project” is far less compelling than showing the tree structure of the code (or a simplified diagram) and annotating it with what each piece did to get the overall job done. Saying “we used Confluence” is less compelling than saying “we set up a Confluence site at [this location] and agreed to put [this kind of information] in there” because if someone has a need for [that information], at least they know the first place they can look for it.

Rule 4: Spoon feed the closing thoughts. At the very end, remind people what you want them to remember when they leave. Remind people why that’s interesting, and how it might benefit them in the future. Make it concrete and tangible. For example, can you give them reference material, or an infographic, or a checklist that they can use in the future? Don’t assume that people will get something out of your presentation just by attending… spoon feed what you WANT them to get out of it.

“Documentation” is a Dirty Word

I just skimmed through another 20 page planning document, written in old-school “requirements specification”-ese with a hefty dose of ambiguity. You know, things like “the Project Manager shall…” and “the technical team will respond to bugs.”

There’s a lot of good content in there. It’s not easy to get at, though… for each page I want to understand, I can expect to spend between ten minutes and an hour deciphering what’s going on. (That’s half a week of work, and I have a lot of other work to do this week. Is it even worth it?)

Also, this document is boring… and there’s a lot of filler (like “package X is useful, flexible, and open source” – OK, that’s great, and factually correct, but not very informative).

The whole document could have been condensed into 4 or 5 slides of diagrams plus annotations. If that had happened, I’d be looking at a 10 to 30 minute commitment overall to get my brain into this particular company’s context… and then I’d be able to make some useful contributions. (As it stands, I’m weighing whether it’s worth my time to go spelunking in that 20-pager at all. Probably not.)

Unfortunately:

  • The bulk of technical documentation that I read is unintelligible or lacks sufficient meaning. I have three decades of exposure to this stuff, so if I can’t understand it, I feel super sorry for the early career people (who might think they can’t understand it because they don’t know enough).
  • Nearly all of the technical documents I see are uninteresting. And tech is interesting.

99 times out of 100, technical documentation is dull and dysfunctional. While it’s supposed to help people and teams establish a shared understanding of a system or a process or a concept, it just ends up looking really impressive and convincing you that the authors know a lot more about this than you do. It doesn’t help you much, if at all.

And as a CTO, CIO, or VP of Engineering, I don’t want to pay for crappy documentation. This particular document probably cost me between $32K-50K, and that’s just the tip of the iceberg… because it doesn’t account for the incremental costs of the people who will attempt to decipher it – which could be 10-100x more over the lifetime of that document. Although the person or team that wrote the 20-pager probably knows what’s going on (at least a little bit), the artifact itself is going to cost other people time and take them away from more value-adding tasks.

The sheer volume of information we’re required to wade through to gain understanding – without the assurance that it will lead us in the right direction, or even in any direction at all – is probably what’s led to the disease of devaluing documentation.

A lot of managers think documentation isn’t value-adding, and shouldn’t be done, because… in most cases, people do it so badly that it isn’t worth the investment in the first place.

Dear tech workers: can we fix this, please? I know it will take a whole generation to effect meaningful change, but… I’m ready to roll up my sleeves.

Scrum and the Illusion of Progress

Agile is everywhere. Sprints are everywhere. Freshly trained Scrum practitioners and established devotees in the guise of Scrum Masters beat the drum of backlog grooming sessions and planning poker and demos and retros. 

They are very busy, and provide the blissful illusion of progress.

(Full disclosure: I am a Sprint Grinch, and my heart will not grow three sizes, or even one… on any day. I do support Alistair Cockburn’s Heart of Agile, because I think he’s as pained about where the industry has gone as I am. Probably more.)

Despite the mirage of progress, the people on the front lines doing the engineering work feel little relief from the faulty logic and misplaced pressure of the pointy-haired bosses. The managers aren’t getting the deliverables faster (like they convinced themselves they would). There are tons of charts and graphs that tell us how we’re doing (like burndowns), but now the CFO is using them to whip our horses just a little bit harder, trying to get them to go faster. They’re not going fast enough. (Spoiler: they will never go fast enough.)

And isn’t that what Agile is supposed to do… speed things up? NO!

Once upon a time, we used to build software like we built skyscrapers: plan every single task, put it on a Gantt chart, and make sure there are approval gates between every phase of work: elicit requirements, design the system, write code, test it, deploy it, maintain it.

But a couple of big, bad things tended to happen over and over: 

  • By the time we got to “test and deploy” we’d discover that the thing we built was not what the users and stakeholders actually needed. (“But it’s exactly what you asked for in the requirements,” we’d say, “and you approved it at multiple decision points.” “Sure,” they would say, “but we learned a lot in the meantime, and things are different now. You just built us what we thought we needed six months ago.”)
  • The software developers would build something that the operations team, responsible for standing it up and maintaining it, wouldn’t be able to support at the intended service levels. (“You can’t expect us to maintain this,” they’d say. “It’s fragile.”)

Agile practices emerged more than two decades ago when engineers said… hold it, there’s got to be a better way. If we work collaboratively with our users and stakeholders, learning together, figuring out how to realize value together… then we’ll produce something that real people are able to get value from much faster. We need control over our process, and we need you to be actively engaged, business people. We can eliminate all these handoffs, and replace them with shared accountability.

The two solutions were: 

  • Agile: We said “let’s take a more collaborative approach to development, and get the users and stakeholders together with the engineers, so they can all learn and co-create together until they’ve produced the next nugget of value.” 
  • DevOps: Instead of two groups siloed from each other, let’s automate the handoff and make it seamless, and give the developers tools that will alert them when they’ve got to fix more stuff. If tests automatically run before the code is merged, we’ll always know that we’re deploying stuff that runs. (I’m pretty sure everyone agreed this was a good idea.)
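That “tests run before merge” loop can be sketched as a CI config. This is purely illustrative (a hypothetical GitHub Actions workflow; the branch name and test command are placeholders, not anything from a real project):

```yaml
# .github/workflows/ci.yml - run the test suite on every pull request,
# so nothing merges until the checks pass
name: ci
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test   # placeholder: your project's real test command
```

Pair this with branch protection that requires the “test” check, and the merge button stays disabled until the tests pass.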

Agile is a great aspiration. But how often have you ever seen the users and stakeholders sharing space, sharing accountability, sharing the process of creating value? Not often. They still coordinate through user stories, and tickets, and handoffs, and PRs. 

Agile speeds up value delivery, but not necessarily software delivery. Having daily stand-ups and burndown charts and JIRA tickets won’t make people code faster. But it will make managers think they should be coding faster, and that’s where the poison will settle in and grow.

Bottom line, my position statement is:

  1. Any (minimalist) process is better than no process.
  2. Agility is an essential goal, but Scrum and certified Agile practitioners are less likely to get you there.
  3. Go back to first principles: plan an iteration, co-create value, see where it got you and reassess your ability to reach your goal, then adjust. Keep experimenting.

Would I ever use Scrum? Sure, under specific conditions:

  • There’s something we have to build, and we’ve never built it before.
  • The road ahead is unknown.
  • We have access to collaborators who can help us see what’s needed.
  • We agree to intentional checkpoints where we can see how far we got, and whether we can still get to the place we’d originally intended.
  • We iterate, and iterate, until we agree we’ve incrementally added enough total value nuggets to move to the next project.
  • There are no well-meaning-but-not-helpful engineering managers or business managers hovering over the development team with bated breath, waiting to beat us with burndowns. The dev team can manage itself.

Agility is a laudable – and essential – goal. But only in rare cases do I believe Agile and Scrum will get you there.

Why Software Estimation is (Often) Useless

For 12 years, I blogged and wrote a whole bunch. For the past year and a half, I’ve let myself be pulled away from so many of the things that make me me… including writing. Today I heard one of the best anecdotes ever, and it’s the spark that will be pulling me back in. (Thanks, John.)

So here it is! It has to do with software estimation. Not only is it difficult to accurately estimate how long it’s going to take someone to do a programming (or similarly technical) task, but the act of estimating often does not add any value at all. Estimation is a bad thing, especially if you’re trying to be Agile.

(And if you’re in a client/contractor relationship, you’ll have discovered that estimates are the lifeblood of the relationship, even as they drain all the life and all the blood out of the relationship, slowly and deliberately.)

Sometimes I get caught asking engineers for estimates even when the task we’re embarking on is new, unknown, uncertain, and requiring lots of learning and exploration and discovery. I should know better. But I cave, because the concept of the software estimate is so enticing: with a good estimate, I’ll know exactly how much time someone will need to spend working on a task that’s still kind of nebulous and mostly unknown. (That was a joke.)

My friend John shared the best anecdote ever today about why software estimation is so frustrating (liberally embellished):

Imagine that you’re standing on a hill looking down at a labyrinth, or a corn maze. It’s reasonably small… you can see that the corn maze is definitely doable, you can see a couple paths in and out, and the entire maze is a similar size to other mazes you’ve successfully found your way out of. So you’re pretty sure once you get to the entrance, you’ll find your way out.

But there’s no way you can say exactly how long it will take you to escape. Maybe you’ll run right through from start to finish, and it will be smooth. Maybe you’ll get stuck in the beginning, and spend a long time before winding your way out. Maybe you’ll run right close to the end, but have no idea you’re a few feet away from the exit, and you’ll get stuck there for a while.

And maybe you’ll make it halfway through, get lost, go in circles, and eventually just die in the maze.

Problem is, I can tell you how long it’s taken me to get across comparable mazes, but I have no way of knowing how long it will take me to escape from this maze. And just having another engineer in the maze to pair with me, seeing things I’m maybe not seeing, is no guarantee at all that either of us will get us out. Statistically, we’ll probably make it out, but the estimate I give you is just a guess.

Unfortunately, it’s a guess that’s going to make a lot of people unhappy, no matter what. Because even if I make it out of the maze fast this time, then they’ll expect that I’ll zoom through the next maze.

– John

Turn on your Domain Name for a Hugo Site on Google Cloud (PART 4)

You’ve got your Hugo site built, and in a public GCS bucket, and you can see it by going to the URL at https://storage.googleapis.com/www.yourdomainname.yourtld — now how can you make http://yourdomainname.yourtld work directly? This post assumes that you’ve already purchased a domain from http://domains.google and you have a Hugo site (or any other web site) set up in a GCS bucket.

Step 1 – Get into the domain administrator panel. Go to http://domains.google and click “My Domains” in the top right of the window.

Step 2 – Find the domain that you want to map to this bucket and click “Manage” on the rightmost side of that line.

Step 3 – Click “Edit DNS Settings” OR “Use custom name servers for your domain” – they both go to the same page.

Step 4 – Scroll down to “Custom Resource Records”:

  • In the textbox that says @, type www
  • In the dropdown, select CNAME
  • Keep 1H in the next textbox
  • In the last textbox, type c.storage.googleapis.com. (that last dot is important, too)
  • Click Add
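For reference, the CNAME you just added is equivalent to this line in a DNS zone file (shown only to make the pieces explicit; Google Domains builds it from the form fields above):

```
www  1h  IN  CNAME  c.storage.googleapis.com.
```

The trailing dot matters here too: it marks the target as a fully qualified name, so your own domain won’t get appended to it.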

Step 5 – Add a Synthetic Record so your “naked domain” (no www) will work. In the “Synthetic Records” section, put an @ in the first box, put http://www.yourdomainname.yourtld in the rightmost box, and click “Forward Path” in the middle group of radio buttons. Click Add and this should make the magic happen.

Step 6 – Wait. They’ll say it takes up to 48 hours for DNS to propagate… it usually takes a lot less. Even though you set up the CNAME to trigger from www, the Synthetic Record you added catches the naked domain, so you should be able to go to http://yourdomainname.yourtld. Check your custom URL after an hour or two! If the site isn’t available by the next day, though, there may be a problem in your GCS bucket or your setup.

Good luck! And remember to check out the “I want to get off Squarespace” project that motivated these posts… http://nicoleradziwill.com

Pushing Your Hugo Site to a GCS Bucket (PART 3)

By now, you’ve built a gorgeous Hugo site that you admire at http://localhost:1313, and that no one but you can see. This post aims to help you fix that by taking the first deployment step: putting your Hugo site into a GCS bucket. This assumes that you have git bash installed along with the Google Cloud SDK (gsutil), and that you’re familiar enough with http://console.cloud.google.com that you can log in and double-check your settings. This also assumes that you’ve purchased a domain name that you administer through http://domains.google. Now! To begin.

Step 1 – Build your Hugo site. Even though you can view it on localhost, the static site hasn’t been built yet.

cd /d/Hugo/myrealsite
hugo

That’s it! The Go engine behind Hugo just created a ton of static web pages on your behalf, and popped them into the /public directory. Go into that directory and ls -spla just to make sure stuff is there.

Step 2 – Use gsutil to set up your new bucket and tell Google Cloud that it will be a web site. You should still be sitting in /d/Hugo/myrealsite. Note: The domain name MUST have www in the beginning, even if you want to direct people to yourdomainname.yourtld.

gsutil mb gs://www.yourdomainname.yourtld
gsutil web set -m index.html -e 404.html gs://www.yourdomainname.yourtld

Step 3 – Make everything in that bucket public so people on the internet have the permissions to get to your page. This part was a little tricky for me, because I’m not sure exactly which command made it work, but I can tell you how to double check. Start by doing one (or both) of these:

gsutil defacl ch -u allUsers:R gs://www.yourdomainname.yourtld
gsutil iam ch allUsers:objectViewer gs://www.yourdomainname.yourtld

When you go into http://console.cloud.google.com, head to Storage -> Browser. Find the line item that has your http://www.yourdomainname.yourtld name that you just created. In the “Public Access” column, there should be a little orange warning icon and the words “Public to Internet”. If you don’t see that, then Google around for more advice on “how to make the contents of your GCS bucket public to the internet”.

While you’re in there, check to make sure the bucket is set up to serve a web page. On the far right side of that same line, you’ll see three vertical dots. Click on them, and go into “Edit Website Configuration”. The top textbox should say index.html and the bottom should say 404.html. If they don’t, make them say that, and then click Save.

Step 4 – Populate your GCS bucket with your built Hugo site in /public (not the raw files you edited). Make sure you’re still sitting in /d/Hugo/myrealsite and then do this:

gsutil rsync -R public gs://www.yourdomainname.yourtld
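If you’ll be redeploying often, Steps 1 and 4 can be wrapped in a small script. This is a minimal sketch under my own assumptions: the bucket name is a placeholder, and the check_build guard is an extra safety net I’ve added, not something gsutil requires:

```shell
#!/usr/bin/env bash
# deploy.sh - rebuild the Hugo site and sync the result to the GCS bucket

BUCKET="gs://www.yourdomainname.yourtld"   # placeholder: use your real bucket

# A built Hugo site should contain at least an index page;
# refuse to sync if it doesn't, so a broken build never goes live.
check_build() {
  [ -f "$1/index.html" ]
}

deploy() {
  hugo || return 1                                    # Step 1: regenerate public/
  check_build public || { echo "no public/index.html; not syncing" >&2; return 1; }
  gsutil rsync -R public "$BUCKET"                    # Step 4: push the built site
}
```

Source the file from your site root (e.g. /d/Hugo/myrealsite) and run deploy; if the build fails or produces an empty public/, nothing gets pushed.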

Step 5 – You now have a Hugo web site on Google Cloud! Test it out by pointing a browser to https://storage.googleapis.com/www.yourdomainname.yourtld/index.html (being sure to put your actual domain name in there) — if you get a permission error or an XML dump, go back to Step 3 because something went wrong there.

Now it’s time to link your bought domain name with your publicly visible Hugo web site!
