I’ve been involved in a few contentious arguments lately regarding two of the most misunderstood parts of the dev process: testing and requirements gathering. The debate is often framed as “Should we have a process for these things?” Arguing about testing or requirements as a yes/no question provides cover to get mired in “tests take too long to write and aren’t useful” or “the requirements are always changing, and we have to move fast, so we don’t do that.” The not-so-subtle implication is that these things are extra add-ons we can skip.

But this frame is incomplete. Requirements will be decided, and tests will be run. It’s just a matter of when and how.

When someone argues that we don’t need formal requirements gathering, they’re really saying, “the devs should just guess, and it’ll probably be good enough.” For really simple projects, it might be. But even a moderately complex project won’t work this way. The software gets built, the users complain about missing features, and then the managers argue about it. Maybe it never gets fixed. Perhaps the project gets canceled. The de facto process becomes “gather the requirements after the software is built, then try to get out of having to act on them.”

My sticking point is that, regardless of your process, someone has to decide what the software will do. It’s just a matter of how and when that decision gets made. You aren’t skipping it when you abandon formal processes.

My recommendation is to talk to the user of the software and have a quick discussion of what they need. This is where an experienced dev has an advantage and can tease out hidden requirements and avoid dead ends. Then have a short meeting with your team and make the ‘must-haves’ and the ‘nice-to-haves’ clear. They should be bullet points, not technical implementations. The point of this meeting is to argue about requirements now, not after the software is built.

The same concept applies to testing. All software is tested; it’s just a matter of how much and when. Except in the rarest of situations, no developer will write a piece of code and push it to production without running it. And why would they run it locally? To test that it works, of course. So, the first and most common method of testing is ‘having the developer run it on their machine.’ There are some small projects where that might be good enough.
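The gap between “I ran it and it looked fine” and a repeatable check is smaller than it seems. Here’s a minimal sketch, assuming pytest; `summarize_orders` is a made-up function standing in for whatever the developer would otherwise verify by hand:

```python
# test_summarize.py -- a repeatable version of "run it on my machine and
# eyeball the output." summarize_orders is hypothetical.


def summarize_orders(orders):
    """Sum order totals per customer."""
    summary = {}
    for order in orders:
        summary[order["customer"]] = summary.get(order["customer"], 0) + order["total"]
    return summary


def test_groups_totals_by_customer():
    orders = [
        {"customer": "acme", "total": 10},
        {"customer": "acme", "total": 5},
        {"customer": "globex", "total": 7},
    ]
    assert summarize_orders(orders) == {"acme": 15, "globex": 7}


def test_handles_empty_input():
    assert summarize_orders([]) == {}
```

The check the developer was already doing by hand now runs the same way every time, on anyone’s machine.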

You’ll need to do better if you have any reliability requirements. For those who hem and haw about testing, my question is always, “When would you like errors to occur?” They will occur, and it’s better for them to occur before they reach production. If you can’t catch them in local dev, it’s better to catch them through CI/CD. And if they make it through anyway, it’s better to have swift rollback mechanisms. Viewed this way, the conversation changes from “Should we do testing?” into “How much testing should we do, and in what way?” Only then can we make an informed decision about how much reliability we’re willing to pay for.

The analytics systems I work in often go straight from local development into production. Internal customers can deal with a few hours of latency for reporting fixes; external customers won’t accept that. Deciding how much testing to do, and where to do it, works best when it’s based on the actual needs of the users. But we’ll never get there while we view testing as a yes/no question. It’s always a question of how and when, but never ‘if.’
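As a concrete illustration of what “how much and where” can look like, here’s a minimal sketch, again assuming pytest. The transform, the fixture, and the `slow` marker are all hypothetical (a custom marker like `slow` needs to be registered in pytest’s config to avoid a warning), and the split between what runs locally, what runs in CI, and what gets skipped should follow from the reliability the users actually need:

```python
# test_reporting.py -- one way to tier tests to match reliability needs.
# Fast checks run on every local change; the check marked `slow` can be
# skipped locally (pytest -m "not slow") and run in CI, or dropped
# entirely for an internal report that tolerates a few hours of latency.

import pytest


def keep_positive_totals(rows):
    """Hypothetical reporting transform: drop rows with non-positive totals."""
    return [row for row in rows if row["total"] > 0]


def test_drops_nonpositive_totals():
    # Cheap unit check: run on every commit.
    rows = [{"total": 3}, {"total": 0}, {"total": -1}]
    assert keep_positive_totals(rows) == [{"total": 3}]


@pytest.mark.slow
def test_larger_sample():
    # Slower check over a bigger fixture: a candidate for CI rather
    # than the local edit-run loop.
    rows = [{"total": n} for n in range(-500, 500)]
    result = keep_positive_totals(rows)
    assert len(result) == 499
    assert all(row["total"] > 0 for row in result)
```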