Several jobs ago, I was in charge of support tickets for an analytics system, and I kept getting tickets from one of the analyst teams claiming the DB connection was broken. Every time they tried to query the DB with their tool, they’d get an error (the tool’s name rhymes with “bike row soft dower lee eye”). So I went through the usual troubleshooting: the DB was up and running, and I could query it just fine. I asked for the exact sequence of steps they followed, went through them one at a time, and had no trouble connecting.

Knowing it wasn’t an issue with the DB, I went with my next hunch. “Can you open up the task manager for me? I wanna see something.” This analyst was working on a machine with a whopping 6GB of RAM, and since it was a virtual machine, I was also willing to bet the disk speed was abysmal. I gave them the bad news: the machine needed vastly more resources for the job. The reply I got was “Oh, IT is very strict about resources. We could never get that approved.”

If you’ve experienced anything like this, then you’ve experienced improving a visible metric by undermining an invisible one: IT keeps its hardware spend low and measurable, while the cost in lost analyst hours and support tickets never shows up on anyone’s dashboard.

Companies and individuals do this all the time, in a million ways. Airlines devalue reward points to increase profits at the expense of the less measurable “how much customers like us” metric. Or they defer maintenance to show short-term profits. If a manager promises a quick turnaround on key metrics, you can almost guarantee they will work primarily off this sleight-of-hand trick.

Avoiding this requires two things. The first is understanding that metrics will always paint an incomplete picture: the data informs, but it always needs more detail and nuance. The second is recognizing that using metrics alone, with no overriding values, puts you in a race to the bottom. It’s a well-known joke that if you A/B test a website and drive purely for clicks, eventually it becomes porn or gambling. We risk the analog equivalent when we blindly follow metrics.
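To make the “metrics with no overriding values” failure mode concrete, here is a minimal, hypothetical sketch of a pure click optimizer. The variant names and helper functions are invented for illustration; the point is that the optimizer’s entire world is one number per variant, so nothing in it can object to whatever content happens to maximize that number.

```python
import random

# Hypothetical variants; the optimizer has no idea what any of them contain.
variants = ["original_homepage", "clickbait_headline", "gambling_banner"]

clicks = {v: 0 for v in variants}
impressions = {v: 0 for v in variants}

def click_rate(variant):
    """Observed click-through rate: the only signal the optimizer ever sees."""
    return clicks[variant] / impressions[variant] if impressions[variant] else 0.0

def choose_variant(epsilon=0.1):
    """Epsilon-greedy selection: mostly show whichever variant clicks best."""
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=click_rate)

def record(variant, clicked):
    """Log one impression and whether it was clicked."""
    impressions[variant] += 1
    clicks[variant] += int(clicked)
```

Nothing in that loop encodes brand, trust, or long-term goodwill; those live in the invisible metrics the code cannot see, which is exactly why a human with overriding values has to stay in the loop.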