We Measure Time Because It’s Convenient. Not Because It’s Correct.

Across almost every industry, we obsess over one metric above all others: time.

How long will it take?
What’s the deadline?
Can we hit the date?

Everything else becomes secondary.

I first noticed this clearly while doing consulting work. Deadlines weren’t just important. They were the thing. Quality, correctness, durability — all of that was negotiable as long as the calendar obligation was met.

My boss and I used to joke about it by quoting a scene from Gung Ho.

In the movie, a car factory is frantically pushing to hit a production quota. At one point, a windshield gets smashed during assembly. Everyone panics — until someone shrugs and says, “Don’t worry. We’ll fix it in the dealership. Just ship it.”

That line stuck with me because it perfectly captures how deadline-driven systems actually work.

Once time is the primary metric, having something at the end of the timeline matters more than having something that’s right.

The end user almost never benefits from this. I can’t think of a single scenario where someone is genuinely better off because a product arrived on a specific date but didn’t actually work, wasn’t durable, or had to be corrected later.

What time-based metrics really do is give everyone else cover.

They create a clean stopping point.
They turn mistakes into tradeoffs.
They let people say, “We did what you asked,” even when the outcome is broken.

This doesn’t require bad intent. Most people aren’t lazy or malicious. But once you tell a system that time matters more than quality, you’ve also told it that quality is optional.

And so errors get a pass. Not because they’re acceptable — but because the clock ran out.

You see this everywhere. Education. Construction. Software. Manufacturing. Anywhere a deadline exists, behavior bends toward it. Not because people want to cut corners, but because the system rewards the thing it measures.

There’s another, quieter failure mode that shows up in deadline-driven organizations.

Sometimes the response isn’t rushing — it’s throttling.

I’ve seen executive teams deliberately reduce output so delivery becomes more predictable. Not because the team couldn’t do more, but because leadership was afraid of missing a date. The solution wasn’t better scoping or clearer quality bars. It was slowing everything down.

Estimates were padded. Capacity sat unused. Running 30% or more below what the team could actually deliver became normal.

From a spreadsheet perspective, it looked responsible. Timelines stabilized. Forecasts became easier. Risk appeared to go down.

But the cost was invisible — until it wasn’t.

Developers didn’t quit because the work was hard.
They quit because it was boring.

Predictability became the goal because it was easier to defend than performance.

Ironically, this is the same mistake as rushing. It’s just wearing a different costume.

In both cases, time is still the god. The system is still organized around avoiding calendar failure rather than producing something worth having.

Over time, this led me to adopt a simple rule.

Unless there is a singular, immovable event driving the date — a wedding, a party, a court deadline — the timeline is flexible.

When I work with clients, I explain that upfront: quality comes first. In ongoing relationships, I’ve found that better work delivered slightly late is almost always preferable to on-time work that creates downstream problems.

And if a date is going to slip, it’s communicated as early as possible. Not when the deadline arrives — when the risk becomes visible.

That rule doesn’t eliminate uncertainty. It just stops pretending the calendar can solve it.

Time is the easiest metric to agree on.
That doesn’t make it the right one.