What We Deliberately Chose Not to Build

Ethan Cole

Every product has a feature list.
Fewer have a refusal list.

The first one is easy to show investors, users, and press.
The second one rarely leaves internal documents — if it exists at all.

For us, that list matters more.

Because products don’t become dangerous by accident. They become dangerous by accumulation — by features added “temporarily,” by data collected “just in case,” by options introduced not because they were needed, but because someone didn’t want to say no.

This is our record of what we chose not to build — and why those absences define our product more clearly than anything we shipped.

We didn’t build background data collection

Not because it’s hard.
Not because it’s expensive.

But because once it exists, it never stays “background.”

This echoes the ideas from The Hidden Economy Behind Personal Data, where we explained how seemingly “neutral” data slowly becomes a liability. Background collection has a way of becoming infrastructure. Infrastructure becomes dependency. Dependency becomes justification: we already have the data, so why not use it?

That’s how products slide from respectful to invasive without a single dramatic decision.

So we chose not to collect:

  • behavioral telemetry unrelated to core functionality
  • passive usage analytics that users don’t explicitly trigger
  • data gathered without a clear, present use

Not “we might need this later.”
Not “everyone else does it.”
Only: does the product break without it?

If the answer was no, the feature didn’t ship.
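
To make the test concrete, here is a minimal sketch in TypeScript of what such a gate could look like. The event names and fields are hypothetical, not taken from our codebase; the point is the shape of the rule, where rejection is the default and recording requires a reason.

```ts
// All names here are hypothetical, for illustration only.
type EventSource = "user-action" | "passive";

interface ProductEvent {
  name: string;
  source: EventSource;      // who triggered the event
  requiredForCore: boolean; // does the product break without it?
}

function shouldRecord(event: ProductEvent): boolean {
  // Passive telemetry is rejected outright, no matter how useful it might be "later".
  if (event.source === "passive") return false;
  // Even explicit events are kept only if core functionality needs them.
  return event.requiredForCore;
}

// A crash report the user chose to send passes; passive hover tracking never does.
shouldRecord({ name: "crash-report", source: "user-action", requiredForCore: true }); // true
shouldRecord({ name: "hover-heatmap", source: "passive", requiredForCore: false });   // false
```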

This approach continues the thinking from Security Theater vs Real Protection — that more data doesn’t equal more safety.

We didn’t build features “for edge cases”

Edge cases are seductive. They flatter the team’s intelligence. They create the illusion of thoroughness.

They also quietly expand complexity in directions most users will never see — until something fails.

We explored this tension in Why Minimalism Improves Security: every extra branch, toggle, and exception becomes a new risk vector.

Every edge-case feature:

  • introduces new logic paths
  • creates new failure modes
  • demands new data inputs
  • increases attack surface

And most of the time, it solves a problem for a hypothetical user while making the product harder to reason about for everyone else.
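
A rough way to see the cost (the flags here are hypothetical): every independent toggle doubles the number of states the system can be in, and each state is something you have to test, document, and secure.

```ts
// Hypothetical feature flags, purely for illustration.
const flags = { legacyImport: false, offlineMode: false, betaSync: false };

// n independent boolean flags yield 2^n reachable configurations.
const states = 2 ** Object.keys(flags).length;
console.log(states); // 8, and that's with only three flags
```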

We chose a narrower product that behaves predictably over a broader one that behaves cleverly.

That decision cost us feature parity in some comparisons.
It bought us something more valuable: explainability.

This disciplined refusal is also described in Why We Build Fewer Features on Purpose, where deliberate reduction is presented as strategy, not limitation.

We didn’t build retention mechanics

No streaks.
No nudges.
No artificial friction to discourage exit.

Not because retention doesn’t matter — but because forced retention distorts incentives.

When your product depends on keeping users regardless of fit, you stop asking whether it’s still serving them. You start optimizing for staying, not for usefulness.

We deliberately designed for exit. This directly follows the philosophy in Designing for Exit Instead of Retention.

So we avoided:

  • gamified chains
  • progress traps
  • forced engagement loops

A product confident in its value doesn’t need to trap people.

We didn’t build “just in case” integrations

Integrations are rarely neutral.

Every external connection:

  • imports another company’s assumptions
  • extends your trust boundary
  • multiplies your risk surface

The promise is always convenience.
The cost is always hidden.

So we refused integrations that:

  • existed only to check a box
  • duplicated core functionality
  • required sharing user data without strict necessity

This slowed adoption in some ecosystems.
It also meant we never had to explain why someone else’s breach affected our users.
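
If that filter were written down as code, it might look like this sketch; the field names are invented for illustration, since the real review is a conversation, not a function.

```ts
// A hypothetical review record, not our actual process.
interface IntegrationProposal {
  checkboxOnly: boolean;           // exists only to tick an ecosystem box
  duplicatesCore: boolean;         // re-implements something we already do
  sharesUserData: boolean;         // sends user data across our trust boundary
  dataSharingIsNecessary: boolean; // and only when strictly unavoidable
}

function approveIntegration(p: IntegrationProposal): boolean {
  if (p.checkboxOnly || p.duplicatesCore) return false;
  // User data may cross the boundary only under strict necessity.
  return !p.sharesUserData || p.dataSharingIsNecessary;
}
```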

This approach aligns with the broader philosophy from Why We Don’t Chase Growth at Any Cost: we grow intentionally, not indiscriminately.

We didn’t build configurability for its own sake

Customization feels empowering — until it becomes a burden.

Infinite toggles don’t give users control. They give them responsibility for decisions the system should have made safely by default.

We avoided:

  • advanced configuration exposed without strong reasons
  • options that exist mainly to avoid internal decisions
  • settings that shift risk management onto users

A secure product shouldn’t require expertise to be safe.

When in doubt, we chose fewer options and stronger defaults.
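
As a minimal sketch (the settings shape is hypothetical): the safe choices are fixed by construction, and only decisions that genuinely vary per user remain configurable.

```ts
// Hypothetical settings shape, for illustration only.
interface Settings {
  telemetry: "off";              // not a toggle: safe by construction
  encryption: "on";              // likewise
  sessionTimeoutMinutes: number; // one of the few genuine per-user choices
}

// Strong defaults; callers override only what truly varies between users.
function makeSettings(timeoutMinutes: number = 15): Settings {
  return { telemetry: "off", encryption: "on", sessionTimeoutMinutes: timeoutMinutes };
}
```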

This mindset resonates with ideas from The Illusion of Control in Modern Digital Life — too many options often mimic control while actually diluting it.

We didn’t build features we couldn’t fully explain

If a feature required:

  • a long disclaimer
  • vague documentation
  • or “trust us” reasoning

It didn’t make it in.

Opacity is not a complexity problem. It’s a responsibility problem.

This echoes the philosophy of Why Digital Self-Sovereignty Matters: users deserve systems they can understand, not systems they’re left to hope are safe.

If we couldn’t explain what something does, what it touches, and what happens when it fails — then shipping it would have been irresponsible, no matter how impressive it looked in demos.

What refusing to build actually gave us

Saying no didn’t make the product smaller in impact.
It made it smaller in uncertainty.

It gave us:

  • clearer security boundaries
  • fewer hidden dependencies
  • lower long-term maintenance risk
  • and something most products lose early: confidence in what the system doesn’t do

Users don’t interact with those absences directly.
But they feel them — in predictability, in calmness, in the lack of surprises.

We’ve seen how this plays out in earlier explorations — from minimizing data collection to avoiding engagement illusions — and it continues to shape how we think about what software should be.

The uncomfortable truth

Most products don’t ship too little.
They ship too much before they understand the consequences.

Refusal isn’t conservatism.
It’s discipline.

And discipline is the only way to build software that stays trustworthy when no one is watching.

In short

We didn’t build everything we could.
We built what we were willing to take responsibility for.

Everything else stayed on the refusal list — exactly where it belongs.
