There is a familiar promise in software and product design:
Build for everyone, and everyone will benefit.
It sounds inclusive.
It sounds fair.
It sounds safe.
In reality, this approach often does the opposite.
Building for everyone usually means protecting no one.
Safety Is Not Universal — It’s Perceived
Users don’t evaluate safety in abstract terms. They don’t read policy documents or internal principles. They decide based on signals, friction, and lived experience.
As explored in Software safety: how users decide, users make fast, emotional judgments about whether a system feels safe for them. Not for everyone — for them.
When a product tries to be universal, it often removes the very signals that allow users to feel protected:
- clear boundaries
- explicit constraints
- visible enforcement
What remains is ambiguity. And ambiguity feels unsafe.
Trust Erodes Quietly, Then Suddenly
Products rarely lose users because of a single dramatic failure. More often, trust decays slowly — through small inconsistencies, vague rules, and a sense that “no one is really in charge here.”
That erosion is exactly why users leave, as described in Why users abandon products they don’t trust.
When systems claim to serve everyone equally, users quickly learn that:
- abuse is tolerated “for now”
- harm is contextualized instead of addressed
- enforcement depends on scale, not principle
At that point, leaving becomes a rational act of self-protection.
Ethics Only Matter When They Constrain
Ethics that don’t limit behavior aren’t ethics — they’re branding.
Real ethical design introduces constraints:
- on growth
- on engagement
- on what is allowed, even if profitable
That idea is central to Ethics as a design constraint.
Designing for everyone often avoids constraints altogether. Why?
Because constraints upset someone.
They slow something down.
They force a clear “no”.
But without those limits, systems default to protecting the most powerful, loud, or persistent actors — not the most vulnerable ones.
Neutral Systems Still Make Choices
Many teams hide behind technical neutrality:
“The system just behaves this way.”
“The algorithm is impartial.”
“We don’t interfere.”
But systems are never neutral.
They encode priorities.
Even deeply technical decisions — like performance trade-offs or infrastructure optimizations — shape user experience and access in uneven ways. A good reminder of this comes from a purely technical angle: Cloudflare's write-up of the Python cold start problem.
What looks like an engineering detail often becomes:
- latency for some users
- exclusion for others
- advantage for those who already have resources
Neutrality doesn’t remove responsibility. It just hides it.
Protection Requires Choosing Sides
To protect someone, you must decide:
- who the system is for
- what harms matter most
- which behaviors are unacceptable, even if common
That means disappointing someone else.
Building for everyone avoids that discomfort.
But avoiding discomfort is not the same as avoiding harm.
A system that refuses to choose ends up reinforcing existing power dynamics — because the powerful actors it defaults to serving are the ones who needed the least protection to begin with.
The Real Question
So the question is not:
“How do we build for everyone?”
The honest question is:
“Who are we willing to inconvenience, restrict, or lose in order to protect someone else?”
If the answer is "no one," then protection was never part of the design.
Conclusion
Building for everyone feels generous.
It feels scalable.
It feels safe — for the builders.
But safety does not emerge from universality.
It emerges from specificity, constraints, and accountability.
Protection is always selective.
And pretending otherwise only ensures that no one is truly protected.