Before I wrote a line of spec for the acceptance facet, I sat with a question that made me genuinely uncomfortable.
I was building a tool that would collect data on racial tolerance, LGBTQ+ legal protections, housing discrimination by nationality, and how well immigrants actually integrate into a society. I was going to use that data to help a mixed-race family find a city where their children would not be bullied for how they look.
But I could see the other use. The same data, pointed in a different direction, could theoretically help someone find a destination with fewer people who look different from them.
I did not want to build that product. The question was whether wanting to avoid it was enough, or whether I needed to make it structurally impossible.
I went with structurally impossible.
The misuse case, named directly
Most products that touch diversity or inclusion data handle this in one of two ways. They ignore the misuse case entirely and hope their users are well-intentioned. Or they write a content policy that says something like “this product is not designed to be used for discrimination” and call it done.
Neither approach is serious. A policy document does not change what a product is capable of. If the data can be surfaced in a way that enables demographic sorting, someone will sort by demographics.
I had seen enough product decisions defended with “we didn’t intend for it to be used that way” to know that intent is not a safeguard. Architecture is.
The asymmetric design
The acceptance facet is built with a single directional purpose: to optimize for the safety and comfort of people who face discrimination. It is not built to serve users seeking demographic uniformity. That is a deliberate and permanent product decision.
Here is what that looks like in practice.
Scores measure openness, not homogeneity. High acceptance scores go to diverse, tolerant places. A diverse country with strong tolerance indicators scores higher than a homogeneous country with the same indicators. If someone were trying to use the product to find a racially uniform destination, the scoring model would direct them toward the most diverse, inclusive places on earth. The tool does not work for that purpose.
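The asymmetric idea can be sketched in a few lines. This is an illustration, not the real model: the function name, the inputs, and the weights are all hypothetical. The point is structural: tolerance and diversity both contribute positively, so a query optimized for homogeneity is self-defeating by construction.

```python
# Hypothetical sketch of asymmetric scoring; weights and names are
# illustrative, not WhereToAdvisor's actual model.
def acceptance_score(tolerance_index: float,
                     diversity_index: float,
                     discrimination_reports: float) -> int:
    """All inputs normalized to 0..1. Returns a 0-100 score.

    Tolerance and diversity both raise the score; documented
    discrimination lowers it. There is no input that rewards
    homogeneity, so the top-scoring places are necessarily the
    most diverse and tolerant ones.
    """
    raw = (0.6 * tolerance_index
           + 0.25 * diversity_index
           - 0.15 * discrimination_reports)
    return round(max(0.0, min(1.0, raw)) * 100)
```

With weights like these, a diverse, tolerant city always outscores an otherwise-identical homogeneous one, which is the whole asymmetry in one place.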
No demographic sorting exists anywhere in the product. There is no filter for majority-anything. No percentage display for any group. No sort by ethnic composition. The browse-and-sort misuse vector does not exist because the interface never enables it.
The quiz asks about experiences to avoid, not demographics to seek. “Have members of your family experienced discrimination?” not “What population composition do you prefer?” The second question does not exist in any form, in any version of the product. I made that a documented requirement with the same standing as a security requirement.
Demographic data as input, not output
This is the architectural key, and it took some careful thinking to get right.
We use ethnic fractionalization indices, foreign-born population percentages, and language prevalence data. All of it feeds the scoring algorithm. None of it surfaces to the user as a filter, a percentage, or a sortable column.
A user sees: “Acceptance score for your family: 83/100.”
They do not see the underlying demographic breakdowns. They do not see what percentage of the population looks like them. They see what the data, weighted for their specific vulnerability profile, predicts about their lived experience in that city.
The data is an input. The personalized score is the output. No raw demographic data crosses the line between the scoring engine and the user interface. That boundary is enforced in the architecture, not in a policy document.
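That boundary can be expressed as a type boundary. In this sketch (type names and the toy formula are assumptions, not the real engine), demographic data exists only as an engine-internal input type, and the only type that crosses to the UI carries a city name and a score, nothing else.

```python
# Sketch of the input/output boundary; types and formula are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class CityDemographics:
    """Engine-internal input. Never serialized to the UI layer."""
    fractionalization_index: float  # 0..1
    foreign_born_pct: float         # 0..1

@dataclass(frozen=True)
class PersonalizedScore:
    """The only type that crosses the engine/UI boundary."""
    city: str
    score: int  # e.g. 83 out of 100; no demographic fields exist here

def score_for_user(city: str, demo: CityDemographics,
                   profile_weight: float) -> PersonalizedScore:
    # Demographic inputs are consumed here and discarded; only the
    # personalized score leaves the function.
    raw = (0.5 * demo.fractionalization_index
           + 0.5 * demo.foreign_born_pct) * profile_weight
    return PersonalizedScore(city=city, score=round(min(1.0, raw) * 100))
```

Because `PersonalizedScore` has no demographic fields, a UI built against it cannot display, filter, or sort by demographics even by accident.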
The data privacy layer
The quiz collects sensitive personal information. Racial and ethnic identity. Sexual orientation. Religious identity. This is the kind of data that, in the wrong hands or the wrong context, can cause real harm.
I made four decisions early that I would make again.
First, minimization. Acceptance intake data is used for score personalization only. It is not stored beyond the session unless the user explicitly creates an account and opts in to saving their profile.
Second, no aggregation. Individual identity data is never analyzed for trends, sold, or shared with third parties. The product does not create datasets describing its users’ racial or identity composition.
Third, deletion on demand. Users who create accounts can delete their acceptance profile immediately and completely.
Fourth, encryption at rest and in transit, with access restricted to the scoring engine.
These are engineering requirements. They live in the technical spec alongside the security requirements, not in a privacy policy addendum that nobody reads.
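The first and third requirements, minimization and deletion on demand, reduce to a small storage contract. This in-memory sketch is an assumption-laden stand-in (a real store would be encrypted at rest, per the fourth requirement); it only illustrates the behavior a spec can test: nothing persists without an explicit opt-in, and deletion is immediate and complete.

```python
# Illustrative store contract; names are hypothetical. A production
# implementation would add encryption at rest and restricted access.
class AcceptanceProfileStore:
    def __init__(self) -> None:
        self._profiles: dict[str, dict] = {}

    def save(self, user_id: str, profile: dict, opted_in: bool) -> None:
        if not opted_in:
            return  # minimization: session-only, nothing is stored
        self._profiles[user_id] = profile

    def delete(self, user_id: str) -> None:
        # Deletion on demand: immediate, complete, and idempotent.
        self._profiles.pop(user_id, None)

    def has_profile(self, user_id: str) -> bool:
        return user_id in self._profiles
```

Framing the privacy promises as testable contract methods is what moves them from a policy addendum into the engineering spec.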
What this means for any product that touches identity data
I have been in enough product reviews to know how these conversations usually go. Someone raises a potential misuse case. Someone else says that is not the intended use. The team moves on.
That is how products end up being used in ways their builders find genuinely embarrassing.
The question worth asking early, before you build anything, is: what would this product enable if used by someone with the worst possible intent? If the honest answer is something you would not want your name on, that is a design problem, not a communications problem.
Ethical constraints built into the architecture last. Ethical constraints built into the policy get overridden when someone makes a compelling business case.
For WhereToAdvisor, the asymmetric design of the acceptance facet is a permanent architectural constraint. Any change to the scoring logic, data display, or filtering capabilities that would enable demographic sorting requires documented justification and leadership approval. The default answer to such requests is no.
That is not a policy. It is a requirement with teeth.
Next: The Dealbreaker Problem. The first version of the dealbreaker system could produce zero results. Here is why that is the wrong model and what we built instead.
Read the full series at wheretoadvisor.com/blog.