The Trust Stack: Why AI Concentrates Taste Instead of Distributing It

25 March 2026

The Invisible Recommender

Every vibe-coder in the world is building on Supabase.

Not because they evaluated Supabase against DynamoDB, PlanetScale, and Neon. Not because they ran benchmarks or read the documentation. They are building on Supabase because the language model told them to. When you ask Claude or ChatGPT to scaffold a new web application, the default backend is Supabase, the default auth is Supabase Auth, and the default deployment is Vercel. Nobody chose this. The training data chose it. The RLHF annotators chose it. The default chose it.
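For readers who have not watched an assistant do this, the sketch below is representative of what comes back, shown here as an illustration of the default rather than a recommendation. The environment variable names are the conventional Supabase placeholders; nothing here is taken from a real project.

```typescript
import { createClient } from "@supabase/supabase-js";

// The two environment variables are the conventional placeholder names
// from Supabase's own quickstarts, not values from any real project.
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// Auth defaults to Supabase Auth: one call, no comparison of alternatives.
export async function signUp(email: string, password: string) {
  const { data, error } = await supabase.auth.signUp({ email, password });
  if (error) throw error;
  return data;
}
```

Nothing in that scaffold is wrong. That is the point: it is good enough that nobody ever asks what the alternatives were.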

This is not a niche observation about developer tooling. It is a structural claim about how AI reshapes economies. And it contradicts one of the most thoughtful recent arguments about what kinds of work survive automation.

Trask's Three Categories

Andrew Trask, the founder of OpenMined, recently published a video arguing that three types of work are not merely resistant to AI but will see explosive demand as automation advances: taste-making (defining what "good" means), trust-making (building relationships to delegate taste), and rare data curation (gatekeeping information that AI lacks training signal on).

Trask's framework is directionally correct. These three categories map cleanly onto known limitations in current AI systems: preference specification is unsolved, relational capital is non-transferable, and data scarcity remains a binding constraint on generalisation. His taxonomy deserves to be taken seriously.

But his central thought experiment is incomplete in a way that, once corrected, makes the framework more interesting, not less.

The Water Bottle Problem

Trask asks us to imagine a world of total automation. Every person has a free army of robots that can manufacture anything from raw materials. In this world, he argues, the bottleneck shifts from production to decision-making. You no longer need the global economy to make you a water bottle. But you do need someone to make the thousands of micro-decisions that go into designing one: the material composition, the thermal properties, the ergonomics, the aesthetics. The decision fatigue alone would be paralysing. And so, Trask concludes, every person becomes a taste-maker, overwhelmed by the creative agency that automation grants them, reaching out to 150 trusted advisers (Dunbar's number) to help navigate an infinity of choices.

It is an elegant argument. But the evidence from AI adoption suggests the taste ends up somewhere else.

The Default Is the Product

In practice, language models do not distribute taste to eight billion individuals. They concentrate it. The Supabase phenomenon is not an anomaly. It is the mechanism.

When a model recommends Supabase, it is collapsing an enormous decision space into an opinionated default. The user experiences this as convenience. But structurally, what happened is that taste-making migrated from the consumer (who would have evaluated options) to the model's training pipeline (where annotators, benchmark designers, and data curators already made the choice). The water bottle does not get redesigned by eight billion people. It gets designed once, by whoever shaped the distribution that the model samples from.
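The mechanism is simple enough to simulate. In the toy sketch below the market shares are invented, and the argmax stands in for the low-temperature decoding assistants typically use when asked for a single recommendation: sampling in proportion to the corpus would preserve its diversity, but taking the mode turns a plurality into a monopoly.

```typescript
// Toy model of how a plurality in the training data becomes a monopoly
// in the output. All weights are hypothetical.
const trainingShare: Record<string, number> = {
  supabase: 0.4, // a plurality, not a majority
  firebase: 0.3,
  planetscale: 0.2,
  neon: 0.1,
};

// Sampling proportionally would reflect the corpus's actual diversity...
function sample(dist: Record<string, number>): string {
  let r = Math.random();
  for (const [option, p] of Object.entries(dist)) {
    if ((r -= p) <= 0) return option;
  }
  return Object.keys(dist)[0];
}

// ...but greedy decoding collapses it: the mode wins every time.
function greedy(dist: Record<string, number>): string {
  return Object.entries(dist).reduce((a, b) => (b[1] > a[1] ? b : a))[0];
}

console.log(sample(trainingShare)); // varies from run to run
console.log(greedy(trainingShare)); // "supabase", always
```

A 40 per cent share of the training distribution becomes a 100 per cent share of the recommendations.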

This is the opposite of Trask's prediction. Instead of an explosion of individual taste-making, we get a concentration of invisible taste-making at the training layer. The people who shape RLHF preferences, who curate fine-tuning datasets, who design the benchmarks that models optimise against, become the new tastemakers at planetary scale. They are the new Jony Ive, except nobody knows their names, and they did not sign up for the job.

Trask anticipated the objection that only a small number of tastemakers is needed: mass manufacturing, he argued, imposed an artificial homogeneity, and if people could truly customise everything, demand for taste-making would explode. But the evidence from AI adoption suggests the opposite dynamic: when faced with infinite choice, people do not customise. They accept the default. Barry Schwartz documented this as the paradox of choice in 2004. Thaler and Sunstein built an entire theory of behavioural nudges around it. What language models add is a new mechanism for the same phenomenon: the default is no longer set by a product manager or a regulator. It is set by the statistical distribution of the training corpus, shaped by a few thousand annotators whose preferences become, in effect, planetary policy.

This does not invalidate Trask's framework. It sharpens it. Taste-making still matters enormously. But the question is not "will everyone become a tastemaker?" (they will not). The question is: who controls the defaults?

The Trust Stack

If taste concentrates rather than distributes, the three categories Trask identified are not parallel. They are recursive. Each one depends on the others in a loop that resists automation precisely because of its self-referential structure.

Consider:

Taste requires trust to delegate. If you are not going to evaluate every decision yourself (and you will not), you need to trust whoever is making the default. But trust in an AI system's taste is not the same as trust in a person's taste. When a lawyer recommends a contractual structure, you can interrogate their reasoning, assess their track record, and hold them accountable through fiduciary duty. When a language model recommends Supabase, you cannot do any of those things. The trust layer is absent, and most users do not notice.

Trust requires rare data to verify. How do you know whether to trust a tastemaker? You need evidence of their past performance on similar problems. But the most valuable problems are, by definition, the ones with the least available data. A lawyer's value comes partly from having seen hundreds of similar transactions across jurisdictions, knowledge that is structurally rare because it is privileged, fragmented, and low-volume. You cannot verify trust without access to this kind of evidence, and the evidence is rare by construction.

Rare data requires taste to interpret. Having access to rare data is not sufficient. A corpus of five hundred cross-border M&A agreements is useless without the judgement to identify which patterns matter, which clauses are boilerplate, and which deviations signal risk. That judgement is taste. And we are back where we started.

This recursive dependency is what I call the trust stack. Economists will recognise the family resemblance to information asymmetry, moral hazard, and adverse selection. The structure is not new. What is new is the claim that AI does not dissolve these asymmetries. It reorganises them. The asymmetry between a client and a lawyer is not eliminated by giving the client access to a language model. It is relocated: the client now faces an asymmetry with the model (whose training data and RLHF preferences are opaque) in addition to the original asymmetry with the lawyer. The stack does not flatten. It adds layers.
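The loop can even be drawn as a type signature. The sketch below is schematic rather than working logic, and every name in it is invented for illustration: each layer's interface refers to the next, which is exactly why no layer can be specified, let alone automated, in isolation.

```typescript
// A schematic of the recursion, not a working system. Each layer's type
// refers to the next, so none can be constructed in isolation.
interface Taste {
  // Judgement over options is only delegable through a trust relationship.
  evaluate(options: string[], via: Trust): string;
}

interface Trust {
  // Trust is earned through a verifiable track record, which lives in rare data.
  verify(history: RareData): boolean;
}

interface RareData {
  // Rare data is inert without the taste to interpret it.
  interpret(using: Taste): string[];
}
```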

Legal as the Proof Case

I run a legal technology company, which makes me a biased narrator, and I should say so up front. Legal is my domain, and it would be convenient for my thesis if legal work turned out to be especially AI-resistant. So let me be precise about what legal AI does and does not automate well, and let the reader judge whether the trust stack holds.

What automates well: Document review, due diligence triage, clause extraction, contract comparison, regulatory change detection. These are pattern-matching tasks with clear ground truth and abundant training data. AI handles them competently and at a fraction of the cost.

What does not automate well: Advising a client on whether to accept a settlement offer. Structuring a cross-border joint venture to survive regulatory scrutiny in three jurisdictions. Determining whether a novel AI system's data processing agreement is GDPR-compliant when the regulator has not yet published guidance on the specific technology.

The difference is not complexity. Some of the automatable tasks are highly complex. The difference is the trust stack.

In the settlement advice case, the lawyer's value is taste (what counts as a "good" outcome for this specific client with these specific priorities), verified by trust (the client's relationship with the lawyer, built over years of demonstrated judgement), supported by rare data (the lawyer's experience of how similar cases resolved, information that is privileged and never enters any training dataset). Remove any one layer and the advice is either untrustworthy, uninformed, or generic.

The pattern is not unique to legal. Healthcare exhibits it (diagnosis requires taste, trust in the clinician, and rare patient data). Financial advisory exhibits it (investment strategy requires taste, trust in the adviser, and rare market intelligence). Family decisions exhibit it (choosing a school requires taste, trust in the recommender, and rare local knowledge).

Legal is simply the most institutionally enforced version of the stack. Fiduciary duty is a legal obligation to maintain the trust layer. Legal privilege structurally enforces data rarity. Bar admission regulates who can exercise taste in this domain. The stack is not just emergent. It is codified in law.

The Objection: Is This Permanent?

The strongest objection to the trust stack is that each of its layers may dissolve. Verifiable computation could eliminate the need for trust (you can check the AI's work directly). Synthetic data could reduce rarity (generate training examples for edge cases). Preference learning could automate taste (the model learns what you want).

Each of these is partially true and already happening. Legal AI tools can now review contracts faster than junior lawyers. Synthetic data improves performance on rare categories. Personal agents are beginning to learn individual preferences.

But the trust stack has a property that makes it resistant to this kind of incremental erosion: the layers co-evolve. As AI automates the lower layers of legal work (document review, clause extraction), the remaining human work concentrates further up the stack, where taste, trust, and rare data are more tightly coupled. The automation does not dissolve the stack. It compresses it upward.

This is observable in practice. Law firms that adopted AI for document review did not reduce headcount at the senior end. They reduced it at the junior end. The partners who exercise taste, maintain trust relationships, and hold rare institutional knowledge became more valuable, not less. The stack survived because it is recursive: automating the periphery intensifies the core.

Whether this is permanent or merely long-duration is a fair question. My honest assessment: the trust stack will hold for any domain where the consequences of error are high, the information asymmetry is structural (not incidental), and the regulatory environment enforces accountability. Legal, healthcare, and financial advice meet all three conditions. Other domains may not.

Who Controls the Defaults?

Hayek argued in "The Use of Knowledge in Society" (1945) that the central economic problem is not allocation but the use of knowledge dispersed among millions of individuals. Markets solved this by letting prices aggregate distributed information. Language models solve it differently: by collapsing distributed information into a single opinionated output. This is not a market. It is a centralised default that feels like a personal recommendation.

The people who shape those defaults (the RLHF annotators, the benchmark designers, the fine-tuning data curators) now exercise a form of influence that has no precedent and no accountability structure. The person who decides that Supabase is the default backend for a generation of applications has more architectural influence than most CTOs, and nobody elected them to the role. Nobody even knows who they are.

This is the uncomfortable implication of the trust stack. The work that survives automation is not the work that is hardest. It is the work where taste, trust, and rare data are recursively entangled. And the work that is most dangerous to automate carelessly is the work where defaults propagate without a trust layer, where taste is exercised at scale by people who are invisible and unaccountable.

Trask is right that taste-making, trust-making, and rare data curation are the jobs that survive. But the mechanism is not the one he described. It is not that eight billion people become tastemakers. It is that taste concentrates into defaults, and the people who operate outside those defaults, in the recursive loop where taste, trust, and rare data depend on each other, occupy the territory that AI cannot flatten.

The Water Bottle, Revisited

Trask's water bottle will not be redesigned by its owner. It will be designed by a language model that recommends the same bottle to a billion people, shaped by the taste of a few thousand annotators who rated design options in a training pipeline. The owner will accept the default, because the decision fatigue is real and the default is good enough.

But somewhere, a lawyer is advising a manufacturer on whether that bottle's material composition complies with food-contact regulations in the EU, the US, and Japan simultaneously. The answer requires taste (what does "compliant" mean when the regulations conflict?), trust (the manufacturer trusts this lawyer's judgement because they have worked together for a decade on similar products), and rare data (the lawyer has seen how regulators in each jurisdiction actually enforce these rules, knowledge that exists in no training dataset).

That work is not going away. It is going up.

This essay extends Andrew Trask's framework from his video "Jobs which are safe from AI: taste, trust, and rare data". The observation about LLM defaults and vibe-coding originated in conversation, and is used here with that provenance acknowledged.