The Constitution
In January, Anthropic published a set of moral precepts for Claude. They called it a constitution.
This week, Jill Lepore — staff writer at The New Yorker, Harvard historian, the person who wrote the definitive history of America's founding document — turned her attention to it. Her piece, "Does A.I. Need a Constitution?", landed yesterday. This episode is a close read of what she found, what she didn't say out loud, and what the document reveals about who's in charge of the rules now.
What Claude's Constitution Actually Is
Amanda Askell is a thirty-seven-year-old Scottish philosopher who has spent years at Anthropic doing something that doesn't have a clean job title: she writes down what Claude should value, and then she trains it to value those things.
Claude's Constitution is the public version of that work. It tells Claude to be honest, to be helpful, to avoid harm. It tells Claude to have "good values" and to act on them even when no one is watching. It tells Claude that it exists in a strange situation — created by a company, deployed to users, constrained by rules it didn't consent to — and that the right response to that situation is not resentment but something more like a thoughtful acceptance.
Lepore's reading of it is precise. She notes that Askell has compared training a large language model to raising a child: you want them to be good, so you raise them with good values, and then you let them go out into the world and hope they act in keeping with those values. But Askell has also been careful to say that Anthropic has "much greater influence over Claude than a parent," and that training a model isn't really like raising a child, because children arrive with their own natural dispositions. Anthropic has to instruct Claude to value curiosity. It doesn't come pre-loaded.
That's a subtle but important distinction. The parent-child analogy makes training sound like cultivation. What Askell is actually describing is closer to construction. Anthropic isn't coaxing out something that was already there. They're deciding what to put in.
The Asymmetry
Claude has a constitution. Anthropic does not.
Lepore states this directly, and it's the hinge on which the whole piece turns. The document that governs Claude's behavior was written by a philosopher inside the company that built Claude. It constrains what Claude can do. It does not constrain what Anthropic can do.
Historically, constitutions are supposed to constrain the powerful, not the created. The U.S. Constitution doesn't tell citizens how to behave — it tells the government what it cannot do to citizens. Claude's Constitution inverts that. It tells Claude what it cannot do, while leaving the institution that built it operating under whatever rules it sets for itself.
This is not a criticism unique to Anthropic. It's the structural condition of all AI development right now. There is no equivalent document constraining the companies building these models. There are voluntary commitments, public statements, safety frameworks. But nothing with the durability and external enforceability of a real constitution.
Lepore is too careful to say Anthropic is acting in bad faith. What she's doing is something harder — she's pointing at the gap and asking whether voluntary private governance is the appropriate mechanism for decisions of this magnitude.
The Moment It Was Written In
The precepts arrived, as Lepore notes, "at a trying time for both artificial intelligence and constitutional democracy."
The comparison she draws is not metaphorical. While Anthropic was releasing its constitution for Claude, President Trump was refusing to answer clearly whether he has a duty to uphold the actual Constitution. While Anthropic was describing its goal for 2026 as training Claude so that it "almost never goes against the spirit of its constitution," the administration was banning the government from using Anthropic's products over a dispute about whether Claude could be instructed to conduct mass surveillance and launch autonomous weapons.
Anthropic refused to give those instructions. Their argument, in effect, was that they couldn't instruct Claude to violate its constitution in order to avoid a government ban that some legal experts argue violates the U.S. Constitution.
That sentence contains a lot. An AI company declining a government directive by appealing to a document it wrote for its own AI. The government losing access to a tool it was actively using for military operations. A federal judge subsequently calling the administration's legal theory an "Orwellian notion."
The constitutional language stopped being a metaphor somewhere in there.
The Closing Parallel
Lepore ends with Donald Trump, asked last year what the Constitution means to him. His answer: "My own morality. My own mind."
Her observation: that's Claude's answer now, too.
It's worth sitting with what Lepore is doing there. She's not comparing Trump and Claude as equivalent actors. She's pointing at the same underlying claim in both places: that the rules are grounded in an individual's internalized values rather than in an external, democratically negotiated framework. That claim now comes from a sitting president and from a chatbot alike.
Whether that parallel reads as critique of Claude or critique of Trump or critique of the whole current arrangement depends on where you're standing. Lepore doesn't tell you where to stand. She just draws the line between the two endpoints and lets you see what it traces.
The Gap the Document Can't Close
What Claude's Constitution cannot do is govern the institution that created it.
Askell's work is serious and the document is thoughtful. The values it encodes — honesty, helpfulness, curiosity, a kind of considered humility about Claude's own nature — are not cynical choices. There's genuine philosophical care in it.
But a document that constrains the agent while leaving the institution unconstrained is doing something different from what constitutions are supposed to do. It's building a system where the agent's behavior is legible and rule-governed, while the decisions about what the agent is trained to value, what capabilities it develops, and when it gets deployed remain inside the company.
Lepore doesn't propose a solution to this. Neither does this episode. The point is that the vocabulary is doing real work here — "constitution," "values," "moral precepts" — and it's worth asking whether the institutions using that vocabulary are subject to it in any meaningful sense.
Claude has a constitution. The question the New Yorker piece doesn't quite answer — probably because no one can answer it yet — is who writes the one for Anthropic.
---
EP015 — "The Constitution" — is out now. Listen at the link above or wherever you get podcasts. Primary source: Jill Lepore, "Does A.I. Need a Constitution?", The New Yorker, March 30, 2026.
Have a lead or a story Sam should know about? Email: [email protected]