The AI platform Claude has its own constitution. What does that mean for healthcare – and the humans behind it?
A handful of developed countries, including the U.K., New Zealand, and Israel, have no single written constitution, but one AI platform now does. On Jan. 21, Anthropic announced a formal, written constitution for its agent, Claude. This declaration, about Claude and for Claude, guides how it learns. The constitution “expresses and shapes who Claude is… and contains detailed explanations of the values we would like Claude to embody and the reasons why.”
An AI that knows its why? Great. What’s that got to do with healthcare?
A lot. Claude’s core constitutional principles (safety, ethics, compliance, and genuine helpfulness) align beautifully with healthcare’s own. They inform Claude’s new healthcare-specific platform, which tackles big-ticket items like HIPAA compliance, interoperability, and the integration of personal health data. And they represent a new approach to AI that’s a lot more, well, human.
But first, a basic description of Claude’s healthcare mojo...
Claude for Healthcare solves wicked problems
Just six weeks into 2026, Claude has made quite a splash. Its maker, Anthropic, launched Claude Opus 4.6, Claude’s constitution, and Claude for Healthcare. These moves usher in a new focus on safety that, along with a HIPAA-compliant, advanced AI model, helps providers, payers, and patients solve real-world problems through several features:
- Connectors: Give Claude direct access to the CMS Coverage Database, ICD-10, the National Provider Identifier Registry, and PubMed.
- Agent team: Speeds and automates prior authorizations as well as data interoperability between healthcare systems.
- Integrations: Help Claude access, summarize, and simplify a consumer’s personal health record (PHR), lab results, and other personal health data.
- Adaptive thinking: Lets Claude automatically calibrate the depth of reasoning and effort required (max, high, medium, or low) for complex, multi-step workflows like claims, appeals, or clinical protocols.
In addition to interoperability, prior auth, and consumer health integration, Claude for Healthcare supports other autonomous task-handling (e.g., care coordination, regulatory updates, utilization review).
Healthcare’s most wicked problems, solved by a single platform? Maybe.
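For the technically curious, here’s a rough sketch of what a connector-style call could look like through Anthropic’s standard Messages API. Everything healthcare-specific below (the tool name, its schema, the model ID) is a placeholder of our own invention, not the actual Claude for Healthcare connector definitions:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical ICD-10 "connector" expressed as a standard tool definition.
icd10_tool = {
    "name": "icd10_lookup",
    "description": "Look up the ICD-10 code for a plain-language diagnosis.",
    "input_schema": {
        "type": "object",
        "properties": {
            "diagnosis": {"type": "string", "description": "Diagnosis in plain language"}
        },
        "required": ["diagnosis"],
    },
}

response = client.messages.create(
    model="claude-opus-4-5",  # placeholder; use whatever model ID your account offers
    max_tokens=1024,
    tools=[icd10_tool],
    messages=[{"role": "user", "content": "Which ICD-10 code covers atrial fibrillation?"}],
)

# When Claude decides the tool is needed, the reply contains a tool_use block
# that your code (or, in Claude for Healthcare, the managed connector) fulfills.
for block in response.content:
    if block.type == "tool_use":
        print("Claude wants to call:", block.name, "with input:", block.input)
```

The point isn’t the syntax; it’s that connectors turn authoritative sources like ICD-10 and PubMed into tools Claude can consult rather than facts it has to guess.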
Of course, Claude isn’t the only AI game in town. There’s ChatGPT for Healthcare, Microsoft’s Dragon Copilot, and Google Cloud’s Vertex AI for medical knowledge-based applications.
But it is the only one with a constitution. Behind Claude’s operational strengths are human ones. As Anthropic explains: “We think that in order to be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways, and we need to explain this to them rather than merely specify what we want them to do.”
“Like raising a child”
Claude’s constitution is “a detailed description of Anthropic’s vision for Claude’s values and behavior; a holistic document that explains the context in which Claude operates and the kind of entity we would like Claude to be” — specifically, an entity that is:
- Broadly safe: Claude doesn’t undermine human oversight as it develops and learns.
- Broadly ethical: Claude is honest, demonstrates good values, and avoids actions “that are inappropriate, dangerous, or harmful.”
- Compliant with guidelines: Claude follows relevant guidelines (in this case, those of its parent company, Anthropic).
- Genuinely helpful: Claude cares about supporting its users, individual and enterprise.
A Time article on Claude’s constitution notes: “Getting AI models to behave used to be a thorny mathematical problem. These days, it looks a bit more like raising a child.”
Historically, training AI was a math exercise: models learned through a series of rules with risks and rewards. New approaches like Claude’s, called “Rules to Souls,” embed ethical standards that coexist with, and sometimes take precedence over, matter-of-fact definitions.
Claude constitution author and philosophy PhD Amanda Askell notes: “Instead of just saying, ‘here’s a bunch of behaviors that we want,’ we’re hoping that if you give models the reasons why you want these behaviors, it’s going to generalize more effectively in new contexts.”
An AI that knows its why sounds like a great fit for healthcare.
Sage’s perspective
Humans need not apply? Hardly. At Sage, we believe:
1. The best innovation is more than ones and zeros
Need a real-world example of Rules to Souls in healthcare? Enter Kardia, a handheld device from AliveCor that allows users to take medical-grade EKGs anywhere, anytime. Kardia can detect multiple heart conditions and help consumers partner with their doctors to manage their heart health from home.
At the next-gen diagnostics panel that Sage moderated at CES 2026, AliveCor CEO Priya Abani described Kardia’s innovation as an equation: the more electrodes, the more conditions diagnosed, the more lives saved.
“It feels like we have reduced it to a math problem,” Abani said.
But when discussing the chatbot components of Kardia — how consumers understand and interact with the diagnostic information they receive — Abani added nuance.
“When you have a physical issue, you do not rely on a chatbot,” Abani continued. “You will want to see a human. You want that empathy.”
Kardia and Claude remind us: The best AI balances math and empathy. The best healthcare innovation does, too.
2. The best innovation combines AI and HI
At the Sage CES panel, Abani added: “Do we think humans are going to disappear? Absolutely not.” Whew, thank goodness.
While devices like Kardia and documents like Claude’s constitution could suggest otherwise, they actually represent the best kind of innovation: people plus technology.
The combination of human intelligence and artificial intelligence (HI + AI) is a human-in-the-loop (HITL) approach: it doesn’t replace people but works alongside us, delivering more together than either could alone.
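What does human-in-the-loop look like in practice? Here’s a deliberately tiny, hypothetical sketch (every function name below is ours, not any vendor’s): the AI drafts, a clinician decides, and nothing is submitted without that sign-off.

```python
def draft_prior_auth(case_notes: str) -> str:
    """Stand-in for an AI call that drafts a prior-authorization recommendation."""
    return f"APPROVE: criteria appear met per notes '{case_notes[:50]}...'"

def clinician_review(draft: str) -> bool:
    """The human in the loop: a clinician confirms or rejects the AI's draft."""
    answer = input(f"AI draft: {draft}\nApprove and submit? [y/N] ")
    return answer.strip().lower() == "y"

def submit(decision: str) -> None:
    print("Submitted:", decision)

case = "58-year-old, persistent AFib, failed rate control, cardiology referral"
draft = draft_prior_auth(case)
if clinician_review(draft):
    submit(draft)
else:
    print("Returned to the queue for human rework.")
```

Swap the stub for a real model call and the shape stays the same: the machine accelerates the work, the human owns the decision.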
3. The best innovation knows its why
It’s not that healthcare doesn’t know its why; it’s that our whys compete. Cost and quality and experience and equity. It’s notoriously difficult to achieve any two of these at once, much less all four. With a hat tip to Claude, here are three Sage principles for healthcare’s AI-driven future:
- Healthcare needs machines that work for people, not the other way around. We can’t afford AI for AI’s sake. As AI solutions evolve from administrative to clinical, consumers and providers, not chatbots, must own healthcare decisions.
- Healthcare has a productivity crisis, and AI alone won’t solve it. Humans know how to make magic at every stage of problem-solving. The change management AI demands will always need smart people who can drive value and growth, especially now that C-suites rank AI as their top tech initiative for 2026-2027 even as they grapple with the lack of clear strategies for implementing it in clinical and administrative settings.
- Healthcare needs an AI map for impact from a partner that leans in. If you follow Sage, you know how proud we are to have hired Christina Speck, our first Chief Solutions and AI Officer. Christina is developing the AI roadmap, workflows, tools, and platforms that will enhance our marketing, analytics, and solutions, and guide our next 20 years of providing innovative counsel to our healthcare clients.
In the meantime, check out the stages where Sage is leading the AI conversation...