AI research and products that put safety at the frontier.

We are building reliable, interpretable, steerable AI systems. Claude is the most trusted assistant for how people actually work — writing, reasoning, and analysis that stands up to scrutiny.

02 · Problem
The problem

Powerful AI is shipping faster than our ability to trust it.

Every serious business now faces a policy question: which AI model can it put in front of customers, code, and regulated data? The answer is rarely a confident yes. Hallucinations are routine. Jailbreaks are easy. Reasoning under load degrades quietly.

Developers shipping on top of generic APIs are absorbing the reputational risk — and they know it. A support bot that makes up refund policies isn't a product issue; it's legal exposure. The frontier gets more capable every quarter, but the answer to "can I actually trust this in production?" has not kept pace.

The teams we talk to aren't looking for the cleverest model. They're looking for the one that behaves predictably when the stakes are real.

73%
of enterprise AI pilots stall at deployment — cited reason: reliability & governance concerns.
2.4×
higher willingness-to-pay for a model with published safety evaluations vs. one without.
03 · Why Now
Why now

A narrow window to build the trusted layer of the AI stack.

[Timeline — Capability, 2020 to now: GPT-3 / scaling laws (2020) → ChatGPT · Claude 1 (2022–2023) → Agentic workflows (now)]
01
Capability is crossing the "agentic" line. Models can now complete multi-step work — not just answer. Trust becomes the binding constraint, not IQ.
02
Enterprise budgets have moved. AI is a line item, not an experiment. Every large buyer now has procurement criteria — alignment, safety, data handling.
03
Regulation is arriving, not threatening. EU AI Act live, US executive orders in motion. The first generation of compliant-by-default models wins the next decade of enterprise deployment.
04 · Solution
Our approach

Claude — the AI that's safe, steerable, and actually useful at work.

We train frontier models with Constitutional AI, our published technique for aligning models with explicit principles rather than hidden post-hoc filters. The result is a model that explains its reasoning, refuses specific harms, and can be steered by customers to their own policies.
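
For readers who want the shape of the technique, a deliberately minimal sketch of the critique-and-revise loop at the core of Constitutional AI follows. Everything in it is an illustrative placeholder: the principles, the prompts, and the generate() stand-in are not our production constitution or training stack.

from typing import Callable

# Illustrative placeholder principles, not the actual constitution.
PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid content that could enable illegal or dangerous activity.",
]

def constitutional_revision(generate: Callable[[str], str], prompt: str, rounds: int = 1) -> str:
    # The model critiques its own draft against each explicit principle,
    # then rewrites it. In the published method the revised drafts become
    # training data; this sketch just returns the final draft.
    draft = generate(prompt)
    for _ in range(rounds):
        for principle in PRINCIPLES:
            critique = generate(
                f"Critique the response below against this principle:\n"
                f"{principle}\n\nResponse:\n{draft}"
            )
            draft = generate(
                f"Rewrite the response to address this critique.\n\n"
                f"Critique:\n{critique}\n\nResponse:\n{draft}"
            )
    return draft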

We then expose Claude through three surfaces — Claude.ai for professionals, the API for developers, and Claude for Enterprise for regulated buyers. Every layer shares the same aligned model. Every layer is safety-evaluated and publicly documented.

Constitutional AI

Alignment from principles, not patches. Published technique, independently reproducible.

Responsible Scaling

Capability-tied safety commitments, graduated release, red-team evaluations before deploy.

Interpretability

Research-grade tooling for understanding what models do, why they do it, and when to trust them.

05 · Product
What we ship

One model, three surfaces, every kind of customer.

Claude.ai

Professional chat

A chat surface for writing, analysis, coding, and long-context work. Projects, Artifacts, file uploads, context windows up to 200K tokens. Ships to individual professionals and small teams at $20/seat/mo.

Claude API

Developer platform

Usage-based access to Haiku, Sonnet, and Opus. Tool use, streaming, vision, long context. Native SDKs, AWS Bedrock, GCP Vertex integrations. Powers thousands of AI products (example call after this grid).

Claude for Enterprise

Regulated deployment

SSO, audit logs, SCIM, data residency, zero data retention, custom policies. SOC 2 Type II + HIPAA, FedRAMP in progress. Named account teams and shared on-call for F500 customers.

Claude Code

Agentic coding

CLI agent that reads, edits, runs, and commits code in your repo. Used across engineering orgs for migrations, code review, and PR drafting. Billed on API tokens; integrates with any git host.
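
To make the developer surface concrete, here is a minimal call against the Messages API using the Python SDK. The model id and prompt are placeholders, and the client reads ANTHROPIC_API_KEY from the environment.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder: substitute any current Haiku / Sonnet / Opus id
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this refund policy in two sentences: ..."}],
)
print(message.content[0].text)  # text of the first content block in the reply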

06 · Demo
The product

Claude — reasoning you can read.

Claude · claude-opus-4
You: Our support team is overwhelmed. Should we buy another seat of our existing tool, or build a Claude-powered triage layer? Here's our ticket export (CSV).

Claude: Looking at your 12,400 tickets from Q1 — 68% resolve in under 3 exchanges, and the top 5 intents cover 74% of volume [1]. That shape favors a triage layer, not more seats. A narrow agent that handles the 5 intents could deflect ~60% of ticket volume before a human sees it, at ~$0.04 per deflected ticket in API cost [2]. Want me to draft the intent classifier and the refund-policy guardrails next?
Draft the classifier →
Thinking · visible reasoning
Step-by-step
Step 1 · Parse
Loaded 12,400 tickets. Categorizing by first-message intent using k=8 clusters.
Step 2 · Aggregate [1]
Top clusters: refund (22%), shipping status (18%), account access (14%), returns (12%), product question (8%). Sum = 74%.
Step 3 · Cost model [2]
Assumed avg 2.1 exchanges at 800 input / 400 output tokens with Sonnet pricing → ~$0.04 per resolved ticket.
Step 4 · Recommend
Build > buy, given intent concentration. Flag: refund category needs a policy guardrail since dollar-value decisions touch legal.
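
A sketch of what the visible reasoning above looks like as code. The CSV filename, column name, and vectorizer choice are assumptions, and the per-token rates in the cost function are placeholders; the demo's ~$0.04 figure reflects its own pricing assumptions.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Steps 1-2: cluster tickets by first-message intent (k=8, as in the demo)
# and measure how concentrated volume is in the top clusters.
tickets = pd.read_csv("tickets_q1.csv")           # assumed export filename
first_msgs = tickets["first_message"].fillna("")  # assumed column name
X = TfidfVectorizer(max_features=5000, stop_words="english").fit_transform(first_msgs)
labels = KMeans(n_clusters=8, n_init="auto", random_state=0).fit_predict(X)
shares = pd.Series(labels).value_counts(normalize=True)
print(f"Top 5 intents cover {shares.head(5).sum():.0%} of volume")

# Step 3: per-ticket cost, parameterized. Rates are placeholders; the result
# scales linearly with whatever pricing you plug in.
def cost_per_resolved_ticket(exchanges=2.1, in_tok=800, out_tok=400,
                             usd_per_in_tok=3e-6, usd_per_out_tok=15e-6):
    return exchanges * (in_tok * usd_per_in_tok + out_tok * usd_per_out_tok)

print(f"~${cost_per_resolved_ticket():.3f} per resolved ticket at the assumed rates")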
07 · Market
Market

AI is now a line item in every enterprise software budget.

$420B

Global enterprise AI spend by 2030. Not a TAM projection — current enterprise software budgets are re-categorizing AI from experiment to infrastructure.

1.1B global knowledge workers × 60% attach rate × $640 avg AI spend per seat ≈ $420B

Claude captures this market not by being the cheapest model, but by being the one enterprise legal, security, and compliance teams can sign off on without qualification.

08 · Business Model
How we make money

Usage-based at the bottom, seat-based at the top, both growing.

Claude Free
Acquisition surface. Rate-limited access to Sonnet. Drives awareness + developer funnel.
$0
Top of funnel
Claude Pro
Individual professionals. Opus access, Projects, Artifacts, priority capacity.
$20 / seat / mo
83% gross margin
API (Developers)
Haiku / Sonnet / Opus at per-token pricing. Cloud partner channels (Bedrock, Vertex) add a further ~30% on top of direct API revenue.
Usage-based
72% gross margin
Enterprise
Annual contracts, named accounts, SOC 2 / HIPAA, shared on-call, procurement-ready.
$60–$80 / seat / mo
NRR 170%
170%
Net revenue retention (Enterprise)
$118K
Avg ACV — Enterprise
9.2×
LTV / CAC
4.1 mo
Payback — Enterprise
09 · Traction
What's working

Trusted where the stakes are highest.

$2.4B
ARR run-rate, up 6× YoY. API is the primary driver; Enterprise is the fastest-growing segment.
87 of the F100
Fortune 100 customers. 41 have moved past POC into production deployments across support, code, and analysis.
NPS 72
Among enterprise developers — higher than any other frontier model platform on our benchmark panel.
"Claude is the first model our legal team approved for customer-facing workflows without asking us to wrap it in three layers of policy."
— CTO, Global financial services firm
"We replaced a dedicated ML team's output with five engineers and Claude Code. Cycle time dropped from weeks to hours."
— VP Engineering, F500 retailer
10 · Competition
Competitive landscape

Every frontier lab makes a model. We make the one enterprise trusts.

Anthropic
Safety-led frontier lab
The model enterprise legal, security, and compliance teams can sign off on without qualification. Published safety research, constitutional alignment, responsible scaling commitments. Fastest-growing enterprise traction of any frontier lab.
OpenAI
Consumer-led frontier lab
Broadest consumer surface and developer funnel. Strong model quality, but enterprise buyers increasingly cite governance, customization, and partnership model as reasons to add a second provider — usually us.
Google DeepMind
Infrastructure-led
Deep research bench, tight GCP integration. Product velocity and enterprise go-to-market lag their research. Often deployed in existing Google-heavy accounts, rarely displaces incumbents in AI-native buying.
Meta · open-source Llama
Hosted by others
Strong for teams willing to self-host. Lacks the turnkey safety, support, and SLA layer enterprise buyers require. We benefit from their existence — they expand developer awareness; serious deployments graduate up.
11 · Moat
Why we win

Safety isn't a feature. It's a compounding advantage.

01
Base layer
Frontier pretraining
Table stakes. Every serious lab has it. Not where advantage lives anymore.
02
Alignment
Constitutional AI
Our published, reproducible alignment technique. Competitors imitate the output; we own the method.
03
Trust layer
Responsible Scaling Policy
Capability-tied safety commitments signed by the board. Single largest reason F100 legal teams approve us.
04
Distribution
Cloud partnerships
AWS Bedrock, GCP Vertex — procurement-ready access on the rails enterprise already buys through.
05
Frontier
Interpretability research
The only frontier lab publishing mechanistic interpretability at scale. As stakes rise, so does the premium for understanding what the model is doing — and we're the supplier.
12 · Go-to-Market
How we grow

Land with the builder. Expand with the buyer.

Wedge

Developer trust

API devs pick Claude for quality of reasoning, long context, and predictable behavior. Bottom-up adoption inside the org — no sales touch required.

600K+ active API keys
Expand

Team surface

Claude Pro and Teams land with the knowledge-worker population. Projects, Artifacts, shared workspaces. Usage data surfaces which teams are ready to buy enterprise.

4.2M paid Claude seats
Capture

Enterprise contract

Named account teams close annual commitments on top of existing usage. SSO, audit, data residency, SOC 2. Average expansion from seat to API to enterprise: 4.3× ARR.

87 F100 deployed
13 · Team
Who we are

Founded by the researchers who built the methods others are copying.

DA

Dario Amodei

CEO · Co-founder

Former VP of Research at OpenAI. Led GPT-2, GPT-3, and reinforcement learning from human feedback. PhD in physics, Princeton.

DA

Daniela Amodei

President · Co-founder

Former VP of Operations at OpenAI; previously policy at Stripe. Built Anthropic's org, safety processes, and commercial motion from zero.

+

[REPLACE: Co-founder]

[REPLACE: Title]

[REPLACE: 1–2 lines — prior company, domain depth, what this founder uniquely brings to the table.]

Research team
400+ researchers · published at NeurIPS, ICML, ICLR
Advisors
[REPLACE: Advisor names]
HQ
San Francisco · London · remote
14 · Ask
The ask

We're raising $[REPLACE: amount] to build the trusted layer of the AI stack.

$[REPLACE]B

Led by [REPLACE: lead investor]. Close targeted [REPLACE: month year]. Existing investors committed to the round.

"The next decade of AI is won by whoever enterprise can actually sign."
60%
Compute — training and serving frontier-class models
Capacity
25%
Research — alignment, interpretability, responsible scaling
Safety R&D
10%
Enterprise GTM — named accounts, regulated verticals, cloud partners
Commercial
5%
Policy — compliance, public-sector engagement, standards
Trust