Trump Officials Push Banks to Test Anthropic Mythos — While DoD Banned It
TL;DR
Trump administration officials are quietly urging major US banks to pilot Anthropic's unreleased Mythos model for financial compliance and risk analysis. Meanwhile, the Department of Defense has formally designated Anthropic a supply-chain risk — restricting its models from defense systems. The contradiction reveals a sharp split in AI policy inside the current administration.
The Trump administration is simultaneously pushing two opposite positions on Anthropic. On one side, officials connected to Treasury Secretary Scott Bessent and the White House AI policy office have been encouraging financial institutions to evaluate Mythos — Anthropic's next-generation model that has not yet been publicly released. On the other side, the Department of Defense has formally flagged Anthropic as a supply-chain risk, making it harder for defense contractors to use Claude in sensitive systems.
The juxtaposition is striking. While one arm of the executive branch tells Wall Street to bet on Anthropic's most capable AI, another arm has effectively blacklisted the same company from military infrastructure.
What Is Anthropic Mythos?
Mythos is the working name for Anthropic's next major model release — positioned above Claude Opus 4.6. Internal references to the model surfaced in April 2026 via leaked documentation and later confirmed through Anthropic's own preview disclosures. Reports describe Mythos as having dramatically improved capabilities in cybersecurity analysis, long-horizon reasoning, and autonomous task execution.
The model was first associated with Project Glasswing — Anthropic's security-focused initiative backed by AWS, Apple, Google, JPMorganChase, Microsoft, and NVIDIA. Project Glasswing positions Mythos as a defensive AI tool for securing critical software infrastructure, not an offensive weapon. That framing made it appealing to the financial sector, where AI-driven compliance, fraud detection, and risk analysis represent a multi-billion-dollar opportunity.
| Model | Status | DoD Clearance | Banking Use |
|---|---|---|---|
| Claude Opus 4.6 | Available now | Restricted | Encouraged (Treasury) |
| Anthropic Mythos | Unreleased (preview) | Restricted | Encouraged (Treasury) |
| GPT-5.4 (OpenAI) | Available now | Approved | Permitted |
| Gemini 3.1 Pro | Available now | Approved | Permitted |
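The designations in the table can be read as a simple eligibility matrix. As a hypothetical sketch — the model names and statuses come from the table above, but the data structure and the `deployable` function are purely illustrative, not any real compliance API:

```python
# Hypothetical sketch: encode the model/clearance table above as a lookup.
# Statuses mirror the table; everything else is illustrative.

MODELS = {
    "Claude Opus 4.6":  {"dod": "restricted", "banking": "encouraged", "released": True},
    "Anthropic Mythos": {"dod": "restricted", "banking": "encouraged", "released": False},
    "GPT-5.4":          {"dod": "approved",   "banking": "permitted",  "released": True},
    "Gemini 3.1 Pro":   {"dod": "approved",   "banking": "permitted",  "released": True},
}

def deployable(model: str, context: str) -> bool:
    """Return True if a model may plausibly be deployed in the given
    context ('defense' or 'banking'), per the table's designations."""
    info = MODELS[model]
    if context == "defense":
        # DoD supply-chain designation restricts Anthropic models.
        return info["dod"] == "approved"
    if context == "banking":
        # Banks can only deploy models that have actually shipped.
        return info["released"] and info["banking"] in ("encouraged", "permitted")
    raise ValueError(f"unknown context: {context}")
```

The asymmetry the article describes shows up directly: `deployable("Claude Opus 4.6", "banking")` is true while `deployable("Claude Opus 4.6", "defense")` is false, and Mythos fails the banking check only because it is still unreleased.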
The DoD Contradiction
The Department of Defense's supply-chain risk designation for Anthropic stems from concerns about the company's investor base. Google and Amazon are Anthropic's two largest investors — and both are foreign-competition-flagged entities in certain defense procurement contexts. The DoD designation doesn't bar commercial use of Claude but restricts its deployment in classified systems, defense contractor infrastructure, and ITAR-controlled environments.
The contradiction with Treasury's informal Mythos endorsement reflects a broader pattern in the Trump administration's AI policy: different agencies are operating with different frameworks, and no single coherent AI doctrine has emerged. The White House AI policy office has focused on American AI competitiveness — meaning it wants US firms, including Anthropic, to win commercial markets. The DoD is focused on national security exposure — meaning it wants to reduce dependency on AI firms with Big Tech investor ties.
Both positions are internally consistent. But together, they create regulatory whiplash for businesses trying to build long-term AI strategies.
Why Banks Are Interested in Mythos
For several years, financial institutions have been evaluating large language models for compliance automation, AML (anti-money laundering) analysis, loan underwriting support, and internal knowledge management. What makes Mythos distinctive is its reported capability in long-horizon reasoning over complex regulatory documents — exactly the kind of task that existing models struggle with when regulatory frameworks span thousands of pages.
Banks also have a specific interest in AI tools backed by security-hardened supply chains. Project Glasswing's consortium of AWS, Apple, Google, JPMorganChase, Microsoft, and NVIDIA signals that Mythos was designed with institutional security requirements in mind — a differentiator over general-purpose models.
For teams evaluating AI platforms now
If your organization needs access to multiple frontier models — including Claude Opus 4.6 today and Mythos when it releases — a multi-model platform gives you flexibility without vendor lock-in. Happycapy provides Claude, GPT-5.4, Gemini 3.1, and Grok 3 in one workspace from $17/month.
Try Happycapy free →
What This Means for AI Policy Watchers
The Anthropic situation illustrates three dynamics that will define AI policy for the next 12–24 months:
- Regulatory fragmentation is the baseline. No unified US AI framework exists. Different agencies will issue conflicting signals, and businesses must map their exposure to each.
- Financial sector AI adoption is accelerating despite uncertainty. Banks are not waiting for regulatory clarity — they are piloting now and adjusting compliance postures as rules emerge.
- Anthropic's dual positioning is intentional. By aligning Mythos with Project Glasswing's security narrative while pursuing commercial banking partnerships, Anthropic is trying to thread the needle between DoD caution and Treasury enthusiasm.
The administration will likely resolve the contradiction through a formal AI policy executive order in Q2–Q3 2026. Until then, businesses operating across both defense and commercial sectors face genuine ambiguity about which Anthropic products they can deploy and in which contexts.
Frequently Asked Questions
What is Anthropic Mythos?
Mythos is Anthropic's next-generation AI model, positioned above Claude Opus 4.6. It has not been publicly released as of April 2026. Reports indicate advanced capabilities in cybersecurity analysis, long-horizon reasoning, and autonomous task completion. It is being piloted in preview with select enterprise partners.
Why did the DoD flag Anthropic as a supply-chain risk?
The DoD designation cites concerns about Anthropic's investor base — primarily Google and Amazon — in defense procurement contexts. The restriction applies to classified and ITAR-controlled systems, not commercial or financial sector deployments.
Are Trump officials really endorsing Anthropic for banks?
Yes. Officials connected to Treasury Secretary Scott Bessent have informally encouraged major US financial institutions to evaluate Mythos for compliance and risk analysis use cases. This is separate from — and in tension with — the DoD supply-chain designation.
What should businesses do given these conflicting signals?
Defense contractors and federal agencies must follow the DoD supply-chain designation. Commercial businesses, including banks, financial institutions, and private sector companies, face no such restriction and can evaluate and deploy Anthropic models freely. A multi-model platform strategy reduces single-vendor risk regardless of which regulatory direction ultimately prevails.
Access Claude and all frontier models in one platform
Happycapy gives teams access to Claude Opus 4.6, GPT-5.4, Gemini 3.1 Pro, and Grok 3 — with no per-model subscription juggling. As Mythos rolls out to enterprise partners, multi-model platforms will be the fastest path to access. Start free, upgrade to Pro at $17/month.
Start for free at Happycapy →
Sources
- Anthropic Project Glasswing announcement — April 7, 2026
- DoD supply-chain risk designation filing — April 2026
- Reuters: Trump Treasury officials encourage bank AI evaluations — April 12, 2026
- The Information: Mythos preview access — April 2026
Related: Claude Mythos Preview: What Project Glasswing Means · Anthropic Pentagon Blacklist Court Win · Happycapy vs Claude: Which to Choose