HappycapyGuide

By Connie · Last reviewed: April 2026 — pricing & tools verified · This article contains affiliate links. We may earn a commission at no extra cost to you if you sign up through our links.

AI Security

AI Is Making Crypto Hacks Cheaper and Easier — Ledger's CTO Explains the New Threat Landscape

AI has driven the cost of finding crypto exploits to near zero. $280M stolen from Drift Protocol. Deepfake files up 16x. The security model for crypto must fundamentally change.

April 5, 2026 · 8 min read · By Connie

TL;DR

Ledger CTO Charles Guillemet warned today that AI has made crypto hacks cheaper, faster, and more accessible. Tasks requiring months of expert skill now take seconds with AI prompts. The $280M Drift Protocol hack (April 1, 2026) exploited the human layer, not code. Deepfake files grew from 500K to 8 million in two years. His proposed solution: "Agents Propose, Humans Sign" — AI plans transactions, hardware devices verify them.

Key numbers:
  • $280M — Drift Protocol hack (Apr 1)
  • 16x — deepfake file growth (2023–2025)
  • $1.35M — deepfake Zoom call theft
  • ~0 — cost to find exploits with AI

The New Economics of Crypto Hacking

Crypto security has historically relied on a simple economic principle: attacking a system should cost more than the reward. If cracking a wallet's defenses requires months of expert reverse engineering costing $500,000, an attacker needs to steal more than $500,000 to make it worthwhile. The math protected most targets.

AI has broken that model. Ledger CTO Charles Guillemet, speaking to CoinDesk on April 5, 2026, stated that artificial intelligence has driven the cost of finding and exploiting vulnerabilities to near zero. Tasks that previously required skilled researchers working for months — reverse engineering compiled code, identifying exploit chains, crafting targeted phishing campaigns — can now be executed in seconds using AI prompts. The economic barrier that protected smaller targets no longer exists.
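The break-even logic described above can be made concrete with a few lines of arithmetic. This is a minimal sketch with illustrative numbers (the $500,000 attack cost comes from the article; the near-zero AI-era cost and the target values are assumptions for demonstration):

```python
def attack_is_profitable(attack_cost: float, expected_haul: float) -> bool:
    """A rational attacker only proceeds when the expected haul exceeds the cost."""
    return expected_haul > attack_cost

# Pre-AI economics: months of expert reverse engineering (~$500K) meant
# smaller wallets were not worth attacking.
print(attack_is_profitable(attack_cost=500_000, expected_haul=50_000))  # → False

# AI-era economics: near-zero exploit-discovery cost makes the same
# target profitable.
print(attack_is_profitable(attack_cost=100, expected_haul=50_000))      # → True
```

The point of the sketch is that when the cost term collapses toward zero, the inequality holds for essentially every target, which is exactly why the economic barrier no longer protects small holders.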

"There is no 'make it secure' button," Guillemet said, referring to AI-generated code. Developers using AI coding assistants are increasingly shipping code that contains vulnerabilities because AI tools optimize for functionality, not security. The same AI capabilities that accelerate development also accelerate attack.

Case Study: The $280M Drift Protocol Hack

On April 1, 2026, the Drift Protocol on Solana was drained of $280 million — the largest DeFi hack of the year. Guillemet publicly linked the attack methodology to North Korean (DPRK) threat actors, who have been responsible for billions of dollars in crypto theft since 2022.

What made the Drift hack notable is what it did not exploit. The smart contract code was not vulnerable. The cryptographic infrastructure was sound. Instead, attackers targeted the human and operational layer: the people who control the multisig keys required to authorize large transactions.

How the Drift attack worked:
  • Attackers compromised multisig signers' machines days or weeks before the hack
  • Used social engineering to maintain persistent access without triggering alerts
  • When ready, tricked operators into approving a malicious transaction while showing them a legitimate-looking interface
  • The pattern mirrors the 2025 Bybit hack — same DPRK-linked methodology

This attack vector — compromising humans to bypass cryptographic defenses — is precisely where AI provides the most leverage to attackers. AI-generated social engineering messages, personalized to each target, with no typos and perfect tone matching, are indistinguishable from legitimate communications.

Deepfakes: From Niche to Mainstream Attack Vector

Deepfake technology has scaled faster than most security professionals anticipated. The number of deepfake files in circulation grew from approximately 500,000 in late 2023 to 8 million by end of 2025 — a 16x increase in roughly two years. Guillemet categorizes deepfakes as a critical and underestimated crypto security threat.

The attack that most clearly illustrates the risk: a deepfake Zoom call impersonating a ThorChain founder cost that individual $1.35 million in crypto. Attackers used a hijacked Telegram account and a convincing real-time video deepfake to gain the victim's trust, then manipulated them into providing iCloud Keychain access. The private keys were extracted and the wallet drained without ever requiring a transaction signature.

Guillemet's warning is direct: keeping large amounts of crypto in software wallets connected to the internet is "a question not of if you are going to get drained but when." The sophistication of AI-powered impersonation attacks has made it impossible for most individuals to reliably distinguish real video calls from deepfakes in real time.

The Agentic AI Problem: Irreversible On-Chain Actions

The most significant new threat Guillemet identifies is one that did not exist 18 months ago: agentic AI systems with the ability to execute on-chain transactions autonomously. As AI agents gain the ability to interact with wallets, sign transactions, and move funds, a compromised agent can execute irreversible actions — draining a wallet in seconds — without any human being aware until after the fact.

Unlike phishing, which requires human action, a compromised AI agent can act at machine speed. Unlike traditional malware, which targets stored keys, a compromised agent already has the access it needs — because it was granted that access to function legitimately.

| Attack Vector | AI Amplification | Example | Loss |
| --- | --- | --- | --- |
| Social engineering | Personalized at scale, zero cost | Drift Protocol multisig | $280M |
| Deepfake video | Real-time impersonation | ThorChain founder Zoom call | $1.35M |
| AI-generated code vulns | Developers ship insecure code by default | Smart contract exploits | Billions annually |
| Compromised AI agents | Machine-speed wallet draining | Emerging (no major case yet) | Potentially catastrophic |

Guillemet's Solutions: A Security Renaissance

Guillemet advocates for four specific changes to crypto security architecture in the AI era. These are not incremental improvements — he calls them a "security renaissance" that requires fundamental shifts in how crypto systems are designed.

The four pillars of post-AI crypto security:
  • Hardware-based key isolation: Private keys must never touch internet-connected systems. Hardware wallets ensure keys are processed in an isolated environment where AI-enabled malware cannot reach them.
  • "Agents Propose, Humans Sign": AI agents can plan and generate transactions, but final verification must happen on a hardware device that shows the actual transaction details. This breaks the chain for compromised agents.
  • Formal verification over audits: Traditional code audits find only the vulnerabilities auditors know to look for. Formal verification uses mathematical proofs to show that code cannot exhibit certain behaviors — a higher standard made necessary by AI-powered attack tools.
  • Operational security culture: Assume systems will be compromised. Multi-person authorization for large transactions, hardware confirmation for every signing event, and offline key storage for significant holdings.
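The multi-person authorization rule in the last pillar is just an m-of-n threshold check. A minimal sketch (the signer names and the 3-of-4 threshold are illustrative assumptions, not details from the article):

```python
def authorized(approvals: set[str], signers: set[str], threshold: int) -> bool:
    """m-of-n rule: a transaction executes only when at least `threshold`
    distinct, registered signers have approved it on their own devices."""
    valid = approvals & signers  # discard approvals from unknown keys
    return len(valid) >= threshold

SIGNERS = {"alice", "bob", "carol", "dave"}

print(authorized({"alice", "bob"}, SIGNERS, threshold=3))           # → False
print(authorized({"alice", "bob", "carol"}, SIGNERS, threshold=3))  # → True
print(authorized({"mallory", "eve", "trent"}, SIGNERS, threshold=3))  # → False
```

Note what the Drift attack implies about this check: the math is sound, but if an attacker silently controls three signers' machines, all three "valid" approvals can be coerced — which is why the pillars pair thresholds with hardware confirmation and operational security.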

The "Agents Propose, Humans Sign" paradigm is the most practically actionable advice for anyone using AI tools with crypto wallets today. As AI assistants become capable of interacting with DeFi protocols, exchanges, and on-chain transactions on behalf of users, the security model must require human hardware confirmation for any irreversible action.
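The paradigm can be sketched as a signing gate: the agent constructs a transaction, but the signing function refuses to run unless a hardware-confirmation step — one the agent's software cannot flip — approves the exact transaction details. This is a hypothetical illustration; the type names, callbacks, and stub device below are assumptions, not Ledger's API:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class ProposedTx:
    to: str
    amount: float
    asset: str


def sign_with_human_gate(
    tx: ProposedTx,
    hardware_confirm: Callable[[ProposedTx], bool],
    sign: Callable[[ProposedTx], bytes],
) -> bytes:
    """Agents propose, humans sign: signing happens only after the hardware
    device has displayed the real transaction and the human approved it."""
    if not hardware_confirm(tx):
        raise PermissionError("Transaction rejected on hardware device")
    return sign(tx)


# Stub hardware device: shows the true details, and here auto-rejects,
# simulating a human spotting a malicious transfer.
def demo_confirm(tx: ProposedTx) -> bool:
    print(f"CONFIRM ON DEVICE: send {tx.amount} {tx.asset} to {tx.to}")
    return False


def demo_sign(tx: ProposedTx) -> bytes:
    return b"signed"


try:
    sign_with_human_gate(ProposedTx("0xAttacker", 280e6, "USDC"), demo_confirm, demo_sign)
except PermissionError as err:
    print(err)
```

The design point is that the confirmation decision lives on a separate, isolated device: a compromised agent can propose anything, but it cannot manufacture the physical button press that releases a signature.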

What This Means for Crypto Investors and DeFi Users

Guillemet's warnings have direct practical implications. The threat landscape in 2026 is categorically different from 2023. Attacks are faster, cheaper to execute, more personalized, and increasingly hard to detect before funds are moved.

For individual crypto holders: Software wallets for significant holdings are no longer defensible. Hardware wallet adoption should be considered mandatory for any amount worth protecting. AI-powered phishing is convincing enough to fool experienced users — do not trust any unsolicited communication asking for wallet interaction, regardless of how authentic it appears.

For DeFi protocols: Multisig arrangements are only as secure as the machines of the signers. The Drift and Bybit attack patterns — compromising signer machines in advance, then waiting for an opportunity to present a malicious transaction — are now standard methodology for sophisticated attackers. Operational security training for key holders is not optional.

For developers: AI-generated code ships vulnerabilities at scale. Formal verification, not just audits, is required for protocols handling significant value. The speed advantage of AI coding tools creates security debt that attackers are already exploiting.

Frequently Asked Questions

How is AI making crypto hacks cheaper?

AI has driven the cost of finding and exploiting crypto vulnerabilities to near zero. Tasks that took skilled hackers months now take seconds with AI prompts — reverse engineering, exploit chaining, and targeted phishing can all be automated. This breaks the economic model that protected most crypto targets.

What was the Drift Protocol hack in April 2026?

The $280M Drift Protocol hack of April 1, 2026 is the largest DeFi hack of the year. Ledger's CTO has attributed it to DPRK-linked actors. The attackers compromised multisig signers' machines days in advance, then tricked them into approving a malicious transaction, the same methodology as the 2025 Bybit hack.

What is "Agents Propose, Humans Sign"?

Guillemet's proposed security paradigm for AI agents: AI can plan transactions but final approval must be verified on a hardware wallet device. This prevents compromised AI agents from executing irreversible on-chain actions like wallet drains without human confirmation.

How fast are deepfakes growing and what is the crypto risk?

Deepfake files grew from 500,000 in late 2023 to 8 million by end of 2025 — 16x in two years. A deepfake Zoom call cost a ThorChain founder $1.35M. Guillemet warns that keeping significant crypto in software wallets is a question of "when, not if" you get drained given current AI impersonation capabilities.
