
LillyPod: Eli Lilly's 1,016-GPU AI Supercomputer That Could Reinvent Drug Discovery

By Connie  ·  February 2026  ·  8 min read

TL;DR — Key Takeaways
  • Eli Lilly launched LillyPod in February 2026 — the world's most powerful AI supercomputer owned and operated by a pharmaceutical company.
  • 1,016 NVIDIA Blackwell Ultra (B300) GPUs, delivering 9,000+ petaflops. Assembled in 4 months in Indianapolis.
  • Trains protein diffusion models, small-molecule graph neural networks, and genomics foundation models to simulate millions of drug candidates.
  • $1 billion, 5-year co-innovation lab with NVIDIA (announced January 2026), targeting closed-loop AI drug discovery.
  • Lilly TuneLab opens Lilly's proprietary models to external biotech companies; federated learning keeps partner data private.
At a glance: 1,016 Blackwell Ultra GPUs · 9,000+ petaflops of AI compute · $1B NVIDIA partnership · 4-month build time

On February 27, 2026, Eli Lilly flipped the switch on LillyPod — the most powerful AI supercomputer ever wholly owned by a pharmaceutical company. Built in Indianapolis from 1,016 NVIDIA Blackwell Ultra GPUs in four months, LillyPod delivers more than 9,000 petaflops of AI performance. Its purpose: compress years of drug discovery laboratory work into hours of silicon computation.

The launch is not just a technical milestone. It signals a fundamental shift in how large pharmaceutical companies think about competitive advantage. Rather than renting compute from AWS or Azure, Lilly is betting that owning the full stack — from GPU silicon to proprietary training data to AI models — will give it a structural edge in the $1.5 trillion global pharmaceutical market.

LillyPod Technical Specifications

  • GPU Count: 1,016 NVIDIA Blackwell Ultra (B300), the world's first pharma DGX SuperPOD built on B300 systems
  • AI Performance: 9,000+ petaflops of AI inference and training throughput
  • Data Capacity: 700 TB of data processed per training run
  • GPU Memory: 290+ TB of high-bandwidth GPU memory for in-memory model training
  • Networking: NVIDIA Spectrum-X Ethernet, optimized for distributed AI training
  • Build Time: 4 months in Indianapolis, from hardware delivery to live operation

The Scale in Context

NVIDIA notes that the computational power inside a single modern GPU equals what once required 7 million Cray supercomputers. LillyPod has 1,016 of them. In practical terms: protein folding simulations that once took months on distributed cloud compute now run in hours. Lilly's scientists can test millions of molecular structures against a biological target in a single overnight run.
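As a back-of-envelope illustration of what "millions of molecules overnight" means at this scale, the arithmetic below assumes an illustrative per-molecule compute cost; the cost figure is invented for the sketch, not a published Lilly or NVIDIA number:

```python
# Back-of-envelope: how many molecule evaluations fit in one overnight run?
# The per-evaluation cost is an illustrative assumption, not a Lilly figure.
PFLOPS = 9_000                      # LillyPod's stated AI throughput
flops_per_second = PFLOPS * 1e15
flops_per_evaluation = 1e12         # assumed cost of one docking/scoring pass
overnight_seconds = 8 * 3600        # an eight-hour run

evaluations = flops_per_second * overnight_seconds / flops_per_evaluation
print(f"~{evaluations:.2e} evaluations per overnight run")  # ~2.59e+11
```

Even if the real per-molecule cost were a thousand times higher, an overnight run would still cover hundreds of millions of candidates, which is consistent with the screening claims above.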

What LillyPod Actually Does: Three AI Models Changing Drug Discovery

LillyPod was built to train three specific categories of AI models — each targeting a different bottleneck in the drug discovery pipeline:

Protein Diffusion Models
  • What they do: Predict 3D protein structures from sequence data; design novel proteins that bind to disease targets with high precision
  • Traditional timeline: 6–12 months (wet-lab crystallography)
  • With LillyPod: Hours per structure

Small-Molecule Graph Neural Networks
  • What they do: Predict binding affinity, ADMET properties (absorption, distribution, metabolism, excretion, toxicity), and selectivity for candidate drug molecules
  • Traditional timeline: 2–4 years (iterative synthesis cycles)
  • With LillyPod: Millions of candidates screened in days

Genomics Foundation Models
  • What they do: Identify disease targets from patient genomic data; find biomarkers that predict responders vs. non-responders
  • Traditional timeline: 3–5 years (GWAS studies, cohort analysis)
  • With LillyPod: Pattern recognition across terabytes of genomic data in hours
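To make the second category concrete, here is a toy message-passing sketch in the spirit of a graph neural network: atoms are nodes, bonds are edges, features propagate along bonds for a few rounds, and a pooled readout is scored. All features, weights, and the scoring head are invented for illustration; this is not Lilly's model.

```python
import numpy as np

def message_pass(node_feats, adj):
    """One round of message passing: each atom adds the mean of its neighbors' features."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1  # avoid division by zero for isolated atoms
    return node_feats + (adj @ node_feats) / deg

def predict_affinity(node_feats, adj, w, rounds=2):
    """Toy binding-affinity score: message passing, sum-pool, dot with a weight vector."""
    h = node_feats
    for _ in range(rounds):
        h = message_pass(h, adj)
    graph_embedding = h.sum(axis=0)  # sum-pool readout, invariant to atom ordering
    return float(graph_embedding @ w)

# Toy "molecule": three atoms in a chain, two invented features per atom
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.array([[0.1, 1.0],
                  [0.3, 0.2],
                  [0.0, 0.8]])
w = np.random.default_rng(0).normal(size=2)  # stand-in for a trained scoring head

score = predict_affinity(feats, adj, w)
print(f"toy affinity score: {score:.3f}")
```

The sum-pool readout is what makes the score independent of how the atoms happen to be numbered, which is the core property that makes graph networks a natural fit for molecules.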

The $1 Billion NVIDIA Partnership

LillyPod is the hardware layer of a deeper Lilly-NVIDIA relationship. On January 12, 2026 — six weeks before LillyPod went live — the companies announced a $1 billion, five-year co-innovation lab in the Bay Area focused on "closed-loop AI drug discovery."

Closed-loop discovery is the key phrase. The traditional drug discovery process is open-loop: scientists design a hypothesis, test it in the lab, observe results, and design the next hypothesis weeks or months later. With AI in the loop, Lilly can continuously train models on experimental results, generate better hypotheses, test those hypotheses faster, and feed the outcomes back into training — the loop running orders of magnitude faster than human-paced iteration.
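The loop itself is easy to sketch in code. In this toy version (everything invented for illustration), a quadratic surrogate plays the role of the AI model and a noisy hidden potency curve plays the role of the wet-lab assay:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_assay(x):
    """Stand-in for the wet lab: a hidden potency curve plus measurement noise."""
    return -(x - 0.7) ** 2 + rng.normal(scale=0.01)

# Seed the loop with a few random measurements
X = list(rng.uniform(0, 1, size=3))
y = [run_assay(x) for x in X]

candidates = np.linspace(0, 1, 101)
for cycle in range(5):
    # "Train": fit a quadratic surrogate to every result observed so far
    coeffs = np.polyfit(X, y, deg=2)
    # "Generate hypothesis": pick the candidate the surrogate rates most potent
    preds = np.polyval(coeffs, candidates)
    best = candidates[int(np.argmax(preds))]
    # "Test" it and feed the outcome straight back into the training set
    X.append(best)
    y.append(run_assay(best))
    print(f"cycle {cycle}: tested x={best:.2f}, measured potency={y[-1]:.3f}")
```

The point of the sketch is the structure, not the model: each cycle retrains on all accumulated results, and the design-test-learn iteration runs as fast as the assay, not as fast as a human planning meeting.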

The co-innovation lab is intended to push this further: NVIDIA contributes next-generation hardware access (including first access to the Vera Rubin architecture, NVIDIA's upcoming GPU generation); Lilly contributes 150 years of proprietary pharmaceutical data and domain expertise. Neither side can replicate the other's contribution independently.

Lilly TuneLab: Opening the Platform to Biotech

The most strategically interesting part of the LillyPod announcement is what Lilly is doing with the models it trains: making them available to external biotech companies through Lilly TuneLab.

How Lilly TuneLab Works
  • Access to $1B+ proprietary models: Biotech partners can fine-tune Lilly's foundation models (trained on Lilly's proprietary data, worth over $1B) on their own disease targets.
  • Federated learning via NVIDIA FLARE: Partner data never leaves the partner's environment. Lilly's models come to the data, not vice versa — critical for regulatory compliance and IP protection.
  • Shared infrastructure: Partners access LillyPod's compute without building their own supercomputer — democratizing frontier-scale AI for smaller biotechs.
  • Platform business model: Lilly transitions from pure pharmaceutical company to AI platform provider for the broader biotech ecosystem.
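The underlying pattern, federated averaging, can be sketched without any FLARE-specific APIs. In this toy version (linear model and data invented for illustration), each partner trains on data that never leaves its own scope, and only model weights travel to the server for averaging:

```python
import numpy as np

rng = np.random.default_rng(42)

def local_update(w, X, y, lr=0.1, steps=20):
    """One partner fine-tunes the shared model on its private data.
    Linear model, squared loss; only the updated weights leave this function."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# The shared starting point (stand-in for a platform-owned foundation model)
w_global = np.zeros(2)

# Two partners, each holding a private dataset drawn from the same true model
partners = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=50)
    partners.append((X, y))

# Federated averaging: partners train locally, server averages weights only
for _ in range(10):
    local_ws = [local_update(w_global, X, y) for X, y in partners]
    w_global = np.mean(local_ws, axis=0)

print("learned weights:", np.round(w_global, 2))
```

Note what is exchanged each round: two weight vectors, never the raw rows of either partner's dataset. That separation is the property that makes the approach attractive for regulated, IP-sensitive pharmaceutical data.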

This is a meaningful strategic shift. Lilly is not just using AI to discover its own drugs faster — it is monetizing its AI infrastructure and models as a platform business, competing with the cloud AI services of AWS and Azure in the life sciences vertical.

Why Every Major Pharma Company Is Building AI Infrastructure

LillyPod is the most powerful deployment, but Lilly is not alone. The pharmaceutical industry has undergone a rapid transformation in AI investment since 2024:

  • Eli Lilly: $1B+ (LillyPod plus the NVIDIA co-innovation lab) for drug discovery, genomics, and clinical development
  • Pfizer: Project Pinnacle, an NVIDIA AI partnership for molecule screening and clinical trial optimization
  • Roche / Genentech: NAVIFY AI platform plus internal compute for oncology biomarker identification and diagnostics
  • Novartis: Trinity AI program, a $1B+ AI investment in protein design and patient stratification
  • AstraZeneca: NeXT platform (partnered with BenevolentAI) for target identification and toxicity prediction
  • Johnson & Johnson: AI Center of Excellence plus an NVIDIA partnership for surgical robotics, clinical analytics, and drug repurposing

The industry logic is straightforward: drug development costs $1–3 billion per approved drug and takes 10–15 years. If AI can compress that timeline by even 30%, it represents hundreds of billions in value for a single large pharmaceutical company. LillyPod is a capital allocation decision as much as a technology one.


Frequently Asked Questions

What is LillyPod?
LillyPod is the world's most powerful AI supercomputer wholly owned and operated by a pharmaceutical company. Launched by Eli Lilly in February 2026, it contains 1,016 NVIDIA Blackwell Ultra GPUs, delivers over 9,000 petaflops of AI performance, and is designed to train protein diffusion models, small-molecule neural networks, and genomics foundation models to accelerate drug discovery.
How powerful is LillyPod compared to other supercomputers?
LillyPod delivers 9,000+ petaflops from 1,016 NVIDIA Blackwell Ultra GPUs; NVIDIA notes that the compute in one modern GPU equals what once required 7 million Cray supercomputers. It processes 700 TB of data per training run using 290+ TB of high-bandwidth GPU memory, and was assembled in just four months. As a purpose-built pharmaceutical AI system it has no direct public peer, but it ranks among the most powerful AI-specialized installations in the world.
What is Lilly TuneLab?
Lilly TuneLab is an AI platform that lets external biotech companies fine-tune drug discovery models built on Lilly's proprietary data (worth $1B+). It uses NVIDIA FLARE federated learning, meaning partner companies can use Lilly's models without sharing their own data. TuneLab positions Lilly as a platform business for the broader biotech ecosystem.
Why is AI important for drug discovery?
Traditional drug discovery takes 10–15 years and costs $1–3 billion per approved drug, with most candidates failing in late-stage trials. AI systems like LillyPod can simulate millions of molecular compounds against biological targets in hours — work that would take years in a physical lab. Protein diffusion models, small-molecule neural networks, and genomics models each target a different bottleneck in the R&D pipeline, compressing timelines by orders of magnitude.
Sources
  • NVIDIA Blog: "Now Live: Lilly AI Factory for Pharmaceutical Discovery and Development" (February 27, 2026)
  • HPCwire: "Lilly Launches LillyPod NVIDIA DGX SuperPOD for Genomics and Drug Discovery AI" (February 27, 2026)
  • Eli Lilly Investor Relations: "NVIDIA and Lilly Announce Co-Innovation AI Lab to Reinvent Drug Discovery" (January 12, 2026)
  • NVIDIA Newsroom: "NVIDIA and Lilly Announce Co-Innovation Lab to Reinvent Drug Discovery in the Age of AI" (January 12, 2026)