Trump Named the AI Tech Council That Will Shape US Policy — Sam Altman and Elon Musk Aren't On It
March 29, 2026 · 6 min read · AI Policy · US Government
On March 25, 2026, the Trump White House announced its AI and technology advisory council — formally the President's Council of Advisors on Science and Technology (PCAST). Among the members: Mark Zuckerberg (Meta), Jensen Huang (Nvidia), Larry Ellison (Oracle), Sergey Brin (Google), Lisa Su (AMD), and Marc Andreessen (a16z). Co-chairs: David Sacks and Michael Kratsios. Notably absent: Sam Altman (OpenAI) and Elon Musk (xAI). The lineup signals a hardware-and-infrastructure-first AI policy agenda — and carries real implications for which AI tools businesses can use, how they're regulated, and which companies get the tailwinds.
The Full Council Lineup
The White House announced a 13-person initial cohort, with room to expand to 24. Confirmed members include Mark Zuckerberg (Meta), Jensen Huang (Nvidia), Larry Ellison (Oracle), Sergey Brin (Google), Lisa Su (AMD), Marc Andreessen (a16z), Michael Dell, and Bob Mumgaard, with David Sacks and Michael Kratsios serving as co-chairs.
What the Lineup Signals About US AI Policy
The council composition is not random. Every appointment reflects a policy constituency:
- Hardware first: Jensen Huang (Nvidia) and Lisa Su (AMD) together represent both sides of the US AI chip duopoly. Their presence signals that compute infrastructure — not model safety or ethics — is the primary policy focus. Expect chip export controls, domestic fab investment, and data center permitting to be priority items.
- Open source + platforms: Zuckerberg's Llama models and Brin's Google represent the view that AI should be widely deployable infrastructure, not a regulated utility. This contrasts with the Altman/Anthropic view that frontier AI requires active safety governance.
- Enterprise cloud: Larry Ellison (Oracle) and Michael Dell represent the enterprise AI infrastructure market — the $200B+ in cloud spending that will determine how AI actually gets deployed at scale in corporate America.
- Venture capital: Andreessen's a16z has consistently advocated for minimal AI regulation. His presence signals a deregulatory bias on the council.
The complete absence of frontier model lab CEOs — Sam Altman (OpenAI) and Dario Amodei (Anthropic) — is striking. Both companies have been active in Washington on AI safety regulation, and both have clashed with the Pentagon over AI ethics clauses in government contracts. The council's composition suggests the White House views the AI policy conversation as primarily a hardware, cloud, and investment question — not a safety or alignment question.
AI Policy Changes. Your AI Platform Shouldn't.
When one AI company faces regulatory headwinds — like Anthropic's Pentagon dispute — Happycapy users switch to GPT-5.4, Gemini, or 48 other models instantly. 50+ models, one platform, policy-resilient. Pro at $17/mo.
Try Happycapy Free

What the Council Means for AI Companies and Users
| Company / Tool | Council Representation | Likely Policy Outcome | Risk to Users |
|---|---|---|---|
| Nvidia / AMD (chips) | On council: Jensen Huang + Lisa Su | Domestic chip incentives, export control review, data center permitting relief | Low — favorable regulatory environment |
| Meta / Google (open models) | On council: Zuckerberg + Brin | Favorable to Llama/Gemini open-weight distribution, less model regulation | Low — policy tailwind for their models |
| OpenAI (GPT-5.4, ChatGPT) | Not on council — no direct representation | Uncertain — safety clauses in government contracts at risk | Medium — policy uncertainty around API governance |
| Anthropic (Claude) | Not on council — no representation (ongoing Pentagon dispute) | Less favorable — safety-first position out of step with deregulatory council | Higher — government contract and API risk |
| Happycapy (50+ models) | Diversified — not a single model; uses all of the above | Policy-resilient: if any one model faces restrictions, the rest remain available | Low — diversification hedges single-company policy risk |
What This Means for Businesses Using AI in 2026
The practical implications for companies building on or buying AI tools flow from the council's composition:
- Deregulatory stance: With Andreessen and Dell on the council, expect fewer new AI compliance requirements — not more. The EU AI Act approach is unlikely to be replicated in the US under this advisory structure.
- Open-source acceleration: Zuckerberg and Brin's presence signals continued policy support for freely distributable AI models. This makes locally-run AI tools (via open-weight models) more viable for sensitive business uses.
- Hardware investment: Nvidia and AMD representation means compute capacity — and domestic chip production — will be a policy priority. This could lower the long-term cost of AI inference as more domestic supply comes online.
- Government AI procurement: Oracle's Ellison advising on AI policy means enterprise cloud AI (OCI, Azure, AWS) will be a focus for federal AI deployment — affecting which tools government contractors can and must use.
Frequently Asked Questions
Who is on Trump's AI technology council?
The council (PCAST) announced March 25, 2026 includes Mark Zuckerberg (Meta), Jensen Huang (Nvidia), Larry Ellison (Oracle), Sergey Brin (Google), Lisa Su (AMD), Marc Andreessen (a16z), Michael Dell, and Bob Mumgaard — 13 members initially, up to 24. Co-chaired by White House AI czar David Sacks and Michael Kratsios.
Why are Sam Altman and Elon Musk not on the AI council?
Neither was named. Musk is heavily involved in DOGE rather than the PCAST structure. Altman's absence may reflect ongoing tensions between the frontier model labs and the administration over safety regulation, as well as the council's hardware/infrastructure focus rather than frontier model safety. The council has no representation from the two most prominent AI model labs.
What will Trump's AI technology council actually do?
PCAST advises the White House on AI policy: reducing regulatory barriers, accelerating US AI competitiveness against China, developing a national AI Action Plan, and coordinating federal AI investment. Members advise but don't legislate — influence flows through executive policy directives and agency AI guidelines.
How does US AI policy affect which AI tools businesses can use?
Policy impacts include: export controls on AI models or chips, federal procurement rules that restrict or mandate certain AI vendors, data sovereignty requirements, and security certifications for government contractors. A pro-competition, deregulatory stance generally keeps more AI tools accessible and affordable. Multi-model platforms like Happycapy (50+ models) are more resilient to single-vendor policy shifts than tools locked to one AI company.
50+ AI Models. No Single-Vendor Policy Risk.
Happycapy routes your work across Claude, GPT-5.4, Gemini, Grok, and 50+ other models. When AI policy shifts around one company, your workflow keeps running. Pro at $17/mo.
Try Happycapy Free

Sources
- Reuters — "Trump names Nvidia, Meta CEOs to science and tech council" (March 25, 2026)
- Bloomberg — "Trump Appoints Zuckerberg, Andreessen, Huang to Presidential Tech Council" (March 25, 2026)
- Fortune — "Trump appoints Zuckerberg, Huang, Ellison to tech council; Musk and Altman excluded" (March 25, 2026)
- Forbes — "Trump Reportedly Taps Mark Zuckerberg, Jensen Huang, Larry Ellison, Others As White House AI Advisers" (March 25, 2026)
- Politico — "Jensen Huang and Mark Zuckerberg among tech leaders appointed to White House advisory council" (March 25, 2026)