AI Capex Flow Map
Hyperscaler → Supplier Attribution

Methodology & Limitations

Every number on this dashboard is either filed data sourced from SEC EDGAR or a modeled estimate built from explicitly stated assumptions. This page documents the math, the sources, and what this analysis does not do.

1 · Historical capex (filed, exact)

Quarterly capex is pulled directly from each hyperscaler's 10-Q and 10-K filings via the SEC EDGAR XBRL API (data.sec.gov/api/xbrl/companyconcept).

The us-gaap concepts used:

  • MSFT, META, GOOGL, ORCL → PaymentsToAcquirePropertyPlantAndEquipment
  • AMZN → PaymentsToAcquireProductiveAssets (Amazon stopped tagging the first concept after FY2017)

Cash-flow items in XBRL are filed period-to-date within a fiscal year. We diff consecutive YTD entries (Q2 = H1 YTD − Q1, Q3 = nine-month YTD − H1 YTD, etc.) to recover discrete quarterly values. Each row in the dataset carries the filing accession number, so the source 10-Q is one click away.
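The YTD diff can be sketched as follows. The `YtdFact` shape and function name are illustrative, not the exact EDGAR response format or the actual fetch-script API:

```typescript
// Illustrative shape of one YTD cash-flow fact within a fiscal year.
interface YtdFact {
  fiscalQuarter: 1 | 2 | 3 | 4;
  ytdCapex: number; // period-to-date capex within the fiscal year, USD
}

// Recover discrete quarterly values from cumulative YTD filings:
// Q1 is already discrete; each later quarter is its YTD minus the prior YTD.
function toDiscreteQuarters(facts: YtdFact[]): number[] {
  const sorted = [...facts].sort((a, b) => a.fiscalQuarter - b.fiscalQuarter);
  return sorted.map((f, i) =>
    i === 0 ? f.ytdCapex : f.ytdCapex - sorted[i - 1].ytdCapex,
  );
}

// e.g. YTD filings of 10, 25, 45, 70 → discrete quarters 10, 15, 20, 25
```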

2 · Forward guidance (company-stated, externally sourced)

Forward guidance is loaded from a curated JSON file (data/guidance.json) seeded with the most recently disclosed full-year capex commentary per company. Each row carries a sourceUrl to the press release or 10-Q where the guidance was given.
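A guidance row might be typed roughly like this. Only `sourceUrl` and `asOf` are named above; the other field names and all sample values are placeholders, not actual guidance:

```typescript
// Hypothetical shape of one data/guidance.json row.
interface GuidanceRow {
  company: string;       // ticker, e.g. "MSFT"
  asOf: string;          // earnings cycle the range was extracted from
  capexLowUsdB: number;  // low end of guided full-year capex, USD billions
  capexHighUsdB: number; // high end of the guided range
  sourceUrl: string;     // press release or 10-Q where the guidance was given
}

const example: GuidanceRow = {
  company: "MSFT",
  asOf: "2026-Q1",
  capexLowUsdB: 0,  // placeholder, not actual guidance
  capexHighUsdB: 0, // placeholder, not actual guidance
  sourceUrl: "https://example.com/press-release",
};
```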

Caveat: Guidance shifts quarterly. The asOf field on each row indicates the earnings cycle from which the range was extracted; re-verify against the most recent earnings press release before relying on a specific number.

Current asOf: 2026-Q1

3 · AI-attributable share (modeled)

Capex covers more than AI. We discount each hyperscaler's guided capex by:

  • MSFT — 75% (Azure AI infra excl. office/Xbox capex)
  • META — 70% (DC build + GPU; excludes Reality Labs)
  • GOOGL — 70% (TPU, GPU, DC; excludes office, fiber, Other Bets)
  • AMZN — 65% (AWS GenAI; carve-out vs. fulfillment build)
  • ORCL — 90% (Stargate / OCI Gen2 is overwhelming the FY26 mix)

These shares are anchored to management commentary (e.g. Microsoft's "cloud and AI" segment disclosure) but remain a judgment call.
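Applied in code, the discount is a single multiply. A minimal sketch; the function name is illustrative, and the real constants live in src/lib/flow-model.ts:

```typescript
// Per-company AI-attributable share of guided capex (mirrors the list above).
const AI_SHARE: Record<string, number> = {
  MSFT: 0.75,
  META: 0.7,
  GOOGL: 0.7,
  AMZN: 0.65,
  ORCL: 0.9,
};

// Discount guided full-year capex (USD billions) by the company's AI share.
function aiAttributableCapex(ticker: string, guidedCapexUsdB: number): number {
  return guidedCapexUsdB * (AI_SHARE[ticker] ?? 0);
}

// aiAttributableCapex("ORCL", 100) → 90 (USD billions)
```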

4 · Layer mix

AI-attributable capex is split into seven infrastructure layers using a single industry-baseline mix applied to all hyperscalers:

  • Compute Silicon — 45.0%: GPUs + custom AI ASICs (~45% of AI capex per SemiAnalysis composition)
  • HBM & Memory — 10.0%: HBM3E/HBM4 stacks (typically bundled in GPU BoM but allocated here)
  • Networking & Optical — 8.0%: switches + optical (back-end fabric ~7–9% of cluster cost)
  • Power & Electrical — 15.0%: switchgear, UPS, busway, transformers — rising with density
  • Datacenter Build — 12.0%: DC shell + electrical EPC + land prep
  • Foundry & Equipment — 7.0%: imputed via foundry/wafer cost back to AMAT/LRCX/ASML pass-through
  • Neoclouds — 3.0%: hyperscaler take-or-pay to GPU clouds (MSFT↔CRWV style deals)

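The split is a straightforward proportional allocation. A sketch, with names assumed rather than taken from flow-model.ts:

```typescript
// Industry-baseline layer mix (mirrors the table above; sums to 1.0).
const LAYER_MIX: Record<string, number> = {
  compute_silicon: 0.45,
  hbm_memory: 0.1,
  networking_optical: 0.08,
  power_electrical: 0.15,
  datacenter_build: 0.12,
  foundry_equipment: 0.07,
  neoclouds: 0.03,
};

// Split one AI-attributable capex figure (USD billions) across the layers.
function splitByLayer(aiCapexUsdB: number): Record<string, number> {
  return Object.fromEntries(
    Object.entries(LAYER_MIX).map(([layer, pct]) => [layer, aiCapexUsdB * pct]),
  );
}

// splitByLayer(100).compute_silicon → 45
```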
5 · Supplier shares

Within each layer, dollars allocate to suppliers by approximate market share or disclosed customer concentration. Hyperscaler-specific overrides apply where disclosure or strategic preference differs materially (e.g. GOOGL skews to AVGO via TPU; ORCL is almost pure NVDA).

  • Compute Silicon — NVDA 72% · AVGO 18% · AMD 10%
  • HBM & Memory — MU 30%
  • Networking & Optical — ANET 55% · AVGO 30% · CIEN 15%
  • Power & Electrical — VRT 45% · ETN 30% · GEV 25%
  • Datacenter Build — PWR 55% · VRT 25% · ETN 20%
  • Foundry & Equipment — TSM 45% · ASML 20% · AMAT 20% · LRCX 15%
  • Neoclouds — CRWV 100%

Hyperscaler overrides
  • GOOGL: compute_silicon → NVDA 45%/AVGO 50%/AMD 5%
  • META: compute_silicon → NVDA 78%/AVGO 17%/AMD 5%
  • AMZN: compute_silicon → NVDA 65%/AVGO 5%/AMD 5%
  • ORCL: compute_silicon → NVDA 90%/AMD 10%; neocloud →
  • MSFT: compute_silicon → NVDA 70%/AVGO 15%/AMD 15%; neocloud → CRWV 100%
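The override lookup can be sketched as a two-level fallback. Shares are copied from the tables above, the remaining layers are elided, and the names are illustrative rather than the actual flow-model.ts API:

```typescript
type Shares = Record<string, number>;

// Default layer → supplier shares (subset; remaining layers elided).
const DEFAULT_SHARES: Record<string, Shares> = {
  compute_silicon: { NVDA: 0.72, AVGO: 0.18, AMD: 0.1 },
  networking_optical: { ANET: 0.55, AVGO: 0.3, CIEN: 0.15 },
};

// Hyperscaler-specific overrides (subset; e.g. GOOGL skews to AVGO via TPU).
const OVERRIDES: Record<string, Record<string, Shares>> = {
  GOOGL: { compute_silicon: { NVDA: 0.45, AVGO: 0.5, AMD: 0.05 } },
};

// Allocate a layer's dollars (USD billions) to suppliers,
// preferring the hyperscaler override, then the default shares.
function supplierDollars(
  hyperscaler: string,
  layer: string,
  layerUsdB: number,
): Shares {
  const shares = OVERRIDES[hyperscaler]?.[layer] ?? DEFAULT_SHARES[layer] ?? {};
  return Object.fromEntries(
    Object.entries(shares).map(([ticker, pct]) => [ticker, layerUsdB * pct]),
  );
}

// supplierDollars("GOOGL", "compute_silicon", 100).AVGO → 50
```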

6 · Known limitations

In-house silicon is partially excluded. Amazon Trainium/Inferentia and Microsoft MAIA are not in the supplier roster (they are internal). Per-hyperscaler compute_silicon shares are intentionally below 100% for these names; the residual stays internal.

HBM coverage is partial. Micron is the only HBM name in scope. SK Hynix and Samsung Electronics dominate global HBM share but are not US-listed, so they are out of scope for a US-listing demo. Total memory dollar flow understates industry HBM TAM by approximately 3×.

Capex ≠ supplier revenue. Hyperscaler capex is the addressable dollar base; supplier revenue recognition lags and depends on mix, take-or-pay terms, and ASP. Treat modeled flows as TAM share, not booked revenue.

No FX effect. All figures are USD. TSM and ASML report in TWD and EUR respectively; their figures are not currency-adjusted within this model.

No double-counting adjustment. AVGO supplies both compute silicon (TPU, MTIA) and networking (Tomahawk/Jericho); both lines flow to AVGO and aggregate at the supplier level — by design.

7 · Refresh & reproducibility

npm run fetch:capex re-pulls all SEC EDGAR data into data/capex_actuals.json. The fetch script is in scripts/fetch-capex.mjs.

npm run extract runs the Claude CLI extraction pipeline that pulls the latest 10-Q MD&A commentary on capex and surfaces it as structured JSON.

All assumptions live in src/lib/flow-model.ts as named constants — edit there to A/B alternative scenarios.