
Call Notes: Quarterly Semiconductor Update (March 2026)

Every quarter I participate in a call for buy-side investment analysts focused on the broader datacenter/hyperscaler semiconductor ecosystem.

Here, I’m sharing the raw notes I used to drive the Q1 2026 call (it’s my cheat sheet).

(If you’d like more information about how to participate, feel free to email us at [email protected].)

Q1 Key Announcements (all vendors)

| Company | Deal | Party | Value / Scale | Details |
| --- | --- | --- | --- | --- |
| AMD | AI chip supply agreement | Meta | ~$60B–$100B potential | Multi-year deal to deploy 6 GW of AMD Instinct GPUs across Meta AI infrastructure. Includes a custom MI450 GPU and Helios rack architecture with EPYC CPUs. |
| AMD | Strategic chip supply + equity option | Meta | Up to 10% AMD equity via warrants | Meta can receive up to 160M AMD shares tied to GPU shipment milestones. |
| AMD | AI chip supply agreement | OpenAI | Multi-year (value undisclosed) | OpenAI agreed to purchase 6 GW of AMD AI processors over several years, with a similar warrant structure. |
| Broadcom | Custom AI accelerator design program | Anthropic | Multi-gigawatt deployments | Broadcom expects to supply 1 GW of custom chips in 2026, scaling to 3 GW by 2027. |
| Broadcom | Custom AI chip design partnerships | Alphabet, Microsoft, Amazon, Meta | Multi-year programs | Broadcom designs custom AI processors for multiple hyperscalers. |
| Broadcom | AI chip development partnership | OpenAI | Production expected ~2027 | OpenAI is working with Broadcom on its first in-house AI accelerator. |
| NVIDIA | Strategic ecosystem investment | Lumentum & Coherent | ~$4B total investment | NVIDIA investing $2B each in optical networking companies to accelerate photonic interconnects for AI chips. |
| NVIDIA | Large AI chip supply deal | OpenAI | Part of broader ~$100B AI investment ecosystem | NVIDIA expected to supply AI processors and potentially invest in OpenAI infrastructure. |
| Marvell | Custom ASIC and networking supply | Amazon, Microsoft (reported by analysts) | Multi-year infrastructure demand | Marvell supplies custom chips and networking silicon used with Trainium and Maia AI systems. |
| Qualcomm | Data-center AI accelerator push | Cloud providers / AI infrastructure vendors | Product launch cycle 2026 | Qualcomm introduced AI200 / AI250 inference accelerators targeting data-center AI workloads. |
| MediaTek | Edge-AI and telecom AI partnerships | Telecom operators and device makers | Strategic expansion | MediaTek expanding AI chips across 6G infrastructure, edge AI, and automotive platforms. |
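Several of these deals are quoted in gigawatts rather than unit counts. A rough back-of-envelope conversion, assuming a hypothetical ~1.4 kW all-in power per accelerator (chip plus memory, networking, and cooling overhead — an illustrative assumption, not a disclosed vendor figure):

```python
# Back-of-envelope: convert a gigawatt-denominated deal into a rough accelerator count.
# The ~1.4 kW all-in power per accelerator is an illustrative assumption,
# not a disclosed vendor figure.
def accelerators_from_gw(gigawatts: float, kw_per_accelerator: float = 1.4) -> int:
    """Approximate number of accelerators for a deployment of a given size."""
    return int(gigawatts * 1e9 / (kw_per_accelerator * 1e3))

# AMD/Meta's 6 GW commitment under the assumed power draw:
print(f"{accelerators_from_gw(6):,}")  # on the order of 4 million accelerators
```

The real per-accelerator power figure varies widely by chip generation and rack design, so treat the output as an order-of-magnitude estimate only.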

Semi Company Detail

AMD

| Category | Event / Announcement | Product / Technology | Key Customers / Partners | Notes |
| --- | --- | --- | --- | --- |
| Major hyperscaler deal | Multi-year 6-gigawatt AI infrastructure agreement | Instinct GPUs + EPYC CPUs | Meta | AMD will supply up to 6 GW of Instinct GPUs and EPYC CPUs for Meta’s AI data centers, with first deployments starting in 2H 2026. |
| Financial structure of Meta deal | Partnership includes equity warrants | Instinct GPU roadmap | Meta | Meta receives warrants that could convert to ~10% of AMD, aligning incentives with AMD’s AI growth. |
| Earlier hyperscaler deal (still impacting 2026) | Multi-year AI chip supply agreement | Instinct GPUs | OpenAI | OpenAI signed a similar 6 GW supply agreement, positioning AMD as a second supplier to major AI labs. |
| Rack-scale AI platform launch | AMD unveiled Helios rack-scale architecture | Helios AI rack | Meta (OCP) | AMD’s first rack-scale system competes directly with NVIDIA’s NVL systems and hyperscaler racks. |
| Next-gen GPU roadmap | AMD previewed MI400-series accelerators | Instinct MI430X / MI440X / MI455X | Cloud providers & hyperscalers | The MI400 family targets next-generation AI training and inference infrastructure. |
| AI supercluster deployments | Large AI clusters planned using AMD GPUs | Instinct MI450 + Helios racks | Oracle Cloud | Oracle is building AI superclusters with ~50,000 Instinct GPUs. |
| Next-generation CPU roadmap | EPYC “Venice” server CPUs announced | Zen-based EPYC processors | Hyperscalers and AI data centers | Venice CPUs will pair with MI450 GPUs in AI racks. |
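The ~10% warrant figure is consistent with simple share math. A sketch, assuming AMD's shares outstanding are on the order of 1.6B (a ballpark for illustration, not a verified count):

```python
# Sanity check: 160M warrant shares versus ~10% of AMD.
# The shares-outstanding figure is an approximate ballpark, not a verified count.
warrant_shares = 160e6
amd_shares_outstanding = 1.6e9  # assumed ~1.6B shares
stake = warrant_shares / amd_shares_outstanding
print(f"{stake:.0%}")  # 10%
```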

Broadcom

| Category | Event / Announcement | Product / Technology | Key Customers / Partners | Notes |
| --- | --- | --- | --- | --- |
| Explosive AI revenue growth | AI semiconductor revenue more than doubled year-over-year to $8.4B in Q1 FY2026 | Custom AI ASICs + networking silicon | Hyperscalers and AI labs | AI demand is now the primary driver of Broadcom’s semiconductor business, fueling 29% revenue growth to $19.3B. |
| Massive AI chip forecast | Broadcom projected over $100B in AI chip revenue by 2027 | Custom XPUs | Hyperscalers including Google, Microsoft, Amazon, Meta | |
| Hyperscaler ASIC programs expanding | Broadcom confirmed six major AI chip customers | Custom AI accelerators | Google, Meta, OpenAI, Anthropic, others | |
| Anthropic deployment | Plans to supply 1 GW of AI chips in 2026, scaling to 3 GW by 2027 | Custom AI ASICs | Anthropic | |
| Advanced chip-stacking technology | Broadcom developing 3D-stacked AI chips, targeting 1M units shipped by 2027 | Advanced packaging / chip stacking | Fujitsu and other AI customers | Improves bandwidth and power efficiency for AI workloads, enabling larger accelerators. |
| Networking leadership for AI clusters | Continued growth of Tomahawk Ethernet switches and SerDes technologies | AI data-center networking | Hyperscalers | |
| Next-gen switching roadmap | Development of 1.6T–3.2T networking and terabit switching | DC switching silicon | Cloud and hyperscale infrastructure providers | Next-generation AI clusters will require major upgrades to data-center networking, strengthening Broadcom’s position. |
| VMware integration progress | VMware Cloud Foundation becoming a major enterprise software platform | Infrastructure software | Enterprises and cloud providers | VMware revenue and subscriptions continue to grow as Broadcom integrates the platform into its enterprise strategy. |
| Capital allocation | Announced $10B share buyback program alongside strong earnings | Corporate finance | Investors | Confidence in long-term AI demand and cash flow. |

Intel

| Category | News Item | Details | Notes |
| --- | --- | --- | --- |
| AI accelerator roadmap | Falcon Shores canceled | Intel decided not to commercialize the Falcon Shores AI GPU and will use it as a development platform for future architectures. | A reset of Intel’s AI accelerator strategy after delays and competitive pressure. |
| New AI GPU | Crescent Island inference GPU | Intel unveiled a new AI inference-focused GPU based on Xe architecture with large memory capacity for enterprise servers. | Intel is pivoting toward cost-efficient inference workloads, a massive market expected to outgrow training. |
| AI partnerships | Technical collaboration with SambaNova | Intel announced a multi-year partnership with SambaNova Systems to advance AI infrastructure. | Is Intel leaning into AI startups to strengthen its ecosystem? |
| Client AI chips | Core Ultra Series 3 launch | Intel introduced processors for AI PCs built on its 18A manufacturing node. | Shows Intel’s push to lead the AI PC market, where it has stronger OEM relationships. |
| Next-gen AI architecture | Jaguar Shores roadmap | Intel is developing a future AI platform called Jaguar Shores, expected to follow Gaudi accelerators. | Intel’s longer-term attempt to build rack-scale AI systems. |
| AI accelerator deployment | Gaudi ecosystem expansion | Gaudi accelerators are available in systems from vendors like Dell, HPE, Lenovo, and Supermicro. | Intel’s main foothold in data-center AI currently remains Gaudi accelerators. |
| Data-center CPU roadmap | Clearwater Forest Xeon delay | Intel pushed back its next-generation Xeon server processor to 1H 2026 due to packaging challenges. | Ongoing manufacturing and packaging challenges. |

Marvell

| Category | Event / Announcement | Product / Technology | Key Customers / Partners | Notes |
| --- | --- | --- | --- | --- |
| Major acquisition (AI connectivity) | Completed $3.25B acquisition of Celestial AI | Photonic Fabric optical interconnect | AI cloud providers, hyperscalers | Expands Marvell’s ability to build optical interconnects for large AI clusters, improving bandwidth and power efficiency in data-center fabrics. |
| Optical interconnect strategy | Celestial AI technology integrated into Marvell’s data-center portfolio | Silicon photonics + chip-to-chip optical links | Hyperscale AI data centers | Photonic Fabric enables high-bandwidth, low-latency connectivity between AI accelerators. |
| Networking acquisition | Announced $540M acquisition of XConn Technologies | CXL switching and AI chip interconnect | AI infrastructure vendors | XConn’s technology enables AI accelerator pooling and memory disaggregation using CXL fabrics. |
| Financial outlook tied to AI | Custom AI chips expected to drive 20%+ growth in ASIC revenue | AI ASICs | AWS and hyperscalers | Demand for Trainium-related silicon is expected to drive strong revenue growth in 2026. |

NVIDIA

| Category | Event / Announcement | Product / Technology | Key Customers / Partners | Notes |
| --- | --- | --- | --- | --- |
| Next-gen AI platform launch | NVIDIA unveiled the Rubin AI platform, successor to Blackwell | Rubin GPU + Vera CPU + NVLink 6 fabric | Hyperscalers, AI labs | Rubin is a six-chip architecture designed for AI factories, integrating CPU, GPU, networking, and memory into a single rack-scale system. |
| Rack-scale AI architecture | Introduction of NVL72 rack systems built on Rubin superchips | Vera Rubin NVL72 | Hyperscalers and AI infrastructure providers | |
| Software ecosystem expansion | Expanded collaboration to optimize enterprise AI stack | CUDA + AI software stack | Red Hat | Joint effort integrates Rubin with Red Hat Enterprise Linux, OpenShift, and hybrid-cloud AI tooling. |
| AI infrastructure investment | NVIDIA committed $4B to photonics companies | Optical networking for AI clusters | Lumentum, Coherent | Investments to accelerate optical interconnects critical for scaling AI clusters and reducing networking bottlenecks. |
| Strategic optics partnerships | Multiyear agreements for advanced optical networking technology | Silicon photonics, optical transceivers | Lumentum, Coherent | Agreements include multibillion-dollar purchase commitments and capacity guarantees. |
| AI-native telecom ecosystem | NVIDIA joined telecom leaders to build AI-native 6G infrastructure | AI-RAN platforms | Nokia, Cisco, Deutsche Telekom, SK Telecom, others | Expands NVIDIA GPUs into telecom networks as AI-accelerated radio infrastructure. |
| Networking architecture expansion | NVLink ecosystem expanding with CPU partners | NVLink Fusion interconnect | Intel, Fujitsu, Arm ecosystem | Allows CPU vendors to integrate NVLink, strengthening NVIDIA’s GPU-centric compute ecosystem. |
| Competitive pressure from custom silicon | Hyperscalers expanding in-house AI chips | Custom ASICs | Amazon, Google, Microsoft, Meta | Growing competition from custom silicon is pushing NVIDIA to strengthen system-level differentiation. |

Hyperscaler ASIC Business

| Company | AI Chip | Primary Use | Key Silicon Partner(s) |
| --- | --- | --- | --- |
| Google | TPU | Training + inference | Broadcom + MediaTek + TSMC |
| Meta | MTIA | Recommenders + generative AI | Broadcom + TSMC |
| Microsoft | Maia | Azure AI infrastructure | Marvell + TSMC |
| Amazon (AWS) | Trainium + Inferentia | Training (Trainium) + inference (Inferentia) | Annapurna Labs + TSMC |
| ByteDance | Custom XPU (unnamed publicly) | Training + inference | Broadcom (rumored) |
| OpenAI | Custom accelerator (Titan / internal project) | Frontier model training + inference | Broadcom |

Google TPU

TPU v7 “Ironwood”

  • Ironwood is Google’s seventh-generation TPU, its most performant and scalable custom AI accelerator to date, and the first designed specifically for inference

Production Volume & Manufacturing

  • Fubon projects total TPU v7 racks will reach about 36,000 units in 2026
  • Total TPU production estimated at 3.1 to 3.2 million units in 2026, constrained mainly by advanced packaging capacity
  • In January 2026, Google confirmed that its custom-designed TPUs have outshipped general-purpose GPUs in volume for the first time

Partnership Structure Evolution

  • Google is teaming with MediaTek and Broadcom for Ironwood—spreading cost and production risk
  • MediaTek handles I/O and manufacturing coordination with TSMC; Broadcom continues developing high-performance cores
  • Google selected MediaTek partially because the company offers costs approximately 20–30% lower than alternative partners
  • MediaTek focuses on cost-sensitive variants such as the TPUv7e, with production scheduled to begin in 2026

Selling TPUs

  • Anthropic has access to as many as 1 million Google TPUs, expected to bring over a gigawatt of new AI compute capacity online in 2026
  • Anthropic has committed $21 billion to custom chip orders via Broadcom, securing nearly 1 million Google TPU v7p units for delivery by late 2026
  • Meta signed a multi-billion-dollar TPU lease agreement to access chips via Google Cloud for training AI models
  • Meta is separately in talks to purchase TPUs outright for deployment in its own data centers starting in 2027

TCO Advantage vs. NVIDIA

  • From Google’s perspective, all-in TCO per Ironwood chip for full 3D Torus configuration is ~44% lower than the TCO of a GB200 server
  • Even when offering prices to external customers, TPU v7’s TCO is about 30% lower than NVIDIA’s GB200 and about 41% lower than the upcoming GB300
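The percentage claims above follow from a simple ratio. A sketch of how an "X% lower TCO" comparison is computed; the dollar inputs are made-up placeholders, not actual Google or NVIDIA pricing:

```python
# How an "X% lower TCO" comparison is computed.
# Dollar inputs below are made-up placeholders, not actual pricing.
def tco_discount(tpu_tco: float, gpu_tco: float) -> float:
    """Fractional all-in TCO advantage of the TPU configuration vs. the GPU baseline."""
    return 1.0 - tpu_tco / gpu_tco

# Hypothetical amortized per-chip TCO (hardware + power + cooling):
print(f"{tco_discount(tpu_tco=7_000, gpu_tco=10_000):.0%}")  # 30% lower
```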

OpenAI “Titan”

Partnership Details

  • OpenAI and Broadcom collaboration covers 10 gigawatts of custom AI accelerators
  • OpenAI designs the accelerators and systems; Broadcom develops and deploys
  • Deployments targeted to start in 2H 2026, completing by end of 2029
  • Systems will use Ethernet-based networking

Chip Specifications

  • OpenAI’s chip codenamed “Titan” will use TSMC’s N3 process, mass production expected end of 2026
  • Second-generation “Titan 2” reportedly planned on more advanced A16 process

Revenue Timeline for Broadcom

  • Broadcom CEO Hock Tan told investors he doesn’t expect much revenue from the OpenAI partnership in 2026
  • “I appreciate the fact that it is a multiyear journey that will run through 2029”
  • Internal push to roll the chip out in Q2 2026 has already slipped to Q3 at the earliest

Meta MTIA

Current Status

  • Meta accelerated deployment of MTIA v3, codenamed “Iris”, its third-generation custom silicon
  • As of February 2026, Iris chips have moved into broad deployment across Meta’s massive data center fleet
  • Flagship Iris was designed primarily with assistance from Broadcom
  • Fabricated on TSMC’s cutting-edge 3nm process, integrates eight HBM3E 12-high memory stacks with bandwidth exceeding 3.5 TB/s

Setbacks in Training Chip Program

  • Meta scrapped an advanced AI training chip after design roadblocks
  • The MTIA program has had a history of setbacks, including scrapping an earlier inference chip after it underperformed
  • Meta announced a multiyear agreement with AMD on February 24 worth more than $100 billion for MI450 GPUs

2026 Roadmap

  • MTIA-2 already in production, slated to debut in H1 2026, built on TSMC’s 3nm with CoWoS-S packaging
  • MTIA-3 set for H2 2026 debut, with GUC handling back-end packaging
  • MTIA v4 “Santa Barbara” readying for latter half of 2026 with HBM4 memory and liquid-cooling systems exceeding 180kW per rack
  • Arke, an inference-only chip variant, was developed in collaboration with Marvell (not Broadcom), representing supply-chain diversification

Partner Diversification Risk

  • Meta co-developed its MTIA chips with Broadcom, but is reportedly diversifying with Marvell for certain variants
  • Meta has committed up to $135 billion in capital expenditures for 2026 to build out AI infrastructure

Disclosure: The author is an industry analyst with NAND Research, an industry analyst firm that engages in, or has engaged in, research, analysis, and advisory services with many technology companies, which may include those mentioned in this article. The author does not hold any equity positions in any company mentioned in this article.