I just wrapped my quarterly semiconductor buy-side investor call (if you’re interested in attending, reach out and I’ll hook you up). I’m not sharing the transcript, but happy to share my prep notes.
The top-line takeaway: We’re in a moment that’s less about individual chip performance and more about processing-at-scale. It’s all about sprawling AI racks, optics, interconnect, and custom silicon hungry for scale, speed, and a story.
Headlines
Chipmakers
AMD
- Helios (OCP 2025): Rack-scale AI platform using MI450
- Built for Meta’s “Open Rack Wide” form factor, it supports up to 72 MI450 GPUs per rack with ~31 TB of HBM4 memory, and claims ~50% more memory than what’s expected from NVIDIA’s comparable rack platform.
- GPUs:
- Oracle contracted for ~50,000 MI450 GPUs, with initial deployment in Q3 2026 and further expansion planned for 2027 and beyond.
- Quantum/Hybrid with IBM:
- AMD and IBM Corporation announced a collaboration to integrate AMD CPUs, GPUs and FPGAs with IBM’s quantum computing systems, with a view toward “hybrid quantum-classical workflows.”
- Counter to NVIDIA’s Quantum consortium-building
- Earnings recap:
- AMD reported Q2 2025 revenue of $7.7 billion, up 32% year-over-year.
- Gross margin was 40% on a GAAP basis and 43% non-GAAP.
- GAAP net income was $872 million with diluted EPS of $0.54; non-GAAP net income was $781 million with EPS of $0.48.
- Results were significantly impacted by export controls on the Instinct MI308 data-center GPU: AMD disclosed ~$800 million of inventory and related charges tied to those restrictions.
- AMD also highlighted strong growth in its Client (PC) and Gaming segments (Client revenue up ~67% YoY, Gaming up ~73% YoY), which helped offset data-center headwinds.
Arm
- Introduced its “Foundation Chiplet System Architecture (FCSA)” specification, aimed at enabling interoperable multi-vendor chiplets in a single package
- Expanding its Flexible Access licensing program to include the Armv9 edge AI platform to lower entry cost for startups and device makers
Broadcom
- OpenAI Deal:
- Broadcom announced a multi-year deal with OpenAI to co-develop and deploy around 10 gigawatts (GW) of custom AI accelerators and rack systems, with deployment starting in the second half of 2026 and completing by end of 2029.
- Earnings:
- Broadcom reported record semiconductor/AI-driven growth: in Q2 FY2025 (ended early May), it posted revenue of ~$15 billion (+20% y/y), with AI-related revenue up ~46% to ~$4.4 billion
- $10B Hyperscale Deal:
- Broadcom secured a ~$10 billion-plus AI-hardware order from an unnamed customer (we believe it’s Anthropic) for its “AI server racks,” driving price-target upgrades and bullish sentiment.
Intel
- U.S. Government Stake in Intel: the U.S. government will invest about $8.9 billion for roughly 9.9% of Intel’s common stock.
- Credit Downgrade: Fitch downgraded Intel’s credit rating from BBB+ to BBB, citing weak demand, competition, and execution risks.
- SoftBank Investment: SoftBank announced it will invest $2 billion in Intel at ~$23 per share, acquiring about 2% of the company.
NVIDIA
- $5T valuation, $5B Intel partnership: Peak investor conviction around AI infrastructure leadership
- Earnings:
- Q2 Results: 12% sequential growth in data center revenue (compute + networking).
- Q3 Outlook: 17% sequential growth expected.
- Key contributors:
- Ramp of GB200 and GB300 systems based on the Blackwell architecture.
- Continued adoption of NVLink networking and Ethernet for AI.
- GTC DC Big Takeaways:
- Jensen said in his keynote that AI computing demand is reaching ~$1 trillion scale, marking a structural shift. Supply is sold out through 2026.
- Blackwell Ultra AI Factory Platform
- Vera Rubin (and Rubin Ultra) roadmap: late 2026/early 2027
- DGX Spark & DGX Station (“personal AI supercomputer”) launched. But can the Spark run inference as fast as Apple’s M4? The Spark’s memory is slower (see the back-of-envelope sketch after this list).
- Spectrum‑X Photonics & Co-Packaged Optics Networking Switches
- Strong tailwinds for optical interconnect vendors, silicon-photonics suppliers, co-packaged optics, active electrical cables.
- NVIDIA announced collaboration with GM on next-gen vehicle AI, manufacturing systems, and simulation
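A quick aside on that DGX Spark question: batch-1 LLM decode is typically memory-bandwidth bound, so tokens-per-second is roughly bandwidth divided by the model’s weight footprint. Here’s a minimal back-of-envelope sketch; the bandwidth figures and model size are illustrative assumptions, not vendor specs.

```python
# Back-of-envelope: batch-1 LLM decode is usually memory-bandwidth bound,
# so tokens/sec is bounded by (memory bandwidth) / (bytes read per token),
# where bytes per token is roughly the model's weight footprint.
# All numbers below are illustrative assumptions, not vendor specs.

MODEL_GB = 20  # assumed: a ~40B-parameter model at 4-bit quantization

platforms_gbps = {
    "DGX Spark (LPDDR5x, assumed)": 273,   # GB/s, assumed
    "Apple M4 Max (assumed)": 546,         # GB/s, assumed
}

for name, bw in platforms_gbps.items():
    # Ideal upper bound: ignores KV-cache traffic, compute, and overheads.
    print(f"{name}: ~{bw / MODEL_GB:.0f} tokens/sec upper bound")
```

The point isn’t the absolute numbers; it’s that for local inference, bandwidth, not TOPS, sets the ceiling.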
Marvell and Broadcom
- Unveiled chiplet- and optics-centric interconnects aimed at next-gen AI fabrics at OCP
Qualcomm
- Arduino Acquisition
- Automotive Momentum
- Expanding into rack-scale inference with new AI200/250 chips
- Snapdragon W5:
- Announced its next-generation Snapdragon W5+ and W5 wearable platforms, the “world’s first wearable platforms with NB-NTN (narrowband non-terrestrial network) satellite support”
Cloud
- AWS × OpenAI ($38B): Long-term compute supply using NVIDIA GB200/GB300
- Google × Anthropic (~1 GW TPU): Cemented Google Cloud as custom-silicon leader
- Oracle × AMD: 50K MI450 deployment, first large-scale non-NVIDIA AI cloud
- Tencent Cloud: Pivot to domestic AI silicon, accelerating China’s self-sufficiency strategy
Policy
- U.S. Government: Intel 9.9% stake (~$8.9B) via CHIPS Act equity conversion
- India: Gaining strategic relevance as Qualcomm & Arm localize automotive and edge-AI production
- China: Accelerating domestic design after U.S. export constraints
- AMD and NVIDIA agreed to pay the U.S. government a share (~15%) of chip sales to China as part of export-license negotiations.
Age of Inference
- Qualcomm AI200/AI250: Data-center inference chips expanding beyond smartphones
- Arm Lumex CSS (OCP): On-device AI reference design; expanded Flexible Access program for startups
- Apple, MediaTek, Samsung: Accelerating NPU roadmaps for offline generative AI
What’s It All Mean?
The Rack Is the New Chip
NVIDIA’s Blackwell Ultra platform and Spectrum-X photonics switches define compute not by die size but by how many GPUs can communicate over light.
AMD fired back with Helios, a rack-scale architecture built to out-memory NVIDIA’s Rubin systems, targeting disaggregated AI compute. Qualcomm entered the conversation with its AI200 and AI250 accelerators—dense, power-efficient racks for inference at scale, tuned for hyperscaler efficiency rather than brute-force FLOPS.
Key Takeaway: The semiconductor arms race has moved past the chip and into the rack. Performance is now measured in bandwidth, not clock speed. NVIDIA, AMD, and Qualcomm are designing systems as silicon.
What’s Next: Expect hyperscalers to buy “AI racks” as atomic units of compute, not individual GPUs (already happening on the NVIDIA front). The next wave of winners will be those who own the interconnect, memory fabric, and orchestration software that glue these rack-scale monsters together.
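To make “the rack is the unit of compute” concrete, here’s a quick sanity check of the Helios memory claim from the AMD section above. The per-GPU HBM4 capacity is an assumption backed out of the rack-level figure, not a spec I’m quoting.

```python
# Sanity-check AMD's Helios rack math: 72 GPUs/rack and ~31 TB HBM4/rack
# come from the OCP 2025 disclosure; per-GPU capacity is backed out.

GPUS_PER_RACK = 72
HBM4_PER_GPU_GB = 432        # assumed: ~31 TB / 72 GPUs

rack_hbm_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000
print(f"Helios rack HBM4: ~{rack_hbm_tb:.0f} TB")           # ~31 TB

# If that's ~50% more than NVIDIA's comparable rack, as claimed:
print(f"Implied NVIDIA rack: ~{rack_hbm_tb / 1.5:.0f} TB")  # ~21 TB
```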
Custom Silicon Grows Up
Broadcom reported ~$10B+ in custom AI-chip wins; Marvell revised its custom-chip TAM to ~$55B; MediaTek is targeting cloud ASICs and partnering with hyperscalers.
Broadcom is the big dog in the custom-silicon space. Aggregating the public speculation about its custom-semi business gives us this view:
| Status | Company | Evidence / Notes | Confidence |
|---|---|---|---|
| Current / Qualified Customer #4 | OpenAI | Deal announced Oct 13, 2025, for co-developing 10 GW of custom AI accelerators and network racks with Broadcom. | High |
| Existing Customer | Google LLC | Google uses Broadcom for its TPU / AI-accelerator custom-silicon efforts. | Medium |
| Existing Customer | Meta Platforms, Inc. | Broadcom believed to be co-developer of Meta’s in-house MTIA accelerators. | Medium |
| Existing Customer | ByteDance Ltd. | ByteDance believed to be among Broadcom’s top XPU customers. | Lower |
| Prospect | Apple Inc. | Media speculation that Apple may be engaging Broadcom for custom AI silicon. | Low |
| Prospect | Anthropic PBC | Analyst (Mizuho) suggests Anthropic may be the unnamed $10B customer. | Low |
| Prospect pool (unspecified) | — | Broadcom says it sees a total of ~7 customers/prospects for the custom-XPU business. | — |
Key Takeaway: Hyperscalers are tired of buying off-the-shelf chips. Custom silicon is now a strategic imperative, not a nice-to-have.
What’s Next: Over the next 24 months, expect more design-win announcements. Custom ASICs are fast becoming table stakes for cloud/AI infrastructure:
- AWS Trainium/Inferentia + Graviton
- Google TPU + Axion
- Microsoft Azure’s Maia + Cobalt
- Will we see SoftBank + Ampere move into this space?
- Do Qualcomm’s data center aspirations extend into the custom chip business?
Optics & Photonics: The Hidden Semiconductor Boom
NVIDIA says light will link GPUs by 2026; optical engines and co-packaged optics are cited as major upgrades in AI racks.
Key Takeaway: The bottleneck in AI isn’t compute—it’s communication. Optics are the next frontier of “Moore’s Law.”
What’s Next: Optical components, photonic packaging, and cable/connector vendors will see growth spikes in 2026-27. Start watching smaller optics/IP plays.
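To see why communication, not compute, is the wall, consider gradient synchronization: a ring all-reduce moves roughly 2(N−1)/N of the payload through each GPU’s links, so sync time scales directly with interconnect bandwidth. A hedged sketch, with every number an illustrative assumption:

```python
# Ring all-reduce back-of-envelope: time to synchronize gradients across
# one rack. Every number here is an illustrative assumption.

N_GPUS = 72        # one rack's worth of GPUs
GRAD_GB = 140      # assumed: ~70B parameters in fp16 gradients
LINK_GBPS = 900    # assumed per-GPU interconnect bandwidth, GB/s

# Ring all-reduce pushes ~2*(N-1)/N of the payload through each GPU's links.
traffic_gb = 2 * (N_GPUS - 1) / N_GPUS * GRAD_GB
sync_ms = traffic_gb / LINK_GBPS * 1000
print(f"Per-sync all-reduce time: ~{sync_ms:.0f} ms")

# Halve the link bandwidth and sync time doubles while compute idles;
# that is exactly why optics and co-packaged switches matter.
```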
Compute Express Link (CXL) Hits Reality at OCP 2025
Demos of 100 TiB CXL memory pools, memory-tiering architectures, and AI server platforms embedding CXL.
Key Takeaway: Memory isn’t a passive commodity anymore—it’s being re-architected as a shared, disaggregated fabric.
What’s Next: DDR vendors who embrace memory-fabric modules and CXL-controller IP will benefit; those stuck making slot-in DRAM chips risk value compression.
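For intuition on what “memory as a fabric” means in practice, below is a toy sketch of the tiering policy a CXL-aware allocator might apply: keep the hottest pages in local DRAM and spill colder ones to a larger, slower, shared CXL pool. The capacities and access counts are invented for illustration.

```python
# Toy memory-tiering policy: hottest pages stay in local DRAM, colder
# pages tier out to a shared CXL pool. All data here is invented.

from collections import Counter

DRAM_SLOTS = 4  # tiny local tier, for illustration only

access_counts = Counter(
    {"page_a": 90, "page_b": 75, "page_c": 60,
     "page_d": 40, "page_e": 10, "page_f": 3}
)

# Rank pages by heat; top slots stay local, the rest go to the CXL pool.
ranked = [page for page, _ in access_counts.most_common()]
dram_tier, cxl_tier = ranked[:DRAM_SLOTS], ranked[DRAM_SLOTS:]

print("DRAM (fast, local):", dram_tier)
print("CXL pool (large, shared, slower):", cxl_tier)
```

Real tiering engines do this at page granularity in the kernel or hypervisor; the investable point is that the controller and fabric IP sit in the data path.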
Analyst’s Take
If I were an investor (I’m not – no one at NAND Research invests in the companies we cover), here’s what I’d be looking at:
| Theme | Position | Time Horizon | Key Metrics to Track |
|---|---|---|---|
| Rack-Scale AI Infrastructure | Overweight optical interconnect, HBM, CXL IP | 12–24 mo | Cloud capex cadence, OCP adoption |
| Hyperscaler Silicon Partnerships | Accumulate vendors with multi-cloud exposure (AMD, Broadcom, Marvell) | 6–18 mo | Cloud order visibility |
| Packaging & Advanced Assembly | Buy on dips—structural capacity constraint | 12–36 mo | CoWoS lead times, ASP trends |
| Regional Foundry Diversification | Selective exposure to Intel/TSMC US expansion | 24 mo+ | Government subsidy flow, yield ramp |
| Edge AI Semiconductors | Tactical trade for smartphone cycle 2026–2027 | 6–12 mo | NPU attach rate, handset ASP uplift |
Who’s Up
- Hyperscale clouds (AWS, Google Cloud, Oracle): Becoming hardware OEMs, controlling the compute stack
- Infrastructure IP/packaging/optics firms: Marvell, Broadcom’s custom silicon units, optics vendors
- Photonics & memory fabric vendors: CXL roadmaps, memory pooling, optical interconnects present significant upside
Who’s Down
- Traditional logic/foundry players without ecosystem plays: Just making the die no longer means you capture most of the value
- Server OEMs stuck in old form factors: Composable racks and open standards are eating legacy “box” margins
- Commodity chip vendors ignoring infrastructure architecture: The market is demanding custom, integrated solutions now