Newsletter

Call Notes: Compute Hardware & Semiconductor Ecosystem

Every quarter I participate in a call for buy-side investment analysts focused on compute hardware and the broader semiconductor ecosystem. If you’d like more information about these sorts of calls, feel free to email us at [email protected].

Here, I’m sharing the raw notes from that call.

Hot Topics

  • Google TPU business: 
    • Multiple outlets report that Meta is in talks with Google about purchasing or renting TPU hardware for use in its own data centers, a potentially multibillion-dollar commitment planned for rollout as early as 2027
    • Anthropic publicly announced a major expansion of its use of Google Cloud TPUs, planning to deploy up to one million TPU units to accelerate training and inference across Claude’s models
    • MediaTek/Broadcom will split the business along performance lines.
  • OpenAI is exploring NVIDIA alternatives
  • NAND margins (industry-wide) projected at 40–50% in 1H26.
    • North American hyperscalers reportedly moving to shorter-term contracts.
  • CPU Pricing:
    • Consumer CPU pricing remains flat.
    • Server-grade CPUs may see ~15% hikes.
  • Intel Warns of China Shortages:
    • Reported 10% price hikes for Chinese customers.
    • Delivery times up to six months.
  • MediaTek Expands ASIC Push
    • Forecasting $1B ASIC revenue in 2026.
    • Custom AI chips projected to reach 20% of revenue by 2027.

Google TPU

Google’s custom AI accelerator, the Tensor Processing Unit (TPU), is evolving into one of the most ambitious ASIC plays in AI infrastructure outside of traditional GPU vendors:

  • TPU Growth & CapEx: Alphabet has massively increased its AI infrastructure spending for 2026, announcing capex of $175B–$185B, much of it tied to deploying TPUs at scale. This has ripple effects across the chip supply chain.
  • Generational Progress: The TPU family is progressing into its 7th/8th generations with large-scale production planned on advanced nodes like TSMC’s 3 nm. These chips are aimed at large language model training and inference workloads and are seen as a potential alternative to NVIDIA GPUs in certain workloads. 
  • Market Interest: Beyond Google’s internal usage, external cloud and enterprise customers are expressing interest.
  • TPU “as a Service”: Google is commercializing its TPU infrastructure through cloud service offerings, attracting enterprise demand that benefits ASIC partners.

MediaTek: New Role in the TPU Supply Chain

MediaTek has become a strategic collaborator in Google’s TPU ecosystem:

  • Partnership on Next-Gen TPUs: Reports indicate MediaTek is directly collaborating with Google on future TPU designs (e.g., v7e and next-generation variants).
  • Supply Chain & Manufacturing Role: MediaTek is expected to contribute to I/O design and manufacturing coordination for Google’s TPU chips through TSMC, leveraging its strong relationships with the foundry and efficient pricing model. This is viewed as important for diversifying TPU production and relieving capacity bottlenecks. 
  • Stock & Market Reaction: Market response to MediaTek’s involvement has been strong. Shares surged as much as ~19% in early 2026 on optimism tied to its Google AI chip work. 
  • Potential Volume Gains: Some industry commentary suggests Google has increased TPU orders through MediaTek relative to earlier targets, positioning MediaTek as a significant second TPU partner alongside Broadcom.

Broadcom: Core TPU ASIC Partner & Beneficiary

Broadcom plays a central role in the TPU ecosystem, with its business benefiting directly from Google’s AI spending:

  • Broadcom has long been a co-developer and supplier for Google’s TPU ASICs, handling high-performance ASIC design, SerDes interfaces, and translating TPU architecture into manufacturable silicon via TSMC. 
  • With Google’s aggressive 2026 capital spend, Broadcom’s exposure to TPU orders is significant; analysts have tied 80% or more of its AI compute revenue to TPU infrastructure growth. 

ST Micro & AWS

STMicroelectronics struck a major multiyear, multibillion-dollar chip supply deal with AWS, aligning semiconductor development with cloud infrastructure spending. The deal spans:

  • High-bandwidth connectivity technologies for fast data movement inside data centers
  • High-performance mixed-signal processing chips
  • Advanced MCUs for managing complex infrastructure functions
  • Analog and power ICs for improved energy efficiency and power management in hyperscale facilities
  • ST and AWS will collaborate on optimizing EDA workloads using AWS’s scalable cloud compute, potentially speeding silicon design and time-to-market.
  • As part of the deal, STMicroelectronics issued warrants to AWS to acquire up to 24.8 million shares of STM stock.

Samsung: 2nm Expansion; 1.4nm on Roadmap

  • Samsung Foundry reportedly expects 30% growth in 2nm orders in calendar 2026.
  • 1.4nm mass production is targeted for 2029.
  • The ramp reflects customer diversification beyond mobile SoCs into AI/HPC silicon.

Implications:

  • Samsung is attempting to close the gap with TSMC at leading edge.
  • 2nm traction will be closely watched for yield stability and AI accelerator adoption.
  • A credible 1.4nm roadmap strengthens Samsung’s competitive signaling to hyperscalers pursuing custom silicon.

OpenAI Exploring NVIDIA Alternatives

1. Nvidia Investment in OpenAI

  • Reports indicate Nvidia is nearing a deal to invest about $20 billion in OpenAI’s latest funding round, part of a broader plan in which OpenAI seeks to raise up to ~$100 billion in funding. This would be one of Nvidia’s largest investments ever and tie the companies more closely together financially. 
  • Early 2025 rumors suggested a much larger ~$100 billion investment from Nvidia; this has since evolved into a smaller but still material ~$20 billion pillar of OpenAI’s fundraising. 
  • This reinforces a bifurcation between training and inference silicon stacks.

2. Reported Frustration with Some Nvidia Chips

  • Several sources report that OpenAI has been “not satisfied” with the performance of certain Nvidia AI chips, especially for inference workloads.
  • That dissatisfaction is said to be prompting OpenAI to explore other chip suppliers (e.g., AMD, or specialized players like Groq and Cerebras) for parts of its inference infrastructure. 

3. Tension vs. Narrative of Strong Partnership

  • Some media and market narrative has portrayed tension between the two companies, fueled by:
    • Delays or changes in the previously discussed ~$100 billion investment plan. 
    • OpenAI’s interest in evaluating other inference hardware pathways. 
  • Nvidia’s CEO Jensen Huang has publicly dismissed such reports as exaggerated or “nonsense,” emphasizing the companies’ strong collaboration and Nvidia’s continued intention to support OpenAI’s success.

January News: Compute & Semiconductor Ecosystem

| Vendor | Segment Focus | Key Q1 News |
| --- | --- | --- |
| NVIDIA | AI Accelerators, Platform Ecosystem | Continued Blackwell-class deployments; expanded hyperscale AI infrastructure footprint; navigating China export constraints |
| AMD | AI Accelerators, CPUs | Scaled MI300 family production; increased hyperscaler design wins; expanded AI server share |
| Intel | GPUs, CPUs, Foundry | Announced renewed push into data center GPUs; advanced 14A process positioning; increased foundry customer engagement |
| TSMC | Foundry, Advanced Packaging | 3nm production expansion in Japan; quadrupling CoWoS advanced packaging capacity; record AI-driven growth |
| Samsung Electronics | Memory (HBM, DRAM), Foundry | Initiated next-gen HBM4 production ramp; strong AI-driven memory profits |
| Micron Technology | Memory (HBM, DRAM) | Expanded HBM production capacity; AI-server-oriented roadmap acceleration |
| SK hynix | HBM | Continued leadership in HBM supply to AI accelerators |
| Microsoft | Custom AI Silicon | Continued Maia 200 AI chip development while maintaining NVIDIA/AMD purchases |
| Broadcom | Networking, AI Interconnect | Expanded high-performance switching silicon portfolio; scale-up interconnect emphasis |
| Marvell Technology | Interconnect, Custom Silicon | Expanded PCIe/CXL switching and scale-up offerings |
| ASML | Lithography | Continued EUV tool expansion amid AI-driven fab investments |

Earnings Learnings

| Vendor | Earnings | Notes |
| --- | --- | --- |
| NVIDIA | Q3 FY2026 (reported Nov 19, 2025) | Record revenue: $57.0B, up 62% YoY; Data Center revenue $51.2B (up 66% YoY), showing AI demand strength. |
| NVIDIA | Q4 & FY2026 (scheduled Feb 25, 2026) | Analysts project a potential beat on revenue and EPS, with expectations of upside surprises as AI infrastructure demand remains strong. |
| AMD | Q4 2025 & FY2025 (reported Feb 3, 2026) | Record results: Q4 revenue $10.3B (+34% YoY), gross margin 54%, net income $1.5B; full-year revenue $34.6B (+34% YoY). Q1 FY2026 guidance: $9.8B revenue (+32% YoY). |
| TSMC | Q4 FY2025 (reported Jan 15, 2026) | Profit +35% YoY, beating forecasts; flagged robust AI-driven growth and projected ~30% revenue growth for full-year 2026. |
| Micron Technology | AI-led performance in early 2026 | Memory demand has driven performance: stock up sharply and HBM expansion continues, with expectations of strong earnings momentum (margin expansion) into 2026. |
| Samsung Electronics | 4Q 2025 earnings (released early 2026) | Reported consolidated 4Q results; while not yet detailed online, the report outlines strong memory profits and forward expectations. |
| Broadcom | Market reaction to Google capex (early 2026) | Stock uptick tied to partner capex increases and expected AI silicon demand; the broader semiconductor earnings season supports continued revenue growth in networking and custom silicon segments. |

Server OEM News

| OEM | Key 2026 Announcements | AI / Accelerated Compute Focus |
| --- | --- | --- |
| Dell Technologies | Scaled AI server shipments; expanded NVIDIA Blackwell-based platforms; continued enterprise AI infrastructure push | Strong focus on GPU-accelerated rack servers; heavy alignment with NVIDIA ecosystem; enterprise AI factory positioning |
| HPE | Expanded AI server portfolio; continued growth in GPU-based infrastructure; hybrid cloud integration via GreenLake | AI-optimized ProLiant & Cray systems; integration of accelerated compute with hybrid cloud model |
| Lenovo | Announced AI inferencing servers; expanded GPU-enabled enterprise racks; deepened AI model partnerships | Emphasis on inference servers and enterprise AI readiness; multi-model strategy |
| Supermicro | Rolled out new NVIDIA GB300 / HGX-based systems; continued modular AI rack-scale designs | Highly optimized GPU-dense systems; strong hyperscale and AI-native focus |

Disclosure: The author is an industry analyst, and NAND Research is an industry analyst firm, that engage in, or have engaged in, research, analysis, and advisory services with many technology companies, which may include those mentioned in this article. The author does not hold any equity positions in any company mentioned in this article.