
OFC 2025: Optical Interconnects Take Center Stage in the AI-First Data Center

AI is reshaping the data center, bringing networking along for the ride. It’s clear that optical networking is rapidly moving from a back-end concern to a front-line enabler of next-generation infrastructure.

AI workloads, with their massive datasets, distributed training pipelines, and high-performance compute requirements, demand interconnect solutions that combine extreme bandwidth with low power consumption and low latency. At last month’s OFC 2025 event in San Francisco, this shift was unmistakable.

Research Note: Marvell Custom HBM for Cloud AI

Marvell recently announced a new custom high-bandwidth memory (HBM) compute architecture that addresses the scaling challenges of XPUs in AI workloads. The new architecture enables higher compute and memory density, reduced power consumption, and lower TCO for custom XPUs.

Marvell Sees Momentum in Cloud, AI, and Automotive

Last week, Marvell released its earnings for the second quarter of its fiscal 2024, demonstrating robust performance with $1.34 billion in top-line revenue. While that number was down year-over-year, it surpassed the midpoint of the company’s guidance.

Insight: Growth in Cloud & AI

Marvell’s data center business was a bright spot. Revenue from that business […]