
NAND Insider Newsletter: March 24, 2025

Every week NAND Research puts out a newsletter for our industry customers, looking at what's driving the week ahead and what caught our attention last week. Below is an excerpt from this week's edition, dated March 24, 2025.

Driving the Week

All eyes will be on Friday’s expected IPO of GPU cloud provider CoreWeave. The company hopes to raise up to $2.7 billion, targeting a valuation as high as $32 billion. This IPO is a litmus test for investor appetite for AI-driven enterprises in the face of an unusually sluggish market.

There are no major tech events scheduled for this week, but our team will be attending a couple of private analyst events.

Policy Watch

Is Section 230 a ticking clock? A bipartisan bill is in the works that could sunset Section 230 of the 1996 Communications Decency Act by 2027, according to a scoop from The Information. The bill gives platforms a three-year countdown to prove they can moderate responsibly or face the same legal liabilities as publishers.

Section 230 has been the spine of the modern internet, shielding platforms from endless lawsuits over user content. Repealing or altering it? That’s a tectonic shift.

Hollywood to Trump: “Don’t let AI steal our work.” More than 400 boldface names, from Ben Stiller to Toni Collette, signed an open letter asking Donald Trump to put the brakes on what they call a brazen copyright heist by AI giants.

OpenAI, Google, and others have trained their models on mountains of copyrighted content — scripts, songs, screenplays — without paying a dime. Now industry titans want a new federal mandate: no training on copyrighted content without a license.

NVIDIA GTC Takeaways

NVIDIA released a slew of announcements at its GTC event in San Jose last week. There were updates to Blackwell-generation GPUs with the introduction of the Blackwell Ultra, a tease of GPUs to come with Vera Rubin and Rubin Ultra, NVIDIA’s Dynamo to power AI factories, a new Spectrum-X silicon photonics Ethernet switch and Quantum-X networking chips, and an expanded range of OEM relationships.

We published a recap blog that provides more detail on NVIDIA’s announcements, but it was really the announcements from the broader ecosystem that pointed the way for AI:

Storage is critical for AI: that was the message NVIDIA sent with the announcement of its new NVIDIA AI Data Platform reference design and new storage certification. Nearly every major storage OEM announced support for the initiatives, including DDN, Dell, HPE, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data, and WEKA. Nutanix’s inclusion on this list caught us by surprise, but the company is going deep on AI and is very close to releasing its new external storage capabilities.

The storage vendors didn’t stop there. Pure Storage announced its FlashBlade//EXA, a new disaggregated storage system for AI. HPE announced a set of enhancements for its Alletra-based offerings. IBM launched a content-aware solution for RAG. NetApp rolled out new NVIDIA integrations. VAST Data continued its push into AI-related data handling, while WEKA announced a broad set of enhancements, including an innovative new approach to GenAI with its new Augmented Memory Grid.

OEMs are ready for Blackwell. Every major server OEM announced updates to their AI offerings. While Lenovo and Supermicro both offered up new support for Blackwell, the most interesting announcements came from Dell and HPE:

Dell updated the PowerEdge servers in its AI Factory offerings, while also embracing NVIDIA’s new AI developer PCs with news of its upcoming Dell Pro Max models.

HPE announced a range of updates, expanding its private cloud offerings, updating its validated blueprints, and introducing multiple new servers for NVIDIA’s Blackwell.

Liquid cooling is hot. With GPUs drawing more than 1 kW each, liquid cooling is necessary to deliver the required densities to energy-hungry data centers. We detailed our favorite announcements in a related blog post, including a collaboration among Chemours, NTT DATA, and Hibiya that explores immersion cooling; a new row-based cooling solution, the CHx1500, from the always-innovative CoolIT; Intel’s reported collaboration with Taiwanese suppliers on its proprietary SuperFluid technology; and NVIDIA’s upcoming Rubin Ultra NVL576 liquid-cooled rack system, coming in 2027.

What We’re Reading

GTC is still on our minds as we continue to dig through the announcements and review the dozens of technical sessions that caught our eye, so it should be no surprise that this week’s reading list is equally consumed with AI, though on a less technical front:

News outlets are using AI to mine for interesting local stories, as Nieman Lab reports in its piece, Local Newsrooms Are Using AI to Listen In on Public Meetings:

’Sending a reporter to drive an hour to go sit in a four-hour meeting that will probably not make news 90% of the time isn’t worth the time and effort,’ said Alex Seitz-Wald, the deputy editor of Midcoast Villager. Having [AI] sit in on the meeting is a convenient and affordable alternative.

Are LLMs training on pirated library books? The venerable Atlantic’s powerful read, The Unbelievable Scale of AI’s Pirated-Books Problem, looks at how models were trained on content from LibGen, one of the largest sites for pirated books, using Meta as an example:

Meta employees turned their attention to Library Genesis, or LibGen, one of the largest of the pirated libraries that circulate online. It currently contains more than 7.5 million books and 81 million research papers. Eventually, the team at Meta got permission from ‘MZ’ — an apparent reference to Meta CEO Mark Zuckerberg — to download and use the data set.

Sometimes, maybe, AI just wants us to help ourselves. That’s the unexpected takeaway from Wired’s piece, An AI Coding Assistant Refused to Write Code – and Suggested the User Learn to Do It Himself.

Companies mentioned: Chemours, CoolIT, CoreWeave, DDN, Dell, Google, Hibiya, Hitachi Vantara, HPE, IBM, Intel, LibGen, NetApp, NTT DATA, Nutanix, OpenAI, Pure Storage, VAST Data, WEKA

Disclosure: The author is an industry analyst, and NAND Research is an industry analyst firm, that engage in, or have engaged in, research, analysis, and advisory services with many technology companies, which may include those mentioned in this article. The author does not hold any equity positions in any company mentioned in this article.
