Backgrounder: State of CXL

Compute Express Link (CXL) is a high-speed interconnect standard that enhances communication between CPUs, GPUs, memory, and other accelerators. It addresses the growing need for efficient data sharing and coherent memory access in data centers, HPC, and AI-driven workloads.

Research Brief: Oracle Exadata on Exascale Infrastructure

Oracle announced the general availability of the Oracle Exadata Database Service on Exascale Infrastructure, offering extreme performance, reliability, availability, and security for Oracle Database workloads. This new infrastructure is now available on Oracle Cloud Infrastructure (OCI), catering to any workload size for all Oracle Database customers.

Research Note: onsemi EliteSiC M3e MOSFET

onsemi’s new EliteSiC M3e MOSFETs address the growing demand for more efficient, reliable, and cost-effective power solutions across a range of industries. The global push to mitigate climate change and transition to renewable energy requires significant advances in power semiconductor technology, and onsemi’s latest-generation EliteSiC M3e MOSFETs are a major step forward.

Research Report: Impact of Storage Architecture on the AI Lifecycle

Traditional storage solutions, whether on-premises or in the cloud, often fail to meet the varying needs of each phase of the AI lifecycle. These legacy approaches are particularly ill-suited for the demands of distributed training, where keeping an expensive AI training cluster idle has a real economic impact on the enterprise.

Research Note: IBM Q2 2024 Earnings

IBM reported a robust second quarter, surpassing expectations across key financial metrics. The company continues to capitalize on its hybrid cloud and AI strategy, driving solid performance in software and infrastructure. However, consulting services experienced slower growth due to macroeconomic challenges.

Let OpenAI GPT-4o Mini Introduce Itself

We usually write our own content, using LLMs only to help refine our research. But in honor of OpenAI releasing its new GPT-4o Mini language model just a day after Meta released its Llama 3.1, and on the same day Mistral released its new Mistral Large 2, we’ll let GPT-4o Mini tell you about itself.

Research Note: Meta Llama 3.1

Meta recently released its new Llama 3.1 large language model, setting a new benchmark for open-source models. This latest iteration of the Llama series enhances AI capabilities while underscoring Meta’s continuing commitment to democratizing advanced technology. Along with the updated model, Meta also released a new suite of ethical AI tools.

Research Note: NVIDIA AI Foundry & NIM Inference Microservices

NVIDIA announced its new NVIDIA AI Foundry, a service designed to supercharge generative AI capabilities for enterprises using Meta’s just-released Llama 3.1 models, along with its new NIM inference microservices. Together, the offerings significantly advance the ability to customize and deploy AI models for domain-specific applications.

Research Note: NTT Data’s New AI Edge Platform

NTT Data unveiled its new Edge AI platform, which aims to accelerate IT/OT convergence by bringing AI processing to the edge. The fully managed solution enables real-time decisions, enhances operational efficiencies, and deploys secure AI applications across various industries, driving the adoption of advanced Industry 4.0 technologies.