Research Notes

Research Note: IBM DS8000 Mainframe Storage

IBM introduced its tenth-generation DS8000 mainframe storage system, designed to store more data and access it faster. The new lineup includes the entry-level DS8A10 and the larger DS8A50, which succeed previous models such as the DS8910F, DS8950F, and DS8980F.

Read More »

Research Note: GenAI Enhancements to Oracle Autonomous Database

Oracle recently introduced significant Generative AI enhancements to its Autonomous Database to simplify the development of AI-driven applications at enterprise scale. The enhancements leverage Oracle Database 23ai technology to give organizations tools that make it easier to integrate AI, streamline data workflows, and modernize application development.

Read More »

Research Note: Oracle Database@Google Cloud

Oracle and Google announced Oracle Database@Google Cloud, which offers Oracle Exadata Service and Oracle Autonomous Database on Google Cloud, enabling organizations to run mission-critical database workloads with improved performance, reduced latency, and streamlined management.

Read More »

Research Note: Juniper Expands AIOps Portfolio

Juniper Networks expanded its AI-driven networking portfolio with new Ops4AI initiatives. These solutions integrate AI-driven automation and optimization to enhance network performance, with a particular focus on the growing demands of AI workloads in data centers.

Read More »

Research Note: Oracle Q1 FY2025 Earnings

Oracle’s fiscal Q1 2025 results beat Wall Street expectations, with $13.3 billion in total revenue, an 8% increase year-over-year. More impressively, cloud revenue surged 22% to $5.6 billion, a key driver of the company’s overall growth. Oracle’s multi-cloud strategy, coupled with cutting-edge AI capabilities, has expanded its market reach and enhanced its profitability.

Read More »

Research Note: Cerebras Inference Service

Cerebras Systems recently introduced Cerebras Inference, a high-performance AI inference service that delivers exceptional speed and affordability. The new service achieves 1,800 tokens per second for Meta’s Llama 3.1 8B model and 450 tokens per second for the 70B model, which Cerebras says makes it 20 times faster than NVIDIA GPU-based alternatives.

Read More »