
IBM Releases Granite Foundation Models for Enterprise AI

IBM takes a big step towards addressing enterprise requirements for generative AI with its new Granite Foundation Models. Let’s look at what IBM announced.

Granite Foundation Models

IBM’s Granite foundation models for watsonx.ai, developed by IBM Research (go deep with IBM’s Research Paper on the models), are designed to meet the stringent demands of enterprise applications. The new models are built to support various business-domain tasks, such as summarization, question-answering, classification, and more.

IBM carefully selected datasets to meet the specific needs of business users. IBM told us that these datasets encompass various domains:

  • Internet: This dataset comprises unstructured language data collected from the public internet, providing a broad range of generic content.
  • Academic: Focused on science and technology, this dataset contains technical unstructured language data from academic sources.
  • Code: Unstructured data sets covering multiple coding languages, enabling models to understand diverse coding practices.
  • Legal: This dataset features unstructured language data relevant to enterprises, including legal opinions and publicly available legal documents.
  • Finance: Specifically curated for enterprises, this dataset contains unstructured data from publicly posted financial documents and reports.

By training the models on industry-specific datasets, IBM ensures they are tuned to the specialized language, terminology, and knowledge of these sectors. This familiarity empowers the models to make informed decisions grounded in industry-specific expertise.

Trust & Transparency

Among the biggest concerns for enterprises adopting generative AI is understanding the integrity of the underlying data. IBM addresses this with a comprehensive, end-to-end AI governance process covering the entire data and model lifecycle, ensuring that potential client risks are managed and mitigated within the watsonx AI and data platform.

IBM’s robust data governance process evaluates datasets for criteria like governance, risk, and compliance in alignment with IBM’s AI ethics principles. This process spans the entire lifecycle of training data, from data clearance and acquisition to tokenization.

The data clearance process ensures careful consideration of datasets for inclusion in IBM’s curated pre-training dataset, covering aspects like data description, intended use, licensing, usage restrictions, and more.

Additionally, IBM maintains a blocklist of websites that disseminate pirated material to address copyright concerns. IBM’s commitment to transparency and responsible AI shows in its published descriptions of the training dataset, which detail its efforts to address the content quality and relevance issues common to open-source data compilations.

IBM also subjected its Granite models to rigorous quality checks, including scrutiny by IBM’s own “HAP detector” to root out hate, abuse, and profanity. Quality measures such as duplication removal and data protection safeguards are also applied.
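To make the duplication-removal step concrete, here is a minimal sketch of what exact deduplication of a training corpus can look like. This is an illustration only, not IBM’s actual pipeline; the normalization rules and choice of SHA-256 hashing are assumptions for the example.

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical copies hash alike."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def deduplicate(documents: list[str]) -> list[str]:
    """Drop exact duplicates by hashing each normalized document."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = [
    "Granite models are trained on curated data.",
    "Granite models are trained on  curated data.",  # whitespace variant
    "A different document entirely.",
]
print(deduplicate(corpus))  # the whitespace variant is removed
```

Production-scale pipelines typically go further, using fuzzy techniques such as MinHash to catch near-duplicates, but the principle is the same: identical content should not be counted twice during training.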

Just the Beginning

IBM is just getting started with its Granite foundation models, with plans to expand into other languages and develop more IBM-trained models. The company is also adding open-source models to its platform, including Meta’s Llama 2-chat model with 70 billion parameters.

Additionally, IBM plans to host StarCoder, a large language model for code, covering numerous programming languages, Git commits, GitHub issues, and Jupyter notebooks.

IBM also plans to release its watsonx.governance toolkit later this year to further enhance trusted AI workflows. There’s a lot of goodness to come, so stay tuned.

Updated Prompt Tuning Tools

To complement the new Granite models, IBM announced new features for its watsonx.ai studio.

The new Tuning Studio, part of watsonx.ai, will enable business users to personalize foundation models for their specific business needs in applications like Q&A, content generation, named entity recognition, insight extraction, summarization, and classification.

In its initial release, the Tuning Studio will focus on prompt tuning, which lets organizations adapt existing foundation models to their own data even when that data is limited (as few as 100 to 1,000 examples).

This customization lets businesses adapt large models to specific tasks without requiring extensive retraining, potentially reducing computing and energy consumption. It empowers businesses to make AI models more efficient and tailored to their proprietary requirements.
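For readers curious about the mechanics, here is a conceptual sketch of how prompt tuning works in general: a small set of trainable “soft prompt” embeddings is prepended to the input while the base model’s weights stay frozen. This is not the Tuning Studio’s implementation; the class, dimensions, and the stand-in base model are placeholders for illustration.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Wraps a frozen base model with a small set of trainable prompt embeddings."""

    def __init__(self, base_model: nn.Module, embed_dim: int, num_virtual_tokens: int = 20):
        super().__init__()
        self.base_model = base_model
        for param in self.base_model.parameters():
            param.requires_grad = False  # the foundation model's weights stay untouched
        # The only trainable parameters: a handful of "virtual token" embeddings.
        self.soft_prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the soft prompt to every sequence in the batch.
        batch_size = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return self.base_model(torch.cat([prompt, input_embeds], dim=1))

# Stand-in for a real transformer; a linear layer over embeddings keeps the demo small.
base = nn.Linear(768, 768)
model = SoftPromptModel(base, embed_dim=768)
out = model(torch.randn(2, 10, 768))  # batch of 2 sequences, 10 tokens each

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable}")  # only 20 x 768 = 15,360
```

The key point is the parameter count: instead of updating billions of weights, training touches only a few thousand prompt embeddings, which is why small datasets and modest compute budgets suffice.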

IBM also unveiled its new Synthetic Data Generator, enabling users to create artificial tabular datasets from custom schemas or internal data. This tool facilitates AI model training, fine-tuning, scenario simulations, and quicker decision-making with lower risk, all designed to accelerate time to market.
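As a toy illustration of what schema-driven synthetic tabular data means in practice, the sketch below samples rows from a user-defined schema. This is not IBM’s Synthetic Data Generator; the schema format, column names, and distributions are invented for the example.

```python
import random

# A hypothetical schema: column name -> generator rule.
schema = {
    "customer_id": lambda: random.randint(100000, 999999),
    "region": lambda: random.choice(["NA", "EMEA", "APAC", "LATAM"]),
    "annual_spend": lambda: round(random.lognormvariate(8, 1), 2),
    "churned": lambda: random.random() < 0.15,
}

def generate_rows(schema: dict, n: int) -> list[dict]:
    """Produce n synthetic rows by sampling each column's generator."""
    return [{col: gen() for col, gen in schema.items()} for _ in range(n)]

for row in generate_rows(schema, 3):
    print(row)
```

Real generators also model correlations between columns and privacy constraints, but even this simple pattern shows how teams can produce realistic-looking test data without exposing actual customer records.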

Analysis

Generative AI is one of those rare technologies that lives up to its hype. In a short period, it’s already changing how we engage with computers across a vast spectrum of tasks, ranging from managing IT infrastructure to generating art. We’re still looking for the boundaries. But for enterprises to leverage the power of AI, the technology must meet the requirements of business – this is what IBM does exceptionally well.

IBM’s watsonx, now including the Granite foundation models and the additional tools, presents exciting possibilities for AI applications in the business world. It empowers organizations to personalize AI to their values and regulatory requirements.

At the same time, IBM’s commitment to innovation and expansion means that more models, features, and solutions are on the horizon, signaling the ongoing growth of AI’s potential in business.

Disclosure: The author is an industry analyst with NAND Research, an industry analyst firm that engages in, or has engaged in, research, analysis, and advisory services with many technology companies, which may include those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.