IBM just launched powerful new open source AI models – here’s what you need to know

IBM logo pictured on a webpage on a tablet screen with AI assistant promotional materials.
(Image credit: Getty Images)

IBM has launched the latest versions of its Granite AI models, claiming they match or surpass everything else currently available on the market.

The new models are open source, released under the Apache 2.0 license, and the Granite 3.0 8B and 2B language models in particular are being pitched as 'workhorse' models for tasks such as Retrieval Augmented Generation (RAG), classification, summarization, entity extraction, and tool use.

While many large language models (LLMs) are trained on publicly available data, IBM has opted for enterprise-focused data, which it said can deliver better task-specific performance than rivals' larger models at a much lower cost.

The models can be fine-tuned with enterprise data and seamlessly integrated across a range of business environments or workflows, with IBM providing an IP indemnity for all Granite models on watsonx.ai for enterprise clients.

Meanwhile, the Granite Guardian 3.0 models allow application developers to implement safety guardrails by checking user prompts and LLM responses for a variety of risks.

"The Granite Guardian 3.0 8B and 2B models provide the most comprehensive set of risk and harm detection capabilities available in the market today," IBM claimed.

"In addition to harm dimensions such as social bias, hate, toxicity, profanity, violence, jailbreaking and more, these models also provide a range of unique RAG-specific checks such as groundedness, context relevance, and answer relevance."
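The guardrail mechanism described above follows a common pattern: a "guardian" check screens the user prompt before the main model is called, then screens the model's response before it reaches the user. The sketch below illustrates that flow only; `check_risk` and `generate` are hypothetical stand-ins, not IBM APIs, and a real deployment would call a safety model such as Granite Guardian inside `check_risk`.

```python
# Minimal sketch of a prompt/response guardrail loop.
# check_risk() and generate() are hypothetical placeholders, not IBM APIs.

RISK_CATEGORIES = ["social_bias", "hate", "toxicity", "profanity",
                   "violence", "jailbreak"]

def check_risk(text: str) -> list[str]:
    """Hypothetical guardian check: return the risk categories flagged.

    A real deployment would invoke a safety model (e.g. Granite Guardian)
    here; this stub simply flags text containing a category name.
    """
    return [c for c in RISK_CATEGORIES if c.replace("_", " ") in text.lower()]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for the main LLM call."""
    return f"Model answer to: {prompt}"

def guarded_chat(prompt: str) -> str:
    # Guardrail 1: screen the incoming user prompt.
    flags = check_risk(prompt)
    if flags:
        return f"Prompt blocked (risks: {', '.join(flags)})"
    # Main model call happens only if the prompt passes.
    response = generate(prompt)
    # Guardrail 2: screen the outgoing response before returning it.
    flags = check_risk(response)
    if flags:
        return f"Response withheld (risks: {', '.join(flags)})"
    return response
```

The RAG-specific checks IBM mentions (groundedness, context relevance, answer relevance) would slot into the same second stage, scoring the response against the retrieved context rather than against harm categories.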

The tech giant said the Granite 3.0 3B-A800M and Granite 3.0 1B-A400M MoE models deliver high inference efficiency with a minimal trade-off in performance.

Trained on more than 10 trillion tokens of data, the models are being pitched by the company for on-device applications, CPU servers, and situations requiring extremely low latency.

IBM doubles down on AI assistant commitments

As part of the announcement, IBM also promised new developments in AI assistants, saying it's paving the way for future AI agents that can self-direct, reflect, and perform complex tasks in dynamic business environments.

This sharpened focus will include the next generation of watsonx Code Assistant, powered by Granite code models, offering general-purpose coding assistance across languages such as C, C++, Go, Java, and Python, along with advanced application modernization capabilities for enterprise Java applications.

It's also planning to release new tools to help developers build, customize and deploy AI more efficiently via watsonx.ai, including agentic frameworks, integrations with existing environments and low-code automations for common use cases like RAG and agents.

Over the rest of the year, IBM said it will expand all model context windows to 128K tokens, improve multilingual support for 12 natural languages, and introduce multimodal image-in, text-out capabilities.

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
