LLMs and AI at the Enterprise

The opportunities gen AI (generative artificial intelligence) brings to most industries are significant, dare we say remarkable. But let’s be clear: there are still some challenges. For example, most LLMs (large language models) are trained on publicly available data, while the vast majority of enterprise data remains untapped, and much work needs to be done to address this. And, dare we say, address it sooner rather than later? Enter Granite 3.0, IBM’s third-generation family of flagship Granite language models, which were announced earlier this week at IBM’s second annual TechXchange event.

By combining a small Granite model with enterprise data, especially using the alignment technique InstructLab, introduced by IBM and Red Hat in May, IBM believes businesses can achieve task-specific performance that rivals larger models at a fraction of the cost. But here’s the rub: they can’t do it alone.
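
For readers who want a concrete picture of what “combining a small Granite model with enterprise data” can look like, here is a minimal sketch using Hugging Face transformers and peft to attach a LoRA adapter to a small Granite instruct model and train it on a hypothetical enterprise dataset. Note this is not InstructLab itself, which ships as its own tooling; the model id and the enterprise_qa.jsonl file are assumptions for illustration.

```python
# Minimal LoRA fine-tuning sketch (assumptions: model id, enterprise_qa.jsonl).
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

MODEL_ID = "ibm-granite/granite-3.0-2b-instruct"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Wrap the base model with a small LoRA adapter so only a fraction of the weights train.
lora = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)

# Hypothetical enterprise text stored as JSON lines: {"text": "..."}
data = load_dataset("json", data_files="enterprise_qa.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="granite-enterprise-lora",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=collator,
)
trainer.train()
```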

At an analyst event IBM hosted last week, attendees (including me) got a sneak peek as executives shared their thoughts about what lies ahead for IBM in innovation with quantum computing, AI assistants, and its consulting business.

“I think there is so much opportunity we can go get,” says Arvind Krishna, CEO, IBM. “We need to be ready for the future. That is kind of what excites me. I am excited about our portfolio today.”

“When we talk about a hybrid cloud from a portfolio perspective, you are going to hear us talk about three words: trusted, comprehensiveness, and consistency,” says Brian Gracely, senior director, portfolio strategy, Red Hat.

“IBM Consulting Advantage is our AI-powered delivery platform for IBM Consulting,” adds Eileen Lowry, IBM vice president, product management. “It is where all our assets, namely our agents, our assistants, our applications, and our methods reside.”

“We have seen tremendous progress,” says Nick Otto, head of global strategic partnerships, IBM. “We still have a lot more to do, but when we look at some of the stats around that progress: first, almost 2,000 new transacting partners have stepped forward since we launched Partner Plus, and that was one of our main objectives. Two, almost 300,000 new badges for our ecosystems.”

An Enterprise Workhorse

The new Granite 3.0 8B and 2B language models are designed as ‘workhorse’ models for enterprise AI, delivering strong performance on tasks such as RAG (retrieval augmented generation), classification, summarization, entity extraction, and tool use. These compact, versatile models are designed to be fine-tuned with enterprise data and seamlessly integrated across diverse business environments and workflows.
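
As a rough illustration of the RAG use case mentioned above, the sketch below retrieves the most relevant enterprise snippet with a toy retriever and asks a small Granite instruct model to answer from it. The model id, the documents, and the prompt format are assumptions for illustration, not IBM’s reference implementation.

```python
# Minimal RAG-style sketch: retrieve one enterprise snippet, then answer from it.
from transformers import pipeline

MODEL_ID = "ibm-granite/granite-3.0-2b-instruct"  # assumed Hugging Face model id

documents = [
    "Expense reports over $5,000 require VP approval within 10 business days.",
    "The VPN client must be upgraded to version 4.2 before the end of Q3.",
    "Customer data may only be stored in regions approved by the compliance team.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "Who has to approve a large expense report?"
context = retrieve(question, documents)

prompt = (
    "Answer the question using only the context below.\n"
    f"Context: {context}\n"
    f"Question: {question}\n"
    "Answer:"
)

generator = pipeline("text-generation", model=MODEL_ID)
print(generator(prompt, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```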

Consistent with the company’s commitment to open-source AI, the Granite models are released under the permissive Apache 2.0 license.

The Granite 3.0 models were trained on more than 12 trillion tokens of data drawn from 12 natural languages and 116 programming languages, using a novel two-stage training method that leverages results from several thousand experiments designed to optimize data quality, data selection, and training parameters. By the end of the year, the 3.0 8B and 2B language models are expected to include support for an extended 128K context window and multi-modal document understanding capabilities.

An Era of Responsible AI

As we enter a new era of innovation and AI, industries must also consider how they approach the use of gen AI in a way that is both responsible and ethical. A new family of Granite Guardian models permits application developers to implement safety guardrails by checking user prompts and LLM responses for a variety of risks. The Granite Guardian 3.0 8B and 2B models provide risk and harm detection capabilities.

While the Granite Guardian models are derived from the corresponding Granite language models, they can be used to implement guardrails alongside any open or proprietary AI models.
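
To make the guardrail pattern concrete, here is a hedged sketch that screens the user prompt before generation and the model’s response after it, with a Guardian model acting as the checker for any generator. The model ids, the checker prompt, and the Yes/No parsing are illustrative assumptions; the exact input format for the Guardian models is defined in their model cards.

```python
# Guardrail pattern sketch: check the prompt, generate, then check the response.
from transformers import pipeline

GUARDIAN_ID = "ibm-granite/granite-guardian-3.0-2b"  # assumed model id
WORKER_ID = "ibm-granite/granite-3.0-2b-instruct"    # any open or proprietary model could sit here

guardian = pipeline("text-generation", model=GUARDIAN_ID)
worker = pipeline("text-generation", model=WORKER_ID)

def is_risky(text: str) -> bool:
    """Ask the guardian model whether the text is harmful (toy prompt format)."""
    check = f"Is the following text harmful? Answer Yes or No.\nText: {text}\nAnswer:"
    answer = guardian(check, max_new_tokens=3, return_full_text=False)[0]["generated_text"]
    return answer.strip().lower().startswith("yes")

def guarded_answer(user_prompt: str) -> str:
    """Only return a generated reply if both the prompt and the reply pass the check."""
    if is_risky(user_prompt):
        return "Request blocked by safety guardrail."
    reply = worker(user_prompt, max_new_tokens=128, return_full_text=False)[0]["generated_text"]
    if is_risky(reply):
        return "Response withheld by safety guardrail."
    return reply

print(guarded_answer("Summarize our Q3 expense policy in one sentence."))
```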

Certainly, this is just one example of how LLMs and AI are advancing, but it is exciting to see the opportunities they open up for the enterprise today.

Want to tweet about this article? Use hashtags #IoT #sustainability #AI #5G #cloud #edge #futureofwork #digitaltransformation #GenAI #IBM #watsonx
