Partners can ‘become AI providers themselves for their enterprise customers,’ says Joe Fernandes, vice president and general manager of Red Hat’s AI business unit.
Red Hat has unveiled the third version of its enterprise artificial intelligence platform, introducing Models-as-a-Service (MaaS) capabilities, a catalog of validated and optimized AI models, and a generative AI studio as part of its goal to let organizations rapidly scale and distribute AI workloads across hybrid, multi-vendor environments.
Red Hat AI 3 gives solution providers another path to manage customers’ AI use cases, Joe Fernandes, vice president and general manager of Raleigh, N.C.-based Red Hat’s AI business unit, said in response to a CRN reporter’s question during a press briefing.
Even for customers starting with AI vendors such as OpenAI or Google, Red Hat AI 3 offers the flexibility to work across multiple clouds, at the edge, on-premises and in a mix of environments, he said. Red Hat AI 3 helps users take multi-environment AI use cases to “the next level” beyond what they do with Red Hat OpenShift and Red Hat Enterprise Linux (RHEL).
“This opens up another area where we can partner with our SI [systems integrator] or managed service provider partners to help customers deploy these platforms, manage these platforms and take advantage of these technologies,” he said. “Rather than just consuming from the AI providers in the cloud, they become AI providers themselves for their enterprise customers.”
[RELATED: Red Hat Launches RHEL 10 With New Capabilities For Hybrid Cloud And AI Systems]
Red Hat AI 3
The IBM division’s latest version of Red Hat AI promises users improved cross-team collaboration on agents and other AI workloads leveraging a common platform. Red Hat AI 3’s open standards foundation should support any model and hardware accelerator from data centers to public cloud and sovereign AI environments.
Red Hat AI 3 adds Model-as-a-Service (MaaS) capabilities built on distributed inference. IT teams can provide their own MaaS and serve common models centrally with on-demand access for developers and applications, improving cost management. This should also allow for AI use cases that can’t run on public services due to privacy and data concerns, according to the vendor.
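Centrally served models of this kind are commonly exposed through an OpenAI-compatible API, the convention used by vLLM, the open-source inference engine Red Hat's distributed inference work builds on. The snippet below is a minimal, hypothetical sketch of the developer-side request an internal MaaS endpoint might accept; the endpoint URL, model name and token are illustrative placeholders, not details from Red Hat's announcement.

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict, bytes]:
    """Build an OpenAI-compatible /v1/chat/completions request for a
    centrally hosted model. URL, model and token are placeholders."""
    url = f"{base_url}/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer <internal-maas-token>",  # hypothetical credential
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, headers, body

# Point at a hypothetical internal MaaS endpoint rather than a public AI provider.
url, headers, body = build_chat_request(
    "https://maas.internal.example.com", "gpt-oss-20b",
    "Summarize this incident report.",
)
print(url)  # https://maas.internal.example.com/v1/chat/completions
```

Because the request shape matches the public providers' APIs, applications written against a cloud AI service can, in principle, be repointed at an internally hosted model by changing only the base URL and credentials.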
Red Hat AI 3 also now has a hub for exploring, deploying and managing AI assets. In addition, it has a catalog of validated and optimized generative AI models, a registry for managing model life cycles, and a deployment environment for configuring and monitoring AI assets running on Red Hat OpenShift AI.
The platform’s GenAI studio offers an environment for AI engineers to work with models, prototype GenAI apps and discover and consume available models and Model Context Protocol servers through an AI asset endpoint feature.
Engineers can use the studio to experiment, test prompts and tune parameters for chat, retrieval-augmented generation and other use cases.
OpenAI’s gpt-oss, DeepSeek-R1, the Whisper speech-to-text model and the Voxtral Mini model for voice-enabled agents are among the new Red Hat-validated and optimized models available to Red Hat AI 3 users, according to the vendor.
Other recent AI innovations by the vendor include a developer preview of an offline version of the RHEL command-line assistant, which uses AI to provide guidance and suggestions to users. Red Hat also announced general availability of Red Hat Ansible Automation Platform 2.6, which integrates the Ansible Lightspeed intelligent assistant directly into the user interface to help with troubleshooting, onboarding and more.