IBM’s Red Hat business unit is expanding its AI capabilities with the new Red Hat OpenShift AI technology.
The OpenShift AI platform was announced today at the Red Hat Summit. For the last decade, OpenShift has been Red Hat’s flagship application container offering based on the open source Kubernetes container orchestration platform. OpenShift AI is a version of the platform that (much as the name implies) is optimized to help enable AI and machine learning (ML) deployments.
The new platform is an evolution of the Red Hat OpenShift data science platform with a focus on helping enable the production deployment of AI models.
“We’ve focused so much of our time and energy in the past 10 or 20 years building application platforms, and today it’s about bringing the data workloads together with the same platform that we use to produce applications and run applications,” Red Hat CTO Chris Wright said in a briefing with press and analysts. “The challenges for enterprises to adopt AI/ML are huge.”
IBM is already using OpenShift AI
Wright noted that the reality for many enterprises is that data science experiments often fail, with fewer than half reaching production.
Red Hat’s goal with OpenShift AI is to provide a collection of tools for all of the training, serving and monitoring needed for AI, in a way that helps more models reach production. It’s an approach and technology that Red Hat has already proven via its parent company IBM.
Wright commented that the cost and complexity of training large language models (LLMs) are, well, particularly large. When IBM started to build out its new watsonx foundation models, which were publicly announced earlier this month, it turned to Red Hat OpenShift.
“Our platform is the platform that IBM uses to build, train and manage their foundation models just to show you the kind of scale and production capabilities that we have built into OpenShift AI,” he said.
The challenges of AI/ML deployments and Red Hat’s solution
Red Hat is building a series of enhanced capabilities into OpenShift AI. Among them are model performance capabilities: Wright said OpenShift AI will continue to improve data scientists’ ability to monitor and manage the performance of a model deployed into production. Part of that work is watching for potential model drift and making sure that a model remains accurate.
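Red Hat hasn’t shared code for how OpenShift AI implements drift monitoring, but the general idea Wright describes can be sketched in a few lines: compare the distribution of a feature in recent production traffic against the distribution seen at training time and raise an alert when they diverge. The feature, data and threshold below are hypothetical, and the two-sample Kolmogorov-Smirnov test from SciPy stands in for whatever statistic the platform actually uses.

```python
# A minimal drift-check sketch, not OpenShift AI's actual API.
# It flags a feature whose production distribution has shifted away
# from the training distribution using a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

def looks_drifted(train_values, prod_values, p_threshold=0.01) -> bool:
    """Return True when production data no longer matches training data."""
    _statistic, p_value = ks_2samp(train_values, prod_values)
    return p_value < p_threshold

# Hypothetical feature ("transaction_amount") drifting upward in production.
rng = np.random.default_rng(42)
train = rng.normal(loc=100.0, scale=15.0, size=5_000)  # training-time sample
prod = rng.normal(loc=120.0, scale=15.0, size=5_000)   # recent production sample

if looks_drifted(train, prod):
    print("Drift detected: alert the team or schedule retraining.")
else:
    print("No significant drift detected.")
```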
Deployment pipelines for AI/ML workloads are also critical. To that end, Red Hat OpenShift AI enables organizations to create repeatable approaches for model builds and deployment. There is also an effort to integrate custom runtimes for building AI/ML models.
“One of the things that we’ve discovered is that data science teams spend a disproportionate amount of their time just assembling their tools,” said Wright. “Of course, we can produce a set of tools, but it may not be the exact set of tools that an enterprise is looking for, so they may need to customize the runtime environment.”
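The article doesn’t say which pipeline tooling sits underneath these repeatable builds. As an illustration only, here is what a simple train-then-evaluate pipeline looks like with the open source Kubeflow Pipelines SDK (kfp v2), a common choice on Kubernetes-based platforms such as OpenShift; the component names, base image and return values are made up for the example.

```python
# Illustrative only: a repeatable train-and-evaluate pipeline defined with the
# open source Kubeflow Pipelines SDK (kfp v2). Not Red Hat's implementation.
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def train_model(epochs: int) -> str:
    # Placeholder training step; a real component would pull data and train.
    print(f"Training for {epochs} epochs")
    return "model-candidate-v1"

@dsl.component(base_image="python:3.11")
def evaluate_model(model_ref: str) -> float:
    # Placeholder evaluation step returning a hypothetical accuracy score.
    print(f"Evaluating {model_ref}")
    return 0.93

@dsl.pipeline(name="train-and-evaluate")
def training_pipeline(epochs: int = 5):
    train_task = train_model(epochs=epochs)
    evaluate_model(model_ref=train_task.output)

# Compiling produces a YAML definition that can be run again and again with
# the same steps, which is the kind of repeatability described above.
compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```

The compiled definition would then be handed to whatever pipeline server the cluster runs, which is the layer a platform like OpenShift AI is meant to manage.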
What’s also needed to help AI/ML workloads reach production is the ability to integrate AI quality metrics. Wright noted that many data science experiments fail because they lack alignment with business outcomes.
When that happens, “it’s hard to measure your success,” said Wright. “So, making sure we can build metrics into that whole pipeline I think is really critical.”
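Wright doesn’t spell out what those metrics look like in practice. As a hypothetical sketch, a pipeline step could compute a metric that stakeholders have tied to a business outcome, say recall on fraudulent orders, and refuse to promote a model that misses the agreed target; the threshold, labels and scikit-learn metric below are all assumptions for illustration.

```python
# Hypothetical quality gate for a model-promotion step; not Red Hat code.
# The metric (recall on fraud cases) and the 0.90 target stand in for
# whatever number the business has agreed to measure success by.
from sklearn.metrics import recall_score

BUSINESS_TARGET_RECALL = 0.90  # e.g. "catch at least 90% of fraudulent orders"

def quality_gate(y_true, y_pred) -> None:
    """Raise if the model misses the business-aligned target."""
    recall = recall_score(y_true, y_pred)
    if recall < BUSINESS_TARGET_RECALL:
        raise RuntimeError(
            f"Recall {recall:.2f} is below the target "
            f"{BUSINESS_TARGET_RECALL:.2f}; blocking promotion to production."
        )
    print(f"Quality gate passed with recall {recall:.2f}.")

# Toy labels: the model catches five of six fraud cases, short of the target.
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

try:
    quality_gate(y_true, y_pred)
except RuntimeError as err:
    print(err)
```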