As the demand for generative AI continues to grow, Nvidia is doing everything possible to make sure its tech stack for model development and deployment reaches where enterprises prefer to work.
Case in point: the computing giant’s latest partnership with Microsoft to integrate the Nvidia AI Enterprise software into Azure Machine Learning (Azure ML) and introduce deep learning frameworks on Windows 11 PCs.
The move, announced today at the ongoing Microsoft Build developer conference, accelerates both enterprise and individual AI efforts, and comes just hours after Nvidia announced Project Helix with Dell to bring generative AI to on-premises deployments.
Integration with Microsoft Azure Machine Learning
In simple terms, Nvidia AI Enterprise can be described as an end-to-end, secure software platform that accelerates the data science pipeline and streamlines the development and deployment of production AI.
Through this partnership with Microsoft, Nvidia is bringing that software layer into Azure ML, giving Azure cloud customers an enterprise-ready way to quickly build, deploy and manage customized AI applications. As part of the integration, Azure ML users get access to the more than 100 AI frameworks, pretrained models and development tools bundled with Nvidia AI Enterprise, such as Nvidia RAPIDS, as well as Nvidia's accelerated computing resources to speed up the training and inference of targeted AI models, including large language models (LLMs).
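To give a sense of the kind of GPU-accelerated tooling bundled in Nvidia AI Enterprise, here is a minimal sketch using RAPIDS cuDF, one of the libraries named above. It assumes a CUDA-capable GPU and a local cuDF installation (e.g., from the rapidsai conda channel), and the input file name is hypothetical; it is illustrative only, not specific to the Azure ML integration.

```python
# Minimal sketch: GPU-accelerated dataframe work with RAPIDS cuDF.
# Assumes a CUDA-capable GPU and cuDF installed; "transactions.csv" is a hypothetical file.
import cudf

# Load a CSV directly into GPU memory.
df = cudf.read_csv("transactions.csv")

# Run a groupby aggregation entirely on the GPU.
summary = df.groupby("customer_id")["amount"].sum()
print(summary.head())
```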
“The combination of Nvidia AI Enterprise software and Azure Machine Learning will help enterprises speed up their AI initiatives with a straight, efficient path from development to production,” Manuvir Das, vice president of enterprise computing at Nvidia, said in a statement.
In addition to the integration, which is available in preview by invitation only through the Nvidia community registry, the companies announced that Nvidia AI Enterprise and Omniverse Cloud will be coming to the Microsoft Azure Marketplace.
“What this means is that customers who have existing relationships with Azure can use the contracts they already have in place to access Nvidia AI Enterprise and use it either within Azure ML or separately on instances of their choice,” Das said in a press briefing. The same goes for Omniverse Cloud, which provides developers and enterprises with a full-stack cloud environment to design, develop, deploy, and manage industrial metaverse applications at scale.
AI on Windows 11
Finally, to help developers build AI models on their laptops, Nvidia announced that all of its GPU-accelerated deep learning frameworks will be enabled on Windows. This, the companies said, will be done through Windows Subsystem for Linux (WSL), which combines the best of Windows and Linux and allows AI libraries built for Linux to run on a Windows laptop.
Nvidia said it has been working closely with Microsoft to deliver GPU acceleration and support for its entire AI software stack inside WSL, allowing developers to use Windows PCs for all their local AI development needs.
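As a quick illustration of what GPU support inside WSL means in practice, the sketch below checks whether a Linux-built framework can see the GPU from a Windows laptop. It assumes PyTorch with CUDA support is installed inside a WSL 2 distribution and an Nvidia driver is present on the Windows host; it is a sanity check under those assumptions, not part of the announced tooling.

```python
# Minimal sketch: verify GPU visibility from inside WSL 2.
# Assumes PyTorch built with CUDA support and an Nvidia driver on the Windows host.
import torch

if torch.cuda.is_available():
    print("GPU visible from WSL:", torch.cuda.get_device_name(0))
    # Run a small matrix multiply on the GPU to confirm acceleration works end to end.
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul on GPU OK:", (x @ x).shape)
else:
    print("No CUDA device detected; check the Windows Nvidia driver and WSL 2 setup.")
```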
While this will make leading generative AI models available on PCs, Das noted that users will still have the option to do large-scale training on Azure.
“Of course, you can use Nvidia AI Enterprise and Azure ML to do the training and then push the models down to Nvidia PCs. It’s the same Nvidia stack, so it’ll run there,” he noted.
Microsoft Build runs through Thursday, May 25.