The open source machine learning (ML) framework PyTorch is moving forward with a new release, as well as a new project for enabling AI inference at the edge and on mobile devices.
The new developments were announced today at the PyTorch Conference, which loosely coincided with the one-year anniversary of the formation of the PyTorch Foundation under the Linux Foundation. As part of the event, technical details of the PyTorch 2.1 update, which was released on Oct. 4, were discussed.
Most notable, however, was the announcement of new mobile and edge efforts with PyTorch Edge and the open sourcing of ExecuTorch by Meta Platforms (formerly Facebook). ExecuTorch is technology for deploying AI models for on-device inference, specifically on mobile and edge devices.
Meta has already proven the technology: it powers the latest generation of Ray-Ban smart glasses and is also part of the recently released Quest 3 VR headset. As part of the open source PyTorch project, the goal is to push the technology further, enabling what could be a new era of on-device AI inference capabilities.
During the opening keynote at the PyTorch Conference, Ibrahim Haddad, executive director of the PyTorch Foundation, outlined the progress the organization has made over the past year.
“At the Linux Foundation we host over 900 technical projects; PyTorch is one of them,” Haddad said. “There are over 900 examples of how a neutral open home for projects helps projects grow, and PyTorch is a great example of that.”
The expanding inference capabilities of PyTorch 2.1
PyTorch has long been one of the most widely used tools for training AI, underpinning many of the world’s most popular large language models (LLMs), including OpenAI’s GPT models and Meta’s Llama, to name a few.
Historically, PyTorch has not been widely used for inference, but that is now changing. In a recent exclusive with VentureBeat, IBM detailed its efforts and contributions to PyTorch 2.1 that help to improve inference for server deployments.
PyTorch 2.1 also provides performance enhancements that should improve operations for the torch.compile function at the foundation of the technology. The addition of support for automatic dynamic shapes will minimize the need for recompilations due to tensor shape changes, and Meta developers added support for translating NumPy operations into PyTorch to accelerate certain types of numerical calculations commonly used in data science.
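The sketch below illustrates those two features as described in the 2.1 release notes: compiling a plain NumPy function with torch.compile, and letting automatic dynamic shapes absorb changing input sizes. Function names such as pairwise_score and scaled_sum are illustrative only, not from the release itself.

```python
import numpy as np
import torch

# With PyTorch 2.1, torch.compile can trace plain NumPy code and
# translate the NumPy operations into PyTorch ops under the hood.
def pairwise_score(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    return np.sum(x * y, axis=1)

compiled_score = torch.compile(pairwise_score)

x = np.random.randn(128, 64).astype(np.float32)
y = np.random.randn(128, 64).astype(np.float32)
print(compiled_score(x, y).shape)  # (128,)

# Automatic dynamic shapes: the first time an input shape changes,
# torch.compile recompiles with symbolic shapes, so subsequent shape
# changes can reuse the compiled code instead of triggering recompiles.
@torch.compile
def scaled_sum(t: torch.Tensor) -> torch.Tensor:
    return (t * 2.0).sum(dim=-1)

scaled_sum(torch.randn(8, 16))
scaled_sum(torch.randn(32, 16))  # different batch size, same compiled code
```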
ExecuTorch is on a quest to change the game for AI inference
In a keynote session at the PyTorch Conference, Mergen Nachin, a software engineer at Meta, detailed what the new ExecuTorch technology is all about and why it matters.
Nachin said that ExecuTorch is a new end-to-end solution for deploying AI for on-device inference, specifically for mobile and edge devices.
He noted that today’s AI models are extending beyond servers to edge devices such as mobile phones, AR and VR headsets, wearables, embedded systems and microcontrollers.
ExecuTorch addresses the challenges of restricted edge devices by providing an end-to-end workflow from PyTorch models to deliver optimized native programs.
Nachin explained that ExecuTorch starts with a standard PyTorch module, converts it into an exported graph, and then optimizes it with further transformations and compilations targeting specific devices.
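As a rough sketch of that workflow, the example below exports a toy module with torch.export and lowers it to an on-device program, assuming the executorch.exir APIs (to_edge, to_executorch) from the project’s early documentation; exact names and signatures may have shifted since the initial open source release, and TinyModel is purely illustrative.

```python
import torch
from executorch.exir import to_edge  # ExecuTorch export/lowering APIs (assumed)

# Illustrative model; any eager PyTorch nn.Module could stand in here.
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_inputs = (torch.randn(1, 16),)

# 1. Capture the module as an exported graph with torch.export.
exported_program = torch.export.export(model, example_inputs)

# 2. Lower the exported graph to ExecuTorch's edge dialect, where
#    device-specific transformations and compilation can be applied.
edge_program = to_edge(exported_program)

# 3. Produce the final ExecuTorch program and serialize it to a .pte
#    file that the on-device runtime loads for inference.
executorch_program = edge_program.to_executorch()
with open("tiny_model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```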
A key benefit of ExecuTorch is portability, with the ability to run on both mobile and embedded devices. Nachin noted that ExecuTorch can also help improve developer productivity by providing consistent APIs and software development kits across different targets.
ExecuTorch has been validated and vetted against real-world engineering problems, and Meta has already proven the technology by deploying it in its Ray-Ban Meta smart glasses.
With the technology now being made available as open source as part of the PyTorch Foundation, Nachin said the goal is to help the industry collaboratively address fragmentation in deploying AI models to the wide array of edge devices. Meta believes ExecuTorch can help more organizations take advantage of on-device AI through its optimized and portable workflow.
“Today we are open sourcing ExecuTorch and it’s still very early, but we’re open sourcing because we want to get feedback from the community and embrace the community,” he said.