3D can be a powerful tool for brands and creatives, offering immersive, engaging experiences and enhancing the design process.
Still, it can be expensive, time-consuming and difficult to execute effectively, which makes it impractical for many everyday enterprise use cases.
But generative AI, once again, is rising to the challenge — and today, Nvidia is looking to stake its claim in this new dimension. The company announced at GTC 2024 that its Nvidia Edify multimodal generative AI model can now generate 3D content, and that it has partnered with Shutterstock and Getty Images on Edify-powered tools.
Shutterstock is providing early access to an application programming interface (API) built on Edify that creates 3D objects for virtual scenes from text prompts and images.
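Shutterstock has not published the interface details in this announcement, but conceptually such a service accepts a text prompt or reference image and returns a mesh in a standard format. Below is a minimal sketch of what a client call might look like, assuming a hypothetical endpoint, request schema and the Python `requests` library; none of these specifics are confirmed by the announcement.

```python
import requests

API_URL = "https://api.example.com/v1/generate-3d"  # placeholder endpoint, not Shutterstock's real API
API_KEY = "YOUR_API_KEY"

# Hypothetical request body: a prompt, an optional reference image and a
# desired output format (the article mentions a choice of popular 3D formats).
payload = {
    "prompt": "weathered wooden crate, low poly, game-ready",
    "reference_image": None,   # optionally a URL or base64-encoded image
    "output_format": "glb",    # e.g. glb, usdz or obj
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=120,
)
response.raise_for_status()

# Assume the service returns the binary asset directly; save it for use in a
# DCC tool or game engine.
with open("crate.glb", "wb") as f:
    f.write(response.content)
```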
Getty, meanwhile, is adding custom fine-tuning capabilities to its gen AI service so that enterprise customers can generate visuals adhering to brand guidelines and style.
Developers will soon be able to test these models through Nvidia NIM, a new collection of inference microservices announced at GTC.
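NIM packages models as containerized services reached over HTTP, so testing typically amounts to sending a request to a deployed endpoint. A rough sketch under assumed details follows; the host, route and JSON schema are illustrative, not Nvidia's documented interface.

```python
import requests

# Placeholder host, port and route for a locally deployed microservice; the
# real interface is defined by the NIM documentation, not this sketch.
NIM_URL = "http://localhost:8000/v1/infer"

result = requests.post(
    NIM_URL,
    json={"prompt": "brushed-steel coffee mug, studio lighting"},
    timeout=60,
)
result.raise_for_status()
print(result.json())
```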
“3D asset generation is among the latest capabilities Edify offers developers and visual content providers, who will also be able to exert more creative control over AI image generation,” Gerardo Delgado, a director of product management at Nvidia, wrote in a blog post about the new capability.
Getty fine-tuning Edify to specific brands (including Sam’s Club, Mucinex and Coca-Cola)
One of the biggest challenges in gen AI is exercising finer control over image outputs.
To help address this problem, Getty announced Edify-powered APIs for inpainting and outpainting at the Consumer Electronics Show (CES) in January. Inpainting can add, remove or replace objects in an image, while outpainting can expand the canvas. Both features are now available on Getty’s website and iStock.com.
Beginning in May, the company will also offer services that let enterprises custom fine-tune Edify to their specific brand and style. This will be delivered through a no-code, self-service workflow in which brands upload proprietary datasets, review auto-tags, submit fine-tuning parameters and review results before deployment.
Additionally, developers will soon have access to Sketch, Depth and Segmentation features. These, respectively, allow users to submit a drawing to guide image generation; copy compositions of reference images via a “depth map”; and segment sections of images to add, remove or retouch characters and objects.
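All three controls follow the same pattern: a text prompt paired with an auxiliary image that constrains the composition. The snippet below is a hypothetical illustration of that pattern only; the endpoint and field names are assumed for the example rather than taken from Getty's API.

```python
import base64
import requests

# Encode the control input (a rough sketch here; a depth map or segmentation
# mask would be sent the same way).
with open("room_sketch.png", "rb") as f:
    control_image = base64.b64encode(f.read()).decode("ascii")

payload = {
    "prompt": "sunlit living room, mid-century furniture",
    "control_mode": "sketch",        # hypothetical: "sketch", "depth" or "segmentation"
    "control_image": control_image,
}

# Placeholder endpoint; assume it returns the generated image bytes directly.
resp = requests.post("https://api.example.com/v1/images", json=payload, timeout=120)
resp.raise_for_status()

with open("result.png", "wb") as f:
    f.write(resp.content)
```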
“Getty Images continues to expand the capabilities offered through its commercially safe gen AI service, which provides users indemnification for the content they generate,” writes Delgado.
Like Shutterstock’s, Getty’s gen AI tools are being used by “leading creatives and advertisers,” according to the company. Some of these include:
- Dentsu Inc.: The Japanese advertising agency is using Nvidia Picasso to fine-tune Getty’s model for membership retail giant Sam’s Club. The company is also using Getty’s model to support Manga Anime for All, which can generate manga- and anime-style content for marketing use cases.
- McCann: The creative agency used gen AI to create a game for the over-the-counter cold remedy Mucinex that lets users interact with its surly mascot, Mr. Mucus.
- WPP: The marketing and communications company is working with The Coca-Cola Company to fine-tune Getty’s model to build custom visuals for the iconic soda brand.
Shutterstock speeding up prototyping
With Shutterstock’s new 3D AI service, users can quickly generate virtual objects for ideation and set dressing. They can enter text prompts or reference images and choose from a selection of popular 3D formats, according to Nvidia.
“This capability can drastically reduce the time needed to prototype a scene, giving artists more time to focus on hero characters and objects,” writes Delgado.
The Edify-based tool is commercially safe and trained on Shutterstock’s licensed data. The stock photography company has compensated hundreds of thousands of artists, with “anticipated payments to millions more for the role their content IP has played in training generative technology,” according to Nvidia.
Shutterstock is also building Edify-based tools that light 3D scenes using 360 HDRi environments generated from text or image prompts.
Furthermore, at GTC this week, Shutterstock and HP are demonstrating their collaboration on custom 3D printing. With Shutterstock’s 3D AI generator, designers can create digital assets that HP can convert into 3D-printable models.
“HP 3D printers will then turn these models into physical prototypes to help inspire product designs,” writes Delgado.
Leading companies are already using Shutterstock’s 3D tools, including Dassault Systèmes and CGI studio Katana. The companies are integrating the generative 360 HDRi APIs into workflows built on Nvidia Omniverse, a platform for developing Universal Scene Description (OpenUSD)-based 3D applications.
Accenture Song is also leveraging Omniverse and gen AI-powered Edify microservices to create marketing content for the Defender vehicle.
This, Delgado writes, is “enabling the creation of cinematic, interactive 3D environments via conversational prompts. The result is a fully immersive 3D scene that harmonizes realistic generated environments with a digital twin of the Defender vehicle.”