Today, data ecosystem major Databricks announced new retrieval augmented generation (RAG) tooling for its Data Intelligence Platform to help customers build, deploy and maintain high-quality LLM apps targeting different business use cases.
Available in public preview starting today, the tools address the key challenges in developing production-grade RAG apps: serving relevant real-time business data from different sources, combining that data with the right model for the targeted application, and monitoring the application for toxicity and other issues that often plague large language models.
“While there is an urgency to develop and deploy retrieval augmented generation apps, organizations struggle to deliver solutions that consistently deliver accurate, high-quality responses and have the appropriate guardrails in place to prevent undesirable and off-brand responses,” Craig Wiley, senior director of product for AI/ML at Databricks, told VentureBeat.
The new tools target this exact problem.
What is RAG and why is it difficult?
Large language models are all the rage, but most models out there rely on parameterized knowledge, which makes them useful for responding to general prompts at light speed. To make these models more up-to-date and tailored to specific topics, especially for internal business needs, enterprises turn to retrieval augmented generation, or RAG. It is a technique that taps specific sources of data to enhance the accuracy and reliability of the model and improve the overall quality of its responses. Imagine a model that draws on a company’s HR data to help employees with different queries.
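At its core, RAG is a retrieve-then-generate loop: find the documents most relevant to a query, then hand them to the model as context. Here is a minimal, deliberately generic sketch of that pattern; the `call_llm` function is a hypothetical stand-in for any chat-completion endpoint, and the toy word-overlap retriever would be replaced by vector search in a real system.

```python
# A minimal, generic retrieve-then-generate loop to show the RAG pattern
# itself (not Databricks-specific). `call_llm` is a hypothetical stand-in
# for any chat-completion endpoint.

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of words the query and document share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Pick the k most relevant documents; real systems use vector search."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def answer(query: str, docs: list[str]) -> str:
    """Augment the prompt with retrieved context before generation."""
    context = "\n".join(retrieve(query, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)  # hypothetical LLM call

hr_docs = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Expense reports must be filed by the 5th of each month.",
]
# answer("How many vacation days do I get?", hr_docs)
```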
Now, the thing with RAG is that it involves multiple layers of work. You have to collect the latest structured and unstructured data from multiple systems, prepare it, combine it with the right models, engineer prompts, monitor the output and a lot more. This fragmented process leaves many teams with underperforming RAG apps.
How Databricks is helping
With the new RAG tools in its Data Intelligence Platform, Databricks is addressing this challenge, giving teams the ability to bring all these pieces together in one place and quickly prototype and ship quality RAG apps to production.
For instance, with the new vector search and feature serving capabilities, the hassle of building complex pipelines to load data into a bespoke serving layer goes away. All the structured and unstructured data (from Delta tables) is automatically pulled and synced with the LLM app, ensuring it has access to the most recent and relevant business information for providing accurate and context-aware responses.
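For illustration, here is roughly what wiring a Delta table into that serving layer looks like with the Databricks Vector Search Python client. Treat this as a sketch rather than a definitive recipe: the endpoint, catalog, table and column names are all assumptions, and exact arguments may vary by release.

```python
from databricks.vector_search.client import VectorSearchClient

client = VectorSearchClient()

# One-time setup: a vector search endpoint to host the index.
client.create_endpoint(name="rag_endpoint", endpoint_type="STANDARD")

# Create an index that Databricks keeps in sync with a Delta table,
# embedding the `text` column with a managed embedding model. All names
# here (catalog, schema, table, columns) are assumptions.
index = client.create_delta_sync_index(
    endpoint_name="rag_endpoint",
    index_name="main.rag.docs_index",
    source_table_name="main.rag.docs",
    pipeline_type="TRIGGERED",
    primary_key="id",
    embedding_source_column="text",
    embedding_model_endpoint_name="databricks-bge-large-en",
)

# At query time, the app retrieves context with a similarity search.
hits = index.similarity_search(
    query_text="How do I file an expense report?",
    columns=["id", "text"],
    num_results=3,
)
```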
“Unity Catalog automatically tracks lineage between the offline and online copies of served datasets, making debugging data quality issues much easier. It also consistently enforces access control settings between online and offline datasets, meaning enterprises can better audit and control who is seeing sensitive proprietary information,” Databricks’ co-founder and VP of engineering Patrick Wendell and CTO for Neural Networks Hanlin Tang wrote in a joint blog post.
Then, with the unified AI Playground and MLflow evaluation, developers get the ability to access models from different providers, including Azure OpenAI Service, AWS Bedrock and Anthropic, as well as open-source models such as Llama 2 and MPT, and see how they fare on key metrics like toxicity, latency and token count. This ultimately enables them to deploy their project on the best-performing and most affordable model via model serving, while retaining the option to change whenever something better comes along.
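As a rough sketch of what that comparison can look like in code, MLflow’s evaluation API (version 2.8 and later) ships built-in LLM metrics covering exactly these dimensions. The `query_rag_app` wrapper and `call_endpoint` helper below are hypothetical placeholders for whichever endpoint is under test:

```python
import mlflow
import pandas as pd

def query_rag_app(df: pd.DataFrame) -> list[str]:
    # Hypothetical wrapper: call whichever serving endpoint is under test.
    return [call_endpoint(q) for q in df["inputs"]]

eval_data = pd.DataFrame({
    "inputs": ["How many vacation days do employees accrue per month?"],
    "ground_truth": ["Employees accrue 1.5 vacation days per month."],
})

# model_type="question-answering" adds exact-match scoring; the extra
# metrics are the ones the article calls out. toxicity() downloads a
# small classifier on first use (needs the `evaluate`/`transformers`
# packages installed).
results = mlflow.evaluate(
    model=query_rag_app,
    data=eval_data,
    targets="ground_truth",
    model_type="question-answering",
    extra_metrics=[
        mlflow.metrics.toxicity(),
        mlflow.metrics.latency(),
        mlflow.metrics.token_count(),
    ],
)
print(results.metrics)  # e.g. toxicity/v1/mean, latency/mean, ...
```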
Notably, the company is also releasing Foundation Model APIs, a fully managed set of large language models that are served from within Databricks’ infrastructure and can be used on a pay-per-token basis, delivering cost and flexibility benefits with enhanced data security.
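Querying one of these pay-per-token endpoints can be as simple as the following sketch using MLflow’s deployments client; the endpoint name matches one of the Llama 2 models Databricks serves, but treat the specifics as assumptions:

```python
from mlflow.deployments import get_deploy_client

# The Foundation Model APIs expose chat-style, OpenAI-compatible
# endpoints on Databricks; the endpoint name below is assumed.
client = get_deploy_client("databricks")
response = client.predict(
    endpoint="databricks-llama-2-70b-chat",
    inputs={
        "messages": [
            {"role": "user", "content": "Summarize our PTO policy in one sentence."}
        ],
        "max_tokens": 128,
    },
)
print(response["choices"][0]["message"]["content"])
```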
Once the RAG app is deployed, the next step is tracking how it performs in the production environment, at scale. This is where the company’s fully managed Lakehouse Monitoring capability comes in.
Lakehouse Monitoring can automatically scan the responses of an application to check for toxicity, hallucinations or any other unsafe content. This level of detection can then feed dashboards, alert systems and related data pipelines, allowing teams to take action and prevent large-scale hallucination fiascos. The feature is directly integrated with the lineage of models and datasets, ensuring developers can quickly understand errors and the root cause behind them.
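Attaching a monitor to the table where an app logs its requests and responses looks roughly like the sketch below, using the Lakehouse Monitoring Python client. The table, schema and column names are assumptions, and the safety scanning itself runs inside the managed service rather than in user code:

```python
from databricks import lakehouse_monitoring as lm

# Attach a monitor to the Delta table where the RAG app logs requests
# and responses. Names and arguments here are assumptions; metric
# tables and dashboards are generated into the output schema.
lm.create_monitor(
    table_name="main.rag.inference_log",
    profile_type=lm.TimeSeries(
        timestamp_col="ts",
        granularities=["1 day"],
    ),
    output_schema_name="main.rag_monitoring",
)
```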
Adoption already underway
While the company has only just launched the tooling, Wiley confirmed that multiple enterprises are already testing and using it with the Databricks Data Intelligence Platform, including RV supplier Lippert and EQT Corporation.
“Managing a dynamic call center environment for a company our size, the challenge of bringing new agents up to speed amidst the typical agent churn is significant. Databricks provides the key to our solution… By ingesting content from product manuals, YouTube videos, and support cases into our Vector Search, Databricks ensures our agents have the knowledge they need at their fingertips. This innovative approach is a game-changer for Lippert, enhancing efficiency and elevating the customer support experience,” Chris Nishnick, who leads data and AI efforts at Lippert, noted.
Internally, the company’s teams have built RAG apps using the same tools.
“Databricks IT team has multiple internal projects underway that deploy Generative AI, including piloting a RAG slackbot for account executives to find information and a browser plugin for sales development reps and business development reps to reach out to new prospects,” Wiley said.
Given the growing demand for LLM apps tailored to specific topics and subjects, Databricks plans to “invest heavily” in its suite of RAG tooling, aimed at ensuring customers can deploy high-quality LLM apps based on their data to production, at scale. The company has already committed significant research to this space and plans to announce more innovations in the future, the product director added.