A new chapter in WordLift’s Journey
We are pleased to introduce WordLift Reader, a new connector for LlamaIndex and LangChain, born from our spirit of continuous innovation in generative AI for SEO. This feature represents a small but significant evolution: it allows users to interact directly with their knowledge graph through engaging conversations.
The power of combining knowledge graphs with language models
In our journey to make the web more intelligent and accessible, we recognize the enormous potential of combining the structured world knowledge of knowledge graphs with the fluid, context-aware capabilities of large language models (LLMs). This fusion, which we call neuro-symbolic AI, is a game changer. With their ability to store and organize large amounts of information, knowledge graphs provide a solid foundation of relevant facts and relationships. LLMs, on the other hand, with their ability to understand and generate human-like text, bring a level of personalization that static web pages simply cannot match. By bringing these two powerful technologies together, we are empowering and fundamentally transforming the customer experience.
We are moving from a world where information is passively consumed to a digital ecosystem where users engage in rich, vibrant, personalized conversations powered by semantic concepts and structured data.
A modern architecture for generative AI apps
The evolving architecture of LLM applications is a fascinating mix of new and existing technologies. I am living through the AI hype and the evolution of its stack with the same enthusiasm I had back in the mid-nineties, when Mosaic was the interface to the Internet.
Large language models (LLMs) are a powerful new tool for software development, but their unique characteristics require a combination of legacy ETL techniques and innovative approaches (embedding, prompting, intent matching) to unlock their full potential. The LLM application stack reference architecture includes the standard systems, tools, and design patterns used by AI startups and tech companies.
The architecture, as outlined by a16z, is primarily focused on in-context learning: the ability of LLMs to be steered through clever prompting techniques and access to private “contextual” data.
The architecture can be divided into three main stages:
- Data preprocessing / embedding. This is where our KG comes into play with the new connector. It is in this preprocessing stage that semantic data is extracted, chunked, passed through an embedding model, and stored in a set of indexes (vector stores are the most common type).
- In the prompt construction stage, a series of prompts is compiled for submission to the LLM. A prompt typically combines a hard-coded template, examples of valid output, information from external APIs, and relevant documents retrieved from the vector database.
- In the prompt execution stage, the prompt is passed to a pre-trained LLM for inference.
LlamaIndex, LangChain, Semantic Kernel and other emerging frameworks orchestrate the process and provide an abstraction layer on top of the LLM.
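To make the three stages concrete, here is a minimal sketch of the flow using LlamaIndex. The documents, the index type and the query below are illustrative assumptions, not part of WordLift Reader itself, and import paths may differ depending on your llama-index version.

```python
# Minimal sketch of the three stages with LlamaIndex (illustrative only).
from llama_index import Document, VectorStoreIndex

# 1. Data preprocessing / embedding: turn structured data into documents,
#    chunk them and embed them into a vector index.
docs = [
    Document(text="Acme running shoes, lightweight, 89 EUR, available in blue."),
    Document(text="Returns are accepted within 30 days with the original receipt."),
]
index = VectorStoreIndex.from_documents(docs)  # embeddings + vector store

# 2. Prompt construction: the query engine assembles a prompt from a template
#    plus the most relevant chunks retrieved from the vector store.
query_engine = index.as_query_engine(similarity_top_k=2)

# 3. Prompt execution: the assembled prompt is sent to the LLM for inference.
response = query_engine.query("Which running shoes do you sell, and can I return them?")
print(response)
```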
The WordLift connector loads the data we need from the knowledge graph using GraphQL, a query language introduced by Facebook that enables precise, efficient data retrieval. With just a few lines of code, we can specify exactly what data we need, extracting the subgraphs we want from our KG. We do this by using specific properties of the schema.org vocabulary (or whatever custom ontology we use).
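As an illustration, a GraphQL request that selects a product subgraph by its schema.org properties might look like the sketch below. The endpoint URL, the field names and the authorization header format are assumptions for this sketch; check your own graph’s GraphQL schema for the exact shapes.

```python
# Illustrative GraphQL request against a WordLift knowledge graph.
# Endpoint, field names and auth header are assumptions for this sketch.
import requests

ENDPOINT = "https://api.wordlift.io/graphql"  # assumed endpoint
QUERY = """
{
  products(page: 0, rows: 20) {
    id: iri
    name: string(name: "schema:name")
    description: string(name: "schema:description")
    url: string(name: "schema:url")
  }
}
"""

response = requests.post(
    ENDPOINT,
    json={"query": QUERY},
    headers={"Authorization": "Key YOUR_WORDLIFT_KEY"},  # assumed auth scheme
)
print(response.json())
```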
GraphQL enables the connector (WordLift Reader) to stay up to date with the latest changes to the website without crawling, ensuring that every conversation is always based on the freshest data. This seamless integration via GraphQL between our KG and LlamaIndex (or LangChain) allows the connector to transform static, structured data into dynamic, interactive conversations.
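Here is a minimal sketch of how data loaded by the reader can feed a LlamaIndex vector index. We assume the reader is published on LlamaHub as “WordLiftLoader”; its exact constructor arguments may differ, so treat them as assumptions and check the documentation linked further down.

```python
# Sketch: load a product subgraph through the WordLift reader and index it.
# The constructor arguments below are assumptions for illustration.
from llama_index import VectorStoreIndex, download_loader

WordLiftLoader = download_loader("WordLiftLoader")  # assumed LlamaHub name

reader = WordLiftLoader(
    endpoint="https://api.wordlift.io/graphql",          # assumed endpoint
    headers={"Authorization": "Key YOUR_WORDLIFT_KEY"},   # assumed auth header
    query=QUERY,                                          # GraphQL query from the previous sketch
    fields="products",                                    # assumed: root field to read
    configure_options={                                   # assumed: text vs. metadata mapping
        "text_fields": ["name", "description"],
        "metadata_fields": ["url"],
    },
)

documents = reader.load_data()
index = VectorStoreIndex.from_documents(documents)

chat = index.as_query_engine()
print(chat.query("Can you recommend a lightweight running shoe?"))
```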
Modulation: Tailoring conversations to the user’s needs
We incorporate the concept of modularity into the new generative AI stack for SEO. We see building blocks like the “TalkMe” widget introduced in my previous blog post as ways to “modulate” conversations: the connector is designed to use different subgraphs and provide specific responses. This feature allows us to tailor conversations to specific customer needs.
In the example below, the shopping assistant is created by accessing:
- schema:Product for product discovery and recommendation;
- schema:FAQPage for information about the store’s return policy.
Let’s take a look at the dialogue goals we can address using product data alone versus using product data together with Q&A content.
The power of using your schema.org knowledge graph with LLMs. On the left, the AI uses only schema:Product; on the right, the AI uses both schema:Product and schema:FAQPage.
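A minimal way to express this modularity, assuming two readers configured as in the earlier sketch (one querying schema:Product entities and one querying schema:FAQPage entities), is to build one index per subgraph:

```python
# Sketch: one index per schema.org class, so the assistant can "modulate" its answers.
# product_reader and faq_reader are assumed to be two WordLift readers configured
# with different GraphQL queries (schema:Product vs. schema:FAQPage subgraphs).
from llama_index import VectorStoreIndex

product_index = VectorStoreIndex.from_documents(product_reader.load_data())
faq_index = VectorStoreIndex.from_documents(faq_reader.load_data())

# Two dedicated query engines: one for discovery/recommendation,
# one for store policies taken from the FAQ content.
product_engine = product_index.as_query_engine()
faq_engine = faq_index.as_query_engine()

print(product_engine.query("Suggest a gift for a trail runner."))
print(faq_engine.query("What is your return policy?"))
```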
Here’s how to connect with your audience using AI agents
The notebook below provides a reference implementation that uses our demo e-commerce website. To see how everything works, open the link and follow the steps.
Ready to jump to code? 🪄 wor.ai/wl-reader-demo (Here’s a Colab Notebook to see how it all works).
📖 Here is a link to the documentation.
We will build an AI-powered shopping assistant using LlamaIndex and the knowledge graph behind our e-commerce demo website (https://product-finder.wordlift.io/). You must add your OpenAI API key and WordLift key to run this tutorial. We will start by setting up the environment, installing the required libraries such as langchain, openai and llama-index, and configuring the API keys.
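A typical setup cell might look like the following sketch; the package versions and environment-variable names are assumptions, so match them to the notebook.

```python
# Environment setup (illustrative; pin versions as in the notebook if needed).
# !pip install llama-index langchain openai

import os

# Keys are read from environment variables here; the notebook may ask for them
# interactively instead.
os.environ["OPENAI_API_KEY"] = "sk-..."       # your OpenAI API key
WORDLIFT_KEY = os.getenv("WORDLIFT_KEY", "")  # your WordLift key
```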
The main part of the code is focused on using the WordLiftReader class to create a vector index of the store’s products.
We will also see how to create multiple indexes based on different schema classes, such as schema:Product and schema:FAQPage. This allows the AI agent to efficiently handle different types of user queries: for example, the agent can recommend products based on customer preferences or provide information about the store’s return policy.
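As a hedged sketch of the agent step, reusing the two query engines from the earlier sketch, each index can be exposed as a tool and a router can pick the right one per question. Import paths vary across llama-index versions, so treat these as indicative.

```python
# Sketch: route each user question to the right index via query-engine tools.
# Import paths follow the 2023 llama-index layout and may have moved since.
from llama_index.tools import QueryEngineTool, ToolMetadata
from llama_index.query_engine import RouterQueryEngine

tools = [
    QueryEngineTool(
        query_engine=product_engine,
        metadata=ToolMetadata(
            name="products",
            description="Product discovery and recommendations (schema:Product).",
        ),
    ),
    QueryEngineTool(
        query_engine=faq_engine,
        metadata=ToolMetadata(
            name="store_faq",
            description="Store policies such as returns and shipping (schema:FAQPage).",
        ),
    ),
]

agent = RouterQueryEngine.from_defaults(query_engine_tools=tools)
print(agent.query("I'd like trail running shoes. Can I return them if they don't fit?"))
```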
Conclusion
Introducing our new connector is an important step in our mission to make the web smarter and more accessible. By transforming your knowledge graphs into interactive conversations, we enhance the user experience and pave the way for more effective SEO implementation.
We’re excited to see how our customers use this new tool to create a more engaging, personalized and optimized web experience.
Stay tuned for more updates as we continue to innovate and push the boundaries of what is possible with the synergistic combination of KG and LLMs!
Frequently Asked Questions
How is LlamaIndex different from LangChain?
LlamaIndex and LangChain are two open-source libraries that can be used to build applications that harness the power of large language models (LLMs). LlamaIndex provides a simple interface between LLMs and external data sources, while LangChain provides a framework for building and managing LLM-powered applications. In short, LlamaIndex is primarily focused on intelligent data storage and retrieval, while LangChain is a tool for combining multiple tools. At the same time, you can build agents using LlamaIndex alone, or use LlamaIndex as a tool within a LangChain agent.
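As a hedged sketch of the second option (wrapping a LlamaIndex query engine as a LangChain tool), the snippet below follows the 2023 APIs of both libraries; class names and arguments may have changed since, and product_engine comes from the earlier sketches.

```python
# Sketch: expose a LlamaIndex query engine to a LangChain agent as a tool.
# API paths reflect the 2023 releases of both libraries and may have changed.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

product_tool = Tool(
    name="product_search",
    func=lambda q: str(product_engine.query(q)),  # product_engine from the earlier sketches
    description="Answers questions about the products in the store.",
)

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = initialize_agent(
    [product_tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("Find me a lightweight running shoe under 100 EUR.")
```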
What orchestrators are available for large language model interfaces?
Various orchestration frameworks, such as LangChain and LlamaIndex, help us abstract away prompt chaining, interface with external APIs, and retrieve knowledge from various sources. The most famous are Microsoft’s Semantic Kernel, FlowiseAI, Auto-GPT, AgentGPT and BabyAGI. Similar solutions include JinaChat or the Italian Cheshire Cat AI.