
Vector Search

We demonstrate Tacnode's vector search capabilities. Watch as we explore how Tacnode efficiently finds similar data using high-dimensional vector representations. We'll showcase a practical example with our AI chatbot, which recommends B&B listings based on natural language queries. You'll see how the chatbot converts user queries into vector embeddings and uses cosine similarity to surface relevant suggestions.

INTRODUCTION

Welcome to our demonstration of Tacnode's vector search capabilities. We will be using the Tacnode Playground for this demonstration.

Vector similarity search is a highly efficient technique for finding similar data based on vector representations. Data objects (such as text and images) are converted into high-dimensional vectors and stored in a database. At query time, the query is converted into a vector as well, and we calculate the similarity between the query vector and the vectors in the database. Compared to keyword search, vector search captures semantic information better: it can handle synonyms, typos, and ambiguous language, as well as cross-lingual similarity.

With the rise of large language models (LLMs) and generative AI, vector search has grown in popularity. Most commonly, it is used in Retrieval-Augmented Generation, or RAG for short. RAG systems use vector databases to find the most relevant documents, which are then provided to LLMs as augmented context to generate more informative, accurate responses. This means that vector search quality directly affects the performance of RAG systems.
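To make the similarity calculation concrete, here is a minimal Python sketch of cosine similarity. The toy 3-dimensional vectors and document names are made up for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (illustrative only).
documents = {
    "beach house": [0.9, 0.1, 0.2],
    "mountain cabin": [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]

# Rank stored vectors by similarity to the query vector.
ranked = sorted(documents, key=lambda d: cosine_similarity(query, documents[d]), reverse=True)
print(ranked[0])  # the semantically closest document: "beach house"
```

A higher cosine value means the two vectors point in more similar directions, which is why the query about a beach ranks the "beach house" vector first.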

SCHEMA OVERVIEW

In the tables used by the Tacnode Playground, there's a listing_embedding table, which plays a key role in vector search. The listings table contains text descriptions for each listing, which are converted into vector embeddings and stored in the listing_embedding table under the embedding field. There's also a chunk_id field in the listing_embedding table. Since the text descriptions are usually quite long, we need to partition the text into chunks before generating the embeddings, which is why we need a chunk_id to locate the position of any particular chunk of text. For example, if a 3000-word description is partitioned into 1000-word chunks, we would have chunk_id values 0, 1, and 2. The listings and listing_embedding tables are joined on the listing ID.
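The chunking step described above can be sketched in a few lines of Python. This is not Tacnode's actual chunking code, just an illustration of how fixed-size word chunks pick up sequential chunk_id values:

```python
def chunk_words(text, chunk_size=1000):
    # Split a long description into fixed-size word chunks,
    # tagging each chunk with a sequential chunk_id.
    words = text.split()
    return [
        {"chunk_id": i, "text": " ".join(words[start:start + chunk_size])}
        for i, start in enumerate(range(0, len(words), chunk_size))
    ]

# A 3000-word description yields chunk_ids 0, 1, and 2, as in the example above.
description = " ".join(["word"] * 3000)
chunks = chunk_words(description)
print([c["chunk_id"] for c in chunks])  # [0, 1, 2]
```

Each chunk would then be embedded separately and stored as its own row in listing_embedding, keyed by listing ID and chunk_id.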

TACNODE PLAYGROUND

In the Tacnode Playground, we have a chatbot that supports natural language conversations. It's essentially an AI agent that can recommend Tacnode BnB listings that might be appealing to us.

For example, we can ask the chatbot "What are some of the best oceanfront properties?" And we see that it has recommended three properties on the coast of Australia. Let's check the backend to see what actually happened here. The AI agent modifies the query and calls an embedding model to generate the vector embedding of this query. Once we have the embedding, in order to find the properties that are most similar to what the guest requested, we use cosine similarity to measure how close the query vector is to the vectors in the listing_embedding table.
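The retrieval step can be sketched as a top-k search over the stored chunk embeddings. The rows, listing IDs, and query vector below are hypothetical stand-ins (real embeddings come from the embedding model, and Tacnode performs this search inside the database rather than in application code):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical rows from listing_embedding: (listing_id, chunk_id, embedding).
listing_embedding = [
    (101, 0, [0.9, 0.1, 0.1]),
    (101, 1, [0.7, 0.3, 0.2]),
    (202, 0, [0.1, 0.9, 0.2]),
    (303, 0, [0.6, 0.4, 0.2]),
]

def top_k_listings(query_vec, k=3):
    # Score every chunk, keep the best score per listing, return top-k listing IDs.
    best = {}
    for listing_id, _chunk_id, emb in listing_embedding:
        score = cosine_similarity(query_vec, emb)
        best[listing_id] = max(score, best.get(listing_id, -1.0))
    return sorted(best, key=best.get, reverse=True)[:k]

query_vec = [0.85, 0.15, 0.1]  # stand-in for the embedding of the guest's query
print(top_k_listings(query_vec))  # listings ranked by best-matching chunk
```

Because each listing can have several chunks, we score per chunk and keep the best score per listing before ranking.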

Next, we can obtain the relevant information for these listings and pass it as context to the LLM. We see that the LLM generates a response based on this augmented context, and the final result is displayed through the guest interface.
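The context-assembly step can be sketched as follows. The listing records and prompt wording here are hypothetical, not the Playground's actual prompt template:

```python
# Hypothetical listing details fetched by joining the retrieved IDs back to listings.
retrieved = [
    {"id": 101, "name": "Coastal Retreat", "description": "Oceanfront villa with private beach access."},
    {"id": 303, "name": "Seaside Cottage", "description": "Cozy cottage steps from the shore."},
]

def build_rag_prompt(question, listings):
    # Concatenate the retrieved listings into a context block for the LLM.
    context = "\n".join(f"- {l['name']}: {l['description']}" for l in listings)
    return (
        "Answer using only the listings below.\n\n"
        f"Listings:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt("What are some of the best oceanfront properties?", retrieved)
print(prompt)
```

Grounding the LLM in this retrieved context is what lets the response cite specific, relevant listings rather than relying on the model's general knowledge.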

Now, we can click on one of these recommended properties and interact with the AI Co-Host. We can ask this chatbot questions about the property to inform our booking decisions. For example, we can ask, "Is this property suitable for a family of three?" The AI Co-Host will then give us more information about the property and explain its reasoning for why this property is or isn't suitable for a family of three.

And with that, we conclude this demonstration of Tacnode's vector search capabilities. Thank you for tuning in.