The Use of Vector Databases in LLM Use Cases
Large Language Models (LLMs) are all around us today. The field of Artificial Intelligence is changing by the minute, and new, useful tools appear daily. Every company is looking for ways to introduce AI into its workflows: to generate more revenue, to market more effectively to a targeted audience, or to improve internal processes for employees. To leverage these LLM use cases in the best way possible, you usually need your own company data.
In what follows, we'll discuss how to optimally prepare this company data using vector databases.
Using a vector database to structure your company data
One way to make more efficient use of the power of these Large Language Models is to combine them with a vector database that structures your data. A vector database, as the name implies, is a database specifically designed for storing vectors: fixed-length arrays of numbers. Since most of the data we keep is not vectorised, at first glance this seems highly impractical or unsuitable for your business needs. However, in nearly all use cases your company data can be vectorised.
Transforming text into vectors
Not optimal: Word mapping
Let's look at your company data as text, since most of it is essentially text structured in a certain way. Imagine you want to turn that text into an array of numbers so you can use it in a vector database. You could try to map every word to a number and build a vector that way. However, in most languages there are words with multiple meanings depending on the context in which they are used. A plain word mapping would lose the context of the text you are trying to vectorise. It would also mean that the similarity of two words is determined entirely by their mapping indices. For example, the words “bed” and “bee” could end up closely related simply because their indices are adjacent, when in reality those two words are rarely used in the same context.
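As a minimal sketch of this word-mapping idea, the snippet below assigns every word in a tiny, made-up vocabulary an index based on alphabetical order. The vocabulary and example sentence are purely illustrative.

```python
# A naive word-to-index mapping over an alphabetically sorted vocabulary.
vocabulary = sorted(["bed", "bee", "bloom", "garden", "piano", "rose"])
word_to_index = {word: i for i, word in enumerate(vocabulary)}

# "Vectorise" a sentence by replacing each word with its index (-1 for unknown words).
sentence = "bee in the garden"
vector = [word_to_index.get(word, -1) for word in sentence.split()]
print(vector)  # [1, -1, -1, 3]

# "bed" and "bee" receive neighbouring indices (0 and 1) only because they sort
# next to each other, not because their meanings are related.
print(word_to_index["bed"], word_to_index["bee"])
```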
Best practice: Use an embedding model
Another, more robust way to vectorise your text is to let a Large Language Model do the work for you. One thing LLMs excel at is predicting which word comes next in a given sentence. To make that prediction, the model needs a way to capture the context of the earlier words in the sentence, so that it can generate a new word that fits that context.
Let's look at an example:
Imagine you want to find a type of flower that blooms in the same period as the roses in your garden.
You turn to an LLM and type in the sentence “Roses bloom in June and so do…”, hoping the LLM will complete it with a different kind of flower. If the model was trained well, it should not complete the sentence with something that doesn't fit the context: where 'violets' would be acceptable, 'pianos' would not be. You could think of it as the model 'asking itself' how rose-like violets are compared to a piano. In this context, violets are closely related to roses.
If you quantify this relation as a 'distance between contexts', along with other characteristics, you end up with a vector that represents the context of your textual input, as illustrated for roses and violets in the image. Keep in mind that this 2D representation is simplified; in reality you'll quickly end up with more than 1,024 dimensions.
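To make that 'distance between contexts' concrete, here is a minimal sketch that compares hypothetical 2D context vectors with cosine similarity. The numbers are made up for illustration; real embeddings have far more dimensions.

```python
import numpy as np

# Hypothetical 2D "context vectors", purely for illustration.
roses = np.array([0.90, 0.10])
violets = np.array([0.85, 0.20])
piano = np.array([0.05, 0.95])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means identical direction (same context), values near 0 mean unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(roses, violets))  # ~0.99: very similar context
print(cosine_similarity(roses, piano))    # ~0.16: barely related
```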
Luckily for us, many of the LLM providers on the market offer an embedding model to do exactly this work for you. You supply your text as input and get back a vectorised representation based on its context. Under the hood, the model performs much of the same computation as when you interact with an LLM's chat interface, like ChatGPT. But instead of generating a response to your input or question, it tells you how the model 'interprets' your text.
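As an example of what such a call can look like, the sketch below uses OpenAI's Python SDK; the model name and dimension count reflect one provider's offering at the time of writing and will differ per provider.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Roses bloom in June and so do violets.",
)

vector = response.data[0].embedding
print(len(vector))  # 1536 numbers that together represent the context of the sentence
print(vector[:5])   # first few components, e.g. [0.012, -0.034, ...]
```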
The power of contextual search
Now that all your textual input is represented as vectors, you can insert your items into a vector database. This is where the real benefit arises: you can now query the database and ask it for the nearest vectors. This turns your usual textual search into a contextual search, meaning results are based on the context of your search query rather than on exact keyword matches.
This is very powerful when searching a large database for data points with a similar contextual meaning or subject.
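Here is a minimal sketch of such a contextual search, using the open-source vector database Chroma as an example; any vector database with insert and nearest-neighbour query operations would work, and the documents are made up for illustration.

```python
import chromadb

# In-memory client for illustration; a production setup would use a persistent store.
client = chromadb.Client()
collection = client.create_collection(name="product_docs")

# Chroma embeds the documents with a default embedding model on insert;
# pre-computed vectors can also be passed in via the `embeddings` argument.
collection.add(
    ids=["doc-1", "doc-2", "doc-3"],
    documents=[
        "How to reset your password from the login screen.",
        "Our subscription plans and how billing works.",
        "Troubleshooting steps when the mobile app won't start.",
    ],
)

# Contextual search: the query is embedded and compared against the stored vectors.
results = collection.query(query_texts=["I forgot my password"], n_results=1)
print(results["documents"][0][0])  # -> the password-reset document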
Your own custom LLM use case
Let's say you want to build a support chatbot for your product or service. Using the principles explained above, you can feed all of your product documentation, along with frequently asked questions and their answers, into a vector database. This way you can store a large amount of contextual information to power your LLM in a very efficient way. Given the token (usage) limits imposed by LLM providers like OpenAI, this is a huge benefit.
Now, when a customer interacts with your chatbot, the system first searches the vector database for similar questions or cases and feeds the top results to your Large Language Model. The LLM can then generate an appropriate response based on the product documentation, FAQs and answers, without you having to send your entire knowledge base along with every request.
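Putting the pieces together, here is a minimal sketch of that retrieval step followed by the LLM call. It assumes the Chroma collection from the earlier snippet and OpenAI's chat API; the model name, prompt wording and `answer` helper are illustrative.

```python
from openai import OpenAI

llm = OpenAI()

def answer(question: str, collection) -> str:
    # 1. Retrieve the most relevant snippets from the vector database.
    hits = collection.query(query_texts=[question], n_results=3)
    context = "\n\n".join(hits["documents"][0])

    # 2. Send only those snippets to the LLM, instead of the whole knowledge base.
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Answer the customer using only the context below.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How do I reset my password?", collection))
```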
In the end, this optimal set-up would translate into:
A better generated response
A lower load on the LLM, which translates to a lower cost
A more efficient use of the token limit