Hello guys, welcome to this new module on understanding multimodal RAG. So guys, till now we have discussed creating different kinds of RAG applications. And in RAG, let's say that there are two main important components. So let's say that I have my vector store. And in this vector store,
how do we go ahead and store our data? Okay. So let's say that we initially have some kind of data. This data can be a PDF file, a Word doc file, a database file, any kind of file, right? So usually, for this particular data, we convert it into chunks.
Then further, we use some kind of embedding model and we convert these chunks into some kind of vector representation. And we store everything inside our vector store, right? Then, once we have everything in the vector store, from this particular vector store we create a retriever. Why do we create a
retriever? Because whenever any kind of new query comes in, we will perform an embedding on that particular query. And with this query, the retriever can retrieve similar kinds of results from the vector store. Okay, then from this,
we get the top-k documents and we give them to the LLM model. The LLM model will finally be able to generate the output based on the prompt that we have given. So these are the two main fundamental modules that we specifically work with in a RAG pipeline. Yes, there are techniques like re-rankers, and you can combine different strategies like sparse and dense retrieval.
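Just as a quick refresher, that standard text-only flow can be sketched in a few lines. This is a minimal, illustrative example, not the course notebook; the model name, chunk list, and k value are assumptions:

```python
# Minimal sketch of the standard text-only RAG flow described above
# (illustrative names and parameters, not the course notebook).
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

chunks = ["...chunk 1...", "...chunk 2..."]              # output of your loader + text splitter
vector_store = FAISS.from_texts(chunks, OpenAIEmbeddings())
retriever = vector_store.as_retriever(search_kwargs={"k": 5})

def answer(query: str) -> str:
    top_k_docs = retriever.invoke(query)                 # embed the query, fetch similar chunks
    context = "\n\n".join(d.page_content for d in top_k_docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return ChatOpenAI(model="gpt-4.1").invoke(prompt).content
```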
How to go ahead and do the retrieval, how to apply filtering, keywords and all, we have discussed, right? But let's consider one specific use case with the data. Okay, so let's say that I have a PDF file, and this is what my PDF file looks like. In my PDF file, let's consider there is some kind of text data. But let's say that I go
ahead and add some more information over here in the form of images. So here I have some kind of images. Let's say this particular image shows the revenue of a company. So it is a kind of revenue chart of a company: you have some textual information, again you have some kind of images over here, again you have text, right? Whenever we talk about multimodal RAG, in short, multimodal is nothing but we are going to go
ahead and use two important things: text plus images. So let's say that in your data you have both text and images. Can you also take this particular image and do some kind of similarity search? Let's say that I go ahead and ask, Hey, from the diagram on page one, can you talk about the findings or provide a summary? Right? So from this particular document,
you should be able to retrieve the information based on this particular image, and we should be able to generate the output. So whenever we talk about multimodal, whenever we talk about these kinds of scenarios, here we are working with both text and image data. Now we need to find out a way to store the text and the image data in the vector store. Obviously, for storing the text data, we know that we have applied different kinds of
embeddings. But till now, we have never seen what kind of embeddings we can apply for images so that we can store them in a vector store. And at the end of the day, when we give a specific query, how is it going to do a search with respect to the text and with respect to the images, right? All these things we will specifically discuss in this multimodal RAG. Okay? And when I say multimodal,
multimodal simply means text plus images. When I talk about RAG, I'm talking about creating this entire pipeline. So we will discuss this, and I will also show you the entire step-by-step flow diagram of how we are going to implement it. Okay? Let's say that we have a PDF, and in this PDF we have some kind of text and images. The first thing that you actually require
is a multimodal LLM. Okay? Now, what does this basically mean? So till now, you have worked with different LLMs, but those are, if I talk about a normal LLM like GPT-3.5 or GPT-4, very specific to text generation, means they are good at
text generation. Okay? When we talk about a multimodal LLM, these LLMs are trained on both text and images. That basically means if you give an image as input to this LLM, it will also be able to talk about what is there in the image itself.
If you ask anything related to text, then also they will be able to provide you responses. So whenever I talk about a multimodal LLM, that basically means it has been trained on both text and images. These are the kinds of models we are going to use. Okay? If I have to name some of the models in OpenAI, there are many models available which are multimodal,
but we will build the application with GPT-4. Let me just go ahead and check in my code file; again, there are different models which you can use, right? In our examples, what we are specifically going to use is the GPT-4.1 model. I was just checking in my code what model we have specifically used;
you can also use any other kind of model. If you go into the OpenAI documentation, you will be able to see this. If I take one more example, there is the Google Gemini model, right? If you go ahead and see the Google Gemini Flash model, specifically the 2.5 version, that is also a multimodal model. That basically means it will be able to work with both text and images. So we will consider these kinds of models and we will develop an entire RAG pipeline. Okay? Now, the
most important thing is: what are the steps that we really need to follow in order to solve this problem? So let's go ahead and discuss the steps. Let's say the first thing is that I have a PDF document, so I'll go ahead and write this: this is my PDF document. Okay? And in order to draw the steps, I will write it in a much clearer way so that you should be able to
see this. Okay? I will take this. So let's say these are the entire steps that I'm going to write over here. So the first step: let's say that I have my data source over here. Okay? So this is my PDF document. Let's say inside this PDF document,
you have both text and images. So from this PDF document, the first step will be that I will go ahead and extract the text and images. Okay? First of all, I will go ahead and extract text and images. So here, let me go ahead and write it down for you. Extract
text and images. Now the question arises: how are we going to extract the text and images? There should be some library for this, right? Because at the end of the day, we have to read this PDF document, and while reading it, we should be able to distinguish which parts are text and which are images. So for this, I'll be showing you which library
we will be using. Okay? So this is my first step. Now, let's say I have completed this step. Then come the second and third steps; let's consider them together. So the second step is that, what I will do,
from this text, we will convert it into chunks. And from those, we are going to perform embeddings, because for text there will be different kinds of embeddings, right? Similarly, over here for the images, we are going to take them and perform embeddings as well.
Now the question arises: Krish, how are we going to do this? Do we use different embedding models to handle the text and the images separately, or should we just use one model? The best idea is that we try to use a single model. Now, which model will we be using for this? There is a model called the CLIP model. Right? So for this CLIP, I'll talk about the full form, what exactly
CLIP is. This CLIP model is provided by OpenAI. Now, how is this model trained? This model is trained with image-and-text mappings. So it is trained on something like 400 million image-text pairs, I think, and it is open source. It is available on Hugging Face, and we will try to use this specific model. And what we are going to do is that based on
the text, we will try to convert that into an embedding. And based on the images, we will also try to convert those into embeddings. The reason for using one model is that this model is already trained on an image-text mapped dataset, you can consider it that way. Right? It has already been trained on a huge dataset. So this will actually create similar kinds of vectors based on the
text and the images. Okay? Then the next step over here: once we are able to do this, we will store these images in base64 format, because that is the format that is actually required. We will store these particular images as base64. Okay? Now the next thing is that once we perform this particular embedding,
we take both these embeddings and we store them in some kind of vector store. So let's say that I will go ahead and use a FAISS vector store, where I'm going to store all these embeddings. This is my next step. Okay? So now I have my vector store ready. All I have to do is take a new query. So let's say that I go ahead and take a new query.
And for this particular query, I will again use this CLIP model. Why will I be using this CLIP model? So that I will be able to convert this query into an embedding. Right? So here, I will just write "CLIP embed". That basically means this CLIP model will be used to convert the query into an embedding vector. Okay? So once we do this,
the next step is that we will do a vector search from here. So this will basically be my retriever. Once we do the retrieval here, we are going to get the top-k documents. And these top-k documents will specifically contain text plus image information.
Then, before sending it to the LLM model, as I said, let's say that I'm using my multimodal LLM model, that is OpenAI GPT-4.1. Let's say I'm using this specific model.
Okay? Before I give this text and these images, they should be in a specific format. So here, we are going to convert this into a specific format. And this format will be passed to the LLM model. And finally, we are going to get the multimodal answer.
So these are the steps that we are going to specifically follow in order to implement this entire multimodal RAG.
So this is my RAG. The format is necessary because multimodal LLMs require the retrieved content in a specific format; only then will they be able to provide you the answer. Okay? So these are the steps. Here you can see: initially we take this PDF document, we read the document, we extract the text and images, then we take the text, convert it into chunks, then convert those into embeddings.
Similarly, we convert the images into embeddings. We'll use this CLIP model; it is from OpenAI. The best part is that it is already trained on image-to-text mappings. Okay? Then we are going to store this in a vector store. Then whenever we get a new query, we'll embed it. Then, through the retriever, we'll hit the FAISS vector store and we'll get the relevant text and image information.
Then we'll format it and give it to the LLM model. And finally, we'll get the multimodal answer. Okay? So this is what we are specifically going to do. And the best part will be that when we do this, you'll be able to see how efficient it is and how easily you can build this entire thing. Okay? And one more thing which I missed: whenever I talk about CLIP, right?
The full form of CLIP is nothing but Contrastive Language-Image Pre-training. We are going to specifically use this. Okay?
So I hope you have understood this entire flow. You can see that this CLIP will be very, very handy because it can process both text and images. That's the best part about it. Okay? And in the case of images, it uses something called a vision transformer; in the case of text, it uses a text encoder, I mean a text transformer.
So if you see the architecture of CLIP, it has the combination of a vision transformer plus a text transformer. The text transformer is for the text, and the vision transformer is for the images, right? So I hope you liked this particular video on multimodal RAG. In the next video, we are going to do the practical implementation. Thank you.
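To make the idea of one shared embedding space concrete, here is a small, self-contained sketch (illustrative, not the course notebook; the image path and captions are made up) that uses the Hugging Face CLIP checkpoint to score one image against a couple of captions:

```python
# Quick CLIP demo: score one image against candidate captions with the
# openai/clip-vit-base-patch32 checkpoint (illustrative example).
from transformers import CLIPModel, CLIPProcessor
from PIL import Image
import torch

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("revenue_chart.png")                       # any local image
captions = ["a bar chart of quarterly revenue", "a photo of a cat"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)              # image-to-caption similarity
print(dict(zip(captions, probs[0].tolist())))
```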
So guys, now let's go ahead and implement the multimodal RAG, where the data source we are going to consider is a PDF with images. Now, if you remember the entire flow, the first thing is that we will take the PDF document and then extract the text and images. Okay? So for this example, we are going to consider this specific PDF.
And this PDF, you can see, is a very simple PDF just to show you one basic example. In this particular PDF you can see some important information: the document summarizes the revenue trend across Q1, Q2, Q3. As illustrated in the chart below, revenue grew steadily, with the highest growth recorded in Q3. And here you have all three charts. Okay? I've not even mentioned Q1, Q2, Q3 on the chart, but I've given some text as well. Right?
Now, considering this, we will go ahead and ask some queries and see whether my RAG will be able to handle this or not. So I'm going to use that particular dataset itself. Okay? Now, coming back to our simple diagram over here. From this, you can see that the first step is very simple: reading the PDF document and extracting the text and the images. Okay?
So how do we go ahead and do this? For this, we will be using the library called PyMuPDF. Okay? Inside PyMuPDF there is a module called fitz. The best part about this library is that it is very good at text extraction, image extraction, speed, and memory usage. So we will go ahead and use this. If you look at other libraries like PyPDF, pdfplumber, PDFMiner, they are not that good. Okay?
So first of all, I will import some of the libraries. Let me go ahead and import all the specific libraries which we are going to use. So here you can see that we are importing fitz, we are using the Document class, and I have already told you why we will be using CLIP; for that we will be importing two important classes from transformers: CLIPProcessor and CLIPModel.
I will discuss more about this. Then we have "from PIL import Image" so that we can play with the images. Along with this, we will import torch. Then from langchain.chat_models we have init_chat_model, along with the prompt template, HumanMessage, and cosine similarity. Then along with that, you also have "import base64". I am using RecursiveCharacterTextSplitter for the text, and this is FAISS for the vector store; fitz, as I said, is the library that will be used in order to read the PDF. Okay?
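Pulled together, that import cell looks roughly like this (exact module paths vary a bit between LangChain versions, so treat it as a sketch rather than the notebook verbatim):

```python
import base64, io, os                     # base64 image storage, byte buffers, env vars
import fitz                               # PyMuPDF, for reading the PDF
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from langchain.chat_models import init_chat_model
from langchain_core.documents import Document
from langchain_core.messages import HumanMessage
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
```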
So now let me quickly go ahead and execute this. Once we execute this, it will take some amount of time, but it will get executed; there are quite a few libraries that we are importing. Okay? The next thing is that, as I said, I will be requiring the CLIP model. And I have already told you what this CLIP model is: it is nothing but Contrastive Language-Image Pre-training. Right?
The main aim of this particular CLIP model is that after extracting the text and the images, we will take the text, convert it into chunks and convert those into embeddings; similarly, if you have the images, we will convert those into embeddings. And for that, we will be using this CLIP model itself. Okay? And this is again an open-source model from OpenAI. So let me quickly go ahead and set up two important things. First of all, here we are going to load the CLIP model. Okay?
So if you want to load the CLIP model, two things are basically required: one is the processor and the other one is the model. I will talk about why we require the processor as well. But before that, I will quickly import os, and from dotenv I'm going to import load_dotenv and call load_dotenv(). Let's set up the environment. Okay?
Now, for setting up the environment, I'll write os.environ, and here I'm going to write OPENAI_API_KEY, and I will set it with os.getenv. Now you may be thinking, am I doing this for the CLIP model? No. I'll be using the OpenAI multimodal LLM, right? So for that, I'm just setting up this particular environment. Okay?
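That environment-setup cell is roughly the following (assuming the key lives in a local .env file):

```python
# Load the OpenAI key from a local .env file and expose it to the client libraries.
import os
from dotenv import load_dotenv

load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
```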
Now let's go ahead and initialize the CLIP model for unified embeddings. Quickly, let's do that. So the first thing is that I will go to Hugging Face. Let me just open the browser quickly for you.
Here I will search for the CLIP model on Hugging Face. So here you can see all the information is over here: CLIP is a multimodal vision and language model motivated by overcoming the fixed number of categories in image classification, and you can find all the CLIP checkpoints under the OpenAI organization. All these things are there, right? You can read more about it, and we are going to use this.
Now, for initializing the CLIP model, two things are basically required over here. One, I will create a variable called clip_model, and here I'm going to write CLIPModel.from_pretrained, so I'm going to directly load the model. And the model name, again, you can go to Hugging Face and check it. Okay?
It is nothing but openai/clip-vit-base-patch32. So this is the model that we are going to specifically use, and this is the model that will be responsible for converting both the text into embeddings and the images into embeddings. Okay? And along with this, since we need to use this particular model, I will also go ahead and create clip_processor, and here let me write CLIPProcessor.from_pretrained.
And here we are going to write openai/ and the same model name. Now, what is this processor? See, for giving input to any model, this clip_processor makes sure that whatever format is required, the input gets converted into that particular format. Okay? So for CLIP, you need these two things: one is the processor and one is the model.
Now, what I can do is write .eval(), just to put the model in evaluation mode and see the entire CLIP model architecture. This is going to take some amount of time; again, it depends from system to system. So here you can see the CLIP model is using the CLIP text transformer: embeddings, position encodings, all these things, and it projects everything into 512 dimensions. Then you have the CLIP encoder. All this information you will be able to see for this particular model. Okay?
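In code, that loading step comes down to roughly this (the checkpoint name is the one shown on Hugging Face; the variable names mirror what I am typing):

```python
# Load the CLIP checkpoint once: the model produces the embeddings, the processor
# turns raw text/images into the tensors the model expects.
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip_model.eval()   # inference mode; in a notebook this also prints the module tree
```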
So for the second step, which I have already told you about, we are going to use this CLIP model from OpenAI, and we have loaded it. Okay? Now, coming to the next important step: we need to find a way of embedding the images. How do we embed the images? Basically, take the image part and convert it into image vectors or image embeddings. Right?
So for that, we will go ahead and create two embedding functions, where we are going to specifically use this CLIP model. Right? One is def embed_image; here we have to give our image data, and then we will embed the image using CLIP. Okay?
Now, where this image data will come from, I will talk about in the later stages. First of all, we will check with isinstance(image_data, str) whether it is a path; if I am providing the image data in the form of a path, then we will open the image with the help of the Image class from PIL.
Otherwise, if it is already an image, let's say we give the image data directly, then it will just be considered as the image. Okay? Now, once we have this image data, since we need to convert this image into an embedding vector, we are going to use the clip_processor, and inside this I am going to give images=image. Okay?
And we are going to pass return_tensors="pt", which is basically saying, hey, you need to return the tensors as PyTorch tensors. This will make sense if you have some understanding of deep learning: what it does is convert the whole input into tensors. Right? Now, the next thing is that we will also perform normalization.
See, once we get this, these will be the inputs that need to be given to my CLIP model. Right? So now I will write "with torch.no_grad():". Here we are going to take the image features; there is a function inside the CLIP model called clip_model.get_image_features, and based on these inputs I will be able to get those features. Okay? Then these two lines of code are basically normalizing the embeddings to a unit vector.
That is what is being done, because, see, every image's dimensions will be different, right? If we really want to convert the features into a unit vector, we can use this, wherein we take the features and divide them by their norm, keeping the dimension as -1. And then finally, we convert this into a NumPy array. Okay? So this is how the embedding happens for images. Now, you know that this CLIP model is also very good at converting text into embeddings.
So here you can see embed_text using CLIP. I have used the same clip_processor; here we are giving text, return_tensors="pt", padding=True, truncation=True, and max_length=77. And again, we normalize this. Here you can see that instead of using get_image_features, we are using get_text_features. These text features are what convert the text into a feature vector. So for this function we give text; for the other one we give image data.
But at the end of the day, we are using both the clip_model and the clip_processor in each. Perfect. So these are the two functions: one is embed_text and one is embed_image. We have completed these functions. Now, let's go ahead and read this PDF document and extract the images. Okay? So first of all, let me go ahead and process the PDF; I give my PDF path.
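Written out, those two functions look roughly like this (a sketch reconstructed from what I just described; the handling of the else branch and the final squeeze/convert details are assumptions):

```python
def embed_image(image_data):
    """Embed an image with CLIP and L2-normalise it to a unit vector."""
    if isinstance(image_data, str):            # a file path
        image = Image.open(image_data).convert("RGB")
    else:                                      # already a PIL image
        image = image_data
    inputs = clip_processor(images=image, return_tensors="pt")
    with torch.no_grad():
        features = clip_model.get_image_features(**inputs)
    features = features / features.norm(dim=-1, keepdim=True)   # unit vector
    return features.squeeze().numpy()

def embed_text(text):
    """Embed a text chunk with CLIP (max 77 tokens) and L2-normalise it."""
    inputs = clip_processor(text=text, return_tensors="pt",
                            padding=True, truncation=True, max_length=77)
    with torch.no_grad():
        features = clip_model.get_text_features(**inputs)
    features = features / features.norm(dim=-1, keepdim=True)   # unit vector
    return features.squeeze().numpy()
```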
So let's say my PDF path is multimodal_sample.pdf. So this is the PDF name, the same PDF which I have actually shown. Now, I will go ahead and write doc = fitz.open and give this PDF path, whatever the PDF path is. Okay? Now, initially what will we do? We will try to create some variables.
We will create variables wherein we will store all documents and embeddings. So these are my all_docs, all_embeddings and image_data_store. Okay? So we are creating these particular variables. The next step is that we will use some kind of text splitter, because a text splitter is also required here, right? So here you can see I have used RecursiveCharacterTextSplitter. See, I am copying and pasting those things which you already know; that is the reason why I am doing this. So now I have my doc. Okay?
Now, with respect to the doc, if I just execute this and look at what my doc is, here you can see that with doc = fitz.open this is what my doc is: a document object for multimodal_sample.pdf. Now, once I have this variable doc, what I am actually going to do is iterate through all the pages that are available inside it.
And then I will first of all get my text data, convert it into text chunks and then convert those into embeddings. Similarly, for the image data, I will get all the images and convert those into image vectors. Okay? So that is what we are basically going to do. So I will write for i, page in enumerate(doc). Okay?
Then I will first of all process the text; this is my first step. Then my second step is to process the images. Okay? Now, for processing the text, first of all I will write text = page.get_text(). So we are going to use this particular function, and with this you are going to get all the text on the page. Then I will write if text.strip():.
Right? .strip() just removes the surrounding whitespace, so we skip empty pages. Here we are going to create a temporary document for splitting, so let me do this and keep it in the form of a Document data structure. Okay? So this is my first step and we have done this.
See, temp_doc is a Document data structure with page_content=text and metadata holding some information. And remember, for all the text data you need to keep the metadata as type="text". This is really important. And then we are going to use split_documents on this. Now, after this step, we are going to get the chunks, right? So now I'm going to embed each chunk using CLIP.
So for chunk in text_chunks, we use the embed_text function. Then I am collecting all the embeddings inside that all_embeddings variable, and doing all_docs.append for the chunk. If you remember what embed_text is, it is nothing but the function which gets the text features, normalizes them and gives you back those features. Right? So that is what we have basically done over here.
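Put together, the setup plus the text branch of that loop looks roughly like this (the chunk_size and chunk_overlap values are my assumptions; the notebook's values may differ):

```python
all_docs, all_embeddings, image_data_store = [], [], {}
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)

pdf_path = "multimodal_sample.pdf"
doc = fitz.open(pdf_path)

for i, page in enumerate(doc):
    # --- 1) text on this page ---
    text = page.get_text()
    if text.strip():
        temp_doc = Document(page_content=text, metadata={"page": i, "type": "text"})
        for chunk in splitter.split_documents([temp_doc]):
            all_embeddings.append(embed_text(chunk.page_content))   # CLIP text embedding
            all_docs.append(chunk)
```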
So guys, now similarly for the images, we need to follow three important steps. I will just quickly add comments for you, because there is a lot of code to be written, and I'm trying my best to teach it in a way that you are able to understand. I hope everybody has understood what we did for the text. Now, for processing images, we need to perform three important actions. One is to convert the PDF image to PIL format.
Two, store it as base64 for the GPT model, the multimodal model itself. Three, create CLIP embeddings for retrieval, just like we did for the text. Now, in this particular loop we already have this particular page. So similarly, I will copy some code for you and tell you what we will be doing. Okay?
So here, in the same loop, we will write it like this. See? I'm enumerating through every page's images, and there is a function called get_images, so this will actually get all the image information. As I said, we need to convert the PDF image to PIL format; that is what we are doing here. You can see we are converting this into a PIL image, and before that we need to convert the image information that we are getting into image bytes.
Then we create a unique identifier just to name that particular image. So here you can see it is based on the page index and the image index. Then we are storing the image as base64 for later use by the GPT model, whichever model we are giving it to. This PIL image is saved in PNG format, and then I have the base64-encoded image. So this is basically storing the image as base64.
And finally, we embed using CLIP, where I have to give the PIL image. Then I get the embedding, and in the variable that we have created, all_embeddings, we put that specific embedding as well, so all_embeddings.append(embedding). Finally, we create the Document for the image. Remember, here you need to put the metadata as type="image", and this will basically carry your image ID. And this also goes into all_docs. So this is what we are basically doing.
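As a sketch, the image branch of that same page loop looks roughly like this (the placeholder page_content for the image document and the exact variable names are my assumptions; the loop header is repeated here only so the snippet stands on its own):

```python
# Image branch of the page loop (in the notebook this lives inside the loop above).
for i, page in enumerate(doc):
    for img_index, img in enumerate(page.get_images(full=True)):
        xref = img[0]
        image_bytes = doc.extract_image(xref)["image"]         # raw bytes out of the PDF
        pil_image = Image.open(io.BytesIO(image_bytes)).convert("RGB")

        image_id = f"page_{i}_img_{img_index}"                 # unique identifier
        buf = io.BytesIO()
        pil_image.save(buf, format="PNG")                      # keep a base64 copy for the LLM
        image_data_store[image_id] = base64.b64encode(buf.getvalue()).decode("utf-8")

        all_embeddings.append(embed_image(pil_image))          # CLIP embedding for retrieval
        all_docs.append(Document(page_content=f"[Image: {image_id}]",
                                 metadata={"page": i, "type": "image", "image_id": image_id}))
```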
This part is processing the text, and this part is processing the images, and step by step we have written out what we are doing, right? Please have a look at the code; if you go through it step by step, you will be able to understand all of it. I have also written proper comments so that you are able to follow it. So now, once you execute this, it has executed successfully, which looks good.
Now, if you want to see how many embeddings and how many documents have been created, you can just execute it over here: all_embeddings. This is what my all_embeddings looks like, right? Similarly, if you want to look at some of the other embeddings, you can see those too. So all_embeddings is there, and there is also one variable called all_docs. So how many docs are there?
Here you can see there are two docs, which is absolutely good. And with respect to these particular docs, here you can see the image and its image ID information, right? And this is my text. So this one has type "image" and this one has type "text", right? So we are able to get this, which is perfect. Now the next step will be that we need to create a vector store. So for creating a vector store, I will just go ahead and create a unified FAISS vector store.
So I'm using that all_embeddings over here, right? You can see I'm using that same all_embeddings alongside this embeddings array, and then FAISS.from_embeddings. There is also another function called from_documents, but here we are going to use from_embeddings, wherein we directly give the embeddings themselves, right? So here text_embeddings is there, and I'm iterating through all the documents and the embeddings array.
See, if I just look at what my embeddings array is, you should be able to see this. So let's print this embeddings array. This is what my embeddings array looks like for the two entries. So I'm taking all the docs and all the embeddings, combining them with a zip and taking the page content. See, if I do zip, what will happen?
See, if I just combine these with zip(all_docs, embeddings_array), here you can see that this is my page information, this is my image information, and for this, this is my vector; and for my second record, this is my vector, right? And it is 512-dimensional, as I've already told you.
So we are iterating through, getting the page content, and putting it inside my text_embeddings. Here the embedding argument will be None, because we have already done that embedding ourselves; that is the reason we have used this from_embeddings function. So this becomes my vector store. Now, from this particular vector store you can do anything: you can create a retriever, you can do whatever you like, right? Now quickly, let's go ahead and initialize my LLM model. For this, we are going to use GPT-4.1; it is a multimodal model.
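As a sketch, that vector-store cell and the LLM initialisation look roughly like this (passing embedding=None works here only because we never ask the store to embed anything itself; if your LangChain version complains, pass a small dummy Embeddings object instead):

```python
embeddings_array = np.array(all_embeddings)

# Pair each document's page_content with its precomputed CLIP vector and
# build FAISS straight from those vectors.
text_embeddings = [(d.page_content, emb) for d, emb in zip(all_docs, embeddings_array)]
vector_store = FAISS.from_embeddings(
    text_embeddings=text_embeddings,
    embedding=None,                                  # vectors are precomputed with CLIP
    metadatas=[d.metadata for d in all_docs],
)

# Multimodal LLM used for the final answer generation.
llm = init_chat_model("openai:gpt-4.1")
```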
And with the help of this particular model, you can implement a RAG pipeline with multimodal RAG. Okay. So this will be able to understand both the text and the images. Right? Now, this is done; my vector store is ready. Now I can go ahead and use it in creating a RAG pipeline. But before creating the RAG pipeline, I want to take this vector store and show you how a search will basically happen. Okay.
So here, with respect to this particular search, we have created a retrieval function: unified retrieval using CLIP embeddings for both text and images, because here is my vector store. So this will basically be my retrieval, right? So whenever I give a query, first of all we embed the query text and get the query embedding, and then we search it in the vector store. So let me go ahead and write this particular query. Okay.
So here, what we need to do is search based on the query embedding, right? But I already have this query embedding, so I can do it directly: I will create a variable results = vector_store. There are a lot of methods, as I told you, right? similarity_search_by_vector, and there is also similarity search with relevance scores. Here I can directly do it with the vector, because I already have that query embedding.
So here I will write embedding=query_embedding; I first typed a double equals operator, but it should be a single equals with query_embedding, comma, whatever the k value is, k=k, right? And if you remember what k is, the value that we have given is k=5, and it should be a small k, okay?
Query embedding, why is this wrong? Let me check. similarity_search_by_vector, query_embedding; I think I have made some mistake in the spelling, just a second. Okay, so here, oh, there was an indentation issue. So with query_embedding and similarity_search_by_vector, we return the results. Perfect.
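So the retrieval helper ends up looking roughly like this (the function name retrieve_multimodal is my placeholder for whatever the notebook calls it):

```python
def retrieve_multimodal(query, k=5):
    """Unified retrieval: embed the query with CLIP, then search FAISS by vector."""
    query_embedding = embed_text(query)
    results = vector_store.similarity_search_by_vector(
        embedding=query_embedding,   # note: single '=', we pass the precomputed vector
        k=k,
    )
    return results
```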
So here you can see that I am able to retrieve the results from here, okay? Now, this retrieval function is just like a retriever, right? You can see that from this retriever I am able to get some text and images, right, the top-k relevant ones. But we need to convert this into a format before we give it to our LLM. So for that format, we will define one more function, and now I will give you one assignment for this.
Please go ahead and just check this function, okay? This function is called create_multimodal_message: create a message with both text and images for GPT-4V, okay? So here I have created a content list, and content.append adds a part with type "text" containing the question and context; it's just like one template of information. Then I separate the text and image documents from the retrieved docs, right? And where do the retrieved docs come from? From that particular retrieval function, right?
We go ahead and separate the text and image documents. Then, if we have text docs, we create a separate text context for them; if there are images, we create a separate image context for them. Here we give image_url, because the image URL field is basically required along with the image data, okay, for the GPT-4 version that we are specifically using. And then finally we have this content and we return it as a HumanMessage. Now, just go ahead and look at this, okay?
Just go ahead and see how this message is basically created. There is a specific format that is required for GPT-4.1, right? And that is what we use over here. The main fundamental is that whatever retrieved documents we are getting, we separate them into text docs and image docs, okay? Now, it's time that we go ahead and integrate this into a RAG pipeline.
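As a sketch, the message builder I am describing looks roughly like this (the prompt wording is illustrative; the image parts use the base64 data-URL form that OpenAI's multimodal chat models accept):

```python
def create_multimodal_message(query, retrieved_docs):
    """Build one HumanMessage holding the question, the text context,
    and every retrieved image as a base64 image_url part."""
    content = [{"type": "text",
                "text": f"Answer the question using the context below.\n\nQuestion: {query}"}]

    text_docs = [d for d in retrieved_docs if d.metadata.get("type") == "text"]
    image_docs = [d for d in retrieved_docs if d.metadata.get("type") == "image"]

    if text_docs:
        text_context = "\n\n".join(d.page_content for d in text_docs)
        content.append({"type": "text", "text": f"Text context:\n{text_context}"})

    for d in image_docs:
        b64 = image_data_store[d.metadata["image_id"]]
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"}})

    return HumanMessage(content=content)
```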
So here, inside this function, you can see context_docs: I'm calling the multimodal retrieval based on the query. Then we are creating this multimodal message in the format the LLM wants. And with this particular message we just invoke the LLM, llm.invoke. And this part is basically for printing the entire information, okay? That's it, right? For context_docs, we are just printing all the information over here.
If you want to see the response, you can also return response.content over here, right? And this part is basically printing all the relevant context that you have got from the retriever. Very simple, right? Two to three functions and you should be able to do this, okay? And the exciting part is that we'll go ahead and check how things are working over here, okay? So I will ask a few different questions under if __name__ == "__main__":.
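Tying it together, the pipeline function is roughly this (again a sketch over the helpers above, using my placeholder names):

```python
def multimodal_rag_pipeline(query, k=5):
    """Retrieve mixed text/image context, format it for the multimodal LLM, and answer."""
    context_docs = retrieve_multimodal(query, k=k)
    message = create_multimodal_message(query, context_docs)
    response = llm.invoke([message])

    # Show what was retrieved, then return the generated answer.
    for d in context_docs:
        print(f"Retrieved {d.metadata.get('type')} from page {d.metadata.get('page')}")
    return response.content
```

Calling it is then just print(multimodal_rag_pipeline("What does the chart on page one show about revenue trends?")).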
The first is: what does the chart on page one show about revenue trends? Then: summarize the main findings of the document. And: what visual elements are present in the document? So these are the three questions. I will print the query, call this multimodal RAG pipeline, and display the result. So guys, finally, let's go ahead and execute this. And here I've given the three questions, which you can see: what does the chart on page one show about revenue trends, summarize the findings from the document, and so on.
Here we are basically calling the multimodal RAG pipeline and then we can see the answer, okay? So, what does the chart on page one show about revenue trends? It retrieved two documents: text from page zero about annual revenue, and the image from page zero; that information is displayed. The answer says the chart on page one shows revenue steadily increasing over three quarters, and that Q1 had the lowest revenue, the blue bar. See, it is able to identify the blue bar, right?
Then, summarize the main findings from the document. Text from page zero is there, and you can see the main findings: steady revenue growth, revenue grew over the three quarters, the height of the bars increases from left to right, visually representing growth across the three quarters, and these visual elements align with the context provided in the text. So I hope you are able to see these amazing answers. If you have been following along, you should be able to see
how we created a retriever, then how we created this particular format for the LLM, and finally how we got the multimodal answer. For this, the main thing is how you generate this multimodal message, and for that you need to see the documentation for the OpenAI multimodal LLM models: how this specific message needs to be structured, how the input or format is required in order to call this LLM, right?
Now you can go ahead and play with any kind of PDFs and try to use this and see what all things you are able to get. So I hope you like this particular video. I will see you all in the next video. Thank you. Take care. Bye-bye.