Hello Learners…
Welcome to my blog…
Table Of Contents
- Introduction
- Overcoming OpenAI’s Token Limit: Leveraging Chain Types in Langchain
- Summary
- References
Introduction
In this post, we discuss chain types in Langchain and how they help us overcome OpenAI's token limit.
Overcoming OpenAI’s Token Limit: Leveraging Chain Types in Langchain
When we work with OpenAI's API through Langchain and build a chain of models for question answering (or any other task) over document files, we have to define the chain type parameter.
Based on the chain type we set, our document data is sent to OpenAI's API, and the number of tokens in each request determines the charges.
As we know, every OpenAI model has a different token limit.
Let’s understand with an example,
- If one document contains 2k tokens and is sent to an OpenAI model for summarization, that is fine.
- But if the document is very large, say 8k tokens, we can't send it to the model in a single request.
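The usual workaround behind all of these chain types is to first split the document into chunks that each fit within the model's limit. A minimal sketch of that splitting step, approximating tokens by whitespace-separated words (a real tokenizer such as OpenAI's tiktoken counts tokens differently):

```python
def split_into_chunks(text, max_tokens):
    """Split text into chunks of at most max_tokens words.

    Word count is only a rough stand-in for real model tokens.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]

document = "word " * 8000                      # a "document" of ~8k tokens
chunks = split_into_chunks(document, max_tokens=3000)
print(len(chunks))                             # 3 chunks, each within a 4k limit
```

Each chunk now fits comfortably under a hypothetical 4k-token limit, so the chain types below can process them one at a time.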
There are four types of chain types in Langchain.
- Stuff
- Map Reduce
- Refine
- Map Rerank
Stuff
With the 'stuff' chain type we send all the text of our document file at once. If the document has more tokens than the model's limit, we get a rate limit error, so it works fine only for small files.
- Works fine with small files.
- Fails when there are more tokens than the model's limit.
- Pros:
- One API call
- All data at once
- Cons:
- Limited context data length
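The stuff idea can be sketched without a real API call, using a stand-in `fake_llm` function and a hypothetical 4k token limit (both are illustrative, not Langchain's actual implementation):

```python
TOKEN_LIMIT = 4000  # hypothetical model limit, counted here as words

def fake_llm(prompt):
    # Stand-in for a real OpenAI call.
    return f"summary of {len(prompt.split())} tokens"

def stuff_chain(docs, question):
    # "Stuff": concatenate every document into one single prompt.
    context = "\n".join(docs)
    prompt = f"{context}\nQuestion: {question}"
    if len(prompt.split()) > TOKEN_LIMIT:
        raise ValueError("prompt exceeds the model's token limit")
    return fake_llm(prompt)  # exactly one API call

print(stuff_chain(["short doc one", "short doc two"], "What is this about?"))
```

Both pros and cons are visible here: a single call with all the data at once, but anything larger than the context window simply fails.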
Map Reduce
Consider a scenario where our document is very large, say 8k tokens, while the model's token limit is 4k tokens.
In Map Reduce we divide the document data into a number of chunks and give every chunk to the OpenAI model for processing.
As output we get one response per chunk, as many responses as we gave chunks. After that, the model combines all of those responses into one final response, which is our final output.
- Pros:
- Scales to large document files
- Cons:
- Many API Calls (For every chunk one API call)
- Loses Information
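The map and reduce steps can be sketched with a stand-in `fake_summarize` function in place of the real OpenAI call (the `calls` list just records how many "API calls" happen):

```python
calls = []  # records every "API call" made

def fake_summarize(text):
    # Stand-in for one OpenAI call that summarizes its input.
    calls.append(text)
    return f"summary({len(text.split())} tokens)"

def map_reduce_chain(chunks):
    # Map step: one API call per chunk, independent of each other.
    partial = [fake_summarize(chunk) for chunk in chunks]
    # Reduce step: one more call that combines the partial summaries.
    return fake_summarize(" ".join(partial))

chunks = ["chunk one text", "chunk two text", "chunk three text"]
print(map_reduce_chain(chunks))  # len(chunks) + 1 calls in total
```

Because the map calls are independent, they can run in parallel, but each chunk is summarized without seeing the others, which is where information can get lost.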
Refine
In the Refine chain type we give the first chunk of the document to a prompt and get response-1. Then we give the second chunk together with response-1 to a second prompt and get a refined response-2. The third prompt gets the third chunk along with response-2 and produces response-3, and this process continues until the last chunk gives us the final response.
- Pros:
- More Relevant Content
- Cons:
- Many sequential API calls (each depends on the previous response, so they cannot run in parallel)
- It takes more time to complete the prompts
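The refine loop is essentially a fold over the chunks. A sketch with a stand-in `fake_refine` function in place of the real API call:

```python
def fake_refine(previous, chunk):
    # Stand-in for one OpenAI call that refines the running answer
    # using the next chunk of the document.
    return f"refined({previous} + {chunk})"

def refine_chain(chunks):
    response = fake_refine("", chunks[0])
    for chunk in chunks[1:]:
        # Each step needs the previous response plus the next chunk,
        # so the calls are strictly sequential.
        response = fake_refine(response, chunk)
    return response

result = refine_chain(["c1", "c2", "c3"])
print(result)
```

The nesting in the output makes the trade-off visible: every chunk contributes to the final answer, but nothing can be parallelized.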
Map Rerank
Map Rerank is mostly used for question answering. First we divide the document into a number of chunks, then we give every chunk to its own prompt, and every prompt generates an answer along with a confidence score.
After getting the answers from all the prompts, we rank them by score and select the answer with the highest score as our final answer.
- Pros
- Scales Well
- It works better for single-answer questions
- Cons:
- Cannot combine information between different chunks of the document
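The answer-and-score idea can be sketched as follows. The stand-in `fake_answer_with_score` replaces the real OpenAI call, and its length-based scoring rule is purely hypothetical (a real model scores its own confidence in the answer):

```python
def fake_answer_with_score(chunk, question):
    # Stand-in for one OpenAI call that answers the question from a
    # single chunk and scores its own answer. The scoring rule here
    # (chunk length) is an invented placeholder.
    score = len(chunk)
    return f"answer from {chunk!r}", score

def map_rerank_chain(chunks, question):
    # Map step: one scored answer per chunk.
    scored = [fake_answer_with_score(chunk, question) for chunk in chunks]
    # Rerank step: keep only the highest-scoring answer.
    best_answer, best_score = max(scored, key=lambda pair: pair[1])
    return best_answer

best = map_rerank_chain(["a", "longer chunk", "mid"], "What is this?")
print(best)
```

Note that only one chunk's answer survives, which is why this chain type cannot combine information spread across several chunks.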
Summary
So these are the chain types we can use to work around the token limits of OpenAI models when processing document files with Langchain and Python.
The chain type is a parameter we should choose based on our use case to get the best results from the models.
Happy Learning And Keep Learning…
Thank You…