How can the OpenAI model’s max token length error be resolved?

I’m currently working on an AI project, but I’m relatively new to the field and need some guidance. My goal is to utilize AI models, specifically OpenAI models, to generate HTML templates based on some data I have.

I explored using LangChain and Pinecone: I inserted my data into Pinecone as separate documents. Then, to generate templates for a specific query, I performed a similarity search in the vector database and used the retrieved documents as context for an OpenAI model. However, I encountered a max token error when passing lengthy code as context to the model.
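Here is roughly what my setup looks like (a simplified sketch; the index name, query, and credentials are placeholders):

```python
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain

# Connect to the existing Pinecone index (placeholder credentials)
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENV")
embeddings = OpenAIEmbeddings()
index = Pinecone.from_existing_index("my-templates", embeddings)

# Retrieve the documents most similar to the query
docs = index.similarity_search("pricing page template", k=4)

# "Stuff" the retrieved code into the prompt as context
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chain = load_qa_chain(llm, chain_type="stuff")
answer = chain.run(
    input_documents=docs,
    question="Generate an HTML template for a pricing page.",
)
# Fails with something like:
# openai.error.InvalidRequestError: This model's maximum context
# length is 4097 tokens. However, your messages resulted in ... tokens.
```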

The context passed to the AI model contains extensive code, and I’m unsure how to handle this error. Is there a way to increase the token length or overcome this limitation?

Hi @oodulaja23

The max token length issue is one of the current limitations of LLMs and is on the OpenAI side. You have to be aware that when using OpenAI models, you are limited by the combined length of your prompt (plus context) and the returned answer: they are all counted together toward the model's maximum token length.
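You can check how close you are before sending a request, e.g. with tiktoken (a minimal sketch; the 4096-token window applies to gpt-3.5-turbo, other models differ):

```python
import tiktoken

# Count tokens the way the model does, so you can check
# prompt + context against the limit before calling the API
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

context = "<retrieved code chunks go here>"  # placeholder
prompt = "Generate an HTML template based on this code:\n" + context
prompt_tokens = len(enc.encode(prompt))

# The window is shared by the prompt AND the completion
max_window = 4096
tokens_left_for_answer = max_window - prompt_tokens
print(prompt_tokens, tokens_left_for_answer)
```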

For your case I don’t see a simple workaround, but the standard approach is to chunk the original documents into smaller parts, so they fit in the context AND leave enough tokens for the answer.
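For example, with LangChain’s RecursiveCharacterTextSplitter (the file name and sizes below are just illustrative, tune them to your token budget):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

long_html = open("template.html").read()  # placeholder source file

# Split the code into overlapping chunks BEFORE indexing, so each
# retrieved chunk is small enough to fit into the prompt
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # characters per chunk
    chunk_overlap=100,  # overlap preserves context across boundaries
)
chunks = splitter.split_text(long_html)
```

You would then index `chunks` in Pinecone instead of the full documents; retrieving fewer chunks per query (a smaller `k` in the similarity search) also helps keep the prompt within the limit.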

Hope this helps