This one I think I figured out: it wasn't the size of the metadata per se, it was the size of the vectors I was sending per call that was triggering the error.
Yeah, my batch size. I was reading in a giant text file and breaking it into chunks, and it was working fine. Then I made the chunks way too big and got that error, so I went back to the original chunk size.
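For reference, the chunking step looks roughly like this on my end (the splitter choice and sizes here are just illustrative, not my exact code):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Read the big text file and break it into modest chunks before embedding.
# chunk_size is in characters here; the numbers are illustrative.
with open("big_file.txt") as f:
    text = f.read()

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(text)  # list of strings, each small enough to embed and upsert
```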
I am using the OpenAIEmbeddings model.
What I don’t understand is that I would expect this error to be thrown when I create the embeddings, not when I search the database. Of course, there is the distinct possibility that I am going about this all wrong and I don’t fully understand the process. Any guidance would be greatly appreciated.
If you’ve been able to embed your docs with OpenAI’s embeddings and upsert them into the Pinecone index, I would think the plumbing is in good shape for the OpenAI embedding of your query too. Can you post your upsert/query code (obviously with your API keys removed)?
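For context, the flow I have in mind looks something like this (a rough sketch using the standard LangChain Pinecone wrapper; the index name, environment, and texts are placeholders):

```python
import os
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Placeholders -- use your own index name and environment.
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-east-1-aws")

embeddings = OpenAIEmbeddings()  # picks up OPENAI_API_KEY from the environment

# Upsert: each text is embedded and stored, with the raw text kept in metadata.
texts = ["chunk one ...", "chunk two ..."]  # your document chunks
docsearch = Pinecone.from_texts(texts, embeddings, index_name="my-index")

# Query: the query string is embedded the same way, then matched against the index.
results = docsearch.similarity_search("what does the document say about X?", k=4)
```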
Oh, you need to chunk your metadata. There are restrictions at the per-record level. Try making your document smaller or chunking it. I’m going to assume you’re using LangChain rather than calling the Pinecone APIs directly, right? Try a small document and see if you still get the error (as the error message indicates, keep it under 40 KB). LangChain is likely storing the document text in the metadata field.
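A quick way to sanity-check that before upserting (the ~40 KB figure is taken from the error message, and `chunks` stands in for however you’ve split your text):

```python
# LangChain keeps each chunk's text in the record's metadata, so each chunk
# has to fit under Pinecone's per-record metadata limit (~40 KB per the error).
MAX_METADATA_BYTES = 40_000  # approximate; taken from the error message

chunks = ["example chunk one", "example chunk two"]  # replace with your real chunks

too_big = [c for c in chunks if len(c.encode("utf-8")) > MAX_METADATA_BYTES]
print(f"{len(chunks)} chunks total, {len(too_big)} over the metadata limit")
```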