Inserting millions of embeddings is very slow

Hi,

I’m trying Pinecone for semantic search, and I have 9 million embeddings. I just created a new serverless index and started inserting in batches of 100. The problem is that tqdm is estimating a total time of more than 40 hours!

Is there a faster way to populate an index when dealing with millions of vectors?

I’m following the example given in the Pinecone docs. In summary:

from pinecone import Pinecone

pc = Pinecone(api_key="xxx")
index = pc.Index("xx")

# batches: an iterable of lists of ~100 vectors each
for vectors in batches:
    index.upsert(vectors=vectors, namespace="ns1")
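
One thing I’m considering is sending the upserts in parallel instead of waiting on each round trip, since the loop above spends most of its time blocked on the network. Here’s a rough sketch of what I mean, assuming the client’s pool_threads and async_req options work the way I understand them from the docs — is this the recommended approach?

from pinecone import Pinecone

pc = Pinecone(api_key="xxx")

# Assumption: pool_threads sizes the client's internal thread pool,
# and async_req=True makes upsert return a future instead of blocking.
index = pc.Index("xx", pool_threads=30)

# Fire off all batches without waiting on each response...
async_results = [
    index.upsert(vectors=vectors, namespace="ns1", async_req=True)
    for vectors in batches
]

# ...then wait for every request to complete.
for result in async_results:
    result.get()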

Thanks!
