After reading most of the documentation, I am about to start using the API, and I came across this note on the Pinecone site:
_When you create an index, it runs as a service until you delete it. Users are billed for running indexes, so we recommend you delete any indexes you're not using. This will minimize your costs._
That means the DB I built with all the data/vectors from the documents I previously segmented, embedded, etc. gets destroyed. At the same time, I read on the Pinecone site that ingesting vectors takes time (depending on pod characteristics, data size, and so on), possibly minutes or even hours for larger datasets. I presume that in a scenario of 1,000 documents of 100 pages each, it may take a while before a user could actually query the DB.
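For a rough sense of scale, here is a back-of-envelope estimate; the chunking granularity and upsert rate below are assumptions I picked for illustration, not Pinecone's published figures:

```python
# Back-of-envelope ingestion estimate. All figures are illustrative
# assumptions, not Pinecone's published throughput numbers.
docs = 1_000
pages_per_doc = 100
chunks_per_page = 2          # assumed chunking granularity
vectors = docs * pages_per_doc * chunks_per_page   # 200,000 vectors

upsert_rate = 100            # assumed sustained upsert rate, vectors/second
seconds = vectors / upsert_rate
print(f"{vectors:,} vectors, ~{seconds / 60:.0f} minutes to upsert")
```

With those (made-up) numbers you land at roughly half an hour of ingestion; with slower upserts or finer chunking it stretches toward hours, which matches what the docs warn about.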
So, obviously, I am asking myself: how is this all supposed to work? Either I pay for a running service/index continuously and indefinitely (because how would I know when a company employee will need to query the DB?), or I make users wait while the DB is recreated, and also pay for generating the embeddings again in order to bring the index service back into a working state…
So one of the solutions someone suggested to me was to use collections:
using var pinecone = new PineconeClient(key, env);
await pinecone.CreateCollection("index-name-backup", source: "index-name");
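The same daily cycle can be sketched with the official Python client; this assumes the legacy pod-based pinecone-client 2.x API, and the index/collection names and dimension are placeholders:

```python
# Sketch of the daily cycle: archive the index to a (cheap, static)
# collection in the evening, then restore it the next morning.
# Assumes the legacy pod-based pinecone-client 2.x API.

def backup_name(index_name: str) -> str:
    """Derive the collection name used to archive an index."""
    return f"{index_name}-backup"

def archive_and_stop(index_name: str) -> None:
    import pinecone  # deferred so the module imports without the package
    # Snapshot the index's vectors into a collection...
    pinecone.create_collection(backup_name(index_name), source=index_name)
    # ...then delete the index so we stop paying for running pods.
    pinecone.delete_index(index_name)

def restore(index_name: str, dimension: int) -> None:
    import pinecone
    # Recreate the index from the collection; this is the step that can
    # take ~10 minutes (or hours for p2 with ~1M vectors).
    pinecone.create_index(index_name, dimension=dimension,
                          source_collection=backup_name(index_name))

# Example wiring (requires pinecone-client 2.x and real credentials):
#   import pinecone
#   pinecone.init(api_key="...", environment="...")
#   archive_and_stop("docs-index")          # end of business day
#   restore("docs-index", dimension=1536)   # start of business day
```

Note that restoring from a collection does not require regenerating embeddings, which removes at least that part of the recurring cost.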
_Creating an index from a collection generally takes about 10 minutes. Creating a p2 index from a collection can take several hours when the number of vectors is on the order of 1M._
That means the task of creating and starting an index from a collection would have to be triggered at the beginning of each business day. Building a web service that way hardly seems viable, though, unless you accept the always-on paid option… but then, the charges for 1,000 users using p1 would start from around $7,000/mo…
Can anyone tell me where I am wrong here? Or is it correct that using the Pinecone service in an enterprise setting incurs a significant price tag?