We indeed found some mistakes in the documentation, and they have been corrected. That said, please note that whenever we reference index (lowercase), the intent is to reference an instance of Index.
Your request is mostly correct, and can be fixed by rewriting the following:
I don't understand. You can literally copy-paste the code from the "updated" tutorial and it returns "PineconeClient: Error calling upsert: PineconeClient: Error calling upsertRaw: ResponseError: Response returned an error code" // "PineconeClient: Error calling upsert: PineconeClient: Error calling upsertRaw: RequiredError: Required parameter requestParameters.upsertRequest was null or undefined when calling upsert.". I have tried every possible combination of batches versus individual vectors and have wasted multiple hours at this point thinking I had something misconfigured, but nothing works. Meanwhile, I got the Python library working within minutes. What is going on?
@jhs - that is the issue, then. You can only upsert/update/query with vectors that have the same dimension as your index. Right now you're trying to upsert a vector with a dimension of 5, and that won't work.
I'm actively working on surfacing the correct error, which isn't properly exposed at the moment. I apologize for this, and I'm working as fast as possible to rectify it. I suspect @sobad has the same issue.
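Until the client surfaces that error properly, a small client-side guard can catch the mismatch before the request is sent. This is just a sketch; `checkDimensions` is a hypothetical helper, and `INDEX_DIMENSION` should be whatever you chose when creating your index (1536 for OpenAI's text-embedding-ada-002, for example):

```javascript
// Hypothetical guard: validate vector dimensions before upserting.
// INDEX_DIMENSION is an assumption; set it to the dimension you
// configured when you created the index.
const INDEX_DIMENSION = 1536;

function checkDimensions(vectors, expected = INDEX_DIMENSION) {
  for (const v of vectors) {
    if (v.values.length !== expected) {
      throw new Error(
        `Vector ${v.id} has dimension ${v.values.length}, ` +
          `but the index expects ${expected}`
      );
    }
  }
  return vectors;
}
```

Calling this on your batch right before the upsert turns the opaque 400 into a message that tells you exactly which vector is the wrong size.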
Thanks. I don't believe this was my issue, even though I had in fact forgotten about that while briefly testing the versions here. I ended up converting the Python tutorial code to React and found the "iMax" value was returning NaN for some reason, along with a couple of other formatting problems. I'm honestly not sure at this point. I used ChatGPT and Bing to debug it all, but here is the working code if anyone stumbles upon this in the future. It's not very optimized, but the important thing is that it works.
Thanks guys.
const index = pinecone.Index("app");
let embeddingTokensUsed = 0;
for (let i = 0; i < lines.length; i += batchSize) {
  // Set end position of batch
  const iEnd = Math.min(i + batchSize, lines.length);
  // Get batch of lines and IDs
  const linesBatch = lines.slice(i, iEnd);
  const idsBatch = Array.from({ length: linesBatch.length }, (_, n) => String(i + n));
  // Create embeddings for the batch
  const res = await openai.createEmbedding({
    input: linesBatch,
    model: MODEL,
  });
  embeddingTokensUsed += res.data.usage.total_tokens;
  console.log("used " + embeddingTokensUsed);
  const embeddings = res.data.data.map((record) => record.embedding);
  // Prepare metadata and upsert batch to Pinecone
  const metadata = linesBatch.map((line) => ({ text: line.trim() }));
  const vectors = idsBatch.map((id, n) => ({
    id,
    values: embeddings[n],
    metadata: metadata[n],
  }));
  console.log(vectors.length);
  const upsertRequest = {
    vectors,
    namespace: "testooor",
  };
  await index.upsert({ upsertRequest });
}
It's good to know you're unblocked. One quick clarification: the client is not meant to be used in the front end (with React etc.) but rather in the backend, running on either Node.js or Deno.
So my upsert is now working perfectly; the vectors are being created and indexed under the correct namespace. But I am having the same issue as yesterday, only this time with the query function:
var error = new Error(message);
^
Error: Request failed with status code 400
I was looking at the docs for the OpenAI API, and they use the `xq` variable, but I didn't manage to get that working. Based on what you said yesterday, I think the error must be with the dimensions, but I'm not sure how to resolve it. Any guidance would be appreciated.
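For what it's worth, the dimension issue usually goes away if the query vector comes from the same embedding model as the upserted vectors. A sketch under that assumption; `buildQueryRequest` is a hypothetical helper, and the namespace/topK values are just examples matching the upsert code above:

```javascript
// Hypothetical helper: assemble the query request for the old
// pinecone JS client, which takes { queryRequest: { ... } }.
function buildQueryRequest(vector, { topK = 5, namespace = "testooor" } = {}) {
  return {
    queryRequest: {
      vector, // must have the same dimension as the index
      topK,
      namespace, // query the same namespace you upserted into
      includeMetadata: true,
    },
  };
}

// Usage, assuming the same openai/pinecone clients and MODEL as above:
// const res = await openai.createEmbedding({ input: [question], model: MODEL });
// const xq = res.data.data[0].embedding; // the "xq" from the Python tutorial
// const results = await index.query(buildQueryRequest(xq));
```

If the 400 persists, logging `xq.length` and comparing it with the index dimension is the quickest way to confirm whether it really is a dimension mismatch.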
Same here. I was just coming back to look through the thread and see what I was missing. I'm so lost on why this isn't working; I was literally using it the other day.