Your NodeJS API is NOT GOOD.. there are differences between the docs and the API reference, and nothing works

Thank you for the clarification. I am using NodeJS with Express as a module right now, and don’t anticipate having any frontend. I’m still not familiar at all with the differences between languages, but I do recognize the slight differences in how imports work & the variables in general. Hopefully I’m not doing something wrong?

Got it working. Turns out I had changed my index name accidentally, fixed it later, but in that process ended up referring to the incorrect documentation on the website again. The CORRECT way to query in NodeJS is with this as your queryRequest:

const queryRequest = {
  vector: vectors,
  topK: 1,
  includeMetadata: true,
  namespace: namespace,
}

Once again the docs are wrong. Also, ctrl-clicking through to the method definitions helps to see what the actual parameters are, since I figure we’ll encounter more of these issues down the road.
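For anyone landing here, a minimal sketch of wiring that flat request shape into an actual query call. The index and namespace names are placeholders, the helper name is mine, and the surrounding client setup is assumed:

```javascript
// Builds the flat query request shape the Node client accepted for me.
// "example-index" / "example-namespace" below are placeholder names.
function buildQueryRequest(vector, namespace) {
  return {
    vector,               // a single query vector (array of numbers)
    topK: 1,              // return only the closest match
    includeMetadata: true,
    namespace,
  };
}

// Usage (assumes `pinecone` is an already-initialized PineconeClient):
// const index = pinecone.Index("example-index");
// const queryResponse = await index.query({
//   queryRequest: buildQueryRequest(vectors, "example-namespace"),
// });
```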

Are you using the OpenAI API? The docs for it, at least in Python, said you needed to reuse the embedding model:

const xq = await openai.createEmbedding({
  model: "text-embedding-ada-002",
  input: userMessage, // createEmbedding takes "input", not "query"
});

Did you have anything like this or just do the straight query? As if I don’t use xq I get

ReferenceError: xq is not defined
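For what it’s worth, here is a sketch of how the embedding response feeds into the query, assuming the v3 openai Node SDK (where createEmbedding resolves to { data: { data: [{ embedding: [...] }] } }); the helper name is mine:

```javascript
// Pull the raw embedding vector out of a v3 openai SDK createEmbedding response.
function extractEmbedding(openaiResponse) {
  return openaiResponse.data.data[0].embedding;
}

// Usage (hypothetical; `openai` and `index` must already be configured):
// const xq = await openai.createEmbedding({
//   model: "text-embedding-ada-002",
//   input: userMessage,
// });
// const queryRequest = {
//   vector: extractEmbedding(xq),
//   topK: 1,
//   includeMetadata: true,
// };
// const queryResponse = await index.query({ queryRequest });
```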

@sobad you’re correct, there was a mistake in the docs, and it is now correct. Apologies. We’ve also updated the client to expose the full error code from the API, which should help greatly in identifying potential issues.



const queryRequest = {
    vector: userMessage,
    topK: 1,
    includeValues: true,
    includeMetadata: false,
    namespace: sessionId,
}


console.log("Embeddings queried");

This worked, but is the vector value right? I’m unsure whether my vectors are actually being queried or if the code is just running fine without querying anything.

@jhs - It doesn’t seem you have the actual call to perform the query, e.g.,

const queryResult = await index.query({ queryRequest })

Can you share how you’re instantiating the index and how you’re performing the request?

Hi,

Yes, the question on the other thread was fixed. Just a quick question on my current code; this is my entire query code:


const queryRequest = {
    vector: userEmbeddingData,
    topK: 1,
    includeValues: true,
    includeMetadata: true,
    namespace: sessionId,
}

const queryResult = await index.query({ queryRequest })

The upsert works, and according to the console log the query succeeds, but as my index isn’t populated it’s hard to tell if it’s working as intended.

So brief questions are

  • userEmbeddingData is the value I upserted and is also what I’m querying. Is this the correct value to query, or should I set up a separate query for the user’s message?

  • Would there be an easy way to test if this is working?

The query request only takes a single vector, so it won’t work if you pass it the same list of vectors you’re passing to the upsert request. Given a vector vec = [0.1, 0.2, 0.3, 0.4], you should be able to do the following:

const vec = [0.1, 0.2, 0.3, 0.4];
const index = pinecone.Index("example-index");
const upsertRequest = {
  vectors: [
    {
      id: "vec",
      values: vec,
    },
  ],
  namespace: "example-namespace",
};
const upsertResponse = await index.upsert({ upsertRequest });

Then you can query the vector:

const queryRequest = {
  vector: vec,
  topK: 10,
  includeValues: true,
  namespace: "example-namespace",
};
const queryResponse = await index.query({ queryRequest });

The result should include the vector you upserted.
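One quick way to check that round trip, sketched as a helper (the id "vec" matches the upsert example above; the helper name is mine):

```javascript
// Sanity check on a query response: does the top match echo back
// the id of the vector we just upserted?
function topMatchIs(queryResponse, expectedId) {
  const matches = (queryResponse && queryResponse.matches) || [];
  return matches.length > 0 && matches[0].id === expectedId;
}

// Usage:
// const queryResponse = await index.query({ queryRequest });
// console.log(topMatchIs(queryResponse, "vec") ? "round trip OK" : "unexpected result");
```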

Hi rschwabco,

it seems like I’ve been facing a very similar issue with upsert and query to the one described here.

I’ve managed to solve the upsert issue following your steps. However, it seems like I can’t get the query function to work. Would you mind taking a look?

import openai from "../../ai/aiconnect.js";
import client from "../pineconnect.js";

const createEmbedding = async (text) => {
  const response = await openai.createEmbedding({
    model: "text-embedding-ada-002",
    input: text,
  });

  console.log("Creating Embedding...");
  try {
    const embeddings = response.data.data[0].embedding;
    const vectorObject = {
      vector: embeddings,
      topK: 10,
      includeValues: true,
    };
    return vectorObject;
  } catch (error) {
    console.error(error);
    throw new Error("Error creating OpenAI embedding");
  }
};

const queryPinecone = async (vectorObject) => {
  const index = client.Index("doot");
  const QueryRequest = {
    topK: 10,
    vector: vectorObject.vector,
    namespace: "my-first-namespace",
  };
  const queryResponse = await index.query({ QueryRequest });

  console.log(queryResponse);
};

const pickupEngine = async (text) => {
  console.log("Pickup Engine: ", text);

  try {
    const vectorObject = await createEmbedding(text);
    await queryPinecone(vectorObject);
    console.log("Data successfully queried from Pinecone index!");
  } catch (error) {
    console.error("Error quering data from Pinecone index:", error);
  }
};

export default pickupEngine;

Here is the console log:

Creating Embedding...
Data successfully upserted to Pinecone index!
Pickup Engine:  test snow
Incoming PUT request to /api/v1/read-message/63fcbc7d8850136d05baf180, [object Object]
Incoming GET request to /api/v1/receive-message-from-user?id=63fcbc7d8850136d05baf180, [object Object]
Creating Embedding...
Error quering data from Pinecone index: PineconeClient: Error calling query: PineconeClient: Error calling queryRaw: RequiredError: Required parameter requestParameters.queryRequest was null or undefined when calling query.
TypeError: Cannot read property 'matches' of undefined
    at logMessage (file:///C:/Users/aladi/Documents/AI%20Project/latest/server/middleware/chatController.js:72:37)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)

Should be queryRequest, not QueryRequest: with shorthand object properties the key takes the variable’s name, so the client receives the key QueryRequest and finds queryRequest undefined.
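Spelled out: with ES2015 shorthand properties, the key in the object literal takes the variable’s name, so the variable itself has to be called queryRequest. A tiny sketch of the distinction (the wrapper name is mine):

```javascript
// { QueryRequest } produces the key "QueryRequest", but the client
// requires the key "queryRequest". Naming the parameter accordingly
// guarantees the right key regardless of the caller's variable name.
function wrapForQuery(queryRequest) {
  return { queryRequest }; // the shape index.query() expects
}

// Usage (against the code above):
// const queryResponse = await index.query(wrapForQuery({
//   topK: 10,
//   vector: vectorObject.vector,
//   namespace: "my-first-namespace",
// }));
```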


I’ve been fiddling around: Sebby/SMS.js at main · Seraphaious/Sebby · GitHub is my current iteration.

  • If I just have console.log(queryResult); it returns the vector, and the similarity score seems to correspond well enough: if I mention e.g. bouldering it will consistently return the same vectors, and different vectors than if I say e.g. good morning.
  • However, it does not print the text. To get around this I tried attaching the text content of the upsert to metadata, but then it still only returns metadata: [Object]

I tried

if (queryResult && queryResult.length > 0) {
  console.log(Object.keys(queryResult[0]));
} else {
  console.log("No matching vectors found");
}
Which returned “No matching vectors found”. If the queries are consistently matching on similarity, I figured they must at least be working, but it’s hard to tell without seeing the text. And if there is an issue, as you said, I imagine it’s that I might be querying more than a single vector, but in the context of my code I can’t query any other variable, or it’s not properly embedded.

@jhs The results are on the matches key, like so:

const index = client.Index(indexName)
const queryResponse = await index.query({ queryRequest })
const queryResult = queryResponse.matches
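And to get the stored text back out (for the metadata: [Object] problem above), you can map over the matches. This assumes you upserted the text under a metadata key such as text; that key name is whatever you chose at upsert time:

```javascript
// Pull the stored text out of each match's metadata.
// Assumes each vector was upserted with metadata like { text: "..." }.
function matchTexts(queryResponse) {
  const matches = (queryResponse && queryResponse.matches) || [];
  return matches.map(
    (m) => (m.metadata && m.metadata.text) || "(no text metadata)"
  );
}

// Usage:
// const queryResponse = await index.query({ queryRequest });
// console.log(matchTexts(queryResponse));
```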

You are brilliant, all working. Thanks a lot.


Hi again guys, so what was the problem in the end? I still have the same issue:

"PineconeClient: Error calling upsert: PineconeClient: Error calling upsertRaw: ResponseError: Response returned an error code "

const index = pinecone.Index("nutrify");

const v1 = {
  id: "1",
  values: [0.1, 0.2, 0.3, 0.4, 0.5],
  metadata: { key: "value" },
};

const v2 = {
  id: "2",
  values: [0.11, 0.12, 0.13, 0.14, 0.15],
  metadata: { key: "value" },
};
const upsertRequest = {
  vectors: [v1, v2],
};
const res = await index.upsert({ upsertRequest });

What do I have to do?

I tested creating an index with a dimension of 5, and it works with the test vectors, but if I want to save embeddings from OpenAI (ChatGPT), what dimension should I use? Each text’s embedding has some vector length, so I don’t understand what to pick.

How should I create my index?

Thank you.

It really depends on the model you’re using, but many of OpenAI’s embedding models, including text-embedding-ada-002, produce 1536-dimensional vectors.
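Rather than hard-coding it, one option is to derive the dimension from a sample embedding before creating the index. A sketch, with the API calls left as comments since they depend on your SDK and client versions:

```javascript
// Derive the index dimension from one sample embedding instead of guessing.
function embeddingDimension(embedding) {
  if (!Array.isArray(embedding) || embedding.length === 0) {
    throw new Error("expected a non-empty embedding array");
  }
  return embedding.length;
}

// Usage (hypothetical; assumes the v3 openai SDK response shape):
// const res = await openai.createEmbedding({
//   model: "text-embedding-ada-002",
//   input: "sample text",
// });
// const dimension = embeddingDimension(res.data.data[0].embedding); // 1536 for ada-002
// ...then create the index with that dimension.
```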

Hi @rschwabco, I have set the index dimension to 1536 and I am inserting vectors of the same dimension; however, I get this error while upserting:

error while upsert vectors [Error: PineconeClient: Error calling upsert: Error: PineconeClient: Error calling upsertRaw: FetchError: The request failed and the interceptors did not return an alternative response]

Would you be able to point me in the right direction, please?

Hey @fittrjc - Could you please provide a bit more context around the error that you’re seeing? It’s hard to tell what the issue is without seeing more of the code.

Were you able to get this working @fittrjc ? I’m running into a similar issue. Can’t even run LangChain’s basic Pinecone example without hitting

node:internal/process/promises:288
            triggerUncaughtException(err, true /* fromPromise */);
            ^

[PineconeError: PineconeClient: Error calling upsert: PineconeError: PineconeClient: Error calling upsertRaw: FetchError: The request failed and the interceptors did not return an alternative response]

Node.js v18.16.0

All my indices and keys are defined correctly (I used a short name for the index). Not sure what’s up.

I was getting the same error using Node 16; I just updated to version 18 and it works!