Fetch Error With LangChain JS and Vercel AI SDK

So I’m playing around with the new Vercel AI SDK and am running into fetch errors with Pinecone. I was able to create a conversational chain without issue, but this one is giving me trouble. Any help would be greatly appreciated!
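
For reference, a plain conversational chain (no Pinecone) works fine for me. It looks roughly like this, simplified and without the streaming pieces:

import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";

// Simplified sketch of the working setup: chat model + memory, no retrieval
const workingChain = new ConversationChain({
  llm: new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 }),
  memory: new BufferMemory(),
});
const res = await workingChain.call({ input: "Hello" });

The Pinecone-backed route handler below is the one that throws the fetch error: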

import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { BufferMemory } from "langchain/memory";
import { StreamingTextResponse, LangChainStream, Message } from 'ai';
import { PineconeClient } from "@pinecone-database/pinecone";
import { PineconeStore } from "langchain/vectorstores/pinecone";
import { AIChatMessage, HumanChatMessage } from 'langchain/schema';

const CUSTOM_QUESTION_GENERATOR_CHAIN_PROMPT = `Given the following conversation and a follow up question, return the conversation history excerpt that includes any relevant context to the question if it exists and rephrase the follow up question to be a standalone question.
Chat History: {chat_history}
Follow Up Input: {question}
Your answer should follow the following format:
\`\`\`
Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.

Standalone question:
\`\`\`
Your answer:`;

export const runtime = 'edge';

export async function POST(req: any) {
// Temporary logs to check the API key (REMOVE AFTER DEBUGGING)
console.log("Pinecone API Key:", process.env.PINECONE_API_KEY);

// Initialize ChatOpenAI model
const model = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    temperature: 0,
});

// Initialize vectorStore with Pinecone
const client = new PineconeClient();
await client.init({
    apiKey: process.env.PINECONE_API_KEY!,
    environment: process.env.PINECONE_ENVIRONMENT!,
});

console.log("Initialized Pinecone client and index");

const pineconeIndex = client.Index(process.env.PINECONE_INDEX!);

const vectorStore = await PineconeStore.fromExistingIndex(
    new OpenAIEmbeddings(),
    { pineconeIndex }
);

console.log("Initialized Pinecone vectorStore");

// Initialize ConversationalRetrievalQAChain
const chain = ConversationalRetrievalQAChain.fromLLM(
    model,
    vectorStore.asRetriever(),
    {
        memory: new BufferMemory({
            memoryKey: "chat_history",
            returnMessages: true,
        }),
        questionGeneratorChainOptions: {
            template: CUSTOM_QUESTION_GENERATOR_CHAIN_PROMPT,
        },
    }
);

console.log("Initialized ConversationalRetrievalQAChain");

// Extract the messages from the request body
const { messages } = await req.json();

// Map messages to LangChain message objects
const langChainMessages = messages.map((m: Message) =>
    m.role === 'user' ? new HumanChatMessage(m.content) : new AIChatMessage(m.content)
);

// Use LangChainStream to create a readable stream and handlers
const { stream, handlers } = LangChainStream();

console.log("Created LangChain stream and handlers");

// Temporarily query the Pinecone index directly to narrow down where the fetch error comes from
try {
  const queryRequest = {
    vector: [/*...your vector data...*/],
    topK: 10,
    includeValues: true,
    includeMetadata: true,
    // ... any additional query parameters needed ...
  };
  const queryResponse = await pineconeIndex.query({ queryRequest });
  console.log("Query response:", queryResponse);
} catch (error) {
  console.error("Error querying Pinecone:", error);
}
// Return the StreamingTextResponse with the stream
return new StreamingTextResponse(stream);

}

Initialized Pinecone client and index
Initialized Pinecone vectorStore
Initialized ConversationalRetrievalQAChain
Created LangChain stream and handlers
Error querying Pinecone: [PineconeError: PineconeClient: Error calling query: PineconeError: PineconeClient: Error calling queryRaw: FetchError: The request failed and the interceptors did not return an alternative response] {
name: 'PineconeError'
}
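
So the client, vector store, and chain all initialize fine; the error only appears once the index is actually queried.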


Hi,

I am trying to implement the same thing. Just wondering if you have any updates.

Thank you.

Similar issue here. The PWA works fine on localhost.
On Vercel it throws a 500 fetch error. I resolved the Vercel CORS issue and am now stuck on this.
Is this a Vercel issue or a Pinecone issue?