Retrieve embeddings stored in index_name

If I have this code

import os

from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain

llm = OpenAI(temperature=0, openai_api_key=os.environ['OPENAI_API_KEY'])
chain = load_qa_chain(llm, chain_type="stuff")

query = ""
docs = docsearch.similarity_search(query, include_metadata=True)

chain.run(input_documents=docs, question=query)

the docsearch relies on

index_name = "myindex"
if index_name not in pinecone.list_indexes():
    pinecone.create_index(index_name, dimension=1536, metric='dotproduct')
docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name)

which in turn depends on texts as well.

If I created the docsearch on day X, I don’t want to recreate it on day X+1. So I wonder: how can I retrieve the stored embeddings to query my documents without rerunning all of day X’s code? That would also save API calls.

Mind that I don’t work in the field, so I’m not sure whether my question is clear.


Though not a Pinecone-specific issue, I got blocked by the same thing. Nearly every LangChain/Pinecone demo shows index creation happening directly before querying the index, and none show querying an existing index! :face_exhaling:

In the LangChain source code there is a class method, defined directly after ‘from_texts’, called ‘from_existing_index’. I have not tried it yet, but this appears to be what we are looking for:

    @classmethod
    def from_existing_index(
        cls,
        index_name: str,
        embedding: Embeddings,
        text_key: str = "text",
        namespace: Optional[str] = None,
    ) -> Pinecone:
        """Load pinecone vectorstore from index name."""
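Presumably usage would be something like this (untested; assumes pinecone.init has already been called and an embeddings object exists):

docsearch = Pinecone.from_existing_index(index_name, embeddings)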

This worked :rocket:

Here is the code snippet:

from langchain.llms import OpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chains.question_answering import load_qa_chain
from langchain.vectorstores import Pinecone

llm = OpenAI(model_name="gpt-3.5-turbo", temperature=0)
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="stuff")
docsearch = Pinecone.from_existing_index(index_name, embeddings)
docs = docsearch.similarity_search(input_text, include_metadata=True)
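The retrieved docs then go to the chain exactly as in the original question:

chain.run(input_documents=docs, question=input_text)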

Definitely cleaner than my solution:

import openai
import pinecone

embed_model = "text-embedding-ada-002"  # must match the indexing model (the 1536-dim index suggests ada-002)

index = pinecone.Index(index_name=index_name)

index_stats_response = index.describe_index_stats()
print(index_stats_response)

# the index contains vectors; how do we get the embeddings back out?

active_collections = pinecone.list_collections()  # optional: see what collections exist

query = ""  # insert your query here


# embed the query with the same model used to index the documents
res = openai.Embedding.create(
    input=[query],
    engine=embed_model
)

# retrieve from Pinecone
xq = res['data'][0]['embedding']

# get relevant contexts (including the questions)
res = index.query(xq, top_k=2, include_metadata=True)
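Each match carries its metadata, so the stored chunks can be printed straight from the response (a quick sketch, assuming the text was stored under a ‘text’ metadata key, as in the retrieve function below):

for match in res['matches']:
    print(match['score'], match['metadata']['text'])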

# convenience function wrapping the retrieval + prompt-building steps
limit = 3750

def retrieve(query):
    res = openai.Embedding.create(
        input=[query],
        engine=embed_model
    )

    # retrieve from Pinecone
    xq = res['data'][0]['embedding']

    # get relevant contexts
    res = index.query(xq, top_k=3, include_metadata=True)
    contexts = [
        x['metadata']['text'] for x in res['matches']
    ]

    # build our prompt with the retrieved contexts included
    prompt_start = (
        "Answer the question based on the context below.\n\n"+
        "Context:\n"
    )
    # tunable
    prompt_end = (
        f"\n\nQuestion: {query}\nAnswer:"
    )
    # start from all contexts, then trim if the joined text exceeds the limit,
    # so prompt is defined even when there are fewer than two contexts
    prompt = (
        prompt_start +
        "\n\n---\n\n".join(contexts) +
        prompt_end
    )
    for i in range(1, len(contexts)):
        if len("\n\n---\n\n".join(contexts[:i])) >= limit:
            prompt = (
                prompt_start +
                "\n\n---\n\n".join(contexts[:i-1]) +
                prompt_end
            )
            break
    return prompt


query_with_contexts = retrieve(query)
print(query_with_contexts)

# or, equivalently, using the LangChain objects from the earlier posts:
docs = docsearch.similarity_search(query, include_metadata=True)
chain.run(input_documents=docs, question=query)
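For completeness, the prompt that retrieve() builds still needs to be sent to a model. With the same pre-v1 openai client as above, that could look something like this (a sketch; the completion model name is my assumption):

# generate the final answer from the context-stuffed prompt
res = openai.Completion.create(
    engine="text-davinci-003",  # assumed completion model
    prompt=query_with_contexts,
    temperature=0,
    max_tokens=400
)
print(res['choices'][0]['text'].strip())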