I’m developing an AI chatbot with a custom knowledge base using Pinecone and LangChain.
The problem I’m facing involves chat history and semantic search.
When the user types a query, the system retrieves documents from the vector database via semantic search.
For example, if the user asks “What is PHP?”, the system fetches relevant data from Pinecone, and ChatGPT answers based on that data.
After that, if the user asks “How to use it?”, the system should retrieve data with the previous question in mind. In other words, it should search as if the user had asked “How to use PHP?”.
But I don’t know how to implement this.
I’m using LangChain, so if anyone has solved this problem, please let me know.
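One common pattern for this is to rewrite the follow-up into a standalone question using the chat history before running the semantic search (this is what LangChain’s `ConversationalRetrievalChain` does with its “condense question” step). Here is a minimal toy sketch of that idea, with a trivial string substitution standing in for the LLM rewrite and a naive keyword match standing in for the Pinecone query; the function names and documents are illustrative, not part of any library:

```python
import re

def condense_question(chat_history, follow_up):
    """Rewrite a follow-up into a standalone question using the history.

    Stub: replace the pronoun "it" with the subject of the last question.
    In a real system, this rewrite would be a separate LLM call.
    """
    if not chat_history:
        return follow_up
    last_question, _ = chat_history[-1]
    # Crude subject extraction: "What is PHP?" -> "PHP"
    subject = last_question.rstrip("?").split()[-1]
    return follow_up.replace("it", subject)

def retrieve(question, documents):
    """Stand-in for Pinecone semantic search: shared-word match."""
    words = set(re.findall(r"\w+", question.lower()))
    return [d for d in documents
            if words & set(re.findall(r"\w+", d.lower()))]

docs = ["PHP is a server-side scripting language.",
        "To use PHP, install it and run scripts with the php CLI."]
history = [("What is PHP?", "PHP is a server-side scripting language.")]

standalone = condense_question(history, "How to use it?")
print(standalone)                   # -> "How to use PHP?"
print(retrieve(standalone, docs))
```

The key point is that retrieval always sees the condensed, standalone question, so “How to use it?” hits the PHP documents even though the follow-up never mentions PHP.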
You need to keep track of the previous response and feed it back into the Pinecone query so that follow-up searches stay on topic (or improve). Alternatively, you could keep a running history of the context chunks retrieved so far and pass them along as the chat progresses.
LangChain’s abstractions may actually be getting in your way here, but whether to move away from it is for you to decide. It can likely be done either way.
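The second suggestion above (a running history of retrieved chunks) can be sketched without any library at all: accumulate every chunk retrieved so far and include the whole set in the prompt each turn. The `search` function below is a toy stand-in for a Pinecone query, and all names are illustrative:

```python
import re

def search(query, corpus, k=1):
    """Toy stand-in for vector search: rank docs by shared word count."""
    q = set(re.findall(r"\w+", query.lower()))
    ranked = sorted(
        corpus,
        key=lambda d: len(q & set(re.findall(r"\w+", d.lower()))),
        reverse=True)
    return ranked[:k]

class ChatSession:
    def __init__(self, corpus):
        self.corpus = corpus
        self.context = []          # chunks accumulated across turns

    def ask(self, question):
        # Add newly retrieved chunks to the running context.
        for chunk in search(question, self.corpus):
            if chunk not in self.context:
                self.context.append(chunk)
        # The prompt sent to the LLM combines all context so far, so
        # "How to use it?" still has the PHP chunk from the first turn.
        return "\n".join(self.context) + "\n\nQ: " + question

corpus = ["PHP is a server-side scripting language.",
          "To use PHP, run a script with the php command."]
session = ChatSession(corpus)
session.ask("What is PHP?")
prompt = session.ask("How to use it?")
print(prompt)
```

The trade-off versus question condensing is that the accumulated context grows with every turn, so in practice you would cap or summarize it to stay inside the model’s context window.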