Fixing Hallucination with Knowledge Bases

Large Language Models (LLMs) have a data freshness problem. Even some of the most powerful models, like GPT-4, have no idea about recent events.

The world, according to LLMs, is frozen in time. They only know the world as it appeared through their training data.


This is a companion discussion topic for the original entry at https://www.pinecone.io/learn/langchain-retrieval-augmentation/

The API for OpenAIEmbeddings changed recently (see this PR), so the example doesn't currently work on the latest code. I think you just need to pass a "model" parameter (with the model name) and remove the document_model_name/query_model_name parameters.

Thanks for the great tutorial!