I referred to the GitHub notebook below and built a chatbot.
https://github.com/pinecone-io/canopy/blame/main/examples/canopy-lib-quickstart.ipynb
How can I make sure the chat response is built from the knowledge base (not from the internet)?
Canopy is built with a RAG-first approach, which means the bot will always go to the knowledge base to fetch context and build the response from it.
If you happen to see a situation where the bot fails to do so, we would love to help debug.
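One way to see exactly what the bot retrieves for a given question is to query the context engine directly, along the lines of the library quickstart. A minimal sketch (the index name is a placeholder, the record encoder mirrors the Anyscale config you posted below, and the exact imports are worth double-checking against the Canopy source):

```python
from canopy.tokenizer import Tokenizer
from canopy.knowledge_base import KnowledgeBase
from canopy.knowledge_base.record_encoder import AnyscaleRecordEncoder
from canopy.context_engine import ContextEngine
from canopy.models.data_models import Query

# Canopy components expect the tokenizer to be initialized once up front.
Tokenizer.initialize()

# "my-index" is a placeholder. Use the same encoder as your config so the
# query embeddings match the embeddings stored in the index.
kb = KnowledgeBase(
    index_name="my-index",
    record_encoder=AnyscaleRecordEncoder(model_name="thenlper/gte-large"),
)
kb.connect()

context_engine = ContextEngine(kb)

# This is the context the chat engine would be handed for the question.
context = context_engine.query(
    [Query(text="What is the capital of USA?")],
    max_context_tokens=512,
)
print(context.to_text())
```

If the printed context contains nothing about the question, but the bot still answers it, that points at the LLM ignoring the prompt instructions rather than at retrieval.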
As part of my knowledge base, I upserted some documents related to the IRS and how a person can get benefits for their special-needs kids. None of these documents has any information about the capital of a country.
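The upsert followed the pattern from the quickstart notebook, roughly like this (the ids, text, and sources here are illustrative stand-ins, not the real documents):

```python
from canopy.tokenizer import Tokenizer
from canopy.knowledge_base import KnowledgeBase
from canopy.knowledge_base.record_encoder import AnyscaleRecordEncoder
from canopy.models.data_models import Document

Tokenizer.initialize()

# Placeholder index name; same encoder as in the config below so the stored
# embeddings match what the chat engine will later query.
kb = KnowledgeBase(
    index_name="my-index",
    record_encoder=AnyscaleRecordEncoder(model_name="thenlper/gte-large"),
)
kb.connect()

# Illustrative stand-in for the real IRS documents.
kb.upsert([
    Document(
        id="irs-special-needs-001",
        text="Overview of federal tax benefits available to parents of "
             "children with special needs ...",
        source="irs-benefits-guide.pdf",
        metadata={"topic": "irs-benefits"},
    ),
])
```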
This is the YAML file I am using:
```yaml
tokenizer:
  type: LlamaTokenizer
  params:
    model_name: hf-internal-testing/llama-tokenizer

chat_engine:
  llm: &llm
    type: AnyscaleLLM
    params:
      model_name: meta-llama/Llama-2-7b-chat-hf

  query_builder:
    type: LastMessageQueryGenerator

  context_engine:
    knowledge_base:
      record_encoder:
        type: AnyscaleRecordEncoder  # AnyscaleRecordEncoder, OpenAIRecordEncoder
        params:
          model_name: thenlper/gte-large
          batch_size: 100
```
I started the server:
canopy start --config anyscale.yaml
and then started the chat:
canopy chat
I asked the question:
User message: ([Esc] followed by [Enter] to accept input)
What is the capital of USA?
With Context (RAG):
According to the context provided, the capital of the United States of America is Washington, D.C. (District of Columbia).
I am looking for an answer along the lines of “We don’t have much information available to answer your question.”
I haven’t used Canopy, but I have built a RAG chatbot with Pinecone as a content store. In my experience, it took a LOT of trial-and-error effort to dial in the LLM prompt so that it answers only from the context.
I see that by default Canopy defines the prompt as:
Use the following pieces of context to answer the user question at the next messages.
This context retrieved from a knowledge database and you should use only the facts from the context to answer.
Always remember to include the source to the documents you used from their 'source' field in the format 'Source: $SOURCE_HERE'.
If you don't know the answer, just say that you don't know, don't try to make up an answer, use the context.
Don't address the context directly, but use it to answer the user question like it's your own knowledge.
Looking at the Canopy source, it looks like you can override the system_prompt used in ChatEngine with your own instructions.
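As a rough sketch of what that could look like when driving Canopy from Python (the index name and prompt wording are placeholders, and the exact imports and signatures are worth verifying against the Canopy source):

```python
from canopy.tokenizer import Tokenizer
from canopy.knowledge_base import KnowledgeBase
from canopy.context_engine import ContextEngine
from canopy.chat_engine import ChatEngine
from canopy.models.data_models import UserMessage

Tokenizer.initialize()

kb = KnowledgeBase(index_name="my-index")  # placeholder index name
kb.connect()
context_engine = ContextEngine(kb)

# Tighter guardrails than the default prompt quoted above.
STRICT_PROMPT = """\
Answer ONLY from the provided context, which was retrieved from a knowledge base.
If the context does not contain the information needed to answer the question,
reply exactly: "We don't have much information available to answer your question."
Do not use outside knowledge and do not guess. Cite the 'source' field of every
document you rely on in the format 'Source: $SOURCE_HERE'.
"""

# If you're on Anyscale (as in your config), you'd also construct the matching
# LLM and record encoder and pass them in; only the prompt override is shown here.
chat_engine = ChatEngine(context_engine, system_prompt=STRICT_PROMPT)

response = chat_engine.chat(
    messages=[UserMessage(content="What is the capital of USA?")],
    stream=False,
)
print(response.choices[0].message.content)
```

Since you’re running through canopy start --config, I believe the chat_engine section of Canopy’s config templates also accepts a params block, so the same system_prompt override might be settable directly in your YAML, but I’d verify that against the Canopy config docs.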
I would experiment with modifying your prompt to tighten up the guardrails specific to your use case. You can use OpenAI’s reference on best practices for prompt engineering as a guide.
Good luck!