UPDATE: I’ve just added sendEmail functionality. Super fun - grab a free SendGrid API key and you can do fun things like say, “send bob@xxxyyy.com a poem about the joys of ChatGPT from Sean,” and it will. Super entertaining…
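To give a flavor of how something like that could be wired up, here’s a rough sketch of a sendEmail handler that ChatCompletion can invoke via “function_call.” The `sendgrid` package usage, the function name, and the sender address below are just placeholders, not the actual code from the demo:

```python
# Rough sketch of a sendEmail handler backed by SendGrid. Assumes the
# `sendgrid` package and a SENDGRID_API_KEY environment variable; the
# sender address and function name are placeholders.
import os

from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail


def send_email(to_email: str, subject: str, body: str) -> str:
    """Send an email; the returned string is what gets fed back to the model."""
    message = Mail(
        from_email="me@example.com",  # placeholder sender
        to_emails=to_email,
        subject=subject,
        plain_text_content=body,
    )
    sg = SendGridAPIClient(os.environ["SENDGRID_API_KEY"])
    response = sg.send(message)
    return f"Email sent (status {response.status_code})"
```

The model writes the poem and the subject line itself; your code only has to deliver whatever arguments it hands you.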
With OpenAI’s ChatCompletion enhancement, “function_call,” you can now let ChatCompletion access your Pinecone database as needed, based on a user’s prompt.
The demo illustrates the power of chaining function_calls together with a vector database. In the example I created, ChatCompletion calls 5 functions (getCurrentUTCDatetime, getDogName, getNews, getWeather, getPineconeData). Augmenting ChatCompletion with external data sources, without any external framework like LangChain, is amazing.
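To make the pattern concrete, here’s a rough sketch of the dispatch loop with just the getPineconeData function wired in. The model name, index name “demo-index”, environment, and the function schema are illustrative placeholders, not the demo’s actual code:

```python
# Rough sketch: ChatCompletion + function_call + a Pinecone-backed lookup.
# Assumes openai<1.0 ("ChatCompletion") and pinecone-client 2.x; the index
# name, environment, and function schema are illustrative placeholders.
import json
import os

import openai
import pinecone

openai.api_key = os.environ["OPENAI_API_KEY"]
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-west1-gcp")
index = pinecone.Index("demo-index")


def get_pinecone_data(query: str) -> str:
    """Embed the query, search the index, and return the matched text."""
    emb = openai.Embedding.create(
        model="text-embedding-ada-002", input=query
    )["data"][0]["embedding"]
    results = index.query(vector=emb, top_k=3, include_metadata=True)
    return "\n".join((m.metadata or {}).get("text", "") for m in results.matches)


AVAILABLE = {"getPineconeData": get_pinecone_data}  # add getWeather, getNews, ... here

FUNCTIONS = [{
    "name": "getPineconeData",
    "description": "Search the Pinecone vector database for relevant documents",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string", "description": "search query"}},
        "required": ["query"],
    },
}]

messages = [{"role": "user", "content": "What does our HR policy say about remote work?"}]
while True:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=FUNCTIONS,
        function_call="auto",
    )
    msg = resp["choices"][0]["message"]
    if not msg.get("function_call"):
        print(msg["content"])  # final, data-augmented answer
        break
    # Run the requested function and hand its result back so the model can chain calls.
    name = msg["function_call"]["name"]
    args = json.loads(msg["function_call"]["arguments"])
    messages.append(msg)
    messages.append({"role": "function", "name": name, "content": AVAILABLE[name](**args)})
```

Because the loop keeps appending “function” results and re-asking the model, ChatCompletion can request several functions in sequence before producing its final answer - that’s the chaining.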
For ease of implementation, I’ve created a wrapper that makes wiring up “function_call” only a few lines of code.
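To give a flavor of the idea (this is a hypothetical sketch, not the actual wrapper), such a wrapper can be as small as a decorator that registers each handler together with its JSON schema; the decorator name, registry, and the getDogName stub below are placeholders:

```python
# Hypothetical sketch of a function_call wrapper: a decorator collects each
# handler and its JSON schema, so the ChatCompletion call site only needs the
# registry. Names and the getDogName stub are placeholders.
REGISTRY = {}  # function name -> (callable, JSON schema)


def ai_function(schema):
    """Register a handler under the name declared in its schema."""
    def decorator(fn):
        REGISTRY[schema["name"]] = (fn, schema)
        return fn
    return decorator


@ai_function({
    "name": "getDogName",
    "description": "Return the name of the user's dog",
    "parameters": {"type": "object", "properties": {}},
})
def get_dog_name() -> str:
    return "Rex"  # illustrative stub


# The `functions` argument for ChatCompletion falls out of the registry,
# and the dispatch loop shown earlier can look handlers up by name.
functions = [schema for _, schema in REGISTRY.values()]
```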
Thanks for the example.
I’d been thinking of doing this instead of the usual VectorDB embeddings method of performing a search based on the user’s query and stuffing the results into the initial context.
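For comparison, that usual approach looks roughly like this - embed the query, retrieve, and stuff the matches into the context before the model answers (same placeholder openai/pinecone setup as in the sketches above):

```python
# Rough sketch of the conventional approach: retrieve first, then stuff the
# matches into the initial context. Setup mirrors the earlier placeholder sketches.
import os

import openai
import pinecone

openai.api_key = os.environ["OPENAI_API_KEY"]
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-west1-gcp")
index = pinecone.Index("demo-index")


def answer_with_stuffed_context(user_query: str) -> str:
    """Retrieve top matches for the query and prepend them to the prompt."""
    emb = openai.Embedding.create(
        model="text-embedding-ada-002", input=user_query
    )["data"][0]["embedding"]
    results = index.query(vector=emb, top_k=3, include_metadata=True)
    context = "\n".join((m.metadata or {}).get("text", "") for m in results.matches)
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": user_query},
        ],
    )
    return resp["choices"][0]["message"]["content"]
```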
Two different things are going on here:

1. chaining functions together (which my demo shows how to do)
2. searching text data
Pinecone is great for #2, which is why I have it in my demo. If you have a large repo of data - news, HR information, law, taxes, etc. - then you’ll want to “chunk index” those documents so the relevant pieces can be fed into ChatCompletion (ChatGPT). Simply chaining function calls together can’t help with that.
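A rough sketch of what that “chunk indexing” step could look like with OpenAI embeddings and Pinecone; the chunk size, index name, and metadata layout are arbitrary choices for illustration:

```python
# Rough sketch of "chunk indexing" a document repo into Pinecone so the text
# can later be retrieved and fed into ChatCompletion. Chunk size, index name,
# and metadata fields are arbitrary illustrative choices.
import os

import openai
import pinecone

openai.api_key = os.environ["OPENAI_API_KEY"]
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-west1-gcp")
index = pinecone.Index("demo-index")


def chunk_text(text: str, size: int = 1000):
    """Naive fixed-size character chunks; real code would split on sentence boundaries."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def index_document(doc_id: str, text: str) -> None:
    """Embed each chunk and upsert it with its text kept as metadata."""
    vectors = []
    for n, chunk in enumerate(chunk_text(text)):
        emb = openai.Embedding.create(
            model="text-embedding-ada-002", input=chunk
        )["data"][0]["embedding"]
        vectors.append((f"{doc_id}-{n}", emb, {"text": chunk}))
    index.upsert(vectors=vectors)
```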
However, if you want to ask “what is the weather like today in San Francisco?”, chaining is AWESOME!!! ChatCompletion can issue a “function_call” request and have your code hit a weather API to get the answer.
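Here’s a sketch of what a getWeather function could look like on the code side - using the free Open-Meteo endpoint, with latitude/longitude parameters the model typically fills in itself for a named city. The endpoint and parameter shape are assumptions, not the demo’s actual weather source:

```python
# Sketch of a getWeather handler exposed via function_call. The Open-Meteo
# endpoint and lat/lon parameters are illustrative assumptions.
import requests

WEATHER_SCHEMA = {
    "name": "getWeather",
    "description": "Get the current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {
            "latitude": {"type": "number", "description": "latitude of the city"},
            "longitude": {"type": "number", "description": "longitude of the city"},
        },
        "required": ["latitude", "longitude"],
    },
}


def get_weather(latitude: float, longitude: float) -> str:
    """Fetch current conditions; the returned string is handed back to the model."""
    resp = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": latitude, "longitude": longitude, "current_weather": "true"},
        timeout=10,
    )
    current = resp.json().get("current_weather", {})
    return f"{current.get('temperature')} C, wind {current.get('windspeed')} km/h"
```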