Generative Question-Answering with Long-Term Memory

Generative AI sparked several “wow” moments in 2022, from generative art tools like OpenAI’s DALL-E 2, Midjourney, and Stable Diffusion, to the next generation of large language models like OpenAI’s GPT-3.5 models and BLOOM, and chatbots like LaMDA and ChatGPT.

It’s hardly surprising that generative AI is experiencing a boom in interest and innovation [1]. Yet this marks just the first year of generative AI’s widespread adoption: the early days of a new field poised to disrupt how we interact with machines.


This is a companion discussion topic for the original entry at https://www.pinecone.io/learn/openai-gen-qa/

How does one use an LLM with large text documents? How do you get around the “token limit exceeded” error from the embedding model?

Maybe this can help? OpenAI API
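A common workaround (not confirmed by the thread above, so treat it as one possible approach) is to split each document into smaller, overlapping chunks and embed each chunk separately, rather than embedding the whole document at once. Below is a minimal sketch; it uses word count as a rough proxy for model tokens, and the function name, parameters, and defaults are all illustrative. A real pipeline would count tokens with the embedding model’s own tokenizer (e.g. tiktoken for OpenAI models).

```python
def chunk_text(text: str, max_tokens: int = 200, overlap: int = 20) -> list[str]:
    """Split text into overlapping chunks of at most max_tokens words.

    Word count only approximates model tokens; swap in a real tokenizer
    before relying on this near the embedding model's hard limit.
    """
    words = text.split()
    step = max_tokens - overlap  # how far the window advances each chunk
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break  # the last window already covers the end of the text
    return chunks


# Example: a 500-word document becomes three chunks, each small enough
# to embed, with a 20-word overlap so no sentence is cut off blindly.
doc = " ".join(f"word{i}" for i in range(500))
chunks = chunk_text(doc)
```

Each chunk is then embedded and stored on its own, so retrieval returns the relevant passage rather than the oversized full document.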