Hey Folks -
Out of curiosity, and because I’ve been unable to find recent research pertaining to the subject:
Has anyone tested a system for representing the formal logic of different natural-language embeddings with a vector database? I haven’t found anything on this, but I’d like to hack something together to create an Aristotelian ‘argument’ generator for the coherent possibilities of some domain.
I intend to use LangChain to walk through a line of reasoning about what is possible in a domain, recursively evaluating the logic based on some input query. Essentially, I’ll fine-tune a GPT model to identify the formal logic of natural-language content, then store each chunk of text as an embedding with its logical formulation attached as metadata.
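To make that concrete, here’s a minimal sketch of what one stored record might look like. Everything here is hypothetical: the field names (`formula`, `premises`), the toy vector, and the `make_record` helper are just illustrations of the chunk-plus-logic-metadata idea, not any library’s actual API.

```python
from dataclasses import dataclass, field

# Hypothetical record shape: each text chunk is stored with its vector
# plus the extracted formal-logic representation kept in metadata.
@dataclass
class LogicChunk:
    chunk_id: str
    text: str
    embedding: list            # vector from your embedding model
    metadata: dict = field(default_factory=dict)

def make_record(chunk_id, text, embedding, formula, premises=None):
    """Bundle a chunk with its logical formulation as metadata.
    `formula` and `premises` are whatever the fine-tuned extractor emits."""
    return LogicChunk(
        chunk_id=chunk_id,
        text=text,
        embedding=embedding,
        metadata={
            "formula": formula,          # e.g. a string like "All(x, M(x) -> P(x))"
            "premises": premises or [],  # ids of chunks this one logically depends on
        },
    )

record = make_record(
    "c1",
    "All men are mortal; Socrates is a man; therefore Socrates is mortal.",
    [0.12, -0.03, 0.88],  # toy vector, purely for illustration
    "M(s) & All(x, M(x) -> P(x)) |- P(s)",
)
print(record.metadata["formula"])
```

The upside of keeping the formula as metadata rather than baking it into the text is that the logical structure stays queryable separately from the semantic similarity search.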
I’m now in the process of scaffolding the representation of that logical metadata. My thinking is that I need some sort of tree or graph structure so that the flow of logic remains hierarchically dependent.
Has anyone seen or experimented with hierarchical organization of embeddings in Pinecone?
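As far as I know, Pinecone indexes are flat, so any hierarchy has to live in the metadata itself, e.g. giving every vector a `parent_id` and filtering on it at query time. Here’s an in-memory stand-in for that pattern; the `parent_id` field and the store layout are my own assumptions, and the real query would be a Pinecone metadata filter rather than these Python loops.

```python
# In-memory stand-in for hierarchy-via-metadata. In Pinecone itself you'd
# attach parent_id as metadata on each vector and filter on it at query
# time; here plain dicts play the role of the index.

store = {
    "root": {"parent_id": None,   "formula": "Domain axioms"},
    "p1":   {"parent_id": "root", "formula": "All(x, M(x) -> P(x))"},
    "p2":   {"parent_id": "root", "formula": "M(s)"},
    "c1":   {"parent_id": "p1",   "formula": "P(s)"},
}

def children(node_id):
    """Mimic a metadata filter: fetch the direct logical dependents of a node."""
    return sorted(k for k, v in store.items() if v["parent_id"] == node_id)

def ancestry(node_id):
    """Walk parent links to recover the chain of reasoning back to the axioms."""
    chain = []
    while node_id is not None:
        chain.append(node_id)
        node_id = store[node_id]["parent_id"]
    return chain

print(children("root"))  # direct dependents of the axioms
print(ancestry("c1"))    # conclusion traced back up the tree
```

The nice property of parent pointers is that recursive evaluation becomes a walk: retrieve a node by similarity, then follow `parent_id` upward (or `children` downward) to check that the whole chain holds.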
Let me know! I’d love to spark a conversation and hear some different (probably better-versed) opinions and experiences. For context, I’m a senior at the University of Michigan studying computer science and cognitive science, so this stuff is of unending interest to me.
“Communication rules the Nation…” AI ^ RAG is going to be putting a huge premium on comms in the future, I intend to help that along :^)