The separate graph and vector storage can indeed add overhead for short-lived tasks. I've found that using a dual-memory architecture, where episodic and semantic memories coexist, can streamline this process and reduce complexity. If you're interested in seeing how this could work, I put together some tutorials on similar setups: https://github.com/NirDiamant/agents-towards-production
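To make "dual memory" concrete, here's a toy sketch of the idea, assuming a single in-process store (the class and method names are mine, not from those tutorials): episodic memory holds recent session turns and expires with the session, while semantic memory holds distilled long-lived facts.

    from collections import deque

    class DualMemory:
        """Toy dual-memory store: episodic = recent turns, semantic = distilled facts."""
        def __init__(self, episodic_size=50):
            self.episodic = deque(maxlen=episodic_size)  # short-lived, per-session turns
            self.semantic = {}                           # long-lived key -> fact

        def remember_turn(self, role, text):
            self.episodic.append((role, text))

        def remember_fact(self, key, fact):
            self.semantic[key] = fact

        def recall(self, key=None, last_n=5):
            # One query hits both memories: a stored fact plus recent context.
            return self.semantic.get(key), list(self.episodic)[-last_n:]

Because both memories live in one process, there's no separate graph/vector service to spin up for a short-lived task.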
Out of curiosity, did you settle on that name before or after the RAM availability/price issues?
Actually, the name definitely came after noticing RAM prices. The idea of keeping the graph in memory only for ephemeral RAG sessions came first, but we won't pretend the naming wasn't influenced by RAM being in the spotlight.
GrrHDD
Very cool, kudos
Where might one read more about the type of indexing you do to build the graph?
Appears to be: https://github.com/gibram-io/gibram/blob/main/sdk/python/gib...
Exactly, thank you. It's still LLM-based extraction.
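For anyone wondering what LLM-based extraction looks like in broad strokes, here's a rough sketch; the prompt, the JSON schema, and the call_llm placeholder are illustrative, not gibram's actual code:

    import json

    EXTRACTION_PROMPT = """Extract entities and relationships from the text below.
    Return JSON: {{"entities": [{{"name": "...", "type": "..."}}],
                  "relations": [{{"source": "...", "target": "...", "label": "..."}}]}}
    Text: {text}"""

    def extract_graph(text, call_llm):
        """Ask the model for entities/relations, then parse its JSON reply.
        call_llm is any callable that sends a prompt and returns a string."""
        reply = call_llm(EXTRACTION_PROMPT.format(text=text))
        graph = json.loads(reply)
        # Sanity-check the shape before merging into the index.
        assert "entities" in graph and "relations" in graph
        return graph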
How do you search the graph network?
There are two steps:
1. Vector search (HNSW): Find top-k similar entities/text units from the query embedding
2. Graph traversal (BFS): From those seed entities, traverse relationships (up to 2 hops by default) to find connected entities
This catches both semantically similar entities AND structurally related ones that might not match the query text.
Implementation: https://github.com/gibram-io/gibram/blob/main/pkg/engine/eng...
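For the curious, here's a minimal self-contained sketch of those two steps; brute-force cosine similarity stands in for the real HNSW index, and the toy data and function names are mine, not gibram's:

    from collections import deque
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def vector_seeds(query_vec, entity_vecs, k=3):
        # Step 1: top-k entities most similar to the query embedding
        # (brute force here; the real engine uses an HNSW index).
        ranked = sorted(entity_vecs, key=lambda e: cosine(query_vec, entity_vecs[e]), reverse=True)
        return ranked[:k]

    def expand_bfs(seeds, graph, max_hops=2):
        # Step 2: BFS over relationships, up to max_hops from each seed.
        seen = set(seeds)
        frontier = deque((s, 0) for s in seeds)
        while frontier:
            node, depth = frontier.popleft()
            if depth == max_hops:
                continue
            for neighbor in graph.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, depth + 1))
        return seen

    # "alice" never matches the query embedding, but it's reachable from a
    # seed entity, so the traversal step still surfaces it.
    entity_vecs = {"ram": [1.0, 0.0], "graph": [0.7, 0.7], "alice": [0.0, 1.0]}
    graph = {"ram": ["graph"], "graph": ["alice"], "alice": []}
    seeds = vector_seeds([0.9, 0.1], entity_vecs, k=1)  # -> ["ram"]
    print(expand_bfs(seeds, graph))                     # {'ram', 'graph', 'alice'}

That last line is the point of the hybrid approach: the BFS expansion pulls in structurally related entities the vector search alone would miss.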
This is how I did it a few years back while working for a set store company. It works well.