Ask HN: RAG or shared memory for task planning across physical agents?

10 points by mbbah a day ago

LLM-based software agents are getting pretty good at tool use, memory, and multi-step task planning. But I’m curious whether anyone is pushing this further into the physical world, specifically with robots or sensor-equipped agents.

For example:

Imagine Robot A observes that an item is in Zone Z, and Robot B later needs to retrieve it. How do they share that context? Is it via:

  - A structured memory layer (like a knowledge graph; rough sketch after this list)?
  - Centralized state in a RAG-backed store?
  - Something simpler (or messier)?
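
For the structured option, here’s a minimal sketch of what I have in mind, assuming a tiny in-process triple store (all names are made up for illustration): Robot A asserts a located-in fact, and Robot B queries it before planning retrieval.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Fact:
        subject: str
        predicate: str
        obj: str

    class WorldGraph:
        """Shared structured memory: a set of (subject, predicate, object) facts."""
        def __init__(self):
            self.facts = set()

        def assert_fact(self, subject, predicate, obj):
            self.facts.add(Fact(subject, predicate, obj))

        def query(self, subject=None, predicate=None, obj=None):
            # None acts as a wildcard, e.g. query(predicate="located_in")
            return [f for f in self.facts
                    if (subject is None or f.subject == subject)
                    and (predicate is None or f.predicate == predicate)
                    and (obj is None or f.obj == obj)]

    graph = WorldGraph()
    # Robot A observes the item and writes structured state
    graph.assert_fact("item_42", "located_in", "zone_z")
    # Robot B later resolves the item's location before planning retrieval
    hits = graph.query(subject="item_42", predicate="located_in")
    target_zone = hits[0].obj if hits else None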

I’m experimenting with a shared knowledge graph as memory across agents, backed by RAG for unstructured input and queryable for planning, dependencies, and task dispatch.
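
On the RAG side, here’s a self-contained sketch of the unstructured half. The embedding is a stand-in bag-of-words count so the example runs without a model; a real system would call an embedding model instead, and the names are hypothetical.

    import math
    from collections import Counter

    def embed(text):
        # Stand-in embedding: bag-of-words term counts. A real system would
        # call an embedding model here; this keeps the sketch runnable as-is.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
        norm = math.sqrt(sum(v * v for v in a.values()))
        norm *= math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    class ObservationStore:
        """Unstructured memory: free-text robot observations, retrieved by similarity."""
        def __init__(self):
            self.docs = []

        def add(self, robot_id, text):
            self.docs.append((robot_id, text, embed(text)))

        def retrieve(self, query, k=3):
            q = embed(query)
            ranked = sorted(self.docs, key=lambda d: cosine(q, d[2]), reverse=True)
            return [(rid, text) for rid, text, _ in ranked[:k]]

    store = ObservationStore()
    store.add("robot_a", "saw a red crate near the loading dock in zone z")
    store.add("robot_a", "battery at 40 percent, returning to charger")
    # Robot B's planner pulls relevant context before dispatch
    print(store.retrieve("where is the red crate"))

The planner queries both layers: structured facts for dispatch decisions, retrieved observations for context.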

Would love to know:

  - Is anyone else thinking about shared memory across physical agents?
  - How are you handling world state, task context, or coordination?
  - Any frameworks or lessons you’ve found helpful?

I’m exploring this space and would really appreciate hearing from others who are building in or around it.

Thanks!

scowler 16 hours ago

We’ve been exploring typed task graphs as an alternative to shared memory. Turns coordination into lineage rather than state. Surprisingly scalable. Happy to compare notes.
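
Very roughly (names invented for illustration), something like this: each task node is typed by the artifact it produces, downstream tasks declare what they consume, and ordering falls out of lineage instead of reads against shared state.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        produces: str                                 # artifact type this task emits
        consumes: list = field(default_factory=list)  # artifact types it needs
        parents: list = field(default_factory=list)   # lineage edges to upstream tasks

    def plan_order(task):
        # Topological walk over lineage: every upstream producer runs first,
        # so coordination comes from the graph, not from shared mutable state.
        order, seen = [], set()
        def visit(t):
            if id(t) in seen:
                return
            seen.add(id(t))
            for p in t.parents:
                visit(p)
            order.append(t)
        visit(task)
        return order

    observe = Task("robot_a.observe", produces="ItemLocation")
    retrieve = Task("robot_b.retrieve", produces="ItemInHand",
                    consumes=["ItemLocation"], parents=[observe])

    print([t.name for t in plan_order(retrieve)])
    # -> ['robot_a.observe', 'robot_b.retrieve']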
