I have local and remote tasks, some of which are executed sequentially and others in parallel.
One idea is to use some sort of K/V store, possibly in a distributed setup as well; another is to adapt PostgreSQL, which I already use elsewhere in the stack. But I'm not sure about the overhead tradeoffs of each approach…
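To make the PostgreSQL idea concrete, what I'm picturing is workers claiming rows from a task table with FOR UPDATE SKIP LOCKED so two workers never grab the same task. A rough sketch (the tasks table, its columns, and the connection string are just placeholders):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver
)

// claimTask claims a single queued task. SKIP LOCKED tells Postgres to skip
// rows already locked by another worker's transaction, so concurrent workers
// each get a different task without blocking on one another.
func claimTask(db *sql.DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()

	var id int64
	var payload string
	err = tx.QueryRow(`
		SELECT id, payload FROM tasks
		WHERE status = 'queued'
		ORDER BY id
		LIMIT 1
		FOR UPDATE SKIP LOCKED`).Scan(&id, &payload)
	if err == sql.ErrNoRows {
		return nil // nothing queued right now
	}
	if err != nil {
		return err
	}

	// ... execute the task here, then mark it done ...
	if _, err := tx.Exec(`UPDATE tasks SET status = 'done' WHERE id = $1`, id); err != nil {
		return err
	}
	return tx.Commit()
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/tasks?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := claimTask(db); err != nil {
		log.Fatal(err)
	}
}
```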
That's a big question; it all depends on what you are doing and what requirements you have.
For example, we have a lot of IPC going on, both on the same machine and across distributed machines. It's mostly done with a publish/subscribe model using NATS (https://nats.io/).
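Roughly, a subscriber and a publisher look like this with the nats.go client against a local server; the subject name is just an example, not something NATS prescribes:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a local NATS server (default URL is nats://127.0.0.1:4222).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Subscribe to a subject; the handler runs for every message published to it.
	sub, err := nc.Subscribe("tasks.completed", func(m *nats.Msg) {
		fmt.Printf("got: %s\n", string(m.Data))
	})
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	// Publish a message to the same subject.
	if err := nc.Publish("tasks.completed", []byte("task 42 done")); err != nil {
		log.Fatal(err)
	}
	nc.Flush()

	time.Sleep(time.Second) // let the async handler fire before exiting
}
```

The nice part is that the publisher doesn't care whether the subscribers are on the same box or on other machines; that's entirely the server's problem.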