Here, "transactional" refers to the guarantee that any read of keys returns a consistent view, i.e. never one taken halfway through an update. In the above case, the Rust borrow checker ensures that when a write occurs, we hold a &mut self, which means no other thread can be doing a read (that would require a &self).
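A minimal sketch of the single-process setup being described (the `Store` type and its methods are illustrative, not from the original code):

```rust
use std::collections::HashMap;

// The map is only ever mutated through &mut self, so the borrow checker
// guarantees no reader can observe a half-applied update.
struct Store {
    map: HashMap<u64, u64>,
}

impl Store {
    fn new() -> Self {
        Store { map: HashMap::new() }
    }

    // Writing requires exclusive access (&mut self)...
    fn insert(&mut self, key: u64, value: u64) {
        self.map.insert(key, value);
    }

    // ...while reading requires only shared access (&self).
    fn get(&self, key: u64) -> Option<u64> {
        self.map.get(&key).copied()
    }
}
```

While a `&Store` is held anywhere, the compiler rejects any call to `insert`, which is exactly the "consistent view" property.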
Question: Is there an easy way to make this work in a 'distributed' manner (i.e. the HashMap maintained by multiple Rust programs running on multiple machines), or is this something that requires using a database?
Your problem is too underspecified to answer. What do you expect to happen, for instance, if node A attempts to insert (1,2) concurrently with node B inserting (1,3)? What behavior do you need in the face of a network partition? How much communication latency will exist between the nodes (typical and worst-case)? What read and write loads do you expect? Etc.
Then I expect one to succeed and one to fail. Which is fine: the client has the responsibility to retry.
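That first-writer-wins semantics can be expressed as a conditional insert, sketched here for a single process (`try_insert` is a hypothetical helper, not part of the original code):

```rust
use std::collections::HashMap;
use std::collections::hash_map::Entry;

// Succeeds only if the key is absent; otherwise returns the existing value,
// so of two concurrent inserts for the same key exactly one wins and the
// loser can retry.
fn try_insert(map: &mut HashMap<u64, u64>, key: u64, value: u64) -> Result<(), u64> {
    match map.entry(key) {
        Entry::Vacant(e) => {
            e.insert(value);
            Ok(())
        }
        Entry::Occupied(e) => Err(*e.get()),
    }
}
```

With this, node A's insert of (1,2) returns `Ok(())` and node B's subsequent insert of (1,3) returns `Err(2)`; the distributed question is then how to make all nodes agree on which insert arrived first.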
Assume similar setup / expectations as a Postgres cluster running within a single AWS datacenter. (But hopefully higher throughput / possibly lower latency, as we are dealing with a 'more restricted' problem.)
The simplest solution here is to have a master process somewhere that serializes all updates, and either responds to queries or broadcasts accepted changes; that can be either Postgres or something bespoke. I doubt that this is what you have in mind, as most people wouldn't describe this as "distributed."
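In a single process, that master-serializes-all-updates design looks like one thread owning the map and draining a command channel; a distributed version would replace the channel with a network protocol. A sketch under those assumptions (`Cmd` and `run_master` are made-up names):

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Commands carry a reply channel so the master can answer each client.
enum Cmd {
    Insert(u64, u64, mpsc::Sender<bool>),
    Get(u64, mpsc::Sender<Option<u64>>),
}

// The master thread owns the map outright; processing commands one at a
// time off the channel is what serializes all updates.
fn run_master(rx: mpsc::Receiver<Cmd>) {
    let mut map: HashMap<u64, u64> = HashMap::new();
    for cmd in rx {
        match cmd {
            Cmd::Insert(k, v, reply) => {
                // First writer wins; a later insert for the same key fails.
                let ok = !map.contains_key(&k);
                if ok {
                    map.insert(k, v);
                }
                let _ = reply.send(ok);
            }
            Cmd::Get(k, reply) => {
                let _ = reply.send(map.get(&k).copied());
            }
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let master = thread::spawn(move || run_master(rx));

    let (rtx, rrx) = mpsc::channel();
    tx.send(Cmd::Insert(1, 2, rtx)).unwrap();
    assert!(rrx.recv().unwrap()); // first insert succeeds

    let (rtx, rrx) = mpsc::channel();
    tx.send(Cmd::Insert(1, 3, rtx)).unwrap();
    assert!(!rrx.recv().unwrap()); // later insert for the same key fails

    drop(tx); // closing the channel lets the master loop exit
    master.join().unwrap();
}
```

The point of the sketch is that a single point of truth makes the conflict question trivial; the hard parts only appear once you want multiple masters.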
Dealing with distributed algorithms is generally a pain, so I try to avoid them whenever I can. What benefit do you hope to gain by using something "distributed" instead of having a single point of truth?