Channels are a form of shared memory too. For a long time, std::sync::mpsc was quite literally an Arc<Mutex<Vec<Message>>>. There isn't any actual sending happening there; it's just threads writing to and reading from a shared queue.
Performance mainly depends on contention on the lock. If threads aren't fighting over it, it will be pretty cheap, and most likely there won't even be a measurable difference between a mutex and a channel, assuming you don't keep the mutex locked for longer than it takes to peek at the state.
Channels work fine when processing between threads is mainly pipelined. They get a bit clumsy when there is no strict order in which items are put into and fetched from the channel; think about prioritizing messages, flushing the channel, and so on.
However, there are hundreds of channel implementations on crates.io with different communication models, and one of them may fit your needs.
Personally, it is not uncommon for me to roll my own “channel”. Strictly speaking these are not channels but shared queues; the simplest form is an Arc<Mutex<VecDeque<Message>>>. There are a lot of variations on this theme. Here are some ideas, with the outer Arc omitted:
Mutex<Option<State>> - Queue with capacity 1.
Mutex<Option<Box<State>>> - Same, but only a pointer is passed around.
Mutex<Arc<State>> - To read: lock the mutex, clone the Arc.
This question is hard to answer without knowing what State is, how frequently it's read, how frequently it's written, etc. For all we know, State could be packed into a u64 and just accessed atomically with no further synchronization needed.
I would not say so personally. The "rusty" thing is to understand the problem, the tools, and do what is best in the situation.
The standard library Mutex is extremely fast when not under contention (as is likely your case); there's no syscall or anything of that sort with no-to-low contention, just an atomic CAS. You might not even need an Arc, depending on the lifetimes.
The std mpsc channel is fast too, and (usually) lock-free, but it still has more overhead than a Mutex (usually) because there's simply more going on. One thing to keep in mind is that with a channel you have to send a new "instance" of the state for every update (unlike a mutex, which lets you mutate in place).
As others said, it is difficult to recommend any particular solution without knowing what State is and what the characteristics of its changes over time are. I want to share one more common pattern, which might be useful for you.
If your state does not change often, the readers only need a shared reference to it, and it is acceptable for readers to see an out-of-date version of the state (as long as they load the new version on the next read), then you can look into something called an "atomic Arc".
It is a data structure behaving like a combination of RwLock and Arc, but better optimised to reduce contention at the cost of eventual consistency (different implementations choose different trade-offs).
A classic use case for it is reloading configuration during the process lifetime. For example, you can write a server which loads its configuration at start-up; each worker thread gets a shared reference to the configuration when it processes a request, and a dedicated "configuration writer" thread can reload the configuration. After this atomic update, all worker threads will see the new config version when processing new requests.
There are a couple of implementations of this data structure, so if you want to use one, read its documentation to understand exactly which trade-offs it makes and what its expected runtime performance characteristics are. Some of the more notable implementations are: