From the discussion it starts looking more and more like you are making the exact same mistake as the “we code for the hardware” people (mostly C, though sometimes C++, developers): assuming that assembler and machine code matter – and everything else is of little importance.
The end result: compilers destroy their “perfectly valid programs”, and then they publish blog posts and petitions, throw temper tantrums, and do various other silly things in an attempt to squeeze out what they perceive as their inalienable right: a description, from the compiler makers, of how to predict how an arbitrary program would behave after compilation.
The sad truth that their petitions, demands, and blog posts ignore is this: what they want simply can't exist.
Not as in “it would require superintelligence, and if we built a few terawatts of datacenters then a super-AI might do it”, but as in “no matter how many resources we throw at the problem… it would still be unsolved”.
It's simply mathematically impossible to reliably answer even the simple question “does this machine code work as the source code of the Rust program intended, or not?”.
And that is where the dreaded “undefined behavior” springs from: it is flat-out impossible to create a compiler that both accepts *all* “good” programs and rejects *all* “bad” programs (note that I haven't specified what “good” and “bad” programs are… I don't need to, the impossibility holds for any non-trivial definition of “good” and “bad”) – so we have only one choice left: accept the fact that some programs the compiler accepts… would be miscompiled.
And these traits, `Send` and `Sync`, are part of the solution to that sad dilemma. Note that I highlighted *all* in the previous paragraph. It's very important. If we are ready to reject some “good” programs, or, alternatively, accept some “bad” programs… either accept a bit of chaff or lose a bit of wheat… then the hard block on the mathematically proven “impossible” path disappears.
And that's where the `Send` and `Sync` traits become important. These traits don't actually *do* anything. They just mark certain types as “good” (in two different senses).
And, most of the time, the compiler can make that decision without your help.
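A minimal sketch of that automatic decision, using a hypothetical `Point` struct (the `assert_send`/`assert_sync` helpers are mine, not part of any library):

```rust
// A plain struct: all its fields are Send + Sync, so the compiler
// auto-implements both marker traits for it, with no code from us.
struct Point {
    x: i32,
    y: i32,
}

// Helper functions that only compile if T implements the trait.
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    // These lines compile without any manual impl: the compiler
    // decided on its own that `Point` is "good" in both senses.
    assert_send::<Point>();
    assert_sync::<Point>();

    // Uncommenting the next line fails to compile: `Rc` is neither
    // Send nor Sync, so the compiler marked it "bad" - also on its own.
    // assert_send::<std::rc::Rc<i32>>();
    let _ = Point { x: 1, y: 2 };
}
```

Note that the "check" happens entirely at compile time; nothing about `Point` changes at runtime.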
But sometimes the compiler cannot be sure that certain types are “good”. Then you need to verify certain properties of the types involved and implement these traits manually.
This happens rarely but it does happen.
You can't derive them automatically because, most of the time, the compiler provides implementations of these traits for you without you doing anything; there is no need to derive.
And in the cases where you need to implement them manually, you first need to understand what requirements apply to types that are `Send` or `Sync`, then you need to understand why the compiler hasn't implemented these traits automatically, and then you need to explain, to yourself and to the reviewers, why you think your type is actually “good”… and only then do you implement these traits manually.
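The steps above can be sketched with a hypothetical `OwnedPtr` wrapper. The raw pointer inside makes the compiler suspicious, so it does *not* auto-implement `Send`; we verify the invariant ourselves and record that reasoning with an `unsafe impl`:

```rust
// Hypothetical wrapper around a raw pointer to a heap allocation
// that we own exclusively. Raw pointers are neither Send nor Sync,
// so the compiler refuses to auto-implement Send for this type.
struct OwnedPtr {
    ptr: *mut i32,
}

impl OwnedPtr {
    fn new(value: i32) -> Self {
        OwnedPtr { ptr: Box::into_raw(Box::new(value)) }
    }
    fn get(&self) -> i32 {
        unsafe { *self.ptr }
    }
}

impl Drop for OwnedPtr {
    fn drop(&mut self) {
        unsafe { drop(Box::from_raw(self.ptr)); }
    }
}

// Step 3 of the process: we (not the compiler) verified the property -
// `ptr` is uniquely owned and never aliased, so moving the whole struct
// to another thread is safe. The `unsafe` keyword records that the
// responsibility for this claim is ours.
unsafe impl Send for OwnedPtr {}

fn main() {
    let p = OwnedPtr::new(42);
    // Without the `unsafe impl Send` above, this spawn would not compile.
    let handle = std::thread::spawn(move || p.get());
    assert_eq!(handle.join().unwrap(), 42);
}
```

Remove the `unsafe impl` line and the compiler goes right back to rejecting the program: the marker is the only thing separating “accepted” from “rejected” here.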
But because these traits are used only to separate “good” programs that should be accepted and translated to machine code from “bad” programs that should be rejected… nothing changes in the compiled program whether you implement these traits or not.
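This is easy to demonstrate: `Send` and `Sync` are marker traits with no methods, so adding an implementation changes nothing about a type's representation. A small sketch (both types here are hypothetical):

```rust
// Two identical types: one stays non-Send (raw pointer field),
// the other gets a manual marker impl.
struct Plain(*mut u8);
struct Marked(*mut u8);

// Assumption for the sake of the example: pretend we verified the
// pointer is safe to move across threads.
unsafe impl Send for Marked {}

fn main() {
    // The marker impl adds no data, no vtable, no runtime check:
    // both types have exactly the same size and layout.
    assert_eq!(
        std::mem::size_of::<Plain>(),
        std::mem::size_of::<Marked>()
    );
}
```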
They are only there to better separate chaff from wheat…