Cargo uses file locking that should prevent races like that. If the cache mount somehow doesn't share file locks between containers, then multiple containers using the cache concurrently could cause such an issue.
You could try doing file locking manually across multiple containers sharing a cache dir to see if that's the case. If so, try sharing=locked on that mount so that only one container can use it at a time.
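Roughly something like this, as a sketch (the base image, paths, and cargo invocation are assumptions about your setup; /usr/local/cargo is where CARGO_HOME points in the official rust images):

```dockerfile
# syntax=docker/dockerfile:1
FROM rust:1.75 AS build
WORKDIR /app
COPY . .
# sharing=locked serializes access to the cache mount: a second build
# step that wants this mount blocks until the first one releases it.
RUN --mount=type=cache,target=/usr/local/cargo/registry,sharing=locked \
    cargo build --release
```

The trade-off is that concurrent builds no longer overlap on the dependency download/compile step, so you pay some wall-clock time for the safety.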
You could also try not caching the $CARGO_HOME/registry/src/ directory, and instead let every container extract the source files for itself. Cargo caches downloaded dependencies in the $CARGO_HOME/registry/cache directory and only extracts the source files from that cache into $CARGO_HOME/registry/src when they are needed for compilation. This section of the Cargo Book describes how to cache the Cargo home directory for CI workflows, which I assume extends to your use case.
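Translated to cache mounts, a sketch of that might look like the following (same caveat about assumed paths). Only the registry index, the downloaded .crate files, and the git database are cached; registry/src is deliberately left out, so each build extracts the sources into its own filesystem:

```dockerfile
# syntax=docker/dockerfile:1
FROM rust:1.75 AS build
WORKDIR /app
COPY . .
# Cache downloads and the index, but not the extracted sources in
# $CARGO_HOME/registry/src, which every build unpacks for itself.
RUN --mount=type=cache,target=/usr/local/cargo/registry/index \
    --mount=type=cache,target=/usr/local/cargo/registry/cache \
    --mount=type=cache,target=/usr/local/cargo/git/db \
    cargo build --release
```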
If unpacking is racy (because two containers unpack the same crate at the same time), the second instance of cargo will fail, because the .cargo-ok file already exists (written by the first instance of cargo while the second one was still busy unpacking).
> This lock is global **per-process** and can be acquired recursively.
(emphasis added by me)
I may have been too quick to jump to conclusions. It looks like Cargo uses file locking under the hood, which should work across processes. Maybe the mounting through Docker breaks the file locking?
Note that the SO question discusses volumes, not --mount=type=cache mounts. The fact that you can set how the cache mount is shared with the sharing=[locked|shared|private] option, as the8472 described, makes me think there's some magic going on with the mount that might be breaking flock.
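One way to probe that directly: a throwaway Dockerfile that takes an exclusive flock on a file inside a cache mount and holds it for a while. This is an untested sketch (the cache id, target path, and timings are made up). Build it twice in parallel on the same builder and compare the timestamps each build prints: if the second one is delayed by roughly the sleep duration, the flock is shared between the two builds; if both print immediately, it isn't.

```dockerfile
# syntax=docker/dockerfile:1
# Run two builds of this in parallel against the same daemon/builder:
#   docker build --no-cache --progress=plain --build-arg TAG=a . &
#   docker build --no-cache --progress=plain --build-arg TAG=b . &
#   wait
FROM debian:bookworm-slim
ARG TAG=probe
# Both builds mount the same cache (sharing=shared is the default) and
# try to take an exclusive flock on the same file inside it.
RUN --mount=type=cache,id=flock-probe,target=/cache,sharing=shared \
    flock -x /cache/probe.lock -c "echo $TAG acquired lock at \$(date +%T); sleep 15"
```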
I don't know the exact relationship between docker and buildkit (AFAIK buildkit is supposed to be the new backend for building docker containers), but I believe the mount logic of buildkit starts in this file.