I’ve begun a library to create and lock lockfiles cross-platform, but I don’t have much experience in this area. If anyone knows the filesystem APIs well on both Windows and Unix, would you consider looking at my implementation?
I think a good lockfile implementation would be a boon to the ecosystem!
The repo is on github.
In particular, it uses fs2 to lock files after creating them. It does not treat the mere presence of a lockfile as the lock being held, as e.g. libalpm does.
I haven’t published it yet, and I’d rather get views of some other potential users first.
Forgive the ignorance, but what problem is your crate solving, exactly?
Why is “cross platform lock files” something useful?
I ask because as I understand it, Cargo.lock files are already platform independent.
The problem: create some kind of marker that provides synchronization across processes, the way you would use a mutex to synchronize across threads.
Examples: things like package managers, where trying to update packages from two processes concurrently will probably break your system.
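The simplest form of such a cross-process marker is the presence-based lockfile: whichever process manages to create the file first holds the lock. A minimal std-only sketch (the `demo.lock` filename is just an illustration):

```rust
use std::fs::OpenOptions;
use std::io::ErrorKind;
use std::path::Path;

fn main() -> std::io::Result<()> {
    let path = Path::new("demo.lock");

    // create_new maps to O_CREAT | O_EXCL on Unix and CREATE_NEW on
    // Windows: the call atomically fails if the file already exists,
    // so only one process can "win" the lock.
    let first = OpenOptions::new().write(true).create_new(true).open(path);
    assert!(first.is_ok());

    // A second attempt (e.g. from another process) fails while the
    // lockfile exists.
    let second = OpenOptions::new().write(true).create_new(true).open(path);
    assert_eq!(second.unwrap_err().kind(), ErrorKind::AlreadyExists);

    // Deleting the file releases the "lock".
    std::fs::remove_file(path)?;
    Ok(())
}
```

The atomicity of `create_new` is what makes this safe against two processes racing to create the file; a check-then-create sequence would not be.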
cargo is interesting in that it doesn’t treat the lockfile itself as the lock; it uses the fs2 crate to take an OS-level lock on it. But what is the best way to do this in general: create a file, lock it, or both?
This lib uses locking, because I’ve often had a process die after creating a lockfile, after which you can’t start any more processes until you manually remove the stale file.
I don’t know the best answer though.
cargo is interesting in that it doesn’t treat the lockfile itself as the lock; it uses the fs2 crate to take an OS-level lock on it. But what is the best way to do this in general: create a file, lock it, or both?
If you’re greenfielding a solution, experience tells me that a better approach is to avoid filesystem-level locks in the first place: they tend to be relatively racy (i.e. prone to race conditions), which defeats their own point. But I suspect you already knew that last part. The problem is that, in general, this is not likely to get any better, as it is up to the various teams of FS maintainers to make trade-offs, e.g. writing first to RAM and letting the data trickle down to the actual file later. Thus there’s no such thing as “all file systems will work properly when I try to create this lock”. Instead there is only empirical verification, which needs to be repeated after every update to the FS.
As for Cargo, I’m not sure, but I think it avoids this problem through a CREW (concurrent read, exclusive write) access pattern, just like Rust’s borrow rules. In other words, when it spawns multiple processes it can do so safely because all spawned processes only read from Cargo.toml, so locking the Cargo.lock file doesn’t really matter all that much (it’s still good to lock it, of course).
Instead, when it is at all feasible, it is generally better to get concurrency not by forking processes but by using Rust’s concurrency primitives (or something like Erlang/BEAM or Go, which have “green threads”, a.k.a. fibers). This is cheaper and definitely safer, at least in terms of race conditions. As there’s no such thing as a free lunch, the trade-off is (somewhat?) increased development effort. Alas, such is the relationship between concurrency and our brains, at least for the time being.
An additional issue with forked processes is that they are basically the lowest of the low (in terms of value added) when it comes to concurrency, as they cannot exploit that concurrency for operations that are expensive but amenable to multithreading. Mostly it amounts to little more than “I need/want to use more of my cores, but updating the architecture for that is next to impossible”.
I agree with everything you say.
My use case isn’t concurrency so much as accidental parallel runs: I want to stop another user from running the same code while mine is running. So it’s a safety mechanism rather than a speed thing.
So I’ve published a first version as lockfile on crates.io.
It doesn’t use OS-level locking: the file’s presence is the conceptual lock, and its deletion is the release.
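That presence-as-lock, deletion-as-release scheme can be sketched as an RAII guard. To be clear, this is a hypothetical illustration of the idea, not the crate’s actual API (the `PresenceLock` name and `acquire` method are invented here):

```rust
use std::fs::OpenOptions;
use std::io;
use std::path::PathBuf;

// Hypothetical guard: the file's presence is the lock, and deleting
// it on Drop is the release.
struct PresenceLock {
    path: PathBuf,
}

impl PresenceLock {
    fn acquire(path: impl Into<PathBuf>) -> io::Result<Self> {
        let path = path.into();
        // create_new atomically fails if the lockfile already exists.
        OpenOptions::new().write(true).create_new(true).open(&path)?;
        Ok(PresenceLock { path })
    }
}

impl Drop for PresenceLock {
    fn drop(&mut self) {
        // Best-effort release; ignore errors during cleanup.
        let _ = std::fs::remove_file(&self.path);
    }
}

fn main() -> io::Result<()> {
    {
        let _guard = PresenceLock::acquire("guard.lock")?;
        // While the guard lives, a second acquire fails.
        assert!(PresenceLock::acquire("guard.lock").is_err());
    } // guard dropped here: lockfile removed
    // After release, the lock can be taken again.
    let _guard = PresenceLock::acquire("guard.lock")?;
    Ok(())
}
```

The Drop-based release still has the stale-file failure mode discussed above (a killed process never runs Drop), which is the trade-off against the OS-locking design.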