Synchronized logging from multiple processes

It looks like a very classic problem, but I'm not sure I'm handling it correctly.

There is a short-lived program, launched by an external event, possibly several times in rapid succession. The program has to log its work into a fixed file (possibly rotating over time, but not changing for every invocation). How can I make sure that every write to the log is "atomic", i.e. that two instances of the program won't interleave their log output?

The solution I can think of is to use lock files: create the lock file (waiting and retrying if it already exists), write to the log, then delete the lock to let others proceed. Is there a better way?
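
Roughly something like this (a minimal sketch; the paths are made up). `create_new` is the atomic check-and-create:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::{fs, thread, time::Duration};

fn with_lock<F: FnOnce() -> std::io::Result<()>>(lock_path: &str, f: F) -> std::io::Result<()> {
    // Atomically create the lock file; retry while another instance holds it.
    loop {
        match OpenOptions::new().write(true).create_new(true).open(lock_path) {
            Ok(_) => break,
            Err(e) if e.kind() == std::io::ErrorKind::AlreadyExists => {
                thread::sleep(Duration::from_millis(50));
            }
            Err(e) => return Err(e),
        }
    }
    let result = f();
    // Remove the lock even if the write failed, so others can proceed.
    fs::remove_file(lock_path)?;
    result
}

fn main() -> std::io::Result<()> {
    with_lock("/var/tmp/mylog.lock", || {
        let mut log = OpenOptions::new().append(true).create(true).open("/var/tmp/my.log")?;
        writeln!(log, "one invocation's worth of output")
    })
}
```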

Just make the writes atomic. :smiley: Is file append atomic in UNIX? - Stack Overflow
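
In Rust terms that means opening with `OpenOptions::append` (which sets `O_APPEND`) and emitting each record in a single write. Per the linked answer, whether concurrent appends can interleave depends on the write size and the filesystem, so treat this as a sketch, not a guarantee:

```rust
use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // O_APPEND: every write is positioned at the current end of file,
    // so concurrent appenders never overwrite each other's records.
    let mut log = OpenOptions::new()
        .create(true)
        .append(true)
        .open("/var/tmp/my.log")?;

    // Format the whole record first, then emit it with one write call;
    // splitting a record across writes would allow interleaving.
    let record = format!("[{}] event handled\n", std::process::id());
    log.write_all(record.as_bytes())
}
```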

Using files as locks is error-prone, but it's portable and usually works. Sometimes the user has to go and manually delete a stale lock file, though.

Unix supports more native ways to lock files: File locking in Linux. There's probably a crate somewhere for this, with multi-OS support.
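
One crate that wraps `flock()` on Unix (and `LockFileEx` on Windows) is fs2 — naming it as a suggestion, not something from the thread. Locking the log file itself avoids the stale-lock problem, since the kernel drops the lock when the process exits:

```rust
use fs2::FileExt; // fs2 = "0.4"
use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut log = OpenOptions::new()
        .create(true)
        .append(true)
        .open("/var/tmp/my.log")?;

    // Blocks until no other process holds the lock. The kernel releases
    // it automatically if we crash, so no manual cleanup is needed.
    log.lock_exclusive()?;
    writeln!(log, "pid {} did its work", std::process::id())?;
    log.unlock()?; // also released implicitly when `log` is closed
    Ok(())
}
```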

Another approach is to use a unix socket for coordination. There's probably no big benefit over native file locks, unless you want to do something more complex than just locking.
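
For what it's worth, the client side of that is tiny; a sketch assuming a writer daemon is already listening at the made-up path /tmp/logd.sock:

```rust
use std::io::Write;
use std::os::unix::net::UnixStream;

fn main() -> std::io::Result<()> {
    // Each short-lived instance just ships its record to the long-lived
    // writer; serialization happens in that one process.
    let mut sock = UnixStream::connect("/tmp/logd.sock")?;
    writeln!(sock, "pid {} finished", std::process::id())
}
```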

Can you send the log records to a single process that will write them to the file? I've done that with a web server running on localhost, but you probably don't need anything that heavyweight. You could also use a message bus; Kafka has worked well for me.
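
Something much lighter than a web server or Kafka would be the matching writer for the socket sketch above: one long-lived process that owns the file (again using the hypothetical /tmp/logd.sock):

```rust
use std::fs::OpenOptions;
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::UnixListener;

fn main() -> std::io::Result<()> {
    let _ = std::fs::remove_file("/tmp/logd.sock"); // clear a stale socket
    let listener = UnixListener::bind("/tmp/logd.sock")?;
    let mut log = OpenOptions::new()
        .create(true)
        .append(true)
        .open("/var/tmp/my.log")?;

    // Handling one connection at a time keeps records trivially ordered;
    // spawn threads feeding a channel if clients must not wait on each other.
    for stream in listener.incoming() {
        for line in BufReader::new(stream?).lines() {
            writeln!(log, "{}", line?)?;
        }
    }
    Ok(())
}
```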

Perhaps this: slog_async - Rust
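
Note that slog_async serializes records from many threads onto one writer thread within a single process — it won't coordinate separate processes by itself. A minimal setup looks roughly like this:

```rust
use slog::{info, o, Drain};
use std::fs::OpenOptions;

fn main() {
    let file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("/var/tmp/my.log")
        .unwrap();

    // slog_async puts a channel between the loggers and the actual drain:
    // all records are written by one background thread.
    let decorator = slog_term::PlainDecorator::new(file);
    let drain = slog_term::FullFormat::new(decorator).build().fuse();
    let drain = slog_async::Async::new(drain).build().fuse();
    let log = slog::Logger::root(drain, o!());

    info!(log, "hello from pid {}", std::process::id());
}
```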
