How to deduplicate/rate limit log entries in log4rs?

Hi community,

I am writing an application that can trigger several thousand log entries per second when things go south. I am looking for an easy way to either deduplicate or rate limit log entries (e.g., a bounded queue that drops entries) so that logging doesn't saturate disk I/O. However, log4rs doesn't seem to support this out of the box.

Any pointers on how to implement this with log4rs?

Regards

Meanwhile, I found a solution and document it here in case someone else has a similar problem.

I introduced a macro that does not repeat a log entry until a certain timeout has elapsed. The timestamps of the last invocations are stored in a static HashMap in a thread-safe way. The key into the HashMap is a tuple of the file name and line number of the macro invocation. This yields a globally unique key per call site, so each call site's last-invocation timestamp is tracked separately.

use std::collections::HashMap;
use std::sync::Mutex;
use std::time::Instant;

use once_cell::sync::Lazy;

/// Hash map recording when each rate-limited error last fired.
/// Must be `pub` so the exported macro can reach it via `$crate::`.
pub static ERROR_HM: Lazy<Mutex<HashMap<(&'static str, u32), Instant>>> =
    Lazy::new(|| Mutex::new(HashMap::new()));

#[macro_export]
/// Rate-limited error macro. Use this instead of `error!` in case error messages
/// might be triggered very frequently (e.g., in loops). An error message will not
/// be repeated before `timeout` seconds have elapsed.
macro_rules! error_rl {
    ($timeout:expr, $($msg:tt)*) => {{
        let key = (file!(), line!());
        // If the mutex is poisoned, the message is silently dropped.
        if let Ok(mut hm) = $crate::ERROR_HM.lock() {
            let due = match hm.get(&key) {
                Some(ts) => ts.elapsed().as_secs() >= $timeout,
                None => true,
            };
            if due {
                log::error!($($msg)*);
                hm.insert(key, std::time::Instant::now());
            }
        }
    }};
}
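To make the idea easy to try without pulling in the `once_cell` and `log` crates, here is a minimal self-contained sketch of the same mechanism using only the standard library: `std::sync::OnceLock` stands in for `Lazy`, `println!` stands in for `error!`, and the rate-limit check is pulled out into a helper function (`should_log`, a name I made up for this sketch).

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};
use std::time::Instant;

// Stand-in for the once_cell Lazy static: OnceLock from the standard library.
static ERROR_HM: OnceLock<Mutex<HashMap<(&'static str, u32), Instant>>> = OnceLock::new();

// Returns true if the message keyed by (file, line) may fire again, i.e. it
// has never fired or at least `timeout` seconds have passed since it last did.
fn should_log(key: (&'static str, u32), timeout: u64) -> bool {
    let hm = ERROR_HM.get_or_init(|| Mutex::new(HashMap::new()));
    let mut hm = hm.lock().expect("rate-limit map poisoned");
    match hm.get(&key) {
        Some(ts) if ts.elapsed().as_secs() < timeout => false,
        _ => {
            hm.insert(key, Instant::now());
            true
        }
    }
}

macro_rules! error_rl {
    ($timeout:expr, $($msg:tt)*) => {{
        if should_log((file!(), line!()), $timeout) {
            // println! stands in for log::error! in this sketch.
            println!($($msg)*);
        }
    }};
}

fn main() {
    for i in 0..5 {
        // Only the first iteration prints; repeats are suppressed for 10 s.
        error_rl!(10, "failure in iteration {}", i);
    }
}
```

Note that all five invocations in the loop share one call site and therefore one HashMap key, so only the first message gets through until the timeout expires.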

Anyway, I would be interested in feedback on this. If you have comments or proposals to enhance it, please comment.
