Attribute Macro for Wrapping Trait Methods

Hi all,

I'm in the early stages of writing a decorator-like proc_macro_attribute that does rate limiting on the method it's attached to, and am trying to avoid some headaches I see down the line. I've seen there are a few rate limiters out there, but none have the semantics (or support for methods, not just free functions) I'm looking for. The methods being wrapped would potentially be async methods (though I've omitted that from the examples), and I'll probably end up using darling so I have better semantics for the attribute arguments down the line.

The idea is to have something like:

pub struct Caller { ... }

impl Caller {
  #[rate_limit(Duration::seconds(5), 3)]
  fn call_api(&self, a: A, b: B) -> Response { ... }
}

which would rate-limit the method to 3 calls per 5 seconds. From what I've done before, this seems doable, but the hiccup I see is that when replacing the implementation with an outer function wrapping an inner function, I will have to parse &self, a: A, b: B and convert it to self: &Self, a: A, b: B, since &self wouldn't be valid in the inner function's signature. Is this assumption correct? E.g., this would not work:

// generated code returned by proc_macro_attribute

fn call_api(&self, a: A, b: B) -> Response {
    fn __inner_call_api(&self, a: A, b: B) -> Response { ... }
    //                  ^^^^^ Am I wrong in thinking it's invalid to define this with &self here?

    // do some rate limiting stuff and maybe wait some time

    __inner_call_api(self, a, b)
}

If all the above is more or less correct, it seems like the two options are to:

  1. Do the signature/argument parsing for the inner function, and convert &self to self: &Self and pass it in, which is doable (I think), but probably not that fun, or,
  2. Move the __inner_call_api copy of the wrapped method into the impl block itself to avoid the &self issue (sketched below), but this pollutes the namespace of the impl block and gives the caller the ability to call __inner_call_api directly and bypass rate limiting if they wanted.
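
For concreteness, here is a rough sketch of what option 2's expansion could look like for a plain (non-trait) impl; __inner_call_api is a hypothetical non-pub helper, and the types are just stand-ins:

pub struct Caller;
pub struct Response;

impl Caller {
    // what #[rate_limit(...)] could emit in place of the annotated method
    pub fn call_api<A, B>(&self, a: A, b: B) -> Response {
        // do some rate limiting stuff and maybe wait some time

        self.__inner_call_api(a, b)
    }

    // non-pub copy of the original body; &self is fine here because this
    // is still an associated function of Caller
    fn __inner_call_api<A, B>(&self, _a: A, _b: B) -> Response {
        Response
    }
}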

Are my assumptions completely off-base, or is there some easier solution I'm not thinking of? This would be used extensively within a crate, so a decorator-like attribute macro seems like the right choice, but I understand it's not easy, since I've had a hard time even finding examples of similar things being done.

Thanks in advance for any help!

I wouldn't know how to do this, as the inner function has no idea what Self refers to.

That could be mitigated by making __inner_call_api private.

In your title you say you want to wrap trait methods, but in your example you wrap a plain inherent method. Which is it? Because you can't just add an unknown __inner_call_api method to a trait implementation. What web framework are you targeting?

A third option would be to turn the tables: keep the original call_api intact and add the rate-limiting stuff as an inner function you call at the beginning of call_api. You could also add the rate-limiting logic directly into call_api, though this might be a little unhygienic. Here's an example of what I mean:

struct Response;

trait Service {
    fn call_api<A, B>(&self, a: A, b: B) -> Response;
}

struct Endpoint;

impl Service for Endpoint {
    /* The code that gets expanded by the macro to the function below
    #[rate_limit(Duration::seconds(5), 3)]
    fn call_api<A, B>(&self, a: A, b: B) -> Response {
        Response
    }
    */
    
    fn call_api<A, B>(&self, a: A, b: B) -> Response {
        fn __rate_limiting() {
            // do some rate limiting stuff and maybe wait some time
        }
        
        __rate_limiting();
        
        Response
    }
}
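
For what it's worth, the macro side of this transformation doesn't need much syn work: parse the annotated method, splice the helper plus a call to it into the front of the body, and re-emit the method otherwise unchanged. A minimal sketch, assuming syn 2.x (with the "full" feature) and quote, and ignoring the attribute arguments and async for now:

use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, ImplItemFn};

#[proc_macro_attribute]
pub fn rate_limit(_args: TokenStream, item: TokenStream) -> TokenStream {
    // parse the annotated method; the receiver and generics stay untouched
    let mut method = parse_macro_input!(item as ImplItemFn);

    // hypothetical helper defined inside the method, plus a call to it that
    // runs before the original body
    let helper: syn::Stmt = syn::parse_quote! {
        fn __rate_limiting() {
            // do some rate limiting stuff and maybe wait some time
        }
    };
    let call: syn::Stmt = syn::parse_quote! {
        __rate_limiting();
    };
    method.block.stmts.insert(0, call);
    method.block.stmts.insert(0, helper);

    quote!(#method).into()
}

Note that a nested fn can't capture locals, so any limiter state shared across calls would have to reach __rate_limiting through its arguments, through self, or through a static.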

Ah, you're quite right. Attempting to use Self in the inner function's signature results in a "use of generic parameter from outer function" error, so that idea is out.

This would be for an API client that uses Hyper to make calls to external services, so it doesn't target a web framework per se, but the idea is to have this general enough that it could be used for many clients (though not with any single trait shared between them).

In the end, I would need several different flavors of rate limiting, as some services are limited per endpoint as above, while others have an API-wide budget (with different costs per endpoint), so several different ways of maintaining/sharing the limiter state will definitely be necessary down the line. My current use would be for basic methods, and I don't see myself needing it for trait methods, so adding the helper outside as private would be fine for now.

Your option 3 looks promising though, since __rate_limiting can be handed whatever limiter state it needs (through self, its arguments, or a static) and doesn't take much syn work to do. I think the cross-endpoint rate-limiting cases would be a lot harder overall, but I don't have an immediate use for those, so that's a relief. You definitely caught me on trait impls vs. plain methods in impls... this isn't a great "intro to more advanced Rust" project; I've just become too accustomed to decorators in Python to let this go just yet.
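
As a rough illustration of the kind of limiter state the generated call could consult for the "3 calls per 5 seconds" case, here is a std-only sliding-window sketch (the names and layout are just assumptions, and a real version would want to cooperate with async rather than sleeping the thread):

use std::collections::VecDeque;
use std::sync::Mutex;
use std::time::{Duration, Instant};

// hypothetical limiter state, stored on the client struct and consulted by
// the generated rate-limiting call
pub struct CallLog {
    window: Duration,
    max_calls: usize,
    calls: Mutex<VecDeque<Instant>>,
}

impl CallLog {
    pub fn new(window: Duration, max_calls: usize) -> Self {
        CallLog { window, max_calls, calls: Mutex::new(VecDeque::new()) }
    }

    // block until another call fits in the window, then record it
    pub fn wait_for_slot(&self) {
        loop {
            let now = Instant::now();
            let mut calls = self.calls.lock().unwrap();
            // drop timestamps that have aged out of the window
            while calls.front().map_or(false, |t| now.duration_since(*t) > self.window) {
                calls.pop_front();
            }
            if calls.len() < self.max_calls {
                calls.push_back(now);
                return;
            }
            // the oldest recorded call determines how long to wait
            let oldest = *calls.front().unwrap();
            drop(calls);
            std::thread::sleep(self.window - now.duration_since(oldest));
        }
    }
}

With something like that stored on the struct, the generated code would call self.limiter.wait_for_slot() (a hypothetical field) at the top of the method rather than a free-standing __rate_limiting().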

You might want to look at how the tracing crate implements its instrument macro. It appears to support methods when the trait is using async_trait.