Future Work for Context-aware Allocators

I'm familiar with the basics of the unstable/nightly allocator API, but I had a question about future work with it.

I have an application which occasionally does cryptographic operations (digital signatures) by obtaining a private key from a secrets service available over HTTP. Notably, I do not control the guts of the service. I can choose my HTTP client, but that's basically where my control ends; it boils down to client.get(...).await?.

This particular secret should live for as limited a time as possible, and when dropped, it should be securely zeroed so that it's not just hanging around in process memory. I can use the zeroize crate on types that I control, e.g.:

#[derive(Deserialize, Zeroize, ZeroizeOnDrop)]
pub struct PrivateKey {
    data: [u8; 32],
}

However, since I can't control the buffer that my HTTP client creates during the response, I cannot zeroize it and the value will be "leaked" back into the allocator as-is.

A lot of what I'll say from here is broad strokes; I know that securing memory is not trivial. The design I've come up with is to do the following when I need to generate digital signatures:

  1. Ship another Rust binary along with my application.
  2. In that child binary, set the global allocator to zeroize_alloc so that all allocations in the child process are securely erased before memory is returned to the system allocator.
  3. Create a pipe and implement an RPC protocol where I send data to be signed and receive signed data back.
  4. The child process fetches the secret from the secrets manager service, signs the requested data, and returns the signature back to the parent process.

(I know there are many other things I should also do, like mlock, but I'll ignore those for simplicity.)

There are obvious penalties to doing it this way, but it does basically work.

Even if the allocator API were stable, it still wouldn't solve my use case: unless my HTTP client supports passing in a custom allocator, I have no control over the allocations that occur within the client. I could set the global allocator in the original process to zeroize_alloc, but then I'd pay the penalty of zeroizing ALL memory on every deallocation.

So I guess what I'm asking is whether the allocator team has considered doing something like scoping or context-aware allocators, for example:

let zeroizer = zeroize_alloc::ZeroizingAllocator(std::alloc::System);

let payload = generate_token();

let signed = zeroizer.with(|| {
    let client = MyClient::new();
    let resp = client.get("https://my-secrets-store.whatever/secrets/private-key");
    let secret = resp.body().text();

    do_signature(secret, payload)
});

I'm not sure if other languages have a similar concept, but I would theoretically be able to do exactly what I need if such an API existed. I imagine this would need some weird thread-local machinery and probably wouldn't work in async Rust.

Has anything like this been discussed as part of allocator API stabilization?

You could use a separate program to do the request in a subprocess. Its memory is cleaned up by the OS and never handed out to other programs on exit.

Yes, that's exactly what I described above in my question: I'm using a separate process to do the work so as to isolate secrets there.

I'm asking if the allocator team is considering doing a form of scoped allocator API where everything within a provided closure will default to a specific allocator.

You can define a global allocator that defers to a thread-local allocator yourself, but that increases the cost of every allocator call, and you need to figure out how to handle freeing after leaving the scope, which presumably means stashing the allocator. If it were a standard library feature, everyone would have to pay for it.

Yeah, fair point: I guess everything that allocates would need to allow passing an allocator argument to every API :face_with_spiral_eyes:

FWIW, this is how Zig works, and it's not too bad (in my limited experience, anyways).

I guess we'll just have to wait for it to hit stable and then refactor the entire Rust ecosystem :joy:

I think it's not about Rust itself but about the libraries; hopefully they will allow passing allocators per request, etc. By the way, you could try one of the allocator crates on crates.io in your own code, or use nightly.

As others have said, that mostly works, and it's essentially what the nightly allocator API is, though it introduces a new semver hazard: you now need to be sure up front about whether you might need to take an allocator in the future. IIRC Zig has at least one case in their standard library where they have an implicit allocator because they found they needed one (though since Zig hasn't stabilized, they may have taken the breaking change since).

Using thread-local allocators isn't a bad design, it's just not one you should have to pay for by default.