I'm familiar with the basics of the unstable/nightly allocator API, but I had a question about future work with it.
I have an application that occasionally performs cryptographic operations (digital signatures) by obtaining a private key from a secrets service available over HTTP. Notably, I do not control the guts of the service; I can choose my HTTP client, but that's basically where my control ends. It boils down to `client.get(...).await?`.
This particular secret should live for as short a time as possible, and when dropped it should be securely zeroed so that it isn't left hanging around in process memory. I can use the `zeroize` crate on types that I control, e.g.:

```rust
#[derive(Deserialize, Zeroize, ZeroizeOnDrop)]
pub struct PrivateKey {
    data: [u8; 32],
}
```
However, since I can't control the buffers my HTTP client creates while handling the response, I cannot zeroize them, and the secret will be "leaked" back into the allocator as-is.
A lot of what follows is broad strokes; I know that securing memory is not a trivial thing to do. The design I've come up with is to do the following when I need to generate digital signatures:
- Ship another Rust binary along with my application.
- Set that binary's global allocator to `zeroize_alloc`, so that all allocations in the child process are securely erased before the memory is returned to the system allocator.
- Create a pipe and implement an RPC protocol where I send data to be signed and receive signed data back.
- The child process fetches the secret from the secrets manager service, signs the requested data, and returns the signature back to the parent process.
(I know that there are many other things I should do, like `mlock`, but I'm going to pretend those don't exist for simplicity.)
There are obviously penalties to doing it this way, but it does basically work.
Even if the allocator API were stable, it still wouldn't solve my use case: unless my HTTP client supports passing in a custom allocator, I again have no control over the allocations that occur inside the client. I could set the global allocator in the original process to `zeroize_alloc`, but then I'd pay the penalty of zeroizing ALL memory on every deallocation.
So I guess what I'm asking is whether the allocator team has considered something like scoped or context-aware allocators, for example:
```rust
// Hypothetical API: allocations made inside the closure are zeroized on free.
let zeroizer = zeroize_alloc::ZeroizingAllocator(std::alloc::System);
let payload = generate_token();
let signed = zeroizer.with(|| {
    let client = MyClient::new();
    let resp = client.get("https://my-secrets-store.whatever/secrets/private-key");
    let secret = resp.body().text();
    do_signature(secret, payload)
});
```
I'm not sure whether other languages have a similar concept, but with such an API I could theoretically do exactly what I need. I imagine this would need some weird thread-local machinery, and that it probably wouldn't work in async Rust, where a future can migrate between threads mid-scope.
Has anything like this been discussed as part of allocator API stabilization?