There are a few useful things you can do to help against certain attacks:
Heartbleed-style memory disclosure attacks are often prevented by Rust’s ordinary safety guarantees, but an additional countermeasure is to use `libc::mprotect` to mark the page holding the key as unreadable, and only flip it back to readable inside your accessor method. You may also want guard pages on either side, so that a blind linear memory scan segfaults on those pages even while the key page itself is readable during a crypto operation. An attacker with full code execution can work around all of this, but it helps against some less powerful attacks. I wouldn’t be as worried about those in Rust, since memory safety already helps you a lot here, but it could still be worthwhile.
A Drop impl that’s guaranteed to zero your data is also useful, and is what pixel seems to have been alluding to before. This hopefully prevents the data from being read after it’s no longer needed. One could imagine an application that only needs a private key to initiate a session, and never afterwards. An exploit in the session would then only expose the session key in memory, so later uses of the private key may not be entirely toast.
Of course, all of this is skating on a knife edge, trying to mitigate damage in the event everything has already gone horribly wrong. Using an HSM (or maybe a privileged, out-of-process software simulation of one) provides much better guarantees, at the cost of requiring special hardware, which may have all sorts of problems of its own. I also don’t believe there’s a good way to use an HSM from Rust at the moment. PKCS#11 is the normal API to use, sad and bad as it is.