Best practices - fault-injection resistant Rust code


Do we have a best-practices guide for writing fault-injection resistant Rust code? I'm working on a couple of security-critical libraries and was wondering if there are thoughts on this topic.

For some context: Say, we have a simple check such as the one below (like an if {condition} or a match check).

match computed_hash.as_slice().ne(hash_value.unwrap()) {
    true => panic!("{} integrity check failed...", prop),
    false => info!("{} integrity check passed...", prop),
}
Can we rewrite this in some way such that the compiler generates code making it harder to glitch i.e. inject faults and skip the check?

I'm not sure but I would start by getting rid of that unwrap().
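To make that concrete, here's a hedged sketch of how the check could handle the missing-hash case explicitly instead of calling unwrap(), so an absent reference hash fails closed rather than panicking in an unintended place (the function name verify_integrity and the use of println! in place of the log crate's info! are illustrative, not from the original code):

```rust
// Illustrative sketch: treat "no reference hash" as a failure in its own
// right instead of unwrap()-ing, so every non-passing path fails closed.
fn verify_integrity(prop: &str, computed_hash: &[u8], hash_value: Option<&[u8]>) {
    match hash_value {
        None => panic!("{} integrity check failed: no reference hash", prop),
        Some(expected) if computed_hash != expected => {
            panic!("{} integrity check failed...", prop)
        }
        // Stand-in for info!(...) to keep the sketch dependency-free.
        Some(_) => println!("{} integrity check passed...", prop),
    }
}
```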

I'd look into techniques people use when running their code in space because this sounds very similar to the random bit flips you might get from cosmic rays. Hardware glitches are largely language-agnostic, so solutions for C code would be applicable to Rust, too.

I doubt any compiler will give you this sort of checking automatically, though.

As far as your compiler is concerned, a program is designed to run on an abstract machine which is assumed to execute instructions perfectly[1]. Another concern is that the optimiser will probably want to strip out code it thinks is nonsensical ("they just set x to be 5, so I can improve performance by removing the assert!(x == 5)").
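For that specific "they just set x to 5" example, the standard library does offer std::hint::black_box as a best-effort optimization barrier. A minimal sketch (hedged: the documentation is explicit that black_box is a hint with no guarantees, so it is not a security mechanism on its own):

```rust
use std::hint::black_box;

// black_box asks the compiler to treat the value as opaque, so the
// "redundant" assert below is less likely to be folded away. This is a
// hint, not a guarantee -- the std docs say so explicitly.
fn checked_value() -> u32 {
    let x = 5u32;
    // Without black_box, the optimizer can prove x == 5 and delete this.
    assert!(black_box(x) == 5);
    x
}
```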

It's off-topic, but this question reminds me of a video I watched a while back - How I hacked a hardware crypto wallet and recovered $2 million - YouTube. Someone had forgotten the password to their crypto wallet, and the hacker was playing with the wallet's power input and timing to read the unencrypted password from memory.

  1. This is also why Undefined Behaviour is the boogeyman in languages like Rust, C, and C++. In order to get any meaningful work done, the compiler has to assume your code is well-formed and that it will be executed in a sane environment... UB breaks that first assumption, while hardware glitches break the second.


That's a good point. So far, I've leveraged some of my previous work that's based on this blog. Things like

  • adding extra random delays to reduce temporal determinism
  • adding a sequence of redundant but dependent checks
  • etc.
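A minimal sketch of the "redundant but dependent checks" idea, assuming volatile reads as the mechanism that keeps the compiler from collapsing the copies (the function name, the magic pass value, and the triple structure are all illustrative, not taken from the blog):

```rust
use std::ptr;

// Hedged sketch: the flag is re-read through a volatile pointer for each
// comparison, so the compiler cannot prove the three loads are equal and
// collapse them into a single check. A non-trivial "pass" constant means a
// zeroed or faulted flag never reads as success.
fn triple_check(flag: &u32) -> bool {
    let p = flag as *const u32;
    // SAFETY: p comes from a valid, live shared reference.
    let a = unsafe { ptr::read_volatile(p) };
    let b = unsafe { ptr::read_volatile(p) };
    let c = unsafe { ptr::read_volatile(p) };
    // All three independently-loaded copies must agree on the pass value.
    a == 0x5AA5_5AA5 && b == 0x5AA5_5AA5 && c == 0x5AA5_5AA5
}
```

Whether the emitted comparisons actually survive still needs checking in the disassembly, per the advice above.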

Yeah, this type of attack has gained traction as the tools and technical know-how around this subject have improved. Here's another example that targets a security subsystem within the ARMv8-M architecture.

Although it's not an exact science, it does deliver deterministic results (if the attack actually works).

If you are adding redundant but dependent checks, I would advise looking at the assembly output; compilers are very clever about optimising such things away (I'm often surprised how clever they are).

You can usually defeat this cleverness by compiling without optimisations, but that of course will greatly hurt performance -- although the large amount of extra code and extra time would, I imagine, give people more opportunities for fault-injection.


Yeah, that is true. In fact, the Rust compiler is freakishly clever at times.

In this case, the only workable solution I know of is to bypass optimization with inline assembly.

As we only need to protect a select set of boundary checks (like the if signature valid, then continue or stop type of checks), and we're only interested in instructions that can be faulted with repeatable success, the code bloat (in size) we end up with is not actually an issue, at least in practice. But it is definitely a trade-off in execution time, which can prove challenging if we have hard timing requirements.
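As a hedged sketch of what that inline-assembly bypass can look like: doing the final comparison in asm! pins down the exact instruction sequence, so the optimizer can neither merge nor delete it. The function name asm_eq is illustrative, and the code shows x86-64 and AArch64 variants just to make the idea concrete; on an actual ARMv8-M target you would write the Thumb-2 equivalent (cmp/bne), likely with redundant compares:

```rust
use std::arch::asm;

// Illustrative sketch: compare two words via inline asm so the comparison
// survives as real, pinned instructions rather than optimizer-visible IR.
fn asm_eq(a: u64, b: u64) -> bool {
    let r: u64;
    #[cfg(target_arch = "x86_64")]
    unsafe {
        asm!(
            "xor {r}, {r}", // r = 0: assume "not equal"
            "cmp {a}, {b}", // set flags from a - b
            "jne 2f",       // skip the set if they differ
            "mov {r}, 1",   // r = 1 only on the equal path
            "2:",
            a = in(reg) a,
            b = in(reg) b,
            r = out(reg) r,
        );
    }
    #[cfg(target_arch = "aarch64")]
    unsafe {
        asm!(
            "cmp {a}, {b}",  // set flags from a - b
            "cset {r}, eq",  // r = 1 if equal, else 0
            a = in(reg) a,
            b = in(reg) b,
            r = out(reg) r,
        );
    }
    r == 1
}
```

Note that a fault-hardened version would want more than this (e.g. a non-trivial "true" encoding rather than 1, and duplicated compares); this only shows the pinning mechanism.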

I'm afraid that Rust can't give you any guarantees here. Rust+LLVM assume the hardware does what it's supposed to do, and in your case this fundamental assumption doesn't hold.

Perhaps you could solve it at a higher level, e.g. run the same code on two separate computers, or run it multiple times in a way that requires it to be fully deterministic. But at the level of code within a function, you'll be fighting the optimizer.
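The "run it multiple times" idea can be sketched in-process, too (a hedged illustration only, with illustrative names; real redundancy would use separate hardware, as above): evaluate the same deterministic check twice through an optimization barrier and accept the result only if both runs agree.

```rust
use std::hint::black_box;

// Sketch of coarse temporal redundancy: run a deterministic, pure check
// twice (black_box discourages the compiler from merging the two calls)
// and refuse to produce an answer if the two runs disagree.
fn redundant_check<T: Eq>(check: impl Fn() -> T) -> Option<T> {
    let first = black_box(check());
    let second = black_box(check());
    if first == second { Some(first) } else { None }
}
```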


Hey :wave:,

My PhD dissertation happens to be on the topic of fault injection (simulation, and also fault model inference), so I know the subject somewhat :grin: .

I'd say you can't write fault-injection resistant code (Rust, or anything else) in a vacuum. The state of the art is that multiple fault injections are possible, with various fault models that sometimes have non-local and unpredictable consequences (e.g. you can reach "phantom instructions" on variable-width instruction sets because some fault models make the target offset its interpretation of instructions by a fraction of one instruction...). You need a hardware target that your code will run on (and be compiled for), a fault model (what happens when I throw an EM pulse at this position on the component? Instruction skip? Instruction change? Data change?) and an attacker model (how likely is the attacker to be able to perform 1 fault on the component? 2 faults? More? How much time, equipment and expertise does the attacker have?). Blindly double-checking every conditional is not going to work if the fault model allows the attacker to NOP 10-40 instructions.

With these targets set, the Rust compiler will be your first enemy in writing fault-resistant code. I'm personally convinced that if we are to be serious about fault-injection countermeasures at scale, we need the capability built into the compiler. An approach along these lines is described in the 3rd part of the blog post you linked, btw.

If I were to implement something like this, I would add some countermeasure_xxx lang items to the compiler (e.g. countermeasure_double_check), then implement them as custom LLVM attributes that can drive code generation to add the countermeasure (and inhibit any optimization that would annihilate the countermeasure, which is probably the hard part, I guess). You'd also need to make sure the Rust compiler doesn't already perform optimizations before lowering to LLVM, and disable them there too.

In the absence of such a tool, I guess your best bet is directly writing assembly for the parts of the code that need to be fault-resistant. You need to do so to be side-channel resistant anyway, due to similar reasons (a compiler extension to add "branch_balanced" attributes would be similarly interesting).


I'm evaluating a single hardware target for now - ARMv8-M - and assume the attacker is free to do whatever they want (e.g. think of a microcontroller deployed in the field with zero oversight). I do realize this is quite a stretch (i.e. not a very realistic threat model).

But I'm interested in evaluating what's possible, i.e. what the viable defenses are given the above attack model. For now, it seems that in the absence of built-in compiler support, inline assembly is the only workable option (although still non-trivial).
