Reasoning about Unsafe `Rust` code and ways to automate some of it

I've been exploring ways to reason about some of my embedded Rust implementations and came across a few genuinely helpful techniques.

From what I have learnt so far, Rust has three usable options here (the last two are a limited but focused set of formal-methods techniques) -

  • type-enriched or type-driven programming - we can already do a limited form of refinement typing with const_generics, and this is definitely something to look forward to in this space (see the sketch after this list).
  • property-based testing - Crux-mir is an awesome tool. It takes a while to set up, but once you get it working you can get quite creative with your testing.
  • automated reasoning tools - we still don't have full-fledged support for specifying pre-/post-conditions or invariants in unsafe Rust, but we're getting there with Prusti for safe Rust and with haybale, which lets us reason about code symbolically.
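To make the first bullet concrete, here's a minimal sketch of the limited, "check it once at the boundary" style of refinement typing that const generics already allow. BoundedUsize is a hypothetical name, not an existing crate API:

// A usize whose value is known (at construction time) to be < MAX.
// The bound travels with the type, so downstream code never re-checks it.
#[derive(Copy, Clone)]
pub struct BoundedUsize<const MAX: usize>(usize);

impl<const MAX: usize> BoundedUsize<MAX> {
    pub fn new(value: usize) -> Option<Self> {
        if value < MAX { Some(Self(value)) } else { None }
    }

    pub fn get(self) -> usize {
        self.0
    }
}

// The refinement is what justifies the unsafe, unchecked access below.
fn lookup(table: &[u8; 16], idx: BoundedUsize<16>) -> u8 {
    unsafe { *table.get_unchecked(idx.get()) }
}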

Here's a tiny post on this for comments or thoughts:

1 Like

My knee-jerk reaction is that you won't be able to automate away much of the reasoning that goes into writing unsafe code.

When it comes to FFI, which is where I write most of my unsafe code, it'll be almost impossible to automate away the reasoning. This is because a lot of the time you won't have access to the source code, so the tool has no way to derive pre/post conditions, or the programmer won't be able to express the reasoning in code (e.g. foo() mutates temporaries stored in statics, so only one call may be in flight at any time, and you must also call foo_init() exactly once before foo() can be used).
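To make that concrete, here's a minimal sketch of a safe wrapper around those hypothetical foo()/foo_init() externs. It only encodes the contract because a human already knows it; nothing here could have been derived from the binary. (The const Mutex initializer needs a reasonably recent Rust; older toolchains would reach for once_cell or similar.)

use std::sync::{Mutex, Once};

extern "C" {
    fn foo_init();
    fn foo();
}

static INIT: Once = Once::new();
static FOO_LOCK: Mutex<()> = Mutex::new(());

// Encodes the human-supplied contract: foo_init() runs exactly once before
// any foo() call, and foo() calls are serialized because foo() mutates
// temporaries stored in statics.
pub fn call_foo() {
    INIT.call_once(|| unsafe { foo_init() });
    let _guard = FOO_LOCK.lock().unwrap();
    unsafe { foo() };
}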

Writing unsafe correctly also requires a level of global analysis which I haven't seen from static analyzers before (but I'd love to hear of a counter-example!).

That said, this should be technically feasible for pure Rust code where the compiler can see everything. You could even use Miri's Stacked Borrows model in combination with a fuzzer to find most of your soundness issues.
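A minimal sketch of that workflow: a plain #[test] that sweeps many inputs through an unsafe fast path, then run with cargo +nightly miri test so the Stacked Borrows checker validates every execution. A real setup would swap the hand-rolled input sweep for a proper fuzzer or property-testing crate.

#[test]
fn unchecked_copy_matches_safe_copy() {
    for len in 0..64usize {
        let src: Vec<u8> = (0..len as u8).collect();
        let mut dst = Vec::with_capacity(len);
        unsafe {
            // Hypothetical unsafe fast path: copy without bounds checks.
            core::ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), len);
            dst.set_len(len);
        }
        assert_eq!(src, dst);
    }
}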

Also, in the same grain as the Anna Karenina principle:

All safe functions are alike; each unsafe function is unsafe in its own way.

3 Likes

This. unsafe is unsafe pretty much for this exact reason: it is hard or impossible to prove what you are writing is correct.

1 Like

unsafe is for cases the compiler can't prove as correct. It still has to be correct code though, or you'll introduce undefined behavior, and having external tools to help with that is very important.

That said, I don't think code like this will ever be UB-free in Rust:

cortex_m::register::msp::write(stack_pointer);
jump_vector();

The reason is that msp::write will replace the stack pointer of the currently running function, and Rust assumes it has full control over the stack. For example, the jump_vector call might load the vector from the stack.

The only way to correctly write code like this is via inline or external assembly.

This is true, and I think being able to ensure correctness at compile time would be an awesome goal to strive for.

My reservation is that it's hard for a tool to prove something correct when half the picture is inaccessible. For example, when you are calling into some external DLL which may not even be on the host machine when the program gets compiled. Or if an unsafe function is passed around as a function pointer, so you won't know which function is called (and how it interacts with the rest of the world) unless you execute the code.

In theory a sufficiently advanced tool could be made aware of functions like cortex_m::register::msp::write() or do global analysis looking at every unsafe block of code and reasoning about how they can all possibly interact. You could even develop a set of heuristics designed to detect possible unsound usages of code.

For example, accessing a static mut variable without synchronisation when concurrency may be involved could raise a flag (a bit like Go's race detector)... But even that would be quite error-prone, because the programmer might know data races are actually impossible given how the code behaves at runtime.
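As a concrete example of that last point, this is the kind of pattern such a heuristic would flag, even though the programmer knows it runs single-threaded, before interrupts or a second core come up (a hypothetical embedded init sequence):

static mut BOOT_COUNTER: u32 = 0;

// Sound only because of a scheduling argument no static analyzer can see:
// early_init() runs exactly once, before interrupts are enabled.
fn early_init() {
    unsafe {
        BOOT_COUNTER += 1;
    }
}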

It's this "knowing" that would be hard to implement using traditional static analysis methods or express as code. You'd need something which can learn, detect patterns, and reason about things in much the same way a human brain does.

1 Like

Some interesting points. I agree with the statement, and it rhymes :) - each unsafe function is unsafe in its own way.

To add my thoughts: from a programmer's point of view, what we are primarily interested in is the notion of design by contract, i.e. can we write some piece of code and make a claim about what it does, regardless of whether it is side-effectful?

Fundamentally, DbC relies on the precision and correctness of our specification (i.e. what I, as the programmer, want to have happen).

  • For unsafe FFI calls: I assume the following applies - 'you can't prove what you don't know'. However, we can treat a foreign function call like a black box and focus on the claims that can be made about the interface to the FFI call. One interesting way of doing this is by specifying strong invariants and constraining the number of possible paths in your impl. For example, I've been experimenting with a compiled nRF C library, generating Rust bindings for the on-board nRF52840's HW crypto accelerator and using the RefinedUsize type to specify an invariant (that should hold) for all possible return values of an FFI call. This way I can ensure that, in something like the snippet below, ret_val only accepts a range of error codes and nothing outside of it (see the sketch after this list). However, this would be much more useful if we could specify invariants as part of the function signature, and most use-cases may not be as straightforward as this example.

let ret_val = bl_cc310::nrf_cc310_bl_hash_sha256_init(&mut hash_ctx);

  • For mutable statics and concurrency - I recently learned of a way that could deal with this problem. The RustBelt project, which aims to provide formal proofs for Rust's unsafe foundations, uses concurrent separation logic to model concurrent memory accesses and prove that they can be safe. But this is still an academic exercise (unlike Stacked Borrows), i.e. none of it is usable at this point, let alone automatable.
  • Hardware - all properties stated and proved in code (whether by an automated prover or a proof assistant) rest on the assumption that the HW does exactly what the programmer's code asked of it and nothing else, i.e. the HW shouldn't do something like perform two writes when we asked for only one. So, when we call something like write_register, correct behavior is determined by how closely the HW adheres to its specification.
    • So, by contract design - jump_vector can't reference or load from the stack or any other memory address because RefinedUsize has constrained it to just one possible value.
    • A related question: I assumed cortex_m::register::msp::write(stack_pointer); uses inline asm via the llvm_asm macro - is my assumption wrong?
    • The impossibility of making unsafe safe - I would not use the term impossible, but I would most certainly agree that it is a difficult problem (though still decidable). Besides, formal methods have by now been used to prove and provide guarantees in a multitude of domains, e.g. CompCert, seL4, Ironsides DNS.
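A rough sketch of the ret_val constraint from the first bullet above. The extern signature mirrors what bindgen might emit, and the status codes are placeholders, not the real CC310 error values:

// Placeholder binding for the vendored C library.
extern "C" {
    fn nrf_cc310_bl_hash_sha256_init(ctx: *mut core::ffi::c_void) -> u32;
}

// The only return codes we are willing to accept from the accelerator.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Cc310Status {
    Success,
    BadContext,
    HwBusy,
}

pub fn sha256_init(ctx: *mut core::ffi::c_void) -> Cc310Status {
    let ret_val = unsafe { nrf_cc310_bl_hash_sha256_init(ctx) };
    // The invariant we would like to push into the signature itself:
    // anything outside the documented range is a contract violation,
    // not a value to propagate.
    match ret_val {
        0 => Cc310Status::Success,
        1 => Cc310Status::BadContext,
        2 => Cc310Status::HwBusy,
        other => panic!("CC310 returned an undocumented status code: {other}"),
    }
}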

Finally, an observation - I think what works for a large variety of use-cases is a combination of refinement typing and a built-in, native static verifier (one that takes verification conditions and feeds them to a constraint solver, e.g. SPARK/Ada or Frama-C) for problems of the first-order-logic kind. I believe languages that adopt these capabilities have an obvious advantage, especially when you're working on anything security-critical.

P.S. - corrected the original title of this post; it should have been "automate some of it".

1 Like

The thing is that both the stack pointer store and the jump_vector call must happen in the same inline assembly block. If there is any Rust code in between, or if they are in two inline assembly blocks, LLVM may store values on the stack before the inline assembly block and expect them to be unchanged when loading them again afterwards. As an example of what is allowed: cg_clif generates a function call for inline assembly blocks. The actual inline assembly then gets wrapped with a function prologue and epilogue that save and restore all necessary registers to/from the stack, and the resulting assembly is compiled using as into a separate object file. This is completely valid according to the inline assembly RFC; in fact it is the fallback method for when a codegen backend doesn't natively support inline assembly. It works because the inline assembly RFC mandates, among other things, that the stack pointer is the same when exiting an inline assembly block as it was when entering it.
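For reference, here is roughly what putting both operations in one block might look like with today's asm! macro. This is a sketch: the register constraints, the Thumb bit on the vector, and interrupt handling would all need checking against the real bootloader code.

use core::arch::asm;

// Set MSP and jump to the application's reset vector in a single asm block,
// so LLVM never gets a chance to spill to or reload from the stack in between.
unsafe fn jump_to_application(stack_pointer: u32, reset_vector: u32) -> ! {
    asm!(
        "msr msp, {sp}",   // replace the main stack pointer
        "bx {rv}",         // branch to the vector (Thumb bit assumed set)
        sp = in(reg) stack_pointer,
        rv = in(reg) reset_vector,
        options(noreturn),
    );
}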

2 Likes

So the compiler can end up optimizing across the boundary if we mix inline assembly and regular Rust code. I didn't realize that, as my impl worked fine; given the above explanation, I would've expected it to fail at runtime somehow.

But this makes sense. I think I'll combine both operations into a single inline assembly block and cross-check the generated assembly for the two impls to look for differences (maybe it's a nightly thing).

Thank you for the detailed explanation.

There will probably not be any differences, but there might be in a future version or with different compilation flags. For example, in Redox OS something similar was done, and it caused debug-mode builds of the kernel to crash while release-mode builds worked just fine.

Yeah, I believe I ran into a similar issue when I was starting out with Rust.
