This is a follow-up from Rust beginner notes & questions
The idea is basically to add an effect system to Rust, for handling things that people may want to track the way `unsafe` is tracked in some circumstances; here, `panic`s and transitive `unsafe` (that is, `unsafe` hidden behind an `unsafe` block), with the aim of:

- For `panic`s, having a system that provably cannot `panic` (thus ensuring that the system cannot have unexpected code paths, basically like a method that cannot throw in Java)
- For transitive `unsafe`, reducing the security exposure to dependencies, as an ill-written `unsafe` block may easily turn out to be a serious vulnerability. It also helps in cases like Tock OS, where ensuring the safety of capsules is essential for security (even though the absence of any rustc bug allowing unsafe behavior to be triggered without the `unsafe` keyword is debatable; let's not debate this right here, that's just an example use case)
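For context, the closest thing Rust offers today is the `unsafe_code` lint, which can reject `unsafe` in the current crate but says nothing about dependencies; this small sketch (the `double` function is just an invented example) illustrates both the mechanism and its limit:

```rust
// Today's closest tool: `forbid(unsafe_code)` rejects any `unsafe`
// block or function in *this* crate at compile time.
#![forbid(unsafe_code)]

// Example function: writing `unsafe { ... }` in its body would now be
// a hard compile error. But `Vec`, `Box`, etc. still use `unsafe`
// internally in their own crates, and this lint cannot see that;
// hence the desire for a *transitive* guarantee.
fn double(x: i32) -> i32 {
    x.checked_mul(2).unwrap_or(i32::MAX)
}

fn main() {
    assert_eq!(double(21), 42);
}
```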
Relevant posts from the main thread:
In addition, I have had a small discussion with @bgeron with the following relevant content:
As for panics, well… I prefer Rust to Haskell, but panics and especially their un-avoidability are still a thing that bothers me profoundly in Rust, and I'd love it if we could get what I have heard is possible with effect systems à la F* / Ada (which I haven't used yet): a way to limit `panic!` usage to some parts of the code, much like `unsafe`. (And ideally it'd be a fully-generic effect system, so that it could also handle `insecure` and similar effects that are currently handled with hacks.)
> I prefer Rust to Haskell, but panics and especially their un-avoidability are still a thing that bothers me profoundly in Rust

Do you feel it's better in Haskell? In my view, `undefined` or missing branches from a `case..of` are essentially the same as panics.

> effect systems
I’m actually currently doing a PhD that involves effect systems. My impression is that an effect system cannot be perfect; there are always either problematic things it doesn’t catch, or good things it doesn’t allow.
Then there is the theorem proving approach (Coq/Agda/Idris/F*): it is sometimes very very hard to convince the computer that something is okay; this is the kind of thing that requires you to give pages and pages of proof for small snippets of code. (Though in F*, you only have to give these proofs sometimes instead of always.)
In my understanding, Ada isn’t that special; essentially it just has language support for putting assertions everywhere and maybe it makes them more efficient. But the assertions still happen at runtime, so your program may crash.
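The analogy between Haskell's `undefined` and Rust's panics can be made concrete; a minimal sketch (the values are invented for illustration):

```rust
fn main() {
    let v: Option<i32> = None;

    // `v.unwrap()` here would panic at runtime with
    // "called `Option::unwrap()` on a `None` value" -- the Rust
    // analogue of a Haskell partial function hitting `undefined`.
    // The non-panicking alternative makes the fallback explicit:
    assert_eq!(v.unwrap_or(-1), -1);

    // Unlike a partial `case..of`, a Rust `match` must be total:
    // omitting the `None` arm below would not compile.
    let described = match v {
        Some(n) => format!("got {}", n),
        None => String::from("nothing"),
    };
    assert_eq!(described, "nothing");
}
```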
> [snip] I guess what I want is more knowledge over exactly what unsafe parts are being used (and actually a `transitively_unsafe` would make sense to me, so that I could enforce that a function transitively never uses `unsafe`, even inside an `unsafe` block)

> Do you feel it's better in Haskell? In my view, `undefined` or missing branches from a `case..of` are essentially the same as panics.

Honestly, I don't think Haskell (or any other language that I have practiced, apart from – ironically – C, and Java, where IIRC the exceptions must be described) gets a solution to this problem better than Rust.

But that's still something that bothers me with Rust, especially given it's trying to get a foothold in the embedded world, where the absence of exceptions is highly important.

> I'm actually currently doing a PhD that involves effect systems. My impression is that an effect system cannot be perfect; there are always either problematic things it doesn't catch, or good things it doesn't allow.

Oh? That's a topic into which I planned to get more, but still haven't found time for it. So I guess for Rust the effect system had better allow things it maybe shouldn't… but that's still better than only catching `unsafe`?!

> Then there is the theorem proving approach […]

Well, that'd be ideal for proving snippets of unsafe code, but I don't think Rust is ready yet for integrating a theorem prover into its compiler, and that will live for quite some time in the tooling. Ideally it'd end up with a way to transitively allow only proven unsafe code and not normal unsafe code, but… well…

> In my understanding, Ada isn't that special […]

I was maybe thinking of Ada/SPARK? Or maybe that's just a wrong recollection due to the name similarity with Agda, I can't remember, sorry!
> actually a `transitively_unsafe` would make sense to me, so that I could enforce that a function transitively never uses `unsafe`, even inside an `unsafe` block

Do you mean "even across function calls"?

I think this was discussed briefly somewhere: it could be very useful in some situations, but I don't expect it to become a common thing. If you want to do this across crate boundaries, then suddenly the API of a crate must include which functions are `transitively_safe`, and that's a promise that those functions will also stay `transitively_safe` in future versions. I think a lot of crate authors would hesitate to make that promise for the future. That said, it might well spawn a whole ecosystem around `transitively_safe` crates.

> But that's still something that bothers me with Rust, especially given it's trying to get a foothold in the embedded world, where the absence of exceptions is highly important.

I expect that something like Coverity is promising for embedded Rust. It won't catch all exceptions that will actually happen in real life, but perhaps something like 90%-95%? Embedded software seems simple enough for this to work well. On the other hand, there's a lot of potential for panics in Rust: even something like `x + 1` can cause a panic when the result doesn't fit in the same type.

C++ has fewer exceptions, but in C++ an integer overflow would typically wrap around (if you use the right compiler options). Perhaps the system doesn't crash, but you might well get bugs further down the program. I don't know which option is better.
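The overflow point above can be illustrated with a small runnable example: in debug builds a plain `x + 1` panics on overflow ("attempt to add with overflow"), while the standard library offers explicit non-panicking alternatives:

```rust
fn main() {
    let x: u8 = 255; // u8::MAX

    // `checked_add` returns None instead of panicking on overflow.
    assert_eq!(x.checked_add(1), None);

    // `wrapping_add` wraps around, like C++ unsigned overflow.
    assert_eq!(x.wrapping_add(1), 0);

    // `saturating_add` clamps at the maximum value.
    assert_eq!(x.saturating_add(1), 255);

    // A plain `x + 1` here would panic in a debug build,
    // and silently wrap in a release build.
}
```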
> I was maybe thinking of Ada/SPARK?

Cool! I wasn't aware of Ada/SPARK. It seems to use essentially the same approach as F*, where the computer does most proofs and you do the remaining proofs yourself.
> > actually a `transitively_unsafe` would make sense to me, so that I could enforce that a function transitively never uses `unsafe`, even inside an `unsafe` block
>
> Do you mean "even across function calls"?
>
> I think this was discussed briefly somewhere: it could be very useful in some situations, but I don't expect it to become a common thing. If you want to do this across crate boundaries, then suddenly the API of a crate must include which functions are `transitively_safe`, and that's a promise that those functions will also stay `transitively_safe` in future versions. I think a lot of crate authors would hesitate to make that promise for the future. That said, it might well spawn a whole ecosystem around `transitively_safe` crates.

Indeed, I'm thinking of this even across function calls and crates. It's a relatively restricted use case indeed. For instance, IIRC Tock OS stated last year (I don't know about now) that it could run untrusted programs that didn't use `unsafe` as part of the OS, by wrapping them in a capsule. Having a `transitive_unsafe` would make this sort of feature less braindead (well, there are still Rust compiler bugs, and up until now rustc hasn't appeared to deem unsafe-behavior-in-safe-code soundness issues to be security vulnerabilities).

Actually, upon thinking more about it, `transitively_safe` would not make much sense: as soon as there's a `Box` there's going to be some transitive unsafety… so maybe something based on "transitively safe apart from code in crates A, B and C"? That's even harder to encode at the crate boundary, though.

> > But that's still something that bothers me with Rust, especially given it's trying to get a foothold in the embedded world, where the absence of exceptions is highly important.
>
> I expect that something like Coverity is promising for embedded Rust. It won't catch all exceptions that will actually happen in real life, but perhaps something like 90%-95%? Embedded software seems simple enough for this to work well. On the other hand, there's a lot of potential for panics in Rust: even something like `x + 1` can cause a panic when the result doesn't fit in the same type.

Yeah, that's one of the points I was thinking of when saying that panics are deeply integrated with the language, and it's really hard to actually be 100% sure there can be no panics.

> C++ has fewer exceptions, but in C++ an integer overflow would typically wrap around (if you use the right compiler options). Perhaps the system doesn't crash, but you might well get bugs further down the program. I don't know which option is better.

I don't think C++ gets the exception story much better than Rust. Java is the only language I practice that gets it almost right (IIRC), by forcing exception specification in the prototype of methods.

C also gets it right in a way: it just doesn't have exceptions. That's a good way to make sure there can be no unexpected exceptions. Well, the other issues of the language do balance out this advantage. But I think that's part of the reason why C is almost the only language used in embedded development! (That, the possibility to write code without any dynamic allocation, and Frama-C, if I had to guess.)
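For comparison: Rust already tracks *recoverable* errors in function signatures via `Result`, much like a Java `throws` clause; it is only panics that escape the type system entirely. A minimal sketch (the `parse` wrapper is an invented example):

```rust
use std::num::ParseIntError;

// The possibility of failure is visible in the signature, like a
// checked exception in Java...
fn parse(s: &str) -> Result<i32, ParseIntError> {
    s.parse::<i32>()
}

fn main() {
    assert_eq!(parse("42"), Ok(42));
    assert!(parse("not a number").is_err());

    // ...whereas a panic (e.g. `parse("x").unwrap()`) appears nowhere
    // in any signature, which is exactly the gap an effect system
    // would close.
}
```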
So, what do you think about introducing an effect system for Rust? I'm sure there is already literature somewhere, but sometimes it's maybe better to start a debate afresh, as the things one wants may change over time? (Between `insecure`, `no_panic` and `no_transitive_unsafe_except_from_crates(A, B, C)`, I guess there's likely enough to think about a full-fledged effect system, without over-engineering everything right now.)
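To make the question concrete, here is a purely hypothetical sketch of what such effect annotations might look like. None of these attributes exist in rustc today; the names (`no_panic`, `no_transitive_unsafe`, the crate allow-list) are invented for illustration only.

```rust
// HYPOTHETICAL syntax -- nothing below is accepted by rustc today.

// Effect: this function provably cannot panic; the compiler would
// reject any reachable `panic!`, `unwrap`, out-of-bounds indexing,
// or overflowing arithmetic in its transitive call graph.
#[no_panic]
fn checked_index(slice: &[u8], i: usize) -> Option<u8> {
    slice.get(i).copied()
}

// Effect: no `unsafe` anywhere in the call graph, except inside the
// explicitly trusted crates listed (needed because e.g. `Box` and
// `Vec` use `unsafe` internally).
#[no_transitive_unsafe(except = ["core", "alloc"])]
fn untrusted_capsule_entry() {
    // Calling a function whose transitive closure contains other
    // `unsafe` would be a compile-time error here.
}
```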