Is it possible to limit file system access for dependencies?

If a project uses crate dependencies, and those in turn pull in transitive dependencies, is it possible for the code to limit file-system access for all of those dependencies?
Is it possible to determine with 100% certainty whether any of the dependencies reads from or writes to the file system?
Is std::fs the only way to access the file system?
Is it possible to check at compile time whether any code is using it?
Is it theoretically possible to make something similar to #![no_std]? If a crate had #![no_fs], the compiler would guarantee that the crate has no access to files.

No. For example, someone might use C FFI to directly call fopen. You also can't allow inline assembly, as that could be used to manually emit the syscalls that fopen uses. You should also make sure they can't access the functionality that can mark memory as executable, as otherwise they could hardcode the binary executable code that corresponds to the assembly in fopen as a byte array and jump directly into that memory.


Thank you. I'm sure there has been a lot of discussion about this, and people have come up with a lot of different techniques.
Most crates don't need any of this, right? Pure Rust, safe code...
And if a crate really does need it, could that be explicitly declared and checked by the compiler, similar to "unsafe"?
Probably all of these techniques already require "unsafe"?
So if a crate has no "unsafe" and no std::fs, that could be a good starting point for trusting that it doesn't touch the file system.
I know there are a lot of different ways to do it, but with time we could find them all. And then a crate could have an attribute like "no_fs" that we can trust.

Rust is not a sandbox language. On a technical level there is absolutely no boundary between your code and the code in your dependencies, and Rust has never tried to have one.

The separation of code into crates and modules, and in Cargo between your code and dependencies, is just an illusion for the user's convenience. It doesn't exist in the final code that is generated.

There are ways to bypass crate boundaries and unsafe, e.g. with #[no_mangle] or by linking in unsafe code. There could be many more, because there was never any attempt to avoid creating loopholes. All of Rust's safety protections are against accidental errors by the programmer, and to some degree against bad inputs to the program. But Rust trusts all the code you give it, and there are no protections against intentional misuse by the programmer.
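As a hedged sketch of one such loophole: #[no_mangle] pins a function to a fixed symbol name, and an extern block elsewhere can then reach it purely through the linker, with no source-level dependency between the two sides. The two modules below stand in for two separate crates; the module and function names are made up for illustration.

```rust
mod library_internals {
    // #[no_mangle] exports this under a predictable symbol name.
    #[no_mangle]
    pub extern "C" fn internal_answer() -> i32 {
        42
    }
}

mod other_crate {
    extern "C" {
        // No `use` of library_internals anywhere: this resolves by
        // symbol name at link time, crossing the crate boundary.
        pub fn internal_answer() -> i32;
    }

    pub fn sneak() -> i32 {
        unsafe { internal_answer() }
    }
}

fn main() {
    println!("reached across the boundary: {}", other_crate::sneak());
}
```

This particular sketch still needs unsafe at the call site; known variants of the trick can cause trouble without any unsafe in the offending crate at all, which is why "no unsafe keyword" is not a usable trust boundary.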

Having said that, if you're worried about dependencies doing something suspicious, cargo-crev is an attempt to check the safety and reliability of Rust dependencies via code reviews.


Also, if you're worried it might do arbitrary damage to your system, just put your stuff in a Docker container and mount a limited subset of your filesystem through a volume.


I built a small example for this idea of "unsafe features".
We are discussing it here:
It could make all unsafe code in third-party libraries disabled by default and opt-in at compile time. Access to files and the network also falls under this broader meaning of "unsafe".

Another partial option is to open privileged files as root and then drop privileges by switching user/group with setgid(gid)/setuid(uid). That's what certain Unix daemons do.
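A minimal sketch of that pattern (Unix-only): open the privileged resource first, then drop the group before the user, since after setuid succeeds the process may no longer be allowed to change its gid. So the demo also runs without root, it "drops" to the current uid/gid, which is always permitted; a real daemon would switch to a dedicated unprivileged account instead.

```rust
use std::fs::File;
use std::io;
use std::os::raw::c_int;

extern "C" {
    fn getuid() -> u32;
    fn getgid() -> u32;
    fn setgid(gid: u32) -> c_int;
    fn setuid(uid: u32) -> c_int;
}

fn open_then_drop(path: &str, uid: u32, gid: u32) -> io::Result<File> {
    // 1. Open while still (potentially) privileged.
    let f = File::open(path)?;
    // 2. Permanently drop privileges: group first, then user.
    unsafe {
        if setgid(gid) != 0 || setuid(uid) != 0 {
            return Err(io::Error::last_os_error());
        }
    }
    Ok(f)
}

fn main() {
    let (uid, gid) = unsafe { (getuid(), getgid()) };
    let f = open_then_drop("/etc/passwd", uid, gid).expect("open + drop failed");
    // The handle opened before the drop stays usable afterwards.
    println!("still readable: {}", f.metadata().unwrap().is_file());
}
```

Note that this protects against damage after the drop point, not against anything the code does before it.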


It sounds like you are trying to do for Rust what Mettler, Wagner, and Close did for Java in Joe-E: A Security-Oriented Subset of Java. They addressed the library problem with a process they called taming.

The Joe-E verifier allows Joe-E programs to mention only the classes, constructors, methods, and fields in this tamed subset. If the source code mentions anything outside of this subset, the Joe-E verifier flags it as an error.

Taming has also been done for JavaScript as part of Google's Caja project.


I'm not quite sure what you want. Are you looking for a way to sandbox the crates you depend on?

Maybe take a look at tauri. I don't think tauri itself provides sandboxing of your dependencies (though it might), but you might at least find some high-quality crates in there, or be able to ask someone working on the tauri project for help.

Warning. I actually just found the tauri project, so I don't really know what they are about or how reliable they are myself, but they look legit.

Another (somewhat) practical option is to compile all untrusted code to WebAssembly and run it using a runtime like wasmtime as a library.

Direct system access can be exposed to the untrusted code via callbacks, like in this wasmtime example.


Another option on Linux would be to use seccomp, although that is process-wide, so you could only disallow system calls that you don't use yourself.
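A hedged sketch of the simplest form (Linux-only): seccomp "strict" mode via prctl. After prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) the kernel allows only read, write, _exit, and sigreturn; any other syscall kills the process with SIGKILL. The lockdown happens in a forked child so the parent survives to report what happened. (Real uses would install a BPF filter instead of strict mode, which is far more flexible.)

```rust
use std::os::raw::{c_int, c_ulong};

// Constants from <linux/prctl.h> and <linux/seccomp.h>.
const PR_SET_SECCOMP: c_int = 22;
const SECCOMP_MODE_STRICT: c_ulong = 1;

extern "C" {
    fn prctl(option: c_int, a2: c_ulong, a3: c_ulong, a4: c_ulong, a5: c_ulong) -> c_int;
    fn fork() -> i32;
    fn waitpid(pid: i32, status: *mut c_int, options: c_int) -> i32;
    fn _exit(code: c_int) -> !;
}

/// Fork a child that locks itself down, then returns the signal bits
/// from its wait status (9 = SIGKILL if seccomp enforced the lockdown).
fn run_sandboxed_child() -> i32 {
    let pid = unsafe { fork() };
    if pid == 0 {
        // Child: enter strict mode, then attempt a now-forbidden syscall.
        unsafe { prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) };
        let _ = std::fs::File::open("/etc/passwd"); // openat -> SIGKILL
        unsafe { _exit(0) } // only reached if seccomp was unavailable
    }
    let mut status: c_int = 0;
    unsafe { waitpid(pid, &mut status, 0) };
    // If the child was terminated by a signal, the low 7 bits of the
    // wait status hold the signal number.
    status & 0x7f
}

fn main() {
    println!("child terminated by signal {}", run_sandboxed_child());
}
```

On a kernel with seccomp enabled this reports signal 9; it illustrates the process-wide nature of the mechanism, since the restriction had to go into a dedicated child process.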

