Awesome, so it's been on the mind of some people before - good to know.
Great approach, quite similar to Deno, by the looks of it. But expecting people to switch to a more security-oriented version of the standard library just because it's safer most likely won't have the needed effect. Unless it gets adopted by the language itself, it will simply remain "yet another security thing". Not to mention that merely having a more secure standard library doesn't forbid anyone from introducing their own integrations with the underlying OS for their own purposes.
Not necessarily - all the checks can be done directly at compile-time.
First, we'd need to clarify and agree on the kinds of permissions we care about. Deno's approach might be a good start, but in Rust's case at least the addition of an "unsafe" permission is also necessary, for obvious reasons. File system access and permission to access the web are essential as well.
Once that is done, the work would pass to the people responsible for the standard library, along with the maintainers of the most popular independent (async) alternatives - such as tokio, async-std, smol, and mio.
Any function that can be used to read from the file system would get either a comment or a procedural macro of this kind: `#[requires(fs_read)]`. Any function that can access the web: `#[requires(web)]`, and so on. These would get parsed at compile time and checked against the permissions specified in a `.toml` file. If a permission isn't there, the whole body of the function can be replaced by a `panic!` macro with the details about the function call - and who called it in the first place.
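As a sketch, the grants could live in a dedicated section of the project's manifest. Everything below is an assumption - neither this section nor these keys exist in Cargo today:

```toml
# Hypothetical [permissions] section - names and semantics are made up
# for illustration, nothing like this exists in Cargo yet.
[permissions]
fs_read = true    # #[requires(fs_read)] functions keep their bodies
fs_write = false  # #[requires(fs_write)] bodies get replaced with panic!
web = false       # same for #[requires(web)]
unsafe = false    # and for any unsafe block or function
```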
`unsafe` code stands out on its own - no annotations are needed there. Any `unsafe` call or function can thus be either removed or replaced by some version of `panic!` as well.
With that out of the way, there's no room left for anyone to exploit either the functionality of the standard library or its counterparts. You can steal all the passwords you want, but if neither the standard library nor tokio / async-std / smol / mio is enabled for you to transmit them, you won't get anything from anyone - and the user will be alerted that something went wrong the moment they try to compile and run their program, still believing that your `id-hashmap` does, indeed, provide a better alternative to the standard library's `HashMap<Uuid, T>`.
Any code that is not annotated would implicitly be given `#[requires(none)]` - that is, it's a pure function that can take in some data and return some back, nothing else. If you're writing a library and calling some `#[requires(fs_write)]` function from the standard library inside such a function, you would get a warning at first and a compilation error afterwards. This would, in time, force every crate author to explicitly specify which permissions their library needs - and with static analysis, any crate could be trivially checked for annotations to make sure the author did their job properly.
In the end, you have a security system enforced directly by the compiler - and specified explicitly by the end user of the libraries. No 2FA, no code reviews, no changes to crates.io needed. If you want to shoot yourself in the foot by allowing anything and everything to compile and run - that's your problem. The ecosystem gave you the best system possible; if you want to be dumb about it, it's your choice.
This is the kind of vision I would have in mind. And I don't see that many downsides to it, aside from a somewhat tedious annotation process along with a few additional compiler checks. Heck, I know I'd be the first to use it straight away - but do let me know if I'm the only one here.