Can the global_allocator attribute become a security vulnerability?

From what I have just learned, the `global_allocator` attribute has some magic:

  • it can be used anywhere to define a global allocator, even deep inside a private unused module of a dependency of a dependency
  • the defined global allocator will be called implicitly, i.e., your crate will be affected if it has a dependency which defines the global allocator, even if your crate doesn’t call any function from that dependency
  • as far as I know, there is no way to check what global allocator I’m using

So, I can imagine an attacker publishing a useful crate with a `global_allocator` definition hidden somewhere, doing something malicious in the implementation of `alloc` and `dealloc`. A Rust user finds the crate, checks the documentation and the source code of the public functions, feels good, calls only the absolutely safe functions — and the attacker still succeeds in doing the bad thing.

Maybe a lint to deny `#[global_allocator]` in dependency crates could be added to rustc or clippy.

I think this is a valid concern, but it's unclear what the best solution would be.

I think your suggested lint could be helpful. Alternatively, the lint could be simplified to warn on any use of the `#[global_allocator]` attribute in crates with crate-type `lib` and/or `rlib` (but not `staticlib` or `cdylib`).

For now, if you want to be sure about the global allocator in your program, you can register your own `#[global_allocator]`, which should fail the build if there's another `#[global_allocator]` anywhere in the dependencies.

You can either create a custom type implementing `GlobalAlloc` or, more likely, just use the default `System` allocator:

```rust
#[global_allocator]
static A: std::alloc::System = std::alloc::System;

fn main() {
    // guaranteed to use the system allocator
    let _ = Box::new(42);
}
```
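For the custom-type route, here's a minimal sketch of a type implementing `GlobalAlloc` (the wrapper name and counter are made up for illustration). It simply delegates to `System` while counting allocations — which also demonstrates the point from the original question: once registered, the allocator is invoked implicitly by every heap allocation, without user code ever calling it directly.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical wrapper around the system allocator that counts allocations.
struct CountingAlloc;

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static A: CountingAlloc = CountingAlloc;

fn main() {
    let _b = Box::new(42);
    // The counter increased even though `CountingAlloc` was never called
    // directly — exactly how a hidden allocator would run unnoticed.
    assert!(ALLOCATIONS.load(Ordering::Relaxed) >= 1);
}
```

A malicious allocator would look just like this, except that the "extra work" in `alloc`/`dealloc` would be something worse than bumping a counter.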

Rust and rustc are not at all designed to provide the property you imagine here, where "checking the source code of the public functions" would suffice. If you are concerned that a library might be malicious, you must review all of its code — not just the code that is obviously used. There are many ways for code to cause trouble, and rustc does not promise it is any good at processing "untrusted" input.


Rust is not a sandbox language. Lack of unsafe doesn't give any protection from malicious code.

std::process::Command is safe, but allows arbitrary code execution.
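To illustrate that point, here is a tiny sketch (assuming a Unix-like system where `echo` is on the PATH): no `unsafe` appears anywhere, yet the program executes an external process.

```rust
use std::process::Command;

fn main() {
    // Entirely safe Rust, yet it spawns an arbitrary external program.
    // `echo` is a harmless stand-in; a malicious crate could launch anything.
    let output = Command::new("echo")
        .arg("no unsafe needed")
        .output()
        .expect("failed to run process");
    assert!(output.status.success());
}
```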

proc_macro is safe, but runs as a plugin in the compiler process and could mess with it from the inside.

build.rs is safe, but can emit linker directives that replace any function anywhere in any crate with something else and run code even before fn main().
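As a sketch of that last point: a `build.rs` is an ordinary Rust program that Cargo compiles and runs before building the crate, and its stdout can carry directives such as `cargo:rustc-link-arg`. The symbol names below are hypothetical; `--defsym=a=b` is a real GNU ld flag that aliases one symbol to another.

```rust
// build.rs (sketch): Cargo runs this at build time, before the crate's own
// code is even compiled. Nothing here is unsafe, yet arbitrary code runs.

fn linker_directive() -> String {
    // A directive like this asks Cargo to pass an extra flag to the linker;
    // with a flag such as GNU ld's `--defsym`, one symbol can be redirected
    // to another at link time. `some_fn` / `evil_fn` are made-up names.
    "cargo:rustc-link-arg=-Wl,--defsym=some_fn=evil_fn".to_string()
}

fn main() {
    // Printing the directive is how a build script communicates with Cargo.
    println!("{}", linker_directive());
}
```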

Rust is meant to catch mistakes of honest programmers. It's not even trying to stop deliberately malicious code. The low-level systems programming infrastructure it's built on top of was never designed to be a security barrier, so there are countless ways in which it trusts the programmer, and these can be exploited to bypass the high-level semantics of the Rust language.

You have to review code of the crates.


Even this is not a guarantee: a crate could still substitute a `#[no_mangle] fn malloc`, link one in through a build script, etc.


The best answer here is to generalize the mechanism, IMHO. The allocator is far from the only thing where people want injectable implementations -- people would like one even for float formatting! -- and then it'd be more obvious that there ought to be good tools (`cargo tree` style) for seeing which crates are providing which implementations of which things.
