Does `cargo test` run on JIT or VM?

I have a feeling that each time I run `cargo test`, Cargo does not compile the test code into a binary but somehow runs it directly, and that only passing a target triple makes the tests actually get built into binaries.

Is that impression correct?

  1. If true, how does cargo do that? By using a JIT? A VM?
  2. If false, where is the bin built by running "cargo test"?

P.S. I'm probably asking an X/Y problem. In my actual project, some code gets compiled into an .so and loaded at runtime with `dlopen`, which takes time. I'm investigating the possibility of reading source code directly into memory and turning it into executable code.

Same as with `cargo run`, Cargo doesn't recompile if the source code hasn't changed since the last compile. The location of the binary it invokes is printed by `cargo test -v`.

JITing with optimizations will probably take more time
than dlopen()'ing a precompiled executable, especially in a language such as Rust that needs to do a lot of type checking.

No, cargo run and cargo test just recompile the program and immediately execute it. Not passing a target triple just means it'll compile for the host architecture.

In my case the project is a game engine (not in Rust, though; I'm investigating refactoring it into Rust), and a source file acts like a "script": programmers edit the "script" often and try it out. Currently, if someone wants to edit a script and run it, I compile the code into an .so and dlopen it, which takes more time than using a JIT would. In this case the JIT could even do zero optimization.

So in your experience, would a Rust JIT be faster (since you mentioned type checking) at getting the programmer's code running than using dlopen?

Then what does building Rust code have to do with it? What language are the scripts written in? If (re)compilation has a high cost, then why aren't the scripts written in a… well, scripting language which is designed for easy and fast execution?

I don't believe I understand anymore what you are comparing to what. If you have a compiled dynamic library, then dlopening it should take minimal time, at most milliseconds, while running a full compilation of Rust code can take tenths of seconds or even several seconds. (Just try running cargo check, which only typechecks but does not emit or optimize LLVM – it still takes a couple hundred milliseconds for small to moderately-sized projects).

Well, the reason for not using a scripting language is that the "scripts" in this case are indeed Rust code and will be built as machine code in release mode (for performance's sake); the "script" concept is just for ease of development.

The question doesn't need to be that complicated; it can be simplified to this:
Procedure A: Edit Rust code -> compile to .so -> use dlopen to load it -> code runs.
Procedure B: Edit Rust code -> use a JIT to load the source files -> code runs.

Which one is faster (for development's sake) in Rust?

And in C++ with the LLVM JIT, procedure B is slightly faster for a programmer waiting for his code to run. But I'm not sure whether that's the case in Rust.
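For concreteness, the dlopen step of procedure A needs no extra machinery in Rust; it can be done with plain FFI and no external crates. A minimal Linux/glibc sketch, where loading `cos` from the system `libm.so.6` stands in for loading a function from a freshly compiled "script" .so:

```rust
use std::ffi::CString;
use std::os::raw::{c_char, c_int, c_void};

// Minimal FFI bindings to the dynamic loader; in a real project the
// `libloading` crate wraps this more safely.
#[link(name = "dl")]
extern "C" {
    fn dlopen(filename: *const c_char, flags: c_int) -> *mut c_void;
    fn dlsym(handle: *mut c_void, symbol: *const c_char) -> *mut c_void;
}

const RTLD_NOW: c_int = 2; // resolve all symbols immediately

// Load `cos` from the system math library, standing in for a function
// exported by a just-compiled script .so.
fn load_cos() -> extern "C" fn(f64) -> f64 {
    let lib = CString::new("libm.so.6").unwrap();
    let handle = unsafe { dlopen(lib.as_ptr(), RTLD_NOW) };
    assert!(!handle.is_null(), "dlopen failed");

    let sym = CString::new("cos").unwrap();
    let ptr = unsafe { dlsym(handle, sym.as_ptr()) };
    assert!(!ptr.is_null(), "dlsym failed");

    // Cast the raw symbol to the signature we expect it to have.
    unsafe { std::mem::transmute(ptr) }
}

fn main() {
    let cos = load_cos();
    println!("cos(0.0) = {}", cos(0.0));
}
```

The `dlopen`/`dlsym` calls themselves are cheap (milliseconds at most); in procedure A essentially all of the waiting happens in the "compile to .so" step before them.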

And thanks for your patience; the original question about cargo test is answered. As for the JIT thing, I think I need to figure it out myself.

As far as I'm aware, Rust does not have an LLVM-based VM target. The closest thing is called Miri, but that's intended for validation, not for speed.

As for dlopen vs. a hypothetical JIT VM, I'd take dlopen any day, since the cost of a JIT VM is ongoing, whereas dlopen is a one-time fixed cost.


I found a JIT on GitHub: GitHub - nbp/holyjit: Generic purpose Just-In-time compiler for Rust. Not sure if it meets my requirements; I'll try it and come back later.

It's a totally different thing. HolyJIT is a JIT-compiler generator: if you write a Python interpreter in Rust, HolyJIT generates a JIT compiler for that Python for free. To get a JIT for Rust code, you would need to write a Rust interpreter in Rust first.

Anyway, the project seems abandoned for now. That's sad news, as I was very excited when the author introduced the project.


Thanks for letting me know. It seems that for now I have to do the same trick as in my C++ project (turn the source code into IR and use the LLVM JIT).

Maybe you'll want to look at bjorn3/rustc_codegen_cranelift: Cranelift based backend for rustc (github.com). But beware that, because of procedural macros, code built with a JIT-style backend may actually be slower, since it has to run unoptimized code during build time.
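For reference, that repo's README describes enabling the backend through a `.cargo/config.toml` profile override on nightly, roughly like this (a sketch under the assumption that the `rustc-codegen-cranelift-preview` rustup component is installed; the exact keys may have changed since):

```toml
# .cargo/config.toml — opt in to the unstable codegen-backend feature
# and use Cranelift for fast, unoptimized dev builds.
[unstable]
codegen-backend = true

[profile.dev]
codegen-backend = "cranelift"
```

Release builds can keep the default LLVM backend, so you trade build speed for runtime speed only during development, much like the "script" workflow described above.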