The LLVM home page states:
"A major strength of LLVM is its versatility, flexibility, and reusability, which is why it is being used for such a wide variety of different tasks: everything from doing light-weight JIT compiles of embedded languages like Lua to compiling Fortran code for massive super computers."
What I was wondering is whether a Rust program can, at run time, invoke LLVM to compile and then run a function generated on the fly (i.e. after the Rust program itself has been compiled and linked), and also what kind of overhead that involves: can it be done in a matter of microseconds, or is it a "heavier" operation?
It depends on how many optimizations you ask for. Typically, JITs don't compile everything to native code, and not all of the natively-compiled code is optimized either. The reason is exactly that optimizing is a time-consuming business. JITs typically try to measure or guess (or both) which pieces are worth optimizing, and don't bother with the rest. If you get that trade-off wrong, your JITed code can easily end up slower overall: not because the generated code is worse, but because running the whole optimizer pipeline costs more time than it saves.
For what it's worth, this is pretty much what Wasmer does when it loads a WebAssembly module using the LLVM backend. The module is read into memory as WebAssembly bytecode; Wasmer then generates the corresponding LLVM data structures for each function and hands them to the LLVM JIT.