Linking an archive file using cc::Build?

Hello,

I have a dumb question, but lots of reading and trial and error hasn't answered it yet. @kornel's guide (Using C libraries in Rust: make a sys crate) is awesome, and the Cargo book also tiptoes right up to my question but unfortunately stops short of answering it. (Build Script Examples - The Cargo Book)

I'm trying to create a sys crate to build a C project from source. (https://rosie-lang.org, if you're curious) It has a dependency on Lua.

So, I depend on the mlua crate in my Cargo.toml like this:

[dependencies]
mlua = { version = "0.6.4", features = ["lua53", "vendored"] }

Sure enough, that builds the dependency I need. In my build.rs file, I can get the include path and the library directory (and, by extension, the path to the archive itself) from the environment variables like this:

let lua_include_dir = std::env::var_os("DEP_LUA_INCLUDE").unwrap().into_string().unwrap();
let lua_lib_dir = std::env::var_os("DEP_LUA_LIB").unwrap().into_string().unwrap();
let lua_lib = format!("{}/liblua5.3.a", lua_lib_dir);

When I manually invoke cc (clang) from the terminal using these paths, I get the build output that I want. But I cannot for the life of me figure out the right calls to make to cc::Build to get it to link the Lua build's output archive into my library.

On the command line, I just pass the path to the .a I want to link as another file in the list of source files, but that doesn't work for cc::Build, nor does passing it to include() or object().

Here is what I have right now:

let mut cfg = cc::Build::new();
cfg.static_flag(true);
cfg.include(lua_include_dir);
//How do I specify lua_lib_dir or lua_lib so it gets linked into my output lib??

for src_file in src_files {
    cfg.file(src_file);
}
cfg.compile("rosie");

Any help or insight is very much appreciated. Thank you.

UPDATE: I've been able to work around this limitation by manually expanding the archive file with ar x lua_lib and then calling cc::Build::object() on every resulting .o file.
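Roughly, the workaround looks like this (a minimal sketch; the lua-objects directory name and the error handling here are just illustrative, not the exact build script):

use std::path::PathBuf;
use std::process::Command;

// Extract the archive's members into a scratch directory under OUT_DIR.
let out_dir = PathBuf::from(std::env::var_os("OUT_DIR").unwrap());
let extract_dir = out_dir.join("lua-objects");
std::fs::create_dir_all(&extract_dir).unwrap();

// `ar x` extracts into the current working directory, so run it from extract_dir.
let status = Command::new("ar")
    .current_dir(&extract_dir)
    .arg("x")
    .arg(&lua_lib)
    .status()
    .unwrap();
assert!(status.success(), "ar failed to extract {}", lua_lib);

// Feed every extracted .o back into the cc build.
for entry in std::fs::read_dir(&extract_dir).unwrap() {
    let path = entry.unwrap().path();
    if path.extension().map_or(false, |ext| ext == "o") {
        cfg.object(&path);
    }
}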

This feels really fragile, however. I opted for cc::Build over invoking clang via std::process::Command because I was hoping it would help me be robust against all the crazy configurations out in the wild. Like Windows. :fearful:

Is there a better way? Thanks!

I think you can use

println!("cargo:rustc-link-search={}", lua_lib_dir);
println!("cargo:rustc-link-lib=lua5.3");

Beautiful!!!! It worked! Thank you!

It should be the mlua crate's responsibility to link the interpreter to the binary. What happens if you don't link it yourself?


If I don't include that link step (either emitting "cargo:rustc-link-lib" or explicitly specifying all the Lua .o files), then I get link errors (unresolved Lua symbols) when I attempt to load my own output library.

You shouldn't need to do this. It's the responsibility of the mlua crate to set these paths up.


Oh, I see. You don't depend on lua yourself, but a C dependency does. In that case you shouldn't use the mlua crate, but indeed use cargo:rustc-link-lib.

Oh, I see. You don't depend on lua yourself, but a C dependency does.

I have a "second order" dependency. i.e. I'm exposing a Rust interface to a C library that has a dependency on Lua. But I want to link Lua statically, so in effect it's the same thing as my crate depending directly on Lua.

In that case you shouldn't use the mlua crate...

Do you know which one might be better? I looked at a few different Lua crates, e.g. lua-sys, lua-src, and a handful of others. It looked like mlua was the best supported and could build Lua from source and expose the C headers, which many of the others couldn't.

Thanks again for the reply.

I think you can copy the Lua build code from mlua. Mlua is not a -sys crate, so the exact specifics of how it links to Lua are implementation details; mlua could, for example, decide to mangle all Lua symbols in the future to avoid conflicts with other Lua users.
