Building a library from a binary inside the same project

I have a project with the following structure:

├── Cargo.lock
├── Cargo.toml
├── build.rs
└── src
    ├── lib.rs
    └── types.rs

When I build the project, build.rs generates a static file, which lib.rs then embeds using include_bytes and lazy_static.
In practice, I compile the project with wasm-pack to build a WASM module out of it that I can later load in a browser.
types.rs is shared between build.rs and lib.rs, as it contains common type definitions that are needed at both compile time and runtime.
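
For context, the embedding side looks roughly like this (a sketch only; the storage file name, the Storage type, and its from_bytes helper are placeholders, not the actual tinysearch API):

    // src/lib.rs (sketch; build.rs is assumed to write $OUT_DIR/storage)
    use lazy_static::lazy_static;

    mod types; // shared definitions; build.rs can reuse them via include!("src/types.rs")
    use types::Storage;

    // Embed the file generated by build.rs into the compiled artifact.
    static STORAGE_BYTES: &[u8] =
        include_bytes!(concat!(env!("OUT_DIR"), "/storage"));

    lazy_static! {
        // Deserialize the embedded bytes once, lazily, on first access.
        static ref STORAGE: Storage =
            Storage::from_bytes(STORAGE_BYTES).expect("embedded index should deserialize");
    }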

The project is called tinysearch and the code is already on GitHub. It's a tiny (~130K) search engine for static websites.
The idea is that people generate a search index for their blog at "compile time" and include the final WASM on their site.

For now, I've wrapped all of this up in a make build command, but I'd like to have a stand-alone command-line utility that does the code generation.
The way I imagine it working is

cargo install tinysearch
tinysearch index.json # This will generate the WASM module

This way, people can use it with Jekyll, Hugo, or Zola directly.

Now the question is how I should structure my code to make it both maintainable and idiomatic.
I considered the following options:

  1. Create a cargo workspace which consists of three crates: the binary (what is now in build.rs), the library (lib.rs), and a "common" package (which contains types.rs); a sketch of the root manifest for this layout follows below the list.
  2. Create completely separate crates for bin, lib, and common and publish all of them on crates.io. When the tinysearch binary runs, it would download the lib and common crates, move into the lib crate, and run wasm-pack from there.
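
A minimal root manifest for option 1 would look roughly like this (the member names are placeholders, not the actual layout):

    # Root Cargo.toml (sketch)
    [workspace]
    members = [
        "tinysearch-bin",     # the CLI doing what build.rs does today
        "tinysearch-lib",     # the library that wasm-pack turns into the WASM module
        "tinysearch-shared",  # the common types from types.rs
    ]

Each member keeps its own Cargo.toml, and the bin and lib crates would reference the shared one with a path dependency such as tinysearch-shared = { path = "../tinysearch-shared" }.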

I haven't used workspaces yet, but it looks like they only help with keeping the dependencies in sync; I'd still have to publish all the crates separately. So I'm not sure it's a viable solution.
Option 2 sounds ugly, because it feels like sidestepping cargo and shoehorning that build process in somehow.

I wonder if there's a better solution that I'm somehow missing at the moment.
Maybe there's one where I don't have to publish the internal dependencies on crates.io and can instead make them part of the main tinysearch bin crate?

Any suggestions?

The workspace way is how other projects, like ripgrep (https://github.com/BurntSushi/ripgrep), handle it.

So probably go with that.

I've created a workspace now and it worked! Thanks for the hint @notriddle.

The missing piece was downloading another crate that contains the WASM bindings during code generation. From what I can tell, ripgrep doesn't download other crates, so I looked around and found that both binary-install and cargo-download provide functionality to fetch a crate from crates.io. In the end I took the core logic from cargo-download and added it to my crate.
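
In essence, that fetching step boils down to downloading the .crate archive (a gzipped tar) from the crates.io download endpoint and unpacking it. Roughly like this sketch (reqwest, flate2, and tar are used here just for illustration, and the crate name and version in main are placeholders; this is not the exact code I ended up with):

    // Sketch: fetch a published crate from crates.io and unpack its sources.
    // Requires reqwest with the "blocking" feature, plus flate2 and tar.
    use std::path::Path;

    fn fetch_crate(name: &str, version: &str, dest: &Path) -> Result<(), Box<dyn std::error::Error>> {
        // crates.io serves published .crate files from this download endpoint.
        let url = format!("https://crates.io/api/v1/crates/{}/{}/download", name, version);
        let bytes = reqwest::blocking::get(url)?.bytes()?;

        // A .crate file is a gzip-compressed tar archive.
        let decoder = flate2::read::GzDecoder::new(&bytes[..]);
        tar::Archive::new(decoder).unpack(dest)?;
        Ok(())
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Placeholder crate name/version, just to show the call shape.
        fetch_crate("tinysearch", "0.1.0", Path::new("/tmp/tinysearch-src"))
    }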
I haven't pushed the code to GitHub yet, but it'll be here at some point if others want to have a look. I'll publish it under the same license (MIT/Apache).
