How to use an offline speech model in Rust?

I have some offline speech models. How can I use them in Rust? Some of the model files have a .bin extension.

A .bin file is most probably a saved PyTorch model. You can't run a PyTorch model in Rust directly; you first need to convert it into a TorchScript ScriptModule, which you can then load in Rust using tch::jit::CModule::load. The PyTorch documentation has a tutorial (aimed at C++ deployment) on how to convert a PyTorch module into a TorchScript ScriptModule.
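For reference, the Rust side with the tch crate might look roughly like the sketch below. It assumes the model has already been exported from Python with torch.jit.trace or torch.jit.script; the file name model.pt and the [1, 16000] input shape are placeholders, so substitute whatever your model actually expects.

```rust
// Cargo.toml needs the `tch` crate (which in turn requires libtorch).
// "model.pt" and the [1, 16000] input shape are illustrative assumptions.
use tch::{CModule, Device, Kind, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the TorchScript export. This does NOT work on a raw .bin
    // state_dict; the model must be scripted/traced in Python first.
    let model = CModule::load("model.pt")?;

    // One second of dummy 16 kHz mono audio as a [batch, samples] tensor.
    let audio = Tensor::randn(&[1, 16_000], (Kind::Float, Device::Cpu));

    // Run the module's forward method and inspect the result.
    let output = model.forward_ts(&[audio])?;
    println!("output shape: {:?}", output.size());

    Ok(())
}
```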

Alternatively, you can convert the PyTorch models to ONNX and run them with any Rust ML library that supports the format, such as candle or burn.
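If you go the ONNX route, a hedged sketch with candle's candle-onnx crate could look something like this. The file name model.onnx and the dummy [1, 16000] input are assumptions; check the input and output names your exported graph actually declares, and note that ONNX support in these libraries covers a subset of operators, so whether a given speech model runs depends on the ops it uses.

```rust
// Cargo.toml needs `candle-core` and `candle-onnx`.
// "model.onnx" and the [1, 16000] input shape are illustrative assumptions.
use std::collections::HashMap;

use candle_core::{DType, Device, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Parse the ONNX graph exported from PyTorch (torch.onnx.export).
    let model = candle_onnx::read_file("model.onnx")?;
    let graph = model.graph.as_ref().expect("model has no graph");

    // Feed a dummy [batch, samples] float tensor under the graph's first
    // declared input name; a real model may expect different features.
    let audio = Tensor::zeros((1, 16_000), DType::F32, &Device::Cpu)?;
    let mut inputs = HashMap::new();
    inputs.insert(graph.input[0].name.clone(), audio);

    // Evaluate the graph and pull out the first declared output.
    let mut outputs = candle_onnx::simple_eval(&model, inputs)?;
    let output = outputs
        .remove(&graph.output[0].name)
        .expect("missing output");
    println!("output shape: {:?}", output.shape());

    Ok(())
}
```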


Thank you for the ideas. I didn't know where to start before this.

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.