I’m new to rust and I’m porting some c code that copies files over the network. The code in question reads files in chunks and does various hashing and compression before transfer. What is the best way to read files this way? I have seen the take function. Is that the best way to do this? Performance would be the primary goal in this case.
std::fs::File (https://doc.rust-lang.org/std/fs/struct.File.html) is the type representing a file. For reading and writing, you’ll primarily be using the
std::io::Read and std::io::Write traits (which File implements).
To read in chunks, take a look at the read and read_exact methods. Both take a
&mut [u8], which is the buffer that will be filled. Depending on the desired chunk size, this buffer can be backed by a fixed-size array allocated on the stack, or by a
Box<[u8]> allocated on the heap. (The take adapter you mentioned limits how many bytes a reader will yield in total; for a simple chunked read loop you don’t need it.)
What type of compression do you need? There’s a flate2 crate that offers up some compression algos, but if those aren’t what you need, there are crates offering other compressors (eg brotli, zstd, etc). Most compressors have a similar API: the (de)compression layer implements the Read/Write traits, and you create a layered object with compression layers sitting above the raw input/output (eg a File).
I can’t tell whether you’re also asking about the network aspect here, so I’ll stop at this point.
If you want more precise help, I suggest you include more details about your existing C code and/or requirements.