Hey there,
I am using Rust to compress a .json file to .gz. I am new to Rust and am testing it for performance reasons.
Together with ChatGPT I came up with the following. It works, but is not as efficient as expected (Node.js is actually faster):
use aws_config::load_from_env;
use aws_sdk_s3::Client;
use serde_json::{self, Value};
use tokio::io::AsyncReadExt;
use std::fs::File;
use std::io::{BufWriter, Write};
use flate2::write::GzEncoder;
use flate2::Compression;

...

// Download the object from S3.
let response: aws_sdk_s3::operation::get_object::GetObjectOutput = client
    .get_object()
    .bucket(bucket_name)
    .key(key)
    .send()
    .await?;
let stream = response.body;

// Write the output file through a buffered gzip encoder.
let file = File::create(output_path)?;
let writer = BufWriter::new(file);
let mut encoder = GzEncoder::new(writer, Compression::default());

// Copy the S3 body into the encoder in 8 KiB chunks.
let mut body = stream.into_async_read();
let mut buffer = [0; 8192];
loop {
    let len = match body.read(&mut buffer).await {
        Ok(0) => break,
        Ok(len) => len,
        Err(e) => return Err(Error::from(e)),
    };
    encoder.write_all(&buffer[0..len])?;
}
encoder.finish()?;
The .json is read from an S3 bucket, but the important part is the compression itself. I think I made it very inefficient. Do you maybe have a better solution for this? How can I make it more efficient?
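In case it is easier to look at without the S3 part, here is a simplified, synchronous sketch of what I am trying to do, just file to file (input.json and output.json.gz are placeholder paths, not my real ones):

use std::fs::File;
use std::io::{self, BufReader, BufWriter};

use flate2::write::GzEncoder;
use flate2::Compression;

fn main() -> io::Result<()> {
    // Placeholder paths, only to isolate the compression from the S3 download.
    let input = File::open("input.json")?;
    let output = File::create("output.json.gz")?;

    let mut reader = BufReader::new(input);
    let mut encoder = GzEncoder::new(BufWriter::new(output), Compression::default());

    // Stream the whole file through the gzip encoder.
    io::copy(&mut reader, &mut encoder)?;

    // finish() writes the gzip trailer and returns the inner writer.
    encoder.finish()?;
    Ok(())
}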
Thank you very much and sorry for my rookie code