nano-get: a minimalistic HTTP(S) GET crate

Most of the crates I found for making simple HTTP(S) GET requests add considerable bulk to the binary. For a project I was working on, I needed a crate with minimal dependencies. I didn't need anything fancy: just make a GET request and give me the response.

Thus I embarked on the journey of making my own "nano" HTTP GET library, nano-get.

The crate has zero dependencies for plain HTTP GET, and the Rust openssl wrapper is its only dependency for HTTPS.
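For context, pulling it in might look like this in Cargo.toml (the version number and the feature name for HTTPS are illustrative; check the crate page for the current ones):

```toml
[dependencies]
# plain HTTP only, zero transitive dependencies
nano-get = "0.2"

# or, with HTTPS support via the openssl wrapper (hypothetical feature name):
# nano-get = { version = "0.2", features = ["https"] }
```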

https://crates.io/crates/nano-get

While the library does not yet use any async primitives, I have been able to use it in an async context for concurrent requests (there is an example in the crate description).

I have successfully used it for some of my projects, but would welcome more eyes. This is also my first Rust crate, so it's the beginning of my Rust/open-source journey :slight_smile:

async fn get_url(url: String) -> String {
    get(url) // blocking call inside an async fn: this ties up an executor thread
}

Do not block in asynchronous code: doing so will quickly leave no threads in the thread pool to run other asynchronous tasks! You need to use spawn_blocking to tell tokio that your get call will block.

You can see this by inserting a print in your code.

Code with print
use std::time::Instant;

use futures::future::try_join_all;
use nano_get::get;
use tokio::runtime::{Builder, Runtime};

fn main() {
    let mut runtime: Runtime = Builder::new().threaded_scheduler().build().unwrap();
    runtime.block_on(async {
        let base_url = "https://jsonplaceholder.typicode.com/albums";
        let mut handles = Vec::with_capacity(100);
        let start = Instant::now();
        for i in 1..=100 {
            let url = format!("{}/{}", base_url, i);
            handles.push(tokio::spawn(get_url(url, i)));
        }
        let responses: Vec<String> = try_join_all(handles).await.unwrap();
        let duration = start.elapsed();
        println!("# : {}\n{}", responses.len(), responses.last().unwrap());
        println!("Time elapsed in http get is: {:?}", duration);
        println!("Average time for get is: {}s", duration.as_secs_f64() / (responses.len() as f64));
    });
}

async fn get_url(url: String, n: usize) -> String {
    println!("Started {}", n);
    get(url)
}

The tasks are not all started at the same time, but (in my case, as I have six cores) six at a time, since that's the maximum number of core threads tokio will use. It runs in three seconds.
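The three-second figure matches the batching arithmetic: 100 blocking requests over 6 executor threads run in ceil(100/6) = 17 sequential waves, so the total time is roughly 17 times one request's latency. A quick sketch of that calculation (the per-request latency is an illustrative guess):

```rust
// Rough model: blocking tasks on a fixed-size pool run in sequential waves.
fn waves(tasks: u64, threads: u64) -> u64 {
    (tasks + threads - 1) / threads // ceiling division
}

fn main() {
    let per_request_ms = 180; // illustrative latency of one GET
    let total_ms = waves(100, 6) * per_request_ms;
    // 17 waves of ~180 ms each is about 3 seconds, matching the observation.
    println!("{} waves, ~{} ms total", waves(100, 6), total_ms);
}
```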

Compare to using spawn_blocking

Code with spawn_blocking
use std::time::Instant;

use futures::future::try_join_all;
use nano_get::get;
use tokio::runtime::{Builder, Runtime};

fn main() {
    let mut runtime: Runtime = Builder::new().threaded_scheduler().build().unwrap();
    runtime.block_on(async {
        let base_url = "https://jsonplaceholder.typicode.com/albums";
        let mut handles = Vec::with_capacity(100);
        let start = Instant::now();
        for i in 1..=100 {
            let url = format!("{}/{}", base_url, i);
            handles.push(tokio::task::spawn_blocking(move || get_url(url, i)));
        }
        let responses: Vec<String> = try_join_all(handles).await.unwrap();
        let duration = start.elapsed();
        println!("# : {}\n{}", responses.len(), responses.last().unwrap());
        println!("Time elapsed in http get is: {:?}", duration);
        println!("Average time for get is: {}s", duration.as_secs_f64() / (responses.len() as f64));
    });
}

fn get_url(url: String, n: usize) -> String {
    println!("Started {}", n);
    get(url)
}

This starts every single request simultaneously and finishes after half a second. Tokio has two kinds of threads: those running asynchronous code and those running blocking code. Note that this starts 100 OS threads immediately (tokio will use up to 512 threads for blocking code by default), although it would reuse them if you didn't request that they all run at the same time.


Other than the details above, it looks quite nice.


Wow! Thanks for the feedback and the kind words.

I am still new to the whole async concept (resources for newcomers are only now growing, since async/await only recently landed in stable), and your explanation makes total sense to me. I learnt something new today!

I will update the example in the crate with the spawn_blocking version!
