Sync/Async best practices

Hello folks,
I want to implement a crate for an external API. Right now I am deciding how to combine sync and async clients in a single crate while following the DRY principle. Are there any best practices? I want to avoid re-implementing the same things, like request building, in both clients.

One option is to write it as async and use something like tokio::runtime::Runtime::block_on for the sync version.


Thanks for the answer. Is there any runtime overhead with this approach compared to a simple sync/blocking flow?

I would only refine what @alice suggests slightly: offer only the async version and include an example or doctest that uses tokio's block_on (and/or the futures executor) to demonstrate how a user would "make it synchronous" themselves. Try to resist hiding a tokio runtime and block_on in your own function, at least not without feature-gating it.

That said, I didn't manage to resist (for legacy/compatibility reasons). Take a look at these two functions:

Async code has a small overhead, but as the number of tasks increases, async becomes more performant. That said, if you're using a web API, you're going to be spending 99.9% of your time waiting for the server to respond, so it doesn't really make a difference.

The reqwest sync api uses this technique.


I will check it, thanks. My idea is that I can implement the common code with the builder pattern, producing a request object for the reqwest package and using it in both clients. From a high-level point of view, it should just provide a request builder and return a request object/struct that reqwest can execute.
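A sketch of that split (all names here are hypothetical; in the real crate the builder would produce a `reqwest::Request` instead of this hand-rolled struct, and the clients would actually execute it):

```rust
// Hand-rolled stand-in for what would be a reqwest::Request in the real crate.
#[derive(Debug, PartialEq)]
pub struct Request {
    pub method: String,
    pub url: String,
    pub query: Vec<(String, String)>,
}

// The shared builder: the only place that knows how to assemble a request.
pub struct RequestBuilder {
    method: String,
    url: String,
    query: Vec<(String, String)>,
}

impl RequestBuilder {
    pub fn get(url: &str) -> Self {
        Self { method: "GET".into(), url: url.into(), query: Vec::new() }
    }

    pub fn query(mut self, key: &str, value: &str) -> Self {
        self.query.push((key.into(), value.into()));
        self
    }

    pub fn build(self) -> Request {
        Request { method: self.method, url: self.url, query: self.query }
    }
}

// Both clients consume the same Request; only the execution strategy differs.
pub struct SyncClient;
impl SyncClient {
    pub fn execute(&self, req: Request) -> String {
        format!("sync {} {}", req.method, req.url)
    }
}

pub struct AsyncClient;
impl AsyncClient {
    pub async fn execute(&self, req: Request) -> String {
        format!("async {} {}", req.method, req.url)
    }
}
```

With this layout, adding an endpoint means writing one builder method; neither client duplicates any request-construction logic.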

Yeah, I asked about sync overhead. For example: `some_blocking_function()` vs `tokio::runtime::Runtime::block_on(some_async_function())`. Does tokio add some overhead by executing it on a separate thread pool? Anyway, thanks. I will check how the reqwest crate does that.

When async came out, I wanted to experiment with it a bit to understand how it works (I'd never worked with asynchronous code before). I was a bit surprised when I found out that Rust didn't include any way to "syncify" async code by resolving futures. I didn't want to pull in full-blown third-party crates for some simple experimentation, so I had to implement my own version. I looked at the available packages on crates.io that already dealt with this problem and then went on to implement the code to resolve futures. It might be of interest to you if you want to test the performance of sync vs async. Here's my code to resolve a future in the simplest possible way (to my knowledge):

#![no_implicit_prelude]

use ::core::clone::Clone;
use ::core::future::Future;
use ::core::mem::transmute;
use ::core::mem::ManuallyDrop;
use ::core::pin::Pin;
use ::core::task::Context;
use ::core::task::Poll;
use ::core::task::RawWaker;
use ::core::task::RawWakerVTable;
use ::core::task::Waker;
use ::std::thread::current;
use ::std::thread::park;
use ::std::thread::Thread;

static VTABLE: RawWakerVTable = {
    /// 1. Convert the data from `*const ()` to `Thread`
    /// 2. Wrap the data with `ManuallyDrop`, because we don't own it
    /// 3. Clone the data
    /// 4. Convert the cloned data from `Thread` to `*const ()`
    /// 5. Create a new `RawWaker` instance with the cloned data and return it
    unsafe fn clone(data: *const ()) -> RawWaker {
        RawWaker::new(
            transmute(ManuallyDrop::new(transmute::<_, Thread>(data)).clone()),
            &VTABLE,
        )
    }

    /// 1. Convert the data from `*const ()` to `Thread`
    /// 2. Wake up the waiting thread
    /// 3. (Automatically) Drop the data, because we own it
    unsafe fn wake(data: *const ()) {
        transmute::<_, Thread>(data).unpark();
    }

    /// 1. Convert the data from `*const ()` to `Thread`
    /// 2. Wrap the data with `ManuallyDrop`, because we don't own it
    /// 3. Wake up the waiting thread
    unsafe fn wake_by_ref(data: *const ()) {
        ManuallyDrop::new(transmute::<_, Thread>(data)).unpark();
    }

    /// 1. Convert the data from `*const ()` to `Thread`
    /// 2. (Automatically) Drop the data, because we own it
    unsafe fn drop(data: *const ()) {
        transmute::<_, Thread>(data);
    }

    RawWakerVTable::new(clone, wake, wake_by_ref, drop)
};

pub trait FutureResolve: Future {
    fn resolve(self) -> Self::Output;
}

impl<TFuture> FutureResolve for TFuture
where
    TFuture: Future,
{
    fn resolve(mut self) -> Self::Output {
        let mut this = unsafe { Pin::new_unchecked(&mut self) };
        let raw_waker = RawWaker::new(unsafe { transmute(current()) }, &VTABLE);
        let waker = unsafe { Waker::from_raw(raw_waker) };
        let mut context = Context::from_waker(&waker);

        loop {
            // `async`-generated futures call clone and wake
            match this.as_mut().poll(&mut context) {
                Poll::Ready(result) => return result,
                Poll::Pending => park(),
            };
        }
    }
}

Note: Thread is just a wrapper around an Arc that contains the actual thread data, which is why I used transmute. However, the standard library currently doesn't guarantee anything about the inner workings of Thread, so I'd never use transmute for this task in production. You'd have to wrap Thread in another Arc to be certain that it'll work in all future versions of Rust.
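Since Rust 1.51, the standard library's safe `std::task::Wake` trait takes exactly that Arc-based route and removes the need for `transmute` entirely. A sketch of the same park/unpark resolver built on it (as a free function rather than the trait above):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Waking simply unparks the thread that is blocked inside `resolve`.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Poll the future on the current thread, parking between polls until
// the waker (driven by the future's I/O source) unparks us again.
pub fn resolve<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            Poll::Pending => thread::park(),
        }
    }
}
```

The `clone`, `wake_by_ref`, and `drop` vtable entries are derived from the `Arc` automatically, at the cost of one allocation per waker.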

P.S.: I learned that async wasn't what I needed for what I'm doing, so I stopped the experiments pretty much right after finishing the resolve method.

The default executor will start a thread pool, but you can ask it to use a single thread with tokio's runtime Builder::basic_scheduler.

Thanks. It looks good to start with.
