Blocking in async code - what about compression?

I've been trying to write an async web service, which internally sends some data through this process (hopefully in a streaming manner, without massive buffers; there's a rough sketch after the list):

  • Decompress it (xz2)
  • Do some fairly cheap manipulation of the decompressed data
  • Compress it again (zstd)
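
To make this concrete, here is roughly the shape I have in mind. It's only a sketch: it assumes the async-compression crate's tokio adapters (with the xz and zstd features enabled), and transform_chunk is a made-up placeholder for the manipulation step.

```rust
use async_compression::tokio::bufread::XzDecoder;
use async_compression::tokio::write::ZstdEncoder;
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, BufReader};

async fn recompress<R, W>(input: R, output: W) -> std::io::Result<()>
where
    R: AsyncRead + Unpin,
    W: AsyncWrite + Unpin,
{
    let mut decoder = XzDecoder::new(BufReader::new(input));
    let mut encoder = ZstdEncoder::new(output);
    let mut buf = vec![0u8; 16 * 1024]; // small reads so no huge buffers build up

    loop {
        let n = decoder.read(&mut buf).await?;
        if n == 0 {
            break; // end of the xz stream
        }
        let chunk = transform_chunk(&buf[..n]); // the cheap manipulation step
        encoder.write_all(&chunk).await?;
    }
    encoder.shutdown().await?; // finish the zstd frame
    Ok(())
}

// Hypothetical stand-in for the "fairly cheap manipulation" step.
fn transform_chunk(chunk: &[u8]) -> Vec<u8> {
    chunk.to_vec()
}
```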

Given the recent discussion about blocking in async code, I've been trying to be careful about this, since these (de)compression operations are fairly expensive. However, something has been throwing me off:

Both the xz and async-compression crates offer async APIs for (de)compression that don't seem to offload the blocking work in any way. Would there be a negative impact on scheduling if I use these APIs as-is? Do they split up the operations somehow so that each individual poll is fast?

Thanks :slight_smile:

I don't see the async APIs in xz, could you post a link to the specific documentation page?

Sorry, the real crate name is xz2 (xz doesn't seem to have any code in it)

It's on the main crate page on docs.rs, which I linked (the section called Async I/O)

Compression itself has nothing to do with writing asynchronous code, because it is a CPU-bound task. It can take as long as it wants, since it isn't blocked waiting on something. Reading from or writing to the filesystem, on the other hand, does block.

Hmm, that contradicts some things I've read recently:

From https://stjepang.github.io/2019/12/04/blocking-inside-async-code.html

Intensive computation can also block. Imagine we're sorting a really big Vec by calling v.sort(). If sorting takes a second or so to complete, we should really consider moving that computation off the async executor.

As well as the replies to my question here: https://www.reddit.com/r/rust/comments/e64b2d/blocking_inside_async_code/f9o4u12/

cc @Nemo157 @alexcrichton in case either of you have thoughts :slight_smile:

This is not how the issue with blocking works with regard to async code. The important part is that, by doing IO or heavy computation, the poll method on the future doesn't return quickly, and that is what blocks up the executor.

The exact reason behind poll not returning quickly is not important. You can put your expensive computation on a thread pool such as rayon and await the result over a oneshot channel.
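
For example, here's a minimal sketch of that pattern, assuming the futures crate's oneshot channel; expensive_compress is just a placeholder for the real CPU-bound work:

```rust
use futures::channel::oneshot;

// Offload the CPU-bound work to rayon's thread pool and await the result.
async fn compress_offloaded(data: Vec<u8>) -> Vec<u8> {
    let (tx, rx) = oneshot::channel();
    rayon::spawn(move || {
        let compressed = expensive_compress(&data);
        // The receiver may have been dropped; ignoring the error is fine here.
        let _ = tx.send(compressed);
    });
    rx.await.expect("the compression task was dropped")
}

// Hypothetical stand-in for the actual (de)compression call.
fn expensive_compress(data: &[u8]) -> Vec<u8> {
    data.to_vec()
}
```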

Some executors also have mechanisms for marking code as blocking, e.g. tokio has spawn_blocking.
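
A sketch of the spawn_blocking variant, reusing the hypothetical expensive_compress placeholder from above:

```rust
// tokio::task::spawn_blocking runs the closure on tokio's blocking thread pool
// and returns a JoinHandle we can await.
async fn compress_on_blocking_pool(data: Vec<u8>) -> Vec<u8> {
    tokio::task::spawn_blocking(move || expensive_compress(&data))
        .await
        .expect("the blocking task panicked")
}
```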

Any task which occupies the CPU for a long time without yielding can cause issues for other tasks, because they will not get scheduled on the executor during that time.

However "long" is pretty much relative. If you are doing compression in small chunks, which takes deterministic 200us, it might not have a big impact on other tasks. If it takes 1s, it certainly will. You can control the behavior a bit by controlling chunk sizes. The smaller a chunk the faster it will get compressed and the faster the thread could serve another task. And I think that's the main lever one should use. Delegating compression to a dedicated compression thread would still take away CPU resources, and there would be synchronization overhead between the that thread and the async executor. Therefore it's not necessarily a better solution than directly doing tiny units of CPU bound work on an async executor.
