Hey folks, I'm planning to write a service that consumes from Kafka and writes a few keys to a key-value data store for each message consumed. I plan to use rust-rdkafka. This Kafka topic currently carries around 250K messages per second, so performance counts.
The thing is, my database client is blocking. Currently I see two options:
1. Use the non-blocking mode for the Kafka consumer and do everything on Tokio thread pools (tokio::spawn + tokio::task::spawn_blocking).
2. Consume in a serial, blocking way (with while let Some(message) = message_stream.next().await), or somehow feed this into a rayon par_iter.
My biggest concern is the ability to apply backpressure to message consumption when the database starts to fail or respond slowly.
What do you guys think should work best?
My recommendation is to spawn a dedicated thread for your Kafka connection. You can read why this is preferable to spawn_blocking in my blog post on blocking, specifically the section on things that run forever.
To communicate with the kafka thread, I recommend the use of a bounded tokio::sync::mpsc channel. The use of a bounded channel gives backpressure to your application.
If you use Tokio 0.3, the mpsc channel provides blocking versions of the send and recv methods, which you can use in the dedicated thread. Alternatively on 0.2, you can use futures::executor::block_on on the async send/recv methods.
Regarding the 250K msg/sec thing, you might want to try batching the messages you send into the mpsc channel for better performance. An alternative optimization you can try is to wrap the entire blocking kafka loop inside block_on so you can just send with an .await. This is not a problem blocking-the-thread-wise because you can do it in a manner such that no other tasks run inside that block_on call.
To communicate with the Kafka thread, I recommend the use of a bounded tokio::sync::mpsc
This is what we do currently in Clojure with core.async and it works well. Good to know that this is the best practice here also.
The basic algorithm for a message mixes CPU-bound and IO-bound processing, so I am wondering how (and whether) to tell Tokio which part it should run in which thread pool, and how exactly I should chain them:
consume a message
parse JSON
validation + possible filtering
possibly producing some data into another Kafka topic