Well, there will probably be too much overhead elsewhere to reach 10 million, but it's a function that handles incoming network packets, so the less overhead the better; it all adds up.
Also, async functions are far less cheap if I understand correctly: memory is reserved for the whole state of the function (as a generator), and that state is then embedded in the caller's generator, and so on all the way up the call (await) stack. If I'm not mistaken, those sizes add up? I'm not sure, because the generators might be pinned, so maybe everything only exists once, but it's not as cheap as a normal function call.
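If it helps, the nesting is easy to observe: an async fn's future is a plain value whose size you can measure with `size_of_val`, without ever polling it. A minimal sketch (the function names and buffer size are made up for illustration):

```rust
use std::mem::size_of_val;

async fn inner() {
    let buf = [0u8; 100];
    std::future::ready(()).await; // buf is live across an await point,
    let _ = buf;                  // so it must be stored in the state machine
}

async fn outer() {
    inner().await; // inner's state machine is embedded by value in outer's
}

fn main() {
    // Calling an async fn just constructs a value; nothing runs yet.
    println!("inner: {} bytes", size_of_val(&inner()));
    println!("outer: {} bytes", size_of_val(&outer()));
    // outer's future contains inner's, so it is at least as large.
}
```

So the sizes do compose, but by value inside one flat object: no per-call allocation happens until the outermost future is handed to an executor (e.g. via `Box::pin` or a `spawn`), and then only once for the whole tree.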
I'm talking about refactoring a function for clarity. The refactored code will be called from exactly one location, so I don't think inlining could create overhead here. I'm not worried about LLVM failing to inline code that gets called exactly once, even without an annotation, but I don't know how LLVM optimizes awaiting async functions.
```rust
async fn do_complicated_processing() {
    if something {
        // 150 LOC
        // I would like to move those 150 lines in here:
        some_condition(); // <- doesn't really work, because I need to
                          //    make it async and .await it if I want
                          //    to use .await inside the function
    } else if something_else {
        // 200 LOC
    } else if yet_something_else {
        // 350 LOC
        // How big is the generator that needs to be allocated for 350 lines
        // of code with a bunch of variables (some that need to be passed in),
        // with this code awaiting a bunch of futures again?
    } else {
        // 80 LOC
    }
}
```
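For what it's worth, awaiting a helper doesn't heap-allocate anything by itself: each helper's state machine is embedded by value into the caller's, and you can measure the result. A rough sketch of the refactor (helper names and buffer sizes are hypothetical stand-ins for the long branches):

```rust
use std::mem::size_of_val;

// Hypothetical helpers standing in for the 150- and 350-line branches;
// the buffers stand in for locals held across await points.
async fn handle_something() {
    let buf = [0u8; 150];
    std::future::ready(()).await;
    let _ = buf;
}

async fn handle_yet_something_else() {
    let buf = [0u8; 350];
    std::future::ready(()).await;
    let _ = buf;
}

async fn do_complicated_processing(cond: bool) {
    if cond {
        handle_something().await;
    } else {
        handle_yet_something_else().await;
    }
}

fn main() {
    // The caller's future must be able to hold whichever helper runs,
    // so it is at least as large as the largest helper's future.
    println!("handle_something:          {} B", size_of_val(&handle_something()));
    println!("handle_yet_something_else: {} B", size_of_val(&handle_yet_something_else()));
    println!("do_complicated_processing: {} B", size_of_val(&do_complicated_processing(true)));
}
```

Because the two awaits sit in disjoint branches, the compiler may overlap their storage (union-like), so the combined future tends to be close to the largest branch rather than the sum of all of them; whether it actually overlaps in a given case is worth checking with `size_of_val` like this.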
The long pieces of code obscure the logic of the if/else blocks. In non-async code I would be confident moving them into functions, and the compiler would probably inline them even without being told to. With async code, it's not so clear what's going to happen.