Calling async from sync code is certainly possible, but only if you are not already in the async context. So, for example, if async code calls sync code, then it's not possible to go back into async in that sync code. However, if you spawn a new thread, then you are not in an async context, and you can call async code from there in various different ways.
But you can use a channel to communicate with an async task? Maybe "call" isn't quite the right word, but sending a message and getting a reply back amounts to a "call"? Sorry, I am not thinking very clearly right now ( I have a cold! ).
There's no problem with sending the message, but when it comes to waiting for the response, then that would be blocking the thread, which is not acceptable in an async context.
If you are not inside the async context, then it's fine, yes. But just because you are in a non-async function does not mean that you are outside the async context. Your non-async function could have been called from an async function.
I had a case where a function was "formally async" but never really waited for any I/O, see Semi-async function. In that case, you can resolve the future with now_or_never instead of a full executor. However, I'm not sure if that applies in your case, and it might be considered bad practice (but it did make things much easier for me in my case).
Perhaps I should state what I am trying to do ( or considering doing ). In my sync code ( a thread created by tokio::task::spawn_blocking ), I want to do a list of reads from a file ( typically 8 reads ) asynchronously, using tokio_uring.
I cannot call the io_uring async function directly, but I can create an async task, send it a message to perform the reads asynchronously, and wait for the reply message. I think this involves not using #[tokio::main], and instead creating the async runtime explicitly, so I can get the handle required to create the async task.
Incidentally, is creating an async task a cheap operation, or is there significant overhead? I am assuming it's fairly cheap.
When I try to use tokio_uring I get that error, I presume because I am running on Windows, not Linux, and I presume only Linux is supported. I started googling ways to run Linux.
( The documentation does state "The library requires Linux kernel 5.10 or later." )
Well I managed to implement what I had in mind on windows. The implementation of "list of reads" described above looks like this:
/// Read multiple ranges. List is (file offset, data offset, data size).
fn read_multiple(&self, list: &[(u64, usize, usize)], data: &mut [u8]) {
    let mut handles = Vec::new();
    for (addr, off, size) in list {
        let data = &mut data[*off..off + *size];
        handles.push(self.start_read(*addr, data));
    }
    for h in handles {
        self.wait(h);
    }
}
It does need unsafe blocks to call the Windows API functions. I am very unfamiliar with them, so if anyone knows them well, I would appreciate a code review!
Well, I just got to check the performance. Turns out the overlapped version seems to be significantly slower! I made a test database file with 1 million records = 35MB file size ( so record size is about 35 bytes ).
Running from "cold" (nothing loaded into memory) it takes around 400ms to read and process the records with overlapped file I/O, and about 270ms to read it normally. ( Once loaded into memory it takes 64ms ).
I don't know why it was slower, but clearly this attempt to optimise file read speed is currently a total failure!
For reference, here is the test code.
Setup:
CREATE TABLE dbo.Test( x string, y bigint )
GO
DECLARE i int
SET i = 0
WHILE i < 1000000
BEGIN
  INSERT INTO dbo.Test(x,y) VALUES ( 'Hello', i )
  SET i = i + 1
END
The test:
DECLARE a int, total int
FOR a = y FROM dbo.Test
BEGIN
  SET total = total + a
END
SELECT 'total=' | total
[ Incidentally, I just tried this on Microsoft SQL Server Express. It took an inordinate amount of time to insert 1 million records, something like 2 minutes (rustdb takes 1.5 seconds). Reading 1 million records using a cursor took 12 seconds. Using SUM is fast enough that I wasn't really able to measure it; I think it pre-loaded everything, but it was well under a second ]