Well, I just got a chance to check the performance, and it turns out the overlapped version is significantly slower! I made a test database file with 1 million records and a 35MB file size (so each record is about 35 bytes).
Running from "cold" (nothing cached in memory), it takes around 400ms to read and process the records with overlapped file I/O, and about 270ms reading normally. (Once the file is cached in memory it takes 64ms.)
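The timing itself is nothing fancy. Here is a minimal Rust sketch of the kind of measurement I mean, assuming a placeholder file name "test.rustdb" and an arbitrary 64KB chunk size (the real test goes through the database layer rather than reading raw bytes). Note that repeat runs measure the warm case, since the OS keeps the file in its cache; getting a genuinely cold number needs a fresh boot or an emptied file cache.

use std::fs::File;
use std::io::Read;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    // "test.rustdb" is a placeholder name for the 35MB test database file.
    let mut file = File::open("test.rustdb")?;
    let mut buf = vec![0u8; 64 * 1024]; // arbitrary 64KB chunk size
    let start = Instant::now();
    let mut total = 0u64;
    loop {
        let n = file.read(&mut buf)?;
        if n == 0 { break; } // end of file
        total += n as u64;
    }
    println!("sequential read of {} bytes took {:?}", total, start.elapsed());
    Ok(())
}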
I don't know why it was slower, but clearly this attempt to optimise file read speed is currently a total failure!
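My best guess (and it is only a guess) is that issuing lots of small per-page reads, as the overlapped version does, loses the benefit of the operating system's sequential readahead, on top of the per-call overhead. Here is a rough Windows-only sketch of that access pattern, using std's positioned reads in place of the full overlapped machinery; the page size and file name are placeholder values, not rustdb's actual ones.

use std::fs::File;
use std::os::windows::fs::FileExt; // for seek_read (Windows only)
use std::time::Instant;

const PAGE_SIZE: usize = 16 * 1024; // placeholder; not rustdb's actual page size

fn main() -> std::io::Result<()> {
    let file = File::open("test.rustdb")?; // placeholder file name
    let len = file.metadata()?.len();
    let mut buf = vec![0u8; PAGE_SIZE];
    let start = Instant::now();
    let mut offset = 0u64;
    while offset < len {
        // One positioned read per page: roughly the request pattern the
        // overlapped version generates, minus the event plumbing.
        let n = file.seek_read(&mut buf, offset)?;
        if n == 0 { break; }
        offset += n as u64;
    }
    println!("page-at-a-time reads took {:?}", start.elapsed());
    Ok(())
}

If this pattern alone reproduces most of the slowdown, the problem is the request pattern rather than the overlapped mechanics themselves.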
For reference, here is the test code.
CREATE TABLE dbo.Test( x string, y bigint )

DECLARE i int
SET i = 0
WHILE i < 1000000
BEGIN
  INSERT INTO dbo.Test(x,y) VALUES ( 'Hello', i )
  SET i = i + 1
END

DECLARE a int, total int
SET total = 0
FOR a = y FROM dbo.Test
  SET total = total + a
SELECT 'total=' | total
[ Incidentally, I just tried this on Microsoft SQL Server Express. It took an inordinate amount of time to insert 1 million records, something like 2 minutes (rustdb takes 1.5 seconds). Reading 1 million records using a cursor took 12 seconds. Using SUM is fast; I wasn't really able to measure it (I think everything was pre-loaded), but it was well under a second. ]