I have a question though: is there anything similar to .NET cancellation tokens in futures-rs?
The data in the graph, is that the plaintext benchmark or the JSON serialization benchmark? I couldn’t find the source for the plaintext benchmark.
Cancellation in futures-rs right now doesn’t actually require tokens, as we just express it via drop: once a future is dropped, everything it’s associated with is canceled.
`select` returns the other future after the first one resolves, and you can cancel its computation by just dropping it.
That isn’t quite the same thing. For example, in .NET it’s possible to create a deferred computation, then register the future with an event sink (e.g. another thread), while maintaining control over its cancellation.
Ah yes, if you specifically have a channel-like barrier between two halves of a computation, then the idea is that cancellation will still be signaled through drop. That is, if there’s a channel, then when the consumer is dropped it’ll either send a notification or allow the producer to check whether the consumer has been dropped. This means that so long as the producer has defined cancellation points at which it checks this flag, it’ll work out.
This can (and probably should) be implemented on the `Promise` type. The idea, though, is that futures don’t prevent you from doing this in any way, and that cancellation is always signaled via drop.
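The drop-as-cancellation idea can be sketched with nothing but std channels (no futures-rs involved; `produce_until_cancelled` is a made-up name for illustration): with a bounded `std::sync::mpsc` channel, dropping the consumer makes the producer’s next `send` fail, and that failed send is the producer’s cancellation point.

```rust
use std::sync::mpsc;
use std::thread;

// Producer that treats a failed `send` as its cancellation point.
fn produce_until_cancelled(tx: mpsc::SyncSender<u32>) -> u32 {
    let mut produced = 0;
    for i in 0.. {
        if tx.send(i).is_err() {
            break; // the consumer is gone: cancellation observed via drop
        }
        produced += 1;
    }
    produced
}

fn main() {
    let (tx, rx) = mpsc::sync_channel(1);
    let producer = thread::spawn(move || produce_until_cancelled(tx));
    for _ in 0..3 {
        rx.recv().unwrap(); // consume a few items...
    }
    drop(rx); // ...then cancel by dropping our half of the channel
    let produced = producer.join().unwrap();
    assert!(produced >= 3); // producer stopped shortly after the drop
    println!("producer stopped after {} sends", produced);
}
```

No token object appears anywhere: the only cancellation signal is the drop of the receiving half.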
This is basically a weak reference to a future, right?
Eh no. You basically want an
In a nutshell, with async I/O you can attempt an I/O operation without blocking. If it can’t complete immediately, you can retry at some later point.
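That attempt-then-retry pattern can be seen with plain std networking, assuming only a non-blocking UDP socket: when the read can’t complete immediately, the call returns `WouldBlock` instead of blocking, and the caller retries once something (e.g. mio/epoll) reports the socket as readable.

```rust
use std::io::ErrorKind;
use std::net::UdpSocket;

fn main() {
    // Bind a socket and put it in non-blocking mode.
    let socket = UdpSocket::bind("127.0.0.1:0").expect("bind failed");
    socket.set_nonblocking(true).expect("set_nonblocking failed");

    let mut buf = [0u8; 64];
    // Nothing has been sent to us, so the read cannot complete immediately:
    // instead of blocking, the call fails with `WouldBlock`.
    match socket.recv_from(&mut buf) {
        Err(ref e) if e.kind() == ErrorKind::WouldBlock => {
            println!("not ready yet; retry later");
        }
        other => println!("unexpected result: {:?}", other),
    }
}
```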
That is the readiness-based model of async I/O. However, there is also the completion-based model, where you simply submit the operation: there is no need to retry later, but you don’t necessarily get the result immediately. In my opinion, futures should work really well, and be really easy to implement, on top of completion-based async I/O, because futures live in exactly that paradigm: do something now, get the result later.
Because of this, if you want to implement future-based async I/O on Windows, don’t do it on top of mio! mio adds a fair amount of overhead just to turn a completion-based model into a readiness-based one, and putting futures on top of that would add yet another abstraction layer to go from readiness back to completion. With futures implemented directly on top of completion-based async I/O, there is significantly less overhead.
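For contrast with the retry loop above, here is a rough sketch of the completion shape, using a thread plus a channel as a stand-in for something like IOCP (`submit_read` is a hypothetical name, not a real API): the operation is submitted once, and the result is delivered later, with no `WouldBlock` and no retry.

```rust
use std::sync::mpsc;
use std::thread;

// Completion-style: submit the operation along with a "completion port"
// (here just a channel sender); the result arrives later.
fn submit_read(completions: mpsc::Sender<Vec<u8>>) {
    thread::spawn(move || {
        // Stand-in for the kernel finishing the I/O on our behalf.
        let data = b"hello".to_vec();
        let _ = completions.send(data);
    });
}

fn main() {
    let (tx, rx) = mpsc::channel();
    submit_read(tx); // returns immediately; no retry loop anywhere
    let data = rx.recv().unwrap(); // later: the completion is delivered
    assert_eq!(data, b"hello");
    println!("read completed with {} bytes", data.len());
}
```

The "do something now, get the result later" shape is exactly a future’s `poll`-until-done contract, which is why the two fit together so naturally.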
It would be nice to explain in the tutorial how `Task`s map onto concepts from other languages (C++ futures, C# tasks, etc.).
For example, coming from C++, one might expect that a `Future` represents the full state of a deferred computation. However, it is not so with Rust futures: in order to poll a `Future` for completion, you also need a `Task` that goes with it. Seems to me that the closest analog to a C++ future in Rust is actually a `Task` rather than the `Future` trait: if I understood the docs correctly, the same `Task` must be used for every call to `schedule()`, or else. This seems like a very error-prone API, because anyone who has a reference to it may screw things up by passing in a different `Task`. Why can’t there be an operation of binding a `Future` to a `Task`, which consumes the future?
I would like to hear what the plans are with respect to `#![no_std]`, and in particular heap-free and/or alternate-allocator usage of these libraries. “Zero-cost” means “no heap allocations” to me, since heap allocations have non-zero cost, but from other descriptions it seems that “one allocation” might be a more accurate label than “zero-cost”.
Like others noted above, it seems like IO completion ports map naturally to futures. I would be very interested in seeing work along those lines (on Windows).
Certainly! I’m not personally familiar enough with the C++ or C# implementations, but PRs are of course always welcome!
To clarify, though, a `Task` is intended to follow an entire computation through from start to finish. That computation is likely composed of many independent futures over time, and each future internally has a state machine of what to do next. The `Task` is then passed to `schedule` to help the futures complete. The `Task` for a future can change over time, however, so it’s not required that the `Task` is always the same. The guarantee provided by `schedule` is only that the last task registered will be woken up.
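A toy model of that last-task-wins guarantee (the names `TaskSlot`, `schedule`, and `complete` are made up here for illustration, not the real futures-rs API): the slot remembers only the most recently registered handle, and completion wakes exactly that one.

```rust
use std::sync::{Arc, Mutex};

// A slot that stores only the most recently registered task handle.
#[derive(Default)]
struct TaskSlot {
    last: Arc<Mutex<Option<&'static str>>>,
}

impl TaskSlot {
    fn schedule(&self, task: &'static str) {
        // Re-registering simply overwrites any earlier task.
        *self.last.lock().unwrap() = Some(task);
    }
    fn complete(&self) -> Option<&'static str> {
        // Completion wakes whatever was registered last, and only that.
        self.last.lock().unwrap().take()
    }
}

fn main() {
    let slot = TaskSlot::default();
    slot.schedule("task A");
    slot.schedule("task B"); // switching tasks between calls is fine
    assert_eq!(slot.complete(), Some("task B")); // only the last is woken
}
```

Registering with a different task is therefore not an error; it just means the earlier registration is forgotten.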
Also, to wed a future to a `Task`, I believe that’s
Currently we don’t have plans for making the library `#![no_std]` compatible. The zero-cost claim for futures is that building up an entire future for a computation does not require allocations; that is, common combinators like `select`, etc., don’t allocate. The `Task`, however, does require an allocation, and this typically goes hand-in-hand with the concept of “one `Task` per TCP connection”.
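That allocation-free composition can be sketched in plain Rust with a toy trait (`Finish`, `Ready`, and `Map` here are stand-ins for illustration, not the real futures-rs types): each combinator returns an ordinary struct wrapping its input and closure, so the whole chain is one concrete stack value with no `Box` anywhere.

```rust
use std::mem::size_of;

// A value that is immediately "done".
struct Ready<T>(T);

// A combinator: wraps an inner computation and a closure. No heap involved.
struct Map<F, G>(F, G);

trait Finish {
    type Out;
    fn finish(self) -> Self::Out;
}

impl<T> Finish for Ready<T> {
    type Out = T;
    fn finish(self) -> T {
        self.0
    }
}

// Same shape as a futures-style Map impl: run the inner part, apply the closure.
impl<U, F, G> Finish for Map<F, G>
where
    F: Finish,
    G: FnOnce(F::Out) -> U,
{
    type Out = U;
    fn finish(self) -> U {
        (self.1)(self.0.finish())
    }
}

fn main() {
    // Composing combinators builds one concrete type on the stack.
    let chained = Map(Map(Ready(1u32), |x: u32| x + 1), |x: u32| x * 2);
    // Capture-free closures are zero-sized, so the entire chain is just
    // the u32 payload: composition cost no allocation and no extra bytes.
    assert_eq!(std::mem::size_of_val(&chained), size_of::<u32>());
    assert_eq!(chained.finish(), 4); // (1 + 1) * 2
}
```

The one allocation mentioned above would then come at the very end, when such a chain is handed to a task to be driven to completion.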
Right now I/O is prototyped with mio, which uses IOCP under the hood but exposes it through a readiness model, not a completion model. Futures could likely be bound directly to IOCP, though!
We seem to have a problem here: I am not sufficiently familiar with Rust implementation to make any contributions just yet.
If tasks can change, what happens to data allocated in the old task? The tutorial says that this is intended for passing data between chained futures. So if the task changes, task-local data becomes inaccessible, doesn’t it?
Are there any examples of task data usage?
Yeah, currently this causes a panic, and futures intending to migrate between tasks would have to avoid using task-local data.
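A toy model of that panic, under the assumption that task-local data is keyed by the task that created it (`TaskLocal` and the integer task ids are made up here for illustration):

```rust
// Hypothetical sketch: data stored for one task panics if read from another,
// which is why futures that migrate between tasks must avoid task-locals.
struct TaskLocal<T> {
    owner: u64, // id of the task that created the data
    value: T,
}

impl<T> TaskLocal<T> {
    fn new(owner: u64, value: T) -> Self {
        TaskLocal { owner, value }
    }
    fn get(&self, current_task: u64) -> &T {
        assert!(
            self.owner == current_task,
            "task-local data accessed from a different task"
        );
        &self.value
    }
}

fn main() {
    let data = TaskLocal::new(1, "connection state");
    assert_eq!(*data.get(1), "connection state"); // same task: fine

    std::panic::set_hook(Box::new(|_| {})); // silence the demo panic message
    let crossed = std::panic::catch_unwind(|| *data.get(2)); // migrated task
    assert!(crossed.is_err()); // panics, as described above
}
```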
Isn’t that kinda bad for a “foundational” API?
It depends on your point of view. Would you consider it bad that `RefCell` panics by default? Or that `Copy`? Similar to those abstractions, `TaskData` is meant to be general within one context, not general for all contexts. That’s what the trait’s for!
Will it be possible to run a `Loop` (in futures-mio) in a separate thread?
Indeed! The crate does not currently spawn any threads, so you’ve got full and complete control over what threads are in play.
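A minimal sketch of that setup using only std (no futures-mio here; the channel stands in for whatever handle the real `Loop` would expose): the event loop runs on its own thread, other threads hand it work over a channel, and closing the channel shuts it down.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();

    // The "event loop" lives on its own thread and processes events
    // until its input channel closes.
    let loop_thread = thread::spawn(move || {
        let mut handled = 0;
        for event in rx {
            let _ = event; // stand-in for dispatching an I/O event
            handled += 1;
        }
        handled
    });

    for i in 0..5 {
        tx.send(i).unwrap(); // submit work from the main thread
    }
    drop(tx); // closing the channel shuts the loop down
    assert_eq!(loop_thread.join().unwrap(), 5);
}
```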