Do one thing if my value is Sync and another if it isn't

I am writing a library that includes a generic struct taking an Inf: Inference type parameter, where Inference is a trait defined in the library.

use std::future::Future;

trait Inference {
    // Intended shape: infer() returns a future that borrows self.
    // (This doesn't compile as written; it's here to show the intent.)
    fn infer(&self, feature: &[u8]) -> Future<Output = Vec<f32>>;
}

If Inf is Sync, I want to spawn infer() on a threadpool. For simple cases, it would be nice to be able to run inferences on-thread. But I can't find a reactor that doesn't require the future to be Send, and the future can't be Send if it's holding a reference to a non-Sync self. This comes up because I expect simple impls to have a RefCell inside of them to deal with the fact that running an inference is formally mutating but referentially transparent (e.g., tf::Session has one).
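
To make that concrete, here is a minimal sketch of the kind of simple impl I have in mind (all names are made up; the RefCell stands in for something like tf::Session). Because RefCell is not Sync, a future that borrows &self here can never be Send:

use std::cell::RefCell;
use std::future::Future;

// Hypothetical stand-in for something like tf::Session: calling it is
// formally mutating but referentially transparent.
struct Session;

impl Session {
    fn run(&mut self, feature: &[u8]) -> Vec<f32> {
        feature.iter().map(|b| *b as f32).collect()
    }
}

// A "simple" impl: RefCell<Session> is not Sync, so &SimpleInference is
// not Send, and neither is any future that holds that reference.
struct SimpleInference {
    session: RefCell<Session>,
}

impl SimpleInference {
    // Written as an inherent method because the trait above doesn't
    // compile as shown; the borrow of self is the part that matters.
    fn infer<'a>(&'a self, feature: &'a [u8]) -> impl Future<Output = Vec<f32>> + 'a {
        async move { self.session.borrow_mut().run(feature) }
    }
}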

I guess I have two problems: on-thread executor and type specialization.

  1. On-thread executor. Is there any executor that can drive a future to completion on-thread? I.e., an executor that doesn't constrain its Future to be Send.
  2. Type specialization. Is it even possible to do something different if Inf {is,isn't} Sync, short of exposing different types? If I just define two impl blocks for Engine, I think run() will end up with two applicable definitions (see the sketch below).
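
Roughly what I mean by the two-impl-blocks attempt (a sketch; the Inference bound is elided and the method names are made up):

struct Engine<Inf> {
    inf: Inf,
}

impl<Inf> Engine<Inf> {
    // Fallback path: drive the inference future on the calling thread.
    fn run_local(&self) { /* block on the future locally */ }
}

impl<Inf: Sync> Engine<Inf> {
    // Threadpool path, only available when Inf is Sync. If this method
    // were also named run_local (or run), the two impl blocks would be
    // rejected as duplicate definitions, because both apply whenever
    // Inf: Sync. Picking one impl over the other based on the bound is
    // exactly the specialization I'm asking about.
    fn run_pooled(&self) { /* hand the future off to a pool */ }
}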

If I can't fully achieve my goals (one API that does something different depending on Sync-ness), what's the next best option? Expose a SynchronousInference and a SynchronousEngine as well?


Executors run (call the poll method on) futures. Reactors sit in the background and are responsible for calling Waker::wake().

You have thrown in Sync when Send is partly the controlling factor. (Forget Sync.)

This is the limiting factor: you're making a future that has a lifetime (it borrows self). That stops spawn from working on some executors; instead you are limited to blocking calls.

The next bit is that you can't specify a trait like the one you show. The function has to return either a Send or a non-Send future, and it gets messy if you try to use associated types.
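
For reference, one shape that does compile today is to box the returned future (a sketch; the allocation is the price):

use std::future::Future;
use std::pin::Pin;

trait Inference {
    // No Send bound on the returned future here. Whether a given impl's
    // future could be Send depends on the impl: a future holding &self
    // to a non-Sync type can never be Send.
    fn infer<'a>(
        &'a self,
        feature: &'a [u8],
    ) -> Pin<Box<dyn Future<Output = Vec<f32>> + 'a>>;
}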

I have no easy answer to give for getting the dual functionality.


Thanks for the reply!

You got me - when I said reactor, I meant executor. I edited the OP for clarity.

block_on or LocalPool solve problem 1. Yes, when Inference is not Sync, the whole operation has to degrade to effectively blocking calls on the local thread. I don't think I can forget about Sync, though, because (IIUC) the constraint is whether Inference is Sync, not Send. Inference being non-Sync automatically implies that the futures it returns are not going to be Send, since those futures need a reference to a non-Sync self. I.e., if Inference were Send but not Sync, I'd have all the same problems that I do now - reduced to blocking calls on the local thread.
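
For later readers, here is the shape that solves problem 1 for me (a sketch using futures 0.3; block_on only requires Future, not Future + Send):

use std::cell::RefCell;
use futures::executor::block_on;

fn main() {
    // RefCell<u32> is not Sync, so a future borrowing it is not Send.
    let state = RefCell::new(0u32);

    // block_on drives the future to completion on this thread.
    let result = block_on(async {
        *state.borrow_mut() += 1;
        *state.borrow()
    });

    assert_eq!(result, 1);
}

LocalPool works the same way when there are several such non-Send futures to juggle on one thread.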

Part 2 is still an open question. But I'm starting to think it's actually not bad to expose a separate synchronous API, since the behavior is clearer to the caller; so I guess I won't be too disappointed if the answer is exposing a new type.
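
If I do go that way, I'm picturing something like this (just a sketch using the names from my first post; details still to be worked out):

// Blocking counterpart to Inference, for impls that aren't Sync.
trait SynchronousInference {
    fn infer(&self, feature: &[u8]) -> Vec<f32>;
}

// Drives a SynchronousInference on the calling thread; no executor,
// no Send/Sync requirements.
struct SynchronousEngine<Inf> {
    inf: Inf,
}

impl<Inf: SynchronousInference> SynchronousEngine<Inf> {
    fn new(inf: Inf) -> Self {
        SynchronousEngine { inf }
    }

    fn run(&self, feature: &[u8]) -> Vec<f32> {
        self.inf.infer(feature)
    }
}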

With regard to part 2: I don't think this kind of specialization is currently possible the way you might like. As you note, you can probably create an additional type. However, specialization has been an accepted RFC (#1210) for a long time, and (as aturon noted last year) it is one of the oldest, if not the oldest, post-1.0 features not yet implemented.

It is coming some day. I think we're getting a much nicer story with regard to how Rust intends to allow this, along with const generics and GATs. You can follow the tracking issue for specialization; as you can see there, it still has a long way to go, but it is being worked on.

These three features (specialization, const generics, GATs) are some of the larger additions to the language that have been in the works for a long time, and they are likely to be the focus after async/await lands. In fact, those four features are the ones specifically mentioned as a focus in the 2019 roadmap blog post and RFC by the language team.


Thanks for the link. I have seen the RFC before, but I'm going to read it again to refresh my memory.

These are the exact RFCs I can point to as having caused serious pain when I encountered the use-cases they're designed to solve. (Especially GATs and const generics; the others can usually be solved by writing more code, as is the case here.) I'm glad they're being worked on, and I love the direction they're all heading, but Rust team, I beg you to take your time! It will be worth the wait.

Everything[1] you've landed so far is refined gold. You don't need to rush it. If it takes another year, please let it be so. The process is working! I know it's exhausting from the inside, but just look at what is coming out of it. I truly believe that if all Rust team members went to the mountain for a year of meditation, then by the time you got back, Rust would still be years ahead of any other production language - at least for every use case I care about.

[1] Heh...almost. But let me not spoil the praise with trifles.

