Something to keep in mind is that Rust's implementation is quite different from Go's, due to a difference in priorities.
Goroutines each get their own stack, and the runtime manages switching between them. In that way the model is similar to normal OS threads, and you don't even notice that any clever multitasking is going on.
Rust's futures are more akin to syntactic sugar around the state machine you might write by hand when doing asynchronous programming. All the state you need is bundled up into a single object which can be passed around like any other value (although you may not necessarily be able to inspect its contents due to encapsulation and such).
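To make that concrete, here is a sketch of such a hand-written state machine. CountUp is a made-up example, not anything from the standard library: it is roughly the kind of enum-plus-poll() structure that an async block compiles down to, with a no-op Waker thrown in so we can drive it by hand.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A hypothetical two-step future written out as the explicit state
// machine that async/await would otherwise generate for you.
enum CountUp {
    Start,
    Halfway,
    Done,
}

impl Future for CountUp {
    type Output = u32;

    // Each call to poll() advances the state machine one step.
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        match *self {
            CountUp::Start => {
                *self = CountUp::Halfway;
                cx.waker().wake_by_ref(); // ask to be polled again
                Poll::Pending
            }
            CountUp::Halfway => {
                *self = CountUp::Done;
                cx.waker().wake_by_ref();
                Poll::Pending
            }
            CountUp::Done => Poll::Ready(42),
        }
    }
}

// A waker that does nothing: just enough to poll a future by hand.
unsafe fn noop_clone(_: *const ()) -> RawWaker {
    noop_raw_waker()
}
unsafe fn noop(_: *const ()) {}
static VTABLE: RawWakerVTable =
    RawWakerVTable::new(noop_clone, noop, noop, noop);
fn noop_raw_waker() -> RawWaker {
    RawWaker::new(std::ptr::null(), &VTABLE)
}

fn main() {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = CountUp::Start;

    // Poll in a loop until the future completes.
    loop {
        match Pin::new(&mut fut).poll(&mut cx) {
            Poll::Ready(n) => {
                println!("finished with {}", n);
                break;
            }
            Poll::Pending => continue,
        }
    }
}
```

Note that all the state lives in the CountUp value itself, which is why the whole thing can be moved around and stored like any other Rust value.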
This isn't as simple as the goroutine model, but it has the benefit that almost everything can be implemented as a library (that's actually how futures first started and were developed!). And because the structure of these nested state machines is known at compile time, the optimiser has a better chance of generating fast code. There's also no need to switch stacks or to have a pervasive runtime or GC, because making progress on a future is just a matter of calling its poll() method. That's typically done by the executor exposed by libraries like async-std, which is what lets them do fancy things like multitasking and only waking a future when it can actually make progress.
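As an illustration of the executor side, here is a toy block_on written with nothing but the standard library. It is deliberately naive (the names and structure are my own, not async-std's API): a real executor parks the thread and relies on the Waker to know when polling is worth doing again, whereas this one just busy-polls in a loop.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy single-future executor. Real executors (async-std, tokio)
// sleep between polls and use the Waker to decide when a future is
// ready to make progress; this one simply polls until completion.
fn block_on<F: Future>(fut: F) -> F::Output {
    // Minimal no-op waker, enough to construct a Context.
    unsafe fn clone(_: *const ()) -> RawWaker {
        raw()
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }

    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);

    // Pin the future on the stack so it can be polled. This is sound
    // because we never move it again after pinning.
    let mut fut = fut;
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };

    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    // An async block compiles down to exactly the kind of state
    // machine described above; block_on drives it to completion.
    let answer = block_on(async { 6 * 7 });
    println!("{}", answer);
}
```

The point of the sketch is that "running" a future requires no stack switching at all: the executor is an ordinary function that repeatedly calls poll() on an ordinary value.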