Tuple destructuring assignment


#1

I understand that there is an issue out for this:
https://github.com/rust-lang/rust/issues/10174
I would like to add my voice in support of being able to do the following:
(var1, var2, var3) = functionreturningatuple();
Could someone well informed tell me what the status of this is?
Is it really that problematic/difficult?


#2

You can destructure tuples when you use let.

let (var1, var2, var3) = functionreturningatuple();

However, there are admittedly cases where let alone is awkward, such as iteratively computing Fibonacci numbers, and probably other situations too. There is a workaround for the Fibonacci sequence in particular.

fn fib(n: u32) -> u64 {
    let mut pair = (0, 1);
    for _ in 0..n {
        let (a, b) = pair;
        pair = (b, a + b);
    }
    pair.0
}

Or, written in a functional style, although that version may look quite weird.

fn fib(n: u32) -> u64 {
    (0..n).fold((0, 1), |(a, b), _| (b, a + b)).0
}
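As a quick sanity check (repeating the fold version above so the snippet stands alone), it produces the standard sequence:

```rust
fn fib(n: u32) -> u64 {
    (0..n).fold((0, 1), |(a, b), _| (b, a + b)).0
}

fn main() {
    // First Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, ...
    let first: Vec<u64> = (0..7).map(fib).collect();
    assert_eq!(first, vec![0, 1, 1, 2, 3, 5, 8]);
}
```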

The issue is that the assignment operator expects an lvalue, but a tuple expression is not an lvalue. By the point Rust realizes that, it is already too late: the bindings have already been dereferenced, and all it sees is approximately (0, 1) = (1, 1) (well, not quite, but the point is that it no longer sees lvalues).

It is an issue, but one that is tricky to deal with, and it has relatively easy workarounds. Anyway, there is an issue in the RFCs repository, in case you are interested in this functionality.

https://github.com/rust-lang/rfcs/issues/372
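For the record, the usual workarounds when you need to update several existing mut bindings at once are a temporary tuple destructured with let, or, for a plain swap, std::mem::swap; a minimal sketch:

```rust
use std::mem;

fn main() {
    let mut a = 1u64;
    let mut b = 2u64;

    // Desired (not valid as plain assignment): (a, b) = (b, a + b);
    // Workaround: destructure into fresh bindings, then assign back.
    let (next_a, next_b) = (b, a + b);
    a = next_a;
    b = next_b;
    assert_eq!((a, b), (2, 3));

    // For a plain swap, the standard library already has a dedicated helper.
    mem::swap(&mut a, &mut b);
    assert_eq!((a, b), (3, 2));
}
```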


#3

But that is an inferior solution, as it allocates (a, b) on the stack n times, unnecessarily.


#4

Currently there’s not a lot of work being done to move it forward :frowning:


#5

It isn’t really “allocation”, whatever that means. I even compiled this function with optimizations on x86_64: there is no stack pointer movement, everything is done in registers. And even if it did involve stack pointer changes, it’s a straightforward optimization to adjust the stack pointer only at the beginning and end of a function.

This is a high-level programming language, not assembly; let the optimizer do what it’s supposed to do. Compilers aren’t stupid: they don’t translate code directly to assembly, they take time to generate better and faster code. There is no need to micro-optimize every detail, because the compiler can do those things for the user. What’s worse, it’s easy to get such things wrong: x / 2 cannot be optimized to x >> 1 when x is of type i32 (exercise for the reader: why is this optimization incorrect?).
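(A hint for the exercise: the two expressions disagree on negative values, because signed division truncates toward zero while an arithmetic right shift rounds toward negative infinity. A short check:)

```rust
fn main() {
    let x: i32 = -7;
    // Signed division truncates toward zero.
    assert_eq!(x / 2, -3);
    // An arithmetic right shift floors toward negative infinity.
    assert_eq!(x >> 1, -4);
}
```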

I suppose that a long time ago, when computers were slower, optimizers weren’t as good as they are today, but that is in the past.