SICP introduced the stream paradigm, implemented by lazy lists. I want to understand whether Rust is expressive enough to capture the same idea easily.
The Rust book discusses iterators, which can be used to model lazy lists. However, the machinery is linear, which does not obviously cover the full power of the stream paradigm; it is like a control flow without looping structures.
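To fix ideas, here is a sketch of that linear machinery, in the spirit of the Fibonacci iterator from Rust by Example (the code there differs in details; the name `Fib` is mine):

```rust
// Linear, iterator-style Fibonacci: the state (a, b) steps to (b, a + b).
// Nothing downstream can feed back into the state -- the flow is one-way.
struct Fib {
    a: u64,
    b: u64,
}

impl Iterator for Fib {
    type Item = u64;
    fn next(&mut self) -> Option<u64> {
        let out = self.a;
        self.a = self.b;
        self.b = out + self.a; // (a, b) -> (b, a + b)
        Some(out)
    }
}

fn main() {
    let firsts: Vec<u64> = Fib { a: 1, b: 1 }.take(10).collect();
    println!("{:?}", firsts); // [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
}
```

Each call to `next` consumes the state and emits one term; earlier output is never consulted again.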
To illustrate, consider the lazy list of the Fibonacci sequence. It can, of course, be implemented linearly by an iterator, as Rust by Example illustrates; essentially, one iterates the step (a, b) -> (b, a + b). However, there is another point of view: we can describe the Fibonacci lazy list f by saying that its first two terms are 1, 1, and the lazy list starting from the third term, f[2..], is the term-wise sum of f[0..] and f[1..]. It can also be described by a system of “updating” constraints:
f = Cons(1, g)              // the first term of f is 1, and the rest is g
g = Cons(1, add_list(f, g)) // the first term of g is 1, and the rest is the term-wise sum of f and g
Or, expressed in Haskell:
let fib = 1 : 1 : zipWith (+) fib (tail fib)
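For what it is worth, here is one possible Rust encoding of this feedback loop, a sketch using SICP-style memoized thunks. All names (`Stream`, `Cell`, `zip_add`, `force`) are mine, and the `Rc` cycle created by tying the knot is deliberately leaked:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A lazy, memoized stream cell: either an evaluated cons pair,
// or a suspended computation (thunk) that will produce one.
enum Cell {
    Cons(i64, Stream),
    Thunk(Box<dyn FnOnce() -> (i64, Stream)>),
}

#[derive(Clone)]
struct Stream(Rc<RefCell<Option<Cell>>>);

impl Stream {
    fn cons(head: i64, tail: Stream) -> Stream {
        Stream(Rc::new(RefCell::new(Some(Cell::Cons(head, tail)))))
    }

    fn thunk(f: impl FnOnce() -> (i64, Stream) + 'static) -> Stream {
        Stream(Rc::new(RefCell::new(Some(Cell::Thunk(Box::new(f))))))
    }

    // Force the cell: run the thunk at most once, then memoize the result.
    // A reentrant force of an unfinished cell panics (ill-founded cycle).
    fn force(&self) -> (i64, Stream) {
        let cell = self.0.borrow_mut().take().expect("cycle with no data");
        let (h, t) = match cell {
            Cell::Cons(h, t) => (h, t),
            Cell::Thunk(f) => f(),
        };
        *self.0.borrow_mut() = Some(Cell::Cons(h, t.clone()));
        (h, t)
    }
}

// Term-wise sum of two streams: the analogue of Haskell's zipWith (+).
fn zip_add(xs: Stream, ys: Stream) -> Stream {
    Stream::thunk(move || {
        let (x, xt) = xs.force();
        let (y, yt) = ys.force();
        (x + y, zip_add(xt, yt))
    })
}

// fib = Cons(1, Cons(1, zip_add(fib, tail fib))): tie the knot through a
// placeholder cell that is filled in only after `f` itself exists.
fn fib() -> Stream {
    let rest = Stream(Rc::new(RefCell::new(None))); // placeholder for f[2..]
    let f = Stream::cons(1, Stream::cons(1, rest.clone()));
    let f2 = f.clone();
    *rest.0.borrow_mut() = Some(Cell::Thunk(Box::new(move || {
        let (_, tail) = f2.force();       // tail = f[1..]
        zip_add(f2.clone(), tail).force() // f[2..] = f[0..] + f[1..]
    })));
    f
}

fn take(s: &Stream, n: usize) -> Vec<i64> {
    let mut out = Vec::new();
    let mut cur = s.clone();
    for _ in 0..n {
        let (h, t) = cur.force();
        out.push(h);
        cur = t;
    }
    out
}

fn main() {
    println!("{:?}", take(&fib(), 10)); // [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
}
```

Because each cell is forced at most once and then memoized, taking n terms costs linear time; the price is interior mutability and a reference cycle that a production version would have to manage (e.g. via `Weak`).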
Let me point out how this differs from the paradigm given in the Rust book. In the examples there, the iterators are passed consecutively through “devices” such as filter; here we have “feedback loops”: in terms of the constraints, a name appearing on the left-hand side (f in the preceding Fibonacci system) can also appear on the right-hand side.
Generally, we consider a system of “updating” constraints, and we update the lists through these constraints repeatedly. To be more precise, a constraint looks like f = h(g_1, g_2, ...), where h is a computable function. The adjective “updating” means that we use the lists g_1, g_2, ... to update f. That is to say, the system describes a computational process that iteratively updates the lists on the left-hand side, rather than a system of constraints to be “solved”.
Here is a somewhat nontrivial example:
a = Cons(-1, -2, 3, 4, 5, -6, b) // the first terms of a are -1, -2, 3, 4, 5, -6, and the rest is b
b = add_list(c, d)               // b is the term-wise sum of c and d
c = filter(>0, a)                // c consists of the strictly positive terms of a
d = filter(<0, a)                // d consists of the strictly negative terms of a
There is some abuse of notation here, but I hope it is comprehensible. At the first stage, we have b = 2, 2, -1, ..., where ... denotes terms unknown at the moment. Then we update b to 2, 2, -1, 1, ..., and then we update c to 3, 4, 5, 2, 2, 1, .... If we proceed further, we fall into an infinite recursion. This shows that, in general, a system of “updating” constraints might not yield a terminating computational process. However, we can still compute the first 10 terms of a. This system can be expressed in Haskell:
let v = (-1) : (-2) : 3 : 4 : 5 : (-6) : zipWith (+) (filter (>0) v) (filter (<0) v)
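To make the staged computation above concrete, here is a sketch in Rust that iterates the constraints on finite known prefixes (plain `Vec`s) until no constraint yields a new term. This is an eager simulation of the updating semantics, not a lazy implementation, and `solve_prefix` is my name:

```rust
// Iterate the "updating" constraints on finite prefixes until a fixed
// point: each pass recomputes every list from the current knowledge.
fn solve_prefix() -> Vec<i64> {
    let prefix: Vec<i64> = vec![-1, -2, 3, 4, 5, -6];
    let (mut a, mut b, mut c, mut d): (Vec<i64>, Vec<i64>, Vec<i64>, Vec<i64>) =
        (prefix.clone(), Vec::new(), Vec::new(), Vec::new());
    loop {
        // a = Cons(-1, -2, 3, 4, 5, -6, b)
        let na: Vec<i64> = prefix.iter().chain(b.iter()).copied().collect();
        // b = add_list(c, d): term-wise sum of the known prefixes
        let nb: Vec<i64> = c.iter().zip(d.iter()).map(|(x, y)| x + y).collect();
        // c = filter(>0, a) and d = filter(<0, a)
        let nc: Vec<i64> = a.iter().copied().filter(|&x| x > 0).collect();
        let nd: Vec<i64> = a.iter().copied().filter(|&x| x < 0).collect();
        let grew = na.len() > a.len() || nb.len() > b.len()
            || nc.len() > c.len() || nd.len() > d.len();
        a = na;
        b = nb;
        c = nc;
        d = nd;
        if !grew {
            break; // fixed point: further updates add no information
        }
    }
    a
}

fn main() {
    // Prints exactly the 10 computable terms of a; the 11th would need a
    // negative term of a that itself depends on the 11th term.
    println!("{:?}", solve_prefix()); // [-1, -2, 3, 4, 5, -6, 2, 2, -1, 1]
}
```

This brute-force restaging is quadratic in the worst case, so it says nothing about the asymptotic question below; it only demonstrates that the updating process stalls after 10 terms of a rather than diverging with wrong values.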
I wonder whether this system can be expressed in Rust (or, better, whether we can meta-program a translator from “updating” constraints to Rust programs) without any asymptotic sacrifice of time or space, and hopefully more efficiently than direct representations in Haskell or Lisp.
I would like to add that this question is mostly theoretical, although it might have applications in simulating signal-processing systems. In practice, especially in concrete cases, one should avoid this kind of representation, because the control flow, and consequently the time and space complexity, is opaque. At least to me, it is not obvious that the first Fibonacci example, described in this way, leads to a computation taking linear time and constant space.