Most coveted Rust features

I think your quantifiers are in the wrong place. Could you write them more explicitly?

For example, my code

fn fractalize<T: Collection>(t: T) where for<'a> &'a T: IntoIterator<Item=&'a T::Item> {
    let iterator = (&t).into_iter(); // the for<'a> bound is what lets us iterate a borrow of t
    // ...
    frobnicate_collection(t);
}

has a type ∀T. (T: Collection) → (∀'α. &'α T: IntoIterator ∧ <&'α T as IntoIterator>::Item = &'α T::Item) → fn(T)

Here <&'α T as IntoIterator>::IntoIter is a lifetime-indexed family of types that is known to exist but is not specified. I don't see how you can do that with type parameters.

@llogiq

I was assuming that fractalize also does other things with the collection

Thanks for the tip, I just finished skimming Chapter 6 of "Elements of Programming".

I agree that iterators are an abstraction of the fundamental taxonomy of linear traversal data structures he mentioned: single-pass forward (input stream), multipass forward (linked list), bidirectional (doubly-linked list), and random access (array).

However, I'm contemplating that they are an anti-pattern, because they are so often a premature generalization which renders the high-level algorithm opaque, thus pulling in more complexity than is required (e.g. low-level memory lifetimes) and offering less high-level optimization clarity than if we expressed the high-level algorithm more transparently.

Iterators are opaque semantics.

For example, we can employ a single-pass forward iterator to implement some, then the caller can compose the output with another (reset) multipass forward iterator to find a specific iterated element in the output. But if we instead built these composable operations on the higher-level abstraction of Functor.map, then we could glue them together with lazy evaluation such that we get deforestation, so that we don't process the elements of some that occur after the found element. We get this high-level optimization automatically under any sort of functional composition of forward traversal without needing to think about it. Also we don't need to remember to deallocate the iterators, since the iteration occurs in the Functor.map library and not in our user-land code. And our high-level semantics are more explicit and less opaque.
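
To illustrate the deforestation effect being described, here is a rough sketch using Rust's lazy iterator adaptors as a stand-in for the Functor.map composition (the names, such as expensive, are made up for illustration):

    // Lazy adaptor composition: nothing runs until `find` drives the pipeline,
    // so elements after the found one are never processed.
    fn expensive(x: &i32) -> i32 {
        println!("processing {}", x);
        x * 10
    }

    fn main() {
        let data = vec![1, 2, 3, 4, 5];
        let found = data.iter()
            .map(expensive)      // lazy: builds an adaptor, calls nothing yet
            .map(|y| y + 1)      // still lazy
            .find(|&y| y == 31); // drives the pipeline and stops at 3
        println!("{:?}", found); // Some(31); 4 and 5 were never processed
    }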

It doesn't replace iterators where there is no reusable high-level abstraction such as functor map, but it makes the premature generalization of iterators an anti-pattern when there is a high-level abstraction that more closely fits the actual algorithm.

I don't understand lifetimes well because I am not interested in that capability of Rust at this time, and thus I am not putting the effort into learning everything about lifetime modeling. So unless you can reformulate your point without lifetimes, I will not be willing at this time to go learn enough about lifetimes to understand the syntax you wrote.

I am also not very interested in iterators, so I am not that interested in learning what IntoIterator is.

Right or wrong, those are my current priorities and thus limitations. Apologies. My interest is in the composability of ad hoc polymorphism, independent of use cases employing low-level lifetime modeling and algorithms expressed with low-level, imperative iterators.

I would go further and say, with my experience of Haskell, that laziness is an anti-pattern for programming language design. The problem is that performance is too unpredictable and magic. Programmers don't understand that a recursion will build a stack of thunks for later evaluation; they just don't have any intuition about the cost of operations. Yes, it's great for when you need streams, but for most uses it's terrible, and Haskell programs get littered with strictness annotations to try and get them to perform well in time and space. It rapidly gets to the point that the program has strictness annotations everywhere, except where the author explicitly wants laziness. So laziness by default seems a mistake for Haskell.

How about eager by default with laziness annotations? Well, this is better, but it complicates the type system and leads to needing two copies of data structures, eager and lazy ones; not great either.

How to deal with laziness then? The best solution I have found is to use co-routines and 'yield', so that 'some' yields its results instead of returning them. However, you don't need specific language support for this: you just need to define some as an iterator itself, so that it provides a new 'successor' definition. So iterators provide a mechanism for abstracting laziness in an eager language as well; they are a pretty general abstraction.
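
For example, here is a rough Rust sketch of 'some' defined as an iterator itself (the SomeIter name and details are illustrative, not a proposal): the consumer pulls results one at a time, so the source is only traversed as far as the caller demands.

    // "some" as an iterator adaptor: instead of returning a whole result
    // collection, it yields matching elements lazily, one per call to next().
    struct SomeIter<I, F> {
        inner: I,
        f: F,
    }

    impl<I, F, B> Iterator for SomeIter<I, F>
    where
        I: Iterator,
        F: FnMut(I::Item) -> Option<B>,
    {
        type Item = B;
        fn next(&mut self) -> Option<B> {
            // The "successor" operation: pull from the source until f produces a value.
            while let Some(x) = self.inner.next() {
                if let Some(y) = (self.f)(x) {
                    return Some(y);
                }
            }
            None
        }
    }

    fn main() {
        let halves = SomeIter {
            inner: 1..10,
            f: |x: i32| if x % 2 == 0 { Some(x / 2) } else { None },
        };
        // Only as much of the source range is consumed as the caller demands.
        let first_two: Vec<i32> = halves.take(2).collect();
        println!("{:?}", first_two); // [1, 2]
    }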

As for making an algorithm more general, that is the point. The definition of generic programming is to write the algorithm in the most general setting possible with no loss in efficiency. This means the most code re-use, and as the algorithms get used so much, it becomes in everyone's interest to read and understand them. Do you really find any of the algorithms in Chapter 6 too complex? I personally find them much easier to read and understand than point-free functional notation, and some quick research I did suggests that most programmers, even functional enthusiasts, find the imperative code easier to read and understand.

You should read Elements of Programming from the beginning, but first watch this talk: Elements of Programming - YouTube. Let's talk some more after you have watched the video.

Edit: The video is about the book, so it's not really about iterators directly, but it explains why the book was written, gives a good insight into Stepanov's approach to programming, what the book is like to read, and how to approach reading the book.

+1

Wow. I learned something new. Big thank you! On a superficial reading, that seems to be an excellent, composable abstraction which solves both the uncontrollable implicit side-effects of laziness and the inefficiency of an eager functor.

Thus a functor is not a granular enough abstraction, and is thus a premature specialization.

I was thinking that iterators are lower-level abstractions which render opaque the intent of the algorithms that operate on a functor, but once we incorporate the need to control the implicit side-effects of laziness while also correcting the inefficiency of an eager functor, then per your co-routines idea, I now see that the low-level generality is the only abstraction that solves all the algorithmic issues. So while a functor gives us some higher-level insight/transparency on the composability of the operations driving the result, and shields us from managing the lifetimes of the iteration structures in user-land code, it loses the granularity to control the laziness locally and optimize the efficiency of the eager locality. The co-routine can abstract the eagerness into laziness while retaining the locality between some and find, without infecting the type system or the collections with lazy versus eager variants. Orthogonal abstraction. Wow!

So although we would no longer need higher-kinds for functors, we might need them for iterator factories on the collection trait.

I concur that category theory coding is often too obtuse as compared to imperative constructions.

I had noticed that YouTube video in my earlier Google search, but I don't have time for that 1+ hour video right now. I've put it on my TODO list in case I head down the direction of iterators, on which you seem to have made a major change in my thinking just now.

There is some work on ways to iterate collections that might be relevant to the discussion:
http://okmij.org/ftp/papers/LL3-collections-enumerators.txt

The goal is attaining the "reuse" that Augustss values in lazy evaluation, while not incurring the negative side-effects (i.e. resource and timing indeterminism) of lazy evaluation, and also not forsaking the encapsulation of resource lifetime management that imperative spaghetti, such as unstructured use of iterators, might otherwise entail.

In Haskell we can write:

map :: (a -> b) -> [a] -> [b]

some :: (a -> Maybe b) -> [a] -> [b]

f :: a -> b

g :: a -> Maybe b

let x = some g (map f [0, 1])

We can I suppose write that in Rust roughly as follows:

fn map<A,B,C:Collection>(fn(A) -> B, C<A>) -> C<B>;
fn some<A,B,C:Collection>(fn(A) -> Option<B>, C<A>) -> C<B>;
fn f<A, B>(A) -> B;
fn g<A,B>(A) -> Option<B>;
let x = some(g, map(f, {0, 1}));

Since Rust employs an eager evaluation strategy, map calls f for every element before some is called; thus if some terminates (when g returns None) before all the elements have been enumerated, Rust has wasted some computation. Whereas, since Haskell employs a lazy evaluation strategy, f is only called for those elements which some enumerates.
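
A rough sketch of that eager waste in plain Rust (placeholder f and g over a concrete Vec, since the Collection signatures above are hypothetical):

    // Eager composition: the intermediate Vec is fully materialized before the
    // search begins, so f runs for every element even though the consumer only
    // needs the first match.
    fn f(x: i32) -> i32 {
        println!("f({})", x);
        x * 2
    }

    fn g(y: i32) -> Option<i32> {
        if y >= 4 { Some(y) } else { None }
    }

    fn main() {
        let input = vec![0, 1, 2, 3];
        let mapped: Vec<i32> = input.into_iter().map(f).collect(); // f runs 4 times
        let x = mapped.into_iter().filter_map(g).next();           // only needed the first match
        println!("{:?}", x); // Some(4)
    }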

We need to employ inversion-of-control in an eager language so that the called functions can call and enumerate their callers. But this isn't straightforward because the generalized structure and composability of Collection semantics must be respected.

The most general forward enumeration function is a fold where B is the current value of the result of the fold and the Option can terminate the enumeration early:

fn fold<A,B>(Enumerable<A>, B, fn(B, A) -> Option<B>) -> B;

The functions map and some can be built employing fold.
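
Here is a rough sketch of that construction. The Enumerable/Collection traits are hypothetical, so plain slices and Vec stand in for them, and to keep the sketch valid Rust without cloning the accumulator, the step returns Result<B, B> (Ok continues, Err stops early and carries the final accumulator), which plays the role of the Option in the signature above. The find_first helper is only there to show the early-termination channel in use.

    // Short-circuiting fold over a slice.
    fn fold<A, B, F>(items: &[A], init: B, mut step: F) -> B
    where
        F: FnMut(B, &A) -> Result<B, B>,
    {
        let mut acc = init;
        for item in items {
            match step(acc, item) {
                Ok(next) => acc = next,   // continue enumerating
                Err(done) => return done, // early termination
            }
        }
        acc
    }

    // map and some built on the fold: neither terminates early; they just
    // thread a Vec accumulator through the enumeration.
    fn map<A, B, F>(items: &[A], f: F) -> Vec<B>
    where
        F: Fn(&A) -> B,
    {
        fold(items, Vec::new(), |mut acc, a| {
            acc.push(f(a));
            Ok(acc)
        })
    }

    fn some<A, B, F>(items: &[A], g: F) -> Vec<B>
    where
        F: Fn(&A) -> Option<B>,
    {
        fold(items, Vec::new(), |mut acc, a| {
            if let Some(b) = g(a) {
                acc.push(b);
            }
            Ok(acc)
        })
    }

    // Early termination: stop folding as soon as a matching element is seen.
    fn find_first<A: Clone, F>(items: &[A], pred: F) -> Option<A>
    where
        F: Fn(&A) -> bool,
    {
        fold(items, None, |acc, a| {
            if pred(a) {
                Err(Some(a.clone())) // terminate the enumeration early
            } else {
                Ok(acc)
            }
        })
    }

    fn main() {
        let xs = [1, 2, 3, 4];
        let doubled = map(&xs, |x| x * 2);
        let odds = some(&xs, |x| if x % 2 == 1 { Some(*x) } else { None });
        let first_even = find_first(&xs, |x| x % 2 == 0);
        println!("{:?} {:?} {:?}", doubled, odds, first_even); // [2, 4, 6, 8] [1, 3] Some(2)
    }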

Afaics, it would be impossible to build our design on the general fold without coroutines or generators, because there is no guarantee that the incremental partial orders of its interim result type can be enumerated incrementally, i.e. there are no assumptions that can be made about the semantics of the unbounded generic result type B.

Afaics, if we were to instead build our design for inversion-of-control on fn map<A,B,M:Monoid+Enumerable>(M<A>, fn(A) -> Option<B>) -> M<B>, where incremental consistency is honored and M is consistent across the functional composition, then an optimization arises: ignore the structure of the result type, compose the callers' chain of fn(A) -> Option<B> operations, enumerate the input Enumerable (aka Iterable) with those composed functions, and employ Monoid::append to construct the result. In other words, simply call map with the composed function.
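
A rough sketch of that optimization (Rust's std has no Monoid trait, so a minimal one is hand-rolled here; compose, map_some and the other names are illustrative only):

    // A hypothetical Monoid trait; Vec is its obvious instance.
    trait Monoid {
        fn empty() -> Self;
        fn append(self, other: Self) -> Self;
    }

    impl<T> Monoid for Vec<T> {
        fn empty() -> Self { Vec::new() }
        fn append(mut self, other: Self) -> Self {
            self.extend(other);
            self
        }
    }

    // Compose two steps into one A -> Option<C> function.
    fn compose<A, B, C, F, G>(f: F, g: G) -> impl Fn(A) -> Option<C>
    where
        F: Fn(A) -> Option<B>,
        G: Fn(B) -> Option<C>,
    {
        move |a| f(a).and_then(|b| g(b))
    }

    // One enumeration of the input, appending each produced element to the result.
    fn map_some<A, B, F>(input: Vec<A>, op: F) -> Vec<B>
    where
        F: Fn(A) -> Option<B>,
    {
        let mut result: Vec<B> = Monoid::empty();
        for a in input {
            if let Some(b) = op(a) {
                result = Monoid::append(result, vec![b]);
            }
        }
        result
    }

    fn main() {
        let parse = |s: &str| s.parse::<i32>().ok();
        let halve = |n: i32| if n % 2 == 0 { Some(n / 2) } else { None };
        let op = compose(parse, halve);
        let result = map_some(vec!["4", "x", "10", "7"], op);
        println!("{:?}", result); // [2, 5]
    }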

For the more general case where M is not consistent but the enumeration can be modeled as an incremental cursor (see the aforementioned quoted reference), we only need a control function that takes a variadic input pairing an Enumerable+Monoid with each fn(A) -> Option<B> operation in the composition. This control function can encapsulate all the resource lifetime management, orthogonal to instances and compositions. I don't see any reason we need coroutines or generators in this case. The instance of the Enumerator (aka Iterator) maintains the necessary state.

For the even more general case where the enumeration can't be modeled as an incremental cursor (see the aforementioned quoted reference), we need to allow the Enumerable::fold method to decide when and how many times it will invoke the fn(B, A) -> Option<B> operation, so it can then yield to its caller, who can continue its Enumerable::fold. So the input to fold requires a Yieldable interface:

fn fold<A,B:Yieldable>(Enumerable<A>, B, fn(B, A) -> Option<B>) -> B;

This Yieldable interface should be implemented with stackless coroutines, which in this special case can be implemented with a function call that inputs and outputs its internal state, so each yield (i.e. return from the function) can be restarted (by calling the function again with the last output state). Otherwise known as generators.
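
A rough sketch of such a generator as a plain function that inputs and outputs its state (the State shape and the resume name are made up for illustration):

    // A stackless coroutine as a function over explicit state: each call
    // resumes the generator, returns the next state, and optionally yields a
    // value. This "generator" walks a slice, yielding even numbers.
    enum State {
        Running { index: usize },
        Done,
    }

    fn resume(state: State, data: &[i32]) -> (State, Option<i32>) {
        match state {
            State::Running { mut index } => {
                while index < data.len() {
                    let x = data[index];
                    index += 1;
                    if x % 2 == 0 {
                        // "yield": return the value plus the state needed to restart.
                        return (State::Running { index }, Some(x));
                    }
                }
                (State::Done, None)
            }
            State::Done => (State::Done, None),
        }
    }

    fn main() {
        let data = [1, 2, 3, 4, 5, 6];
        let mut state = State::Running { index: 0 };
        loop {
            let (next, yielded) = resume(state, &data);
            state = next;
            match yielded {
                Some(x) => println!("yielded {}", x), // 2, 4, 6
                None => break,
            }
        }
    }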

The fn(B, A) -> Option<B> operation must be a pure function so that it can't leak resources to subvert the encapsulation of the resource lifetimes.

I believe this is more or less the rough outline, at least to my current understanding of this issue.

Edit: the fn(B, A) -> Option<B> operation does not have complete control over its efficiency w.r.t. how many successive times it will be invoked relative to the yield. It can't cache and queue expensive operations in B, because it can't be sure it will be invoked again. Some improvement in this design may be needed.

Edit#2: I don't think higher-kinded types are required by any of the functions mentioned above. They could possibly be required for iterators.

I don't see how they are required for iterators at all (they are not). The one use case for lazy streams is because of Rust lifetimes and is not a normal issue with parametric types. It turns out you would like to have an associated type with a parametric lifetime, which means the associated type itself is higher-kinded (lifetime -> type), but no higher-kinded types are needed in function signatures. As you are not interested in lifetimes, you should just assume HKT are not needed for iterators (I don't think C++ has HKT, so iterators don't require them).

Agreed. I was having a similar realization (as I was sleeping) that the only reason we need HKT is for compiler checked lifetimes.

And I add the thought that we don't need the compiler to check lifetimes where we are using proper high-level reasoning to encapsulate lifetimes, e.g. the inversion-of-control in my prior post using stackless coroutines, so the lifetime is encapsulated in the Enumerable::fold instance, together with enforcement that fn(B, A) -> Option<B> is a pure function so it can't leak the resource lifetime.

I think you mean that the reason we need to have lifetimes is because there is no garbage collector. It was realised early on that Lisp needed a GC to implement functions with closures properly, and every functional language since has required a GC. Rust is a bit of a breakthrough as it brings features normally associated with functional languages to a language with no GC and zero runtime requirements (there is no runtime library that must be linked with the produced code). Haskell's pure functions rely on GC to not leak memory.

The only way to avoid GC in a functional language without lifetimes is to limit yourself to stack-only data, with no heap data that can live on outside its lexical scope.

GC alone is not sufficient. It is still possible to leak resources that can prevent GC.

And encapsulating lifetimes can perhaps enable manual resource deallocation in some scenarios (but I need to think about that more deeply). So perhaps the GC choice is orthogonal to my point.

Edit: I am really building on my intuition that compiler-checked lifetimes are an unnecessary low-level pita that infects the type system deleteriously, or at least should not be the default in the language. Apologies to Rust fans; I'm not trying to disparage Rust, just expressing my opinion, but I remain open-minded to learning whether I am correct or not.

Oleg implemented the enumerator in Scheme which is garbage collected. Without linear types there is not really much chance to avoid GC.

The enumerator is very limited and only deals with one access pattern, the basic Iterator, and is less generic. It does not even cope with a copyable multi-pass iterator like ForwardIterator, which you can see being used here to find the partition point by bisection. Try that with an enumerator:

    pub fn partition_point_n<I, P>(mut f : I, mut n : I::distance_type, mut p : P) -> I
    where I : ForwardIterator + Readable, P : FnMut(&I::value_type) -> bool {
        // Precondition: readable_counted_range(f, n) && partitioned_n(f, n, p)
        while !n.is_zero() {
            // Bisect the remaining range [f, f + n).
            let h : I::distance_type = n.clone().half_nonnegative();
            let m : I = f.clone() + h.clone();
            if p(m.source()) {
                // Partition point is at or before m: keep searching the first half.
                n = h;
            } else {
                // Partition point is after m: discard the first half and m itself.
                n = n - h.successor();
                f = m.successor();
            }
        }
        f
    }

Note: the "clones" are only necessary because I have deliberately not derived "Copy" for these types, as I want to see the cost of copying in the code. As these types are not copyable, this means this algorithm uses no copying, not even processing arguments of functions except where there is an explicit clone, but that is tangential to this discussion.

Note that Rust's standard library offers an Iterator trait that is implemented by lots of structs and default-implements a lot of lazily-evaluating functions such as map, fold, etc.

Just to clarify, Iterator::map creates a new map adaptor around the original iterator, which is lazy, while fold eagerly consumes the whole iterator.
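
A minimal illustration:

    fn main() {
        let v = vec![1, 2, 3];
        // `map` is lazy: nothing is printed here, only an adaptor is built.
        let adaptor = v.iter().map(|x| { println!("mapping {}", x); x * 2 });
        // `fold` is eager: it consumes the adaptor, driving the whole pipeline.
        let sum = adaptor.fold(0, |acc, x| acc + x);
        println!("sum = {}", sum); // prints the three "mapping" lines, then 12
    }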

You're right, of course. I should read the documentation I link more carefully before linking it. :slight_smile:

...

When you say Generics over Integers, do you mean

struct Blob<S: u32> {
    p: [u8; S]
}

Because this is something I would like.
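
Presumably usage would look something along these lines; the exact syntax here (a `const S: usize` parameter) is only a guess at how value parameters might be spelled, not something implied by the snippet above:

    // Sketch of what integer generics could enable: the array length is part
    // of the type, so differently sized Blobs are distinct types.
    struct Blob<const S: usize> {
        p: [u8; S],
    }

    impl<const S: usize> Blob<S> {
        fn new() -> Self {
            Blob { p: [0u8; S] }
        }
        fn len(&self) -> usize {
            S
        }
    }

    fn main() {
        let small: Blob<16> = Blob::new();
        let large: Blob<4096> = Blob::new();
        println!("{} {}", small.len(), large.len()); // 16 4096
    }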

I have decided we need higher-kinded types:

I'll have to come back to this later, because I need to sleep, but just real quick I will posit that I think if the callback function is pure, then the enumerable doesn't need linear typing to know it can deallocate the resources.

Back to the original topic, I think a great feature would be faster/incremental compilation, which looks like it is going to happen thanks to the development of MIR. I would also appreciate smarter borrow checking; looking forward to when this will be available! Finally, after reading the debate around HKT, I would just like to ask: please keep Rust simple; don't turn it into Haskell :blush:
