In other recent threads, some of us have discussed various relevant issues such as:
- Efficiency of lazy versus eager evaluation in functional composition over collections (more generally, enumerables).
- Advantages of the encapsulation and more informational structure of enumerables versus iterators, while still needing iterators for some algorithms.
- The GC memory-performance cost of temporary objects versus the imperative boilerplate and/or manual memory-management cost of handling them.
- The imperative boilerplate and/or manual memory-management cost of longer-lived generational objects when GC is abandoned due to the performance issues in the above 3 points.
In the aforementioned threads, I have alluded to some ideas about how inversion-of-control might paradigm-shift these issues.
I think it is helpful to first understand how C++11 encapsulated the problem of temporary rvalue objects, and was thus able to optimize their memory allocation orthogonally to any semantics in the programmer’s code; i.e. afaics an rvalue is an inherently encapsulated semantic for this optimization case.
But rvalue encapsulation is useless in the more general case of temporary objects created under functional composition:
As quoted above, the problem from issue #1 is that in an eager language `map` creates a copy of the entire collection, even though the consumer may exit before processing all of it. So temporary objects are created and CPU is wasted unnecessarily, which is the price paid for generalized functional composition over collections. Even in a lazy language like Haskell, all those temporary thunk objects are created unnecessarily (in addition to other disadvantages of lazy evaluation, such as non-deterministic resource usage and timing).
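To make the waste concrete, here is a minimal TypeScript sketch (the names `eagerMap` and `lazyMap` are mine, not a proposed API) comparing an eager `map` with a generator-based lazy one when the consumer exits early:

```typescript
// Eager map: allocates and computes the entire output array up front.
function eagerMap<A, B>(xs: A[], f: (a: A) => B): B[] {
  return xs.map(f);
}

// Lazy map: a generator computes each element only on demand.
function* lazyMap<A, B>(xs: Iterable<A>, f: (a: A) => B): Generator<B> {
  for (const x of xs) yield f(x);
}

let eagerCalls = 0;
let lazyCalls = 0;
const xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// Consumer exits after finding the first element greater than 6.
for (const y of eagerMap(xs, x => { eagerCalls++; return x * 2; })) {
  if (y > 6) break; // eagerCalls is already 10: the whole temporary array was built
}
for (const y of lazyMap(xs, x => { lazyCalls++; return x * 2; })) {
  if (y > 6) break; // lazyCalls is only 4: just the demanded elements were computed
}
console.log(eagerCalls, lazyCalls); // 10 4
```

The lazy version avoids the wasted work but, as noted, a thunk-per-element strategy (as in Haskell) merely trades one kind of temporary object for another.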
Afaics, Rust’s memory management paradigms can’t help us solve that waste with maximum degrees-of-freedom. I anticipate @keean will probably offer some more complex boilerplate way to accomplish what I lay out below, and I will aim to show him, as I did in my other thread, that boilerplate reduces degrees-of-freedom.
But I showed a conceptual way to invert the control in an eager language (like Rust), so that none of the above waste is incurred:
Additionally, if the callback functions of the enumerables are pure (and being pure, they can’t mutate—such as via a closure—the collection they are enumerated over), and if an appropriate method such as `map` (and not a generalized `fold`) is used, then the intermediate collections can be freed without any need for GC, refcounting, or lifetime checking, because they are encapsulated such that the callback function can’t reference them. The solution is pure semantic encapsulation.
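A sketch of that point, assuming the callbacks are restricted to pure element-wise functions (`mapChain` is a hypothetical driver, not an existing API): because each stage receives only element values and never the backing array, the driver knows the intermediate collection can’t escape, so it may reuse one buffer for every stage instead of allocating (and later collecting) a fresh temporary per stage:

```typescript
// A chain of pure element-wise functions composed under one driver.
// No reference to the intermediate buffer can leak into a callback,
// so the driver can reuse or free it deterministically.
function mapChain<T>(xs: T[], stages: Array<(x: T) => T>): T[] {
  const buf = xs.slice(); // single intermediate, owned by the driver
  for (const stage of stages) {
    for (let i = 0; i < buf.length; i++) {
      buf[i] = stage(buf[i]); // stage is pure: it only sees element values
    }
  }
  return buf;
}

const out = mapChain([1, 2, 3], [x => x + 1, x => x * 10]);
console.log(out); // [20, 30, 40]
```

A generalized `fold` would not qualify, because its accumulator could smuggle a reference to intermediate state out of the composition.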
However, the temporary objects created by the callback function can’t be assumed not to end up referenced in the final output of the composition, so these can’t be freed without some optimized form of GC, refcounting, or lifetime checking:
The lack of encapsulation of iterators from issue #2 can also apparently be solved with inversion-of-control.
Afaics, Rust’s memory management paradigms can’t help us solve this semantic “memory leak” unsafety:
Instead of calling the collection’s factory method to create an `Iterator`, we pass to the `Iterable` method our callback: a generator-enabled function which takes as input instances of each of the various types of iterators its algorithm needs to employ. When the algorithm inside our function needs a new state from any one of the input iterator instances, it `yield`s an indication of which iterator and which state modification it requests; the `Iterable` method processes the request and then reinvokes the generator-enabled function, returning control to the point where the function had `yield`ed and thus delivering the new state of the said iterator to the local instance in the generator function.
In this way, the lifetimes of the iterators are entirely encapsulated semantically; assuming the said function is pure, there is no need for any GC, refcounting, or lifetime checking to free the iterators.
This initial post is just to get the discussion going and is already too long; we can elaborate from here in subsequent posts.