Inversion-of-control resource management

In other recent threads, some of us have discussed various relevant issues such as:

  1. Efficiency of lazy versus eager evaluation in functional programming composition over collections (more generally enumerables).
  2. Advantages of the encapsulation and richer informational structure of enumerables versus iterators, although iterators are still needed for some algorithms.
  3. GC memory performance cost versus imperative boilerplate and/or manual memory management cost of temporary objects.
  4. The imperative boilerplate and/or manual memory management cost of longer lived generational objects when abandoning GC due to performance issues related to the above 3 issues.

In the aforementioned threads, I have alluded to some ideas about how inversion-of-control might shift the paradigm around these issues.

I think it is helpful to first understand how C++11 encapsulated the problem of temporary rvalue objects, and was thus able to optimize their memory allocation orthogonally to any semantics in the programmer's code; afaics an rvalue is an inherently encapsulated semantic for this optimization case.

But rvalue encapsulation is useless in the more general case of temporary objects created under functional composition:

Quoted above, the problem from issue #1 is that in an eager language a copy of the entire collection is created by map even though some consumers may exit before processing the entire collection. So temporary objects are created and CPU is wasted unnecessarily, which is the price to pay for generalized functional composition over collections. Even in a lazy language like Haskell, all those temporary objects for the thunks will be created unnecessarily (in addition to other disadvantages of lazy evaluation such as non-deterministic resources and timing).
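For concreteness, a minimal Rust sketch of issue #1 (function names are mine, purely illustrative): the eager pipeline materializes the whole intermediate collection even though the consumer stops at the first match, while the lazy pipeline allocates no temporary collection at all.

```rust
// Sketch of issue #1: eager vs lazy composition over a collection.
fn eager_find(v: &[i32]) -> Option<i32> {
    // Eager: `collect` materializes the entire intermediate Vec,
    // even though `find` stops at the first matching element.
    let doubled: Vec<i32> = v.iter().map(|x| x * 2).collect();
    doubled.into_iter().find(|&x| x > 4)
}

fn lazy_find(v: &[i32]) -> Option<i32> {
    // Lazy: elements are produced on demand, so no temporary
    // collection is allocated and no wasted work is done.
    v.iter().map(|x| x * 2).find(|&x| x > 4)
}
```

Rust's iterator adapters happen to be lazy already; the sketch only grounds the eager-versus-lazy cost being discussed, not the yield-based proposal below.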

Afaics, Rust's memory management paradigms can't help us solve that waste with maximum degrees-of-freedom. I anticipate @keean will probably offer some more complex boilerplate way to accomplish what I lay out below, and I will aim to show him, as I did in my other thread, that boilerplate reduces degrees-of-freedom.

But I showed a conceptual way to invert the control in an eager language (like Rust), so that none of the above waste is incurred:

Additionally, if the callback functions of the enumerables are pure (and being pure they can't mutate, e.g. via a closure, the collection they are enumerated over), and if an appropriate method such as map (and not a generalized fold) is used, then the intermediate collections can be freed without any need for GC, refcounting, or lifetime checking, because they are encapsulated such that the callback function can't reference them. The solution is pure semantic encapsulation.

However, the temporary objects created by the callback function can't be assumed not to end up referenced in the final output of the composition, so these can't be freed without some optimized form of GC, refcounting, or lifetime checking:

The lack of encapsulation of iterators from issue #2 can also apparently be solved with inversion-of-control.

Afaics, Rust's memory management paradigms can't help us solve this semantic "memory leak" unsafety:

Instead of calling the collection's factory method to create an Iterator, we pass to the Iterable method our generator-enabled callback function, which takes as input instances of each of the various types of iterators it needs to employ in its algorithm. When the algorithm inside our function needs a new state from any one of the input iterator instances, it yields a value indicating which iterator and the requested state modification. The Iterable method processes that request and then reinvokes the generator-enabled function, returning control to the point where the function had yielded and thus delivering the new state of the said iterator to the local instance in the said generator function.

In this way, the lifetimes of the iterators are entirely encapsulated semantically; assuming the said function is pure, there is no need for any GC, refcounting, or lifetime checking to free the iterators.
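A callback-style sketch of this inversion of control, simplified to omit the yield protocol (all names are mine): the collection drives the loop and only lends each element to the callback, so the iteration state never escapes.

```rust
// Internal iteration sketch: the collection owns the loop and the
// iteration state; the callback only borrows each element and
// returns false to request early termination.
fn iterate<T, F: FnMut(&T) -> bool>(xs: &[T], mut f: F) {
    for x in xs {
        if !f(x) {
            break; // early exit requested by the callback
        }
    }
}
```

The full proposal additionally lets the callback yield requests naming which iterator to advance; this sketch only shows the basic inversion where the iterator's lifetime is encapsulated inside `iterate`.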

This initial post is just to get the discussion going, is already too long, and we can elaborate from here in posts.


Correction: the lifetimes of the iterators are not encapsulated if they can leak into the outputs of the pure function. I assume this could in theory be prevented by a compiler by annotating a constraint on the inputs that their references can only be borrowed. So we can model this with Rust's lifetimes, and as stated, GC wouldn't be needed. It seems I've found a use case for Rust's lifetimes, but note it is a special case and not a substitute for GC in general. And so far this has not identified a need for moves.

If we use Box<T> for the element references of these temporary collections, and the callback function only borrows a reference, then when the collections are explicitly freed without GC, so are these elements. So again, afaics Rust's lifetimes without moves can eliminate the need for GC in this case.
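A tiny sketch of that borrowing discipline (hypothetical names): the boxed temporary lives only inside the driving function, the callback may only borrow it, and it is dropped deterministically with no GC involved.

```rust
// The temporary boxed element is owned by `with_temp`; the callback
// can only borrow it, so the borrow checker prevents the reference
// from leaking into the result, and `tmp` is freed on return.
fn with_temp<F: Fn(&i32) -> i32>(f: F) -> i32 {
    let tmp = Box::new(41); // temporary element in a Box<T>
    let result = f(&*tmp);  // callback borrows, cannot keep the reference
    result                  // `tmp` dropped here, deterministically
}
```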

In general I presume we won't always be able to structure the inversion-of-control such that we can use Rust's borrowing of memory lifetimes to eliminate GC. Instead of resorting to Rc/Arc, which can't free circular references and performs horribly in multi-threaded code, my current intuition is to K.I.S.S. with GC.

So I was thinking in the above quote (written in haste while rushing out the door) that maybe we could track immutability, because it is necessary to maximize the use of immutable references: afaics the way to maximize the performance of deallocating temporary objects with a generational GC is to use only immutable references with it (c.f. the quoted explanations in the OP).

However, on further thought, it seems impractical to isolate a mutable GC pool from an immutable GC pool. It seems that if we want immutability, we have to apply it everywhere for GC references. Mutability could still be used in low-level performance-critical routines that operate on an exclusive deep copy of an immutable object.
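That last point can be sketched in a few lines (names are mine): the performance-critical routine mutates an exclusive deep copy, so the shared immutable original is never touched.

```rust
// Mutation confined to an exclusive deep copy of immutable data:
// the shared slice is never mutated, only the private copy is.
fn double_all(shared: &[i32]) -> Vec<i32> {
    let mut copy = shared.to_vec(); // exclusive deep copy
    for x in copy.iter_mut() {
        *x *= 2; // low-level in-place mutation, safe because exclusive
    }
    copy
}
```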

However, immutable data structures are costly in both memory space and performance.

Since the goal here is to make sure we can free temporary objects which are referenced by objects in the older generation, perhaps instead the GC could mark each reference as "owned by older" the first time it is assigned to an object which is already so marked, i.e. a one-way marking with no revert. The GC would then know which temporary objects can't be traced at the end of a generation and could move them to the older generation without any analysis. The remaining temporary objects can be traced without tracing through the older generation.

If that is viable, then we don't need to emphasize immutability. I think I agree with @keean that immutability is orthogonal to maximizing the degrees-of-freedom of composition. Immutability is important when sharing objects amongst threads, but I think I prefer an asynchronously single-threaded model of concurrency, with entirely isolated threads for parallelism.
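A toy model of that one-way marking (entirely my own sketch, with a flat index-based heap standing in for real GC internals): storing a reference into an already-marked object promotes the referent, and the mark is never reverted.

```rust
// Toy "owned by older" write barrier: heap objects are indices into
// a Vec; storing a reference from an old-marked parent promotes the
// child, one-way, so end-of-generation needs no tracing for them.
struct Obj {
    old: bool,        // the one-way "owned by older" mark
    refs: Vec<usize>, // outgoing references, as heap indices
}

fn store(heap: &mut Vec<Obj>, parent: usize, child: usize) {
    heap[parent].refs.push(child);
    if heap[parent].old && !heap[child].old {
        heap[child].old = true; // promote; never reverted
    }
}
```

This toy ignores transitive promotion and real barriers; it only illustrates the bookkeeping the paragraph above proposes.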

V8 is already doing this apparently:

As I said this thread is not a fully cooked conceptualization, so expect it to be rough sailing.

IIRC Rust's iterators used to work like this (google for "rust internal iterators"). Here is a relevant mail on the mailing list.

Also have a look at the Rayon library. (It has a parallel for_each method that is similar to your proposal)

Ada adopted something like this solution initially for safe modification of collections. You pass a closure to the collection which mutates the data, or reads, or writes to the collection. In my own tests it was much slower than simply returning an access variable (Ada's version of a reference) to the data. I did think inlining could help overcome this, but I never succeeded in getting equivalent performance. In more recent versions of Ada this functionality is retained, but the focus seems to have shifted to iterators.

If you find it so troublesome that Rust doesn't have a GC, why don't you use another language that has one?


I think perhaps you haven't read my second post in this thread, wherein I've found a need for Rust's lifetimes.

Did I make any decision yet?

Analysis is a process.

And btw, in the other thread I've been told others have worked on a GC for Rust.

The point I'm aiming at is to nail down what is best for which use cases and then figure out how to get there within Rust or outside of Rust, depending on how it all pans out.

Why is it that in any group there always seems to be a desire to squelch discussion that isn't perceived to fit some ideal box of what is perceived to be the energy of the group? I think there are very, very smart developers here and we should all be open-minded. I am willing to turn against my own ideas ... as I did in the second post.

I've made this thread to pigeon-hole my discussion in one place, so those who don't want to be bothered with it can more easily ignore it.

Thanks for your opinion. Maybe you just really wanted to know, so hopefully I've answered. Or if you just wanted to express annoyance, hopefully I've explained that I will annoy less by putting this annoying discussion in one thread.

Edit: I am by no means trying to build support to remove any of Rust's features. I would never desire to split a community (that would make me feel like I am stealing). I am not here to play politics, just purely for the technical analysis. I'll either adopt Rust as-is and suggest/code additions/crates, or I'll head in another direction. And my/our analysis will be here for posterity and reference. Also, I hope not to be too much longer on this forum. I need to make some decisions and move on to coding. I just swore to myself after my last big coding project that, before the next time I dig into coding another perhaps million LOC, I want to put some effort into selecting a programming language and design patterns.

I have a different solution for the lifetime problems with some iterators, but it is very imperative, so you are probably not going to like it.

The idea is that algorithms should not have ownership of anything; as such, we can use both a read and a write iterator. So for example 'map' might be written like this:

fn map<I, O, F>(mut f0 : I, mut n : I::DistanceType, mut f1 : O, mut f : F)
where
    I : Iterator + Readable,
    O : Iterator + Writable,
    F : FnMut(&I::ValueType, &mut O::ValueType)
{
    while !n.is_zero() {
        f(f0.source(), f1.sink());
        f0 = f0.successor();
        f1 = f1.successor();
        n = n.predecessor();
    }
}
This map function passes a reference to each location in the input iterator and output iterator to the procedure 'F' in turn. F only borrows the input and output, so there are no lifetime problems, as both collections are function arguments (they are referenced from the iterators passed).

I like all your ideas because I learn something. I'll sleep then study the feedback thus far when I awake.

Thanks for the reply.

Yes, it was a genuine question, because it's hard to follow everything (the very long threads) where the topic changes and different threads get intertwined with each other, but now I have a better understanding of what you are looking for.

Afaics from a quick glance, that for_each is not the same as what I'm proposing...

(It is an interesting idea though to parallelize automatically and do work stealing to balance thread load, which I first learned about in Sean Parent's video on concurrency.)

Thanks for linking me to that discussion. Daniel Micay's elucidation is a bit dense/terse (as expected for an email discussion) and requires heuristic unpacking. I'll attempt to address all the issues presented there; so anyone please feel free to point out any of my lapses:

  1. Examples of external iterators are the point-style iterators of Alexander Stepanov's Elements of Programming (EoP) implemented in the C++ STL, or Andrei Alexandrescu's range iterators implemented in the D language (of which Rust models the forward range).

  2. Afaics it is impossible to write a compile-time-checked safe algorithm that requires more than one mutable external iterator on the same collection, because each external iterator has a mutable reference to the collection. Whereas an internal iterator can supply as input only the borrowed references of the required multiple mutable elements on each iteration of the callback function. Can anyone explain the reason the linked mailing list post seems to imply that even mutation with a single (forward) iterator is unsafe with external iterators?

  3. The claim that internal iterators, if not implemented correctly, won't stop when the callback function returns a value that says to terminate doesn't make sense to me. Even an external iterator could be implemented incorrectly, e.g. skipping multiple elements. Although the manifestation of the bug would be different, I fail to see the need for the distinction. A bug is a bug. Does Daniel Micay mean that for loops are some sort of DSL (such as in Scala) and there is some reason a break doesn't get translated into the signal to terminate?

  4. Can anyone explain more about the mailing list post's point about needing yield to enable emulating with external iterators everything internal iterators can do in the immutable context?

  5. The post seems to have a typo, "can't" where "can" is intended, regarding the implication that internal iterators can't implement operations over multiple iterators, e.g. "zip, union, intersect and merge". Whereas afaics the design I am proposing can input multiple iterators on each collection and can join collections over the internal iteration. The key difference is that prior internal iterators were not considering employing yield (and afaik there is no language-level yield in Rust yet).

    The following iterate method and its input callback should be generator functions (i.e. allowing semantics similar to JavaScript's yield), and the callback is a curried function, so that A could be a function or the final return type of the operation. Visualize what happens: we can have a function that inputs different types of internal iterators (not just ForwardIterator), even on different collections, and we build the composition by passing the curried function as the input to the innermost nested call to an iterate method. The outermost iterate will execute the curried function, which will in turn invoke each of the recursively nested iterate methods. When the curried function returns, this recursive nesting is unwound.

    To change the state of the nested iterators and receive a new value of T, the curried function yields a special value which indicates which of the nested iterates it applies to. The nested iterate methods inspect this yielded value and pass it up the recursive call chain; the designated iterate then processes it and calls the generator methods back down the recursive call chain, so the curried function is restarted where it had yielded, with the new value.

    I haven't thought yet about how to type the intermediate generator return value orthogonally to the final return value; perhaps employ an enum. Note it is also possible to alter the type signature to supply an initial value to the curried function instead of relying solely on closures, which would enable composability (chaining the output of one iteration as the initial input of a subsequent iteration). Note there is no higher-kinded type (HKT) in this signature. Obviously the following is slower than external iterators (yet more composable, and also safe in the mutable case), unless the compiler is smart enough to restructure the LLVM output to not use recursion (which, in a very popular language, one might do for the common collections such as arrays). And I am still advocating that I want to prioritize higher-level advantages and leave low-level performance tuning to those portions written in low-level imperative code. Also note I didn't show in the signature below selecting a slice, but that could be added to iterate and/or a separate method trait for the collection to implement.

trait ForwardIterate<T> {
    fn iterate<A>(&self, f : fn(&T) -> A) -> A;
}

That is the same basic idea as internal iterators. I see your f : F function takes only element types, not iterators.

The difference is that your map takes iterators, thus it can't, for example, take multiple mutable iterators from the same collection, but my internal iterator proposal can.

Where your proposal calls f0 = f0.successor() and n = n.predecessor(), mine would v0 = yield v0 and n = yield n.

I am happy to see we were thinking along similar lines of solution, the same as in my other expression-problem solution thread, yet the distinction remains: afaics I am achieving greater separation-of-concerns, which is evident in being able to do algorithms that require two or more mutable iterators on the same collection.

I would declare the 'buffers' outside of the iterator, so I would do something like:

fn main() {
    let mut x = vec![1,2,3,4,5];
    let mut y = vec![];

    map(x.first(), x.len(), y.first_mut(), |a| a + 1);
    map(y.first(), y.len(), x.first_mut(), |a| a * 2);
}
Note I can reuse the buffers, swapping x for y, so a chain of maps of any length can be done using only two buffers. Although map is actually a special case and can be implemented over a single mutable buffer.
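The two-buffer chaining can be sketched with plain slices in today's Rust, standing in for the EoP-style iterators above (the `map_into` name is mine):

```rust
// Two-buffer chaining: each map reads one buffer and overwrites the
// other, so a chain of any length needs only two allocations.
fn map_into(src: &[i32], dst: &mut Vec<i32>, f: impl Fn(i32) -> i32) {
    dst.clear();
    dst.extend(src.iter().map(|&x| f(x)));
}
```

Swapping the roles of the two buffers on each step mirrors the reuse described above; a single mutable buffer suffices for map specifically, since it is element-wise.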

This is vital, as otherwise you cannot implement many algorithms. Look at the algorithms from EoP and you will see that when, and how many times, successor is called is a critical part of every algorithm. Otherwise you are trying to reduce every algorithm to a 'foreach' over a closure with mutable state.

Consider the partition-point algorithm:

pub fn partition_point_n<I, P>(mut f : I, mut n : I::DistanceType, mut p : P) -> I
where I : ForwardIterator + Readable, P : FnMut(&I::ValueType) -> bool {
    // Precondition: readable_counted_range(f, n) && partitioned_n(f, n, p)
    while !n.is_zero() {
        let h : I::DistanceType = n.clone().half_nonnegative();
        let m : I = f.clone().add(h.clone());
        if p(m.source()) {
            n = h;
        } else {
            n = n - h.successor();
            f = m.successor();
        }
    }
    f
}
Our proposals appear to have equivalent composability.

Mine inverts the roles of the function arguments and functions, turning the call inside out:

y.first_mut(|f1| x.len(|n| x.first(|f0| map(f0, n, f1, |a| a + 1))));

The nesting order doesn't matter:

x.first(|f0| x.len(|n| y.first_mut(|f1| map(f0, n, f1, |a| a + 1))));

I wish the compiler allowed writing that as:

x.first(x.len(y.first_mut(map($1, $2, $3, $4 + 1))));

The above is missing the alternative invocation within each anonymous ("closure") function employing a single input argument when restarting from yield.

Thus I need some macro to make invoking mine as intelligible as yours, such as perhaps:

map(x.first(_), x.len(_), y.first_mut(_), |a| a + 1);

Note the reason for the (_) above is that these functions may also take other input arguments.

You didn't address the case of needing two or more mutable iterators on the same collection. For example bubble sort.

Edit: The reason is that for each mutable iterator you need a mutable reference to the collection. Whereas with my proposal, there is only one mutable reference to the collection, which is borrowed for each nesting of the call chain. However, that borrowed nesting is not explicit in the signature I wrote upthread. I will need to think about how to make it explicit to the compiler. The salient point is that mine is a functional nesting, so borrowing can be applied.
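For concreteness, here is one safe-Rust sketch (my own, not either proposal as stated) of the access pattern at issue: handing a callback two mutable element borrows from the same collection, which is exactly what a bubble-sort pass needs.

```rust
// Lends the callback two adjacent mutable element borrows from the
// same slice; `split_at_mut` proves to the borrow checker that the
// two borrows are disjoint.
fn for_each_adjacent_pair<T>(xs: &mut [T], mut f: impl FnMut(&mut T, &mut T)) {
    for i in 1..xs.len() {
        let (left, right) = xs.split_at_mut(i);
        f(&mut left[i - 1], &mut right[0]);
    }
}

// Bubble sort expressed over that element-lending interface.
fn bubble_sort(xs: &mut [i32]) {
    for _ in 0..xs.len() {
        for_each_adjacent_pair(xs, |a, b| {
            if *a > *b {
                std::mem::swap(a, b);
            }
        });
    }
}
```

This only shows the element-level mutable access an internal iterator could lend to its callback; it does not resolve the question of encapsulating two independent external iterators.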

Ostensibly you have not read point #5 carefully in my reply to @nielsle. My proposal can do everything yours can and without losing that separation-of-concerns. :wink:

My proposal is employing yield to tell the caller when to call successor and supply a new value. And my proposal can signal each iterator separately by changing the value that is yielded. It is not limited to successor and any state change on each iterator can be signaled to the nested recursion of callers.

It would be good to see an implementation of partition-point for comparison.

I can't. Rust doesn't have yield.

You can pretend it has :slight_smile: Although I don't think Rust is likely to get yield, I am interested to see what the code would look like in Rust, assuming it did have yield.

The reason I think Rust is unlikely to get yield is that they stripped out all the light-weight threading stuff because they did not want to have a runtime. This means that implementing an iterator is about the only way to get lazy/yield-like functionality.

I also referred to another pseudo-code for the algorithm.

For every clone you need, just take an extra input value from that iterator type. Manipulate the state of the clones with yield followed by some syntax. In other words, think of yield as a way to call the methods on your iterator.

No, afaik we can get the generator form of yield by simply rewriting the body of the generator function as a state machine with switch-case (i.e. match), but we'd prefer the compiler do that boilerplate transformation for us, e.g. C#'s yield. Stackful coroutines might require a custom runtime, but I am thinking I may not need those. Link to research I did on this, which includes some quotes from Steve Klabnik.
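A minimal hand-rolled sketch of that state-machine rewrite (all names are mine): each `resume` call runs the "generator body" up to the next conceptual `yield`, which is exactly the transformation a compiler would generate.

```rust
// State machine emulating a generator that yields 10, then 20, then
// finishes. The enum records where the body last "yielded".
enum Gen {
    Start,
    AfterFirst,
    Done,
}

// Each call resumes from the recorded state and returns the next
// state plus the yielded item (None means the generator is finished).
fn resume(state: Gen) -> (Gen, Option<i32>) {
    match state {
        Gen::Start => (Gen::AfterFirst, Some(10)),
        Gen::AfterFirst => (Gen::Done, Some(20)),
        Gen::Done => (Gen::Done, None),
    }
}
```

Extending the yielded value to carry a request (e.g. which nested iterator to advance) gives the request/response protocol described upthread.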

I am going to need yield anyway to achieve the lazy Enumerable composition that I mentioned in #1 of the OP, or do you have a way to suggest achieving that elegantly without yield?

Maybe I can have a go at implementing the interfaces and see if I can come up with anything. I generally need to write it in code to really understand it :slight_smile:

You can achieve multiple mutable iterators in your external-iterator design also, by grouping them and having them share a common mutable reference to the collection. But afaics, there is no way to do that encapsulated and still have it open to extension in terms of how many and which types of iterators are in a grouping (which afair is similar to the conclusion we arrived at in my other thread on the expression problem, where other than boilerplate the only functional weakness yours had was not being as open to extension in some cases).

Up to you if you want to show how to code that, so readers can see that both our proposals are essentially equivalent in functionality except for that.

I am not yet clear on how to write the signature of mine to be open to extension and also make the borrowing of the mutable reference into the nesting explicit. So it is possible that mine has a similar weakness.

Edit: I edited my examples upthread because I had forgotten the anonymous functions. What I need is an alternative signature for each iterate so the anonymous function can borrow the mutable reference for the collection of the parent in the nested hierarchy of functions.

Actually, multiple mutable iterators are no problem for the iterators I am implementing in elements_in_rust; for example reverse_bidirectional is simply:

pub fn reverse_bidirectional<I>(mut f : I, mut l : I)
where I : Mutable + BidirectionalIterator {
    loop {
        if f == l {return;}
        l = l.predecessor();
        if f == l {return;}
        exchange_values(&f, &l);
        f = f.successor();
    }
}
There is no unsafe code in the generic library, so the traits and the algorithms are all safe; you need some unsafe code in the implementation of the collection, but you need that anyway to do the pointer arithmetic. If you look at the library, you can see the code for the implementation of the slice iterator is very simple. These iterators are not as 'safe' as the native Rust ones, but I am simply implementing EoP as it is for now. I will look into improvements and safety (maybe using ranges) once I have completed all the algorithms as they are.